Gemini’s Task Automation: A Bold Leap, Even if It's Still a Work in Progress
Google’s Gemini has introduced a new feature that lets the AI take control of selected apps and perform tasks on your behalf. In early trials on the Pixel 10 Pro and the Galaxy S26 Ultra, the tech can order food from DoorDash or hail a ride with Uber, but the experience is far from perfect. Below we break down what the feature offers, where it falters, and why it’s still worth watching.
Gemini on the Pixel 10 Pro
On the Pixel, Gemini’s task automation feels a bit like riding with a student driver. You give a simple prompt, such as “Order lunch for me,” and the AI opens the DoorDash app, scrolls through options, and places an order. The interface is straightforward, but the AI often misinterprets preferences, picking the wrong delivery time or the wrong restaurant. The system still requires manual confirmation before finalizing, which interrupts the flow.
Gemini on the Galaxy S26 Ultra
The Samsung test was slightly smoother, thanks in part to a more robust integration with the phone’s UI. Gemini can launch apps, navigate menus, and even handle payments if you’ve stored a card. Yet the feature is limited to a handful of services and struggles with complex tasks—such as booking a multi‑stop trip or adjusting delivery details after the first interaction.
Performance & Limitations
* Speed – The AI takes a few seconds to parse a request and navigate a third‑party app, resulting in a noticeable lag. It feels clunky compared to manual navigation.
* Scope – Currently restricted to a curated list of food delivery and rideshare apps. No support for broader productivity or enterprise tools.
* Reliability – Mis‑recognition of prompts and occasional failures to complete a task highlight the early‑stage status of the feature.
Despite these shortcomings, Gemini’s ability to orchestrate multiple steps—searching, selecting, paying—demonstrates a sophisticated grasp of app workflows.
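For readers curious about what “orchestrating multiple steps” looks like in practice, here is a minimal conceptual sketch of an agent loop that breaks a request into search, select, confirm, and pay stages. It is purely illustrative: every class, function, and value below is hypothetical and does not reflect Gemini’s actual architecture or any DoorDash or Uber API.

```python
# Illustrative sketch only: these names and stubbed actions are hypothetical
# and do not represent Gemini's real implementation or any third-party API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]  # takes the running context, returns updates


def search_restaurants(ctx: dict) -> dict:
    # Stand-in for the agent scrolling a delivery app and reading options.
    return {"candidates": ["Thai Basil", "Burrito Bar"]}


def select_item(ctx: dict) -> dict:
    # Stand-in for matching the user's stated preference to a listing.
    return {"choice": ctx["candidates"][0], "total": 14.50}


def request_confirmation(ctx: dict) -> dict:
    # Mirrors the manual confirmation gate described above: the agent
    # pauses and asks the user before money changes hands.
    answer = input(f"Order from {ctx['choice']} for ${ctx['total']:.2f}? (y/n) ")
    return {"confirmed": answer.strip().lower() == "y"}


def pay(ctx: dict) -> dict:
    # Stand-in for submitting the order through the app's checkout flow.
    return {"status": "placed" if ctx["confirmed"] else "cancelled"}


def run_task(steps: list[Step]) -> dict:
    """Execute each step in order, carrying context between them."""
    ctx: dict = {}
    for step in steps:
        ctx.update(step.action(ctx))
        if ctx.get("confirmed") is False:
            break  # user declined; stop before paying
    return ctx


if __name__ == "__main__":
    plan = [
        Step("search", search_restaurants),
        Step("select", select_item),
        Step("confirm", request_confirmation),
        Step("pay", pay),
    ]
    print(run_task(plan))
```

The point of the sketch is the shape of the workflow, not the details: each stage hands its results to the next, and a confirmation gate sits between selection and payment, which is roughly the behavior the early trials describe.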
Future Potential
Google’s roadmap suggests expanding the feature set beyond a handful of services. If Gemini can learn to handle more complex sequences, automate repetitive work, and maintain context across apps, it could redefine the way we interact with our phones. The key will be refining the AI’s conversational understanding and building deeper API integrations with third‑party services.
Conclusion
Gemini’s task automation is a promising, albeit rough, prototype. It shows that AI can bridge the gap between a user’s intent and app actions, but the current iteration feels more like a demo than a polished tool. If you’re curious about hands‑off AI and want to help shape its future, consider sharing your thoughts with the Dakik community.
Ready to influence the next generation of AI assistants?
Take our quick survey and let us know what you’d like to see in Gemini’s automation suite.



