A particular kind of frustration develops gradually. You ask Siri to take a picture, then ask it to edit it, then ask it to send it to someone. By the third request, you’ve given up and finished the task yourself. It’s not a spectacular failure; it’s a silent one. Across millions of iPhones, that quiet erosion of trust happens dozens of times a week, until users no longer expect much from the assistant.
That might be changing. According to Bloomberg, Apple is testing a version of Siri for iOS 27 that can handle several requests combined into a single command. Not one action, a pause, then another action: the entire chain, completed at once. In a sentence, it sounds modest. If it works, it will feel anything but.
| Category | Details |
|---|---|
| Company | Apple Inc. |
| Stock Ticker | AAPL |
| Founded | 1976 |
| Headquarters | Cupertino, California, USA |
| CEO | Tim Cook |
| Feature in Focus | Siri Multi-Command Processing (iOS 27) |
| Operating Systems Affected | iOS 27, iPadOS 27, macOS 27 |
| WWDC 2026 Keynote Date | June 8, 2026 |
| AI Competitors Referenced | Claude, ChatGPT, Gemini |
| Current Siri Limitation | Single-command processing only |
| Internal Codename | Campos |
| Expected Release | Fall 2026 (September target) |
| Partnership | Alphabet’s Gemini AI model integration |
| Status of Features | Labeled “Preview” in internal testing |
Raw knowledge has never really been the problem with Siri. It can handle simple tasks like setting a timer, checking the weather, or making a phone call. When a request has multiple moving components, problems begin. Siri usually drops the thread at that point.
By contrast, newer AI assistants built from the ground up around natural language processing have quietly made Siri feel obsolete. The perception in Silicon Valley is that Apple saw this problem coming but misjudged how quickly the gap would widen.

When Apple debuted a more powerful version of Siri at WWDC in June 2024, the audience’s response was genuinely positive. The demonstrations appeared promising. Then came the delays. Engineering issues, internal recalibrations, and feature rollouts shipped with “Preview” labels were all signs that the product wasn’t quite where Apple wanted it to be.
It’s safe to say that the initial Apple Intelligence push was met with a lackluster response. The disarray of the subsequent rollout contrasts sharply with the assurance of those early announcements.
What Apple is currently developing appears more ambitious than anything it has previously shown. According to reports, the internal codename Campos refers to a redesigned Siri built as a true AI chatbot, deeply integrated into iOS, iPadOS, and macOS rather than sitting on top of them. The difference matters.
An assistant built into the operating system can open a file, modify it, and forward it to someone else without ever leaving the system. That would also help explain the slipping engineering timeline: it is a fundamentally different architecture from the one Siri has relied on.
Then there is the question of outside help. Beyond its current partnership with ChatGPT, Apple is reportedly considering opening Siri to third-party AI models like Gemini and Claude. At some point, Apple’s engineers may have concluded that building world-class language intelligence entirely in-house would take longer than the market was willing to wait.
For a business that built its reputation on controlling the entire stack, that is a big admission. One of the most fascinating developments in consumer technology at the moment is watching Apple resolve that conflict in real time.
The next major turning point is WWDC 2026, which begins on June 8. Apple is expected to use the keynote to show the new Siri’s real progress, not in staged demos but in ways that make clear what will ship in September and what will be held back for a spring update. The “Preview” label on internal testing builds is worth watching.
Apple has previously used that term to indicate that a feature is on the way but not yet complete, so there’s no reason to think this rollout will be any different.
Whether everything Apple has outlined will be successful by fall is still up in the air. Just the multi-command feature necessitates that Siri retain context across multiple steps, comprehend dependencies between tasks, and carry them out in the correct order—inside apps it doesn’t own, on hardware with actual limitations, and for users with wildly disparate habits and expectations.
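As an illustration only (nothing here reflects Apple’s actual implementation, and all names are hypothetical), the core problem the paragraph above describes — carrying context across chained steps and resolving dependencies before execution — can be sketched as a tiny pipeline, where each step declares what it needs from earlier steps and what it produces:

```python
# Toy sketch of multi-command handling: each step declares the context keys
# it reads and the key it writes; the runner executes steps in dependency
# order, passing results through a shared context. Purely illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    needs: list[str]                  # context keys this step reads
    produces: str                     # context key this step writes
    run: Callable[[dict], object]

def execute(steps: list[Step]) -> dict:
    context: dict[str, object] = {}
    pending = list(steps)
    while pending:
        # Pick any step whose dependencies are already in the context.
        ready = next((s for s in pending if all(k in context for k in s.needs)), None)
        if ready is None:
            raise RuntimeError("unsatisfiable dependency chain")
        context[ready.produces] = ready.run(context)
        pending.remove(ready)
    return context

# "Take a picture, edit it, send it" as three dependent steps,
# deliberately listed out of order to show the dependency resolution.
chain = [
    Step("send", ["edited_photo"], "receipt",
         lambda ctx: f"sent {ctx['edited_photo']}"),
    Step("capture", [], "photo", lambda ctx: "IMG_0001"),
    Step("edit", ["photo"], "edited_photo",
         lambda ctx: f"{ctx['photo']}-edited"),
]
result = execute(chain)
print(result["receipt"])  # → sent IMG_0001-edited
```

The hard part in a real assistant is everything this sketch omits: inferring the steps and their dependencies from a single natural-language sentence, and executing them inside apps the assistant doesn’t own.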
That is genuinely hard. Gemini and ChatGPT have had years of public iteration. In some ways, Apple is catching up under pressure.
If the reports hold, the Siri that ships with iOS 27 will differ not only in degree but in kind. Should the multi-command feature launch as the testing apparently indicates, using an iPhone could feel significantly different.
It would be less like handling a string of small transactions and more like conversing with something that stays in the room. The June keynote will begin to answer whether Apple can deliver features that are both reliable and on time.
