Apple AI: Can Apple Still Win the AI Race?

OpenAI's ChatGPT and the Future of Apps
OpenAI recently announced that applications can now run directly within ChatGPT. This lets users accomplish tasks such as booking trips, generating music playlists, and modifying designs without needing to switch between separate apps.
Initial reactions have been strong, with some suggesting this marks the beginning of a new app platform era. Predictions are surfacing that a ChatGPT-driven ecosystem could render Apple’s App Store unnecessary.
Apple's Counter-Strategy: A Revamped Siri
Despite the potential disruption posed by OpenAI’s platform, Apple’s ongoing development of an enhanced Siri remains a significant factor. Although its release has faced considerable delays, Apple’s vision could ultimately prove advantageous.
Apple currently maintains control over both the hardware and the operating system. Furthermore, the company boasts a substantial user base of approximately 1.5 billion iPhone users worldwide, exceeding ChatGPT’s 800 million weekly active users.
Should Apple’s strategy succeed, it could solidify the company’s position as a leader in the app industry and help modernize how apps are used in the age of artificial intelligence.
Reimagining App Interaction
Apple’s core objective is to eliminate the traditional app icon while preserving the functionality of the applications themselves. The company’s concept for AI-driven computing, unveiled at its developer conference last year, envisions iPhone users interacting with a significantly improved Siri.
This revamped system aims to alter the way users engage with apps on their smartphones. The anticipated shift involves a reduction in screen tapping and an increase in voice-based interaction.
Essentially, Apple is aiming for a more intuitive and conversational app experience.
This approach could redefine how we utilize mobile applications in an increasingly AI-centric world.
Are Traditional Apps Becoming Obsolete?
The idea that the way we access information is fundamentally shifting is gaining momentum.
The practice of arranging application icons on an iPhone’s Home Screen to facilitate access to online information represents an outdated computing paradigm. Designed as a miniaturized desktop experience, apps are increasingly superseded by alternative methods of user interaction with online services.
Currently, users frequently turn to AI assistants for recommendations and insights, often preferring this approach to traditional Google searches or launching dedicated applications like Yelp. Voice commands through smart speakers or AirPods are common for music playback, while chatbots provide business details or movie/show reviews.
The AI, powered by a large language model trained on extensive web data, identifies user intent and delivers a relevant response.
This method can be more efficient than navigating Google’s search results to locate the desired information. (Google recognized this trend over a decade ago, integrating direct answers into search results pages.)
Furthermore, AI often streamlines the process compared to locating the correct app on a crowded iPhone, launching it, and then navigating its unique interface to complete a task or obtain an answer.
However, ChatGPT’s app system, despite its improvements, remains confined to the ChatGPT environment. Using these apps means working through a chatbot interface, which may take some getting used to. Activating an app requires explicitly naming it in the prompt, or referencing it to surface a button that launches the app, and then phrasing the query precisely. (Bloomberg’s testing found that an imprecise query could leave the app stuck on a loading screen indefinitely.)

This raises the question: Is this the definitive future of apps, or merely a stopgap in the absence of robust competition? Will consumers keep using ChatGPT once natively integrated alternatives, such as those Apple is planning, become available, or will they give Siri another look?
The answer remains unclear, but dismissing Apple’s potential would be premature, despite Siri’s current shortcomings.
While Siri currently faces challenges, Apple’s broader ecosystem offers significant advantages. Consumers already possess and are familiar with the apps they desire, or know how to locate them within the App Store. Established usage patterns and muscle memory are powerful factors.
Conversely, adopting ChatGPT’s app platform presents certain hurdles.
The desired app must be installed first, then connected within ChatGPT through a series of permission requests. This means authenticating with existing credentials and, if applicable, entering a two-factor authentication code.

After this initial setup, the process should become more streamlined. For example, an AI-generated Spotify playlist can be launched directly in the Spotify application with a single tap.
This experience closely mirrors Apple’s proposed functionality, where Siri will enable voice or text-based app control.
The OpenAI app model also has limitations. Interaction is restricted to a single app at a time, hindering the ability to switch seamlessly between applications – a valuable feature when comparing prices or weighing options like hotels versus Airbnbs.

Integrating apps within ChatGPT also diminishes the branding, design, and unique identity associated with individual applications. (While some may appreciate a decluttered Spotify interface, others may disagree.) In certain scenarios, using the native mobile app may prove more efficient than its ChatGPT counterpart because of the greater flexibility it offers.
Ultimately, persuading users to adopt a new app platform may prove challenging without a clear and compelling advantage to using apps within ChatGPT beyond its novelty.
Can Apple Revitalize Siri’s Image with Advanced AI Capabilities?
During its WWDC 2024 presentation – which Apple assures the public was not a demonstration of unfinished software – the company illustrated how applications will operate within the new system and leverage other AI features, such as proofreading functionalities.
Crucially, Apple informed developers that they will be able to utilize certain AI capabilities without requiring extensive additional development work. An example given was a note-taking application employing proofreading or rewriting tools.
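To make that concrete, here is a minimal sketch of a hypothetical SwiftUI note editor. The key point is that the standard system text view is what surfaces the OS-level proofreading and rewriting tools, so the app itself ships no AI-specific code; the writingToolsBehavior modifier shown is assumed from Apple's iOS 18-era APIs and is optional.

```swift
import SwiftUI

// Hypothetical note editor: the standard system text view picks up
// the OS-level proofread/rewrite tools on supported devices, so the
// app contains no AI code of its own.
struct NoteEditorView: View {
    @State private var noteText = ""

    var body: some View {
        TextEditor(text: $noteText)
            // Assumed iOS 18-era modifier; opts the editor into the full
            // system writing-tools experience rather than the default.
            .writingToolsBehavior(.complete)
            .padding()
            .navigationTitle("Note")
    }
}
```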
Furthermore, developers who have already integrated SiriKit into their applications will gain expanded capabilities for enabling users to perform actions within those apps. SiriKit, a toolkit facilitating app interoperability with Siri and Apple’s Shortcuts, has been available to developers since iOS 10.
These developers can anticipate immediate improvements when the updated Siri is released.
Apple stated its initial focus will be on categories including Notes, Media, Messaging, Payments, Restaurant Reservations, VoIP Calling, and Workouts.

Applications within these categories will let users initiate actions through Siri. Specifically, Siri will be capable of invoking any item from an app’s menu. For instance, a user could ask Siri to display presenter notes within a slide presentation, and the corresponding productivity application would respond accordingly.
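As a rough illustration of what exposing such an action could look like today, here is a sketch built on Apple's App Intents framework; the intent name, parameter, and dialog are hypothetical rather than Apple's actual presenter-notes integration.

```swift
import AppIntents

// Hypothetical intent a slides app might expose so Siri can
// surface presenter notes on request.
struct ShowPresenterNotesIntent: AppIntent {
    static var title: LocalizedStringResource = "Show Presenter Notes"
    static var openAppWhenRun: Bool = true

    @Parameter(title: "Slide Number")
    var slideNumber: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would navigate to the slide and reveal its notes here.
        return .result(dialog: "Showing presenter notes for slide \(slideNumber).")
    }
}
```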
These applications will also be able to access text currently displayed on the screen, utilizing Apple’s standard text systems. This aims to create more intuitive app interactions, reducing the need for precisely worded prompts or commands. As an illustration, a reminder to wish a grandfather a happy birthday could be acted upon by simply saying “FaceTime him.”
Apple’s existing Intents framework is also undergoing updates to integrate with Apple Intelligence, extending coverage to a wider range of applications in categories like Books, Browsers, Cameras, Document Readers, File Management, Journals, Mail, Photos, Presentations, Spreadsheets, Whiteboards, and Word Processors.

Apple is developing new, predefined, trained, and tested “Intents” and making them accessible to developers. This means a user could instruct the Darkroom photo-editing app to apply a cinematic filter to an image via Siri.
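The exact shape of those predefined Intents isn't spelled out here, so the sketch below is only a stand-in: a plain App Intent that a Darkroom-style editor could expose for applying a filter, with a hypothetical filter enum. The shipping, schema-based API may look different.

```swift
import AppIntents

// Hypothetical filter options a photo-editing app might expose.
enum PhotoFilter: String, AppEnum {
    case cinematic, portrait, noir

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Photo Filter"
    static var caseDisplayRepresentations: [PhotoFilter: DisplayRepresentation] = [
        .cinematic: "Cinematic", .portrait: "Portrait", .noir: "Noir"
    ]
}

// Plain App Intent standing in for one of Apple's predefined,
// pre-trained "Intents"; the real schema-based API may differ.
struct ApplyFilterIntent: AppIntent {
    static var title: LocalizedStringResource = "Apply Filter"

    @Parameter(title: "Filter")
    var filter: PhotoFilter

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would apply the filter to the currently open photo here.
        return .result(dialog: "Applied the \(filter.rawValue) filter.")
    }
}
```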
Siri will also proactively suggest app actions, assisting iPhone users in discovering and utilizing their apps’ functionalities.
Developers have been embracing the App Intents framework, introduced in iOS 16, due to its broader functionality for integrating app actions and content with other platform features, including Spotlight, Siri, the iPhone’s Action button, widgets, controls, and visual search – extending beyond just Apple Intelligence.
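A sketch of that broader integration: registering a spoken phrase for an existing intent makes it reachable from Siri and Spotlight. The provider below reuses the hypothetical presenter-notes intent from the earlier sketch; the phrase, title, and icon are likewise made up.

```swift
import AppIntents

// Hypothetical shortcut provider: registering a phrase makes the same
// intent reachable from Siri and Spotlight without extra UI work.
struct SlidesAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: ShowPresenterNotesIntent(),
            phrases: ["Show my presenter notes in \(.applicationName)"],
            shortTitle: "Presenter Notes",
            systemImageName: "note.text"
        )
    }
}
```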
Unlike ChatGPT, Apple maintains control over its operating system and hardware, and provides the App Store for app discovery, alongside essential infrastructure, developer tools, APIs, and frameworks – not merely an AI-powered interface.

While Apple may need to incorporate AI technology from other sources, it possesses the data necessary for personalized app recommendations and, importantly, the privacy controls allowing users to limit data collection by applications. (A “Do Not Track” option for ChatGPT’s app ecosystem remains absent.)
OpenAI’s system doesn’t offer seamless integration with all applications upon launch. It requires developer adoption and relies on the Model Context Protocol (MCP), a newer technology for connecting AI assistants to other systems. Consequently, ChatGPT currently supports only a limited number of apps, such as Booking.com, Expedia, Spotify, Figma, Coursera, Zillow, and Canva.
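For a sense of what such a connection involves, here is a sketch of an MCP tool-call request. The envelope follows the JSON-RPC 2.0 shape the published MCP spec uses; the tool name and arguments are invented for illustration.

```swift
import Foundation

// Minimal sketch of an MCP "tools/call" request, the JSON-RPC 2.0 message
// an AI client sends to invoke a tool exposed by a connected app.
// Hypothetical tool input for a made-up listings search tool.
let arguments: [String: Any] = ["city": "Lisbon", "max_price": 150]

let request: [String: Any] = [
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": ["name": "search_listings", "arguments": arguments] as [String: Any]
]

// Serialize and print the request the client would send over the connection.
let data = try! JSONSerialization.data(withJSONObject: request, options: [.prettyPrinted, .sortedKeys])
print(String(data: data, encoding: .utf8)!)
```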
MCP adoption is increasing, but a slower rollout could provide Apple with valuable time to close the gap.
Reports indicate that Apple’s AI system is nearing completion. The company is reportedly conducting internal testing, enabling users to execute actions within apps using Siri voice commands. Bloomberg reported that this enhanced Siri version functions natively with numerous apps, including those from major companies like Uber, AllTrails, Threads, Temu, Amazon, YouTube, Facebook, and WhatsApp. Apple has confirmed to TechCrunch that the release remains on schedule for next year.
OpenAI's Pursuit of Hardware: Mirroring Apple's Strategy with Jony Ive
Dislodging the iPhone’s dominance as an application platform presents a significant challenge, even for a tech giant like OpenAI.
Recognizing this difficulty, OpenAI is actively investigating the development of its own hardware, collaborating with Jony Ive, formerly Apple’s chief design officer. This initiative stems from a desire to integrate their AI more seamlessly into the daily routines and behaviors of consumers.
However, reports suggest that devising a computing model superior to the smartphone has so far proven elusive for the company. At the same time, public sentiment shows a reluctance toward always-on AI devices, which raise privacy concerns and cut against established social norms.
Recent negative reactions to AI have taken various forms, including the defacement of ads from AI device maker Friend, backlash from Taylor Swift fans over AI experimentation, and damage to the public image of both consumer brands and businesses.
Consequently, the potential for success of a dedicated OpenAI device remains uncertain.
Currently, OpenAI’s approach largely involves utilizing its application to manage and interact with other existing applications.
Should Apple successfully implement its planned Siri enhancements, the need for OpenAI’s intermediary role could be diminished.