TLDR:
- Apple is releasing iOS 18.2 beta with ChatGPT integration and new AI features
- Features include Genmoji, Image Playground, Image Wand, and Visual Intelligence tools
- The features require newer hardware: iPhones with the A17 Pro chip or later, iPads with A17 Pro or M1 and later chips, and Macs with M1 or newer
- ChatGPT integration will be optional and privacy-focused, not requiring user accounts
- The first wave of Apple Intelligence features arrives publicly next week with iOS 18.1; the 18.2 additions will follow after the beta cycle
Apple has unveiled its latest developer beta for iOS 18.2, marking a significant expansion of its artificial intelligence capabilities through the integration of ChatGPT and several new visual tools. The announcement came Wednesday as part of the company’s ongoing rollout of Apple Intelligence features.
The new beta version introduces several key features that will be available on newer Apple devices, including the iPhone 15 Pro, iPhone 15 Pro Max, and the upcoming iPhone 16 line.
The update will also support iPads running on Apple’s A17 Pro and M1 chips or later, as well as Macs equipped with M1 and newer processors.
At the center of this release is the integration with OpenAI’s ChatGPT, which will work alongside Apple’s Siri virtual assistant. When users ask questions that Siri identifies as better suited for ChatGPT, the system will request permission before forwarding the query. Notably, users won’t need to create an OpenAI account to access these features.
Privacy remains a key focus in the ChatGPT integration. Apple has implemented measures to obscure users’ IP addresses and ensure that data isn’t stored or used for training OpenAI’s models. However, users who choose to sign in with their ChatGPT accounts will fall under ChatGPT’s standard privacy policies.
The beta release includes several new creative tools. Image Playground allows users to generate and manipulate images using various concepts and styles, while the Image Wand feature turns rough sketches into polished, complete images.
A notable addition is Genmoji, Apple’s new AI-powered emoji generator. This tool can create custom emojis based on text descriptions and understands contextual information about contacts, allowing for personalized emoji creation.
The Visual Intelligence feature, accessible through the new Camera Control button on iPhone 16 devices, offers real-time information about objects and text in the camera’s view. Users can point their cameras at restaurants to see operating hours and reviews, or scan text for immediate translation and information extraction.
Writing Tools have also received an upgrade, allowing users to describe how they want to modify text. For example, users can request that party invitations be made more enthusiastic or formal, with the AI adjusting the tone accordingly.
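For app developers, adopting the system Writing Tools in a standard text field is largely a matter of opting in. The snippet below is a minimal sketch based on the UIKit properties Apple has documented for iOS 18 (writingToolsBehavior and allowedWritingToolsResultOptions); the exact names and option values should be read as assumptions drawn from that documentation, not code confirmed by this release.

```swift
import UIKit

final class InvitationEditorViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(textView)

        if #available(iOS 18.0, *) {
            // Opt into the full Writing Tools experience so the system can
            // rewrite the text in place (e.g. a more formal or more
            // enthusiastic party invitation).
            textView.writingToolsBehavior = .complete
            // Restrict results to plain-text rewrites for this simple editor.
            textView.allowedWritingToolsResultOptions = [.plainText]
        }
    }
}
```

Fields where automatic rewriting would be undesirable can use the .limited or .none behaviors instead of opting in fully.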
Apple’s initial wave of Intelligence features, set for public release next week with iOS 18.1, includes notification summaries, basic writing tools, and AI image editing capabilities. These features will serve as the foundation for the more advanced tools introduced in the 18.2 beta.
The partnership between Apple and OpenAI represents a significant development in the AI landscape. While Microsoft has already established deep integration with OpenAI models, Apple’s approach focuses on maintaining user privacy while providing access to advanced AI capabilities.
Financial details of the Apple-OpenAI partnership remain undisclosed, and Apple was not part of OpenAI’s recent funding round that valued the company at $157 billion. Apple has indicated that future integrations might include AI models from other providers, such as Google.
The new features will be available through various APIs, allowing developers to incorporate these capabilities into their applications. This developer access could be crucial for creating practical applications that demonstrate the value of these AI features to consumers.
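One example of that access is the ImagePlayground framework surfacing in the 18.2 SDK, which lets apps present the same image-generation sheet used by the system. The sketch below assumes the beta API names from Apple's developer documentation (ImagePlaygroundViewController, ImagePlaygroundConcept, and their delegate callbacks); since the SDK is still in beta, treat it as a provisional illustration rather than final code.

```swift
import UIKit
import ImagePlayground  // Available when building against the iOS 18.2 beta SDK.

// Assumes a deployment target of iOS 18.2; otherwise wrap usage in availability checks.
final class PartyInviteViewController: UIViewController, ImagePlaygroundViewController.Delegate {

    // Present the system Image Playground sheet, seeded with a text concept.
    func showImagePlayground() {
        let playground = ImagePlaygroundViewController()
        playground.delegate = self
        playground.concepts = [.text("a birthday party on a space station")]
        present(playground, animated: true)
    }

    // Delegate callback: the finished image is delivered as a file URL.
    func imagePlaygroundViewController(_ imagePlaygroundViewController: ImagePlaygroundViewController,
                                       didCreateImageAt imageURL: URL) {
        if let image = UIImage(contentsOfFile: imageURL.path) {
            // Attach the generated image to the invite, save it, etc.
            print("Generated image of size \(image.size)")
        }
        dismiss(animated: true)
    }

    // Delegate callback: the user closed the sheet without generating anything.
    func imagePlaygroundViewControllerDidCancel(_ imagePlaygroundViewController: ImagePlaygroundViewController) {
        dismiss(animated: true)
    }
}
```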
Wall Street has responded positively to Apple’s AI initiatives, with the company’s stock rising approximately 35% over the past year and 19% since the initial announcement of Apple Intelligence at the Worldwide Developers Conference in June.
For developers, the beta release represents an opportunity to begin integrating these new features into their applications before the public release. The typical beta cycle takes several weeks before features are made available to general users.