See how Apple’s AI model tries to keep your data private

At WWDC on Monday, Apple unveiled Apple Intelligence, a suite of features that brings generative AI tools – rewriting a draft email, summarizing notifications, creating custom emoji – to the iPhone, iPad, and Mac. Apple spent a significant portion of its keynote explaining how useful the tools will be, and an almost equal portion assuring customers how private the new AI system keeps their data.

That privacy is possible thanks to a two-pronged approach to generative AI that Apple began explaining in its keynote and detailed in subsequent articles and presentations. They show that Apple Intelligence is built around an on-device philosophy that can quickly handle the everyday AI tasks users want, such as transcribing calls and organizing their calendars. However, Apple Intelligence can also reach out to cloud servers for more complex AI requests that may include personal context data – and ensuring that both deliver good results while keeping your data private is where Apple has concentrated its efforts.

The big news is that Apple is using its own homegrown AI models for Apple Intelligence. Apple notes that it does not train its models on private data or user interactions, which it says sets it apart from other companies. Instead, Apple uses licensed materials and publicly available online data collected by the company’s Applebot crawler. Publishers must opt out if they don’t want their data ingested by Apple, which sounds similar to the policies of Google and OpenAI. Apple also says it filters out Social Security and credit card numbers circulating online and ignores “profanity and other low-quality content.”

A big selling point for Apple Intelligence is its deep integration into Apple’s operating systems and apps, along with the way the company optimizes its models for energy efficiency and size so they fit on an iPhone. Keeping AI requests local is key to eliminating many privacy concerns, but the tradeoff is using smaller, less capable on-device models.

To make these local models useful, Apple employs fine-tuning, which trains models to be better at specific tasks like proofreading or summarizing text. Those abilities come in the form of “adapters,” which can be laid over the base model and swapped out for the task at hand, similar to applying power-up attributes to your character in an RPG. Likewise, Apple’s diffusion model for Image Playground and Genmoji uses adapters to achieve different art styles, like illustration or animation (which makes people and pets look like cheap Pixar characters).
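Apple hasn’t published its adapter implementation, but the description maps closely onto low-rank adaptation (LoRA), a common technique in which a small pair of trainable matrices is layered over a frozen base weight and swapped per task. Here’s a minimal, dependency-free sketch of that general idea; the shapes, values, and task names are invented for illustration:

```python
# Minimal sketch of swappable low-rank adapters over a frozen base weight.
# Toy illustration of the general technique, not Apple's implementation.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

class AdapterLayer:
    """A frozen base weight plus an optional task-specific low-rank delta."""
    def __init__(self, base_weight):
        self.base = base_weight   # shared across all tasks, never retrained
        self.adapter = None       # (A, B) low-rank pair, swapped per task

    def load_adapter(self, A, B):
        # Effective weight becomes base + A @ B; only A and B are
        # task-specific, so each "ability" is tiny next to the full model.
        self.adapter = (A, B)

    def forward(self, x):
        w = self.base
        if self.adapter:
            A, B = self.adapter
            w = add(w, matmul(A, B))
        return matmul(x, w)

layer = AdapterLayer([[1.0, 0.0], [0.0, 1.0]])    # 2x2 frozen base weight
layer.load_adapter([[0.1], [0.2]], [[0.5, 0.5]])  # rank-1 "summarization" adapter
print(layer.forward([[1.0, 2.0]]))                # output with adapter applied
layer.load_adapter([[0.3], [0.0]], [[0.0, 1.0]])  # swap in "proofreading" adapter
print(layer.forward([[1.0, 2.0]]))
```

The appeal is size: each adapter stores only the small A and B matrices, so shipping a new ability doesn’t mean shipping a new model.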

Apple says it has optimized its models to reduce the time between sending a prompt and receiving a response, using techniques like “speculative decoding,” “context pruning,” and “grouped query attention” to take advantage of the Neural Engine in Apple silicon. Chipmakers have only recently started adding neural processing units (NPUs) to their dies, which offload machine learning and AI workloads from the CPU and GPU. That’s part of the reason only Macs and iPads with M-series chips, and only the iPhone 15 Pro and Pro Max, support Apple Intelligence.
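Speculative decoding is a published technique, not an Apple invention: a small, fast “draft” model proposes a run of tokens, and the large model verifies them, keeping the longest agreeing prefix. Here is a toy, greedy-only sketch of the control flow, with simple functions standing in for both models:

```python
# Toy sketch of speculative decoding: a cheap "draft" model proposes k tokens
# and the expensive "target" model verifies them, accepting the agreeing
# prefix. Real implementations compare probability distributions and batch
# the verification into a single forward pass; this only shows the shape.

def draft_next(context):
    # Stand-in for a small, fast model (a deterministic next-token rule).
    return (sum(context) + 1) % 10

def target_next(context):
    # Stand-in for the large, accurate model; disagrees every third step.
    return (sum(context) + 1) % 10 if len(context) % 3 else (sum(context) + 2) % 10

def speculative_step(context, k=4):
    # 1. Draft model proposes k tokens autoregressively (cheap).
    proposed, ctx = [], list(context)
    for _ in range(k):
        t = draft_next(ctx)
        proposed.append(t)
        ctx.append(t)
    # 2. Target model checks each proposed position in order.
    accepted, ctx = [], list(context)
    for t in proposed:
        if target_next(ctx) != t:
            break
        accepted.append(t)
        ctx.append(t)
    # 3. Target model supplies one corrected token, so progress is guaranteed.
    accepted.append(target_next(ctx))
    return accepted

tokens = [0]
for _ in range(3):
    tokens += speculative_step(tokens)
print(tokens)
```

When the draft model guesses well, several tokens land for the price of one large-model pass, which is where the latency win comes from.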

The approach is similar to what we’re seeing in the Windows world: Intel’s latest Meteor Lake architecture added an NPU to the die, and Qualcomm’s new Snapdragon X chips built for Microsoft’s Copilot Plus PCs have them, too. As a result, many AI features on Windows are restricted to new devices that can run that work locally on those chips.

According to Apple’s research, out of 750 responses tested for text summarization, Apple’s on-device AI (with the appropriate adapter) produced results more appealing to humans than Microsoft’s Phi-3-mini model. That sounds like a big achievement, but most chatbot services today use much larger models in the cloud to get better results, and that’s where Apple is trying to tread a careful line on privacy. To compete with those larger models, Apple is creating a seamless process that sends complex requests to cloud servers while also trying to prove to users that their data remains private.

If a user’s request needs a more capable AI model, Apple sends the request to its Private Cloud Compute (PCC) servers. PCC runs on its own operating system based on the “foundations of iOS” and has its own machine learning stack that powers Apple Intelligence. According to Apple, PCC has its own Secure Boot and Secure Enclave to store encryption keys that only work with the requesting device, and the Trusted Execution Monitor ensures that only signed and verified code is executed.

Apple says the user’s device establishes an end-to-end encrypted connection with a PCC cluster before sending a request. Apple says it can’t access data on PCC because its server management tools have been stripped out, so there’s no remote shell. PCC also has no persistent storage, so requests and any personal context data pulled from the Apple Intelligence Semantic Index are apparently deleted from the cloud afterward.
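Apple hasn’t published the exact wire protocol, but the property described, per-request keys that only work for the requesting device with nothing left to decrypt once they’re discarded, matches the standard ephemeral key exchange pattern. A generic sketch of that pattern using Python’s cryptography library follows; the key types and message are illustrative, not Apple’s actual protocol:

```python
# Generic ephemeral key exchange sketch: both sides derive a one-off session
# key, encrypt with it, and can discard it afterward. Not Apple's protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Device and server each generate fresh keys for this one request.
# (In reality the public keys travel over the network, with the server's
# side attested; here everything is local for simplicity.)
device_key = X25519PrivateKey.generate()
server_key = X25519PrivateKey.generate()

# Each side can derive the same shared secret from its private key and the
# peer's public key; the secret itself never crosses the wire.
shared = device_key.exchange(server_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"per-request session").derive(shared)

# Encrypt the request under the per-request key. Once both sides discard
# their ephemeral keys, the ciphertext can never be decrypted again.
aead = ChaCha20Poly1305(session_key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"summarize my notifications", None)
print(aead.decrypt(nonce, ciphertext, None))
```

The point of the pattern is that nothing retained on the server remains readable after the keys are gone, which lines up with the deletion behavior described above.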

Each PCC build will have a virtual image that the public and security researchers can inspect, and only builds that are signed and logged as inspected will go into production.

One of the big open questions is exactly what types of requests will go to the cloud. When processing a request, Apple Intelligence has a step called Orchestration, where it decides whether to proceed on the device or use PCC. We still don’t know exactly what constitutes a request complex enough to trigger a cloud process, and we likely won’t know until Apple Intelligence is available in the fall.
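For a sense of what such an orchestration step might look like, here is a purely hypothetical router. Apple hasn’t disclosed its signals or thresholds, so every field, cutoff, and name below is invented for illustration:

```python
# Purely hypothetical sketch of an orchestration-style router. Apple has not
# said what pushes a request to the cloud; these signals are invented.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_long_context: bool   # invented signal
    estimated_complexity: int  # invented signal, 0-100

ON_DEVICE_CEILING = 40  # invented cutoff, not a real Apple parameter

def route(req: Request) -> str:
    # Lightweight, well-scoped tasks stay on the local model...
    if not req.needs_long_context and req.estimated_complexity <= ON_DEVICE_CEILING:
        return "on-device model"
    # ...while heavier requests fall through to Private Cloud Compute.
    return "Private Cloud Compute"

print(route(Request("Proofread this sentence.", False, 10)))
print(route(Request("Plan my week from my mail and calendar.", True, 80)))
```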

There’s another way Apple is handling privacy concerns: making them someone else’s problem. Apple’s revamped Siri can send some queries to ChatGPT in the cloud, but only with your permission once you ask certain tough questions. That process shifts the privacy question into the hands of OpenAI, which has its own policies, and the user, who has to agree to ship their query off. In an interview with Marques Brownlee, Apple CEO Tim Cook said ChatGPT would be called on for requests involving “world knowledge” that are “outside the realm of personal context.”

Apple’s split on-device and cloud approach for Apple Intelligence isn’t entirely new. Google has a Gemini Nano model that can run locally on Android devices alongside its cloud-based Pro and Flash models. Meanwhile, Microsoft’s Copilot Plus PCs can process AI requests locally while the company continues to lean on its OpenAI deal and build its own in-house MAI-1 model. None of Apple’s rivals, however, has put nearly as much emphasis on its privacy commitments.

Of course, this all looks great in staged demos and edited articles. The real test will come later this year, when we see Apple Intelligence in action. We’ll have to see whether Apple can strike that balance between quality AI experiences and privacy – and keep striking it as the features grow in the years to come.


