Apple Intelligence offers genius solution to combat AI's biggest flaw
"Just don't hallucinate."
Apple has had an eventful 2024 so far, releasing its first mixed-reality headset, the Vision Pro, along with new iPad and MacBook Air models, the Apple Pencil Pro, and the debut of its M4 Apple Silicon chipset.
However, Apple has its busiest months of the year ahead, with September through October expected to showcase a new Apple Watch (including Ultra and SE models) alongside the iPhone 16 lineup, new AirPods and iPads, and a range of M4-outfitted MacBook Pros and Macs.
On top of that, there's also the public release of the latest versions of its operating systems: iOS, iPadOS, and macOS. However, perhaps most anticipated of all is the arrival of Apple Intelligence, the company's long-awaited integration of generative AI into its popular platforms and apps.
While many are salivating at the prospect of getting to grips with features like Writing Tools, Image Playground, Genmoji, and Math Notes, another group is eagerly clutching popcorn buckets, waiting for Apple's new AI features to go off the rails, have a Microsoft Bing Chat-style moment, and embarrass the company in the process.
Sadly for folks in the latter category, Apple appears well prepared to prevent this from happening. Testers of the new Apple Intelligence features have uncovered a series of secret instructions designed to keep the company's chatbot-powered AI tools on track — and it's so simple that it might actually be genius.
Apple to Apple Intelligence: Don't be weird, do be helpful
Last week, early testers of Apple's upcoming macOS Sequoia uncovered a series of unsecured plaintext JSON files that appeared to reveal the secret instructions set by Apple for its large language model-based tools to follow.
The instructions lay the groundwork for Apple's AI to form responses, designating its tasks, limitations, and even overall "personality."
The uncovered prompts appear to be designed for an AI-based Mail assistant, offering explicit guidance on how the AI should react to anything the user enters.
The instructions are pretty basic, written in plain English, and the reasoning behind each one is easy enough to follow.
For example, the opening instruction reminds the LLM of its overall duties: "You are a helpful mail assistant which can help identify relevant questions from a given mail and a short reply snippet," which points to the AI being able to summarize emails and draw out relevant questions for users to answer.
(Source: "macOS 15.1 Beta 1 | Apple Intelligence Backend Prompts," via r/MacOSBeta)
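To picture what testers were looking at, here's a minimal sketch of how one of these plaintext prompt templates might be organized. Only the quoted instruction comes from the files surfaced in the beta; the field names and extra settings are hypothetical stand-ins for the kinds of constraints described here.

```python
# Illustrative sketch only: the field names ("prompt", "reply_word_limit", "tone")
# and values are guesses, not the actual structure of Apple's JSON files.
# The quoted instruction is the one testers surfaced in the macOS Sequoia beta.
mail_assistant_template = {
    "role": "system",
    "prompt": (
        "You are a helpful mail assistant which can help identify relevant "
        "questions from a given mail and a short reply snippet."
    ),
    # Hypothetical knobs standing in for the word limits and tone reminders
    # the templates reportedly enforce.
    "reply_word_limit": 50,
    "tone": "friendly",
}
```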
While other instructions enforce things like word limits for replies and remind the AI to be friendly and stay on task, nestled at the foot of certain prompts are Apple's instructions for combating one of AI's biggest flaws: hallucinations.
AI hallucinations can come in many forms, though the fabrication of facts and figures is a common problem. LLMs, like the ones Apple intends to use for its AI toolset, can get tangled up in their own web of lies and inaccuracies, if not outright break from their personalities and become rude, aggressive, or confused.
Obviously, this is something Apple wishes to avoid, so you'd expect a highly sophisticated and technical method of ensuring its AI stays on point. However, in actuality, Apple's secret AI instructions almost hilariously just ask the bot to avoid being weird and remain helpful.
"Do not hallucinate," is one command, followed by "Do not make up factual information." It's a solution so simple that you'd presume it could never work. However, tagging in these reminders at the end of instructions may be all it takes to prevent the bot from losing its way amid the earlier instructions.
Outlook
Apple's hallucination prevention might be basic, but we won't knock something that works. Just because something is simple doesn't mean it isn't serving its purpose, after all.
It's rare that we get to peek so freely into one of Apple's works in progress, and it will be interesting to see whether these JSON files remain open to inspection as further developer and public betas of macOS Sequoia roll out.
If so, we may see more sophistication added to these prompt templates and get to witness Apple's efforts to refine Apple Intelligence with each update. That being said, it won't be long before Apple Intelligence is fully available to iPhone, iPad, and Mac users, and we'll get to see just how well Apple's prompt protections hold up in front of a wider audience.
While Apple is expected to release the next milestone versions of iOS, iPadOS, and macOS next month, it's currently believed that Apple Intelligence will miss this launch window and arrive at some point in October. However, those waiting on Siri's AI-backed upgrade will likely have to wait even longer, with Apple's newly augmented assistant expected to be released in 2025.