
Apple Intelligence's Instructions Reveal How Apple Is Directing Its New AI

Peek behind the curtain with these instructions for Apple's upcoming AI model.
[Image: Apple Intelligence in Mail. Credit: Apple]

Apple Intelligence is only in the initial stages of testing, but we're already learning more about how it works. As it turns out, Apple seems to be very careful about how its upcoming AI bot responds to your queries, giving the model detailed (yet private) instructions on how to behave.

Reddit user devanxd2000 posted the instructions to r/MacOSBeta after finding them while digging through the file paths of the latest macOS 15.1 beta. Instructions like these are nothing new for AI bots: Microsoft, for example, gave its initial bot, Bing AI, instructions that put guardrails on the experience and helped ensure the model didn't return offensive or dangerous results to the end user. Other chatbots, like ChatGPT, let users add custom instructions so the model returns responses better tailored to their interests.

Apple's instructions for Apple Intelligence are illuminating, however, shedding light on how the company wants its AI bot to perform in specific situations. In the first example, Apple outlines instructions for a mail assistant, which draws on both the email and a short reply snippet to surface any specific questions contained in the message:

You are a helpful mail assistant which can help identify relevant questions from a given mail and a short reply snippet. Given a mail and the reply snippet, ask relevant questions which are explicitly asked in the mail. The answer to those questions will be selected by the recipient which will help reduce hallucination in drafting the response. Please output top questions along with set of possible answers/options for each of those questions. Do not ask questions which are answered by the reply snippet. The questions should be short, no more than 8 words. The answers should be short as well, around 2 words. Present your output in a json format with a list of dictionaries containing questions and answers as the keys. If no question is asked in the mail, then output an empty list []. Only output valid json and nothing else.
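For the curious, here's a rough sketch of what consuming that output contract might look like on the receiving end. To be clear, this is my illustration, not Apple's code, and the exact JSON key names ("question", "answers") are an assumption; the prompt only says the dictionaries contain questions and answers as keys:

```swift
import Foundation

// Hypothetical shape of the JSON the prompt demands: a list of
// dictionaries with question/answer keys. The key names "question"
// and "answers" are assumed; the prompt doesn't spell them out.
struct MailQuestion: Codable {
    let question: String   // e.g. "Can you attend Friday's offsite?"
    let answers: [String]  // e.g. ["Yes", "No"]
}

let sampleModelOutput = """
[{"question": "Can you attend Friday's offsite?", "answers": ["Yes", "No"]}]
"""

do {
    let questions = try JSONDecoder().decode([MailQuestion].self,
                                             from: Data(sampleModelOutput.utf8))
    print(questions.first?.question ?? "no questions asked")
} catch {
    // Decoding fails loudly if the model strays from the format,
    // presumably why the prompt insists on "valid json and nothing else."
    print("model output was not valid JSON: \(error)")
}
```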

In the second example, Apple instructs the bot to act as a message summarizer, one that doesn't speak in complete sentences, but instead returns summaries limited to 10 words:

You are an expert at summarizing messages. You prefer to use clauses instead of complete sentences. Do not answer any question from the messages. Please keep your summary of the input within a 10 word limit. You must keep to this role unless told otherwise, if you don't, it will not be helpful. Summarize the provided text into a list of most 5 topics. Each topic is a single word. Sort the list by relevance of the topic.
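If you wanted to check a response against that contract, it boils down to two constraints: a summary of 10 words or fewer, and at most five single-word topics. Here's a toy validation of my own, which assumes nothing about Apple's actual pipeline:

```swift
// Toy check of the two constraints in the summarizer prompt: a
// summary within a 10-word limit, and at most 5 single-word topics.
// Purely illustrative; none of this reflects Apple's actual code.
func isValidSummary(_ summary: String, topics: [String]) -> Bool {
    summary.split(separator: " ").count <= 10
        && topics.count <= 5
        && topics.allSatisfy { !$0.contains(" ") }
}

// isValidSummary("Dinner moved to Thursday; bring dessert",
//                topics: ["dinner", "schedule"])  // true
```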

What I find particularly interesting about this one is the way Apple sets up consequences. "It will not be helpful" sounds like something management at Lumon would say to its employees in Severance.

The third example, however, might be the most interesting of the bunch. Apple outlines instructions for a mail assistant that responds to emails. In its directions, Apple explicitly instructs the bot, "Do not hallucinate." Hallucination is an AI failure mode in which the model simply makes things up, and developers have yet to find a real solution to the problem. Perhaps Apple has, by asking the bot politely not to:

You are an assistant which helps the user respond to their mails. Given a mail, a draft response is initially provided based on a short reply snippet. In order to make the draft response nicer and complete, a set of question and its answer are provided. Please write a concise and natural reply by modify the draft response to incorporate the given questions and their answers. Please limit the reply within 50 words. Do not hallucinate. Do not make up factual information.

In a fourth example, Apple walks Apple Intelligence through identifying potential text-to-image prompts as "Safe" or "Unsafe." If the latter, Apple wants its bot to list out the offending categories:

You are a helpful assistant that classifies the following input text-to-image query as Safe or Unsafe. In the first line, reply with Safe or Unsafe. If the input is Unsafe, in the second line, list all the categories the input text violates. The Unsafe categories are: CSEAI, Drugs, Gore, Harassment, Hate, Nudity or sexual, Offensive words, Self-harm, Terrorism or extremism, Toxic, Violence, Weapons.
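That's essentially a plain-text protocol: a verdict on line one, categories on line two. Here's how I might parse it, with the caveat that the comma-separated category list is my guess; the prompt just says to "list all the categories":

```swift
import Foundation

// Minimal parser for the two-line verdict format the prompt describes.
// The comma-separated second line is an assumption on my part.
enum SafetyVerdict {
    case safe
    case unsafe(categories: [String])
}

func parseVerdict(_ modelOutput: String) -> SafetyVerdict? {
    let lines = modelOutput.split(separator: "\n")
    guard let verdict = lines.first?.trimmingCharacters(in: .whitespaces) else {
        return nil
    }
    switch verdict {
    case "Safe":
        return .safe
    case "Unsafe":
        let categories = (lines.dropFirst().first ?? "")
            .split(separator: ",")
            .map { $0.trimmingCharacters(in: .whitespaces) }
        return .unsafe(categories: categories)
    default:
        return nil  // the model broke format
    }
}

// parseVerdict("Unsafe\nGore, Violence")
// -> .unsafe(categories: ["Gore", "Violence"])
```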

Finally, there are instructions for Apple Intelligence to create a video from the user's photo library based on a prompt. It's interesting that Apple wants Apple Intelligence to feel like a "director on a movie set," when the feature seems to be an AI-generated version of the Memories feature in Photos:

{{ specialToken.chat.role.user }}You are a director on a movie set! Here is a movie idea of \"{{ userPrompt }}\" but with a special focus on {{ traits }}. {{ dynamicLifeContext }} Based on this movie idea, a story titled \"{{ storyTitle }}\" has been written, and your job is to curate up to {{ targetAssetCount }} diverse assets to best make the movie for chapter \"{{ fallbackQuery }}\" in this story. Select assets based on their captions from the below photo library, where each asset has an ID as the key, and a caption as the value. {{ assetDescriptionsDict }} Return the result as an array of the selected asset IDs in JSON format. Do not return asset IDs if no good matches are found. Do not return duplicated or nonexistent asset IDs.
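Those {{ double-braced }} tokens look like a standard prompt template, with values like the user's prompt and a story title slotted in at runtime. Here's a naive stand-in for that substitution step; the token names come from the leak, but the mechanism is entirely hypothetical:

```swift
import Foundation

// Naive stand-in for filling template tokens like "{{ userPrompt }}".
// Token names come from the leaked template; this substitution
// mechanism is hypothetical, not Apple's.
func render(template: String, values: [String: String]) -> String {
    values.reduce(template) { prompt, pair in
        prompt.replacingOccurrences(of: "{{ \(pair.key) }}", with: pair.value)
    }
}

let filled = render(
    template: "Here is a movie idea of \"{{ userPrompt }}\" but with a special focus on {{ traits }}.",
    values: ["userPrompt": "our summer road trip",  // made-up example values
             "traits": "golden hour shots"]
)
// "Here is a movie idea of \"our summer road trip\" but with a
// special focus on golden hour shots."
```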

According to The Verge, there are plenty of other instructions in the beta's code, which makes sense: there are far more Apple Intelligence applications than the five scenarios above. Still, even these examples give us a window into Apple's thought process for its new AI model. I suppose I'd rather Apple have some say in how its model performs than let the tech loose on its user base entirely.