Copilot Taxonomy - some deep research into all things Copilot

The Prompt

My name is Shannon Moir and I'm the Director of AI for Fusion5. I'd like to write an educational and somewhat humorous blog on Copilot, the Microsoft UI for AI. Can you start by summarising the different Copilot subscriptions that are available for the enterprise (and their subscription costs in AUD today) - and what they include. Can you talk to what data Microsoft 365 (M365) Copilot for Business has access to "out of the box". Can you be clear on all of the sources of information that Copilot has access to. Can you be very clear on how Copilot determines whether it can use a particular document in the response; this can be a detailed paragraph or two. Does Copilot create a new model for each organisation? Or is it a result of RAG with access built in? How often is this model / RAG / rules engine updated? Can you aim the content at CIO/CFO type roles, people that need to make decisions about AI adoption and care about the bottom line - the basis of the entry is below. Please include any references or weblinks in the copy you create.



Today I'm using ChatGPT's Deep Research functionality to answer the question. I'm doing this because I want more information on the Graph API and some really good detail on exactly how Copilot works. I think it's important for everyone.






Today I learned that Deep Research has no sense of humour; the response is pretty mundane, but the details are excellent. This is a really good read if you want to know exactly how Copilot works and exactly what it has access to. Spoiler alert - it respects ALL existing security, giving you access to all the content you can already reach in the organisation, plus some extra guardrails for sensitive content.

The Response

Microsoft Copilot Subscriptions for Australian Enterprises: Pricing, Features, and Data Security

Microsoft 365 Copilot for Business (Enterprise Productivity AI)

Pricing (Australia): Microsoft 365 Copilot is an add-on for enterprise Microsoft 365 plans (E3, E5, Business Standard/Premium). It costs AU$44.90 per user per month when billed annually, or AU$47.15 per user per month when billed monthly (both on an annual commitment); prices exclude GST. This roughly aligns with the US$30 per user per month price point that Microsoft announced for Copilot. A minimum seat count (e.g. 300 users) may apply for enterprise purchase, as was the case at the initial rollout.

What’s Included: Microsoft 365 Copilot integrates AI assistance across the core Office apps and services. With this subscription, users get Copilot features in Word, Excel, PowerPoint, Outlook, Teams, and more. For example, Copilot can draft documents in Word, analyze data in Excel, create PowerPoint presentations, summarize Outlook email threads, and generate meeting summaries or chat replies in Teams. It also provides Business Chat (Copilot Chat) – a conversational AI that works across your work data (emails, files, meetings, etc.) to answer questions and generate insights. Microsoft 365 Copilot is “grounded” in your business data: it can pull information from your Outlook emails, Teams chats, OneDrive and SharePoint documents, calendar, meetings, and contacts to provide contextually relevant responses. The subscription additionally includes Copilot Pages (for generating and editing content pages) and Copilot Studio (tools to create custom AI “agents” or automations tailored to business processes). An admin Copilot dashboard is provided to monitor usage and impact. All these capabilities come with enterprise-grade security and compliance built-in.

Value Proposition: For a CIO, Microsoft 365 Copilot promises to boost knowledge worker productivity by automating content generation, offering data-driven insights, and freeing employees from mundane tasks. CFOs will note the add-on cost (around AU$45 per user/month) and should weigh this against potential productivity gains – Microsoft cites time saved in drafting communications, analyzing data, and retrieving information. Importantly, Copilot uses data the organization already has in Microsoft 365, potentially increasing the ROI of your existing Microsoft 365 investment by unlocking more value from emails, documents, and chats. From a risk perspective, Microsoft 365 Copilot respects your existing permissions and data privacy (discussed further below), meaning it will not expose data to users who lack access. This helps mitigate data leakage concerns while still enabling broad use of enterprise knowledge in AI-driven answers.
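For CFOs who like to sanity-check the numbers, the break-even maths is simple. The sketch below is purely illustrative: the licence price is the annual-commitment figure quoted above, while the salary and working-hours figures are placeholder assumptions you should replace with your own.

```python
# Back-of-the-envelope break-even check for the Microsoft 365 Copilot add-on.
# Only the licence price below comes from the pricing above; the salary and
# hours figures are illustrative assumptions, not Microsoft guidance.

LICENCE_AUD_PER_MONTH = 44.90       # annual-commitment price per user (ex GST)
FULLY_LOADED_SALARY_AUD = 130_000   # assumption: annual fully loaded cost of a knowledge worker
PRODUCTIVE_HOURS_PER_YEAR = 1_800   # assumption: productive hours worked per year

hourly_cost = FULLY_LOADED_SALARY_AUD / PRODUCTIVE_HOURS_PER_YEAR
break_even_minutes = LICENCE_AUD_PER_MONTH / hourly_cost * 60

print(f"Hourly cost per employee:          ~AU${hourly_cost:.0f}")
print(f"Copilot must save per user/month:  ~{break_even_minutes:.0f} minutes to break even")
```

Under those assumptions Copilot only needs to save each licensed user around 35 to 40 minutes a month to pay for itself; the real question for the business case is whether it reliably saves much more than that.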

GitHub Copilot for Business (AI Pair Programming)

Pricing (Australia): GitHub Copilot for Business is available as a monthly per-user subscription. The official price is US$19 per user per month, roughly AU$29 per user per month excluding GST – Australian enterprise customers pay the local-currency equivalent. (GitHub also offers a higher-tier Copilot Enterprise at US$39 per user per month with additional controls, though for most businesses the US$19 plan is sufficient.)

What’s Included: GitHub Copilot for Business provides an AI “pair programmer” integrated into development environments (such as Visual Studio, VS Code, etc.). It uses OpenAI GPT-4-based models to suggest code snippets, autofill boilerplate, and even explain code in natural language. The Business plan offers enterprise-grade features like organization-wide license management, policy controls (e.g. an option to block suggestions that match public code to address IP compliance), and higher API rate limits. All users get unlimited code completions and chat in supported IDEs, helping developers code faster and with fewer errors. GitHub Copilot can significantly accelerate software development cycles – a point of value for CIOs – by reducing routine coding effort and helping upskill developers (it can suggest best practices and common algorithms). From a CFO perspective, the cost (~AU$30 per developer per month) can be justified by productivity gains and potentially higher quality code. Security/Risk Note: GitHub Copilot for Business does not ingest your proprietary code into the public training dataset. Microsoft has stated that Copilot’s AI model is not retrained on your organization’s code or data – it uses the pretrained model and generates suggestions on the fly. This means your code stays private, and suggestions are derived from patterns the model learned from public code (plus any prompt context you supply). However, as a best practice, companies should still implement peer code reviews and security scans, since AI suggestions can occasionally introduce vulnerabilities or errors. GitHub is continually updating Copilot with improvements (e.g. “Copilot X” features like code explanation and safer coding prompts) as the underlying models advance.

Dynamics 365 Copilot (AI in CRM and ERP)

Pricing (Australia): “Dynamics 365 Copilot” refers to AI assistance features across various Dynamics 365 business applications (Sales, Customer Service, Marketing, Finance, etc.). Rather than a single product, these are typically features included in or added onto existing D365 licenses. Microsoft has included some Copilot capabilities in premium D365 licenses and offers add-on subscriptions for enhanced functionality. For example, Dynamics 365 Sales Enterprise users get a set of Copilot features included (outlined below), while a full “Copilot for Sales” add-on is available at about US$50 per user per month (≈ AU$75/user/month) to unlock the advanced AI sales assistant features. (Sales Premium license holders can activate Copilot by simply adding the standard Microsoft 365 Copilot license at US$30.) Similarly, other D365 modules have Copilot features – e.g. Customer Service has AI that drafts agent responses or summarizes cases, and these may be included for users of certain tiers or offered as add-ons at a similar price point of around US$50 per user/month. It’s advisable to check the specific Dynamics 365 application’s licensing guide, as Microsoft’s strategy has been to bundle some Copilot capabilities into top-tier plans and sell others as optional upgrades.

What’s Included: Dynamics 365 Copilot infuses AI into CRM/ERP workflows to boost productivity for front-line and operational roles. For instance, in D365 Sales, Copilot can provide contextual insights and recommendations on leads/opportunities, generate summaries of customer accounts or opportunities, prepare meeting briefings, and even draft sales emails. It allows natural-language querying of the CRM (“Copilot chat”) to pull up info like “Summarize the last contact with Contoso Ltd” or “What are the key deals closing this quarter?” – saving salespeople time. These features (lead/account summaries, email drafts, meeting prep, etc.) are included with standard Sales Enterprise/Premium licenses. The Copilot for Sales add-on (the roughly US$50 per user/month subscription) goes further by connecting Microsoft 365 Copilot’s broader capabilities to CRM data – for example, it can integrate with Salesforce or D365 Sales to bring CRM context into Outlook and Teams. (Microsoft 365 Copilot for Sales is essentially a bridge that brings CRM data into the Microsoft 365 Copilot experiences.) In Dynamics 365 Customer Service, Copilot can assist support agents by drafting responses to customer inquiries, summarizing lengthy case notes, and highlighting relevant knowledge base articles. In Dynamics 365 Finance or Supply Chain, Copilot might help analyze trends or generate reports. The value for CIOs is in enhancing user productivity within business apps – e.g. faster sales cycles, improved customer responses – by leveraging AI on the rich data already in their D365 systems. For CFOs, these AI features could translate to better customer engagement (potentially revenue-impacting) and efficiency gains in operations. Risk and Governance: The Copilot features in Dynamics 365 use the data within your CRM/ERP, which is often sensitive (customer data, financials). Microsoft ensures that Copilot will only surface information the user is allowed to access in the CRM system, respecting the security roles and field-level permissions configured in D365. Moreover, any data sent to the AI models is not used to train the foundation model – it’s a retrieval-on-demand approach (see architecture discussion below). Organizations can also tune Copilot responses by configuring which fields or records it can draw from. Since these features are relatively new, CIOs should involve their governance and compliance teams to monitor outputs, especially in regulated industries – for example, verifying that AI-generated customer communications meet compliance guidelines before sending.

Power Platform Copilot (Low-Code AI Assistance)

Pricing (Australia): Power Platform Copilot features are included within the existing Power Platform licenses (Power Apps, Power Automate, etc.) during the preview period. Currently, there is no separate subscription fee for “Power Platform Copilot” – users with appropriate licenses can use these AI capabilities. Microsoft initially offered Copilot in Power Apps/Automate for free in preview, and final pricing or consumption-based billing is expected to be announced once these features are generally available. It’s likely that Copilot usage in Power Platform will be governed by the underlying AI Builder/Azure AI credits in the future (for example, Microsoft has introduced a pay-as-you-go model for Copilot components like Copilot Studio agents, where usage beyond a certain limit is billed per message). For now, enterprises can experiment with Power Platform Copilot without extra cost beyond their standard Power Platform licensing.

What’s Included: “Copilot” in the Power Platform is an AI helper that makes it easier to build apps, workflows, and bots with natural language. In Power Apps, Copilot allows a user (e.g. a business analyst) to describe the app they want in words, and it will generate data tables and a starter app automatically. For example, an analyst could say “I need an app to track customer feedback with fields for name, date, and comments,” and Copilot will produce a basic app with those elements. In Power Automate, Copilot enables users to write a workflow in plain English – e.g. “When a SharePoint list item is updated, send an approval request email” – and it will build the flow with the proper connectors and actions. This drastically lowers the skill barrier to create automations and apps, which is a huge value for CIOs aiming to empower “citizen developers” and reduce IT backlog. It also speeds up professional developers’ work by handling boilerplate setup. Copilot can also assist in Power Virtual Agents (chatbot) creation, by generating conversation topics and bot dialogs based on a prompt or existing website content. From a CFO viewpoint, these capabilities could reduce the need for outsourcing development or purchasing off-the-shelf apps, as internal teams can more quickly build solutions themselves – effectively improving the ROI of your Power Platform investment. Risk Considerations: Since Power Platform Copilot can create connectors and integrate data, governance is important. The AI will only create solutions within the permissions of the user and the data sources they choose. Administrators can control Copilot features via Power Platform admin settings (and eventually via Copilot governance policies). As with any generated app or flow, testing and validation are needed to ensure the AI-built solution works as intended. Microsoft’s upcoming licensing model for these AI features will likely be consumption-based (to cover the Azure AI service costs), so CIOs/CFOs should prepare for potential additional costs once Copilot moves out of preview – e.g. budgeting for AI Builder GPT credits or Copilot message packs when they become a paid feature.

Microsoft Security Copilot (AI for Security Operations)

Pricing (Australia): Microsoft Security Copilot is offered as a separate SaaS service via Azure, using a consumption-based billing model rather than a fixed per-user price. Instead of charging per seat, Microsoft charges for the compute capacity used by Security Copilot – measured in Security Compute Units (SCUs). As of early 2024 general availability, the rate is about US$4 per SCU per hour. In Australian terms, that’s roughly AU$6 per SCU per hour (using current exchange rates). There is no fixed “monthly subscription” fee – your monthly cost will scale with usage. For example, Microsoft provisions a baseline of 3 SCUs/hour to run Copilot, which would equate to roughly US$12/hour (≈AU$18/hour). If you ran at that capacity 24x7, it would be on the order of ~AU$13,000 per month. However, the service is elastic: you can provision a certain number of SCUs and even set an “overage” limit for peak times. This consumption model means you pay only for the AI resources you use, but CFOs should carefully estimate usage (based on the volume of security incidents and analysts using the tool) to budget accordingly. Microsoft provides an Azure pricing calculator for Security Copilot to help with this, and you’ll need an Azure subscription to purchase SCUs.
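Because billing is consumption-based, it pays to model a few capacity scenarios before you switch the service on. A minimal sketch, assuming the indicative US$4 per SCU per hour rate and the 3-SCU baseline mentioned above, plus a placeholder exchange rate and utilisation pattern you should replace with your own figures:

```python
# Rough Security Copilot budget model (consumption-based SCU billing).
# The SCU rate and baseline come from the discussion above; the exchange rate
# and utilisation assumptions are placeholders, not Microsoft guidance.

USD_PER_SCU_HOUR = 4.0    # indicative rate; confirm on the Azure pricing calculator
AUD_PER_USD = 1.5         # assumption: use the current exchange rate
HOURS_PER_MONTH = 730     # average hours in a month

def monthly_cost_aud(provisioned_scus: int, utilisation: float = 1.0) -> float:
    """Estimated monthly spend for a given SCU provisioning level and utilisation."""
    return USD_PER_SCU_HOUR * AUD_PER_USD * provisioned_scus * HOURS_PER_MONTH * utilisation

print(f"3 SCUs, 24x7:                    ~AU${monthly_cost_aud(3):,.0f} per month")
print(f"3 SCUs, ~25% of hours utilised:  ~AU${monthly_cost_aud(3, utilisation=0.25):,.0f} per month")
```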

What’s Included: Security Copilot is an AI assistant specifically for security teams (SecOps, incident response, threat hunting, etc.). It leverages Microsoft’s security intelligence (the vast threat signal data in Microsoft’s security graph) and an OpenAI GPT-4 model to help analyze and summarize security incidents. In practice, Security Copilot can take natural language prompts from a security analyst (e.g. “Investigate alerts related to ContosoSales database in the last 24 hours” or “Summarize this incident and suggest next steps”) and then collate information from tools like Microsoft Sentinel, Defender, and other security products. It will produce a response that might include a timeline of an attack, affected assets, recommended mitigations, or guided queries for deeper analysis. Essentially, it acts as a junior security analyst that never sleeps – processing alerts at machine speed and offering insights or automating parts of the investigation. Microsoft has positioned Security Copilot as a force-multiplier for under-resourced security teams, potentially reducing time to identify and respond to threats. From a CIO/CISO perspective, the value is in improved security posture and efficiency: it can help detect complex threats by correlating signals and can generate executive-ready summaries of incidents in seconds. For a CFO, the cost model (consumption-based) means you can start small – perhaps enabling Copilot for a subset of security workflows – and scale up if it proves valuable, rather than paying a blanket license for all security staff. Security/Risk Perspective: Fittingly, Security Copilot is built with security and privacy in mind. The data it uses (security logs, incident data) stays within your Azure tenant boundaries; Copilot’s analysis happens without exposing that sensitive data to outsiders. It also respects role-based access – an analyst will only get answers based on data they have permission to view in the connected security tools. Moreover, Microsoft continuously updates Security Copilot’s threat intelligence and reasoning capabilities (learning from new attacks in the wild, updating its models to recognize new threat patterns). However, it’s important to treat Copilot’s output as guidance – security professionals should validate suggestions. Microsoft has implemented responsible AI controls here as well, to avoid false positives or biases: for example, Copilot will show its work (e.g. which alerts or data points led to a conclusion) so the human analyst can verify. Overall, Security Copilot can improve risk management by catching what humans might miss, but it should be adopted alongside strong security processes, not as a replacement for them.

Data Access in Microsoft 365 Copilot: What Can it See Out-of-the-Box?

Microsoft 365 Copilot’s power comes from its ability to tap into your organization’s Microsoft 365 data estate. Out of the box, M365 Copilot can access content and context via the Microsoft Graph, which means all the information a user could normally access through Microsoft 365 tools is available to Copilot to draw upon. Concretely, this includes:

  • Email and Calendar: Copilot can read your Outlook/Exchange emails (including threads and attachments you have), and your calendar events. For example, it could summarize an email conversation or extract action items from a meeting invite. It uses your mailbox data to answer questions like “What did I commit to in my emails last week?”

  • Chat and Meetings: Copilot has access to Microsoft Teams chats and meeting information. It can summarize Teams chat threads, channel conversations, or meeting transcripts (if those are recorded or live-transcribed). For instance, you can ask, “What were the main topics in the team chat yesterday?” and it will synthesize based on the Teams messages you have access to. It’s also aware of your current meeting context – during a meeting you could ask Copilot for insights about the participants or related projects (pulling from their bios or past interactions).

  • Files in OneDrive and SharePoint: Documents you can access in OneDrive for Business or SharePoint (Word documents, Excel sheets, PowerPoint decks, PDFs, OneNote, etc.) are within Copilot’s reach. It can summarize documents, compare them, extract data, or draft content based on them. For example, you might prompt, “Summarize the Q4 Strategy document from our SharePoint,” and Copilot will generate a summary – but only if you have at least view permission on that file. It can even accept a file you drop into the Copilot chat to refine a prompt.

  • Other Microsoft 365 content: Copilot also considers your contacts (via Outlook People) and possibly information in tools like Planner or To Do (for instance, it might compile tasks assigned to you). Microsoft has indicated Copilot uses “the user’s working context” – such as recent Teams meetings, recent documents you edited, or emails on a topic – to ground its answers. Essentially, any data that is accessible through Microsoft Graph and is not explicitly excluded will be available. That typically covers SharePoint sites (files, pages you can access), OneNote notebooks (since they are stored in SharePoint/OneDrive), and even Viva Engage (Yammer) conversations if the user has access.

It’s important to note that Copilot does not have any elevated privileges – it sees only what the individual user can see. Microsoft states clearly: “Microsoft 365 Copilot presents only data that each individual can access, using the same underlying access controls of Microsoft 365”. So, if a document is private to your manager and you don’t have permission, Copilot will not retrieve or reveal it. If a Team or SharePoint site is private or you are not a member, Copilot cannot summarize or draw from its contents. This “permission mirror” ensures Copilot can’t be used as a backdoor to snoop on data. In fact, when asked, Copilot will outright refuse to provide content from a file you name if you don’t have access. The system relies on Microsoft Graph’s security trimming – any search or retrieval Copilot does via Graph automatically filters out content you aren’t authorized to see.
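You can see the same security trimming for yourself by calling the Microsoft Graph search API with a delegated user token. The sketch below is illustrative rather than anything Copilot actually runs; it assumes you already have a delegated access token for a signed-in user and uses Graph’s documented /search/query endpoint. Whatever the query, Graph only ever returns items that user is permitted to open.

```python
import requests

# Search SharePoint/OneDrive content as a specific user via Microsoft Graph.
# Results are security-trimmed server-side: only items this user can open come back.
# ACCESS_TOKEN is assumed to be a delegated (user) token from your usual OAuth flow.
ACCESS_TOKEN = "<delegated-user-token>"

search_request = {
    "requests": [
        {
            "entityTypes": ["driveItem"],                    # files; emails would be a separate request
            "query": {"queryString": "customer feedback"},   # the kind of query Copilot might issue
        }
    ]
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/search/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=search_request,
    timeout=30,
)
resp.raise_for_status()

# Anything the signed-in user cannot access is simply absent from these hits.
for container in resp.json().get("value", []):
    for hits in container.get("hitsContainers", []):
        for hit in hits.get("hits", []):
            resource = hit.get("resource", {})
            print(resource.get("name"), "-", resource.get("webUrl"))
```

The important point is that the trimming happens server-side in Graph, keyed to the caller’s identity – there is no switch a client (Copilot included) could flip to see more than the user can.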

In summary, out-of-the-box Microsoft 365 Copilot has access to a broad swath of work data: your emails, chats, meetings, and files – basically the productivity data in Microsoft 365. This breadth is what allows it to give very rich, context-aware answers (e.g. drafting a report that includes data from a relevant Excel file and quotes from a recent Teams discussion). For CIOs considering Copilot, understanding this scope is key: Copilot can surface insights from siloed information and break down barriers between apps. For CFOs, this means the organization could get more value from existing data (finding information faster, reusing past work easily). The flip side is ensuring that your data governance is solid – since Copilot will faithfully reflect whatever access permissions and data quality you have. Any improperly secured file (e.g. one that many people have access to but perhaps shouldn’t) could potentially be summarized to a wider audience via Copilot. This is a reminder to tighten permissions and use tools like sensitivity labels; Copilot respects sensitivity labels and won’t disclose protected content to unauthorized users (rights-protected documents remain inaccessible to Copilot if the user can’t open them normally). Microsoft’s design emphasizes no new data leakage: “Copilot can only access files that you have permissions for… including files in your OneDrive as well as files shared with you”.

How Microsoft 365 Copilot Retrieves Information and Respects Permissions

When a user prompts Microsoft 365 Copilot with a request, there is a sophisticated orchestration under the hood to fetch relevant data securely before the AI formulates a response. Here’s how that process works in simple terms (a conceptual code sketch follows the steps below):

  1. User Prompt & Orchestration: The user’s query (e.g. “Summarize the customer feedback we received this week”) is first analyzed by the Copilot orchestration engine. Copilot determines which data sources might contain the answer. In this example, it might consider emails (for feedback emails), Teams chats (if feedback was discussed), or a OneDrive folder where feedback docs are stored. Copilot does not directly have a copy of all your data. Instead, it issues search and retrieval requests through Microsoft Graph APIs on-the-fly. Essentially, Copilot asks Graph, “What information do we have that’s relevant to this question?”

  2. Graph Retrieval with Security Trim: The Microsoft Graph API then performs a search or data query across your content (Exchange, SharePoint, Teams, etc.), scoped to the user’s identity. This step automatically enforces all the normal security and compliance rules. If the query calls for emails about “customer feedback,” Graph will only return emails from the user’s mailbox (or other mailboxes they have access to, if any delegation exists). If it searches documents, Graph will only return files from SharePoint/OneDrive that the user can access. Microsoft 365 has a concept of “security trimming,” meaning search results exclude any item the user isn’t permitted to see. This means Copilot’s retrieval step inherently respects SharePoint permissions, mailbox rights, Teams channel membership, etc. Copilot never bypasses permissions – it can’t magically read content in someone else’s private folder. Microsoft affirms: “The permissions model within your Microsoft 365 tenant… ensures data won’t leak between users. Copilot presents only data that each individual can access, using the same controls as other M365 services.” In practice, if five documents match the query but you have access to only three, Copilot will retrieve those three and ignore the off-limits ones.

  3. Grounding the AI with Retrieved Data: The relevant snippets of content found via Graph are then passed into the prompt for the large language model (LLM). This step is often called “grounding” the model – providing it with factual reference text so it can generate a targeted, accurate answer. For example, Graph might return a summary of a Teams conversation about customer feedback and an Excel sheet with feedback data. Copilot will feed the pertinent text (and perhaps some extracted key points) into the GPT-4-based model along with the user’s question, instructing the model to base its answer on this data. Because the model is only seeing content the user is allowed to see, it cannot leak information outside the user’s access scope – it literally doesn’t receive any forbidden data. Notably, Copilot can also use semantic indexing to improve this retrieval. Microsoft 365 Copilot maintains a semantic index of your organization’s data (sort of a vector database of embeddings of your files/emails) which helps it find relevant information beyond simple keyword search. This index is built and updated automatically (it stays within your tenant and respects all security boundaries). So when Copilot searches for relevant content, it can do so intelligently – e.g., finding a relevant paragraph in a 10-page document that relates to “customer feedback summary,” even if the wording isn’t identical. This greatly enhances Copilot’s ability to retrieve the right pieces of data to ground the AI’s answer.

  4. Generating the Answer: With the query and the retrieved context, the LLM (the AI model) generates a response. For instance, it might produce a summary: “This week we received generally positive customer feedback. In email correspondence, Client A praised our support team’s responsiveness, while Client B provided suggestions on improving product documentation…” and so on, citing specifics from the retrieved sources. If Copilot is answering inside an app like Word or Outlook, it might directly produce the draft there. In the Teams Copilot chat interface, it would display the answer in the chat. If asked, Copilot can often provide references (e.g., which document or email it drew a statement from) – this traceability is useful for users to verify answers.

  5. Permission Re-check and Limits: Copilot’s design includes safeguards that double-check permissions at time of output. If, say, between the retrieval step and the moment of answer, a document’s permissions changed or the user’s access was revoked, the system should prevent showing that content. Also, Copilot will not reveal the raw text of a document unless it was necessary for the task. It tends to summarize or rephrase the content. For example, if you ask “What are the key takeaways from Project X proposal?” it will summarize the proposal’s key points rather than dumping whole paragraphs. This reduces the risk of over-exposing information. (If you explicitly ask for a quote or section, it will provide it, but again only if you have access rights.)

  6. User Feedback Loop and Improvement: After receiving an answer, the user can refine it (ask follow-up questions, etc.). Behind the scenes, none of your organizational data that was retrieved is retained by Microsoft beyond this session – it’s transiently used to generate the answer and not stored for training the model. Microsoft has explicitly stated that “prompts, responses, and data accessed through Graph aren’t used to train the foundation LLMs” that power Copilot. Your data remains within your tenant; the AI model does not learn your documents in a way that others could ever see. The only data that might be stored briefly are the prompts and results in a user’s activity log (so you can see what Copilot did, similar to a chat history). Admins can also audit Copilot usage – for instance, logs will show that a user’s Copilot query accessed these five documents. This transparency can help in governance and investigating any issues.
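Putting steps 1-4 together, the retrieval-augmented pattern looks conceptually like the sketch below. To be clear, this is not Microsoft’s implementation: the helper functions are placeholders I’ve invented to mirror the steps above (a security-trimmed Graph search, semantic ranking standing in for the tenant index, and a call to the hosted model), and the prompt wording is illustrative.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    source: str   # e.g. a document URL or an email subject line
    text: str     # the extracted passage

def search_graph_as_user(query: str, user_token: str) -> list[Snippet]:
    """Placeholder for steps 1-2: a security-trimmed Microsoft Graph search run
    with the user's own token, so only content they can open is ever returned."""
    raise NotImplementedError

def rank_by_relevance(snippets: list[Snippet], query: str) -> list[Snippet]:
    """Placeholder for the semantic-index part of step 3: order candidate passages
    by how semantically close they are to the question (embeddings, not keywords)."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder for step 4: a call to the hosted foundation model. The model
    sees only what is in this prompt; nothing here is retained for training."""
    raise NotImplementedError

def answer_with_rag(question: str, user_token: str) -> str:
    # Steps 1-2: retrieve candidate content as the user (permissions enforced by Graph).
    candidates = search_graph_as_user(question, user_token)

    # Step 3: keep only the most relevant passages to ground the model ("grounding").
    grounding = rank_by_relevance(candidates, question)[:5]
    context = "\n\n".join(f"[{s.source}]\n{s.text}" for s in grounding)

    # Step 4: ask the model to answer using only the supplied, permission-trimmed context.
    prompt = (
        "Answer the question using only the context below, and cite the sources used.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The grounding context is assembled fresh on every prompt, which is why answers track your live data without the model ever being trained on it.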

From a CIO perspective, this retrieval and permission-check mechanism is reassuring: it leverages existing, tested Microsoft 365 security. Copilot is essentially running as the user in terms of data access. There is no “God mode AI” reading all company data indiscriminately – it’s all scoped by user identity and query context. This means that if your org’s permissions are well-managed, Copilot will respect all of them. If there are weaknesses (like overly broad access somewhere), Copilot might reveal that to the rightful users. In fact, early adopters noted Copilot can make one realize just how much information some employees have access to (which could prompt tightening of access control). From a risk standpoint, it’s important to manage internal permissions and possibly use Information Barriers or Sensitivity Labels if you want to enforce stricter segmentation of data even within the company – Copilot will obey those policies.

Another aspect is compliance: Copilot logging means you have records of what data was retrieved and presented, which is useful for compliance officers worried about data misuse. Also, since Copilot uses only data the user can access, it inherently complies with things like legal holds or retention policies (it can’t surface something that was deleted or archived if the user has no access to it through normal means).

Custom Model per Organization vs. Retrieval-Augmented Generation (RAG) Architecture

One key question for decision-makers is how Microsoft is handling the AI model itself: Do they train a custom AI model on each organization’s data, or do they use a single global model with dynamic retrieval of data (often called Retrieval-Augmented Generation)? The answer is that Microsoft 365 Copilot uses a RAG approach with a shared underlying model, rather than creating a bespoke model per tenant.

  • No Custom LLM per Tenant: Microsoft uses large foundation models (currently GPT-4 from OpenAI, hosted in Azure) as the brain of Copilot. Your organization’s data is not used to train or fine-tune these base models. Microsoft explicitly assures that your content is not used to improve the generic model – it stays out of the training corpus. Therefore, all organizations get the benefit of the same highly capable base AI model, and improvements to that model by Microsoft or OpenAI (better reasoning, more knowledge, etc.) benefit everyone simultaneously. There isn’t a separate “Model A” for Org A and “Model B” for Org B. Instead, Microsoft focuses on isolating each tenant’s data at query time (the retrieval mechanism we described). This also means that Copilot doesn’t retain your data beyond your interactions – each prompt is handled with the data available at that moment. The AI isn’t gradually building a memory of your documents across sessions (apart from session context within a single conversation, and even that is not retained long-term).

  • Retrieval-Augmented Generation (RAG): The architecture of M365 Copilot is a textbook example of RAG: when you ask a question, the system retrieves relevant data (that you have access to) and augments the prompt to the generative model with that data. The model then generates a response using both its pre-trained knowledge and the retrieved organizational data as grounding. This approach is powerful because it keeps your proprietary information within your control (in Graph/search index) and only feeds the model bits of data as needed, on the fly. The model’s output is thus informed by your data without the model needing to “learn” your data in a traditional sense. It’s also dynamic – if you update a document or receive new emails, those can immediately influence Copilot’s answers (since the retrieval will fetch the latest data available). Essentially, Copilot’s knowledge of your organization is always up-to-date because it’s searching the live data. In contrast, a static trained model would go stale as soon as your data changed. CIOs will appreciate that RAG means no training phase, no need to feed your data into an AI black box – your data stays in standard storage and is only accessed when you invoke Copilot.

  • Model Updates and RAG Evolution: Microsoft handles all the updates to the Copilot system on the back end. The underlying GPT-4 model (and any future models used) is managed and updated by Microsoft as improvements become available. Microsoft has indicated that it will regularly update the AI models to improve performance and responsiveness – for example, Microsoft recently noted that PowerPoint Copilot now updates its language model regularly to deliver faster summaries. These updates happen in the cloud service; from the customer side, you don’t have to retrain anything or install new models. When OpenAI or Microsoft release a more advanced model (say GPT-4 Turbo, or GPT-5 down the line), Microsoft can integrate that into Copilot (after testing for enterprise readiness) and roll it out to customers as part of the service. So, each tenant benefits from ongoing AI advancements without additional effort or cost beyond the subscription. On the retrieval side, Microsoft is also enhancing how Copilot finds and uses your data. The introduction of semantic indexing for Copilot is one such improvement – it was added to make results more relevant and is built tenant-by-tenant. Microsoft will continue tuning the retrieval algorithms (for example, to better rank results, or to incorporate new data sources like third-party connectors). They may also allow organizations to customize what data sources Copilot considers first. As of now, Copilot primarily uses Microsoft 365 data by default, but Microsoft has announced extensibility (Copilot can use plugins to query external systems or databases). Admins can govern these, ensuring any extension of Copilot’s retrieval still abides by permissions and compliance rules. The Copilot ecosystem is evolving rapidly – you can expect new connectors, plugin integrations, and possibly industry-specific AI improvements to roll out periodically (Microsoft is delivering Copilot updates on a monthly cadence in the Microsoft 365 service, similar to how regular features update).

  • One Model, Multiple Tenants, Complete Isolation: It’s worth emphasizing the multitenant nature of Copilot’s AI service. The AI model might be the same for many customers, but each tenant’s data is isolated and encrypted in Microsoft’s cloud. When Copilot works on your prompt, it only has access to your tenant’s index and data (and only the pieces your user identity can see). There is no crossover of data between companies – e.g. Copilot will never retrieve another company’s document as part of your answer. Microsoft has strict data boundary enforcement at the service level. In fact, Microsoft 365 Copilot runs on Azure infrastructure within the Microsoft 365 compliance boundary – so it adheres to the data residency commitments you have with M365. If your tenant is geo-located (say, in Australian data centers for sovereignty reasons), Copilot will honor that for data processing. Only the abstract AI model processing might happen in a paired region if needed, but the model doesn’t log or store your content. For highly regulated sectors, Microsoft has provided documentation on compliance of Copilot, and third-party audits are expected as the service matures.
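To make the “one shared model, fully isolated tenants” point concrete, here is a deliberately toy sketch. The class and variable names are invented purely for illustration and bear no relation to Microsoft’s internal services; the idea is that retrieval is keyed to a single tenant’s index and trimmed to a single user, while the model layer is common to everyone and never fine-tuned on anyone’s data.

```python
# Deliberately toy model of "shared model, isolated tenant data".
# All names here are invented for illustration only.

class TenantIndex:
    """Each tenant gets its own semantic index; it never contains another tenant's content."""

    def __init__(self, tenant_id: str):
        self.tenant_id = tenant_id
        self._items: list[tuple[str, set[str]]] = []   # (passage, users allowed to see it)

    def add(self, passage: str, allowed_users: set[str]) -> None:
        self._items.append((passage, allowed_users))

    def query(self, user: str, term: str) -> list[str]:
        # Security trimming: only passages this user may see are ever returned.
        return [p for p, allowed in self._items if user in allowed and term.lower() in p.lower()]

SHARED_MODEL = "one foundation model serving every tenant"   # never fine-tuned on tenant data

def copilot_style_answer(index: TenantIndex, user: str, question: str) -> str:
    context = index.query(user, question)
    # The shared model only ever receives this user's, this tenant's, retrieved context.
    return f"[{SHARED_MODEL}] grounded on {len(context)} passage(s) from tenant '{index.tenant_id}'"

contoso = TenantIndex("contoso")
contoso.add("FY25 pricing strategy draft", allowed_users={"alice@contoso.com"})

print(copilot_style_answer(contoso, "alice@contoso.com", "pricing"))   # 1 passage
print(copilot_style_answer(contoso, "bob@contoso.com", "pricing"))     # 0 passages: Bob lacks access
```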

In summary, Microsoft does not train a custom AI model with each organization’s data (which mitigates the risk of your confidential information bleeding into a global model). Instead, Copilot uses a shared AI model + your tenant’s data retrieved in real-time. This Retrieval-Augmented Generation approach is updated continuously: the data retrieval layer updates as your data changes (and Microsoft improves the indexing), and the AI model layer is updated by Microsoft periodically to newer versions or with optimizations. For CIOs and CFOs, this means you get the best of both worlds – a state-of-the-art AI brain that’s always improving, combined with your company’s own knowledge fueling its answers. The key considerations remaining are governance and cost: ensure you have governance controls for what Copilot can connect to (Microsoft provides admin controls to enable/disable Copilot features and forthcoming plugin management) and keep an eye on license costs or Azure consumption as you scale up usage. If used well, Copilot can act as a knowledgeable assistant that draws from your enterprise’s collective memory in seconds, which is a compelling value proposition. Balancing that value against the subscription costs and any potential risks (like AI inaccuracies or exposure of sensitive info if permissions are misconfigured) will be the crux of the CIO/CFO evaluation. Microsoft appears to be addressing these concerns with transparent policies, tooling, and ongoing updates, as evidenced by the official documentation and previews so far.

References:

  • Microsoft 365 Copilot pricing (Australia) and included features.

  • GitHub Copilot for Business pricing (USD).

  • Dynamics 365 Sales Copilot features and licensing info.

  • Power Platform Copilot description (HBS consulting blog).

  • Security Copilot pricing model (consumption-based).

  • Microsoft 365 Copilot data access & privacy (Microsoft Learn).

  • Microsoft Support FAQ on Copilot access to OneDrive files.

  • Statement on Graph data not training the model.

  • Semantic index and data retrieval (Microsoft Learn).

  • Copilot model update example (Microsoft Learn release notes).



