
AI Data Privacy Concern: Microsoft employees might review your Azure AI prompts and responses.

March 1, 2024

I assumed that relying on Microsoft for enterprise AI was the same as relying on Microsoft for enterprise cloud storage. I was wrong.

If you are a big Microsoft customer, you can opt out of potential human review of your prompts and responses. But you need to be proactive. If you are not a big customer, you are currently SOL.

The following, however, applies only to Azure OpenAI Service because Microsoft has exempted Microsoft Copilot from Microsoft’s own abuse monitoring program—which is rather telling.

PREFACE

As a lawyer, I know fine print matters. Indeed, I am keenly aware the fine print’s fine print can be dispositive. I am not sure if I should be comforted or disconsolate that I am very much not alone in having missed the fine print I am about to share.

This is a statement against interest. I am “all deliberate speed” on Generative AI for enterprise (receipt). Sadly, I am equipping the “slow down” contingent instead of bolstering my “go faster” compatriots. Pains my soul. Yet, while I might be reporting the news, I am persuaded what follows is not nearly as widely appreciated as it should be because, if it were, I already would have heard the slow-down folks shouting “Microsoft employees will look at our private information” from the rooftops.

Finally, I am pro Microsoft. A decade ago, I got my start suggesting we should improve our use of Microsoft Word (receipt) and to this day maintain a side hustle in competence-based training for MS Office (Procertas). I spent years extolling the virtues of leveraging the entire Microsoft 365 stack, especially Power Apps. Recently, I’ve been encouraging the legal market to experiment in Azure AI Studio. In short, I would be so pleased if Microsoft corrects the record and explains I have it all wrong. I could totally be wrong. I would love to be wrong. If I am not wrong, I am inclined to give Microsoft the benefit of the doubt that they were simply moving fast and meaning well.

I must confess, I was tempted to stay silent. But the longer this remains little known, the worse the fallout will be when it finally surfaces. Plus, customer pressure is the best mechanism for convincing Microsoft to change their policy, which they should.

EXPLAINER

Azure is Microsoft’s enterprise cloud platform. Azure OpenAI Service is part of Azure, and a pathway for enterprises to access, via Microsoft, the most performant Generative AI models from OpenAI—e.g., GPT-4. The Azure OpenAI Service (“AOAIS”) tagline is “Build your own copilot and generative AI applications.”
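For readers who have not touched AOAIS directly, a minimal sketch of what building on it typically looks like, using the openai Python package’s AzureOpenAI client, may help. The endpoint, key, API version, and deployment name below are placeholders rather than anything from Microsoft’s documentation; the relevant point is that every prompt and completion in an application like this passes through the Azure OpenAI Service and is therefore governed by the data-handling terms examined below.

# Minimal sketch of an application built on Azure OpenAI Service (AOAIS).
# All values are placeholders; substitute your own resource details.
import os
from openai import AzureOpenAI  # Azure-flavored client from the official openai package

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # your Azure resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],                 # key from the Azure portal
    api_version="2024-02-01",                                   # illustrative API version
)

# Both the prompt and the generated response transit Microsoft's service,
# which is what makes the abuse monitoring terms discussed below relevant.
response = client.chat.completions.create(
    model="<your-gpt-4-deployment>",  # your deployment name, not the raw model name
    messages=[
        {"role": "system", "content": "You summarize contracts for a legal team."},
        {"role": "user", "content": "Summarize the termination clause in this agreement: ..."},
    ],
)
print(response.choices[0].message.content)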

Many businesses, including many law firms and legaltech vendors, have been heeding Microsoft’s call to build their own Generative AI applications on the Microsoft stack because they already rely on Microsoft/Azure for their core digital infrastructure. The consensus understanding seems to be that with AOAIS, Microsoft offers what Microsoft itself advertises as “AI with Enterprise-Grade Security.”

For most law departments, law firms, and legaltech vendors I have spoken to, this translates into an (erroneous) assumption that there is no added risk of disclosure when pairing AOAIS with private company data already stored on Azure—the same misimpression I was under. This perspective is reinforced at the top of Microsoft’s Data, privacy, and security for Azure OpenAI Service resource, which showcases a blue-colored box labeled “Important”:

Excellent. But fine print follows.

Below the blue box, in the same Data, privacy, and security for Azure OpenAI Service resource, we eventually learn that Microsoft has set up a process for “abuse monitoring.” Under their abuse monitoring program, Microsoft stores all prompts and generated content for 30 days.

If a prompt or generated content triggers the abuse monitoring system, the prompts and generated content are subject to human review by a Microsoft employee.

Candidly, I had seen this before. But I did not think much of it because there is a mechanism for turning off abuse monitoring.

I figured everyone would turn it off, and that would be that. How wrong I was.

First, I underestimated how few would read down and take the steps necessary to turn off abuse monitoring. Second, and more problematically, I failed to read the fine print’s fine print to understand how few enterprises—clients, law firms, and legaltech companies—even have the option of turning off abuse monitoring. You see, customers must meet “additional Limited Access eligibility criteria.”

“Limited Access eligibility criteria” is not explained in the Data, privacy, and security for Azure OpenAI Service resource from which the foregoing screenshots are taken. The term is not present, let alone explained, in the companion Abuse monitoring resource, which is not additive for our present purposes. But it starts to come into focus in the Azure OpenAI Limited Access Review form used to apply for “modified abuse monitoring”—also sometimes termed an “exemption” and meaning that abuse monitoring is turned off for specific use cases.

The form’s eligibility criteria turn on being a “managed customer,” and this “managed customers” language is similarly present in the accompanying resource on Limited Access:

Again, like turning off abuse monitoring, this appears, superficially, to merely be a matter of paperwork. An enterprise can register for Limited Access, BUT only if they are a managed account—a status for which they cannot register, per the same Limited Access resource:

And this, dear reader, is where my search skills reach their limit. What are the eligibility criteria and process for becoming a “managed customer” or a “managed partner” and therefore eligible to have abuse monitoring turned off?

¯\_(ツ)_/¯

Maybe you are a search ninja. If so, please share your findings. But neither Google nor Bing Copilot offered me much. I uncovered a few forums with posters asking questions about becoming a managed customer but receiving unhelpful answers (see here), as well as some vagueness about becoming a managed partner, including this unverifiable quote, “Microsoft talks about having 640,000 partners, and trust me, very few are fully managed.” (see here)

Admittedly, I cannot confirm that very few partners—and, presumably, very few customers—are managed. But it rings true based on the conversations I’ve had thus far. Absent anything official from Microsoft, my only option is to fall back on anecdote as the best available evidence. Anecdote suggests few (not zero) companies, law firms, and legaltech vendors are managed accounts. Thus, few are eligible to turn off abuse monitoring.

But you know who does qualify to turn off abuse monitoring, and has, in fact, opted out of abuse monitoring by Microsoft employees?

Microsoft. Specifically, Microsoft Copilot.

Seems like a bad look. Further, I would be willing to wager a not insubstantial sum that Microsoft has exempted itself—i.e., Microsoft as an enterprise—from abuse monitoring despite the fact that Microsoft is the only enterprise that would not potentially be exposing its data to a third party, because the reviews are performed by Microsoft’s own employees. This is pure speculation; I have no evidence; hence the gamble.

Microsoft Copilot is on a relatively short (but not empty) list of exemptions. Anecdata from my conversations with law departments, law firms, and legaltech vendors on this subject suggest the following:

  • Only a small share of law departments, law firms, and legaltech vendors are aware of abuse monitoring
  • A far smaller share of law departments, law firms, and legaltech vendors have an exemption to turn off abuse monitoring
  • Only a small share of law departments, law firms, and legaltech vendors would, if they applied, be granted an exemption to turn off abuse monitoring because they are “unmanaged” accounts
  • Law departments, law firms, and legaltech vendors denied an exemption to turn off abuse monitoring because they are “unmanaged” accounts have subsequently been informed they are “too small” to become managed accounts and have not been offered any pathway to becoming managed—i.e., no pathway to turn off abuse monitoring
  • Only a small share of law departments are asking law firms and legaltech vendors if the firm, the vendor, or the firms’ vendors have turned off abuse monitoring
  • Only a small share of law firms are asking legaltech vendors if they have turned off abuse monitoring

Small share ≠ zero. I have found law departments, law firms, and legaltech vendors painfully aware of abuse monitoring. Some have been granted exemptions. Most have been denied exemptions because they are “unmanaged.”

Indeed, I deserve no credit for flagging this issue. The concern was flagged for me by a law firm denied an exemption. I simply ran it down because their commendable commitment to reading the fine print’s fine print sparked my curiosity and sense of duty.

Let me also say I understand the Responsible AI ethos that would lead Microsoft down this path. There are many legitimate concerns around AI being used for nefarious purposes or producing harmful results. (See the Responsible & Explainable section of our GenAI content library).

But what I understand even better is why, despite these legitimate concerns, Microsoft would still have Copilot opt out of human review for abuse monitoring purposes. The complications extend far beyond legal considerations. Most companies (i.e., clients) have massive swathes of data (e.g., financials, strategy, product) they would almost never agree to involuntarily share with a third party. Legal’s alarm, however, is acute both because we are often called in to address ‘colorful’ matters that might trigger abuse monitoring (a total black box, AFAIK) and because of our rightful emphasis on preserving privilege.

The best analogy I can muster is Microsoft reserving the unilateral right to comb through all their customers’ cloud data because that data might be inflammatory or illegal—a legitimate concern, but one insufficient to overcome the interest in shielding enterprise data from third parties. On this, Microsoft has a clear, and correct, policy:

Another piece that hurts my heart is the differential treatment of large customers. AI was supposed to be a leveler. Instead, at present, the playing field is tilted to extend the advantage of the largest incumbents at the expense of everyone else, including startups that might compete with Microsoft's own Copilot. “Limited Access” is such a perfect descriptor.

I’m disappointed but not defeated. This does not eliminate every enterprise or legal use case. Myriad low-risk use cases remain, especially those with engineered prompts. The chatbot-like interfaces are probably the most concerning due to lack of constraints on what can be asked and answered—especially asked, since the answers can be somewhat controlled.
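To make the engineered-prompt distinction concrete, here is a hedged sketch (the task names and templates are invented for illustration) of constraining what users can send to the model, as opposed to a chatbot interface that forwards whatever the user types:

# Illustration only: an "engineered prompt" layer that limits what reaches the model.
# The task names and templates are invented for this sketch.
APPROVED_TEMPLATES = {
    "summarize_clause": "Summarize the following contract clause in plain English:\n\n{text}",
    "list_defined_terms": "List every defined term in the following passage:\n\n{text}",
}

def build_prompt(task: str, text: str) -> str:
    # Users choose from pre-approved tasks; they cannot type arbitrary instructions.
    if task not in APPROVED_TEMPLATES:
        raise ValueError(f"'{task}' is not an approved task")
    return APPROVED_TEMPLATES[task].format(text=text)

# A chatbot interface, by contrast, passes along whatever the user asks,
# which is exactly the lack of constraint flagged above.
prompt = build_prompt("summarize_clause", "Either party may terminate upon thirty days' notice ...")
print(prompt)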

I hope Microsoft reverses course. I hope they were moving fast and meaning well. My call to action is for customers and partners to pressure Microsoft by whatever legitimate channels are available.
