
As part of the recent Microsoft AI-Powered Work Bootcamp, Herain Oberoi, General Manager of Data Security, Privacy & Compliance at Microsoft, held a session entitled “Securing and Governing Data for the Era of AI.” The session explored how organizations can secure and govern data effectively in the AI Era.
While GenAI and AI agents are becoming increasingly intertwined with business processes and delivering massive strategic gains, they are also expanding and complicating the attack surface.
Top Challenges for the AI Era
Oberoi kicked off his session by illustrating that in the AI Era, there are two key questions when it comes to cybersecurity:
- Do I trust the data?
- Is the AI system reliable, safe, and secure?
He explained that AI systems expand the attack surface, and GenAI in particular has introduced new and amplified risks that continue to evolve and are, much like the technology itself, incredibly dynamic. These risks include areas like data leakage, jailbreaks, indirect prompt injection, hallucinations, and model vulnerabilities. Addressing these issues requires purpose-built security measures for AI.
What are the biggest challenges facing enterprises in the age of AI? Oberoi identifies three main issues: data oversharing and leakage, regulatory compliance, and the use of disparate solutions.
Data Oversharing and Leakage: One of the key ways to secure data is to manage access and maintain data hygiene. Oberoi emphasizes that organizations must ensure data sources have the appropriate access controls, actively implement Data Loss Prevention (DLP) policies, and apply sensitivity labels to their data.
In terms of good data hygiene, he suggests that companies focus on deleting outdated data, archiving information appropriately, and keeping their data current. Fundamentally, he believes that good hygiene enables all other data protection measures.
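To make the DLP idea concrete, a data loss prevention policy at its core pairs a detector for sensitive content with a handling decision. The sketch below is purely illustrative and is not the Purview API or its labeling model; it flags text containing card-number-like patterns and suggests a sensitivity label.

```python
import re

# Hypothetical, simplified DLP-style check -- not the Purview API.
# A real DLP engine combines many detectors, confidence levels, and policy scopes.

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to reduce false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify(text: str) -> str:
    """Return a suggested sensitivity label for a piece of text."""
    for match in CARD_PATTERN.finditer(text):
        if luhn_valid(match.group()):
            return "Highly Confidential"  # e.g. restrict sharing, encrypt
    return "General"                      # no special handling required

print(classify("Order notes: card 4111 1111 1111 1111 on file"))  # Highly Confidential
print(classify("Quarterly roadmap review"))                       # General
```

In a production system the label would then drive enforcement (blocking an upload, encrypting a file), which is the role sensitivity labels and DLP policies play together in the approach Oberoi describes.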
Within Microsoft’s security framework, tools provide visibility into areas where oversharing may occur and help companies understand their oversharing posture. For instance, Microsoft Defender for Cloud Apps now assigns a risk score to AI applications. Meanwhile, Microsoft Purview provides reports that give visibility into data alongside security risk scores, as well as user risk scores (low vs. high) based on metrics such as the level of access to sensitive information or whether a user has access to an AI agent.
Regarding data protection, Purview users can create DLP policies that adapt based on user risk levels and data context. This adaptive protection allows policies to be applied across multiple platforms, including email, SharePoint sites, Copilot, Foundry, and more.
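The core of adaptive protection, as described, is that enforcement scales with the user's assessed risk rather than applying one static rule to everyone. A hypothetical sketch of that idea (the risk tiers and actions here are illustrative, not Purview's actual policy model):

```python
# Illustrative sketch of risk-adaptive policy selection -- not Purview's actual model.
# The idea: the same DLP policy enforces harder actions as assessed user risk rises.

POLICY_BY_RISK = {
    "low":    "audit",  # log the activity, allow it
    "medium": "warn",   # show a policy tip, allow with justification
    "high":   "block",  # prevent the share/send outright
}

def enforcement_action(user_risk: str, content_is_sensitive: bool) -> str:
    """Pick an action based on user risk and whether the content is sensitive."""
    if not content_is_sensitive:
        return "allow"
    return POLICY_BY_RISK.get(user_risk, "block")  # default to strictest if unknown

print(enforcement_action("low", True))    # audit
print(enforcement_action("high", True))   # block
print(enforcement_action("high", False))  # allow
```

The design point is that one policy definition yields proportionate outcomes: low-risk users keep working with light-touch auditing, while the same action from a high-risk user is blocked.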
When it comes to governance, Purview delivers a unified catalog that gives data professionals an easy way to find the data they need, apply data quality rules, and ensure AI produces the right outputs.

Regulatory Compliance: Oberoi emphasizes that AI regulations are emerging worldwide, citing the EU AI Act, the AI Action Plan in Australia, and various other data protection frameworks. He says that the key questions organizations must address are:
- Do I have a data compliance system in place?
- If an incident happens: what do I do next, and how is it reported?
- How do I find evidence of what happened?
Microsoft users can leverage Compliance Manager in Purview to perform self-assessments of their security posture, receive recommendations, solutions, and implementation steps, and govern GenAI projects to ensure regulatory compliance. AI reports, meanwhile, provide details on model security configuration, safety settings, and more.
Disparate Solutions: Oberoi highlights the core challenge of managing a fragmented AI ecosystem, which is complex and costly to support when solutions are siloed. Fragmented systems lead to inconsistent outcomes, higher costs, and greater implementation complexity, making security in the AI Era especially challenging.
Microsoft has addressed this issue by expanding Purview to include data security, data governance, and compliance through a single pane of glass. These integrated solutions work seamlessly across the technology stack.
In one of the final slides of the presentation, Oberoi positioned Microsoft as the first security provider to deliver comprehensive solutions spanning data security and governance, security posture management, threat protection, and safety systems. This applies not only to Copilot and Microsoft agents but also to other enterprise, consumer, and custom-built AI.




