Europe is entering a new chapter in digital regulation focused on implementation and simplification. After several years of rapid rollout of complex and expansive digital laws, EU institutions are now confronting a practical question: whether the cumulative framework can be executed consistently across 27 Member States without imposing unnecessary friction on investment, innovation, and the responsible adoption of AI.
For multinational leaders, the stakes are high. Simplification can change compliance costs, deployment speed, and the feasibility of scaling AI-enabled products and operations across the Single Market.
The emerging direction in Brussels is pragmatic: reduce duplicative obligations, improve harmonization, and make compliance more operational through clearer supervisory structures, standards, and guidance. The outcome will influence not only how companies build governance programs, but also where they place AI-related investment and how quickly they can expand across Europe.
1) Why simplification is rising to the top
Companies have long argued that the EU’s digital framework is difficult to operationalize because obligations can overlap and enforcement can vary by Member State — particularly across GDPR and the EU AI Act, the Digital Services Act (DSA), and the Digital Markets Act (DMA). Many point to a second-order problem as well: limited actionable guidance for building one governance program that works across multiple regimes.
The current debate is concentrating on two horizontal laws with the broadest reach: the EU AI Act and the GDPR. Both rulebooks have been elevated in the competitiveness discussion led by Mario Draghi. Draghi has described the AI Act as a source of uncertainty—especially the forthcoming “high-risk” regime—and has called for “radical simplification” of GDPR, arguing that its current complexity raises the cost of data for EU firms.
The broader signal is that Europe is testing whether it can remain a global regulatory standard-setter while also focusing more on innovation and being more pragmatic in implementation.
2) The AI Act: the upcoming high-risk regime is an inflection point
The AI Act’s provisions already in force address prohibited “unacceptable risk” systems and obligations for general-purpose AI, with enforcement tied to the Commission’s AI Office and Codes of Practice.
The next phase—requirements for “high-risk” AI systems—is scheduled to start in August 2026 and is expected to have the largest economic impact because it broadly regulates both providers and deployers. High-risk categories of AI development and use include biometric identification and categorization, critical infrastructure, education and vocational training, employment, access to essential public services and benefits (including health care services and life and health insurance), creditworthiness assessment, and fraud detection.
For many companies, this is where AI moves from pilots to core business processes—hiring and workforce decisions, credit and fraud workflows, and sector deployments in healthcare, finance, and education. That is also why the high-risk regime is where questions of documentation, controls, and supervisory touchpoints become operational for many organizations, not theoretical.
What is under review: timing and compliance at scale. The Commission has been considering whether to pause or adjust the high-risk timetable for two practical reasons: uneven Member State readiness to designate and operationalize enforcement bodies, and delays in the European harmonized standards meant to provide a conformity-based compliance pathway.
Another concern is compliance at scale. Even with an EU regulation, national supervisory architectures can diverge, creating multiple regulatory interfaces and the risk of inconsistent interpretations. To contain that risk for certain high-impact systems, proposed changes include expanding centralized oversight for some systems—particularly those built on general-purpose models or embedded in very large online platforms and search engines.
3) GDPR: the need to simplify its impact on AI
Draghi has argued that GDPR’s complexity—amplified by Member State “gold-plating”—warrants “radical simplification” and has criticized the limited scope of reforms currently under consideration (such as recordkeeping relief and mid-cap derogations), while broader harmonization remains uncertain.
From an AI perspective, the imperative to simplify the data protection rulebook is heightened because GDPR’s practical role will grow if the AI Act’s high-risk obligations are delayed or clarified. In that scenario, data protection law will step in as the primary horizontal baseline influencing a large swath of AI development and deployment, especially where data restrictions affect high-impact use cases in health care, finance, and education.
4) The Commission’s approach so far: targeted implementation relief
As part of its simplification agenda, the Commission launched a consultation on a “digital omnibus” and, after broad stakeholder engagement, proposed targeted measures aimed at “timely, smooth, and proportionate implementation.”
In practical terms, the measures described combine (i) timing adjustments for high-risk AI until enforcement structures and standards are in place, (ii) more centralized oversight for certain categories, and (iii) implementation supports—guidance, templates, and expanded use of regulatory sandboxes and real-world testing. Additional elements discussed include governmental ownership of AI literacy programs, reducing registration burdens for narrowly scoped systems, and facilitating processing of special categories of data for bias detection and correction with safeguards.
5) Where additional simplification could deliver more value
If the objective is to reduce burden while preserving protections, three opportunities stand out.
Sector-led supervision where appropriate. EU-wide sector regulators already oversee highly regulated industries and can deliver more uniform supervision than fragmented national approaches. A clearer sector-led model—supported by Member State “arms” that avoid additional “gold-plated” national requirements—could improve consistency in key sectors such as pharmaceuticals, medical devices, and financial services.
One risk-assessment package, not two. The AI Act and GDPR both require assessments, and the requirements can overlap substantially for high-risk systems. Allowing organizations to rely on a single primary template to satisfy both regimes would reduce duplicative documentation and help align expectations across authorities.
Clearer rules for data use, including sensitive data. Uncertainty about lawful bases for AI training under GDPR can create extensive documentation burdens and slow deployment. A stronger EU-wide approach to standards and guidance could reduce uncertainty, streamline demonstrations of appropriate safeguards, and support responsible innovation. Similarly, constraints on using health information and other sensitive data can inhibit beneficial applications (including certain healthcare uses) and impede bias detection and outcome testing. The Commission’s proposal to facilitate sensitive-data processing for bias detection and correction with safeguards is a meaningful step; clearer guidance for additional high-value, well-controlled use cases could further improve outcomes.
Key Takeaways:
Europe’s simplification agenda does not signal a retreat from regulation. Rather it is best understood as a test of regulatory maturity. The last mandate demonstrated that the EU can legislate comprehensively and set global norms. The current mandate must demonstrate something equally difficult: that the EU can operationalize these global norms coherently, at scale, across 27 Member States—and ensure that compliance is not turned into a barrier to investment and adoption.
The critical success factor is not whether the EU adjusts one deadline or another. It is whether Europe aligns three elements simultaneously: (1) clear, workable compliance pathways (including standards and templates), (2) a supervisory model that reduces fragmentation rather than multiplying interfaces, and (3) harmonized treatment of core GDPR concepts that have become global building blocks for data-driven innovation. The Commission’s targeted proposals move in this direction, but the highest-value reforms will be those that directly reduce duplication and uncertainty for organizations that build and deploy AI responsibly.
For multinational companies, the winners will be those that build integrated governance now – so they can move quickly as the rules, standards, and enforcement structures settle – and position the organization to engage credibly in shaping workable reforms.
by Julie Brill. Julie served as Microsoft’s Chief Privacy Officer and Corporate Vice President of Privacy, Safety and Regulatory Affairs, and as Microsoft’s technology ambassador engaging on geopolitical, regulatory, and market-access issues for enterprises and government agencies. Prior to her time at Microsoft, Julie was a Commissioner of the US Federal Trade Commission, where she drove the agency’s enforcement and policy agenda. As a global regulatory governance leader, Julie now advises boards and C-Suites on governance, risk and compliance solutions at enterprise scale.