In 2025, Gen AI adoption is transforming privacy, governance, and compliance frameworks across global industries. Here’s what privacy counsels are saying about Generative AI and its regulatory impact.
2025 has already thrown plenty of curveballs, and AI governance is no exception, diverging sharply from what many predicted just a year ago. Though broad AI adoption remains in its early phases, sectors like education and mental health are seeing noticeable momentum, especially in individual-facing applications and services. But things are shifting quickly.
“Prior to the AI Act coming into force, AI governance was fractured,” said Caitlin Fennessy, Vice President and Chief Knowledge Officer at the IAPP, formerly known as the International Association of Privacy Professionals. “Academics, civil society, and professional associations were involved, but they were often late to the conversation because there were no rules yet.” Today, the governance community has matured. The technology has advanced. Public engagement has surged. Since ChatGPT, AI is no longer just the domain of specialists. Friends and families are asking whether “DeepSeek is the real deal.”
At the IAPP’s AI Governance Global Europe 2025 conference (AIGG25) in Dublin, regulators, legal counsels, product leaders, and privacy professionals compared notes. Here’s what the front lines of AI governance are revealing in 2025.
The AI Regulatory Landscape: Fragmented, Real, and Already in Force
AI no longer operates in a regulatory vacuum. In Europe, the EU AI Act came into force in August 2024 and is now rolling out in phases.
As of February 2025, prohibitions on unacceptable-risk AI are in effect, alongside requirements for AI literacy. By August 2025, obligations will apply to general-purpose AI providers, and national competent authorities must be appointed. Between 2026 and 2027, high-risk AI systems in sectors like healthcare, law enforcement, and infrastructure will be subject to extensive conformity assessments, documentation, and post-market monitoring. By 2030, some requirements will extend to large-scale government systems.
Support comes from mechanisms like the AI Pact, a voluntary initiative inviting providers to implement provisions ahead of schedule, as well as ongoing guidance from the European Commission and the newly established AI Office.
At the same time, EU officials have considered softening their approach. Asked whether the Commission was open to amending the AI Act, the European Commission's Kilian Gross said the first priority would be simplifying implementation, making compliance easier for companies while keeping the rules effective.
In contrast, the United States is exploring a deregulatory path. A proposed 10-year moratorium on state-level enforcement of AI-specific laws is under Congressional consideration. It would suspend enforcement of state laws on AI design, performance, documentation, and data handling unless those laws apply to all technologies, not just AI.
“Yes, there is a complex regulatory landscape for AI systems,” said Ashley Casovan, Managing Director of the IAPP’s AI Governance Center. “However, it’s not insurmountable. For those who have started to navigate this web of rules, there are clear pathways for complying with overlapping requirements.”
Gen AI Adoption: How AI Governance Is Driving Organisational Change
The message from the conference was consistent. AI governance cannot be owned by a single function. It requires coordination between legal, privacy, compliance, product, design, and engineering. Casovan described this shift as being highly dependent on use cases. The specific roles and responsibilities within governance teams vary by sector and application. But as the regulatory landscape becomes more complex and AI adoption expands, the need for people who can navigate and translate these obligations is growing.
In highly regulated industries such as healthcare, finance, and education, governance efforts are advancing most rapidly. At a dedicated AI in Healthcare workshop, multiple speakers stressed that AI compliance must align with existing obligations in patient care, medical recordkeeping, and safety. One panelist described it as a “complex web of laws, regulations, rules, standards, and industry practices.”
Other sectors are adopting risk-based governance aligned with the AI Act’s classification system, especially in use cases involving biometrics or automated decision-making in employment and HR. Many organisations are using the EU’s framework globally as a benchmark rather than creating their own from scratch. AI governance is being embedded into existing privacy and compliance programs, leveraging what’s already in place.
In some jurisdictions, state-level legislation and sector-specific rules are shaping governance even further. In cities like New York, organisations are adopting more targeted mitigation strategies, aligning AI obligations with longstanding standards around data use and safety. All of this signals a shift. AI governance is becoming more mature, risk-aware, and integrated into broader organisational operations.
Key AI Governance Dilemmas: Organisational Upheaval and Regulatory Intersections
Despite visible progress, several challenges remain. Innovation continues to outpace regulation. Product cycles are faster than rulemaking. There is still no agreement on when or how to intervene.
There is also no consensus on a best-practice model. “We haven’t seen [the] best practice structure for AI governance yet,” said Ronan Davy of Anthropic. “Company-specific contexts—risk management, size, style, use cases—all need to be considered.” The diversity of organisational needs makes a universal framework difficult to establish.
Fragmentation across jurisdictions continues to challenge multinationals. But many organisations are adapting. They are building jurisdiction-specific playbooks and aligning AI oversight with established sectoral requirements. The field is still young, drawing from disciplines including privacy, compliance, safety engineering, IT risk, and ethics. Building internal capability, and external networks, is now central to AI governance work.
Casovan emphasized the organisational change underway. The EU AI Act intersects with more than 60 other legislative instruments, especially in areas like financial regulation and product safety. Companies are responding by creating new governance roles such as Chief AI Officer, Head of Digital Governance, and hybrid roles like Chief Privacy and AI Officer. These titles reflect a demand for leadership that can span legal, technical, and operational responsibilities.
In the US, privacy continues to fill the gap in the absence of comprehensive AI laws. Fennessy pointed to an earlier pattern. The US privacy profession outpaced Europe not because of regulation, but because of market pressure and consumer trust. She sees a similar dynamic playing out in AI. “Organisations can’t afford to conduct ten different risk assessments,” she said. “We’re seeing a shift toward integrating privacy, security, and ethics into a single framework. This helps surface the most critical issues and elevates them to the board.”
Trustible CEO Gerald Kierce challenged the idea that governance slows down innovation. “We’ve seen this firsthand,” he said. “One of our customers saw a 10x increase in use cases in just one year after adopting a robust governance framework.” Before implementing governance, they lacked clear processes and tools. Once structure was in place, they were able to responsibly scale. “There’s a false narrative that governance slows things down,” said Kierce. “That’s only true when it’s approached as a checkbox exercise. In reality, governance enables progress by creating clarity, trust, and accountability.”
Toward AI Adoption Maturity: What Comes Next
AI governance is becoming cross-functional by necessity. Legal interpretations must be converted into operational controls that governance and compliance teams can manage. Companies are integrating AI risk into familiar tools like DPIAs and cybersecurity protocols. Casovan reinforced the foundation: “Start with your inventory. Know what AI systems you have, how they’re being used, and who is responsible.”
Rather than start from zero, most organisations are building on existing governance structures: privacy programs, ethics boards, safety reviews. “Don’t reinvent the wheel,” said Casovan. “Follow governance practices you already have in place.” The goal is to adapt known systems to meet new demands, not duplicate effort.
Fennessy underscored the need for a unified model. Fragmented approaches don’t scale. “That integrated governance approach is what enables organisations to manage AI risks holistically,” she said. Privacy, security, and ethics are converging, not diverging. Organisations are consolidating impact assessments, surfacing the most critical risks, and aligning AI oversight with strategic goals. The work is complex, but the direction is clear – and necessary.