Editorial note (April 2026): This article was originally published in May 2025, following the AI Systems in Practice TechSpot panel. The EU AI Act entered into force in August 2024 and becomes fully applicable from August 2026, with phased requirements already in effect. The governance gaps described below have only grown since this was written — and the regulatory clock is now ticking in earnest.
On May 15th, I had the opportunity to speak at the AI Security Panel during the AI Systems in Practice TechSpot, organized by On the Spot. This was my second public engagement on AI risk and governance, and I’m excited to share some key insights from the discussion.
The moderator opened with a powerful and timely question:
“How do we know our data isn’t being used to train AI models? And how can we mitigate risks arising from AI technology?”
My answer?
“You can’t be 100% sure unless you control it. But it’s not just about security controls. You need governance, leadership, and context-aware risk management.”
The reality is more complex and requires a shift in how we approach AI governance and risk management across organizations.
What I learned from the audience
One highlight of the session was the chance to engage the audience directly. We asked how many of them use AI in their daily work — unsurprisingly, most hands went up. But when we asked how many have formal governance or policies in place for AI use within their organizations, very few did.
This is a concern. AI is being widely adopted without corresponding oversight, controls, or accountability. It’s not being governed or even monitored in many cases. That’s a risky place to be. So yes, we should pause — not to resist the technology, but to evaluate and govern it properly.
Individuals vs. organizations: a different level of risk
Individual users often skip the terms and conditions and overlook privacy and data-flow issues; organizations don’t have that luxury. Data is the new gold, and what we share, how we share it, and where it goes matter more than ever.
The good news? Awareness is growing. In time, I believe individuals will treat their data with the same rigor that top-tier enterprises do today. But in the meantime, it’s on organizations and institutions to lead, not by avoiding AI, but by managing it responsibly.
Where to start: AI governance frameworks that actually help
Fortunately, we have emerging tools to help. Frameworks like ISO/IEC 42001 and NIST’s AI Risk Management Framework (AI RMF) are excellent starting points for organizations building their AI governance and risk management approach.
Are most large tech players certified under these? No — at least not yet. Many rely on internal governance models inspired by these frameworks, but their practices aren’t standardized.
Instead, many align to SOC 2 controls, particularly in the U.S. legal context. But here’s the issue: we often don’t know who’s auditing these organizations or what’s in those audit scopes. That lack of transparency is itself a risk.
This is part of what we call the “black box problem.” Originally used to describe opaque AI models, where we don’t know the logic or data behind decisions, today the term also applies to how these systems are built, tested, and audited. And that’s exactly why regulation matters.
Regulation is coming — and that’s not a bad thing
The EU AI Act entered into force in August 2024 and becomes fully applicable from August 2026. Like GDPR before it, the AI Act sets a baseline for how data and AI systems should be governed. Alongside NIS2 and DORA, it represents a regulatory effort to protect both EU citizens and EU organizations from digital and algorithmic risks.
Not all EU regulations are perfect, but these ones — in my opinion — bring necessary guardrails to a very fast highway. Organizations that build governance structures now will have a significant advantage when enforcement begins.
What should organizations do? A practical AI governance framework
Securing AI isn’t fundamentally different from securing other IT systems, but the controls need to be purpose-built for AI’s unique risks. Here’s a practical path forward:
- Implement an AI Management System: align with ISO 42001 or NIST AI RMF; define AI policies, leadership responsibility, lifecycle processes, audits, and continuous improvement.
- Conduct Threat Modeling: evaluate risks based on how AI is used in your specific environment, from privacy exposure to adversarial manipulation (a minimal risk-register sketch follows this list).
- Apply Security Controls: based on your threat model, implement technical, organizational, and process controls, including secure development, access controls, and usage monitoring.
- Continuously Monitor and Improve: AI systems evolve, and your governance must evolve with them.
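To make the threat-modeling step concrete, here is a minimal sketch of an AI risk register in Python. The risk categories, field names, and the simple likelihood-times-impact scoring are my illustrative assumptions, not anything prescribed by ISO 42001 or the NIST AI RMF; adapt them to your own methodology.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    """Illustrative AI risk categories; substitute your own taxonomy."""
    PRIVACY = "privacy exposure"
    ADVERSARIAL = "adversarial manipulation"
    SUPPLY_CHAIN = "model and vendor supply chain"
    MISUSE = "employee misuse"


@dataclass
class AIRisk:
    """One entry in a lightweight AI risk register."""
    system: str        # the AI system or use case being assessed
    threat: str        # what could go wrong
    category: RiskCategory
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    controls: list[str] = field(default_factory=list)  # mitigations in place

    @property
    def score(self) -> int:
        """Simple likelihood x impact scoring; swap in your own formula."""
        return self.likelihood * self.impact


def needs_escalation(register: list[AIRisk], threshold: int = 12) -> list[AIRisk]:
    """Return the risks whose score meets or exceeds the treatment threshold."""
    return [r for r in register if r.score >= threshold]


register = [
    AIRisk(
        system="Public GenAI chatbot used by support staff",
        threat="Customer PII pasted into prompts leaves the environment",
        category=RiskCategory.PRIVACY,
        likelihood=4,
        impact=4,
        controls=["usage policy", "DLP on egress", "prompt logging"],
    ),
]

for risk in needs_escalation(register):
    print(f"[ESCALATE] {risk.system}: {risk.threat} (score {risk.score})")
```

Even a register this small forces the right questions: which system, which threat, which controls, and at what score someone has to act.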
Can we trust AI?
Yes, but we need to understand what it is and what it isn’t. AI doesn’t “hallucinate” the way a human does; it generates plausible but inaccurate outputs when its training data or prompts are flawed. It’s not about trusting AI blindly; it’s about understanding its limits and governing accordingly.
Do you want the safest option? Host your own local AI instance, so data never leaves your environment.
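As an illustration, here is a minimal Python sketch that queries a locally hosted model through Ollama’s REST API. It assumes Ollama is running on the same machine with a model already pulled; the model name below is just an example. The point is architectural: the prompt and the response never cross your network boundary.

```python
# Minimal sketch: querying a locally hosted model via Ollama's REST API.
# Assumes Ollama (https://ollama.com) is running locally and a model has
# been pulled beforehand, e.g. with `ollama pull llama3`.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",                  # example model name
        "prompt": "Summarize our AI usage policy in three bullet points.",
        "stream": False,                    # return one complete response
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])          # the generated text
```

Local hosting trades convenience and model quality for control; for many regulated workloads, that trade is worth making.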
If that’s not feasible, then treat your AI provider like any other third-party vendor (a simple due-diligence sketch follows this list):
- Extend your third-party risk management program to cover GenAI.
- Ask about training data practices.
- Conduct or require audits.
- Align their practices to your GRC framework.
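To show what that can look like in practice, here is an illustrative sketch of a GenAI vendor due-diligence check in Python. The questions and the hypothetical vendor are assumptions of mine, not a complete questionnaire; align the list with your own third-party risk management program and GRC framework.

```python
# Illustrative GenAI vendor due-diligence checklist; extend with your own
# questions from your third-party risk management program.
QUESTIONS = [
    "Contract excludes our data from model training",
    "Training-data practices are documented",
    "Independent audit report is available (e.g. SOC 2 Type II)",
    "Audit scope explicitly covers the AI service",
    "Data residency and retention terms meet our policy",
]


def open_items(answers: dict[str, bool]) -> list[str]:
    """Return the questions the vendor has not yet satisfied."""
    return [q for q, ok in answers.items() if not ok]


# Example assessment for a hypothetical vendor.
answers = {q: False for q in QUESTIONS}
answers["Training-data practices are documented"] = True

for item in open_items(answers):
    print(f"[GAP] {item}")
```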
Final thought
We’re not here to stop the AI revolution — we’re here to make it safe, trusted, and aligned with human values and business goals. That starts with governance. It starts with risk management. And it starts now.
If your organization is at the beginning of its AI governance journey — or realizing it has a governance gap — I’m happy to discuss your specific situation. I work with organizations implementing ISO 42001, building AI risk frameworks, and preparing for EU AI Act compliance.
📧 srebnicki@protonmail.com
🌐 psrebnicki.pl
Frequently Asked Questions
What is AI governance and why does it matter?
AI governance refers to the organizational structures, policies, and controls that ensure AI systems are used responsibly, ethically, and safely. It matters because most organizations are adopting AI faster than they can manage its risks — exposing themselves to data breaches, regulatory penalties, and loss of stakeholder trust.
What frameworks exist for AI governance risk management?
The two leading frameworks are ISO/IEC 42001 — the world’s first international AI management system standard — and the NIST AI Risk Management Framework (AI RMF). Both provide structured approaches to identifying, assessing, and managing AI-specific risks across the full AI lifecycle. The EU AI Act adds a compliance layer on top for organizations operating in EU markets.
When does the EU AI Act become fully applicable?
The EU AI Act entered into force in August 2024 and becomes fully applicable in August 2026, when the requirements for most high-risk AI systems take effect (obligations for high-risk systems embedded in regulated products extend to August 2027). Some provisions, including the prohibitions on unacceptable-risk AI and the rules for general-purpose AI models, already apply. Organizations that have not yet started governance preparation are running out of runway.