When I sat down with Emre Kazim, co-founder of Holistic AI, I expected a technical deep-dive into AI governance frameworks. What I got was something much more valuable: a reality check on how companies are actually implementing AI, what breaks in production, and why we need to stop treating AI governance like it's just another checkbox on the compliance list.
Emre and his co-founder, Adriano Koshiyama, came out of the same University College London computer science department that gave birth to DeepMind. They were in the trenches of AI governance before it was a buzzword. Here are five takeaways from our conversation:
1. AI Governance Isn’t Data Protection or Cybersecurity
Here's something that caught me off guard: different regions are approaching AI governance from completely different angles. In Europe, there's a strong path dependency from data protection to AI governance. In the US, we're looking at it more through a cybersecurity lens.
But Emre's point hit hard: AI governance is its own category. It has analogies with data protection and cybersecurity, sure, but it's a distinct area of expertise that requires its own frameworks and thinking. Companies need to stop trying to force-fit AI governance into existing compliance structures and recognize it as a fundamentally new challenge.
2. Shadow AI Is Everywhere (And It’s Not as Sexy as It Sounds)
When I asked Emre about shadow AI, I expected some dramatic revelation. Instead, he gave me a dose of reality: it's not some covert operation. It's just someone in your organization pulling out their credit card to use an AI tool, or grabbing an open-source model and building something without IT knowing about it.
The scary part? They might be feeding proprietary or personal company data into unapproved systems. Think: your company standardized on Anthropic's Claude, but someone started using ChatGPT and dumping company data into it. The first step to AI governance is just basic visibility (not fancy audits). You need to know where AI is being used in your business, point blank.
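To make that concrete, here's a minimal sketch of what a first visibility pass might look like: scanning exported proxy or DNS logs for traffic to well-known AI endpoints. The domain watchlist, the CSV log format, and the `egress_log.csv` filename are all illustrative assumptions, not a real inventory tool.

```python
# A minimal shadow-AI discovery pass, assuming you can export outbound
# proxy/DNS logs as CSV. Domain list and log format are illustrative.
import csv
from collections import Counter

# Hypothetical watchlist of well-known AI endpoints; extend for your org.
AI_DOMAINS = {
    "api.openai.com": "OpenAI",
    "chatgpt.com": "ChatGPT",
    "api.anthropic.com": "Anthropic",
    "generativelanguage.googleapis.com": "Google Gemini",
    "huggingface.co": "Hugging Face",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to known AI services, grouped by vendor and department."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: domain, department
            vendor = AI_DOMAINS.get(row["domain"].lower())
            if vendor:
                hits[(vendor, row["department"])] += 1
    return hits

for (vendor, dept), count in scan_proxy_log("egress_log.csv").most_common():
    print(f"{dept}: {count} requests to {vendor}")
```

Even something this crude turns "we don't know" into a department-by-department map of where shadow AI actually lives.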
3. We’re Still in the Playing Around Phase
Emre made a distinction that helped me understand where we actually are in the AI adoption curve. We've had massive capital investment go into knowledge rather than production. Translation: companies have been experimenting, learning, and playing, but we haven't yet seen widespread, meaningful implementation of AI systems that deliver real ROI.
We're still asking "where are the killer use cases?" Coding has been phenomenal, but otherwise, we're in a time lag between the hype and the value. The next shift won't be a new model or capability. It'll be enterprises actually figuring out how to get production value from the systems they've been experimenting with.
4. Hallucinations Are a Feature of AI That We Need To Understand
Emre told a story that perfectly captures the hallucination problem. He finished reading a French novel, "Journey to the End of the Night" by Céline, and asked an LLM what happened at the end. It made something up. Completely fabricated the ending. But it was so compelling that if he hadn't just read the book, he would have believed it.
The technical team at Holistic AI is doing incredible work on hallucinations (they just won an OpenAI hackathon), but Emre's non-technical advice stuck with me: understand what these tools actually do. LLMs are inferential models that predict the next word. They're not truth creators. They're not factually grounded. They're next-token generators.
Stop anthropomorphizing AI. Stop treating it like an oracle. Start treating it like a prediction engine that uses data to regurgitate statistically dominant views. That mental shift alone will save you from a lot of bad outputs.
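If the "next-token generator" framing feels abstract, here's a toy sketch that makes it concrete: a bigram model that always emits the statistically dominant next word. Real LLMs are vastly more sophisticated, neural networks over subword tokens rather than word counts, but the generation loop has the same shape: predict, append, repeat.

```python
# Toy illustration of next-token generation: a bigram model built from a
# tiny corpus. Real LLMs use neural networks over subword tokens, but the
# loop is the same shape: predict the next token, append it, repeat.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Greedily emit the statistically dominant next word at each step."""
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # no truth check anywhere
    return " ".join(out)

print(generate("the"))  # fluent-sounding, never fact-checked
```

Notice that nothing in that loop checks whether the output is true; it only knows what tends to come next. That's exactly why a confident, fabricated ending to a Céline novel is normal behavior rather than a malfunction.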
5. The Importance of Context (Not Buzzwords)
When I asked Emre about fixing AI bias — whether it's the algorithm, the data, or how humans use the system — he didn’t offer a simple answer. Because there isn't one. It's radically context-dependent.
We've developed these standard idioms in the AI space: "garbage in, garbage out," "we need human oversight," and so on. But Emre was clear: we're way past the talking point phase. Every AI system needs to be assessed in context. Where does the risk exist? Is it in privacy? Bias? Both? The intervention you make on a credit scoring algorithm will look completely different from the intervention you make on an HR screening tool.
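To ground that, here's a hedged sketch of one narrow check that might apply to an HR screening tool: the "four-fifths rule," a common first-pass screen for disparate impact. The data is made up for illustration, and this exact check would be the wrong tool for, say, a credit scoring model, where you'd care about things like calibration across groups instead.

```python
# One narrow, context-specific check: the "four-fifths rule" often used as
# a first-pass disparate-impact screen for hiring tools. The data here is
# fabricated for illustration; a credit-scoring model would need different
# metrics entirely (e.g., calibration or error-rate balance across groups).

def selection_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

# Hypothetical pass/fail screening outcomes per applicant group.
group_a = [True, True, False, True, True, False, True, True]    # 75% selected
group_b = [True, False, False, True, False, False, True, False] # 37.5% selected

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Potential adverse impact; investigate before deploying.")
```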
There's no one-size-fits-all playbook. That's what makes AI governance hard and why companies need to get serious about it.
Where We Go From Here
What stuck with me most from this conversation was Emre's pragmatism. He's not selling AI governance as some abstract ideal. He's living in the messy reality of production systems, working with enterprises that are trying to figure this out in real time.
His advice is simple: get visibility on where AI is being used in your organization, stop trying to bolt AI governance onto old frameworks, and understand that we're still in the early innings of figuring out how to make AI actually useful at scale.
The agentic future everyone's talking about? It's going to take time to get agents working meaningfully and autonomously. In the meantime, the companies that will win are the ones doing the unglamorous work of proper governance, risk assessment, and implementation.
Listen to the full episode of Actually Intelligent to hear more from Emre Kazim about AI audits, the Knight Capital glitch that wiped out $440 million, and why he'll never ask an LLM about Kant.