Lexi Reese has 30 years of operating experience at Google, Gusto, and American Express. She ran for Senate because she didn't see Congress members with the technological or commercial understanding to regulate AI effectively. And now she's co-founder and CEO of Lanai, an enterprise AI observability and governance platform.
When we talked, Lexi opened my eyes to something I hadn't fully grasped: AI isn't just technology anymore. It's your second workforce. And most management teams have absolutely no visibility into what that workforce is doing, how it's performing, or what risks it's introducing to the business.
Here are three things from our conversation that changed how I think about AI at work.
This framing is important: you would never hire people without being able to see their work and workflows, or to evaluate whether they're effective or safe. But employees hire and fire AI assistants every day to help with their work. Businesses are increasingly handing entire workflows to agents without humans in the loop. And management has zero visibility into any of it.
Lexi's point is that AI is less technology and more capability. In the context of work, it's a colleague. When a salesperson uploads a customer list to an AI capability in their CRM (a tool IT already approved), they're not thinking about the fact that they just introduced a new workflow. They're trying to meet their cross-sell goals for the quarter.
But here's where it gets real: at one insurance customer, that customer list included zip codes. In the insurance industry, zip code is a proxy for race. Selling only to certain zip codes is redlining, which violates industry regulations and carries huge fines. The salesperson wasn't being malicious. They were being productive. But without visibility into AI workflows, that risk is invisible until it's too late.
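To make that concrete, here's a minimal Python sketch of the kind of check an observability layer could run on outbound prompts before they reach an AI tool. The regex, the targeting terms, and the function name are all hypothetical illustrations, not Lanai's actual detection logic:

```python
import re

# US zip codes (five digits, optional +4). A sketch: this will also match
# other five-digit numbers, so a real system would need more context.
ZIP_PATTERN = re.compile(r"\b\d{5}(?:-\d{4})?\b")

# Hypothetical signal: zip codes plus customer-targeting language in an
# insurance workflow is a redlining risk, since zip code proxies for race.
TARGETING_TERMS = {"cross-sell", "target", "prioritize", "segment"}

def flag_redlining_risk(prompt_text: str) -> dict:
    """Flag text a user is sending to an AI tool if it pairs zip codes
    with customer-targeting language."""
    zips = ZIP_PATTERN.findall(prompt_text)
    has_targeting = any(t in prompt_text.lower() for t in TARGETING_TERMS)
    return {
        "zip_codes_found": len(zips),
        "risk": "redlining" if zips and has_targeting else "none",
    }

print(flag_redlining_risk(
    "Prioritize cross-sell outreach to customers in 60601 and 60602"
))
# {'zip_codes_found': 2, 'risk': 'redlining'}
```

The point isn't the regex; it's that the check runs on the workflow itself, at the moment the data is about to leave, rather than on a list of approved tools.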
The scale of this problem is shocking. One Lanai customer thought they had approved five AI tools. When Lanai showed them the actual usage, without any data leaving their perimeter, it was 183 tools, and 83% of AI use was happening on unsanctioned applications. Another customer with 450 people in their pilot saw 20 data leaks in 30 days. And the team leaking the most data? They're also the highest adopters, the most frequent AI users driving innovation.
We all hear about "shadow AI" constantly, and yes, it sounds sinister. Lexi set me straight: it's a press term more than an operator term.
Lexi's philosophy: think of AI as colleagues. When you manage it that way, you realize that innovation and productivity live in the same unit as risk. If your CEO says "we're going to be AI-first, I want employees trying AI every day, I want them to push the envelope," you can't simultaneously lock everything down. You need visibility and smart guardrails that help people push the envelope safely.
Lanai doesn't just discover shadow AI (or as Lexi calls it, "unmanaged workflows"). They detect AI interactions in real time without data ever leaving your company's perimeter. They run models that capture prompts, classify risk in your company's context, and label use cases. So instead of just knowing "someone used an unapproved tool," you know "someone did automated script generation; maybe we should templatize and standardize that in our coding standards so humans can focus on quality control instead."
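Here's a rough sketch of what that capture-classify-label flow could look like in code. Everything here, from the event shape to the keyword rules and labels, is a hypothetical simplification for illustration; Lexi's description implies models tuned to each company's context, not static keyword lists:

```python
from dataclasses import dataclass

@dataclass
class AIInteraction:
    user: str
    tool: str    # e.g. "crm-assistant", "chatgpt"
    prompt: str

# Hypothetical company-specific rules. In the real system these would be
# models running inside the perimeter, so prompts never leave.
RISK_RULES = {"customer list": "data-leak", "source code": "ip-exposure"}
USE_CASE_RULES = {
    "write a script": "automated script generation",
    "summarize": "document summarization",
}

def classify(event: AIInteraction) -> dict:
    """Label an AI interaction with a risk level and a use case."""
    text = event.prompt.lower()
    risk = next((r for k, r in RISK_RULES.items() if k in text), "low")
    use_case = next((u for k, u in USE_CASE_RULES.items() if k in text),
                    "unlabeled")
    return {"user": event.user, "tool": event.tool,
            "risk": risk, "use_case": use_case}

print(classify(AIInteraction(
    "sales@acme", "crm-assistant",
    "Write a script to upload our customer list",
)))
# {'user': 'sales@acme', 'tool': 'crm-assistant',
#  'risk': 'data-leak', 'use_case': 'automated script generation'}
```

The output is the difference Lexi describes: not just "unapproved tool used," but a labeled workflow you can decide to standardize, coach, or block.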
It's not about blaming anyone or slapping wrists. It's about discovering what's actually working and then scaling it safely.
Lexi laid out five questions that every company should be able to answer about its AI use.
When she asks these questions of literally any company, nobody can answer a single one. Lanai is the first software solution that can actually answer them, and companies discovering this are often shocked by the numbers. One customer running a pilot with just 450 people saw 30 to 100 times more AI usage than their IT or digital transformation teams knew about. That's not because employees are being sneaky. AI is embedded everywhere, and traditional security approaches can't detect it.
Here's what stuck with me most: Lexi said there's no such thing as incognito mode, and there's no such thing as "we're probably keeping your data safe" unless the tool has gone through a procurement function or the company is using proper observability tools.
The internet is forever. If you're using your personal ChatGPT account because your company blocked it, and you're putting company data in there to get that marketing piece done faster or finish that financial analysis, you should assume it's publicly available. Always.
What gives me hope after this conversation is that Lexi's not an alarmist. She's not saying "lock down everything and stop using AI." She's saying the opposite: if you try to protect the castle at all costs and only allow specific tools, you will block the innovation that comes from your employees pushing the envelope. And you will lose.
You need a solution that lets people experiment while providing real-time context in the prompt itself: this is safe, this is not safe, right idea but wrong execution, let's help you do it better.
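Building on the hypothetical classifier above, that inline guidance could be as simple as mapping each risk label to a coaching message instead of a hard block. Again, a sketch of the idea, not Lanai's product:

```python
# Hypothetical coaching messages keyed to the risk labels from the
# classifier sketch: guide the employee in the moment, don't just block.
FEEDBACK = {
    "low": "Safe to proceed.",
    "data-leak": "Right idea, wrong execution: use the approved, "
                 "anonymized customer export instead of the raw list.",
    "ip-exposure": "Not safe here: move this to the internal code assistant.",
}

def inline_feedback(risk: str) -> str:
    """Return real-time guidance to show in the prompt window."""
    return FEEDBACK.get(risk, "Needs review before sending.")

print(inline_feedback("data-leak"))
```

The design choice matters: feedback at the moment of prompting preserves the experimentation Lexi wants, where a blanket block would just push people to personal accounts.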
That's the future Lexi's building. A world where AI at work is universally accessible, secure, and scalable. Where every employee can be an "AI natural," knowing when and how to prompt an assistant or use an agent. Where businesses can see their second workforce clearly and manage it as intentionally as they manage their human teams.
Right now, most companies are flying blind. And the gap between the innovation AI enables and the risks it introduces is only getting wider.
Listen to the full episode of Actually Intelligent to hear more from Lexi Reese about her Senate run, the important difference between observing work versus surveilling workers, and how Descript is blowing her mind as a former documentary filmmaker.