Scott Zimmer started his career at Disney — a place that, as he put it, really gets storytelling. From there, he went on to lead brand and experience teams at places like Capital One and Verizon, where he kept finding himself doing the same thing: investing in people's growth by connecting them with other humans.
That pattern eventually led him to co-found Answers From Me, a platform that uses AI to scale the perspectives of experts and mentors we most want to learn from. He also coaches executives at Stanford's d.school on design thinking.
When Jill and I sat down with Scott, we anticipated a conversation that would make us think differently about AI, trust, and what we actually mean when we say ‘human-centered work.’ Scott absolutely delivered. Here are three things we learned.
What You Really Want Isn't a Better Search Engine — It's Bobby Flay
Scott has a go-to example he uses, and it stuck with us. Imagine you're at the grill, hoping you don't ruin dinner, and you wish you could call Bobby Flay right now. Not ask Google or ChatGPT "how to cook a steak" — you want Bobby Flay himself: his specific knowledge, his instincts, his years of experience baked into a single answer.
That's what Answers From Me is building. Their platform uses a technology called RAG (Retrieval-Augmented Generation) to build knowledge bases from an expert's articles, podcasts, recordings, and talks, so the expert's unique perspective can answer questions even when the expert can't.
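For the curious, the core RAG loop is simple to sketch: retrieve the most relevant passages from the expert's material, then hand them to a language model as context. The snippet below is a minimal illustration only — it uses word overlap as a stand-in for the embedding-based retrieval a real system like Answers From Me would use, and the corpus and function names are invented for the example.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# A real system would embed documents with a vector model; here we
# score relevance by simple word overlap to stay self-contained.

EXPERT_CORPUS = [  # stand-in for an expert's articles, talks, and podcasts
    "Sear the steak hard on one side, then finish over indirect heat.",
    "Rest meat for five minutes so the juices redistribute.",
    "A mentoring outing can be as simple as a walk through a showroom.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    """Augment the question with retrieved passages before generation."""
    context = retrieve(question, EXPERT_CORPUS)
    # In production this augmented prompt would go to an LLM;
    # here we just return it to show the shape of the technique.
    return f"Context: {' '.join(context)}\nQuestion: {question}"

print(answer("How should I rest a steak?"))
```

The point of the architecture is that the model's raw material is the expert's own words, which is why the answers can carry lived experience a generic chatbot lacks.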
The difference matters more than you'd think. Scott described working with a local chapter of Big Brothers Big Sisters, where new mentors can ask the Big Brother of the Year for advice through the platform. When someone asked for free things to do with their little, ChatGPT offered 25 well-organized but completely generic ideas. Answers From Me’s answer? Three personal picks — one including a trip to IKEA, because walking through the showroom gets kids talking about dreams, goals, and what they want their life to look like. "ChatGPT didn't mention IKEA," Scott said. It couldn't. That answer came from lived experience.
(I’m serious when I say Scott’s example stuck with us: I'm a court-appointed special advocate for foster youth, and Jill is a caretaker for her father experiencing Alzheimer’s. We are both stealing the IKEA idea.)
"I Don't Know" Is a Feature, Not a Bug
Here's a quick experiment: Ask an LLM (say ChatGPT or Claude) something you know nothing about. Odds are you'll be impressed. Now ask it something you're genuinely an expert in. Suddenly, you'll notice what's generic, what's off, what it quietly invented with complete confidence.
That gap, between how certain AI sounds and how accurate it actually is, is what Scott calls mistaking fluency for understanding. It becomes a serious problem when leaders start outsourcing the decisions that require real discernment.
Answers From Me is designed to do something different. When the AI doesn't have enough context to answer a question, it says so. It offers to pull a generic LLM answer with that caveat clearly stated, or it alerts the expert directly so they can respond and optionally save that answer for next time. Unlike a general-purpose LLM, their system is built to be honest about what it doesn't know because, as Scott reminded us, trust only holds if it's earned. And once you get burned by a confident wrong answer, you stop relying on that source for anything that actually matters.
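The fallback behavior Scott describes can be sketched as a simple decision routine. Everything here — the threshold, the function names, the routing choices — is invented for illustration, not the product's actual logic; it just shows the shape of "admit uncertainty instead of bluffing."

```python
# Sketch of an "I don't know" policy: give an expert-grounded answer only
# when retrieval found enough relevant context; otherwise be explicit
# about the gap rather than confidently improvising.

CONFIDENCE_THRESHOLD = 0.5  # illustrative cutoff, not a real product value

def respond(question: str, retrieval_score: float, expert_online: bool) -> str:
    if retrieval_score >= CONFIDENCE_THRESHOLD:
        return f"[expert-grounded answer to: {question}]"
    if expert_online:
        # Route the question to the human expert, who can answer it
        # and optionally save the response for next time.
        return "I don't know yet. I've asked the expert directly."
    # Fall back to a generic model, with the caveat stated up front.
    return ("I don't have the expert's perspective on this. "
            "Here is a generic answer instead: [generic LLM answer]")

print(respond("Free things to do with my Little?", 0.2, expert_online=False))
```

The design choice worth noticing is that low confidence never produces a disguised guess: it produces either a handoff to a human or a clearly labeled generic answer.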
Discernment Is the Skill Worth Protecting
The big question Scott keeps coming back to is: Do you want AI to do it for you, or do you want AI to help you do it? The first wave, the "do it for me" wave, is what's driving all the conversation about cost-cutting and replacement. But Scott sees a more interesting future in the second: AI as a collaborator that sharpens human judgment instead of skipping over it.
He brought this same thinking to his work at Stanford's d.school. One of the most common mistakes leaders make when trying to design human-centered products is asking people what they would do instead of watching what they actually do. We are notoriously bad at predicting our own behavior. (His example: every gym membership or Peloton bike ever purchased.)
The leaders who get this right are the ones who can look at an AI-generated answer and know whether it's good enough, or whether something more important is at stake and they need to dig deeper. That ability to discern, Scott believes, is something AI can’t replicate. It comes from experience. It comes from values. It comes from all the things that make a person a person.
What Gives Scott Hope
Scott’s hope for the future of work? That we use AI to give ourselves more time with other humans, not less. We're in control of that. And if we remember that, he thinks we'll be okay.
And Jill and I hope listeners walk away from this episode a little more skeptical of confident LLM answers and a little more trusting of their own.
Listen to the full episode of Work Made Human to hear more from Scott Zimmer on wanting to be an architect as a kid, how he accidentally built a corporate career while wanting to be an entrepreneur, and why the best humans we know are the ones who can admit when they don't have the answer.
P.S. Bobby Flay, if you're reading this, Scott's ready to learn from you!