I spent months trying to build the perfect AI training programme for my team. The thing that actually worked was stopping the training and starting to solve real-world problems.
The problem isn't access, or skills, or budget. It's imagination, or rather the lack of it.
Earlier this year, I ran a polished AI workshop for a client's brand team in Singapore. Ninety minutes, live demos, great Q&A. The marketing director said it was eye-opening.
Three months after my workshop, the client team was still running the same workflows and the same manual processes. When I followed up, the marketing director gave me an honest answer: "The team doesn't know how to apply what they learnt to their own work."
I've seen this pattern across every team I've worked with, including my own. They have access. They're interested. Everyone has watched the videos and tried ChatGPT for a blog draft.
But people cannot picture AI applied to their specific, messy, real-world Tuesday afternoon. Adopting AI in your own work requires two things most people don't have yet. First, a deep understanding of where value is actually created in your process: what matters and what's noise. Second, and this is the harder one: imagination, the ability to picture a workflow that doesn't exist yet. A generic demo doesn't bridge that gap. Neither does a prompt library, or a training session where everyone nods along and goes back to their desk.
My SEO strategist had the most painful task on the team: turning a hundred SEMrush prompts into page content, one by one, for every client, every month. We'd rolled out Claude to everyone months ago. In fact, he had it open on his desktop but was still copying and pasting prompts into Claude manually. He simply couldn't see how an AI agent could solve his specific problem.
So I sat next to him, opened Claude Code, and we built an agent together that automates the whole process. We tested it on his real client data, and in under two hours we produced usable, working output that would have taken him days the old-fashioned way.
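We built ours live in Claude Code, so the snippet below isn't that agent. But to make the shape of the win concrete, here is a minimal Python sketch of the copy-paste loop we automated, using the official Anthropic SDK. Everything specific in it is an assumption for illustration: the prompts.csv export, the column names, the model alias, the output folder.

```python
# batch_pages.py -- hypothetical sketch of the kind of loop we built.
# Assumptions: prompts exported to prompts.csv (columns: client, prompt),
# the Anthropic Python SDK installed (pip install anthropic), and
# ANTHROPIC_API_KEY set in the environment. Illustrative, not production code.
import csv
import pathlib

import anthropic

MODEL = "claude-3-5-sonnet-latest"  # swap in whichever model your team uses
client = anthropic.Anthropic()      # reads ANTHROPIC_API_KEY from the environment

out_dir = pathlib.Path("drafts")
out_dir.mkdir(exist_ok=True)

with open("prompts.csv", newline="", encoding="utf-8") as f:
    for i, row in enumerate(csv.DictReader(f)):
        # One API call per prompt -- the step that used to be manual copy-paste.
        message = client.messages.create(
            model=MODEL,
            max_tokens=2000,
            messages=[{"role": "user", "content": row["prompt"]}],
        )
        draft = message.content[0].text
        # Each draft lands in its own file for the strategist to review and edit.
        (out_dir / f"{row['client']}_{i:03d}.md").write_text(draft, encoding="utf-8")
        print(f"done: {row['client']} #{i}")
```

The point isn't the twenty lines. It's that a hundred manual round trips collapse into one command he can rerun every month, and his time moves to reviewing drafts instead of producing them.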
He didn't say "that's amazing." He said: "So my job now is to understand the problem and direct this thing, not do the writing myself?"
Training teaches what AI can do in general. The real gap is what AI can do for me, on my task, with my data, by Friday.
Those are different problems. The first is knowledge; the second is imagination. You close the knowledge gap with a workshop. You close the imagination gap by solving someone's real problem in front of them.
I tried three approaches before I figured this out: a client workshop that changed nothing, a set of AI agents I built for my team that were powerful but only useful if I was in the room, and an inspiring offsite keynote that had a half-life of about five business days.
Every one of those created awareness without creating adoption.
After four months of getting this wrong, I landed on a method that actually works. I call it SEE, DO, OWN.
SEE. I sit with one team and solve their problem while they watch. Not a generic demo, but their actual problem, with their data, against their deadline. I narrate my thinking as I go: how I break the problem into components, where I start, what I choose to skip. They don't just see the output; they see the decomposition: how you actually go from "this is painful" to "this is buildable."
I tell my team: you want to build a bridge but you don't know how to lay bricks. We're going to lay bricks first.
DO. They take the keyboard with the same tool but a different task. I give them a new dataset and say: run it yourself. When they get stuck, I don't take over. Instead, I ask "what would you try next?" The learning isn't in watching me work, it's in the friction of doing it themselves, making a wrong turn, figuring out the right question to ask.
One successful run of their own is worth ten of mine.
OWN. I hand them the tool, a context document explaining how it works, and one specific task with a 48-hour deadline. Not "play around with it," but a real deliverable for a real client. Then I step back and check in after a week. If they can take a new problem and run the same process without me, they own it. If they can't, we do one more DO session.
I've now run this across four teams: content strategy, SEO, web development and operations. Each time, the session produced a working tool the team could use the next day. No training deck, no prompt library. Just their problem, solved, with a framework they can repeat on the next one.
The tools we build in these sessions matter less than what happens to the person's thinking. Before the session, AI is something they know they should probably use more. Afterwards, they see their role differently. They're not doing the manual work anymore, they're understanding the problem and directing AI to solve it.
That's not a productivity improvement; it's a role transformation. And it happens one person at a time, not in a town hall.
Q: How long does a session take?
A: 60-75 minutes for the first one. You demonstrate for 25 minutes, they take over for 15, then you review and assign next steps.
Q: Do team members need technical skills?
A: No. They talk to AI in natural language. If they can describe their problem clearly, they can use the tool.
Q: What if the first tool isn't good enough?
A: It won't be. That's the point. The first version is 70-80% right. Their job is to iterate it to 90%. Chasing perfection in the session means you never hand over.
Charanjit Singh is the CEO of Construct Digital, a B2B digital agency in Singapore. He's building AI-native delivery systems for marketing teams and documenting what works along the way.