We've all been in the meeting where, six months into development, someone says "the data doesn't actually work like that," or "users simply don't work that way."
By that point, the database is fixed, the API endpoints are agreed, and the business logic is in place. Changing core assumptions means a rewrite, not a quick adjustment.
Traditional Discovery has reduced how often this happens. But there's still a gap. Even good Discovery relies on abstractions - wireframes, user stories, best guesses about data and workflow. You only find out whether those guesses were right once you have working code.
At PaperKite, we've been experimenting with AI-powered rapid prototyping to close that gap. We've been jumping ahead to get something working, then stepping back to test whether we're solving the right problems. Early results suggest it's catching foundational assumptions that traditional Discovery sometimes misses.
This doesn't replace the fundamentals. Deep empathy, user research, understanding context - these remain essential. What we're describing is an additional tool for a specific problem: technical uncertainty that constrains how people articulate problems. Rapid prototyping removes that constraint, but only after the foundational work of understanding users and context.
Using Solutions to Discover the Right Problems
Traditional Discovery wisdom says to stay in the problem space. Don't jump to solutions too early. Make sure you understand the problem deeply before exploring answers. It's good advice - solving the wrong problem elegantly is expensive.
But our AI-augmented Discovery approach does something that seems to violate this principle: we give ourselves permission to jump forward to working solutions really quickly, then use those solutions as tools to validate we're focusing on the right problems.
Here's what we've observed: when people aren't sure if something is even possible, they get stuck. Stuck on early problems that feel tractable. Stuck on familiar solutions they know can work. The uncertainty about technical feasibility creates artificial constraints that narrow the problem space before we've properly explored it.
We use AI throughout Discovery in multiple ways. For example, we sometimes use Claude Projects to synthesise workshop conversations into structured problem statements and to generate problem-based user stories from workshop data. We also use AI to rapidly generate realistic test data that helps us validate our understanding of domain complexity. But the real shift comes from using Claude Code to build functional prototypes - not clickable mockups, but actual working code - within days, sometimes hours.
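To give a flavour of the test-data idea: the sketch below generates a seeded batch of synthetic trade-like records, the kind of stand-in data a prototype can run against while the real domain model is still being explored. Every name here (traders, symbols, field names, ranges) is an illustrative assumption, not our actual tooling or any client's data.

```python
import random
from datetime import date, timedelta

# Hypothetical sketch: synthetic trade records for probing domain complexity.
# All names and value ranges below are illustrative assumptions.
random.seed(42)  # a fixed seed makes prototype discussions reproducible

TRADERS = ["ACME Capital", "Kauri Asset Mgmt", "Harbour Funds"]
SYMBOLS = ["AIR", "FPH", "MEL", "SPK"]

def generate_trades(n, start=date(2023, 11, 1)):
    """Return n synthetic trade records spread over a 30-day window."""
    trades = []
    for _ in range(n):
        trades.append({
            "trader": random.choice(TRADERS),
            "symbol": random.choice(SYMBOLS),
            "side": random.choice(["buy", "sell"]),
            "quantity": random.randint(100, 10_000),
            "price": round(random.uniform(0.5, 35.0), 2),
            "date": start + timedelta(days=random.randint(0, 29)),
        })
    return trades

trades = generate_trades(200)
print(len(trades), "records, first side:", trades[0]["side"])
```

In practice an AI assistant drafts this kind of generator (and far messier edge cases) in minutes, which is what makes it cheap enough to use inside a Discovery session.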

This isn't about speed for speed's sake. It's about creating something concrete that answers the "is this even possible?" question quickly, so we can move past it. Once clients see a working prototype and think "oh, we could do that," they stop constraining themselves to safe, conservative problems. They start articulating the fuller, messier, more ambitious problem that actually needs solving.
This approach works best when technical feasibility is genuinely uncertain and that uncertainty is constraining problem exploration. For a stock trading platform with complex data relationships, or a public transport system with multiple integration points, showing what's possible unlocks better problem articulation.
But it's not appropriate for every Discovery challenge. If the core unknowns are about user behaviour, business model viability, or market positioning, building code won't help. We're tackling one specific risk: the cost of getting core technical assumptions wrong.
From Conservative to Comprehensive
A recent project demonstrated how rapid AI-powered prototyping pushes us beyond conservative thinking. We were designing a conversational interface for public transport journey planning.
Initially, we framed the job to be done conservatively: "I need to get the next number 2 bus to town." Standard journey planner territory. Using Claude Code, we could prototype solutions for this job incredibly quickly.
But that speed became provocative. If solving the basic journey planner problem was this fast, what's the fuller job to be done? Through iterative prototyping and discussion, we reframed: "I need to get to the office reliably on time without getting wet."
Same user, completely different solution space. Now we're thinking about reliability predictions, weather integration, alternative routes, and contingency planning. The prototype helped us recognise we were thinking too narrowly about the traditional job a journey planner might do, rather than the user's actual need.
In another Discovery workshop with a stock trading client, we used Claude Code to build a working prototype for natural language queries against their historical trading data. Within days, they could query "Show me active buyers from last November" and get real results. The conversation immediately shifted from "can you" to "should we" - from worrying about technical feasibility to focusing on the right problems to solve.
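The real prototype used Claude Code against the client's historical data; the toy sketch below only illustrates the shape of such a query layer, with a naive keyword parser standing in for the LLM and a few hard-coded sample records standing in for the trading database. All names are hypothetical.

```python
from collections import Counter

# Hypothetical stand-in data; the actual prototype queried real trading history.
TRADES = [
    {"trader": "ACME Capital", "side": "buy", "month": "november"},
    {"trader": "ACME Capital", "side": "buy", "month": "november"},
    {"trader": "Harbour Funds", "side": "buy", "month": "october"},
    {"trader": "Kauri Asset Mgmt", "side": "sell", "month": "november"},
]

def query(text):
    """Toy parser: spot a side and a month, filter, rank by activity.
    In the real prototype an LLM did this interpretation step."""
    text = text.lower()
    side = "buy" if "buy" in text else "sell"
    month = next((m for m in ("october", "november", "december") if m in text), None)
    hits = [t for t in TRADES
            if t["side"] == side and (month is None or t["month"] == month)]
    return Counter(t["trader"] for t in hits).most_common()

print(query("Show me active buyers from last November"))
# With the sample data above: [('ACME Capital', 2)]
```

The point of the veneer is exactly this thinness: a few days of work that makes "query your history in plain English" concrete enough to argue about, without pretending to be production software.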
Managing Expectations and Reality
The speed and functionality of AI-generated prototypes create a risk: clients may not realise they're seeing a "veneer" built for a specific purpose, not production-ready software.
We set clear expectations upfront through our hypothesis-driven approach - building just enough to test our thinking and address our greatest unknowns. We deliberately use bland colours and minimal design to signal "this is for learning, not launching."
This only works when we've done the upfront research to understand users and context, and when we're explicit that the prototype is an artefact for learning, not a head start on delivery. The goal is keeping Discovery focused on the right questions: Are we solving the right problem? Do we understand the job to be done? What are our riskiest assumptions?
What This Means
AI-powered prototyping changes where clients focus their energy. They come to us worried about technical feasibility. The rapid prototypes give them confidence quickly, freeing them to worry about what actually matters: building the right thing.
Clients commit to action more easily when they've seen working prototypes that make possibilities concrete. But more importantly, they commit to better solutions - ones validated through rapid experimentation rather than abstract speculation.
The gap between what Discovery promises and what it delivers has always been about the cost of being wrong. AI-powered rapid prototyping is reducing that cost further - catching foundational assumptions before they become your technical debt.
If you're planning a digital product and want to test your assumptions with working prototypes, let's talk about how AI-augmented Discovery could reduce your risk and accelerate your path to the right solution.