Key Takeaway: Three people, ninety days, production systems -- not a proof of concept and a strategy document.
Enterprise AI transformation does not need a 30-person team. One senior architect who understands the data architecture. Two engineers who can build production systems. Three people, embedded for 90 days. That's it. The industry has conditioned buyers to expect large teams because large teams generate large invoices. But headcount and output are not the same thing, and anyone who's spent time inside a big consulting engagement knows the dirty secret: half the team is managing the other half.
The standard consulting model: two months of discovery, three months of strategy, four months building a proof of concept, three months trying to get it to production. Twelve months later you have a working demo that lives on a staging server forever. The failure isn't technical -- extended timelines let politics accumulate, stakeholders rotate, and the proof of concept ends up targeting requirements that went stale six months ago. More months, more revenue. Everyone knows this. Nobody says it out loud. The right model inverts the incentive: you make money by being efficient, not by being slow.
Three roles
The architect. Senior, technical, making architectural decisions in real time -- and writing critical code themselves. The architect also owns the client relationship. No separate engagement manager. The person making promises is the person doing the work. That's the point.
Two engineers. Production-grade builders. One on data infrastructure, one on application-layer integration. Both can cover for each other.
No project manager. No business analyst. No QA team. Every role that doesn't directly produce working software has been eliminated.
Days 1-30: Assess
Identify the highest-ROI use case and validate the organization can actually operationalize it. That second part is where most assessments fail. Every organization has use cases where AI would be valuable. The real question is where high business value intersects with organizational readiness -- not what the executive wants, but what the daily users will actually adopt.
Pre-qualification across four dimensions: data readiness, organizational willingness, integration complexity, success measurability. Output: a one-page scope document. Not a 60-page strategy deck.
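The four-dimension screen can be expressed as a rubric. A minimal sketch, assuming a hypothetical 1-5 score per dimension and a floor that every dimension must clear; the class, threshold, and candidate names are all illustrative, not part of the original method:

```python
from dataclasses import dataclass

@dataclass
class UseCaseScore:
    """Hypothetical 1-5 rubric for the four pre-qualification dimensions."""
    name: str
    data_readiness: int          # does usable data exist today?
    org_willingness: int         # will daily users actually adopt it?
    integration_complexity: int  # 5 = clean integration, 1 = hairball
    success_measurability: int   # can value be measured post-deploy?

    def qualifies(self, floor: int = 3) -> bool:
        # A single weak dimension disqualifies: the pod only takes
        # use cases that clear the floor on every axis.
        dims = (self.data_readiness, self.org_willingness,
                self.integration_complexity, self.success_measurability)
        return min(dims) >= floor

candidates = [
    UseCaseScore("invoice triage", 4, 4, 3, 5),
    UseCaseScore("chatbot for everything", 2, 5, 1, 2),
]
shortlist = [c.name for c in candidates if c.qualifies()]
print(shortlist)  # -> ['invoice triage']
```

The min-over-dimensions gate, rather than a weighted average, matches the spirit of the one-page scope document: a high-value use case with unready data still fails the screen.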
Days 31-60: Build
Build with the client's team, not for them. The pod embeds -- same Slack channels, same standups, same codebase access. The target is production deployment on real data in the client's environment, with actual error handling and monitoring. If something breaks, it breaks while we're there. The client's engineers pair-program from day one.
Days 61-90: Transfer
Documentation engineers actually read -- architecture decision records explaining why, runbooks for the three most likely failure modes, a single-page system map. Every component gets a named owner, not a team. Two weeks of supervised independence where the client's team operates the system while we observe but don't touch the code.
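The named-owner rule is easy to enforce mechanically. A minimal sketch, assuming the system map is kept as a component-to-owner mapping; all component names, people, and team labels below are illustrative placeholders:

```python
# Team labels that are NOT acceptable owners during transfer --
# ownership must land on a named individual. Placeholder values.
TEAM_LABELS = {"platform-team", "data-team", "ml-team"}

# Hypothetical single-page system map: component -> named owner.
system_map = {
    "ingestion-pipeline": "priya",
    "feature-store": "marcus",
    "inference-service": "priya",
    "monitoring-dashboard": "dana",
}

def unowned(components: dict) -> list:
    # Flag anything assigned to a team label instead of a person --
    # those are the components most likely to be orphaned after handoff.
    return [c for c, owner in components.items() if owner in TEAM_LABELS]

print(unowned(system_map))  # -> []
```

A check like this can run in CI during the two weeks of supervised independence, so ownership gaps surface before the pod leaves rather than after.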
The A-B-C progression
Phase A: Identify the highest-value AI application, build it, ship it. Not a proof of concept — a proof of production.
Phase B: Based on what Phase A reveals, identify the next high-value applications. Deploy additional pods or extend the original.
Phase C: Make the pod unnecessary. The client's internal team builds AI systems without external help.
Traditional consulting has no incentive to make itself unnecessary. The pod model has the opposite incentive -- the faster the client becomes independent, the faster the pod is available for the next engagement. Optimize for velocity, not billable hours.
Limitations
Large-scale data migrations need more than three people. Org change across a 10,000-person enterprise needs different skills. The pod is a precision tool. It requires a specific client: executive sponsor with real authority, technical team willing to work alongside the pod, data that exists, and a business problem real enough that a working solution produces measurable value.
When those conditions are met, three people in 90 days produce more lasting value than any large team engagement I've seen. The harder sell -- and this is the part that took me the longest to internalize -- is convincing organizations that speed is possible. They've been conditioned by years of consulting to believe enterprise AI takes 12-18 months. That belief is a feature of the consulting model, not a fact about the technology. The first time a client sees a working production system in sixty days, you can watch the mental model shift in real time. They stop asking "is this possible?" and start asking "what else can we do?" That's when the real engagement begins.