
Drive vs Drift

AI does not just make coding faster. It makes synthesis, gap analysis, and proof management cheap enough to run continuously. The right unit of management is no longer the pre-baked backlog. It is the milestone artefact, the proof loop, and the weekly judgement about whether Drive is beating Drift.

The old default in software was that planning was expensive, synthesis was manual, and coordination overhead was treated as normal. Teams spent a significant amount of time turning an objective into a roadmap, then into epics, then into stories, then into estimates, and then into a set of ceremonies designed to keep everyone aligned with a plan that was already going stale. That made sense when the cost of understanding the gap between current reality and desired outcome was high. It makes much less sense now.

AI changes the economics of synthesis. It does not merely make coding faster. Coding is only part of the work and, in many cases, not the binding constraint. A team can automate large parts of implementation and still get very little real benefit if the target is vague, the context is muddled, the product logic is weak, or the work is drifting away from the actual milestone. If you do not know what you are trying to deliver, why it matters, how the user experiences it, what the domain model is, where the conversion moments are, and how the system should behave as a coherent whole, then AI will just help you produce misaligned output faster.

The real leverage in AI-native execution sits further upstream than coding automation. It comes from reducing the cost of turning strategic intent into a clear milestone artefact, reducing the cost of continuously synthesising the gap between where you are and where you need to be, and reducing the cost of deciding what the next few workstreams should be. Once that is in place, AI can help with coding very effectively, but the coding gains only compound if the upstream thinking is sound.

The Strategic Context Stack

The Strategic Context Stack sits above this method and has to be coherent before the method can work properly. Purpose, vision, mission, goals, strategy, bets, OKRs, and KPIs are not interchangeable management words. Each layer has a different job, a different time horizon, and a different relationship to the layers above and below it. The stack matters because the milestone cannot be judged in isolation. A release can have strong local coherence and still be pointed at the wrong thing if it is not properly nested inside the strategic context.

Drive vs Drift assumes that the organisation can distinguish long-horizon direction from the current strategic mechanism, the active bets being placed, the instrumentation that tests those bets, and the operational health signals that keep the machine honest.
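
To make the layering concrete, the stack can be held as data rather than as a slide. The sketch below is illustrative only: the layer names come from the paragraph above, while the fields, the horizons, and the coherence check are assumptions, not a canonical schema.

```ts
// Sketch of the Strategic Context Stack as data. Layer names come from
// the article; fields, horizons, and the check are illustrative assumptions.
type StackLayer =
  | "purpose" | "vision" | "mission" | "goals"
  | "strategy" | "bets" | "okrs" | "kpis";

interface ContextLayer {
  layer: StackLayer;
  statement: string;    // what this layer asserts
  horizon: string;      // e.g. "decade", "year", "quarter"
  parent?: StackLayer;  // the layer it must nest inside
}

// A purely structural coherence check: every layer that claims a parent
// must actually have that parent present in the stack.
function isCoherent(stack: ContextLayer[]): boolean {
  const present = new Set(stack.map((l) => l.layer));
  return stack.every((l) => l.parent === undefined || present.has(l.parent));
}
```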

The milestone artefact

The milestone artefact is where the Strategic Context Stack becomes operational for a specific delivery. It is not a slogan, and it is not an extra moving part bolted onto the method. Its role is to set and stabilise the target so Drive and Drift can be judged properly. Execution without a clear milestone artefact is just efficient wandering, and execution against an unstable milestone artefact is disguised replanning.

In practice this means writing a milestone scope document that is specific enough to constrain implementation. It needs to define the user outcome, the key flows, the payoff moments, the domain language, the sequencing, the boundaries, the things that are explicitly not in scope, and the alignment rules across product, code, tests, and reporting. That artefact is not a decorative plan. It is the source of truth against which movement and deviation are judged.
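
One way to make "specific enough to constrain implementation" tangible is to treat the artefact as structured data rather than free prose. The schema below is a hypothetical sketch; the section names mirror the list above, and none of the field names are canonical.

```ts
// Hypothetical schema for a milestone scope document. Sections mirror
// the list in the paragraph above; field names are illustrative.
interface MilestoneArtefact {
  userOutcome: string;                     // the outcome the user gets at the end
  keyFlows: string[];                      // the flows that must work end to end
  payoffMoments: string[];                 // where the user feels the value land
  domainLanguage: Record<string, string>;  // term -> agreed definition
  sequencing: string[];                    // the order in which capabilities land
  boundaries: string[];                    // hard edges of the release
  outOfScope: string[];                    // explicitly excluded work
  alignmentRules: string[];                // rules tying product, code, tests, reporting together
}
```

The practical benefit of holding the artefact as data is that a proposed workstream can be checked against boundaries and outOfScope before any work starts.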

Drive

Drive is the measure of net directional gain. I do not judge a week by performative productivity, ticket count, or visible busyness. I judge it by what became newly true that materially advances the milestone. Did we remove ambiguity from the critical path? Did we make the user journey more real? Did we turn a vague concept into a functioning capability? Did we create evidence that the milestone is becoming deliverable?

A team can look active while making little meaningful progress, and equally a week can look modest while actually producing a major increase in future velocity by resolving a hard constraint or restoring coherence. Drive is about movement towards the milestone, not motion for its own sake.

Drift

Drift is the measure of deviation from the intended milestone. Most teams think of drift only as scope creep, but the real forms of drift are broader and more dangerous:

- Scope drift: the target quietly expands beyond what the milestone artefact defines.
- Model drift: the domain language and product logic fragment across product, code, tests, and reporting.
- Proof drift: fake completion, where half-passing checks and reassuring screenshots stand in for evidence.
- Milestone drift: the target itself gets rewritten informally, so deviation stops being visible at all.

A team can generate a lot of drive while also accumulating drift, and that is one of the main ways delivery slows down without people admitting it.

Why the contest matters

The reason I call the method Drive vs Drift is that weekly execution is ultimately a contest between those two forces. A high-Drive and high-Drift week is usually worse than a lower-Drive and lower-Drift week if the drift introduced will have to be paid back with interest. Net directional gain is not raw movement. It is movement minus deviation. Fast progress that weakens the release shape, fragments the model, or creates fake completion is not really progress.
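
A deliberately crude way to see why a high-Drive, high-Drift week loses: if drift has to be repaid with interest, weight it up before subtracting. The scores and the interest factor below are toy assumptions for illustration, not a proposed metric.

```ts
// Toy model of "movement minus deviation". The scores and the interest
// factor are assumptions, not a real measurement scheme.
function netDirectionalGain(drive: number, drift: number, interest = 1.5): number {
  // Drift is repaid with interest, so weight it up before subtracting.
  return drive - drift * interest;
}

netDirectionalGain(8, 6); // -1:  a busy, high-Drive week that was net negative
netDirectionalGain(5, 1); //  3.5: a modest week that actually moved the milestone
```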

The method only works if people are willing to say that some weeks which felt productive were, in fact, net negative.

Proof Synthesis

Proof Synthesis is the day-to-day evidence loop that keeps the method honest. The milestone artefact defines what must become true. Proof Synthesis tracks what is actually true now. Without that distinction, teams slide very quickly into a mush of assertions, local confidence, half-passing checks, and screenshots that look reassuring without really proving continuity.

Proof Synthesis is not a generic status update. It is a structured reconciliation between the milestone artefact, the current state of the product, and the evidence that exists right now. It should say clearly:

- what is now proven, and at what level
- what is still open or only partially evidenced
- which blockers are unchanged
- where continuity across flows is still missing
- whether each piece of evidence is local proof or full end-to-end proof

That distinction matters because many releases appear healthier than they are when local proof is mistaken for full proof.

A concrete example

A useful Proof Synthesis might say that typecheck passed, release alignment checks passed, targeted flow reruns passed, and regenerated screenshots now show the intended non-zero planning range and the saved reference gallery state. It might then say that a full maintained screenshot pack is still pending, that reference-to-brief-to-reveal continuity is not yet fully evidenced, that launch-readiness blockers remain unchanged, and that the next highest-yield move is to finish the remaining media work and then run the full canonical evidence pack.

That is not project management theatre. It is a disciplined statement of what is now proven, what is still open, and what next most improves the state of proof.
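
The same statement can travel as a structured record instead of prose, which makes it diffable from one day to the next. The shape below is a hypothetical sketch; the values simply restate the example above.

```ts
// Hypothetical shape for a Proof Synthesis record; the values restate
// the worked example above.
type ProofLevel = "local" | "end-to-end";

interface ProofSynthesis {
  proven: { claim: string; level: ProofLevel }[];
  open: string[];               // still unevidenced or pending
  blockersUnchanged: string[];  // known blockers with no new evidence
  nextHighestYieldMove: string; // the move that most improves the state of proof
}

const today: ProofSynthesis = {
  proven: [
    { claim: "typecheck passed", level: "local" },
    { claim: "release alignment checks passed", level: "local" },
    { claim: "targeted flow reruns passed", level: "local" },
    { claim: "screenshots show the intended non-zero planning range", level: "local" },
  ],
  open: [
    "full maintained screenshot pack",
    "reference-to-brief-to-reveal continuity",
  ],
  blockersUnchanged: ["launch-readiness blockers"],
  nextHighestYieldMove:
    "finish remaining media work, then run the full canonical evidence pack",
};
```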

AI in the proof loop

The practical value of AI in this loop is that it can keep current state, evidence, and milestone intent in active reconciliation. Instead of asking vaguely what to do next, I ask for a proof-oriented synthesis of current state against the milestone. That conversation should not explode into a new planning exercise. It should produce a crisp answer to a narrower set of questions: What is now proven? What is still open? Which blockers are unchanged? Where is continuity still missing? What is the next highest-yield move that would most improve the state of proof?
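
In practice this can be as mechanical as a fixed prompt that pins the conversation to those questions. The wording below is one possible phrasing, not a canonical template.

```ts
// One possible phrasing of a proof-oriented synthesis prompt. Adapt the
// wording to your own milestone artefact; nothing here is canonical.
const proofSynthesisPrompt = `
Against the milestone artefact, answer only these questions:
1. What is now proven, and at what level (local vs end-to-end)?
2. What is still open?
3. Which blockers are unchanged?
4. Where is continuity still missing?
5. What is the single next highest-yield move for the state of proof?
Do not propose new scope. Do not replan the milestone.
`;
```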

The value of this is that it keeps execution anchored in evidence rather than sentiment. The milestone can remain stable while the proof picture becomes sharper every day.

When to change the milestone

The method only works if there is discipline around the stability of the milestone artefact. If AI makes synthesis cheap, people can easily mistake that for permission to keep rewriting the milestone. That would just recreate planning overhead in a more continuous and less honest form. The milestone artefact should usually stay stable while Proof Synthesis remains fluid.

You do not change the milestone because new work appeared, implementation is harder than expected, or a fresh idea sounds attractive. You change it when:

- the evidence shows the milestone cannot be delivered as specified
- the strategic context it is nested inside has genuinely shifted
- value recalibration shows the return no longer justifies the path

That last case matters because value recalibration is real. A team can learn that the target is possible and still decide that the return no longer justifies the path. That is not drift. That is a legitimate change in the milestone artefact.

The operating model

The practical operating model is straightforward. The Strategic Context Stack sets the governing frame. The milestone artefact turns that frame into a concrete target for the release. AI then helps synthesise the gap between current state and desired state so the next three to five workstreams become clear. I am not trying to plan every task in advance, and I am not pretending I can forecast all dependencies cleanly upfront. I am trying to keep the next layer of work sharply grounded in the milestone and in reality.

During execution, I repeatedly run Proof Synthesis so I know what is genuinely evidenced, what remains open, and where the next highest-yield move sits. At the end of the week, I assess whether Drive outpaced Drift and whether the milestone artefact still deserves to hold.
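
Put together, the weekly loop looks roughly like the sketch below, reusing the types from the earlier sketches. The declared functions are placeholders for human-plus-AI activities, not library calls.

```ts
// Illustrative weekly loop, assuming the MilestoneArtefact and
// ProofSynthesis types and netDirectionalGain sketched earlier.
// Declarations stand in for human-plus-AI activities, not library calls.
declare function synthesiseGap(artefact: MilestoneArtefact): string[]; // next 3-5 workstreams
declare function runProofSynthesis(artefact: MilestoneArtefact): ProofSynthesis;
declare function weeklyDriveVsDrift(proof: ProofSynthesis): { drive: number; drift: number };
declare function milestoneStillHolds(artefact: MilestoneArtefact): boolean;

function executeWeek(artefact: MilestoneArtefact): void {
  // 1. Synthesise the gap into the next three to five workstreams.
  const workstreams = synthesiseGap(artefact);

  // 2. During execution, repeatedly reconcile evidence against the milestone.
  let proof = runProofSynthesis(artefact);
  for (const _stream of workstreams) {
    // ...execute the workstream, then refresh the state of proof...
    proof = runProofSynthesis(artefact);
  }

  // 3. Weekly judgement: did Drive beat Drift, and does the milestone still hold?
  const { drive, drift } = weeklyDriveVsDrift(proof);
  if (netDirectionalGain(drive, drift) <= 0 || !milestoneStillHolds(artefact)) {
    // A net-negative week or a stale target: recover coherence or
    // deliberately change the artefact, never drift it.
  }
}
```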

Why this works now

This method works now because AI allows synthesis to happen continuously rather than ceremonially. It is now cheap enough to re-evaluate the gap between current reality and desired outcome several times a week without freezing the organisation in planning rituals. It is also cheap enough to keep a running state of proof rather than relying on vague confidence and lagging reports.

The right unit of management is no longer the pre-baked backlog. It is the milestone artefact anchored in the Strategic Context Stack, the proof loop wrapped around current execution, and the weekly judgement about whether Drive is beating Drift. The point is not to abandon discipline. It is to move discipline to the place where it now has the highest leverage.


The gains from this approach do not come from treating AI as a faster programmer. They come from using AI to make synthesis, design integration, proof management, and execution steering dramatically cheaper, while holding a clear target and refusing to drift away from it.

If the Strategic Context Stack is incoherent, the milestone artefact will be weak. If the milestone is weak, Drive and Drift cannot be judged properly. If Proof Synthesis is weak, the team will confuse confidence with evidence. If all three are sound, the result is not just faster coding. The result is a much tighter execution system.
