Reference
Purpose, vision, mission, strategy, goals, OKRs, KPIs: what each word is supposed to mean operationally, why most organisations conflate them, and why AI reveals the incoherence instantly.
Every organisation uses the same words. Purpose. Vision. Mission. Strategy. Goals. Almost no two organisations mean the same thing by any of them.
In most companies, “strategy” means a deck someone prepared for a planning offsite. “Vision” means the sentence on the careers page. “Mission” means roughly the same thing as vision, used interchangeably depending on who is in the room. “Goals” means the OKRs that were agreed in January and last reviewed in March.
The words have been used so loosely, for so long, that they have lost their load-bearing function. They still appear in documents. They still get discussed in leadership offsites. They no longer do the structural work they were designed to do.
This matters for the argument here because busy work does not begin with the meetings or the slide decks. It begins earlier, in the absence of a coherent governing structure. When an organisation does not know the difference between a purpose and a vision, or between a strategy and a plan, or between an OKR and a KPI, it cannot govern itself through those concepts. It can only perform them.
What follows is what I think these words are supposed to mean, not as management philosophy, but as operational architecture. Each layer has a specific job, a specific horizon, a specific owner, and a specific relationship to the layers above and below it. None of this is revolutionary. Most of it is recoverable from the original intentions behind these concepts, before decades of corporate usage eroded them into decoration. As you read, ask yourself how clearly your organisation distinguishes between them.
| Layer | Horizon | Question | Owner | Changes when |
|---|---|---|---|---|
| Purpose | Permanent | Why do we exist? | Founder / Board | Almost never |
| Vision | 15–20 yr | What are we creating over the coming decades? | Board / CEO | Rarely |
| Mission | 5 yr | What are we becoming? | CEO | Market shifts |
| Goals | 2–3 yr | What does success look like? | Leadership | Mission progresses |
| Strategy | 12–18 mo | Which mechanism, and why? | Cross-functional | Hypothesis falsified |
| Bets | Quarterly | Which initiatives test it? | Teams (bottom-up) | Evidence demands it |
| OKRs | Rolling | Are the bets working? | Teams | Each bet cycle |
| KPIs | Continuous | Is the machine healthy? | Operations | Separate from above |
Purpose answers one question: why does this organisation exist? Not what it does, not what it is becoming, not what the world looks like if it succeeds. Just: why.
Purpose is the only layer in the stack that should never change. If it does, you have a different organisation. A retailer whose purpose is to make quality goods accessible to ordinary families means something different from a retailer whose purpose is to maximise shareholder returns through retail operations. Both may sell the same products. They will make different decisions when the two objectives conflict, and the conflict always arrives eventually.
The most common error is confusing purpose with vision. Purpose explains why the organisation exists. Vision describes what the organisation is creating over the coming decades. One is a reason. The other is a picture. An organisation that conflates them ends up with a sentence on the wall that is too abstract to anchor anything and too specific to be permanent.
Purpose rarely appears in operational planning, which is why it is easy to dismiss as decorative. Its structural job is to resolve tie-breaks at the top of the stack. When two visions are equally plausible, purpose settles which one the organisation should pursue. When it is absent or vague, those tie-breaks are settled by politics, inertia, or whoever is loudest in the room.
A vision is not a motivational sentence. It is a claim about what the world looks like if the organisation succeeds over a generational timeframe. It is directional rather than measurable, which is appropriate for its horizon. You cannot set a KPI for twenty years from now. You can articulate a direction that makes subsequent choices legible.
The test of a vision is not whether it inspires but whether it rules things out. A vision that is compatible with every possible strategy is a slogan, not a vision. A genuine vision makes certain strategies obviously wrong and certain others obviously worth pursuing. Most corporate vision statements fail this test. They are written to be inarguable, which makes them structurally useless.
Vision changes rarely because the world changes slowly at civilisational scale. Treating vision changes as routine is a sign that the organisation never had one.
A mission is the concrete ambition the organisation is pursuing within its vision over a five-year window. Where the vision describes a direction, the mission describes a destination, specific enough that you could know whether you had arrived.
The distinction matters because a vision without a mission produces drift. The organisation knows roughly where it is heading but has no near-term commitment against which to test its choices. A mission should survive normal turbulence, such as competitive moves, market fluctuations, and leadership changes, without revision. It should be revisited only when the fundamental conditions that made it right have changed.
The most common error is conflating mission with vision, treating them as two names for the same thing. Vision describes what the organisation is creating over the coming decades. Mission describes what it does in the next five years to advance that picture. One is directional. The other is operational.
Goals are the specific outcomes that define what mission success looks like. They translate the mission into measurable commitments. A mission that cannot be broken into goals is probably still too abstract to govern anything.
Good goals are specific enough to be falsified. At the end of the period, you can say clearly whether each was achieved or not. Goals that dissolve under inspection, that turn out to mean whatever is convenient at the time of review, are narrative dressed as accountability.
Strategy is where most organisations fail most completely, and where the vocabulary has degraded most severely.
A strategy is not a plan. It is not a set of priorities. It is not a description of what the organisation intends to do. A strategy is a mechanism for achieving a goal, together with the causal logic that makes the mechanism the right choice.
The causal logic is not optional. Without it, a strategy cannot be evaluated, challenged, or updated. Consider two descriptions:
“We will grow through retail partnerships.”
versus
“We will grow through retail partnerships because our target customer already trusts the retailers we are targeting, we cannot build that trust from scratch at our current stage, and our unit economics only work at the volumes that retail distribution makes achievable.”
The first is a decision. The second is a strategy. The difference is that the second makes explicit the assumptions it is betting on. Those assumptions can be tested. If the target customer turns out not to trust the retailer, or the unit economics do not improve at the projected volumes, the strategy has been falsified and should be revised.
A strategy implies tradeoffs. If everything is still possible, if no path has been ruled out, the organisation has not made a strategic choice. It has deferred one. The discomfort of commitment is not a sign that the strategy is wrong. It is a sign that it is real.
Strategy changes when the hypothesis is proven wrong, not when it becomes uncomfortable. Changing strategy in response to short-term pressure rather than accumulated evidence is not agility. It is instability.
Falsifiability is what distinguishes strategy from everything above it in the stack. Vision and mission are stable commitments. Strategy is a falsifiable hypothesis. Holding both simultaneously, staying committed to the direction while remaining willing to update the mechanism, is the cognitive discipline that separates organisations that learn from organisations that merely react.
Below strategy sits the initiative layer. The right word for these is bets. Not projects. Not initiatives. Not workstreams.
The language matters because bets carry information that projects do not. A bet implies a hypothesis, a stake, an expected return, and a time horizon. It implies the person placing it could be wrong. A project implies execution of something already decided. A bet implies ongoing evaluation of whether the decision was correct.
The distinction changes accountability. When a project fails, someone has not delivered. When a bet does not pay off, the organisation has learned something. An organisation that treats every initiative as a project cannot fail safely and therefore cannot learn safely.
Bets should be proposable from the bottom up. If the strategy is sufficiently well articulated, teams at every level can read it and ask: does this initiative move one of the levers the strategy depends on? This is how alignment works without becoming compliance. Teams do not follow orders. They understand the logic well enough to generate their own ideas that fit.
OKRs are not a goal-setting system. They are an instrumentation system. The distinction is important because most organisations that implement OKRs treat them as the former and are disappointed by the results.
An OKR measures whether a bet is working. Specifically, it measures leading indicators: the signals that should move before the outcome does, if the causal logic in the strategy is correct. If the strategy claims retail partnerships will drive growth because customer trust transfers from the retailer, the leading indicator is not growth. It is something earlier in the chain: partner acquisition rate, conversion at partner locations, or repeat purchase behaviour.
This link to causal logic is why OKRs fail when they are disconnected from strategy. An OKR that measures an outcome the strategy does not specifically predict is target-setting rather than instrumentation. The difference is that instrumentation tests a hypothesis while target-setting merely records a number.
OKRs change with each bet cycle, because different bets require different instrumentation. An OKR programme that produces stable, multi-year objectives has drifted from instrumentation toward target-setting.
KPIs are not OKRs. They do not serve the same function, and collapsing them is one of the most reliable ways to ensure that both stop working.
A KPI measures operational health: customer satisfaction, system uptime, error rates, unit costs, staffing levels, compliance indicators. These are important numbers. They are not strategic.
The confusion arises because KPIs are often expressed in the same format as OKRs, targets with owners and thresholds, and managed in the same planning system. A KPI tells you the engine is not on fire. An OKR tells you the engine is heading somewhere worth going. An organisation that uses one set of numbers for both questions will eventually answer neither correctly.
KPIs should be continuous and should not cycle with strategy or bets. When a KPI deteriorates, that is an operational problem requiring an operational response. When an OKR fails to move, that is a strategic signal requiring a strategic response. Treating them as the same kind of signal produces the wrong response to both.
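The two signal types and their two responses can be sketched as distinct data types with a routing rule. This is a minimal illustration, assuming hypothetical names and thresholds, not a real monitoring system:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """Continuous health metric. Deterioration is an operational problem."""
    name: str
    value: float
    threshold: float

    def breached(self) -> bool:
        return self.value < self.threshold

@dataclass
class OKR:
    """Leading indicator instrumenting one bet. Failure to move is a strategic signal."""
    key_result: str
    bet: str  # the bet this key result instruments
    target: float
    current: float

def route(signal) -> str:
    # Same format, different function: the type of the signal decides the response.
    if isinstance(signal, KPI):
        return "operational response: fix the machine"
    return f"strategic response: re-examine the bet '{signal.bet}'"

uptime = KPI("system uptime", value=99.2, threshold=99.9)
conversion = OKR("conversion at partner locations", bet="retail pilot",
                 target=0.08, current=0.02)

print(route(uptime))      # operational response: fix the machine
print(route(conversion))  # strategic response: re-examine the bet 'retail pilot'
```

Keeping the types separate is the whole design choice: a planning system that stores both in one table of "targets" has already collapsed the distinction the sketch preserves.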
There is a deeper problem. Most software-dependent corporates steer primarily using lagging financial KPIs: quarterly revenue, cost ratios, margin, budget variance. These numbers describe what already happened. They cannot tell you why it happened, because the causal chain between a budget decision and its operational consequence passes through system architecture, process ownership, and engineering capability, none of which the financial model can observe. The CFO who kills a platform investment because it does not show ROI in year one is steering with a lagging indicator that is structurally blind to the thing it is trying to measure. The incidents that follow are attributed to engineering quality rather than to the investment decision, because the financial instrument cannot trace the connection.
Leading indicators, the kind that OKRs should measure, look different: deployment frequency, process drift rate, contract violation count, time from regulatory change to system change, the ratio of coordination time to production time. These numbers describe what the system is doing now, not what the P&L said last quarter. An organisation that steers using leading operational indicators detects problems while they are still cheap to fix. An organisation that steers using lagging financial indicators discovers problems after the cost has already been incurred and attributed to the wrong cause.
Describing the layers separately understates the most important point: the layers must connect.
The most common failure of organisational governance is not that any individual layer is badly designed. It is that the layers are designed independently and then left to interact informally. Vision exists in one document. Mission exists in another. Strategy was agreed at a planning offsite and lives in a slide deck that nobody has read since February. OKRs were set by teams who were not in the room when the strategy was agreed. KPIs are managed by a function that does not participate in strategic planning at all.
Independently designed layers produce an organisation that has all the vocabulary of coherent governance and none of the actual coherence. Teams can point to OKRs. Leaders can point to strategies. Executives can point to the mission. Nobody can trace the chain from what a team is doing today to why it serves the organisation’s five-year ambition.
Disconnected governance is precisely the condition that AI reveals and accelerates. A machine that can read the strategy document, the OKRs, the KPIs, the team roadmaps, and the codebase simultaneously will surface what humans have learned to ignore: that the documents describe different organisations, and that the organisation running on the ground matches none of them perfectly.
The Strategic Context Stack is not a new governance methodology. It is a diagnosis of the minimum structural coherence that governance requires. An organisation that knows the difference between a purpose and a vision, between a strategy and a plan, between an OKR and a KPI, and that has connected those layers explicitly rather than assuming they will align informally, has the preconditions for the kind of reconciliation described in the preceding articles.
An organisation that uses all these words without distinguishing between them is producing a more sophisticated version of the brochureware problem. The governance vocabulary sounds mature. The connection to the actual business is absent.
The competitive consequence is direct. An organisation whose strategic context stack is coherent can point AI at it and get reconciliation: are the bets serving the strategy, is the strategy falsifiable against the goals, are the OKRs measuring what the bets need them to measure, are the KPIs stable while the strategic indicators move? A competitor with a connected stack gets that answer every week. An organisation whose stack is decorative gets a confident summary of its own incoherence, which it will mistake for insight.
The gap between these two organisations is not a question of AI capability. The tools are identical. The gap is structural, and it compounds at the rate of every reconciliation cycle that one runs and the other cannot, because the other has nothing coherent for the machine to read.