OpenAI: The Intelligence Utility

From Clarity to Gravity: OpenAI + Sam Altman

A positioning analysis of the company that created a category, owns a product noun, and is searching for the concept underneath.

A note before we begin. I wrote this analysis because I admire the work. OpenAI and Sam Altman have done something rare; they created a category that changed how hundreds of millions of people interact with technology. That deserves respect, and I want to be clear that respect is the starting point here.

I’ve been watching their content, reading the essays, following the decisions, and studying how the brand connects to identity. This is what I do. I look at businesses through a positioning lens to understand what makes them stick, cohere, and mean something to the people who choose them.

I’ve done my best to work from accurate, verifiable information. If I’ve gotten something wrong, the intent is not to misrepresent. It’s to read the label from the outside, because no one can read their own. That’s not a criticism; it’s a structural limitation that applies to every founder, every company, every brand. The people inside the building are the last to see what’s written on the outside.

This entire analysis exists to answer one question: What business are you in?

Sam and the team may have a strong point of view on that, informed by their own data, evidence, and experience of building the thing every day. I would expect nothing less. But the answer you live inside and the answer the market perceives are not always the same answer, and the gap between them is where positioning lives.

How to read this analysis

This article uses my 4-Level Positioning Canvas, a framework for diagnosing where a company actually stands versus where it claims to stand. Here is a quick reference so the levels make sense as they come up.

The four levels build on each other. You cannot skip one.

Level 4, Position (Own the Noun): The concept that becomes synonymous with you in people’s minds. Volvo owns “safety.” Tesla owns “the future.” This is the strongest level and the hardest to reach; it takes five to ten years and cannot be claimed explicitly. It can only be proven implicitly, through years of consistent decisions. If you have to tell people what you are, you are not there yet.

Level 1, Frame (Articulate): The language you use to express your positioning. Not the position itself; just the words wrapped around it. This is the easiest level to develop, and the weakest barrier, because anyone can copy words.

Level 2, Execute (Prove with Verbs): The measurable outcomes that validate what you claim. Actions, metrics, results. Strong execution without a clear concept proves “useful” but not “differentiated.”

Level 3, Live (Structural Embedding): When positioning is embedded in how you allocate resources, who you hire, which partnerships you form, and how you organize. At this level, 70% or more of resources flow to positioning-critical capabilities. The test: if a competitor had access to your P&L, what would shock them?

The sequence matters: Level 4 first, then Level 1, then Level 2, then Level 3. Most companies operate at Level 1 while claiming Level 4. The gap between where a company operates and where it claims to be is where the analysis lives.

Part 1: The Story They Tell

OpenAI tells two stories. Sometimes three. The stories compete with each other, and the competition reveals more about the company’s positioning than any of them do individually.

The first story comes from Sam Altman himself. It is civilizational in scope, philosophical in tone, and built entirely from nouns. “The Intelligence Age.” “Superintelligence.” “Abundance.” “Prosperity for all of humanity.” In a blog post titled Reflections, Altman wrote: “We are now confident we know how to build AGI as we have traditionally understood it.”

In an earlier essay, he predicted superintelligence “in a few thousand days.” These are not product claims. They are epoch claims. Altman positions OpenAI not as a technology company building tools, but as the steward of a civilizational transition, the organization responsible for ushering humanity into its next era.

The second story comes from OpenAI’s marketing. It is practical, grounded, and built from verbs. Recipes. Fitness plans. Small business help. Homework assistance. The Super Bowl ad showed ChatGPT as a friendly, everyday companion, not a harbinger of superintelligence. The marketing team knows what customers actually do with the product, and what customers actually do has nothing to do with AGI.

The third story leaked. An internal strategy memo, surfaced through Department of Justice proceedings, described ChatGPT as an “intuitive AI super assistant… your interface to the internet.” This is neither civilizational nor mundane. It is a platform ambition, a bid to become the layer between humans and everything digital. The $6.4 billion acquisition of Jony Ive’s hardware company confirms the ambition is real.

Three stories. Three frames. Zero coherence.

The linguistic architecture makes the incoherence precise. Altman’s language is overwhelmingly noun-driven: AGI, superintelligence, Intelligence Age, prosperity, abundance, civilization. These are territory-claiming words. But they are territory-claiming words without verbs to prove them. “Ensure AGI benefits all of humanity” pairs an unmeasurable verb with an abstract noun; nothing in it can be tested. Compare this to Steve Jobs saying “1,000 songs in your pocket,” which is concrete, visceral, and immediately testable, or Jeff Bezos anchoring Amazon to “the most customer-centric company on Earth,” which is specific and measurable. Altman’s “superintelligence in a few thousand days” is speculative and unmeasurable. It asks you to believe, not to verify.

Meanwhile, the product-level verbs, the things ChatGPT actually does, are robust. Enterprise users save 40 to 60 minutes per day. API reasoning token usage grew 320x year-over-year. GPT-5 achieves 74.9% on SWE-bench. One million enterprise customers, seven million workplace seats. These verbs prove something real. But what they prove is “useful AI tool,” not “AGI that benefits humanity.” The verbs and the nouns belong to different companies.

And then there are the adjectives. “Most capable.” “Smartest.” “Most important.” “Safe and beneficial.” These are the weakest possible positioning artifacts, pure framing: superlative claims that invite challenge rather than establish territory. “Safe and beneficial” is particularly telling: two adjectives, no proof mechanism, and a growing body of evidence that contradicts both. Adjectives are false positioning. They are comparisons dressed as identity, and comparisons can always be flipped by whoever ships the next benchmark result.

The metrics OpenAI obsesses over tell their own story. Revenue growth: $2 billion to $6 billion to $20 billion in three years. Weekly active users: 800 to 900 million. Valuation: $840 billion. These are scale metrics. They measure how big, not how differentiated. They measure distribution, not position. A company that tracks scale above all else is telling you what it actually values, regardless of what its CEO writes in blog posts.

So what is OpenAI claiming? It claims to be the company building AGI to benefit all of humanity, safely and responsibly. It claims civilizational stewardship. It claims the future.

Is it proving or claiming? It is claiming. Explicitly, repeatedly, and at volume. Sam Altman maintains an extraordinary public presence: blog posts, X posts, podcast appearances, Davos panels, congressional testimony. This level of narrative maintenance is the hallmark of a position that is not yet owned. A genuine Level 4 position does not require constant re-explanation. When you have to keep telling people what you are, the position is still in your mouth, not in their minds.

Part 2: The Hidden Position

Remove all words. Strip away “Intelligence Age,” strip away “benefit humanity,” strip away “safe and beneficial AGI.” Look only at what OpenAI has done, where the money went, what they refused and what they accepted.

What pattern remains?

Start with the costly signals. OpenAI has committed $1.15 trillion in infrastructure spending between 2025 and 2035, more than any private technology company has ever committed. The Stargate project alone is $500 billion for 10 gigawatts of compute capacity. The company is projecting a $14 billion loss for 2026, with cumulative losses of $44 billion through 2028. Deutsche Bank estimates $143 billion in negative cumulative free cash flow through 2029. HSBC projects a $207 billion funding shortfall through 2030. The February 2026 funding round raised $110 billion at an $840 billion valuation, the largest private funding round in history.

These are not the decisions of a safety company. These are not the decisions of a chatbot company. These are the decisions of a compute accumulation company. Every major capital allocation choice points the same direction: whoever controls the most compute wins.

Now look at the refusals. OpenAI refused to remain open-source, despite the name “OpenAI.” It refused to remain a nonprofit, progressively relaxing governance until it converted to a public benefit corporation. It refused to slow down when safety researchers urged caution, choosing speed. It refused to maintain its internal safety team, dissolving the superalignment team in May 2024. It refused to keep “safely” in its mission statement, editing the mission six times in nine years.

And look at what it did not refuse. It did not refuse military contracts, signing with the Pentagon hours after Anthropic declined the deal. It did not refuse to advertise on ChatGPT, despite Altman calling ads a “last resort” just months earlier. It did not refuse to require non-disparagement agreements that forced departing employees to forfeit equity if they criticized the company.

Every refusal and acceptance tells the same story. Scale was chosen over openness. Growth was chosen over governance. Speed was chosen over safety infrastructure. Revenue was chosen over philosophical consistency. The pattern is unambiguous.

So what noun do the decisions prove?

Not safety. Anthropic owns that. Not openness. Meta owns that. Not integration. Google owns that. The decisions prove something closer to “scale,” or “compute dominance,” or “infrastructure.”

The noun customers actually use is simpler: “ChatGPT.” When users talk about what they do, they say “I ChatGPT’d it.” The brand has achieved verb-level penetration, the kind of linguistic ownership that most companies spend decades pursuing. The U.S. Patent and Trademark Office denied the ChatGPT trademark as “merely descriptive,” which is, paradoxically, the ultimate proof of perceptual dominance. The name became the category before it could be protected as a brand.

But ChatGPT is a product noun, not a concept noun. It is “Kleenex,” not “comfort.” It is “Google,” not “organized information.” And this is the core positioning problem. OpenAI owns the default product in a category but does not own a differentiating concept beneath it.

The mental territory map makes this clear. Every major competitor positions relative to ChatGPT: Anthropic is “like ChatGPT but safer and better at coding,” Google Gemini is “like ChatGPT but integrated into your ecosystem,” Perplexity is “like ChatGPT but accurate,” Grok is “like ChatGPT but uncensored.” OpenAI is the reference point. That is enormously valuable. But it is also a vulnerability, because each competitor is differentiating, while OpenAI remains generic. The reference point in a maturing category is the thing everyone defines themselves against, and eventually, the thing everyone moves past.

There is a concept that OpenAI’s decisions actually prove, consistently and at great expense, without ever stating it. It is something like “the intelligence utility,” the company building intelligence infrastructure, the way previous generations built electricity grids and telephone networks. Altman himself has said, “The cost of intelligence should eventually converge to near the cost of electricity,” and referenced “intelligence too cheap to meter.” The $1.15 trillion in compute infrastructure, the pricing segmentation from free to $200, the product portfolio spanning chat, code, agents, search, shopping, and hardware, the Stargate commitment, and the Azure exclusivity: all of it points toward building a utility layer for intelligence.

This is the hidden position. OpenAI is strongly proving a concept it does not explicitly claim, while failing to prove the concept it does explicitly claim. The practiced position, scale, and intelligence infrastructure would survive without marketing. The claimed position, to benefit humanity safely, would not.

Part 3: The Level Diagnostic

The 4-Level Canvas reveals where OpenAI actually operates versus where it thinks it operates. The gap between the two is the source of nearly every strategic tension the company faces.

Level 4, Position (Own the Noun): Weak and Eroding.

OpenAI holds the generic category noun “AI” through ChatGPT, but holding the generic noun is not the same as owning a differentiating concept. It is the difference between owning “search engine” and owning “organize the world’s information.” One is a product category; the other is an idea that no one can take from you.

The evidence of erosion is specific. ChatGPT’s app market share fell from 69.1% in January 2025 to 45.3% in January 2026. One in five AI users now uses multiple apps. Benedict Evans observed that ChatGPT usage is “a mile wide but an inch deep,” with 80% of users sending fewer than 1,000 messages in 2025. In the enterprise, OpenAI’s market share dropped from 50% to 27% since 2023, while Anthropic now holds 54% of the enterprise coding market compared to OpenAI’s 21%. Google Gemini has surpassed 750 million monthly active users. The category OpenAI created is growing, but OpenAI’s share of it is shrinking.

ChatGPT’s daily uninstall rate spiked 200% after the Department of Defense contract announcement. That single data point captures the positioning problem in miniature: the product noun is strong enough that people had it installed, but the company-level concept is weak enough that a single decision can trigger mass departure.

Level 4 is the blocking level. OpenAI is operating at Level 1, claiming Level 4.

Level 1, Frame (Articulate): Muddled.

Level 1 is how you articulate positioning (what I call framing). It is not the position itself; it is the words you use to express it. OpenAI has too many frames competing for attention.

Altman’s frame: “The Intelligence Age,” superintelligence, civilizational transformation. Marketing’s frame: everyday assistant, practical utility, recipes and fitness plans. The leaked strategy memo’s frame: “Your interface to the internet.” The enterprise frame: productivity, time savings, workflow automation. The safety frame: responsible development, guardrails, oversight.

Five frames. Five different answers to “What is OpenAI?” This is not a communication problem. It is a positioning problem. When you cannot decide which frame to use, it usually means you have not yet decided what concept you own. The frames proliferate to fill the vacuum.

Level 2, Execute (Prove with Verbs): Strong but Undifferentiated.

OpenAI’s execution metrics are genuinely impressive. Enterprise users save 40 to 60 minutes per day. The product portfolio is the broadest in the industry: chat, image generation, video, code, agents, search, and shopping. A million enterprise customers, seven to nine million workplace seats. Nineteen times growth in structured enterprise workflows. The API serves hundreds of thousands of developers.

But execution proves a position only when it connects to a concept. The verbs “save,” “generate,” “code,” “research,” and “browse” prove “useful AI tool.” They do not prove “AGI that benefits humanity.” They do not even prove “the intelligence utility.” The verbs are strong, but they are referring to a noun that has not been chosen.

There is also a differentiation problem at Level 2. Model quality is roughly on par with that of five or six other frontier labs, each leapfrogging the others every few weeks. The execution is strong in absolute terms but increasingly undifferentiated in relative terms. Level 2 cannot be the moat when the verbs are commodities.

Level 3, Live (Structural Embedding): Genuinely Strong.

This is where OpenAI is most formidable. The numbers are staggering. Fifty-six percent of employees are in engineering. $1.15 trillion in infrastructure committed. $500 billion for Stargate. Ten gigawatts of compute planned. The Azure exclusivity deal locks in distribution. Enterprise partnerships with Accenture, BCG, and McKinsey embed OpenAI into consulting workflows. The consumer-to-enterprise flywheel, where hundreds of millions of consumer users create familiarity that reduces enterprise rollout friction, is a structural advantage that competitors cannot easily replicate.

If a competitor had OpenAI’s P&L, the $14 billion projected loss for 2026 would shock them. The willingness to burn capital at this rate is itself a structural embedding; it signals commitment that deters competition and attracts further investment. The resources flow overwhelmingly toward compute and scale.

But the structural embedding serves scale and infrastructure, not the stated mission. Safety received a fraction of the investment that compute received. The superalignment team was dissolved. The mission statement was edited. The structural embedding proves “we are building the largest AI infrastructure on Earth.” It does not prove “we are building the safest.”

The Diagnosis.

OpenAI’s Level 3 is genuinely strong, its Level 2 is strong but undifferentiated, its Level 1 is muddled, and its Level 4 is weak and eroding. The sequence is inverted. You cannot skip levels, and OpenAI is trying to claim Level 4 (own a concept) while operating most powerfully at Level 3 (structural embedding). The infrastructure is built, the resources are flowing, but the concept that all of it is supposed to prove has never been chosen.

The position did not choose the distribution. The distribution is looking for a position to justify itself.

Part 4: The Identity & Cognitive Layer

What identity does choosing ChatGPT enable? The answer has changed three times in four years, and the trajectory tells a positioning story that no marketing campaign can override.

In 2022 and early 2023, using ChatGPT signalled “I am an early adopter, technically sophisticated, ahead of the curve.” It was an identity marker for the technology vanguard. In 2024, it shifted to “I am productive, I am keeping up with the times.” By 2025 and into 2026, at 810 million weekly active users, using ChatGPT signals approximately nothing. It is like having a Gmail account. The product went from aspirational to default to background. This is the identity economics of mass adoption: the more people use it, the less it says about any individual who does.

This matters for positioning because strong positions create tribes. Choosing Apple says, “I value design and premium experience.” Choosing Tesla (at least in its early years) said, “I believe in the future and I can afford to prove it.” Choosing Anthropic’s Claude increasingly says, “I am discerning, I care about quality and ethics.” Choosing Grok says, “I reject corporate censorship.” AI model preference is becoming an expression of identity, and tribalism has arrived.

ChatGPT users? They are the mass market. That is not a tribe; it is the absence of one. The product that once signalled early adoption now signals baseline digital competence. OpenAI’s consumer position has become what marketing strategists call “the generic default,” an enormous revenue base with minimal identity glue holding it together. The cognitive architecture underneath this identity shift is where the real danger lives.

Procedural versus Declarative Knowledge.

There is a clean bifurcation in how people relate to OpenAI, and all available evidence converges on the same split.

ChatGPT, as a product, lives in procedural memory. It is System 1, fast, automatic, habitual. Users “just open ChatGPT” without evaluating alternatives. The verb-level brand penetration (“I ChatGPT’d it”) is the linguistic signature of procedural knowledge. When you need to ask an AI something, you reach for ChatGPT the way you reach for Google when you need to search. It is muscle memory, not deliberation.

OpenAI, as an institution, however, has shifted into declarative memory. It is System 2, slow, analytical, contested. People are actively debating whether OpenAI can be trusted. The phrase “benefit all of humanity” now triggers immediate skepticism online. Reddit communities, technology analysts, and former employees routinely respond with “they say X but…” patterns. This is the textbook signature of a position that has moved from automatic acceptance to conscious evaluation.

The split creates a specific and dangerous dynamic. The procedural advantage, the automatic product selection, is strongest among free users and casual consumers, the segment where margins are lowest. The declarative vulnerability, the conscious evaluation and comparison shopping, is strongest among enterprise buyers, the segment where margins are highest. OpenAI’s most habitual users generate the least revenue. Its highest-value customers are the ones most likely to switch.

Defence Mechanisms.

OpenAI’s explicit claims are activating every defence mechanism in the cognitive playbook.

Persuasion knowledge: people recognize “benefit all of humanity” as a marketing frame, not a factual description. Psychological reactance: users resist being told that OpenAI is the responsible steward of civilization’s most powerful technology, especially when the evidence is ambiguous. Manipulative intent inference: the gap between the stated mission and observed behaviour (nonprofit-to-PBC conversion, safety team dissolution, Pentagon contract) creates the perception of self-serving rhetoric.

The “they claim X but…” pattern is the single most reliable indicator that explicit claims have failed. When a significant portion of your audience contradicts your positioning statement, the position is not owned. It is contested.

And here is the structural problem: Altman’s extraordinary public presence, the blog posts, the podcast appearances, the congressional testimony, the Davos panels, is designed to defend the narrative. But the need to defend is the proof that the position is weak. A genuine Level 4 position does not require a CEO to spend a meaningful percentage of his time explaining what the company is. It is simply known.

Hebbian Learning.

Neurons that fire together wire together. Consistent experiences create neural pathways that make positions automatic and unbreakable. For ChatGPT, Hebbian learning is occurring at the product level: the repeated experience of asking a question and getting a useful answer reinforces the association between “AI help” and “ChatGPT.” This is real and durable.

But at the institutional level, a different kind of Hebbian learning is occurring. Each time a user encounters a contradiction between OpenAI’s stated mission and its observed behaviour, a different neural pathway strengthens: the association between “OpenAI” and “say one thing, do another.” The safety team dissolution, the Pentagon contract, the advertising rollout, the PBC conversion: each event is a data point, but collectively they wire a perception that will be extremely difficult to reverse.

The product builds trust through a consistent experience. The institution is eroding trust through inconsistent behaviour. Both processes are Hebbian. Both are compounding. And they point in opposite directions.

Part 5: Success Mechanics

OpenAI has succeeded enormously, but the reasons for its success are not the reasons its leadership believes. Understanding what actually works, even accidentally, is the prerequisite for doing it on purpose.

What Is Actually Working.

First: ChatGPT’s launch in November 2022 was a genuine category creation event. Before ChatGPT, “AI” was an abstraction discussed in research papers and science fiction. After ChatGPT, AI was something your aunt used to plan Thanksgiving dinner. This was not just a product launch; it was a perceptual shift that redefined a technology category for hundreds of millions of people. OpenAI did not just enter the consumer AI market. It created it. Every competitor still positions relative to ChatGPT. That reference-point status is the company’s most durable strategic asset.

Second: the consumer-to-enterprise flywheel is structurally powerful and genuinely difficult to replicate. Hundreds of millions of consumer users create familiarity, reducing enterprise rollout friction. When a CTO evaluates AI tools, the fact that most of their employees already use ChatGPT at home is a distribution advantage that no amount of enterprise sales effort can manufacture. This is not a PLG motion. It is something deeper: mass cultural familiarity functioning as a form of enterprise pre-qualification. The distribution came first, and it is still waiting for a position to justify it.

Third: the pricing architecture, from free to $8 to $20 to $200, is an implicit proof of a concept that OpenAI has never explicitly articulated. The tiered structure says, without words: intelligence is a utility that should be accessible at every price point, with higher tiers unlocking more powerful cognitive labour. The $200 Pro tier is particularly interesting because it explicitly commoditizes System 2 thinking, selling deliberate problem-solving as a premium service. This pricing ladder is a costly signal. It requires substantial structural investment to maintain a free tier at massive scale while also serving $200-per-month power users. The architecture proves something about what OpenAI believes, even if OpenAI has never named the belief.

Fourth: the “Thinking” UI in the o-series models is a small design decision with outsized positioning implications. By making the model’s reasoning process visible, OpenAI implicitly demonstrates that its AI not only generates answers but also works through problems. This is a costly signal, because showing the thinking process exposes the model to scrutiny. It is harder to fake deliberation than to fake confidence. The UI choice says “intelligence” more persuasively than any blog post about the Intelligence Age.

Position-Market Resonance.

There is genuine market pull for what OpenAI’s products actually deliver: cognitive labour reduction. Enterprise users saving 40 to 60 minutes per day is not a vanity metric. It is a real outcome that creates real dependency. The 320x growth in API reasoning token usage suggests that developers are not just experimenting; they are building production systems on OpenAI’s infrastructure. The 19x growth in structured enterprise workflows indicates organizational embedding that goes beyond individual curiosity.

The resonance, though, is between the market and the product, not between the market and the stated position. Customers are pulled toward ChatGPT because it is useful, not because they believe in the Intelligence Age. The pull is real. Its source is misidentified.

IQ/EQ Alignment.

OpenAI’s inside-out capability (IQ) is formidable. Largest committed compute infrastructure. Largest AI user base. Broadest product portfolio. Strongest consumer brand recognition in AI. These are structural advantages.

The outside-in market need (EQ) varies by segment. For consumer productivity, the alignment is strong: people need a smart assistant, and ChatGPT is the smart assistant they reach for. For enterprise coding, the alignment is weak: Anthropic holds 54% of the market, while OpenAI holds 21%. For institutional trust, the alignment is poor and worsening: Anthropic has captured the “responsible AI” mental territory that OpenAI originally held. For integration into existing workflows, the alignment is moderate: Google owns native ecosystem integration, and OpenAI depends on the Azure partnership for enterprise distribution.

The IQ/EQ intersection, the sweet spot where unique capability meets unmet need, is narrower than OpenAI’s scale suggests. The unique capability is not the models themselves (roughly at parity with five or six competitors) but the industrialization of model access: the combination of massive consumer base, infrastructure investment, and pricing architecture that makes intelligence available at scale. The unmet need is not “AGI for humanity” but “reliable cognitive labour on demand.”

The misalignment is not between capability and market. It is between what OpenAI claims the alignment is about and what the alignment actually delivers.

What Is Missing.

The Level 4 gap is the most consequential absence. Without a differentiating concept, all of OpenAI’s structural strength (Level 3), execution metrics (Level 2), and competing frames (Level 1) float without an anchor. The infrastructure is built, but the question “infrastructure for what concept?” remains unanswered.

There is also an inconsistency problem that compounds over time. The chronology is damaging:

2022: Nonprofit lab, safety-first. Consistent with the safety claim.
2023: Commercial acceleration. Beginning to diverge.
2024: Safety team dissolved, NDAs requiring equity forfeiture for criticism, mission statement changed. Clearly diverged.
2025: PBC conversion, Stargate, advertising introduced. Structurally contradicts the original position.
2026: Pentagon contract, hours after Anthropic refused. The original concept abandoned in practice.

Each individual decision can be rationalized. Collectively, they wire a perception. The pattern is not ambiguous. And once Hebbian learning has wired the “say one thing, do another” association, no amount of blog posts can reverse it. Costly signals, not cheap talk, are the only remedy.

The explicit claims are not merely failing to establish positioning. They are actively triggering defence mechanisms that make future positioning harder. Every time Altman says “benefit all of humanity,” and a significant portion of the audience responds with skepticism, the skeptical pathway strengthens. The moment you claim, you weaken, and OpenAI has been claiming loudly and repeatedly for years.

Part 6: The Coaching Moment

The coaching question for OpenAI is not “How should we communicate better?” It is “What should we own?”

These are fundamentally different questions. The first assumes the position exists and needs better expression. The second acknowledges that the position has not been chosen. OpenAI’s strategic challenge is not a Level 1 problem (articulation), a Level 2 problem (execution), or a Level 3 problem (structural embedding). It is a Level 4 problem. The concept has not been named, which means it cannot be owned, which means all the infrastructure, execution, and framing are powerful machinery pointed at nothing specific.

The Reframe.

Here is what OpenAI is doing right for the wrong reasons.

The $1.15 trillion infrastructure commitment is treated as a competitive moat, a way to out-scale rivals. It is that. But it is also the single strongest proof of a concept that OpenAI has never claimed: that intelligence should be infrastructure, a utility layer as fundamental as electricity or telephony. Altman’s own language gestures toward this (“intelligence too cheap to meter”), but it lives in philosophical blog posts, not in how the company presents itself to the world.

The free tier is treated as a growth tactic, a way to build the user base. It is that. But it is also a costly signal proving that access to intelligence should be universal. Maintaining free access to 800 million weekly active users while burning $14 billion per year is a sacrifice that says more than any tagline could.

The pricing ladder from free to $200 is treated as revenue optimization. It is that. But it is also an implicit articulation of a belief: intelligence is a utility that scales in power with investment, accessible to everyone at the base, and transformative for those who invest more. This is the pricing architecture of a utility, not a SaaS product.

The Jony Ive acquisition is treated as a hardware bet. It is that. But it is also the physical manifestation of a belief that intelligence should have its own interface, its own form factor, its own presence in the world, not just a tab in a browser.

Every one of these decisions proves the same concept. None of them were made in service of that concept. They were made for tactical reasons, and they accidentally converge on a strategic position that is more honest, more differentiated, and more defensible than anything OpenAI has ever claimed.

The Level Transition.

The transition that would unlock growth is from Level 3 to Level 4. OpenAI has embedded structurally (Level 3 is genuinely strong). It executes well (Level 2 is strong, if undifferentiated). What it lacks is the concept that all of this proves.

The candidate noun, triangulated across every dimension of this analysis, is some variant of “the intelligence utility.” Not intelligence as a product. Not intelligence as a service. Intelligence as infrastructure. The company that is building the utility layer for intelligence the way previous eras built utility layers for electricity, telephony, and computing.

This noun is consistent with where 70% or more of resources actually flow: compute infrastructure. It is consistent with Altman’s own stated philosophy: “intelligence too cheap to meter.” It is consistent with the product portfolio strategy: chat, code, agents, search, shopping, and hardware, all of which are applications running on top of an intelligence infrastructure. It is consistent with the Stargate and Azure commitments. It is consistent with the pricing segmentation from free to $200. It is consistent with the Jony Ive hardware vision.

And critically, it is a noun that does not need to be claimed. It can be proven. The infrastructure already exists. The spending already demonstrates it. The pricing already articulates it. The product portfolio already embodies it. The concept is already being proven implicitly through consistent decisions; it simply has not been named.

What This Requires.

First, it requires stopping the explicit claims about safety and “benefit all of humanity” as framing anchors. Not abandoning safety practices, but stopping the attempt to own safety as a concept. Anthropic owns safety. That territory is gone. Continuing to claim it activates defence mechanisms and weakens credibility on every other dimension. Let the safety practices be costly signals, real investments that people notice without being told to notice. Let Anthropic have the word. Own the infrastructure.

Second, it requires resolving the frame multiplicity at Level 1. One frame, not five. The company is building an intelligence infrastructure for everyone. Everything else, the marketing practicality, the philosophical grandeur, the platform ambition, the enterprise productivity, should be expressions of that single concept, not competing narratives.

Third, it requires aligning Level 2 execution with the chosen concept. The Five Execution Questions should be answered in terms of the intelligence utility, not in terms of AGI, safety, or assistant functionality. What measurable outcome proves you are building the intelligence utility? How do customers verify it? What is the timeframe? This is where nouns establish territory and verbs prove it. The noun “intelligence utility” needs verbs: “powers X million enterprise workflows,” “reduces cognitive labour cost by Y percent,” “delivers Z compute hours per dollar.”

Fourth, it requires converting the Jony Ive hardware vision into the concept’s physical anchor. An intelligence utility needs a physical interface, a thing in the world that makes the abstraction tangible. The IO device, whatever form it takes, should be the proof object for the concept, the way the iPhone was the proof object for Apple’s “intersection of technology and the liberal arts.”

Fifth, and most difficult, it requires Altman to stop explaining. The CEO’s role in a Level 4 transition is not to argue for the position but to embody it through decisions. Every blog post about the Intelligence Age, every Davos panel about civilizational risk, every congressional testimony about responsible development, these are declarative interventions that push the position into System 2 territory, into the conscious, analytical, skepticism-prone part of people’s minds. The position needs to move in the opposite direction, into System 1, into the automatic, the habitual, the unquestioned. And the only way to get there is to stop talking and let the infrastructure speak.

The Deeper Pattern.

OpenAI’s situation illustrates a pattern that recurs across technology companies at inflection points. The founder’s story about why the company matters (“we are building AGI to benefit humanity”) was true at the founding and useful for fundraising, but it is not the concept that the company’s decisions actually prove. The decisions prove something more specific, more defensible, and more ownable. But the founder cannot see it, because founders confuse the narrative that got them here with the position that will take them forward. They confuse the story with the structure. They confuse Level 1 with Level 4.

Altman genuinely believes that OpenAI is building AGI to benefit humanity. He may even be right in some ultimate sense. But the belief is not the position. The position is what the pattern of decisions proves without anyone having to say it. And what OpenAI’s decisions prove, consistently, expensively, and irreversibly, is that intelligence should be infrastructure.

The tragedy of OpenAI’s positioning is not that it lacks a strong position. It is that the strong position already exists, proven by every major capital allocation decision the company has made, and no one inside the company has named it. They are building the intelligence utility and calling it something else. They are proving one concept and claiming another. The result is a company with extraordinary structural assets, a dominant product noun, a trillion-dollar infrastructure commitment, and a consumer-to-enterprise flywheel, all floating without the conceptual anchor that would make them cohere.

The infrastructure is built. The execution is real. The distribution is massive. The concept is waiting to be named. Not claimed. Named. And then proven, over and over, through decisions that make the name unnecessary, until the market knows what OpenAI is without OpenAI having to say it.

Remove all words. What pattern remains? A company building the utility layer for intelligence. That is the position. Everything else is noise.



Digest — every Tuesday, you can expect practical advice on positioning tailored for business leaders. Written by Paul Syng.

