The AI Industry's OpenPath Problem: Why Cost Transparency Will Define Enterprise AI in 2026
The Trade Desk just lost two of advertising's biggest holding companies — not because OpenPath was expensive, but because nobody could explain the bill. Enterprise AI is sleepwalking into the exact same trap.
What Happened with The Trade Desk's OpenPath — and Why It Matters Beyond Ad Tech
In early 2026, Dentsu and WPP — two of the world's largest advertising holding companies — quietly pulled out of The Trade Desk's OpenPath program. For those outside the ad tech world, this might not register. But for anyone paying attention to how enterprises adopt new technology infrastructure, it's a case study worth careful attention.
OpenPath launched in 2022 with a compelling premise: cut out the middlemen in programmatic advertising. Give advertisers a direct pipeline to publisher inventory. Cleaner supply chain. Better performance. Less waste.
On paper, it worked. The Trade Desk reported OpenPath growth of "many hundreds of percentage points" in Q3 2025. CEO Jeff Green had called 2025 the year OpenPath would enter the "steep acceleration phase of its S-curved growth."
Then the agencies started leaving.
The reason wasn't price. Multiple agency executives confirmed that OpenPath's costs were likely competitive with traditional supply-side platforms. The reason was something more fundamental: they couldn't explain the costs to their clients.
If the agency couldn't look its clients in the eye and explain, with total confidence, how much they were paying and what exactly they were paying for, then it followed that they'd have to step away.
Another buyer estimated The Trade Desk was applying a 10-15% premium on transactions — but couldn't confirm it, because the fee structure wasn't visible. The product charged publishers a ~5% take rate, but how that rippled through to advertiser pricing was opaque. Floor prices, demand dynamics, and platform fees blurred together into a number that nobody could fully decompose.
OpenPath didn't fail on technology. It failed on trust.
The Enterprise AI Cost Transparency Crisis Nobody Is Talking About
Now look at enterprise AI infrastructure in 2026.
Organizations are running inference workloads across AWS Bedrock, Azure OpenAI, and Google Vertex AI — often simultaneously. Each provider has its own pricing model. Token-based. Per-request. Provisioned throughput commitments. TPM allocations with overage penalties. Cross-region data transfer surcharges that appear on bills two months after the fact.
On top of the cloud providers, many enterprises layer API proxies, aggregation platforms, and internal routing systems — each adding their own margins, caching behaviors, and cost structures that are rarely surfaced to the teams consuming them.
Ask a typical enterprise CTO today: "What did your organization spend on AI inference last month, broken down by model, by team, by cloud provider?"
Most can't answer that question — not because they're negligent, but because the tooling to track AI costs across multiple cloud providers doesn't exist in most organizations.
This is the OpenPath problem, transplanted into AI infrastructure. The issue isn't that AI is expensive (though it can be). The issue is that nobody can explain the bill.
Why Enterprise AI Spend Is Harder to Track Than You Think
Ad tech built its transparency crisis over two decades. Programmatic advertising went from novelty to a $200B+ global market before agencies started demanding supply chain audits. Even now, the industry is still unwinding layers of hidden fees, arbitrage, and opaque intermediaries.
Enterprise AI is compressing that same arc into 24 months.
Consider the trajectory: most enterprises went from "we're experimenting with LLMs" to "we have production AI workloads across multiple clouds" in under a year. AI infrastructure spend is growing faster than virtually any enterprise technology category in history.
Here's what makes multi-cloud AI cost management uniquely difficult:
- Different pricing units — AWS charges per input/output token. Azure charges per 1K tokens with TPM-based provisioning. GCP has per-character and per-token models depending on the API.
- Hidden provisioning costs — Reserved throughput on Azure and AWS locks in spending whether you use it or not. Serverless on GCP avoids this — but at higher per-request rates.
- Cross-region data transfer — Running a model in us-east-1 but serving users in eu-west-1? That data transfer cost won't show up on your AI bill. It'll appear under networking — three line items deep in your AWS invoice.
- Intermediary markups — If you're using an API proxy or gateway layer, what's their take rate? Is it visible? Is it per-request or percentage-based?
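The first problem above — incompatible pricing units — is mechanical and fixable. A minimal sketch of a normalization layer follows; the provider names mirror the list, but every price and unit in the table is an illustrative placeholder, not a real published rate:

```python
# Sketch: normalize heterogeneous provider pricing into a single cost-per-request
# number. All prices below are ILLUSTRATIVE placeholders, not real rates.

PRICING = {
    # provider: (native_unit, input_price, output_price), priced per 1,000 units
    "aws_bedrock": ("tokens", 0.003, 0.015),
    "azure_openai": ("tokens", 0.0025, 0.010),
    "gcp_vertex": ("characters", 0.0005, 0.0015),
}

def request_cost(provider: str, input_size: int, output_size: int) -> float:
    """Cost in USD for one request, with sizes given in the provider's
    native unit (tokens for AWS/Azure, characters for this GCP model)."""
    _unit, in_price, out_price = PRICING[provider]
    return (input_size / 1000) * in_price + (output_size / 1000) * out_price
```

With a table like this in one place, a 2,000-token-in / 500-token-out request can be costed identically regardless of which cloud served it — which is the precondition for any cross-provider comparison.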
The CFO who approved a $500K annual AI budget is about to get a quarterly bill that doesn't add up. And the CTO who can't explain why is in exactly the same position as the media agency that couldn't explain OpenPath premiums to its clients.
We've seen this movie. We know how it ends.
What AI Cost Transparency Actually Looks Like
The ad tech industry learned — painfully, over years — that transparency isn't a feature. It's a requirement. Advertisers now demand supply chain audits, log-level reporting, and full fee disclosure from every intermediary in the programmatic chain.
Enterprise AI governance needs to get there faster. Here's what real cost transparency looks like in practice:
Per-Request Cost Attribution
Not monthly estimates or averaged billing. Every inference request should carry a cost — the actual amount paid to the cloud provider, inclusive of all fees, attributed to the team, project, and use case that generated it. If your engineering team made 50,000 GPT-4o calls last Tuesday, you should know exactly what that cost — not "approximately $2,000-ish."
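In practice, per-request attribution means writing one ledger row per API call, tagged at request time rather than reconstructed from a monthly invoice. A minimal sketch, with hypothetical field names and a simple per-1K-token fee model:

```python
# Sketch of per-request cost attribution: one ledger row per API call,
# tagged with team/project/model at request time. Field names and the
# flat per-1K-token fee model are illustrative assumptions.
from dataclasses import dataclass, field
import time

@dataclass
class InferenceCostRecord:
    team: str
    project: str
    model: str
    provider: str
    input_tokens: int
    output_tokens: int
    cost_usd: float
    timestamp: float = field(default_factory=time.time)

def record_request(ledger, team, project, model, provider,
                   input_tokens, output_tokens,
                   in_price_per_1k, out_price_per_1k):
    """Compute the actual cost of one call and append an attributed row."""
    cost = ((input_tokens / 1000) * in_price_per_1k
            + (output_tokens / 1000) * out_price_per_1k)
    rec = InferenceCostRecord(team, project, model, provider,
                              input_tokens, output_tokens, cost)
    ledger.append(rec)
    return rec
```

The point of the design is that "what did Tuesday's 50,000 calls cost?" becomes a sum over rows, not an estimate.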
Model-Level Spend Breakdowns Across Providers
If you're running GPT-4o on Azure and Gemini on Vertex, you should be able to compare not just sticker price per token, but total cost of ownership — including provisioning overhead, platform fees, and data transfer. Apples-to-apples, across clouds.
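Once per-request rows exist, the apples-to-apples comparison is an aggregation that folds fixed overhead (provisioned throughput, platform fees) into the per-model totals. A sketch under those assumptions — ledger rows as plain dicts, overhead supplied as a lookup:

```python
# Sketch: aggregate per-request spend by (provider, model) and add fixed
# overhead so comparisons reflect total cost, not sticker price per token.
# Ledger row shape and the fixed-cost lookup are illustrative assumptions.
from collections import defaultdict

def spend_by_model(ledger, fixed_monthly_costs):
    """ledger: iterable of dicts with provider/model/cost_usd keys.
    fixed_monthly_costs: {(provider, model): USD} for provisioning,
    platform fees, attributed data transfer, etc."""
    totals = defaultdict(float)
    for row in ledger:
        totals[(row["provider"], row["model"])] += row["cost_usd"]
    for key, fixed in fixed_monthly_costs.items():
        totals[key] += fixed  # overhead counts even with zero requests
    return dict(totals)
```

A model that looks cheaper per token can come out more expensive once an underutilized throughput commitment is attributed to it — which is exactly the comparison this view exists to surface.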
Explainable AI Model Routing Decisions
When a system chooses one model over another for a given request, the reasoning should be auditable. "We routed to Claude because it was 40% cheaper at equivalent quality for this prompt type" is a sentence a CTO should be able to produce on demand. Black-box routing is just OpenPath with a different label.
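The auditable-routing requirement can be made concrete: the router returns the decision and the reason as a pair, so the reason is generated at decision time rather than reverse-engineered later. A minimal sketch with hypothetical candidate fields (`name`, `cost_per_1k`, `quality`):

```python
# Sketch of explainable cost-aware routing: pick the cheapest model that
# clears a quality floor, and emit an auditable reason alongside the choice.
# Candidate fields and the quality-score scale are illustrative assumptions.

def route(candidates, quality_floor):
    eligible = [c for c in candidates if c["quality"] >= quality_floor]
    if not eligible:
        raise ValueError("no model meets the quality floor")
    chosen = min(eligible, key=lambda c: c["cost_per_1k"])
    baseline = max(eligible, key=lambda c: c["cost_per_1k"])
    saving = 1 - chosen["cost_per_1k"] / baseline["cost_per_1k"]
    reason = (f"routed to {chosen['name']}: {saving:.0%} cheaper than "
              f"{baseline['name']} at quality >= {quality_floor}")
    return chosen["name"], reason
```

The returned `reason` string is precisely the sentence the CTO should be able to produce on demand, and logging it per request is what makes the routing layer auditable instead of a black box.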
Pre-Enforcement Budget Governance
Budget limits should be enforced before the API call is made. If a team is about to exceed its allocation, the system should block or reroute — not send a report next month showing that the budget was blown three weeks ago.
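Mechanically, pre-enforcement means the budget check sits in front of the API call, not behind the invoice. A minimal sketch of such a gate; the class name, per-team allocation model, and block-on-overrun policy are illustrative assumptions (a real system might reroute to a cheaper model instead of refusing):

```python
# Sketch of pre-enforcement budget governance: authorize the estimated cost
# of a call BEFORE it is made, and refuse once the allocation is exhausted.
# Names and the simple block-only policy are illustrative assumptions.

class BudgetGate:
    def __init__(self, allocations):
        self.remaining = dict(allocations)  # team -> USD left this period

    def authorize(self, team: str, estimated_cost: float) -> bool:
        left = self.remaining.get(team, 0.0)
        if estimated_cost > left:
            return False  # blocked before the spend happens, not reported after
        self.remaining[team] = left - estimated_cost
        return True
```

The contrast with reactive cost reporting is the return value: a denied `authorize` call costs nothing, while a monthly report of the same overrun arrives after the money is gone.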
Vendor-Neutral Cost Visibility
The audit layer can't be owned by the same company selling you the compute. That's the structural problem OpenPath had: The Trade Desk was simultaneously the platform, the router, and the fee collector. In AI, if your cost visibility depends on a single cloud provider's billing console, you've recreated the same conflict of interest.
5 Lessons from Ad Tech That Enterprise AI Leaders Should Learn Now
OpenPath's trajectory offers a clean lesson for anyone building or buying AI infrastructure today:
1. Opacity scales faster than trust. When you're small, nobody audits the bill. When you're spending millions, everyone does. Build the transparency layer now, before the CFO starts asking questions you can't answer.
2. Competitive pricing doesn't compensate for opaque pricing. Multiple agencies acknowledged that OpenPath was probably fairly priced. They left anyway. Because "probably" isn't a word you can put in a board presentation. The same will be true for AI.
3. Intermediaries that obscure costs will be routed around. The entire history of ad tech supply chain optimization has been one long exercise in removing unnecessary middlemen. AI infrastructure will follow the same pattern.
4. Governance is a Day 1 requirement, not a Day 100 feature. OpenPath shipped product first and governance never. The damage was done before they could course-correct.
5. The winners will be platforms that make costs legible. Not cheaper — legible. The enterprise AI platforms that survive the next wave won't be the ones with the lowest prices. They'll be the ones that let a CTO walk into a board meeting and say, with total confidence: "Here's what we spent, here's why, and here's how I know."
Is Your Organization Ready? 3 Questions Every AI Team Should Answer
The Trade Desk will probably fix OpenPath's transparency problem. They're a $50B company with strong fundamentals. But the damage — the lost trust, the paused budgets, the agency relationships that now need rebuilding — happened because they moved too fast on product and too slow on governance.
Enterprise AI is moving even faster, with even less governance infrastructure in place.
By the time the stakeholders start asking about AI costs, it's already too late to start building the answer. The transparency layer isn't a nice-to-have — it's the foundation everything else rests on.
Every organization running multi-cloud AI workloads today should be asking three questions:
Can we explain our AI costs — by model, by team, by provider — with full confidence? If the answer requires caveats, estimates, or manual spreadsheet reconciliation across three cloud billing consoles, you don't have cost transparency. You have cost guessing.
Are our routing and spending decisions auditable? Every hop in the AI inference chain should be explainable. If you can't trace a request from application to model to bill, you have a supply chain visibility gap.
Are we enforcing budgets proactively, or discovering overruns after the fact? Reactive cost management in AI is like reactive fraud detection in ad tech — by the time you see it, the money is already gone.
If the answer to any of these is "no" or "I'm not sure," you're building your own OpenPath problem. And ad tech has already shown how that story ends.
Don't wait for your OpenPath moment to figure that out.