Pulse / Control Plane

From token burn to business output.

Enterprise AI bills are tripling year over year. Finance sees the burn but not what it bought. Pulse is the control plane that closes that gap. Every team gets a budget, every agent gets a leash, and every dollar maps to the work it produced.

Pulse Control Plane v0.1.0
LIVE
Live Activity
14:02:11 · Engineering quota: 17.6M / 20M tokens
14:02:14 · Prompt rerouted Opus → Haiku (saved $0.42)
14:02:18 · Alert: agent ops-research-17 exceeding burn rate — paused
14:02:22 · Marketing → Engineering: 1.6M tokens reallocated
14:02:27 · PR #4218 merged · attributed to engineering quota
Output This Quarter
1,240
PRs merged
14,000
Tickets closed
18
Projects shipped
2 alerts · 1 rebalance pending
Burn rate: on track
Uber's 2026 AI budget didn't make it to summer.

Four months into the year, Uber's CTO admitted the company had blown through its annual AI budget. Engineers had embraced Claude Code and Cursor faster than finance could model it. R&D landed at $3.4B for the year. The budget didn't bend. It snapped.

Uber isn't alone. In nine workdays, one Disney employee asked Claude 460,000 questions. Not a person typing. An agent, running while she slept. Meta hit 60 trillion tokens in a month. Visa, 1.9 trillion. Indeed expects this year's AI bill to come in at four times last year's. The conversation is happening at the board now.

The CFO didn't approve any of it. The CFO got the invoice and couldn't tell you what it bought.

That's the part nobody's solving. Helicone shows usage. Langfuse shows traces. Vantage shows cloud cost. Internally, companies build token leaderboards ranking employees by consumption. Meta made one, then quietly took it down. Because dashboards aren't governance, and leaderboards reward the wrong behavior. The engineer who burned the most tokens isn't the most valuable. She might be the one who left an agent running all weekend.

None of it answers the question that matters: what did we get for the spend?

AI Magazine · Reference
$3.4B
R&D, AI named driver

Why Uber Has Already Burned Through Its AI Budget

Bill shock turning into category-defining evidence.

Business Insider · Reference
460,000
Claude queries / 9 workdays

Disney Built an AI Adoption Dashboard Because the Burn Became Impossible to Ignore

Visibility became survival, not a nice-to-have.

Business Insider · Reference
4x
Projected YoY AI spend

Indeed Expects AI Costs to Run 4x Higher Than Last Year

The same problem, board-level: usage escaped planning.

Spend is visible. Output isn't. That's the gap.
Cloud had FinOps. Snowflake had the warehouse. AI has nothing yet.

Walk through any AI-heavy company today. Engineering, marketing, support, ops: every team and every agent drawing from the same providers, draining the same shared bill, with no per-team budget, no agent leash, no accountability for what got built. The invoice shows up at month-end with no story attached.

This is what cloud looked like in 2015. Wild and exploding faster than the people paying for it could understand. Every great enterprise category started this way: a control gap widening in public, and one company building the system that closes it.

That system isn't another dashboard. It's a control plane that makes AI accountable.

Pulse is the operating layer for enterprise AI work.

Q1 starts. Pulse gives engineering a budget: 20M tokens across Claude, GPT, and your internal models, tied to the projects it's funding. Marketing gets 5M. Support gets 8M. Every agent gets a cap.

Then Pulse goes to work. When an agent's burn starts compounding, it gets interrupted before the spike hits finance. When a prompt only needs Haiku but gets sent to Opus, it gets rerouted. When a workflow touches customer data, only approved providers can serve it. Two months in, marketing has used half its quota and engineering has blown through its own. Pulse moves the slack across, with audit and approval, no Slack threads, no credit cards.

At quarter-end, when the CFO asks what the spend produced, Pulse answers in plain English. 1,240 pull requests merged. 14,000 support tickets closed. 18 priority projects shipped. All tied back to the $42K that paid for them.

Tokens stop being a sunk cost. They become a resource that flows toward output.
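Pulse's actual policy format isn't public; as a rough sketch, the quarter setup described above could be expressed as declarative allocation data with a simple quota check on top. Team names and token figures come from the walkthrough; everything else is illustrative.

```python
# Illustrative sketch only: a hypothetical quarter allocation, not Pulse's real schema.
quarter_policy = {
    "engineering": {"quota_tokens": 20_000_000,
                    "providers": ["claude", "gpt", "internal"]},
    "marketing":   {"quota_tokens": 5_000_000},
    "support":     {"quota_tokens": 8_000_000},
}

def within_quota(team: str, tokens_used: int) -> bool:
    """True while a team is still inside its quarterly allocation."""
    return tokens_used <= quarter_policy[team]["quota_tokens"]

# Engineering at 17.6M of its 20M allocation, as in the live feed above.
print(within_quota("engineering", 17_600_000))  # True
```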

Q1 2026 / LIVE

Token spend, mapped to the work it produced.

Quarter spend
$42,000
of $60K budget

Allocations by team

  • Engineering: $24,800
  • Support: $8,200
  • Marketing: $3,100
  • Ops Agents: $5,900

Output produced

  • Pull requests merged: 1,240
  • Support tickets closed: 14,000
  • Priority projects shipped: 18
Cost per shipped project: $2,333
Updated 2s ago
CFO-ready
01 / ALLOCATE

Budgets for every team, project, and agent. Tied to real work, not guesses.

02 / GOVERN

Runaway agents stopped in real time. Sensitive data routed to approved providers only.

03 / OPTIMIZE

Every prompt sized to the cheapest model that meets the bar. Routing, caching, and fallback handled automatically.

04 / PROVE

Spend mapped to merged code, closed tickets, shipped projects. CFO-ready.
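The OPTIMIZE step ("the cheapest model that meets the bar") can be sketched as a cost-aware router. The capability scores and per-million-token prices below are made-up placeholders, and Pulse's real scoring is not public; the point is only the selection rule: filter to models that clear the bar, then take the cheapest.

```python
# Hypothetical cost-aware router. Scores and prices are illustrative, not real quotes.
MODELS = [
    # (name, capability score 0..1, $ per 1M input tokens)
    ("haiku",  0.6,  0.80),
    ("sonnet", 0.8,  3.00),
    ("opus",   1.0, 15.00),
]

def route(required_capability: float) -> str:
    """Cheapest model whose capability meets the prompt's required bar."""
    candidates = [m for m in MODELS if m[1] >= required_capability]
    return min(candidates, key=lambda m: m[2])[0]

print(route(0.5))  # haiku: a simple prompt never reaches Opus pricing
print(route(0.9))  # opus: only it clears the bar
```

The same rule explains the feed entry above: a prompt bound for Opus that only needed Haiku-level capability gets rerouted downward and the delta is banked as savings.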

Runaway agent killswitch
Live
Alert: ops-research-17 burn rate +340%
Agent paused at threshold
$8,400 spike prevented

Pulse cuts the spike before finance ever sees it.
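One plausible shape for the killswitch, assuming a rolling burn-rate meter per agent and a pause threshold expressed as a multiple of the agent's baseline (the threshold, field names, and numbers are assumptions, not Pulse internals):

```python
from dataclasses import dataclass

@dataclass
class AgentMeter:
    """Rolling burn reading for one agent (fields are illustrative)."""
    agent_id: str
    baseline_tokens_per_min: float  # expected burn, set at allocation time
    observed_tokens_per_min: float  # rolling average from live metering

def should_pause(meter: AgentMeter, threshold_multiple: float = 3.0) -> bool:
    """Pause when observed burn exceeds threshold_multiple times baseline."""
    return meter.observed_tokens_per_min > threshold_multiple * meter.baseline_tokens_per_min

# ops-research-17 from the card above: +340% means 4.4x its baseline.
runaway = AgentMeter("ops-research-17", 10_000, 44_000)
print(should_pause(runaway))  # True: paused before the spike compounds
```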

Dynamic budget reallocation
Live
Marketing: 2.4M idle
Engineering: +1.6M reallocated

Idle capacity flows to where the work is, with audit attached.
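The rebalance shown in the card reduces to a simple invariant, sketched here under the assumption that a transfer never exceeds either the donor's idle balance or the recipient's deficit:

```python
def reallocate(donor_idle_tokens: int, recipient_deficit_tokens: int) -> int:
    """Tokens to move: never more than the donor has idle or the recipient needs."""
    return min(donor_idle_tokens, recipient_deficit_tokens)

# Marketing is sitting on 2.4M idle tokens; engineering is 1.6M over plan.
moved = reallocate(2_400_000, 1_600_000)
print(f"{moved / 1e6:.1f}M tokens reallocated")  # 1.6M tokens reallocated
```

Marketing keeps its remaining 0.8M; the transfer itself is the audited event.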

Our wedge: AI-heavy mid-market and enterprise teams running two or more LLM providers. The first buyer is the platform or AI lead who already got the “what is this bill?” Slack from finance and didn't have a real answer.

The first wave of enterprise AI was adoption. The second is operations.

Adoption is broad. FinOps has woken up. Model spend has crossed from innovation experiment to a board-level line item with no governance underneath it.

88%

of enterprises use AI in at least one function

McKinsey, 2025
63%

of FinOps teams now manage AI spend, up from 31%

FinOps Foundation
$37B

enterprise GenAI spend in 2025, up 3.2x year over year

Menlo Ventures

The wedge is concentrated. The full GenAI market is $644B. The model and token layer Pulse sits on is already $14.2B and growing fast. A 1–3% control-plane fee on that layer alone points to roughly $142M–$426M of ARR available today. That's before agents multiply consumption, and before every Fortune 500 hires a Head of AI Operations who needs Pulse to do the job.

The bigger prize isn't taking 1% of model spend. It's becoming the system of record for enterprise AI: the place every CFO, CIO, and AI lead goes to answer who used what, for what work, at what cost, and what it produced.

We monetize through platform fees, governed-spend percentages, and an enterprise tier.

We've seen this from the silicon up.
Usman Zia

Co-founder

Senior Engineer, AMD. Silicon design for AI chips.

Gurinder Garcha

Co-founder

Staff Engineer, AMD. Silicon design for AI chips.

Most people building this category will come from finance, FinOps, or cloud cost tooling. We're coming from a layer below all of them. We design the chips AI runs on.

We see the unit economics of every token before it's ever priced as an API call: what it costs in transistors, heat, joules, silicon area. We know which workloads are wasteful at the metal, which are expensive by accident, and which models companies should be routing toward and away from.

The token economy was always going to break under enterprise load. We happened to be sitting upstream of where it broke first. We got tired of watching companies downstream try to solve it with dashboards.

Pulse / Investor Access

Pulse turns AI spend into governed business output. We're opening early investor conversations with partners who understand enterprise infrastructure and the agentic AI shift.

Request Investor Brief

usman@waferzero.com