In partnership with


REALITY CHECKS
Is the AI Dream Even Possible?

Welcome back. Over the last few months, we’ve seen a plethora of ‘mega deals’ between AI Labs and infrastructure and hardware providers that make the word ‘billions’ meaningless.

On the horizon, trillions of dollars in investment, the largest-ever commitment to a single bet: AI.

But today we’re going to assume we have the money, and instead ask: is this actually possible?

For that, we are examining the constraints of the physical world to determine whether these announcements are merely a means to inflate stock values or whether we are truly entering a world of hundreds of gigawatts of AI compute.

I’m pretty sure you’re now predicting I’ll be talking about energy constraints for the rest of the newsletter. And, well, energy is a constraint indeed.

But what if I told you that energy doesn’t come close to being the whole picture, and that the missing part hides opportunity?

Today’s newsletter is as insight-packed as they get.

Specifically, we’ll explain which companies are the real bottleneck of the entire lot. Three of them could have an excellent 2026 in terms of stock price, potentially becoming the best AI trade of the year, as investors, blinded by discussions of money, still fail to realize that the real bottleneck here is the physical world.

Let’s dive in.

The Simplest Way To Create and Launch AI Agents

Imagine if ChatGPT, Zapier, and Webflow all had a baby. That's Lindy.

With Lindy, you can build AI agents and apps in minutes simply by describing what you want in plain English.

→ "Create a booking platform for my business"
→ "Automate my sales outreach"
→ "Create a weekly summary about each employee's performance and send it as an email"

From inbound lead qualification to AI-powered customer support and full-blown apps, Lindy has hundreds of agents that are ready to work for you 24/7/365.

Stop doing repetitive tasks manually. Let Lindy automate workflows, save time, and grow your business.

AI Matters Way Too Much

Despite its lukewarm adoption to this day (except for non-GenAI AI applications, such as search or social media algorithms), AI’s role in the economy has grown to unprecedented importance in recent months, masking the miseries of what is otherwise a clearly recessionary global economy.

We are drowning

With Germany on a two-year recessionary run and poised for a third, and France’s politicians unwilling to make the necessary reforms to save its economy from an inevitable debt explosion and subsequent bailout, Europe appears to be a lost cause. Moreover, China is still seeing massive deflation and stagnant internal consumption.

However, the US isn’t in a sweet spot either. According to estimates, without AI investment, GDP growth in the last quarter would have been flat. And just in the last few days:

  • Food chains like Chipotle or Wendy’s are seeing catastrophic stock performance

  • Consumer staples, a thermometer of how things are on the street, are having a lukewarm year

  • And on Friday alone, the Fed conducted one of its most extensive repo operations since 2020, injecting tens of billions into the banking system (‘injecting’ being a euphemism for money printing), meaning the Fed loaned money to some banks to provide liquidity. And that figure doesn’t account for another $20+ billion in MBS (mortgage-backed securities) operations, pushing the total liquidity injection to roughly $50 billion in a single day.

Therefore, whether you believe the AI buildout is worth it for incumbents is one story; the fact that it is “saving” the economy (at least in the metrics) is another.

Hence, many investors, including yours truly, believe that the market’s capacity to withstand the apparent overextension of valuations hinges on whether Hyperscalers can keep the party going via brute AI spending. (The Buffett indicator, his preferred gauge of how pricey a market is, a simple percentage of the market’s total value versus the US economy, sits at a historic high of 217%.)
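To make the arithmetic behind that 217% reading concrete, here is a minimal sketch. The dollar figures are hypothetical placeholders chosen to reproduce the ratio, not live market data:

```python
# Buffett indicator: total equity market value divided by GDP, as a percentage.
# The input figures below are illustrative placeholders, not live market data.
def buffett_indicator(market_value_usd: float, gdp_usd: float) -> float:
    """Return the market-value-to-GDP ratio as a percentage."""
    return market_value_usd / gdp_usd * 100

# Example: a $65T equity market against a $30T economy gives ~217%,
# the historic-high reading cited above.
print(round(buffett_indicator(65e12, 30e12)))  # → 217
```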

But will they?

Yes, but… let me insist… will they?

The short answer is that, for now, Hyperscalers (Meta, Google, Amazon, Microsoft, and Oracle) are willing to keep the lights on for a little bit longer and are expected to not only continue investing but grow their AI Capex to half a trillion in 2026, as per a Goldman Sachs report:

All but Oracle (which is financing most of this via debt—for a deep dive into Oracle’s big bet, read here) are expected to surpass $100 billion in capital expenditures for 2026.

Just to put these numbers into perspective:

  1. $518 billion is one billion more than Norway’s entire GDP

  2. It’s almost enough to fund Germany’s entire healthcare system for a year.

  3. 60% of that is enough to fund Spain’s entire pension system for a year (one of the largest in the world).

But how is no one panicking? Two data points strengthen the narrative: Oracle’s RPO for the coming years (the amount of booked revenue it has) is almost $500 billion, and Microsoft has another $400 billion; in total, nearly a trillion dollars is booked between the two.

RPO stands for Remaining Performance Obligations, which refers to booked revenue that has been agreed upon but not yet delivered in the form of a service/product.
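In accounting terms, that definition reduces to simple subtraction: contracted value minus revenue already recognized. A toy sketch, with placeholder figures rather than actual filings:

```python
# RPO (Remaining Performance Obligations): revenue that is contractually
# booked but not yet delivered as a service/product. Figures are hypothetical.
def remaining_performance_obligations(total_contract_value: float,
                                      revenue_recognized: float) -> float:
    """Booked-but-undelivered revenue: contracted value minus value already delivered."""
    return total_contract_value - revenue_recognized

# Hypothetical: $500B of signed cloud contracts, $45B already delivered.
print(remaining_performance_obligations(500e9, 45e9) / 1e9)  # → 455.0
```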

Consequently, what makes the “AI bubble” different is that, this time, it’s being financed by the free cash flow (and debt) of the wealthiest corporations on the planet. Except for Oracle, it’s not as if they are borrowing money to do this (although they are increasingly doing so, especially Meta); they actually have the cash.

And even when they issue bonds, these are massively oversubscribed. People trust these companies, period.

However, here’s the thing: willingness and feasibility are distinct concepts. And this difference matters a lot in AI.

What is ‘AI Investing’?

When people discuss these companies and their Chinese counterparts “investing in AI”, what values are we discussing, and what does it really mean?

Here’s a quick rundown of the deals announced just in the last six months in the US, the Middle East, and South Korea:

The deals below account for ~40 GW, but Goldman Sachs estimates the total announced value at 45 GW and $2.5 trillion.

  1. OpenAI “Stargate” US expansion
     Partners: Oracle (OCI)
     Location / Campus: US multi‑site, anchored by Abilene, TX
     Stated chips or GW: +4.5 GW additional Stargate capacity; “>2 million chips” with Abilene included
     Power sourcing / siting: Grid‑served; Oracle footprints; Abilene live on OCI

  2. OpenAI/Oracle/SoftBank “five new US sites”
     Partners: Oracle; SoftBank (incl. SB Energy); Vantage (Midwest)
     Location / Campus: Shackelford County, TX; Doña Ana County, NM; Midwest = Wisconsin (update); Abilene +600 MW expansion; SoftBank sites at Lordstown, OH & Milam County, TX
     Stated chips or GW: Brings Stargate to ~7 GW planned capacity (with Abilene + CoreWeave)
     Power sourcing / siting: Oracle‑developed sites; SB Energy fast‑build powered infra at Milam County

  3. OpenAI “Stargate Michigan”
     Partners: Related Digital (developer); DTE Energy; Oracle (Stargate partner)
     Location / Campus: Saline Township, Michigan
     Stated chips or GW: >1 GW campus (adds to >8 GW total planned)
     Power sourcing / siting: DTE to serve site “using existing excess transmission capacity”; closed‑loop cooling

  4. OpenAI–NVIDIA strategic LOI
     Partners: NVIDIA
     Location / Campus: Multi‑site (OpenAI data centers)
     Stated chips or GW: ≥10 GW NVIDIA systems; NVIDIA intends to invest “up to $100 B” as GW deploy
     Power sourcing / siting: Not specified (implies dedicated DC + power as part of deployment)

  5. OpenAI–AMD 6 GW GPU deal
     Partners: AMD
     Location / Campus: Multi‑site
     Stated chips or GW: 6 GW AMD Instinct over multiple generations; initial 1 GW MI450 in 2H 2026
     Power sourcing / siting: Node: TSMC N2 for MI450 compute die (per CEO interview); packaging not disclosed

  6. OpenAI–Broadcom 10 GW custom accelerators
     Partners: Broadcom
     Location / Campus: Multi‑site
     Stated chips or GW: 10 GW OpenAI‑designed accelerators + Broadcom networking
     Power sourcing / siting: Node/packaging not disclosed (Broadcom customary at TSMC advanced nodes)

  7. Anthropic TPU expansion
     Partners: Google Cloud
     Location / Campus: Google‑operated TPU sites (US‑led)
     Stated chips or GW: Access to up to 1 million TPUs; “well over a gigawatt” online in 2026
     Power sourcing / siting: Google‑sited, Google cooling/network stack

  8. AWS capacity disclosure
     Partners: —
     Location / Campus: Multi‑region
     Stated chips or GW: +3.8 GW added in the past 12 months; ≥1 GW more expected in Q4
     Power sourcing / siting: Power sources not itemized; mix of grid + AWS programs

  9. Meta – Hyperion campus financing JV
     Partners: Blue Owl Capital (prior reporting also flagged PIMCO debt)
     Location / Campus: Richland Parish, Louisiana
     Stated chips or GW: “Gigawatt‑scale”; Reuters reports >2 GW planned
     Power sourcing / siting: Entergy Louisiana approved three gas turbines and a $550 M transmission line to serve Hyperion

  10. Meta – Prometheus AI cluster
      Partners: —
      Location / Campus: New Albany, Ohio (Meta campus)
      Stated chips or GW: Next AI cluster “will be a 1‑gigawatt cluster”
      Power sourcing / siting: Not specified in post (separate Louisiana plant covers Hyperion)

  11. Microsoft nuclear‑backed PPA → Three Mile Island restart
      Partners: Constellation Energy (Crane Clean Energy Center)
      Location / Campus: Pennsylvania (TMI‑1)
      Stated chips or GW: 837 MW nuclear back in service
      Power sourcing / siting: 20‑year PPA with Microsoft to supply DC load; PJM fast‑track interconnection

  12. Stargate UAE initial tranche
      Partners: G42; Oracle; NVIDIA; SoftBank; Khazna
      Location / Campus: Abu Dhabi, UAE
      Stated chips or GW: 1 GW cluster with 200 MW in 2026; broader UAE plan to multi‑GW
      Power sourcing / siting: Mixed sources (nuclear/solar/gas per UAE coverage); US–UAE export/licensing framework

  13. NEOM–DataVolt “AI factory”
      Partners: DataVolt
      Location / Campus: Oxagon, Saudi Arabia
      Stated chips or GW: 1.5 GW campus; $5 B phase‑1
      Power sourcing / siting: NEOM claims “net‑zero” factory; water/gas not detailed in release

  14. AWS + SK Group Korea hub
      Partners: SK Group
      Location / Campus: Ulsan, South Korea
      Stated chips or GW: Initial 100 MW, scalable to 1 GW by 2029
      Power sourcing / siting: Korean grid with local partners

Put simply, that’s a lot of “I want to do this, but I’m not going to tell you yet how I will.”

Which leads us to the question: What does it mean to ‘invest in AI’?

The AI supply chain

While I would need three additional newsletters to explain the AI supply chain in detail, and I would still fall short, here’s a quick rundown of the different phases:

  1. Mining and refining of rare earths, used to build magnets that are crucial for most of the components below. Rare earths aren’t rare, but the refining is, and it’s heavily concentrated in China.

  2. Chip design. Fabless companies like NVIDIA, AMD, or Google design the compute chips, while Micron, SK Hynix, and Samsung create the memory chips (High Bandwidth Memory (HBM) technology, very important as you’ll see later).

  3. Fab building. The fabs where chips are manufactured and packaged are among the most advanced and expensive facilities on the planet, with the latest capital costs exceeding $20 billion per fab. Rock’s Law, also known as Moore’s second law, argues that fab costs double every four years (spoiler: they do). This also includes some of the most expensive SKUs in the whole world, with examples like ASML’s EUV lithography machines (hundreds of millions of dollars each).

  4. Chip manufacturing. Foundries like TSMC manufacture the compute chips and package them with HBM chips in a process called CoWoS-L, which is used for the latest GPUs, such as Blackwell. TSMC is the sole provider in this part of the supply chain for most, if not all, advanced chip manufacturing and packaging.

  5. System building. OEMs (Original Equipment Manufacturers), such as NVIDIA or Apple, and ODMs (Original Design Manufacturers), like Foxconn or Quanta, collaborate to build the systems where the packaged chips will reside, including cooling, power, and networking equipment, among others, providing the actual accelerator rack.

  6. Data center building. These systems require a place to be stored and run, necessitating the actual building and, more importantly, the power infrastructure. Large data centers often require their own transformer substation to minimize step-down costs (we maintain high voltages to minimize current and thus energy transmission losses) or even have their power source on-site.
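The step-down logic in point 6 can be sketched numerically: for a fixed power delivery, resistive line losses scale with the square of the current (P = V·I, so P_loss = I²R), which is why transmission happens at high voltage. A toy calculation, where the line resistance and voltage classes are illustrative assumptions rather than figures from any specific site:

```python
# Toy model of resistive transmission losses for a fixed delivered power.
# Since P = V * I, the same power at higher voltage means lower current,
# and losses scale as P_loss = I^2 * R. All numbers are illustrative only.
def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    current = power_w / voltage_v          # I = P / V
    return current ** 2 * resistance_ohm   # P_loss = I^2 * R

POWER = 1e9    # a 1 GW data center load
R_LINE = 5.0   # assumed total line resistance in ohms

for kv in (69, 345, 765):  # common transmission voltage classes
    loss = line_loss_watts(POWER, kv * 1e3, R_LINE)
    print(f"{kv} kV: {loss / 1e6:,.1f} MW lost")
```

Under these assumptions the loss drops from roughly a gigawatt at 69 kV to under 10 MW at 765 kV, which is the entire rationale for on-site substations.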

So when discussing ‘AI CapEx’ beyond the money, this is the physical world of ‘AI investing’, and here’s where the real constraints arise.

Therefore, we will examine the physical constraints from two perspectives: energy and manufacturing. The first one is what everyone, including incumbents, tirelessly points to, but the latter may be even more bottlenecked.

The Energy Problem

The main energy issue is that we aren’t adding anywhere near as much power as the demand projections implied by these announcements require.

The Gigascale

Data centers these days are measured in gigawatts. That means a single site draws more than 1 billion watts (1 GW) of continuous power. But how much power is that in terms of AI throughput? Let’s answer this by computing how long one of these data centers would take to train GPT-4.

Most are being built around NVIDIA’s GB300 NVL 72 servers, a rack-scale AI server that houses 72 GPUs (or 144 GPU chiplets because each Blackwell GPU has two GPU dies) and requires between 120 and 142 kW of power to work, depending on what OEM (the infrastructure companies that build the server around NVIDIA’s chips) you ask.

Oracle cites “over 120 kW”, HPE ~132 kW, and Schneider Electric 142 kW.

Thus, a 1-gigawatt data center can house around 7,600 of these servers, each delivering 720 PFLOPs of compute throughput at FP8 precision. That puts the total compute power of this ‘AI factory’ at 5,472,000 PFLOPs, or 5.5 sextillion (5.5×10²¹) operations per second.

To put that number into perspective, assuming no issues (reality is a little bit more complex), this cluster would take about one hour to train GPT-4 (2×10²⁵ total training FLOPs divided by 5.5×10²¹ FLOP/s ≈ 3,636 seconds).

Yes, one hour to train what was the state-of-the-art model back in 2023 (and remained SOTA for more than a year afterward), a model that originally required more than three months of training.

Reality is not so simple (GPUs fail, communication overheads add up, and so on), but the number is still otherworldly. That said, it must be noted that new models can’t be directly compared, because those were dense training runs (fully self-supervised). In contrast, training is now typically RL-based, which requires orders of magnitude more compute per unit of progress; as we always say, RL is a sparse-signal training method.
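The back-of-the-envelope numbers above can be reproduced in a few lines, idealized and ignoring the failures and overheads just mentioned. The rack power, per-rack throughput, and GPT-4 compute figures are the estimates from the text:

```python
# Back-of-the-envelope: how long would an idealized 1 GW cluster of
# GB300 NVL72 racks take to train GPT-4? Figures are the text's estimates.
RACK_POWER_W = 132e3     # ~132 kW per NVL72 rack (OEM quotes range 120-142 kW)
RACK_FP8_FLOPS = 720e15  # 720 PFLOP/s per rack at FP8
SITE_POWER_W = 1e9       # a 1 GW site
GPT4_TRAIN_FLOP = 2e25   # estimated total training compute for GPT-4

racks = SITE_POWER_W // RACK_POWER_W     # how many racks fit in the power budget
site_flops = racks * RACK_FP8_FLOPS      # aggregate FLOP/s of the 'AI factory'
seconds = GPT4_TRAIN_FLOP / site_flops   # ideal training time, no failures

print(f"{racks:.0f} racks, {site_flops:.2e} FLOP/s, {seconds / 3600:.2f} h")
```

At the 132 kW midpoint this yields ~7,575 racks and a training time of just over an hour, matching the one-hour figure above.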

Now, picture this immense compute throughput and then fathom that OpenAI alone wants thirty times this by the end of the decade.

But the biggest issue here is not the AI racks themselves; it’s all that comes before, which really puts into question whether this is actually feasible.


Subscribe to Full Premium package to read the rest.

Become a paying subscriber of Full Premium package to get access to this post and other subscriber-only content.
