Hot Takes for the AI Investor


What does it really mean to invest in AI?

We are answering this today, but I want to do so while changing your mindset, because the question itself is already fundamentally flawed.

There’s no such thing as ‘investing in AI.’ If you’re an investor and your framing is ‘I want to invest in AI,’ you’re already fucked, if you’ll excuse my words, because you’re leaving the door open for incumbents to mislead you about what is and isn’t possible with AI. You need to understand what you’re investing in.

Let’s fix that today. This essay breaks into three parts:

  1. We will first explain AI’s true intelligence level using first-principles analysis, shedding light on what AI has proven capable of and on the current limits that incumbents continuously tell you we have surpassed (but that’s a big lie). In fact, I will prove that the term ‘AI’ is still an unachieved milestone. After this part, you will not only be better equipped to make investment decisions, but you will also be able to predict the roadmaps of frontier AI labs.

  2. Then, we will move on to hot takes on the industries that stand to benefit the most from AI disruption, like lifestyle companies (you’ll learn why the gym around the corner from your house stands to benefit massively from AI).

  3. And finally, which companies and industries will suffer massive negative disruption, possibly even ending with entire markets disappearing, with hot examples like higher education and professional services. I’ll also give you intuition as to why everyone is enamored of Palantir.

You won’t find obvious choices here because I’ve purposely avoided generic takes and declaring winners and losers based on capabilities AIs are projected to have.

That’s setting ourselves up for failure.

Instead, we will focus on companies and industries impacted by AI capabilities that have been proven to exist, leaving the predictions based on wet dreams for other newsletters.

Let’s dive in.

Understanding AI’s Value.

In an industry filled with extremely hyped, unprovable claims, AI is made to look much more complex than it really is.

What the ‘I’ in ‘AI’ Should Be.

I’ll just go and say it: AI should not be called AI. Or, put another way, our current ‘AIs’ aren’t ‘AIs,’ as the word ‘Intelligence’ is doing a lot of heavy lifting here. Therefore, one of my goals today is to present AI in a new light, one more grounded in reality, and to make you waterproof against nonsensical, hyped claims.

There are three components to ‘intelligence’; or, put another way, there are three things an AI should be capable of to prove it is intelligent.

These are:

  • Pattern matching, aka intuition, or being capable of finding patterns in data that allow the intelligent being to have ideas with a higher-than-chance probability of being correct

  • Search. Intuition only takes us so far in some instances. Sometimes, you need to search through the different alternatives your intuition suggests.

  • On-the-fly adaptation. In other cases, your experience on the matter is zero, meaning no intuition at all. Search becomes a guessing game, so not the solution either. Thus, you need to observe, experiment, learn, measure, and, crucially, adapt on the fly based on what you have learned to solve new problems.

Importantly, we can also categorize these three based on the complexity of replicating them. Pattern matching is level 1 because you simply need to compress knowledge from data and learn from it. In other words, data and experience build intuition, and our AIs are already sort of good at that.

Search is level 2 because it requires test-time computation, which is to say, extensive computation during the act of solving the problem.

Importantly, search builds on intuition, which serves as the ‘guide’ to the search; if we don’t have good intuition to solve a problem, we are just guessing, creating a combinatorial explosion. Without intuition (level 1 intelligence), there is no level 2 intelligence (search) because it becomes a computationally intractable problem.
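
To make the ‘intuition guides the search’ idea concrete, here is a minimal, hypothetical sketch in Python. The toy planning problem, the prior() scoring function, and the beam width are all made up for illustration; they stand in for learned intuition and test-time search in general, not for how any particular lab implements them.

```python
import itertools

# Toy problem: find a 6-step "plan" whose moves sum to a target value.
# Each step offers 10 possible moves, so blind search faces up to
# 10**6 = 1,000,000 candidate plans in the worst case.
MOVES = list(range(10))   # 10 options per step
DEPTH = 6                 # plan length
TARGET = 33               # hypothetical goal: moves must sum to this value

def is_solution(plan):
    """Verifiable check: does the plan hit the target exactly?"""
    return sum(plan) == TARGET

# --- Level 2 without level 1: brute force, combinatorial explosion ---
def brute_force():
    tried = 0
    for plan in itertools.product(MOVES, repeat=DEPTH):
        tried += 1
        if is_solution(plan):
            return list(plan), tried
    return None, tried

# --- Level 2 guided by level 1: a (hypothetical) prior scores each move ---
def prior(partial_plan, move):
    """Stand-in for intuition: prefer moves that keep the remaining budget
    distributable over the remaining steps."""
    steps_left = DEPTH - len(partial_plan) - 1
    value_left = TARGET - sum(partial_plan) - move
    return -abs(value_left - steps_left * TARGET / DEPTH)  # higher is better

def beam_search(beam_width=3):
    beams, tried = [[]], 0
    for _ in range(DEPTH):
        # Expand every partial plan, score with the prior, keep the best few.
        candidates = [(prior(p, m), p + [m]) for p in beams for m in MOVES]
        tried += len(candidates)
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = [p for _, p in candidates[:beam_width]]
    solutions = [p for p in beams if is_solution(p)]
    return (solutions[0] if solutions else None), tried

if __name__ == "__main__":
    plan_bf, n_bf = brute_force()
    plan_bs, n_bs = beam_search()
    print(f"Brute force:   {plan_bf} after {n_bf:,} candidate plans")
    print(f"Guided search: {plan_bs} after {n_bs:,} candidate plans")
```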

Search also has the added complexity of heuristics, aka the methods for navigating the solution space. Some are computable, like checking whether a solution is clearly wrong (2 + 2 = 5) or correct, but others are not, like choosing which paragraph reads better in your essay.

Adequately searching to solve non-verifiable problems is an extremely hard problem that remains unsolved.

This leads us to a grand conclusion: levels 1 and 2 of intelligence are inherently limited by our experience. Unless you have infinite compute that can be deployed in a timely manner (meaning you can search all possible solutions within a reasonable time), problems where our intuition is poor are unsolvable unless…

We adapt, taking us to level 3.

I love to quote Jean Piaget’s definition to capture this intelligence level: “Intelligence is what you use when you don’t know what to do.” That is, when intuition fails and searching to infinity and beyond is not an option (it never is; this isn’t Toy Story), true intelligence emerges, with your only choice being to test and adapt.

When experience doesn’t help, all you can do is try new things, measure feedback, and adapt your beliefs. This is how our brain works; the process is known as Bayesian inference. We continuously update our beliefs about the world and what will happen next to improve our intuition and search (levels 1 and 2). Simply put, level 3 is what evolves levels 1 and 2 when there’s no data or experience to train the AI on.
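
To give a feel for what that belief-updating loop looks like, here is a minimal sketch of Bayesian updating on a toy problem (a Beta-Bernoulli model). The success rate, the uniform prior, and the number of trials are made-up illustration values.

```python
import random

random.seed(0)

TRUE_SUCCESS_RATE = 0.7   # unknown to the agent; it only observes outcomes

# Prior belief: Beta(1, 1), i.e., "I have no idea" (uniform over [0, 1]).
alpha, beta = 1.0, 1.0

for trial in range(1, 21):
    # Try something new and observe the feedback.
    outcome = random.random() < TRUE_SUCCESS_RATE
    if outcome:
        alpha += 1    # one more observed success
    else:
        beta += 1     # one more observed failure

    # Posterior mean of the Beta distribution: the current best estimate.
    belief = alpha / (alpha + beta)
    print(f"trial {trial:2d}: {'success' if outcome else 'failure'}, "
          f"estimated success rate = {belief:.2f}")
```

Early on, the estimate swings with every outcome; as evidence accumulates, it converges toward the true rate, which is exactly the ‘observe, measure, adapt’ loop described above.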

And I’m telling you all this because current AIs fall quite short of meeting this three-part definition.

The Hard Reality of AI

Claiming an AI is as bright as a PhD, a common statement these days, is borderline embarrassing once you understand AI’s limits; you’re either ignorant or biased.

Our best AIs are great but inefficient data compressors. They have seen all the data in the world and memorized it. We can discuss the quality of the patterns they find, but it’s indisputable that they find patterns in data. LLMs are perfect proof of this.

For instance, DeepSeek v3 and Llama 3.1 405B were trained on 15 trillion tokens, around 60 terabytes (TB) of data if we assume 4 bytes per token. However, they are 671 GB and 810 GB in size, respectively, or roughly 90 and 74 times smaller than the dataset they can replicate reasonably well.
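
As a quick back-of-the-envelope check on those ratios, here is a small sketch; the 4-bytes-per-token assumption and the weight sizes are simply the figures quoted above.

```python
# Back-of-the-envelope check of the compression ratios quoted above.
tokens = 15e12            # 15 trillion training tokens
bytes_per_token = 4       # assumption from the text
dataset_gb = tokens * bytes_per_token / 1e9   # 60,000 GB, i.e., 60 TB

deepseek_v3_gb = 671      # approximate weight size quoted above
llama_405b_gb = 810       # approximate weight size quoted above

print(f"Dataset size:   {dataset_gb / 1000:.0f} TB")
print(f"DeepSeek v3:    {dataset_gb / deepseek_v3_gb:.0f}x smaller than its training data")
print(f"Llama 3.1 405B: {dataset_gb / llama_405b_gb:.0f}x smaller than its training data")
# Prints ~89x and ~74x, matching the "roughly 90 and 74 times" above.
```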

Compression is undoubtedly happening. But how far do LLMs and reasoning models take us in the three-level intelligence categorization? Where does ChatGPT fall? Or Grok-3 Reasoning?

Or, to be clear, how intelligent are these models really?
