What Makes a Company a Winner in the AI era


Turn AI into Your Income Engine
Ready to transform artificial intelligence from a buzzword into your personal revenue generator?
HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.
Inside you'll discover:
A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential
Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background
Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve
Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.

FUTURE
The Secrets of an AI Winner
I’ve been advocating for a future where AI enables you to “just do things”. But instead of me pontificating to you all, I’ve finally decided to be totally clear on what I think are the fundamentals of a great business in the AI era.
In particular, we will:
Discuss why concerns about failed AI implementations are mostly skill issues, through a research-driven analysis covering accuracy, notable recent improvements at the frontier (e.g., overconfidence correction), and the mindset shift around uncertainty required to be successful with AI.
Cover how to approach product strategy, including the key differences between SaaS (software 2.0) vs. agent tooling (software 3.0), and management consulting (consulting 2.0) vs. forward-deployed engineering (consulting 3.0), and more.
Examine what will be the defining moat for companies in the AI era, as well as pricing strategies, discussing topics like statistical coverage, the absurdity of AI subscriptions, and so forth.
We will even use a fascinating application of statistics and cryptography theory, zero-knowledge proofs, to prove how AIs being unpredictable doesn’t have to be a bad thing, and how you’re probably overstating the concerns regarding the uncertainty of AI predictions.
This isn’t just about validating my ideas, but hopefully serving as a starting point for you to think about companies in the AI era and make better investment decisions. By the end, you should be able to determine whether a company will succeed or not in an instant.
Let’s dive in.
It’s a Skill Issue
With studies like the one we discussed on Thursday by MIT, which claimed 95% of AI implementations still fail, companies that are too comfortable for their own good and unwilling to confront the inevitable change have the perfect excuse to delay AI adoption just a little longer.
But if you read Thursday’s post, you already know it’s not an AI issue, it’s a skill issue. AI is already good enough to transform most companies completely, but such companies are mostly clueless.
I have my fair share of interactions with executives. You never get used to how poor, on average, their knowledge of AI is, despite being the key decision makers.
But let me prove to you how ready AIs really are.
The Writing on the Wall
You can’t be blamed for thinking AI isn’t up for the task in most cases. It’s believed to be error-prone and generally inferior to humans. Hallucinations, untrustworthiness due to non-determinism, and other buzzwords will be thrown at you, even at the slightest suggestion of using AI.
The truth is, most of that is just people parroting “known knowns” about AI without actually verifying whether they remain true. And, I kid you not, they are mostly not true anymore.
In a recent study in New Zealand, a set of state-of-the-art (at the time) models, like GPT-4o and DeepSeek R1, was tested against humans, both non-lawyers and lawyers, junior and experienced, to see who was more accurate at invoice review: verifying whether an invoice is contractually valid, which matters both for fiscal reasons (government scrutiny) and financial ones (paying wrong invoices, late payments, and so on).
The results? An unequivocal superiority by the AI models, with every one of them outperforming humans in all areas:
Cost: negligible for AIs, while lawyers can cost more than $4 per invoice.
Speed: between 3 and 22 seconds per invoice for the AIs, versus up to 316 seconds per invoice for humans.
Accuracy: up to 92% for the AIs vs. humans’ 72% ceiling.

A flawless victory for AI on all fronts. But vision isn’t the only area where AIs are extraordinary, as we have also almost eliminated the hallucination risk.
Out of the many good things one can say about GPT-5, probably the most remarkable one, besides GPT-5 Pro’s incredible coding capabilities and even alleged capacity to prove new mathematics (keyword here ‘alleged’), is that it has basically reduced factual hallucinations to almost zero.
Regarding the maths controversy, I haven’t read about the topic enough to know whether that researcher is full of it or not (and I’m not an expert mathematician), but it’s safe to say not everyone was convinced; this UC Berkeley mathematician wasn’t particularly impressed with the claim.
But going back to the topic of hallucinations, it seems that models don’t get their facts wrong anymore.
It’s important to clarify that these models aren’t truth-seeking, in the sense that they really cannot discern truth from falsehood (at least there isn’t enough proof to sustain that claim). Instead, what most likely drives their “factual correctness” is statistical likelihood, meaning they are now becoming very good at not making incorrect predictions compared to their training data and/or search results obtained from surfing the Internet in real-time.
In other words, “not making factual hallucinations” means the model does not claim things that are different from what they have been shown during training or in search results. But that’s the key caveat: that’s their source of truth, so they can only be as truthful as the truth level of their training data and web searches.
As shown in the GPT-5 system card, the model exhibits very low hallucination rates, successfully driving them down despite the growing hallucination tendencies observed in reasoning models.

This is by no means a guarantee of ‘no hallucinations’, but low enough to not warrant general distrust of a model’s reliability.
But besides hallucinations, frontier models are starting to show another desirable trait: becoming less overconfident about what they know or don’t know.
Overconfidence and Memory
Despite my initial skepticism, I’m starting to believe GPT-5 was actually a great release. The more I use it, the more I’m convinced it’s the best model around. A little slow for my liking, but there are excellent things to say about it:
It goes straight to the point, is not particularly warm, and just answers the damn question. This is exactly what I’m looking for in a model, but I can understand why people who used ChatGPT as a comforter are not happy about it (OpenAI is trying to correct this, but in my opinion, it shouldn’t).
They have considerably improved the memory feature. You can tell the model is getting used to some of my traits as a user, and has a pretty solid context-capturing system: it seems to know quite well what things are useful to remember and will condition on them before answering seemingly unrelated questions, leading to higher overall answer quality.
It’s not an overconfident know-it-all.
I want to double-click on the third point, because I believe this is crucial for companies. GPT-5 is now much more aware of its mistakes. For instance, in a recent conversation I had with it, it suddenly added the following snippet:

The funny thing is that I hadn’t even noticed it had acknowledged the mistake; it realized autonomously (whether they’ve added a self-reflection heuristic or the model caught it after revisiting the computation in a later response is unclear).
But another user pointed to an even more striking response:
GPT-5 says 'I don't know'.
Love this, thank you.
— Kol Tregaskes (@koltregaskes)
4:05 PM • Aug 18, 2025
The model actually “realized” it wasn’t confident enough to answer, and instead of offering something plausible that would surely be wrong, it just goes ahead and tells you it doesn’t know.
I can’t overstate how valuable this is.
Sure, you have just “wasted” 34 seconds of your life, but you have gained years of trust in the model.
Yes, the model’s truth or knowledge is still heavily dependent on its human trainers, but having the capacity to acknowledge confidence is a great asset for companies that, above all else, value trust as a key metric.
We don’t know how they did this; we can only guess. But a recent paper suggests a way to do this by teaching models to score their own confidence during training, akin to a human’s capacity to internally assess how likely their response is to be correct.
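I can’t verify the paper’s exact recipe here, so take the following as a minimal, hypothetical sketch of the general idea rather than the authors’ method: use the model’s own token probabilities as a crude confidence signal and abstain below a threshold. The function, numbers, and threshold are all illustrative.

```python
import math

def answer_with_abstention(token_logprobs, threshold=0.70):
    """
    Crude confidence gate: the geometric mean of the model's own token
    probabilities serves as a proxy for confidence; below the threshold,
    abstain instead of guessing.

    token_logprobs: log-probability of each generated token
    threshold:      illustrative confidence cutoff (not from the paper)
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    confidence = math.exp(avg_logprob)
    if confidence < threshold:
        return f"I don't know (confidence ~{confidence:.0%})."
    return f"Answering (confidence ~{confidence:.0%})."

# A confident answer: every generated token was highly likely
print(answer_with_abstention([-0.05, -0.10, -0.02]))
# A shaky answer: several low-probability tokens drag confidence down
print(answer_with_abstention([-1.2, -0.9, -1.5]))
```

The paper’s approach reportedly bakes this into training rather than bolting it on at inference time, but the payoff is the same: a model that says “I don’t know” instead of bluffing.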
And don’t get me started with cost, as there’s overwhelming evidence to suggest AIs are much cheaper, on average, than humans, especially when humans are used for repetitive tasks, which they were never meant for.
But the biggest mindshift companies have to do goes to the core of software, because AI isn’t “normal” software.
Probabilistic Software Requires a Mindset Shift
No matter how much we reduce hallucinations or how good models become at assessing their own confidence, there are theoretical limits: these numbers will never be zero, meaning the chance that the model behaves unexpectedly in any given process will never be zero.
This is a fundamental shift away from traditional software, which is deterministic; it behaves just as expected. Instead, AI models are stochastic, meaning they have a non-zero randomness in their behavior (which, by the way, is a desired trait that incentivizes creativity).
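To make that contrast concrete, here’s a minimal sketch in plain Python (toy scores, not any real model): traditional software maps the same input to the same output, while a language model produces a probability distribution over next tokens and samples from it, so the same prompt can yield different answers.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax over the model's raw scores, then sample.
    Identical inputs can produce different outputs; that randomness
    is the stochasticity described above. Lower temperatures make the
    distribution sharper (more deterministic), higher ones spread it out."""
    scaled = [score / temperature for score in logits.values()]
    top = max(scaled)
    weights = [math.exp(s - top) for s in scaled]  # numerically stable, unnormalized softmax
    return rng.choices(list(logits.keys()), weights=weights, k=1)[0]

# Illustrative next-token scores for the prompt "The invoice is ..."
logits = {"valid": 2.1, "invalid": 1.7, "missing": 0.3}

# Same prompt, same "model", three runs: the answer can differ
print([sample_next_token(logits) for _ in range(3)])
```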
In a nutshell, what I’m implying is that business leaders have to realize that adopting AIs means adopting a technology that, just like humans, will make unpredictable mistakes.
This may sound natural to you, someone who’s probably way ahead of the curve compared to the rest of mortals, but I can guarantee you that this mindset shift is not remotely mainstream, and many AI champions trying to promote AI in their organizations run into executives who have not yet come to terms with the fact that they will have to accept uncertainty in their business processes.
Without this framing, AI implementations will “fail” not because they are actually underperforming, but because some leaders judge them by the outdated standards of traditional software: zero tolerance for uncertainty.
If you’re trying to push AI in your organization, this should be your starting message: Embracing AI is embracing uncertainty. And if you’re not prepared to accept that, don’t even start thinking about AI; it’s a necessary acknowledgement one must make.
Later in this post, we will also tackle, from a first-principles perspective, why determinism is deeply overrated and how zero-knowledge proofs and cryptography have taught us that uncertainty is perfectly viable even in the riskiest settings, like cybersecurity.
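As a preview of that argument, here is a minimal sketch of the statistics behind it (a generic interactive-proof setup, not any specific zero-knowledge protocol): if a single probabilistic check catches a cheating prover with probability 1/2, repeating the check independently drives the chance of ever being fooled toward zero exponentially fast, which is why cryptographers happily accept “probably correct” in place of “provably deterministic.”

```python
import random

def fooled_by_cheater(rounds, catch_prob=0.5, rng=random):
    """Simulate a generic interactive proof against a cheating prover.
    Each independent round exposes the cheat with probability `catch_prob`;
    the verifier is fooled only if *every* round happens to pass."""
    return all(rng.random() > catch_prob for _ in range(rounds))

# Probability of being fooled after k rounds is (1 - catch_prob) ** k
trials = 100_000
for k in (1, 10, 30):
    rate = sum(fooled_by_cheater(k) for _ in range(trials)) / trials
    print(f"{k:>2} rounds: fooled {rate:.5%} of the time (theory: {0.5 ** k:.1e})")
```

After 30 rounds, the residual uncertainty is around one in a billion: not zero, but far below the error rates we happily tolerate from human processes.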
So, without further ado, let’s go over my entire strategy. Here I’m basically writing down exactly how I execute and, importantly, what I am looking for in companies I might want to invest in, especially if we are talking about consulting/software.
We will take a look at how I’m thinking about:
Product strategy, or knowing what makes sense to build in the AI era; how to play AI in your favor, not against.
Moat, or how to make a product or service “different” in a rapidly commoditizing world.
Pricing, or how products or services should be offered.
This is valuable not only for you as a potential entrepreneur or AI adopter, but also as an investor; you will be surprised how bad things are for many companies that have not yet realized the risks AI poses to them.
Product Strategy
I always have in mind a comment Brad Lightcap, OpenAI’s COO, made in an interview whenever I think about building stuff these days: “If we release a model that is 10 times better tomorrow, are you happy or scared?” It’s a way of discerning whether your business makes sense in an AI world.
The Only Principle You Should Care About
If you're intimidated by more advanced models, meaning that better models are not better for your business, you have a problem. In that case, your survival depends on AI stagnating, which is quite the bet. I’m not saying it won’t happen, but there are literally trillions of dollars and some of the most brilliant minds on this planet betting you’re wrong.
The odds aren’t great.
So, the key principle for any company, one that you’re building, working for, or investing in, is whether AI progress makes your business more necessary.
It really is that simple. Pick any company and you can immediately guess its long-term chances of survival with that simple question. Let’s take two opposite examples:
Salesforce. A SaaS company designed for humans to interact with a database of customers for relationship management.
The strengths are clear:
A solid product at the current time, with so many features that some of them will almost certainly be useful to you.
Strong customer lock-in due to migration complexities. In other words, switching to a competitor is very expensive.
The weaknesses are also clear:
It’s essentially a frontend on top of a database, making it a highly replicable system of record.
Generally hated by users.
Expensive.
Bloated.
But better AIs make Salesforce’s offering less necessary by the day:
It can be easily replicated.
AIs can help you migrate out of it; Microsoft has literally built an AI assistant for this.
And perhaps more importantly, it’s designed to be used by humans, not AIs, while we move into a future where most software will be “used” by AIs.
This is the definition of a company that, without a clear transition (to be fair, that’s precisely their plan), will soon be outcompeted by AI rivals in every single category (price above all else).
In software engineering, we are already seeing incredible stories. A few days ago, a startup with six people and $500k in funding, Neo, came out of stealth with an AI agent that outcompeted Microsoft’s AI agent for coding tasks.
A $4 trillion company losing to six people and half a million in funding. This is what AI creates, empowerment like we have never seen before.
On the other hand, companies like utilities, particularly electricity companies, benefit from increases in AI demand, which guarantees growing demand for what they sell.
But the question is, besides building literal power plants or GPUs, how can you possibly build/invest in a product or service company that AI won’t cannibalize or render irrelevant?
How do I identify winners?
And the answer involves the lessons taught by the hottest non-hardware AI stock, and knowing that no moat exists besides the unsexiest of all company traits.
