The End of Computing As We Know It

There is something strange happening in computing right now. We keep building bigger data centers, feeding them more data and more power. If this continues, by 2030, AI could consume energy

equivalent to 44 nuclear reactors. But what if the problem isn't a lack of power? What if the way we compute is fundamentally wrong? These engineers are trying something very different. A new way to

convert energy into intelligence. This is their new chip. It looks like a spaceship covered in glyphs. And they claim up to 10,000 times higher efficiency than today's best GPUs. If that holds,

it changes the whole economics of AI. I am a chip design engineer, and when I see numbers like that, I don't just get curious; I start pulling them apart. Subscribe to Anastasi in Tech and let's

decode the glyphs. Every machine ever built has the same limit. Energy always spreads out. Some of it is inevitably lost as heat and eventually the system settles into equilibrium where no

more useful work can be extracted. That's the second law of thermodynamics. At first, this sounds far away from computing, but computers are physical systems, too. And computation is not

abstract. It's actually physical. Every bit you flip has a real energy cost. And in the 1960s, Rolf Landauer showed something profound. Even erasing a single bit requires energy. Deleting

information increases entropy. That was a turning point. A computer was no longer just logic. It was a thermodynamic machine. And yet for decades, we designed computers as if that

didn't matter. In chip design, we used to go to extremes to suppress noise and to enforce this precision. At the lowest level, transistors, the building blocks of every modern device,

are engineered to switch cleanly between two states. So every operation gives the same result. It's either zero or one, nothing ambiguous. And for a very long time, that worked incredibly well.
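To get a feel for the scale of Landauer's bound, here is a quick back-of-the-envelope sketch in Python; the ~1 femtojoule figure for a real switching event is an illustrative assumption, not a measured number for any specific chip:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2)
E_landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {E_landauer:.2e} J per bit")  # ~2.9e-21 J

# Rough, illustrative energy for one real switching event today (~1 fJ)
E_switch = 1e-15
print(f"Overhead factor: ~{E_switch / E_landauer:.0e}")
```

The gap of several orders of magnitude between that floor and what real hardware spends is the price of this enforced precision.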

That precision gave us the digital world. But then the nature of computation changed. Today, you open Claude or Gemini and type in a sentence. It feels instant. But in the background, there is an enormous amount

of computation. Billions of operations, massive matrix multiplications. And here is the part most people never see. All of that work is not really about producing one fixed answer. It's

about building a probability distribution, and then, after all that computation, sampling one result. That is what modern AI models keep doing again and again. And at its core, this is just controlled

randomness. And randomness here is actually the key. That's where creativity comes from. And yet we are building trillion-dollar machines just to simulate randomness. That's like using

a chainsaw for surgery. So what if, instead of forcing deterministic computers to simulate this expensive randomness, we used physics itself? That is the idea behind this new thermodynamic computer

by Extropic. To understand why this computer is fundamentally different, we have to zoom in all the way down to the transistor, because this is where the logic flips. In a normal microchip,

a transistor is designed to behave like a clean switch. You apply voltage to the gate and if it's below a threshold, the transistor is off. If it's above that threshold, it turns on. There

is a clear separation between these two states. So the output is stable and predictable. That is the foundation of digital computing. But keep in mind, it only works when you give the transistor

enough energy. Lower the voltage and something fundamental starts to change. Inside the device, there is an energy barrier that controls that switching. At high voltage, that barrier is

easy to cross. So, the transistor switches on and off reliably. But at low voltage, something else becomes important. Electrons are constantly moving due to temperature and this random motion

creates tiny fluctuations in voltage and current. This is the so-called thermal noise. Now the key shift is that this energy barrier becomes comparable to the thermal energy, which means the noise

is no longer negligible. It actually can push the charge over the barrier. And this is when physics stops being the background noise and becomes the main character. And suddenly switching is

no longer guaranteed. Sometimes the charge crosses the barrier, sometimes it doesn't. And your clean digital zero and one begin to blur. For decades, this was treated as a problem. So we used to spend

more energy just to suppress this behavior. Extropic does the opposite. They lean into this instability, keeping the transistor right in that region where noise appears, and using it for

computation. We will break down the genius behind how it works in a moment. But before that, do you know what's frustrating about AI right now? It's really hard to pick one model and stick with it

because there is no universally superior AI model. Claude is often better at writing and coding. Gemini, Mistral, and Grok each have moments where they win. Wouldn't it be better to get the best

of them all? That's what Mammouth does. You get all the best AI models in one interface: Claude, GPT, Llama, Gemini, Mistral, Grok, Perplexity. And the feature that actually changes how you work is

reprompting. You write a prompt once and then you can redirect it to a different model in one click. Mammouth found that 34% of requests are reprompted because the best result is often not the first

one. You can also build custom Mammouths: your own AI models with custom instructions for your workflow, like research, document analysis, or image generation, all in one place. Plus,

it's built in Europe and they guarantee no data retention. It's good to know that your data isn't stored somewhere. Check them out through the link below. Plans start at €10 a month. Now,

let's get back to the spaceship and see if it can take off. When we operate a transistor in this relaxed, low-voltage region, it becomes what's called a probabilistic bit, or P-bit. It doesn't

store a fixed value. It continuously fluctuates between zero and one. And it turns out you can actually shape those fluctuations. You apply a voltage, and that voltage sets the probability.
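A minimal software sketch of that voltage-to-probability relationship, assuming the P-bit behaves like a Boltzmann two-state system so the probability follows a sigmoid of the bias (expressed here in units of the thermal energy kT):

```python
import math
import random

def p_bit(bias, rng):
    """One P-bit read-out. `bias` stands in for the control voltage,
    in units of the thermal energy kT; a sigmoid (the Boltzmann law for
    a two-state system) maps it to the probability of reading a 1."""
    p_one = 1.0 / (1.0 + math.exp(-bias))
    return 1 if rng.random() < p_one else 0

rng = random.Random(0)
n = 20000
freq = sum(p_bit(1.0, rng) for _ in range(n)) / n
# With bias = 1 kT, p(1) = 1/(1 + e^-1) ≈ 0.73; freq should land close to it
print(f"observed frequency of ones: {freq:.3f}")
```

Sweep the bias and the average output sweeps smoothly between 0 and 1.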

Push it one way, it favors one. Push it the other, it favors zero. So instead of storing information, we get a P-bit that generates samples directly from thermal noise. In today's chips, even a

single probabilistic bit is very expensive. It can take tens of thousands of transistors just to fake randomness. That's where a lot of the energy goes. Here, that entire layer disappears. They

don't simulate randomness. They get it directly from the physics of the transistor and it comes for free. Now scale that up. You connect many of these P-bits together with weighted connections.
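What such a weighted network does can be sketched in software with Gibbs sampling over a tiny Ising-style model; the couplings, biases, and temperature below are made-up toy values, not Extropic's actual parameters:

```python
import math
import random

# Three coupled P-bits with spins in {-1, +1}.
# Energy: E(s) = -sum_ij J_ij * s_i * s_j - sum_i h_i * s_i
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}   # couplings that favor agreement
h = [0.8, 0.0, 0.0]                           # bias nudging spin 0 toward +1
T = 1.0                                       # temperature in units of k_B

def local_field(s, i):
    """Effective field on spin i from its bias and weighted neighbors."""
    f = h[i]
    for (a, b), w in J.items():
        if a == i:
            f += w * s[b]
        elif b == i:
            f += w * s[a]
    return f

def gibbs_sweep(s):
    """Resample each spin with the Boltzmann probability set by its field."""
    for i in range(len(s)):
        p_up = 1.0 / (1.0 + math.exp(-2.0 * local_field(s, i) / T))
        s[i] = 1 if random.random() < p_up else -1
    return s

random.seed(1)
s = [random.choice([-1, 1]) for _ in range(3)]
counts = {}
for _ in range(5000):
    s = gibbs_sweep(s)
    counts[tuple(s)] = counts.get(tuple(s), 0) + 1

best = max(counts, key=counts.get)   # most-visited state ≈ lowest-energy state
print("most frequent state:", best)
```

The chain spends most of its time in the aligned, low-energy configurations, and the biased all-up state should be visited most often. The hardware does the same thing, except the "sweeps" come for free from thermal noise.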

Such a network forms a thermodynamic sampling unit. Inside this box, you actually have two of these chips working together, plus an FPGA orchestrating the whole system. You encode the problem into the

system and turn it on. At a given temperature, the system follows the Boltzmann distribution. Because of thermal noise, electrons are constantly being jostled by heat. To reach a high energy state,

the system needs a bigger energy kick, and those are rare. Smaller ones happen all the time. So, the system only occasionally visits the high-energy states and naturally spends more

time in the low-energy ones. And those are your best solutions. After a while, the system settles into equilibrium. And what you observe most often is the answer. If that works at scale, this will

fundamentally change how we compute because the moment you stop fighting randomness, you also stop paying for it. And that directly hits the biggest constraint in AI today, energy. Extropic's early

silicon shows up to 10,000 times better energy efficiency. But we have to be precise, because these are results from simulations and small tests, like generating an image. So here we are

nowhere near data-center-scale AI yet. Still, the direction is important, because unlike quantum computing, this runs on standard silicon at room temperature, using standard CMOS technology. It

requires no cryogenics, no exotic infrastructure, and that matters because now you can at least imagine scaling this inside today's semiconductor ecosystem. But if you zoom in on the P-bits,

things get more complicated, because you can see that they are interconnected in a mesh of resistors. So the system is essentially analog, which means noise here is not just a feature

but also a problem. In analog, you get unwanted coupling between elements: correlations instead of true randomness. Imagine you want a room full of people making more or less independent decisions.

Each person should only be influenced by a couple of their neighbors, but now suddenly they can hear whispers through the walls. Decisions are no longer independent, and patterns appear

that you didn't design. That's the risk here: you want noise, but only the right kind of noise. Now their first commercial chip, Z1, is coming this year: around 250,000 probabilistic

bits on a single chip. But the bigger the system, the harder it becomes to control. More elements, more proximity, and small coupling effects start to accumulate. What was negligible at small scale

becomes dominant. The real challenge is not just to generate pure randomness but also to suppress unwanted coupling, and holding that balance at scale is extremely hard. There is also

a software problem. Today's entire AI stack is built on deterministic hardware. NVIDIA's CUDA took nearly two decades to build. Here you don't just optimize code; you have to rethink the

entire approach: different algorithms, different abstractions, and others actually have to adopt them. And meanwhile, GPUs are not standing still. They keep getting more efficient every year. So

this new approach has to move fast, fast enough to matter before the gap closes. Data centers are trillion-dollar infrastructure, deeply optimized around GPUs. You don't replace that because

something looks better on paper. You replace it when it's strikingly better, easier, proven; otherwise, it stays where many brilliant ideas stay. So yes, the idea is powerful, and I admire

Extropic for pioneering this path, because from building my own startup, I know how hard it is to solve problems no one has ever faced before. But you know what? With enough grit, you can always figure

it out. But even if they do, it won't replace classical computing, because certain systems demand certainty: banking, control systems, medicine. But other problems are different, like

generative AI inference, anomaly detection, energy-based models, optimization problems like finding the best route, scientific and Monte Carlo simulations, and self-supervised models. All of

these are important and inherently probabilistic. So for these problems, this new approach changes the game because then the most powerful computers of the future won't fight entropy and uncertainty.

They will run on it. Or, if not, this idea might become the most elegant, and most expensive, random number generator we've ever built. We will see in a few years. Now, watch this episode on the

most advanced chip factory and microchip technology ever built, or this one on the chip crisis happening right now. Love you guys, and I'm going to see you in the next one. Ciao.
