The lab is almost silent, except for the soft rattle of a mechanical arm and the whisper of paper being nudged under a lens. No glowing screen. No humming server racks. Only a glass slide, a beam of light, and a faint ticking sound that feels strangely… analog. In a research center in China, a group of engineers leans over a device that looks like it escaped from the 1970s, then fell through a wormhole into 2025.
One of them taps a keyboard, and an image appears — not on a monitor, but on a tiny optical plate, etched in ghostly shades of gray. It took a split second and barely any energy.
They’re not just saving power. They’re rewriting what “digital” could mean.
China’s quiet bet on an old-new technology
On paper, the idea sounds almost like a prank: China is pouring serious research effort into a technology that predates the modern microchip era. Yet this isn’t nostalgia. It’s about survival in a world drowning in data and electricity bills.
Chinese labs are revisiting *analog computing* — systems that use continuous physical signals like light, voltage, or magnetism to compute — and pairing it with today’s AI ambitions. One recent research push highlights something wild: analog setups that use up to **200 times less energy** than our beloved digital chips for certain tasks.
The twist? The core concepts are roughly 50 years old. The context is radically different.
To grasp what’s happening, picture an old oscilloscope or a bulky telephone exchange. Those machines “computed” using waveforms, resistors, and physical circuits that behaved like math. Engineers in the 1960s and 70s dreamed of building powerful analog computers, but they were overtaken by digital tech that scaled faster, shrank better, and crashed less often.
Now fast forward to China’s top universities and national labs. Researchers there are working on optical neural networks, analog accelerators, and mixed-signal chips that process information directly in memory. Instead of shuttling 1s and 0s back and forth, these devices use varying levels, phases, or intensities of signals to represent information.
You don’t see them on store shelves yet. You see them in scientific papers and prototypes with terrifying efficiency graphs.
Why bother? Because the digital model that won the 20th century is starting to creak. Global data centers eat as much electricity as some countries. AI models devour energy to train, then devour more to answer our daily stream of questions, videos, and recommendations. China, which is racing on AI, 5G, and cloud computing, can’t simply scale “more of the same” forever.
Analog offers a different path. When you let physics do part of the math — light interfering with light, currents flowing through precise resistances — you skip a ton of digital overhead. Fewer transistors switching, fewer bits being moved, less heat to battle. That’s where those headline numbers like “200 times less energy” come from: not magic, but ruthless efficiency in very specific tasks, especially matrix multiplications at the heart of AI.
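To make that concrete, here is a minimal numerical sketch of the classic resistor-crossbar trick the paragraph alludes to: weights stored as conductances, inputs applied as voltages, and Ohm's and Kirchhoff's laws doing the matrix multiplication for free. It is illustrative only, with invented values, and is not taken from any specific Chinese chip.

```python
import numpy as np

# Illustrative sketch (not from the article): a resistive crossbar
# computing a matrix-vector product with physics instead of logic gates.
# Each weight W[i, j] is stored as a conductance (in siemens). Applying
# voltages V to the row wires makes currents flow through every resistor
# (Ohm's law), and Kirchhoff's current law sums them on each column wire.

rng = np.random.default_rng(0)

weights = rng.uniform(0.1, 1.0, size=(4, 3))   # target matrix W
conductances = weights * 1e-6                  # map weights to microsiemens
voltages = rng.uniform(0.0, 0.5, size=4)       # input vector as row voltages

# The "computation": currents summed on each column (Ohm + Kirchhoff).
column_currents = conductances.T @ voltages    # amperes, read out by ADCs

# A digital processor would spend 12 multiply-accumulate operations on
# the same result; the crossbar gets it from physics in one step.
digital_result = weights.T @ voltages
print(np.allclose(column_currents / 1e-6, digital_result))  # True
```

The energy win in real devices comes from exactly this shape of operation: no bits are fetched, shuttled, or switched per multiply, only currents flowing and being summed.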
Digital isn’t dead. It’s just suddenly not alone anymore.
How to make 1970s-style computing feel like the future
The basic gesture behind these Chinese projects is almost poetic: stop fighting the hardware, and let reality do the work. Instead of representing everything as clean 1s and 0s, an analog chip might store information as tiny changes in voltage or as patterns of light intensity through a special material.
For example, an optical analog accelerator uses lasers shining through microscopic structures that act like a neural network. The “weights” of the network are baked into how the light is bent, delayed, or dimmed. Send in an image as a light pattern, and it gets transformed on the fly. No big matrix of digital multiplications. No discrete clock ticking. Just light moving, instantly.
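A toy simulation makes the idea easier to see. The code below stands in for such an optical layer: a fixed complex transmission matrix plays the role of the etched structure (amplitude for dimming, phase for delay), and a photodetector reading light intensity provides the readout. This setup is an assumption for illustration, not the labs' actual hardware.

```python
import numpy as np

# Toy numerical stand-in for one optical neural-network layer.
# The "weights" live in a fixed transmission matrix describing how each
# input light channel is bent, delayed, or dimmed by the structure.

rng = np.random.default_rng(1)

n_inputs, n_outputs = 16, 4
# Complex entries: magnitude = dimming, phase = delay of the light field.
transmission = (rng.normal(size=(n_outputs, n_inputs))
                + 1j * rng.normal(size=(n_outputs, n_inputs)))

image_patch = rng.uniform(0, 1, size=n_inputs)  # input encoded as intensities

# Propagating through the structure *is* the matrix multiply...
output_field = transmission @ image_patch
# ...and a photodetector measures intensity, a natural nonlinearity.
detected = np.abs(output_field) ** 2
print(detected.round(3))
```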
The trick is calibrating that universe of tiny imperfections so it gives results you can actually trust.
This is where many digital engineers quietly roll their eyes. Analog is messy. Signals drift. Temperature matters. Dust matters. And we’ve all been there: that moment when a device works perfectly in the demo room, then glitches in real life for reasons nobody can fully explain. That’s the analog nightmare.
Chinese teams are attacking this mess with AI itself. They use learning algorithms to correct analog drift, compensate for noise, and fuse analog speed with digital reliability. Think of it as a hybrid creature: analog blocks do the heavy lifting, digital control circuits keep them in line, constantly nudging, correcting, and recalibrating.
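In spirit, that digital correction loop can be as simple as the following hedged sketch: run a handful of calibration inputs through a (simulated) drifting analog block, compare with exact digital answers, and fit a correction. The drift model here (gain error, offset, noise) and every name are hypothetical.

```python
import numpy as np

# Hedged sketch of the hybrid idea: an analog block computes fast but
# imperfectly; a small digital loop learns a correction from references.

rng = np.random.default_rng(2)
true_weights = rng.normal(size=(8, 8))

def analog_matmul(x, gain=0.93, offset=0.05, noise=0.02):
    """Imperfect analog multiply: wrong gain, an offset, random noise."""
    return gain * (true_weights @ x) + offset + noise * rng.normal(size=8)

# Digital guardrail: run calibration vectors, compare against the exact
# digital answer, and fit a per-chip linear correction y = a*z + b.
cal_inputs = rng.normal(size=(32, 8))
analog_out = np.array([analog_matmul(x) for x in cal_inputs]).ravel()
exact_out = np.array([true_weights @ x for x in cal_inputs]).ravel()
a, b = np.polyfit(analog_out, exact_out, deg=1)

x = rng.normal(size=8)
corrected = a * analog_matmul(x) + b
print(np.abs(corrected - true_weights @ x).max())  # small residual error
```

Real systems use far richer models than a straight line, but the division of labor is the same: analog does the arithmetic, digital keeps it honest.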
Let’s be honest: nobody really builds a perfectly clean analog system and just prays it behaves forever.
Some of the researchers working in China are surprisingly candid about this tightrope.
“The physics is almost free,” one optical computing engineer reportedly told a colleague, “but the price is that you must learn to live with its moods.”
To navigate that trade-off, labs lean on three recurring ideas (sketched in code after this list):
- Use analog only where it shines — heavy math, repetitive patterns, neural network layers.
- Wrap it in digital “guardrails” that monitor accuracy and correct errors on the fly.
- Design for specific workloads, not for being a universal Swiss Army knife.
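Put together, those three ideas look something like this sketch, assuming a drifting analog matmul, an exact digital reference path, and occasional spot-checks that trigger a fallback. All names, thresholds, and probabilities are illustrative.

```python
import numpy as np

# Minimal sketch of the three ideas combined (illustrative names only):
# use the analog block for heavy matmuls, spot-check it digitally, and
# fall back to the digital path when the error budget is exceeded.

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 8))

def analog_layer(x, drift):
    return (1.0 + drift) * (W @ x)          # fast but drifting analog matmul

def digital_layer(x):
    return W @ x                            # exact but power-hungry

def guarded_layer(x, drift, tol=0.05, check_prob=0.1):
    y = analog_layer(x, drift)
    if rng.random() < check_prob:           # occasional digital spot-check
        ref = digital_layer(x)
        rel_err = np.linalg.norm(y - ref) / np.linalg.norm(ref)
        if rel_err > tol:                   # guardrail trips:
            return ref                      # fall back (and recalibrate)
    return y

out = guarded_layer(rng.normal(size=8), drift=0.08)
print(out[:3].round(3))
```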
The everyday result, if all this works, is invisible: your AI translation, your image enhancement, your search suggestions simply cost far less energy to compute. The drama stays buried in the chips.
What this means for our digital lives
If you zoom out for a second, the question isn’t “Will my next phone be analog?” It’s closer to: “Will the servers behind everything I do survive the next decade’s energy bill?” China’s revival of 50-year-old analog concepts is one answer to that uncomfortable question. And it doesn’t stay neatly inside its borders. Global chipmakers, cloud providers, and AI labs are watching closely, if only because the math is merciless.
Data traffic will keep growing. Streaming won’t slow down. AI won’t politely cap its appetite. Something has to give — or transform.
This analog comeback also pokes a hole in the smooth story we like to tell ourselves about progress. Technology isn’t a straight line, always forward, never back. Sometimes it loops, digs up old ideas, and gives them a brutal makeover with new tools and new constraints. The 1970s dreamed of analog power; the 2020s might actually need it.
There’s a strange comfort in that. The past isn’t just nostalgia; it’s a toolbox that can be reopened when the present hits a wall. And energy is exactly that wall.
Maybe in a few years, when you upload a video or ask an AI assistant a complicated question, a tiny slice of that request will be handled by some analog block hidden in a Chinese-designed accelerator, sipping electricity while its digital cousins gulp. You won’t see it. You won’t be asked.
But you might notice your battery lasting longer on a long day, or your cloud bill not spiking as brutally, or your country quietly meeting a climate goal that once looked unreachable. Behind all those small, human-scale wins, there might be a faint echo of those old lab benches from half a century ago — suddenly relevant again, suddenly powerful.
And that’s the unsettling, fascinating part: what other “obsolete” ideas are just waiting for the right crisis to come back to life?
| Key point | Detail | Value for the reader |
|---|---|---|
| Analog tech uses far less energy | Certain Chinese analog and optical systems can perform AI-style computations using up to 200× less power than digital chips for specific tasks. | Helps you understand why future devices and services could become more efficient without sacrificing performance. |
| Old ideas, new context | Concepts from 50 years ago are being revived with today’s AI, materials science, and chip manufacturing. | Shows that innovation isn’t only about brand‑new inventions, but also smart reuse of forgotten technologies. |
| Hybrid is the realistic future | Chinese projects combine analog speed with digital control to manage errors and variability. | Gives a clearer picture of how your future tech stack may quietly shift under the surface while staying familiar on top. |
FAQ:
- What exactly is the “50-year-old” technology China is reviving?
It refers to analog computing principles developed in the mid‑20th century, where physical properties like voltage, current, or light are used to perform mathematical operations directly, instead of encoding everything as binary 1s and 0s.
- How can analog systems use 200 times less energy than digital ones?
For tasks like matrix multiplications in AI, analog or optical circuits let the physics do many operations in parallel with minimal switching, dramatically cutting the power required compared to conventional digital processors.
- Will my smartphone or laptop become “analog” soon?
Unlikely in a pure sense. What’s more realistic is that some internal accelerators or cloud servers your apps rely on will use hybrid analog‑digital chips behind the scenes to save energy.
- Is this only happening in China?
No. Research on analog and optical computing is active in the US, Europe, and elsewhere. China stands out because it links this work directly to national strategies on AI and energy efficiency.
- Why did analog computing disappear in the first place?
Digital technology scaled better: it was easier to manufacture, program, and standardize, and it proved far more robust to noise and errors. Analog is coming back now because our energy limits are forcing us to reconsider those earlier trade‑offs.
