Instead of crunching endless streams of ones and zeros like a classic processor, this experimental chip leans on analogue physics to perform AI tasks. The promise is striking: data‑centre scale workloads with a tiny fraction of the energy bill.
A 50‑year‑old idea, rebuilt for the AI age
For decades, progress in computing followed a familiar script: more transistors, smaller chips, higher clock speeds. That digital recipe gave us smartphones, cloud computing and modern AI, but also a growing energy headache.
Researchers at Peking University have decided to challenge that script. Their prototype AI chip abandons binary logic as its core model. Instead, it uses analogue circuits that process information as continuous electrical signals, not discrete bits.
This analogue AI chip reportedly runs key workloads up to 12 times faster than advanced digital processors while using around 1/200th of the energy.
The concept is not entirely new. Before digital machines took over, engineers built analogue computers that solved equations by shaping voltages and currents. What is new is the attempt to make this approach practical again, using modern fabrication techniques and pairing it directly with AI algorithms.
How analogue computing changes the equation
From step‑by‑step logic to physics doing the work
Digital processors break tasks into long sequences of operations. Every addition, multiplication or comparison is a tiny step in a rigid schedule. Even in highly parallel chips like GPUs, that sequence dominates energy use and latency.
Analogue hardware behaves differently. Numbers are represented as voltages or currents that vary smoothly. Calculations happen as those signals interact through the circuit itself.
- Digital chips compute by flipping billions of transistors in coordinated steps.
- Analogue chips compute by letting electrical behaviour carry out many operations at once.
This parallelism comes “for free” from the physics of the device. Instead of executing thousands of instructions to update every parameter in an AI model, the analogue circuit reaches a new state in a single physical transition.
By computing directly where the data sits, the chip cuts down on costly shuttling between memory and processing units, a major source of energy waste in today’s AI servers.
Tackling real AI workloads, not just toy problems
The team led by researcher Sun Zhong set out to show that analogue AI can handle large, messy, real‑world data. Their results, published in the journal Nature Communications, focus on a mathematical technique called non‑negative matrix factorisation, or NMF.
NMF is widely used for recommendation systems, user‑behaviour analysis and image processing. It looks for hidden patterns in huge tables of numbers, such as which films tend to appeal to the same group of viewers, or which regions in a picture share similar features.
On conventional digital hardware, NMF quickly becomes expensive when datasets reach millions of entries. The Beijing chip implements the core NMF step directly as an analogue operation, essentially “hard‑wiring” the maths into the physical layout.
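On digital hardware, that core NMF step is usually an iterative loop of multiply-and-add operations. A minimal sketch of the classic multiplicative-update variant of NMF in NumPy, purely as an illustration of the maths the chip hard-wires (the toy matrix and iteration count are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ratings" table: 6 users x 4 films, all scores non-negative.
V = rng.random((6, 4))

k = 2  # number of hidden patterns to extract
W = rng.random((6, k)) + 0.1  # users x patterns
H = rng.random((k, 4)) + 0.1  # patterns x films

eps = 1e-9  # guard against division by zero
for _ in range(500):
    # Multiplicative updates: factors stay non-negative by construction,
    # and each iteration is a batch of matrix multiply-and-add steps.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {error:.3f}")
```

Every pass through that loop is a schedule of discrete instructions; the analogue chip instead lets the circuit settle into the equivalent result in one physical transition.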
In tests resembling commercial recommendation engines like those of Netflix or Yahoo, the chip processed comparable datasets at much higher speed and with drastically lower power draw than recent digital rivals.
The same approach was trialled on image compression. The analogue chip reconstructed pictures with visual quality close to that of high‑precision digital algorithms, while halving the storage requirements.
Why current AI hardware hits an energy wall
The memory bottleneck in modern chips
Flagship AI accelerators, such as Nvidia’s H‑series GPUs, boast astonishing compute performance. Yet their efficiency is limited by a straightforward problem: moving data around.
Every AI operation shuffles numbers between memory banks and processing cores. That constant back‑and‑forth dominates both time and energy consumption. As models balloon into the hundreds of billions of parameters, this movement becomes a serious bottleneck.
The Peking University chip uses “in‑memory” analogue computing. That means the same physical components both store the numbers and transform them. Calculations happen where the data already is, shortening the distance and reducing wasted energy.
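The general idea behind in-memory analogue computing can be sketched with a simple model (a generic resistive-crossbar abstraction, not the Peking University team's specific circuit): matrix entries are stored as conductances, input values arrive as voltages, and Ohm's law plus Kirchhoff's current law make each output line sum its products automatically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical crossbar: the matrix is stored as conductances (siemens),
# 3 output lines x 4 input lines.
G = rng.random((3, 4)) * 1e-4

# The input vector is encoded as voltages applied to the input lines.
v = rng.random(4)  # volts

# Ohm's law per cell (I = G * V) plus current summing per output line:
# each output current is a dot product computed by the physics itself,
# with no instruction stream and no data moved to a separate ALU.
i_out = G @ v  # amperes, ideal noise-free model

# Real analogue devices add read noise, drift and device variation;
# a crude stand-in for that imperfection:
i_noisy = i_out + rng.normal(scale=1e-7, size=i_out.shape)
```

The matrix never leaves the array that stores it, which is exactly the data movement the digital memory–processor shuttle pays for on every operation.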
| Feature | Conventional AI GPU | Analogue AI chip (lab prototype) |
|---|---|---|
| Core computing style | Digital, step‑by‑step instructions | Analogue, physics‑driven state changes |
| Energy use on NMF tasks | Baseline (1x) | Roughly 1/200th of digital |
| Speed on tested workloads | Reference level | Reported up to 12x faster |
| Data movement | Frequent memory–processor transfers | Computation mostly in‑memory |
Some estimates from the research group suggest that, for specific operations, the analogue design could theoretically reach speedups as high as 1,000 times versus leading GPUs, if scaled and refined.
The maths baked into the silicon
Non‑negative matrix factorisation might sound esoteric, but it sits at the heart of many recommendation and pattern‑finding algorithms. Developed formally in the late 1990s, it breaks a large matrix into two smaller ones that capture underlying structure, with the constraint that all values stay above zero.
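In symbols, the standard formulation (a textbook sketch, not the chip's exact circuit equations) approximates a non-negative data matrix by two smaller non-negative factors:

```latex
V \approx W H, \qquad
V \in \mathbb{R}_{\ge 0}^{m \times n},\;
W \in \mathbb{R}_{\ge 0}^{m \times k},\;
H \in \mathbb{R}_{\ge 0}^{k \times n},\;
k \ll \min(m, n)
```

The small inner dimension \(k\) is what forces the factorisation to find a compact set of underlying patterns.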
Digital systems treat NMF as a long series of multiply‑and‑add steps. The Chinese chip embodies that process directly in its circuitry. Voltages represent the matrix entries, and the layout of the hardware enforces the non‑negative constraints and update rules.
An algorithm that once existed only as code now appears as a physical process, unfolding across a chip etched in silicon.
One peer reviewer described the gains in speed and energy efficiency as spanning “several orders of magnitude” for the tested cases, a phrase that signals genuine surprise in the typically cautious language of scientific publishing.
Potential impact for data centres and national strategy
Why this matters for cloud providers
AI data centres already consume as much electricity as small countries. Training a single large model can draw megawatt‑hours of power, along with cooling and infrastructure costs. Governments are starting to scrutinise this demand, and cloud providers face pressure to keep emissions under control.
A chip that performs core AI tasks with roughly 1/200th of the energy, even if only for a subset of workloads, could reshape the economics of running recommender systems, content platforms and some analytics tools.
China has an additional motivation. US export controls on high‑end GPUs have pushed its researchers to seek alternatives. Pursuing analogue designs, which rely on different components and know‑how, offers a path less exposed to foreign technology sanctions.
Where analogue AI could fit first
Analogue chips are unlikely to replace general‑purpose processors. They are better suited to targeted tasks where the maths is well understood and stable. Likely early uses include:
- Recommendation engines for streaming, e‑commerce and social platforms.
- On‑the‑fly media compression in content delivery networks.
- Signal processing in telecoms infrastructure.
- Specialised accelerators inside larger AI systems.
Hybrid architectures may appear, where digital chips handle control logic and complex branching, while analogue coprocessors take care of heavy numerical kernels like NMF or matrix multiplications.
Limits, risks and what still needs proving
Noise, precision and reliability
Analogue circuits face challenges that digital engineers work hard to avoid. Electrical noise, temperature drift and manufacturing variations can all distort results. For AI, which can tolerate some imperfection, this might be acceptable, but the boundaries remain unclear.
Data centres also value predictability. Hardware must behave the same way every time, across millions of chips and for years on end. Lab prototypes rarely provide guarantees at that scale.
There is also a question of flexibility. Once a mathematical method is hard‑wired into silicon, updating it is not as easy as pushing a software patch. That makes chip design cycles and algorithm choice far more strategic decisions.
Security, maintenance and skills
Analogue AI introduces new security and maintenance questions. Could attackers exploit tiny electrical fluctuations to infer sensitive data? How do operators test and calibrate boards whose behaviour depends on subtle physical effects?
Engineers trained on digital systems may need fresh skills. Designing, validating and debugging analogue accelerators calls for a mix of device‑physics knowledge and machine‑learning expertise that is still rare.
What this means for everyday tech
If analogue AI chips reach commercial maturity, users might first notice the change indirectly. Recommendation feeds could refresh faster on the same hardware budget. Video platforms could serve higher‑quality streams without ballooning power usage. Smaller data centres might run sophisticated AI services without industrial‑scale cooling.
There is also a local angle. Compact, low‑power analogue accelerators could sit closer to where data is generated: in base stations, factory lines or even appliances. That would allow some AI processing to leave the cloud and run on site, trimming latency and network traffic.
On the flip side, the arrival of more efficient chips could encourage even more AI usage. Lower energy per operation does not automatically mean lower total consumption if the number of operations keeps rising. Policymakers and companies will need to track both sides of that ledger.
For now, the Chinese prototype serves as a reminder that progress in computing does not always follow a straight digital line. Sometimes, reviving an old analogue idea, and pairing it with modern AI maths, can unlock performance that dense rows of transistors struggle to match.
