
The unstoppable development of Artificial Intelligence has been accompanied by a problem that increasingly worries the industry: the high energy consumption involved in both training and the massive use of large models. A single complex query in ChatGPT-type systems can consume an amount of electricity similar to what an average United States household uses in one minute, a figure which, multiplied by billions of requests, dramatically increases the environmental footprint.
Given this scenario, researchers and technology companies are looking for ways to make AI far more efficient and sustainable. Among the alternatives gaining traction, one stands out: analog in-memory computing (AIMC), an approach that proposes to fundamentally change the way chips store and process information.
What is in-memory analog computing and why might it be a game-changer?
In today's digital architectures, data constantly moves between memory and processing units. This back-and-forth communication consumes time and, above all, a great deal of energy. Analog in-memory computing proposes just the opposite: keeping the data in place while the mathematical operations needed to run AI models are performed on it.
To achieve this, AIMC relies on analog chips capable of storing and computing jointly, directly leveraging the physical properties of the hardware. Instead of representing information solely with well-defined zeros and ones, the system uses continuous signals and the electrical characteristics of materials to perform operations.
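As an illustration of the principle, a crossbar of resistive devices can compute a matrix-vector product in a single step: weights are stored as conductances, inputs arrive as voltages, and by Ohm's and Kirchhoff's laws the currents on the output lines are the dot products. The following minimal sketch simulates that idea; the conductance values and the read-noise model are illustrative assumptions, not measurements from any real chip:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar: weights stored as conductances (siemens),
# inputs applied as voltages (volts). By Ohm's and Kirchhoff's laws,
# the current on each output line is the dot product of the voltages
# and that line's conductances -- the multiply-accumulate happens
# "in place", with no data movement between memory and a processor.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductance matrix (4 outputs x 3 inputs)
v = np.array([0.2, 0.5, 0.1])              # input voltages

i_ideal = G @ v                            # currents an ideal crossbar would produce

# Real analog devices are imperfect: each read is slightly perturbed.
read_noise = rng.normal(0.0, 0.02 * np.abs(i_ideal))
i_measured = i_ideal + read_noise

print(i_ideal)
print(i_measured)
```

The physics delivers the whole matrix-vector product at once, which is where the speed and energy savings come from; the noise term previews why training on such hardware is hard.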
According to recent work led by the researcher Tianyi Chen, in collaboration with teams from IBM and the Rensselaer Polytechnic Institute, this approach could reduce energy consumption by up to a thousand times compared to conventional digital platforms, while maintaining the ability to run large-scale artificial intelligence models.
The key is that, by not having to constantly move the information, the system can "let physics do the work," using electrical pulses and the material's responses to carry out operations almost instantaneously. This paradigm shift is especially attractive for data centers and AI-intensive applications, including in Europe, where energy efficiency has become a strategic factor.
Advantages and limitations of analog hardware for artificial intelligence
Analog in-memory computing is not a new concept. It has been known for years that this type of hardware can perform certain calculations very quickly and with low power consumption, making it an attractive option for speeding up inference with already trained models. However, applying it to the training phase had so far proven much more complicated.
The main obstacle lies in the imperfections inherent in analog hardware. The pulses that update the model parameters can vary slightly from one operation to another, electrical noise may appear, and small deviations can accumulate until they harm the quality of learning. This behavior contrasts with the precision and repeatability of digital hardware, where operations are far easier to control.
In practice, these limitations translated into inaccurate gradients and unstable training, two problems that made it impractical to simply transfer classic machine learning algorithms to analog chips. Despite its potential, AIMC was relegated to very specific tasks, far from the goal of reliably training large models.
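A toy simulation makes the problem concrete. Below, gradient descent on a simple quadratic loss runs twice: once with exact updates, and once with updates distorted by multiplicative pulse noise and a small constant bias, a crude stand-in for the device imperfections described above. The noise model is an assumption for illustration only, not the hardware's actual behavior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: minimize f(w) = 0.5 * ||w||^2 with gradient descent,
# comparing exact updates against updates corrupted the way imperfect
# analog write pulses might corrupt them (assumed noise model).
def run(steps=200, lr=0.1, pulse_std=0.0, bias=0.0):
    w = np.full(8, 5.0)
    for _ in range(steps):
        grad = w                                              # exact gradient of f
        applied = grad * (1 + rng.normal(0, pulse_std, w.shape)) + bias
        w -= lr * applied                                     # chip applies a distorted update
    return 0.5 * np.sum(w**2)                                 # final loss

loss_digital = run()                            # clean updates: loss decays toward zero
loss_analog = run(pulse_std=0.3, bias=0.05)     # noisy, biased updates stall at a higher loss

print(loss_digital, loss_analog)
```

Even on this trivially easy problem, the constant bias shifts the solution away from the optimum and the pulse noise keeps it jittering there, which is exactly the "inaccurate gradients, unstable training" failure mode scaled down to eight parameters.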
Chen's team's work focuses precisely on this point: how to exploit the extreme efficiency of analog hardware without sacrificing the precision AI models need to maintain their quality and ability to generalize. The proposal is to deeply adapt the training algorithms to the real characteristics of these physical systems.
Residual Learning: an analog version of backpropagation
To overcome the accuracy problems, the research group has developed an analog reformulation of backpropagation, the most widespread technique for training deep neural networks. This variant has been named Residual Learning, a name that refers to the idea of constantly correcting the errors that occur during the parameter-update process.
The method introduces an additional layer of control that monitors how the hardware actually responds to each training operation. Based on this information, the system adjusts the gradients and compensates for deviations caused by noise, irregular pulses, or any other physical imperfection, so that learning stays on track.
With this strategy, models trained on analog chips achieve accuracy very close to that obtained on digital platforms, but with a fraction of the energy expenditure. Essentially, the algorithm accepts that the hardware is not perfect and is designed to coexist with those imperfections rather than trying to ignore them.
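The article does not give the exact equations of Residual Learning, but the described idea of measuring how the hardware actually responded and compensating on later updates resembles a classic error-feedback loop. The sketch below applies such a loop to the same kind of toy quadratic problem; `apply_on_chip` and its noise model are hypothetical stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def apply_on_chip(update):
    """Hypothetical imperfect analog write: noisy gain plus a small bias."""
    return update * (1 + rng.normal(0.0, 0.1, update.shape)) + 0.005

def train(correct=False, steps=400, lr=0.1):
    """Minimize f(w) = 0.5 * ||w||^2 with imperfect on-chip updates."""
    w = np.full(8, 5.0)
    residual = np.zeros_like(w)                   # accumulated intended-minus-applied error
    for _ in range(steps):
        grad = w                                  # exact gradient of the toy loss
        intended = lr * grad + (residual if correct else 0.0)
        applied = apply_on_chip(intended)         # what the hardware actually did
        w -= applied
        residual = intended - applied             # measure the deviation for next time
    return 0.5 * np.sum(w**2)

loss_naive = train(correct=False)     # imperfections push the model off the optimum
loss_corrected = train(correct=True)  # folding the measured error back in compensates

print(loss_naive, loss_corrected)
```

The corrected run accepts that every individual write is wrong, but because each deviation is measured and re-injected into the next update, the errors cancel over time instead of accumulating, which mirrors the "coexist with imperfections" philosophy described above.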
The team argues that this approach not only improves training stability but also provides a more systematic framework for controlling the bias and statistical behavior of models in analog environments. This is especially relevant for sensitive applications, where biases or prediction errors can have significant consequences.
Presentation at NeurIPS and reaction from the scientific community
The results of this line of work were prominently presented at the Annual Conference on Neural Information Processing Systems (NeurIPS), one of the leading international forums on artificial intelligence. The presentation, held in December, generated interest by demonstrating a concrete approach to tackling the energy problem without relying solely on incremental improvements in digital hardware.
Among the aspects most valued by the community is the combination of machine learning theory and circuit design, an area where collaboration between universities and companies such as IBM is key. The research lies at the intersection of computer science, physics, and electronic engineering, a field in which there are also very active European centers.
Although much of the experimental development has taken place in the United States, the potential impact is clearly global. The data center sector in the European Union, subject to increasingly strict regulations on energy consumption and emissions, is closely watching these kinds of advances, which could help meet carbon-reduction targets without hindering the adoption of AI.
According to the authors of the study, the way forward involves scaling up the prototypes, integrating the approach into existing infrastructures, and benchmarking performance against open-source models widely used by the research community, which would allow more transparent comparisons.
Practical applications: from data centers to medical devices
If analog in-memory computing becomes established, the repercussions could be felt in many everyday areas. First, it would affect the large data centers that power cloud services and AI-based assistants, reducing operating costs and the electricity bill associated with these systems.
But one of the most striking changes would occur in environments where available energy is very limited: implantable or portable medical devices, wearable technology, sensors distributed in factories or critical infrastructure, and autonomous robots that need to operate for long periods on small batteries.
In these scenarios, having chips capable of training or readjusting models locally, with minimal power consumption, would open the door to applications that are currently impractical. For example, systems that learn from the patient themselves to adapt treatments or monitor vital signs in a more personalized way, without sending all the data to the cloud.
In the industrial sector, sensors and robots could make more complex decisions at the network edge, enabling finer, more flexible automation without constantly relying on remote servers. This aligns with European digitalization strategies and the commitment to reducing dependence on external infrastructure for critical services.
From the end user's point of view, the improvement could translate into devices with longer battery life and less need for a constant connection, an aspect as practical as it is invisible in everyday life: the system would simply run longer and more quietly in terms of consumption.
Next steps and possibilities for Europe
The team led by Tianyi Chen has already indicated its intention to extend the Residual Learning approach to a broad set of open-source models. This would allow other groups to reproduce the results and explore new variants adapted to different types of analog hardware.
At the same time, possible collaborations with industry are being explored to integrate these concepts into commercial platforms. This phase will be key to determining whether analog in-memory computing can compete head-to-head with the high-performance digital solutions that currently dominate the market.
In Europe, where there is growing interest in strengthening technological sovereignty and energy efficiency, AIMC could find fertile ground. Initiatives related to next-generation chips, supercomputing infrastructures, and low-power AI projects could benefit from the experience accumulated in these kinds of developments.
At the same time, important challenges remain to be resolved, such as the large-scale manufacturing of reliable analog hardware, the standardization of development tools, and the training of technical profiles capable of moving fluently between device physics and deep learning algorithms.
Analog in-memory computing is establishing itself as one of the most serious alternatives for reducing the energy consumption associated with artificial intelligence while maintaining competitive performance. If the next phases of research and deployment confirm the initial promises, it could become a relevant piece of the technological puzzle with which Europe and the rest of the world are trying to reconcile the expansion of AI with sustainability and emissions-reduction goals.