A proof-of-concept AI system could use around 100 times less energy than today’s large language models (LLMs), a team from Tufts University has said.
AI currently consumes massive amounts of energy because training and running LLMs requires thousands of specialised GPUs running continuously in data centres. In the US, it’s estimated that AI systems and data centres used about 415 TWh in 2024, accounting for more than 10% of the country’s total electricity production.
As reported in Science Daily, researchers at Tufts’ School of Engineering claim their proof-of-concept AI system is far more efficient as it relies on a hybrid approach called neuro-symbolic AI. The system combines traditional neural networks with symbolic reasoning, which is the use of human-readable symbols, rules and logic to solve problems, rather than finding patterns in data like modern LLMs. This method mirrors how people approach problems by breaking them into steps and categories.
The team focused on AI systems used in robotics known as visual-language-action (VLA) models, which extend LLM capabilities by incorporating vision and physical movement. Typically, they work by taking in visual data from cameras and instructions from language, then translating that information into real-world actions.
VLA systems rely on data and trial-and-error learning. For example, if a robot is asked to stack blocks into a tower, it must first analyse the scene, identify each block, and determine how to place them correctly. The process often leads to mistakes, with various factors such as shadows confusing the system about a block’s shape.
Symbolic reasoning offers a different strategy as it relies on rules and abstract concepts such as shape and balance, which allows the system to plan more effectively and avoid trial and error.
“Like an LLM, VLA models act on statistical results from large training sets of similar scenarios, but that can lead to errors,” said Professor Matthias Scheutz, who led the research. “A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster. Not only does it complete the task much faster, but the time spent on training the system is significantly reduced.”
The researchers tested their system using the Tower of Hanoi puzzle, a classic mathematical problem that requires careful planning. The neuro-symbolic VLA achieved a 95% success rate, compared with just 34% for standard systems. When given a more complex version of the puzzle that it had not encountered before, the hybrid system still succeeded 78% of the time. Traditional models failed every attempt. Training time also dropped sharply. The new system learned the task in only 34 minutes, while conventional models required more than a day and a half.
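The article doesn’t describe the researchers’ implementation, but the Tower of Hanoi illustrates why symbolic rules can beat trial-and-error learning: a single recursive rule — move the top n−1 disks to the spare peg, move the largest disk, then move the n−1 disks back on top — solves any instance deterministically, with no search or pattern matching. A minimal Python sketch of that rule (an illustration only, not the Tufts system):

```python
def hanoi(n, source, target, spare):
    """Symbolic rule for Tower of Hanoi: to move n disks from source
    to target, move n-1 disks to the spare peg, move the largest disk
    to the target, then move the n-1 disks on top of it."""
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)   # clear the way
            + [(source, target)]                  # move largest disk
            + hanoi(n - 1, spare, target, source))  # restack the rest

moves = hanoi(3, "A", "C", "B")
print(len(moves))   # 2**3 - 1 = 7 moves, the provable minimum
print(moves[0])     # ('A', 'C')
```

Because the rule is guaranteed correct for any number of disks, a planner built on it generalises to larger, unseen versions of the puzzle — the kind of transfer the hybrid system showed at 78% on the harder variant, where purely statistical models failed.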
Beyond its success at completing the task, the system’s energy consumption dropped dramatically: it required only 1% of the energy used by a standard VLA system, and during operation it used just 5% of the energy needed by conventional approaches.
Comparing the approach with everyday AI tools, Scheutz said: “These systems are just trying to predict the next word or action in a sequence, but that can be imperfect, and they can come up with inaccurate results or hallucinations. Their energy expense is often disproportionate to the task. For example, when you search on Google, the AI summary at the top of the page consumes up to 100 times more energy than the generation of the website listings.”
