
AI on Tiny Devices: How Researchers Are Making It Possible

Discover how researchers from TU Graz, Pro2Future, and the University of St. Gallen are enabling AI to run on small IoT devices with minimal resources.

Jun 26, 2025 | Source: Visive.ai

Artificial intelligence (AI) is typically associated with high computational and energy demands, posing a significant challenge for the Internet of Things (IoT). Small, embedded sensors in IoT devices have limited computing power, memory, and battery life. However, a research team from the COMET K1 centre Pro2Future, Graz University of Technology (TU Graz), and the University of St. Gallen has developed innovative methods to run AI models efficiently on these tiny devices.

One key approach is the use of specialized, modular AI models tailored for specific tasks. For instance, in the E-MINDS project, researchers successfully ran AI models on an ultra-wideband (UWB) localisation device with only 4 kilobytes of memory. These models can identify sources of interference from location data, which is crucial for applications like industrial automation.

Applying a Few Tricks

Michael Krisper, head of the project at Pro2Future and a scientist at the Institute of Technical Informatics at TU Graz, explains, 'These small devices do not run large language models but rather models with very specific tasks, such as estimating distances. To achieve this, we had to get the models small enough using a few tricks.'

The modular system they developed includes various methods that, when combined, deliver the desired results. One method involves dividing the models and orchestrating them. Instead of a single universal model, multiple small, specialized models are available. Each model is designed to handle specific types of interference, such as metal walls, people, or shelves. An orchestration model on the chip recognizes the type of interference and loads the appropriate AI model within 100 milliseconds, which is fast enough for industrial applications.
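The orchestration idea described above can be sketched in a few lines. This is an illustrative toy, not the project's implementation: the function and model names are hypothetical, and the "specialised models" are stand-in correction functions rather than real neural networks.

```python
# Hypothetical sketch of the orchestration pattern: a lightweight
# on-chip classifier recognises the interference type, then the
# matching specialised model is selected and applied. All names
# and values here are illustrative.

def classify_interference(features):
    # Toy stand-in for the orchestration model: picks the
    # interference type whose feature score is largest.
    labels = ["metal_wall", "person", "shelf"]
    best = max(range(len(features)), key=lambda i: features[i])
    return labels[best]

# One tiny specialised model per interference type. Here each
# "model" is just a per-type correction applied to a raw UWB
# distance estimate (offsets are made up for illustration).
SPECIALISED_MODELS = {
    "metal_wall": lambda d: d - 0.35,
    "person":     lambda d: d - 0.10,
    "shelf":      lambda d: d - 0.20,
}

def estimate_distance(raw_distance, features):
    # Select and run the appropriate small model for this input.
    model = SPECIALISED_MODELS[classify_interference(features)]
    return model(raw_distance)
```

In a real deployment the selected model's weights would be loaded into the device's scarce memory on demand, which is why the sub-100-millisecond switching time matters.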

Fold, Adjust, Trim

Another method is the use of subspace configurable networks (SCNs). These models adapt to the data input instead of requiring a separate model for each input variant. SCNs have proven highly effective in image recognition tasks, such as object classification: they can process images up to 7.8 times faster than offloading the computation to external resources, while being smaller and more energy-efficient.
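The core idea behind an SCN-style layer can be sketched as follows. This is a minimal illustration under the assumption that the layer's weights are a linear combination of a few basis weight matrices, with mixing coefficients derived from a configuration parameter (such as an input rotation angle); the dimensions and the coefficient function are invented for the example and are not the authors' design.

```python
import numpy as np

# Minimal sketch of a subspace-configurable linear layer: one
# compact set of basis weights covers many input variants, because
# the effective weights W(alpha) are mixed from the basis according
# to the variant parameter alpha. Illustrative only.

rng = np.random.default_rng(0)
D = 2            # subspace dimension (number of basis matrices)
IN, OUT = 4, 3   # layer input/output sizes (arbitrary here)
BASIS = rng.standard_normal((D, OUT, IN))

def mixing_coefficients(alpha):
    # Toy configuration network: maps the variant parameter to
    # normalised mixing weights over the basis matrices.
    raw = np.array([np.cos(alpha), np.sin(alpha)]) ** 2
    return raw / raw.sum()

def scn_layer(x, alpha):
    beta = mixing_coefficients(alpha)
    W = np.tensordot(beta, BASIS, axes=1)  # shape (OUT, IN)
    return W @ x
```

The memory win is that only `D` small basis matrices are stored instead of one full model per input variant.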

Quantisation and pruning techniques further reduce the size and energy consumption of the models. Quantisation simplifies the numbers used by the model, replacing floating-point numbers with integers. This saves energy and computing time with minimal accuracy loss. Pruning involves removing non-essential parts of a finished model, ensuring it remains capable of performing its core task.
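Both compression steps can be sketched concretely. The following is a hedged illustration of the general techniques, not the project's actual pipeline: symmetric linear int8 quantisation (floats approximated as an integer times one scale factor) and magnitude pruning (zeroing the smallest weights).

```python
import numpy as np

def quantise_int8(weights):
    # Symmetric linear quantisation: w ~= q * scale, with q an int8
    # in [-127, 127]. Storing int8 instead of float32 cuts memory
    # by 4x and enables cheaper integer arithmetic.
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

def prune(weights, fraction):
    # Magnitude pruning: zero out the given fraction of weights
    # with the smallest absolute values.
    threshold = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) < threshold, 0.0, weights)
```

After quantisation, every weight is within half a quantisation step of its original value, which is why the accuracy loss stays small; after pruning, the surviving large-magnitude weights carry the model's core behaviour.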

Results Transferable to Other Areas

While the E-MINDS project focused on UWB localisation for industrial automation, the researchers see numerous other applications. For example, efficient AI models can enhance the security of keyless car openers by determining whether a key is genuinely near the car. In smart homes, they can extend the battery life of remote controls. Libraries can use them to track books more efficiently.

Michael Krisper notes, 'With new expertise and methods, we have laid a foundation for future products and applications in the E-MINDS project. Our project team complemented each other perfectly, with Pro2Future focusing on embedded systems and hardware implementation, Olga Saukh and colleagues at TU Graz developing embedded machine learning foundations, and Simon Mayer contributing to localisation research at the University of St. Gallen.'

The success of the E-MINDS project opens up exciting possibilities for the future of AI in IoT, enabling smarter, more efficient, and more sustainable technology solutions.

Frequently Asked Questions

What are the main challenges of running AI on small IoT devices?

The main challenges include limited computing power, memory, and battery life. These constraints make it difficult to run complex AI models directly on the devices.

How did the E-MINDS project overcome these challenges?

The E-MINDS project used specialized, modular AI models, subspace configurable networks (SCNs), quantisation, and pruning techniques to make AI models smaller, more efficient, and capable of running on small devices.

What are some potential applications of these efficient AI models?

Potential applications include industrial automation, keyless car openers, smart home remote controls, and library book tracking, among others.

What is the significance of the modular system in the E-MINDS project?

The modular system allows for the use of multiple small, specialized AI models that can be orchestrated to handle different types of interference, making the system more adaptable and efficient.

How does quantisation help in making AI models more efficient?

Quantisation simplifies the numbers used by the model, replacing floating-point numbers with integers. This reduces energy consumption and computing time while maintaining acceptable accuracy.
