Learning on the edge

Microcontrollers, miniature computers capable of executing simple commands, are the basis of billions of connected devices, from Internet of Things (IoT) devices to sensors in automobiles. But cheap, low-power microcontrollers have extremely limited memory and no operating system, making it difficult to train AI models on “edge devices” that operate independently of central computing resources.

Training a machine learning model on a smart edge device allows it to adapt to new data and make better predictions. For example, training a model on a smart keyboard could allow the keyboard to continually learn from the user’s writing. However, the training process requires so much memory that it is usually done using powerful computers in a data center, before the model is deployed to a device. This is costly and raises privacy issues, since user data must be sent to a central server.

To solve this problem, researchers at MIT and the MIT-IBM Watson AI Lab have developed a new technique that enables on-device training using less than a quarter of a megabyte of memory. Other training solutions designed for connected devices can use over 500 megabytes of memory, far exceeding the 256 kilobyte capacity of most microcontrollers (there are 1024 kilobytes in a megabyte).

The smart algorithms and framework developed by the researchers reduce the amount of computation needed to train a model, making the process faster and more memory efficient. Their technique can be used to train a machine learning model on a microcontroller in minutes.

This technique also preserves privacy by keeping data on the device, which could be particularly beneficial when the data is sensitive, such as in medical applications. It could also allow a model to be customized based on user needs. Additionally, the framework preserves or improves model accuracy compared to other training approaches.

“Our study enables IoT devices to not only perform inference but also continuously update AI models based on newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices,” says Song Han, associate professor in the Department of Electrical Engineering and Computer Science (EECS), member of the MIT-IBM Watson AI Lab, and senior author of the paper describing this innovation.

Joining Han on the paper are co-lead authors and EECS doctoral students Ji Lin and Ligeng Zhu, as well as MIT postdocs Wei-Ming Chen and Wei-Chen Wang, and Chuang Gan, a senior research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the Neural Information Processing Systems conference.

Han and his team previously addressed the memory and computational bottlenecks that exist when trying to run machine learning models on tiny edge devices, as part of their TinyML initiative.

Lightweight training

A common type of machine learning model is known as a neural network. Loosely based on the human brain, these models contain layers of interconnected nodes, or neurons, that process data to accomplish a task, such as recognizing people in photos. The model must first be trained, which involves showing it millions of examples so it can learn the task. As it learns, the model increases or decreases the strength of the connections between neurons, called weights.

The model may undergo hundreds of updates as it learns, and the intermediate activations must be stored during each round. In a neural network, an activation is the intermediate result produced by a middle layer. Because there can be millions of weights and activations, training a model requires much more memory than running a pre-trained model, Han explains.
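To make that contrast concrete, here is a minimal sketch, in PyTorch, of why training needs more memory than inference: each layer’s intermediate activation has to be kept around for the backward pass. The layer sizes and shapes below are arbitrary illustrations, not figures from the paper.

```python
# Illustrative sketch: inference could discard each activation immediately,
# but training must retain them all so gradients can be computed later.
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(256, 256) for _ in range(4)])
x = torch.randn(1, 256)

saved_activations = []            # inference alone would not need this list
for layer in layers:
    saved_activations.append(x)   # stored for the backward pass
    x = torch.relu(layer(x))

extra_bytes = sum(a.numel() * a.element_size() for a in saved_activations)
print(f"extra memory held for activations: {extra_bytes} bytes")
```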

Han and his collaborators employed two algorithmic solutions to make the training process more efficient and less memory intensive. The first, known as sparse update, uses an algorithm that identifies the most important weights to update during each round of training. The algorithm starts freezing the weights one at a time until it sees the accuracy dip below a set threshold, at which point it stops. The remaining weights are updated, while the activations corresponding to the frozen weights do not need to be stored in memory.
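The following is a minimal sketch of that freezing loop, assuming a toy model and a hypothetical helper finetune_and_evaluate(); it illustrates the idea rather than reproducing the authors’ actual algorithm.

```python
# Sketch only: freeze layers one at a time until accuracy dips past a
# threshold, then update just the weights that remain trainable.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

def finetune_and_evaluate(m):
    """Hypothetical stand-in: briefly fine-tune only the trainable weights of m
    and return validation accuracy. Here it simply simulates a small accuracy
    loss for every parameter tensor that has been frozen."""
    frozen = sum(1 for p in m.parameters() if not p.requires_grad)
    return 0.92 - 0.01 * frozen

threshold = 0.02                       # allowed accuracy drop
baseline = finetune_and_evaluate(model)

for layer in model:
    for p in layer.parameters():
        p.requires_grad_(False)        # freeze this layer's weights
    if baseline - finetune_and_evaluate(model) > threshold:
        for p in layer.parameters():   # undo the last freeze and stop
            p.requires_grad_(True)
        break

# Only the still-trainable weights go to the optimizer; activations feeding
# purely frozen layers no longer need to be stored for the backward pass.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01)
```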

“Updating the whole model is very expensive because there are a lot of activations, so people tend to update only the last layer, but as you can imagine, this hurts the accuracy. With our method, we selectively update those important weights and make sure the accuracy is fully preserved,” says Han.

Their second solution involves quantized training and simplifying the weights, which are typically 32 bits. An algorithm rounds the weights so they occupy only eight bits, through a process called quantization, which cuts the amount of memory needed for both training and inference. Inference is the process of applying a model to a dataset and generating a prediction. The algorithm then applies a technique called quantization-aware scaling (QAS), which acts as a multiplier to adjust the ratio between weight and gradient, to avoid any drop in accuracy that may come from quantized training.
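As a rough illustration of the two pieces, the sketch below rounds full-precision weights to 8-bit integers and then rescales a stand-in gradient so the weight-to-gradient ratio stays reasonable. The scaling rule shown is a simplified assumption in the spirit of QAS, not the exact formula from the paper.

```python
# Illustrative only: per-tensor 8-bit weight quantization plus a simplified
# gradient-scaling correction in the spirit of QAS.
import torch

w_fp32 = torch.randn(64, 64)                  # full-precision (32-bit) weights

# Quantize to signed 8-bit integers with a single per-tensor scale factor.
scale = w_fp32.abs().max() / 127.0
w_int8 = torch.clamp((w_fp32 / scale).round(), -128, 127).to(torch.int8)
w_deq = w_int8.float() * scale                # values actually used in compute

# Gradients computed against quantized weights end up on a different numeric
# scale than in full-precision training, so a per-tensor multiplier restores
# a sensible weight-to-gradient ratio before the update step.
grad = torch.randn_like(w_deq)                # stand-in gradient from backprop
qas_multiplier = w_deq.norm() / (grad.norm() + 1e-8)
grad_scaled = grad * qas_multiplier           # scaled gradient used for update
```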

The researchers also developed a system, called a tiny training engine, that can run these algorithmic innovations on a simple microcontroller that lacks an operating system. This system changes the order of the steps in the training process so that more of the work is completed at the compilation stage, before the model is deployed to the edge device.

“We push a lot of the computation, such as auto-differentiation and graph optimization, to compile time. We also aggressively prune the redundant operators to support sparse updates. Once at runtime, we have much less work to do on the device,” says Han.
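A toy sketch of that division of labor, using made-up operator names and a list-of-dicts stand-in for the computation graph: backward operators that only serve frozen layers are pruned before anything is shipped to the device, so the runtime executes only what is left.

```python
# Illustrative only: a pretend backward-pass graph pruned at "compile time"
# so operators that would only update frozen weights never reach the device.
backward_graph = [
    {"op": "conv1_weight_grad", "layer": "conv1", "frozen": True},
    {"op": "conv2_weight_grad", "layer": "conv2", "frozen": True},
    {"op": "fc_weight_grad",    "layer": "fc",    "frozen": False},
    {"op": "fc_bias_grad",      "layer": "fc",    "frozen": False},
]

# Compile-time pruning: keep only operators for weights that will be updated.
device_graph = [op for op in backward_graph if not op["frozen"]]

for op in device_graph:
    print("deploying operator:", op["op"])
```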

A successful acceleration

Their optimization required only 157 kilobytes of memory to train a machine learning model on a microcontroller, while other techniques designed for lightweight training would still require between 300 and 600 megabytes.

They tested their framework by training a computer vision model to detect people in images. After just 10 minutes of training, it learned to complete the task successfully. Their method trained a model more than 20 times faster than other approaches.

Now that they have demonstrated the success of these techniques for computer vision models, the researchers want to apply them to language models and to different types of data, such as time series data. At the same time, they want to use what they’ve learned to scale down larger models without sacrificing accuracy, which could help reduce the carbon footprint of training large-scale machine learning models.

“Adapting and training an AI model on a device, especially on embedded controllers, is an open challenge. This MIT research not only successfully demonstrated the capabilities, but also opened up new possibilities for privacy-preserving device personalization in real time,” says Nilesh Jain, a principal engineer at Intel who was not involved in this work. “The innovations in the publication have broader applicability and will trigger new follow-up research on system-algorithm co-design.”

“On-device learning is the next major advance we are working toward for the connected intelligent edge. Professor Song Han’s group has shown great progress in demonstrating the effectiveness of edge devices for training,” adds Jilei Hou, vice president and head of AI research at Qualcomm. “Qualcomm has awarded his team an innovation grant to support further advancement in this area.”

This work is funded by the National Science Foundation, MIT-IBM Watson AI Lab, MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.
