
The past decade has been a transformative time for machine learning. A field that once weighed more heavily toward hype than practical application has matured, producing major breakthroughs that have reshaped industrial processes and consumer products. But to keep making big wins in these areas, further development will be needed in tinyML. Traditional approaches, in which machine learning algorithms rely on powerful cloud computing resources to make inferences, are limited by privacy, latency, and cost concerns. TinyML promises to eliminate these problems and open up new classes of problems that can be solved by artificial intelligence algorithms.
When memory is measured in kilobytes, running a state-of-the-art machine learning model with billions of parameters is all but impossible. But with some creativity, and a hybrid approach that combines the power of the cloud with the strengths of tinyML hardware, it can be done. A team of researchers at MIT has shown how with a method called Netcast. It relies on resource-rich cloud computers to rapidly retrieve model weights from memory and stream them almost instantaneously to tiny ML hardware over a fiber-optic network. Once those weights arrive, an optical device called a broadband Mach-Zehnder modulator combines them with locally collected sensor data to perform very fast calculations on the device.
The team’s solution uses a cloud computer with a large amount of memory to keep a neural network’s weights in RAM. Those weights are streamed to the connected device over a link with enough bandwidth to transfer a full-length movie in a millisecond. Limited memory is one of the biggest factors preventing tiny ML devices from running large models, but it is not the only one. Computing power is also at a premium on these devices, so the researchers addressed that problem with a shoebox-sized receiver that performs very fast analog calculations by encoding the input data onto the transmitted weights.
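The streaming-and-encoding scheme described above can be sketched in software. The following is an illustrative analogy only, not MIT's implementation: a "cloud" generator streams weight rows a chunk at a time, and a "receiver" multiplies each arriving chunk against the locally held sensor data, standing in for the multiply-accumulate that the Mach-Zehnder modulator performs optically. All function names and the chunking structure are assumptions made for illustration.

```python
# Illustrative sketch of the Netcast idea: weights streamed from a
# memory-rich server, multiplied against local input as they arrive.
import numpy as np

def stream_weights(weight_matrix, chunk_rows=2):
    """Simulate the cloud side: yield the weight matrix a few rows at
    a time, as if streaming them over a fiber-optic link."""
    for start in range(0, weight_matrix.shape[0], chunk_rows):
        yield start, weight_matrix[start:start + chunk_rows]

def receiver_multiply(weight_stream, local_input, n_outputs):
    """Simulate the receiver: as each chunk of weights arrives, encode
    the local input onto it (a multiply-and-sum, the operation the
    optical modulator performs in analog) and accumulate the results."""
    output = np.zeros(n_outputs)
    for start, chunk in weight_stream:
        # Each row of the chunk yields one dot product with the input;
        # no full copy of the weight matrix is ever held locally.
        output[start:start + chunk.shape[0]] = chunk @ local_input
    return output

# Toy example: a 4x3 weight matrix applied to a 3-element sensor reading.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
streamed = receiver_multiply(stream_weights(W), x, n_outputs=4)
assert np.allclose(streamed, W @ x)  # same result as a local matrix-vector product
```

The point of the sketch is that the receiver only ever holds one small chunk of weights at a time, which is what lets a memory-starved device work with a model far larger than its own RAM.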
This arrangement makes it possible to perform trillions of operations per second on a device with no more resources than an early-1990s desktop computer. It also preserves privacy, and it enables inference on devices with very low latency and very high energy efficiency. Netcast was tested on image classification and digit recognition tasks with 50 miles of fiber separating the tiny ML hardware from the cloud resources. After only a few calibration adjustments, average accuracy rates exceeding 98% were observed. Results of this quality are good enough for use in commercial products.
Before that happens, the team is working on further refining their methods to achieve better performance. They also want to shrink the shoebox-sized receiver down to the size of a chip so it can be integrated into devices like smartphones. With further refinement of Netcast, big things could be on the horizon for tiny ML.