A new era of smartwatches and wearable technology might just be around the corner, with the introduction of a new type of transistor capable of running AI algorithms.
This reconfigurable transistor runs on a fraction of the electricity used by its silicon-based counterparts. If implemented, it could herald a new wave of smartwatches and wearables equipped with potent AI technology.
Currently, the energy demands of many AI algorithms make them unsuitable for traditional wearables, as they would quickly drain the battery.
To process data using machine learning algorithms, smartwatches, wearables, and other portable sensors have to send the data wirelessly to an AI system in the cloud, which then analyzes the data and sends it back to the device.
Processing data locally on the device is considerably faster, cutting latency. Low latency is crucial for time-sensitive technologies such as manufacturing equipment and driverless vehicles.
This is also relevant to Internet of Things (IoT) systems, which process complex data on computers close to the sensors rather than sending it to the cloud, an approach known as edge computing.
As Mark Hersam at Northwestern University in Illinois explained, “Every time data are passed around, it increases the likelihood of the data being stolen. If personal health data is processed locally — such as on your wrist in your watch — that presents a much lower security risk.”
These are some of the problems researchers at Northwestern University are attempting to solve with their new lightweight transistors, which would be embedded into portable devices.
Machine learning technologies for portable devices
The key differentiation of these new transistors is their composition of molybdenum disulfide and carbon nanotubes.
These materials allow the transistor to be perpetually reconfigured by electric fields, handling multiple steps in AI-driven processes almost instantaneously.
In contrast, silicon-based transistors can only manage one step at a time, acting as minuscule on-or-off electronic switches. As a result, an AI task that would typically necessitate 100 silicon-based transistors might only require one of these reconfigurable transistors, leading to a drastic cut in energy use.
“The low energy results from the fact that we can implement the [AI algorithm] with a 100-fold reduction in the number of transistors, compared to conventional silicon technology,” Hersam said.
Hersam and his research team showcased the prowess of these transistors by applying them to a standard machine-learning-based AI algorithm that analyzed heartbeat data from 10,000 electrocardiogram tests.
Impressively, the AI managed a 95% accuracy rate in categorizing the heartbeat data samples into one “normal” group and five distinct “arrhythmic” groups, including premature ventricular contraction.
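The classification task described above can be illustrated with a minimal sketch. This is not the Northwestern team's implementation; it is a toy nearest-centroid classifier over hypothetical two-dimensional heartbeat features, and all class names except "normal" and "premature ventricular contraction" (the two mentioned in the article) are placeholder examples of common arrhythmia categories.

```python
import math

# Hypothetical per-class centroids over two illustrative features
# (e.g. QRS complex width in seconds, preceding RR interval in seconds).
# Only "normal" and "premature ventricular contraction" come from the
# article; the other four labels are stand-ins for the five arrhythmic groups.
CENTROIDS = {
    "normal": [0.08, 0.80],
    "premature ventricular contraction": [0.14, 0.52],
    "atrial premature beat": [0.09, 0.60],
    "left bundle branch block": [0.13, 0.78],
    "right bundle branch block": [0.12, 0.79],
    "paced beat": [0.16, 0.85],
}

def classify(beat):
    """Assign a heartbeat feature vector to the nearest class centroid."""
    return min(CENTROIDS, key=lambda label: math.dist(beat, CENTROIDS[label]))

print(classify([0.085, 0.79]))  # closest to the "normal" centroid
```

A real system would learn its decision boundaries from the 10,000 labeled electrocardiogram recordings rather than using fixed centroids, but the core step is the same: map each beat's features to one of six classes.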
Vinod Sangwan, another research team member at Northwestern University, emphasized the potential implications of this advancement, especially for devices that have limited battery life or cannot maintain a consistent internet connection for cloud-based AI processing.
However, incorporating these transistors into existing manufacturing workflows while ensuring their durability remains a challenge, and doing so is essential to commercial viability.
This is the latest entry in a line of breakthroughs that bring machine learning to low-power devices.
Earlier in the year, researchers at IBM built lightweight brain-inspired chips capable of processing algorithmic workloads with low power demands, again showing promise for portable devices.
In time, these technologies could help power autonomous bio-inspired robots that process data locally, similarly to organic organisms.