From smart home assistants (think Alexa, Google Assistant and Siri) to advanced driver assistance systems (ADAS) that notify drivers when they're drifting out of their lane, the world relies on edge AI to provide real-time processing for these increasingly common and important devices. Edge AI runs artificial intelligence directly on a device, computing near the data source, rather than relying on cloud computing in an off-site data center.
Edge AI offers reduced latency, faster processing, less dependence on constant internet connectivity and fewer privacy concerns. This technology represents a significant shift in how data is processed, and as demand for real-time intelligence grows, edge AI is well-positioned to continue its substantial impact across many industries.
The greatest value of edge AI is the speed it can provide for critical applications. Unlike cloud/data center AI, edge AI is not sending data over network links and hoping for a reasonable response time. Rather, edge AI computes locally (often on a real-time operating system) and excels at providing timely responses.
Edge AI is well equipped for situations like machine vision on a factory line, where a flagged product must be diverted within a second. Likewise, you wouldn't want your car's safety signals to depend on the response times of the network or servers in the cloud.
Edge AI for real-time processing
Many real-time activities are driving the need for edge AI, with smart home assistants, ADAS, patient monitoring and predictive maintenance among the most notable applications. From quick responses to household questions to notifications of a vehicle's lane departure or a glucose reading sent to a smartphone, edge AI offers swift responses while minimizing privacy concerns.
We've seen edge AI do well in the supply chain for quite some time, particularly in warehousing and factories. There has also been substantial growth for the tech within the transportation industry over the last decade, such as delivery drones navigating challenging conditions like cloud cover.
Edge AI is also doing great things for engineers, especially in the med-tech sector, a critical area of advancement. For example, engineers developing pacemakers and other cardiac devices can give physicians tools to look for abnormal heart rhythms while proactively programming devices to offer guidance on when to seek further medical intervention. Med-tech will continue to grow its use of edge AI and build out further capabilities.
Generating edge AI models
As more and more systems in everyday life gain some level of machine learning (ML) interaction, understanding this world becomes vital for engineers and developers planning the future of user interactions.
The strongest opportunity with edge AI is ML, which matches patterns based on a statistical algorithm. The pattern could be the presence of a human, a spoken "wake word" (e.g., "Alexa" or "Hey Siri") for a smart home assistant, or a motor starting to wobble. For the smart home assistant, wake-word models run at the edge and do not need to send your voice to the cloud. Detecting the wake word wakes the device and tells it to start listening for further commands.
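As a rough illustration of how little is involved at run time, the sketch below shows a minimal on-device wake-word loop using TensorFlow Lite. The model file (kws.tflite), the label set and the one-second float32 input are hypothetical placeholders, not any particular product's implementation.

```python
# Minimal on-device wake-word loop with TensorFlow Lite.
# Assumptions: a keyword-spotting model ("kws.tflite") that takes one
# second of 16 kHz float32 audio and scores the labels below. The model
# file and label list are hypothetical placeholders.
import numpy as np
import sounddevice as sd  # microphone capture
import tensorflow as tf

LABELS = ["silence", "unknown", "hey_device"]  # hypothetical label set
SAMPLE_RATE = 16_000

interpreter = tf.lite.Interpreter(model_path="kws.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

while True:
    # Record one second of audio; nothing leaves the device.
    audio = sd.rec(SAMPLE_RATE, samplerate=SAMPLE_RATE, channels=1,
                   dtype="float32")
    sd.wait()
    interpreter.set_tensor(inp["index"], audio.reshape(inp["shape"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    if LABELS[int(np.argmax(scores))] == "hey_device":
        print("Wake word detected -- start listening for commands")
        break
```

The loop simply reports when the wake word scores highest; a real device would also debounce with a score threshold and a cooldown period.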
There are several pathways to generating an ML model: either with a machine learning framework (like TensorFlow or PyTorch) or with a SaaS platform (like Edge Impulse). Most of the "work" in building a good ML model goes into creating a representative dataset and labeling it well.
Currently, the most popular ML approach for edge AI is the supervised model, which is trained on labeled sample data where the output is a known value that can be checked for correctness, like having a tutor check and correct work along the way. This type of training is typically used in applications such as classification and regression. Supervised training can be useful and highly accurate, but it depends greatly on the quality of the labeled dataset and may generalize poorly to inputs unlike its training data.
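To make the tutor analogy concrete, here is a minimal supervised-training sketch in Keras. The two-class "normal vs. wobble" motor-vibration framing and the synthetic data are illustrative stand-ins for a real labeled dataset.

```python
# Minimal supervised classification sketch with TensorFlow/Keras.
# The synthetic "normal vs. wobble" vibration data below stands in for
# a real labeled dataset of sensor recordings.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# 1,000 labeled samples of 64 accelerometer readings each.
x_normal = rng.normal(0.0, 0.1, size=(500, 64))            # label 0: smooth
x_wobble = rng.normal(0.0, 0.1, size=(500, 64)) \
           + 0.5 * np.sin(np.linspace(0, 8 * np.pi, 64))   # label 1: wobble
x = np.concatenate([x_normal, x_wobble]).astype("float32")
y = np.concatenate([np.zeros(500), np.ones(500)]).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The labels act as the "tutor": every prediction is checked against a
# known answer during training.
model.fit(x, y, epochs=5, validation_split=0.2)
```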
Hardware to run edge AI workloads
At DigiKey, we are well-positioned to assist in edge AI implementations, as they generally run on microcontrollers, FPGAs and single-board computers (SBCs). DigiKey partners with top suppliers to provide several generations of hardware that run ML models at the edge. We've seen some great new hardware released this year, including NXP's MCX N series, and we'll soon be stocking STMicroelectronics' STM32MP25 series.
In past years, dev boards from the maker community have been popular for running edge AI, including SparkFun's Edge Development Board Apollo3 Blue, Adafruit's EdgeBadge, Arduino's Nano 33 BLE Sense Rev 2 and the Raspberry Pi 4 or 5.
Neural processing units (NPUs) are gaining ground in edge AI. NPUs are specialized ICs designed to accelerate ML and AI workloads built on neural networks: structures loosely modeled on the human brain, with many interconnected nodes (neurons) arranged in layers that process and pass along information. A new generation of NPUs with dedicated math-processing hardware is emerging, including NXP's MCX N series and ADI's MAX78000.
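Because NPUs generally accelerate low-precision integer math, models are usually quantized before deployment. The sketch below shows post-training int8 quantization with the TensorFlow Lite converter; the untrained stand-in model and random calibration data are placeholders for a trained model and representative samples.

```python
# Post-training int8 quantization with the TensorFlow Lite converter,
# the usual preparation step for NPU-class hardware. The model and the
# calibration data below are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([  # stand-in for a trained model
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def representative_data():
    # A modest set of realistic inputs lets the converter choose int8
    # scaling factors for each tensor.
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```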
We're also seeing AI accelerators for edge devices, a category that is still taking shape. Early offerings of note include Google's Coral line and Hailo's accelerators.
The importance of ML sensors
High-speed cameras with ML models have functioned in supply chains for quite some time. They have been used to decide where to send products within a warehouse or to find defective products on a production line. Now, suppliers are creating low-cost AI vision modules that can run ML models to recognize objects or people.
Although running an ML model will require an embedded system, more products will continue to be released as AI-enabled electronic components. This includes AI-enabled sensors, also known as ML sensors. While adding an ML model to most sensors will not make them more efficient at their application, a few types of sensors can perform in significantly more efficient ways with ML training.
Some AI sensors come preloaded with an ML model ready to run. For example, the SparkFun eval board for sensing people is preprogrammed to detect faces and return information over the Qwiic I2C interface. Other AI sensors, like the Nicla Vision from Arduino or the OpenMV Cam H7 from Seeed Technology, are more open-ended and need to be loaded with a trained ML model for whatever they are looking for (defects, objects, etc.).
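For a sense of how simple the preloaded case is for the host, here is a sketch of polling a person-detecting Qwiic sensor over I2C from a Raspberry Pi. The I2C address (0x62) and the five-byte result layout are assumptions modeled on a typical person-sensor datasheet; check your device's documentation for the real protocol.

```python
# Sketch: polling a Qwiic/I2C person-detection sensor from a Raspberry Pi.
# The address and result layout are assumptions; consult the datasheet.
import time
from smbus2 import SMBus, i2c_msg

SENSOR_ADDR = 0x62  # assumed I2C address

with SMBus(1) as bus:  # I2C bus 1 on a Raspberry Pi
    while True:
        # Assumed layout: 1 byte face count, then a 4-byte bounding box.
        msg = i2c_msg.read(SENSOR_ADDR, 5)
        bus.i2c_rdwr(msg)
        count, x0, y0, x1, y1 = list(msg)
        if count:
            print(f"{count} face(s); first box ({x0},{y0})-({x1},{y1})")
        time.sleep(0.2)
```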
By using neural networks as the computational engine, it is possible to detect and track objects and people as they move into the camera sensor's field of view.
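A condensed sketch of that detect-on-entry loop is below, using OpenCV's DNN module with a MobileNet-SSD network. The two model files are assumed to be downloaded separately, and the 0.5 confidence threshold is arbitrary.

```python
# Sketch: flagging people entering a camera's field of view with
# OpenCV's DNN module and a MobileNet-SSD model (files obtained
# separately). Class ID 15 is "person" in that model's class list.
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
cap = cv2.VideoCapture(0)  # default camera; stop with Ctrl+C

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: [1, 1, N, 7]
    for i in range(detections.shape[2]):
        class_id = int(detections[0, 0, i, 1])
        confidence = float(detections[0, 0, i, 2])
        if class_id == 15 and confidence > 0.5:  # a person in view
            box = detections[0, 0, i, 3:7] * [w, h, w, h]
            print("person at", box.astype(int))

cap.release()
```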
The future of edge AI
As many industries evolve and rely more on data processing technology, edge AI will continue to see more widespread adoption. By enabling faster, more secure data processing at the device level, edge AI will drive profound innovation in the years ahead.
As ML training methods, hardware and software evolve, edge AI is well-positioned to grow exponentially and support many industries. At DigiKey, we're committed to staying ahead of edge AI trends, and we look forward to supporting innovative engineers, designers, builders and procurement professionals worldwide with a wealth of solutions, frictionless interactions, tools and educational resources to make their jobs more efficient.
Shawn Luke is a technical marketing engineer at DigiKey. DigiKey is recognized as the global leader and continuous innovator in the cutting-edge commerce distribution of electronic components and automation products worldwide, providing more than 15.6 million components from over 3,000 quality name-brand manufacturers. For more edge AI information, products and resources, visit DigiKey.com/edge-ai.