
Artificial Intelligence and its real-world applications

In-depth video seminar with Stan Klinke

The topic of artificial intelligence (AI) has received enormous attention in recent years. This is not surprising, because AI can help us do much more than traditional programming algorithms. Various aspects of AI already affect our lives in many ways: AI can find a specific piece of information in an ocean of online data, reconstruct our photos and videos, assist with driving, accelerate physical simulations to real-time speeds, and even help us search for cures for diseases such as COVID-19. What makes AI so unique is the fact that it does not use traditional, deterministic programming algorithms. Instead, it relies on artificial neural networks (ANNs), mimicking the way our brains work. This allows it to adapt quickly to new situations in which traditional algorithms would fail due to the lack of input parameters.

Although AI has made its way into many applications used every day, people still react with slight disbelief when they first hear about it. Some even fear that AI could gain control of everything, unleashing an army of intelligent robots to destroy humanity. Indeed, AI owes much of its infamous reputation to pop culture. However, engineers working on AI and its subsets (machine learning and deep learning) know better. For researchers, training an ANN comes down to finding the minima of an n-dimensional “cost” function: its gradient is used to adjust all the weights and biases (parameters) of the network until it delivers reliable and accurate inference.
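To make that less abstract, here is a minimal, illustrative Python sketch of how a cost function drives the adjustment of a network’s weights and biases through gradient descent. All names and numbers are invented for the example and are not taken from the seminar material.

```python
# Minimal, illustrative sketch: gradient descent on a tiny single-layer model.
# The cost ("loss") function is what the training loop tries to minimize by
# nudging the weights and biases along the negative gradient.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))          # 100 samples, 3 input features
y = rng.normal(size=(100, 1))          # target values

w = np.zeros((3, 1))                   # weights (parameters)
b = np.zeros((1,))                     # bias (parameter)
lr = 0.1                               # learning rate

for _ in range(1000):
    pred = x @ w + b                   # forward pass
    err = pred - y
    cost = np.mean(err ** 2)           # the quantity being minimized
    grad_w = 2 * x.T @ err / len(x)    # gradient w.r.t. weights
    grad_b = 2 * err.mean()            # gradient w.r.t. bias
    w -= lr * grad_w                   # step toward a minimum of the cost
    b -= lr * grad_b
```

Real networks repeat exactly this idea, only with many layers, non-linear activations, and vastly more parameters.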

Not so long ago, engineers and researchers alike dreamed of building an intelligent, human-like robotic companion from scratch. With recent technological breakthroughs and the advent of cheap yet powerful high-speed computing technology, they finally got a chance to test complex mathematical ANN models in the real world. Such models involve tens or even hundreds of thousands of parameters – and this is what makes AI a bleeding-edge technology: no hardware is too fast when it comes to evaluating such complex mathematical functions.

Another layer of complexity is the rapid evolution of ANN models, which introduces additional complications when dedicated, fixed-function hardware (ASICs, CPUs, and GPUs) is used. Unlike fixed-function hardware, FPGAs can be reconfigured at the hardware level, ensuring compatibility with the broadest range of ANN models, both currently available and yet to come. FPGA devices are therefore mostly used in applications where real-time inference must be achieved without compromise – such as ADAS systems, where multiple camera streams need to be processed in real time. FPGA-based devices are also an excellent choice for data center applications, as they provide much-needed flexibility while significantly reducing power consumption compared to purely CPU or CPU/GPU solutions.

On the other hand, with the growing popularity of the Internet of Things (IoT), some trade-offs must be made to allow inference in a very different kind of environment, where both power and cost are limiting factors. Inference for IoT is typically performed in the cloud (usually a data center-oriented environment), and many cloud service providers offer one form of AI processing or another. Lately, however, we hear more and more about yet another term: edge computing. Edge computing means using advanced yet cheap, ultra-fast microprocessors and microcontrollers to decentralize data processing and thus shift the processing load from the cloud to the edge platform. Running inference at the edge makes it much faster and much easier to control. Some devices even allow rudimentary inference to run on the IoT node directly – as demonstrated by the STMicroelectronics accelerometer series (e.g., LSM6DSOX, LSM6DSRX), which supports on-chip decision making. Thanks to their embedded machine learning cores, these sensors can be trained to recognize certain movement types without relying on the host controller for intensive calculations – the idea is sketched below.
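As a rough illustration of that idea (not ST’s actual tool flow, which uses its own configuration utilities), the following Python sketch trains a small decision tree – the kind of compact model such in-sensor machine learning cores run – on synthetic accelerometer features to distinguish two movement types. All feature names and values here are invented for the example.

```python
# Illustrative sketch only: train a shallow decision tree offline on
# accelerometer-derived features (synthetic data), the way an in-sensor
# machine learning core would later classify movements on its own.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical features computed over short acceleration windows:
# [mean magnitude (g), variance, peak-to-peak]
walking = np.column_stack([rng.normal(1.1, 0.05, 200),
                           rng.normal(0.30, 0.05, 200),
                           rng.normal(1.5, 0.2, 200)])
stationary = np.column_stack([rng.normal(1.0, 0.02, 200),
                              rng.normal(0.01, 0.005, 200),
                              rng.normal(0.1, 0.05, 200)])

X = np.vstack([walking, stationary])
y = np.array([1] * 200 + [0] * 200)    # 1 = walking, 0 = stationary

# A shallow tree keeps the model small enough for resource-limited hardware.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[1.12, 0.28, 1.4]]))  # expected output: [1] (walking)
```

Once trained, a model this small needs only a handful of comparisons per classification, which is why it can run inside the sensor itself instead of on the host controller.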

In this first article of a series on AI/ML, Stanislaw Klinke, a field application engineer from EBV Electronics, explains some of the most important AI concepts, while also covering AI/ML solutions based on Xilinx FPGA hardware. Visit the following link to learn more about the author, get familiar with the agenda, and register to watch his on-demand video seminar: https://www.avnet.com/wps/portal/ebv/resources/training-and-events/event/introduction-to-machine-learning

As a technical editor and writer, I have the opportunity to bring technology closer to people. My updates focus on the most recent solutions and applications from the global semiconductor industry.
