Smart Sensing & Connectivity Technology Vertical Segments

Xilinx All Programmable Devices: Enabling Embedded Vision based on Sensor Fusion

To correctly perceive and safely interact with their environment, many embedded-vision systems, such as vision-guided robots, autonomous cars and drones, rely upon either multiple image sensors of the same type or image sensors combined with other sensing technologies. These additional sensors enable the extraction of information which cannot be obtained from a single image sensor alone, for example the distance to an object. Combining and processing the data provided by multiple sensors is known as sensor fusion.

ADAS Heterogeneous Sensor Fusion example

When the fused sensors are of the same type, for example multiple image sensors, this is referred to as homogeneous sensor fusion; when the sensors are of different types, it is referred to as heterogeneous sensor fusion. Depending upon the application, heterogeneous sensors can cover a wide range of sensing technologies, from IR, RADAR and LIDAR to GPS, accelerometers, and pressure and temperature sensors.

When it comes to implementing both homogeneous and heterogeneous sensor fusion, Xilinx All Programmable Zynq®-7000 and Zynq® UltraScale+™ MPSoC devices offer significant benefits. The processing system provides a range of industry-standard interfaces such as CAN, UART, SPI and I2C, allowing easy interfacing with low-data-rate sensors such as accelerometers, magnetometers, gyroscopes and GPS receivers. These interfaces are also easily scalable: because they are predominantly bus based, extra sensors can be added as required.

To implement the high-data-rate interfaces required by image sensors, IR sensors, RADAR and LIDAR, designers can exploit the capabilities of the programmable logic. This provides two distinct advantages over a traditional CPU/GPU-based approach. The first is the ability to leverage the inherently parallel nature of programmable logic to create parallel image and signal processing pipelines. Programmable logic also enables accurate synchronisation between sensors in applications which require it, for example stereoscopic vision, producing a better quality of result. Implementing these processing pipelines within the programmable logic increases both performance and determinism while reducing latency, allowing a more responsive solution, which is critical for many applications. The second advantage is the ability to interface with any sensor or component, thanks to the flexible nature of programmable logic and its IO cells, requiring only a PHY to align with the physical layer of the protocol. This makes it possible to interface directly with image sensors and high-speed mixed-signal converters.

One common application for sensor fusion is distance calculation. This can use either multiple image sensors or a single image sensor combined with RADAR, as demonstrated in the block diagrams below. Both solutions are well suited to implementation in Zynq-7000 or Zynq UltraScale+ MPSoC devices, as the image/signal processing paths can be implemented within the programmable logic, freeing the processing system to perform the distance calculation and make decisions based upon the calculated distance.

Homogeneous Sensor Fusion example to calculate distance to an object
Heterogeneous Sensor Fusion example to calculate distance to an object

To reduce the development time of these applications, developers can adopt a software-defined development flow using SDSoC™. SDSoC is a system-optimising compiler which enables functions to be moved from running on the processing system to being implemented within the programmable logic. The sensor fusion application can therefore be developed in high-level languages such as C, C++, OpenCL® and SystemC, allowing developers to work directly with the high-level system model. To develop the image processing elements of the sensor fusion application, developers can leverage the reVISION™ acceleration stack, which supports acceleration of OpenCV and machine learning functions. To develop the signal processing elements, they can leverage the Vivado® IP catalogue within SDSoC to implement FIR filters, Fast Fourier Transforms and so on. Developers can also create their own C-callable libraries, enabling the reuse of custom-developed HDL IP blocks from their existing libraries.

Using Zynq-7000 or Zynq UltraScale+ MPSoC devices enables developers to address both the performance and interfacing challenges presented by sensor fusion applications, while developing with SDSoC and acceleration stacks such as reVISION provides the ability to work directly with high-level system models and industry-standard frameworks and libraries, significantly reducing development time.