Guest blog from Giles Peckham
Autonomous vehicles are a hot topic at the moment. However, the term can describe a wide range of potential autonomous capabilities, so the Society of Automotive Engineers (SAE) has defined six levels of automated driving:
- Level 0 – The driver has full control; the vehicle may issue warnings but does not intervene.
- Level 1 – The vehicle and the driver have shared control, for instance adaptive cruise control.
- Level 2 – The vehicle has full control but the driver must still monitor the vehicle and take control if necessary.
- Level 3 – The driver can turn their attention away from the operation of the vehicle, as it can react to emergency events. The driver must be able to react within a specified period if required to do so.
- Level 4 – The driver does not need to be ready to intervene. Self-driving is supported only within limited areas; outside of these, the vehicle can safely park itself if the driver does not retake control.
- Level 5 – Fully autonomous with no need for the driver to be ready to intervene.
Implementing any of these six levels of automated driving requires a heterogeneous sensor fusion solution, which fuses embedded vision with several additional sensor modalities. This demands a processing solution capable not only of interfacing with the different sensor modalities, but also of delivering an intelligent and immediate response. In addition, given the safety-critical nature of the application, any solution must comply with the ISO 26262 functional safety standard while also providing a secure environment to prevent unauthorised access and modification.
To address these challenges, developers are utilising All Programmable Zynq®-7000 SoC or Zynq® UltraScale+™ MPSoC devices, which offer a unique solution to the challenge at hand. These devices couple programmable logic with high-performance ARM® Cortex-A9 or Cortex-A53 processors respectively, forming a tightly integrated heterogeneous processing unit. For real-time control, as may be required for vehicle control, the Zynq UltraScale+ MPSoC also provides a real-time processing unit containing dual ARM Cortex-R5 processors intended for ISO 26262 applications.
Within the programmable logic, the image processing and sensor fusion algorithms can be implemented in ways that exploit its parallel nature, while the processing system handles higher-level decision making and communication. This provides a solution which is more responsive than a traditional CPU/GPU-based approach, removing its traditional bottlenecks. Programmable-logic-based algorithms therefore offer reduced latency and increased determinism, both critical system parameters for autonomous vehicles.
The reVISION™ acceleration stack gives developers the ability to work with industry-standard frameworks and libraries to create machine learning and embedded vision based system solutions. reVISION enables this by providing support for both OpenVX and OpenCV in the embedded vision sphere, and Caffe for machine learning. At the core of the reVISION stack is the SDSoC™ tool, which enables software-defined development of the All Programmable Zynq-7000 SoC or Zynq UltraScale+ MPSoC using high-level languages such as C, C++ and OpenCL™. As a system-optimising compiler, SDSoC enables the designer, once the algorithm has been developed, to identify bottlenecks which impact performance within the processing system and accelerate these into the programmable logic. This acceleration is performed without the need for an HDL specialist. To support automated driving applications, reVISION provides both hardware-optimised OpenCV functions and machine learning inference stages.
One common automated driving application, which spans SAE Levels 0 to 4, is a collision detection and automatic braking system. Comparing two implementations of a GoogLeNet Convolutional Neural Network, one running on an Nvidia TX1 with 256 cores and a batch size of 8, and one developed with reVISION running on a Xilinx ZU9 with a batch size of 1, demonstrates the power of a reVISION-based approach. Assuming an initial vehicle speed of 65 MPH, the reVISION example reacts within 2.7 ms, applying the brakes and avoiding a collision.
Autonomous vehicles require safe, secure and responsive solutions. The All Programmable Zynq-7000 SoC and Zynq UltraScale+ MPSoC, combined with the reVISION acceleration stack, enable the creation of a more responsive solution, leveraging the capabilities of programmable logic without the need to be an HDL specialist.
Giles has more than 30 years’ experience in the semiconductor industry, starting with the design of ASSPs for consumer applications at Philips Semiconductors (now NXP) before moving on to FAE and marketing roles for gate array and standard cell products, and finally a sales role in the same organisation. After five years in IP product marketing and international sales roles at European Silicon Structures, the e-beam direct-write ASIC company, Giles recognised the increasing potential of FPGAs and joined Xilinx. At Xilinx, he held a number of technical and commercial marketing roles in EMEA before being promoted to run the group. Giles holds a BSc in Electronic Engineering and Physics from Loughborough University, UK, and a Professional Postgraduate Diploma in Marketing from the Chartered Institute of Marketing in the UK. Giles is based in the Xilinx EMEA office in London.