Xcell Journal issue 83
The spring 2013 edition of Xcell Journal includes a cover story on how the Zynq All Programmable SoC is enabling customers to create Smarter Vision systems. The issue also includes a variety of fantastic methodology and how-to articles for Xilinx users of all skill levels.
XCELLENCE IN EMBEDDED VISION

by Fernando Martinez Vallina
HLS Design Methodology Engineer, Xilinx, Inc.
Vallina@xilinx.com

José Roberto Alvarez
Engineering Director for Video Technology, Xilinx, Inc.
email@example.com

Pairing Vivado HLS with the OpenCV libraries enables rapid prototyping and development of Smarter Vision systems targeting the Zynq All Programmable SoC.

Computer vision has been a well-established discipline in academic circles for several years; many vision algorithms today hail from decades of research. But only recently have we seen the proliferation of computer vision in many aspects of our lives. We now have self-driving cars, game consoles that react to our every move, autonomous vacuum cleaners and mobile phones that respond to our gestures, among other vision products. The challenge today is how to implement these and future vision systems efficiently while meeting strict power and time-to-market constraints. The Zynq™ All Programmable SoC can be the foundation for such products, in tandem with the widely used computer vision library OpenCV and the high-level synthesis (HLS) tools that accelerate critical functions in hardware. Together, this combination makes a powerful platform for designing and implementing Smarter Vision systems.

Embedded systems are ubiquitous in the market today. However, limitations in computing capabilities, especially when dealing with large picture sizes and high frame rates, have restricted their use in practical implementations for computer/machine vision. Advances in image sensor technologies have been essential in opening the eyes of embedded devices to the world so they can interact with their environment using computer vision algorithms. The combination of embedded systems and computer/machine vision constitutes embedded vision, a discipline that is fast becoming the basis for designing machines that see and understand their environments.
DEVELOPMENT OF EMBEDDED VISION SYSTEMS

Embedded vision involves running intelligent computer vision algorithms on a computing platform. For many users, a standard desktop-computing platform provides a conveniently accessible target. However, a general computing platform may not meet the requirements for producing highly embedded products that are compact, efficient and low in power when processing large image-data sets, such as multiple streams of real-time HD video at 60 frames/second.

Figure 1 illustrates the flow that designers commonly employ to create embedded vision applications. Algorithm design is the most important step in this process, since the algorithm will determine whether we meet the processing and quality criteria for any particular computer vision task. At first, designers explore algorithm choices in a numerical-computing environment like MATLAB® in order to work out high-level processing options. Once they have determined the proper algorithm, they typically model it in a high-level language, generally C/C++, for fast execution and adherence to the final bit-accurate implementation.

System partitioning is an important step in the development process. Here, through algorithm performance analysis, designers can determine what por-
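As a concrete illustration of the kind of bit-accurate C/C++ model this flow calls for, the sketch below implements a 3x3 Sobel horizontal-gradient filter using plain arrays, fixed-size types and static loop bounds, a coding style that HLS tools such as Vivado HLS can readily synthesize. The frame dimensions, function names and kernel choice here are illustrative assumptions, not taken from the article.

```cpp
#include <cstdint>
#include <cstdlib>

// Illustrative frame size; a real design would use the sensor's resolution.
constexpr int W = 8;
constexpr int H = 8;

// Compute |Gx| for one interior pixel using the 3x3 Sobel X kernel.
// Fixed bounds and no dynamic memory keep the model synthesis-friendly.
int sobel_x(const uint8_t img[H][W], int y, int x) {
    static const int k[3][3] = {{-1, 0, 1},
                                {-2, 0, 2},
                                {-1, 0, 1}};
    int acc = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            acc += k[dy + 1][dx + 1] * img[y + dy][x + dx];
    return std::abs(acc);
}

// Apply the filter over the interior of the frame; border pixels stay zero.
void sobel_frame(const uint8_t in[H][W], int out[H][W]) {
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x)
            out[y][x] = sobel_x(in, y, x);
}
```

A model like this serves two purposes in the flow: it runs fast enough on a desktop to validate the algorithm against reference images, and it is already in the bit-accurate form the hardware implementation must match.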