Embedded Vision | MVPro 17 | October 2019


EMBEDDED VISION SPECIAL FEATURE: THE IMPACT OF EMBEDDED VISION EXAMINED

DEBATE: IS NEUROMORPHIC TECHNOLOGY THE FUTURE?

ROBOTICS: MEET VINESCOUT, THE WINE HARVESTING ROBOT

ISSUE 17 - OCTOBER 2019

mvpromedia.eu MACHINE VISION & AUTOMATION



MVPRO TEAM

Lee McLaughlan, Editor-in-Chief - lee.mclaughlan@mvpromedia.eu
Cally Bennett, Group Business Manager - cally.bennett@mvpromedia.eu
Alex Sullivan, Publishing Director - alex.sullivan@mvpromedia.eu
Georgie Davey, Designer - georgie.davey@cliftonmedialab.com

CONTENTS

4 ED’S WELCOME - Keeping you on your toes!
6 INDUSTRY NEWS - Who is making the headlines
11 PRODUCT NEWS - What’s new on the market
14 EURESYS - Increasing vision performance
17 ORIGIN INSIGHT - How to generate sales in the technology sector
18 DENIS BULGIN - Vision and the global economy
21 EDMUND OPTICS - Your partner for advanced imaging optics
22 INSPEKTO - Inspekto’s 2020 vision after $15m investment
24 MATRIX VISION - Smart cameras behind optical image acquisition system
26 TELEDYNE IMAGING - Steve Geraghty looks at the world of embedded vision
29 STEMMER IMAGING - Technology Forums offer embedded vision insights
30 IDS - How Ensenso stereo 3D cameras enhance quality assurance in manufacturing
32 BASLER - Embedded vision solutions - smart sensors for IoT applications
34 EVE CONFERENCE - Embedded VISION Europe ready to deliver
35 EMBEDDED VISION ALLIANCE - The Vision Accelerator Programme
37 COGNEX - Deep learning or machine vision?
38 GARDASOFT - Six essential considerations for machine vision lighting
40 INIVATION - Is neuromorphic technology the future of machine vision?
42 BAUMER - Industrial cameras with precision time protocol
44 XILINX - Taking embedded vision to the next level
46 SUNDANCE - Meet VineScout, the wine harvesting robot
48 CRAV - Destination Silicon Valley for the Collaborative Robots, Advanced Vision & AI conference
49 MIDOPT - Lian Tan explains why the CRAV show in Silicon Valley is good for business
50 WORLD CLASS LEADER - What makes a great leader in the industrial and manufacturing sectors

Visit our website for daily updates

www.mvpromedia.eu


MVPro Media is published by IFA Magazine Publications Ltd, Arcade Chambers, 8 Kings Road, Bristol BS8 4AB Tel: +44 (0)117 3258328 © 2019. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.



KEEPING YOU ON YOUR TOES!

We are certainly at the business end of the year. The glut of conferences, forums and events is coming thick and fast over these final months as companies jostle not only to showcase their new products coming to market, but also to get up on stage and deliver crucial insights that show their business is at the forefront, keeping the rest of the industry on their toes. Whether it is ‘bigger this, faster that, the world’s first whatever’, there is no stopping the technology advances that are happening right now.

One of those growing sectors is embedded vision, which is having an impact not just on the machine vision sector but also in robotics and automation as they converge. Evidence of that is the CRAV Conference in Silicon Valley in November, which we preview, and we get the opinion of Midopt as to why they attend. Closer to home, Embedded VISION Europe returns to Stuttgart this month (October).

We have purposely delved into the world of embedded vision to discover more about how it is developing and where it is being applied. Huge thanks to our contributors from Teledyne Imaging, Basler, IDS and STEMMER IMAGING for their assistance.

I have also taken the plunge to learn more and had the privilege of attending the 4th European Machine Vision Forum in Lyon back in September. The level of research and development going on is phenomenal, while the forum also gave me an opportunity to meet with key industry players. I look forward to meeting more of you in the coming months.

Turning our attention to this issue, we continue to combine both machine vision and automation. We reflect on an incredible first year for Inspekto with CEO Harel Boren, discover more about neuromorphic technology with iniVation and reflect on the financial state of the sector in the first article from renowned industry journalist Denis Bulgin.

On the automation side, we take a closer look at grippers, while my favourite story of this issue is about VineScout - a little robot with an embedded processor that is playing a key role for wine growers. I’ll drink to that.

Lee McLaughlan Editor lee.mclaughlan@mvpromedia.eu

Finally, please continue to send me your press releases and articles, not only for the magazine but also for our new-look website, which puts us and you at the forefront and keeps the rest on their toes. Check it out at www.mvpromedia.com

Arcade Chambers, 8 Kings Road, Clifton, Bristol, BS8 4AB

Until next time…

www.mvpromedia.eu

MVPro B2B digital platform and print magazine for the global machine vision industry



CustomLogic Your own FPGA logic

Design and upload your own FPGA code to the Coaxlink boards.

Coaxlink Octo

PCIe 3.0 eight-connection CoaXPress frame grabber

Coaxlink Quad CXP-12

Four-connection CoaXPress CXP-12 frame grabber

www.euresys.com


INDUSTRY NEWS

AUTOMATE 2021 SEES RECORD NUMBER OF EARLY SIGN-UPS

North America’s leading automation trade show has seen more than 250 exhibitors book over 150,000 sq. ft. of exhibit space for Automate 2021.

“Automate has established itself as the most important show for users in every industry looking for automation solutions and the latest advances in automation, including robotics, machine vision, motion control, AI, and related technologies,” said Jeff Burnstein, president of the Association for Advancing Automation (A3), the show’s sponsor. “As the use of automation grows globally, there’s an increasing demand for practical, real-world information about how to successfully automate.”

This is the largest early sign-up in the history of the event, which dates back to 1977, and a strong endorsement of the show’s move to its new home in Detroit, Michigan.

He noted that the five-day conference that accompanies the show is filled with practical presentations from leading global experts. The trade show floor itself also offers “expert huddles” focused on specific areas of interest to users, such as collaborative robots, autonomous mobile robots, 3D vision, embedded vision, quality control, machine learning/AI, IIoT (Industrial Internet of Things), and connected motion control.

Automate will run May 17-20, 2021, at Cobo Center in Detroit, Michigan, USA, and ultimately should feature more than 500 exhibiting companies. More than 20,000 attendees are expected from small, medium and large companies in every industry looking to successfully deploy automation. Exhibit space is now open to all companies who wish to exhibit.

For more information about exhibiting and attending Automate 2021, visit automateshow.com MV

ISRA VISION BUYS SMART FACTORY SPECIALIST PHOTONFOCUS

ISRA Vision AG has successfully completed the acquisition of Swiss company Photonfocus as it continues its strategy to focus on smart factory automation. Photonfocus’s specialised sensor technologies and expertise in the development of embedded systems with integrated intelligence will be implemented in new ISRA products and product generations to tap into new market potential. The focus is on linking ISRA’s 3D machine vision expertise with the automation of robots in established markets like the automotive industry as well as in other general industries.


The integration of Photonfocus will enable quick progress to be made on ISRA’s demanding innovation roadmap. For almost two decades now, Photonfocus has been developing sensor chip designs for industrial image processing with high-speed demands, based on a modular architecture. Current developments are aimed at the use of 3D technology for industrial applications and hyperspectral sensors. Based on the technologies and expertise of the company in the strategic areas of hardware and software development, ISRA’s management is targeting strong future turnover potential. The cooperation with Photonfocus is another step on the path to expanding the business with Industry 4.0 components and embedded products. Further acquisitions, to strengthen both the application portfolio for vertical markets and the generic Industry 4.0 platforms, are currently also being prepared. MV



LYNRED AND ADE TECHNOLOGY JOINT INITIATIVE FOR HEALTHCARE MARKETS

ADE Technology will use Lynred’s thermal sensors to develop new systems for emerging applications in Taiwan, from medical care and assisted living for the elderly to livestock management.

Lynred has announced it has secured a large-volume contract for its 80x80 infrared detectors from advanced technology integrator ADE Technology. This is the first time that Lynred, a global leader in designing and manufacturing high-quality infrared technologies for aerospace, defense and commercial markets, has signed with the Taiwan-based company, a provider of off-the-shelf or tailored solutions for the security, healthcare, farming management and residential markets.

ADE Technology will integrate high volumes of infrared detectors from Lynred, with the aim of developing a first-of-its-kind system for the market. This system will be designed with multiple sensors, including thermal imagery, that can monitor and display real-time data on vital signs in applications for the medical care of people and animals.

“Lynred is pleased to supply ADE Technology with thermal imaging solutions in an 80x80 format that, in association with other technologies, will enable it to introduce to the market a 24/7 monitoring solution for the elderly and other systems that assist in medical care,” said Jean-François Delepau, chairman of Lynred. “The opening up of the Taiwanese market in recent years has led to new market opportunities and demands. Lynred is well-equipped with innovations and its business model permits the commercial independence that can help system integrators in this region fulfil the promise of emerging applications.”

Jeffrey Chew, CEO and CRO of ADE Technology Inc, added: “There are more and more applications in the comprehensive care industry. We are keen to collaborate with Lynred on preserving individual wellbeing and protecting people from health hazards. With these state-of-the-art thermal sensors from Lynred, we can build custom systems for emerging applications and perfectly meet the needs of our clients.”

Through this contract, Lynred extends its reach in Asia, where the company is planning to increase its foothold. MV


INDUSTRY NEWS

FIFTH DIGITALISING MANUFACTURING CONFERENCE TO DRIVE THE DIGITAL JOURNEY

The fifth annual digital manufacturing conference at the Manufacturing Technology Centre will bring together experts on the digital revolution sweeping through the manufacturing industry.

One of the keynote speakers who will take the lead on day one of the conference will be Juergen Maier CBE, CEO of Siemens UK and an acknowledged expert on the digital manufacturing revolution.

The two-day conference in November, entitled Digitalising Manufacturing 2019: Making Digital a Reality, will inform delegates that digital advances and smart factories are as much about people as technology.

While the fourth industrial revolution opens up huge potential for UK manufacturers, companies have to invest in people and skills to reap the benefits of technology and industrial automation. The conference at the MTC’s Advanced Manufacturing Training Centre on November 4 and 5 will aim to give support on how to progress the digital journey based on previous experiences and lessons learned. The MTC has partnered with Made Smarter, an industry-led programme aimed at boosting the UK economy using advanced digital technologies including artificial intelligence, advanced robotics, 3D printing, and augmented and virtual reality.

The Digitalising Manufacturing 2019 conference will be held at the MTC’s Advanced Manufacturing Training Centre, Ansty Park, on November 4 and 5. To register for the event, please visit: www.the-mtc.org/digital2019. MV

TERRA DRONE DEMOS SAFE USE OF UAVS WITH MITSUBISHI ESTATE

Japan-headquartered Terra Drone Corporation, one of the world’s leading providers of industrial drone solutions, in collaboration with real estate giant Mitsubishi Estate, has successfully conducted a pilot test in the city of Tokyo to showcase how unmanned aerial vehicles (UAVs) or drones can be used in urban environments to aid logistics, security and surveillance, and disaster prevention. The demonstration was conducted in one of Japan’s leading business centers - the Marunouchi area of Tokyo. Leveraging Terra Drone’s homegrown unmanned traffic management (UTM) system, a drone flew autonomously at an altitude of 2.5 meters, navigating seamlessly between the high-rises of Marunouchi. The UAV captured aerial footage and relayed it back to a control room where analysts monitored the video to detect issues like logistical bottlenecks and security threats.

The test flight was organized as part of the ‘Marunouchi UrbanTech Voyager’ initiative by Mitsubishi Estate. The aim of the UrbanTech Voyager project is to transform the Marunouchi district into a business and innovation hub by utilizing urban-development-friendly artificial intelligence (AI), Internet of Things, and robotics.

In Japan, drone use is active in mountainous areas and remote islands, but full-scale use in urban areas is not expected before 2022. This demonstration aimed to showcase how a robust UTM system can make it possible not only to avoid collisions but also to realize the efficiency of logistics and security operations in everyday towns. MV



INDUSTRY NEWS

PLEORA ANNOUNCES TWO STRATEGIC PARTNERSHIPS

Pleora Technologies has unveiled two strategic partnerships that will combine expertise in sensor networking and artificial intelligence. The partnerships are with Lemay.ai and Mission Control Space Services, with both agreements announced just a few weeks apart. The collaboration with Lemay.ai is a strategic partnership that simplifies the introduction of machine learning capabilities into real-time imaging applications, while the work with Mission Control Space Services will focus on robotic control and automation to enable advanced driver assistance in local situational awareness (LSA) and C4ISR military imaging systems. Both partnerships will utilise Pleora’s RuggedCONNECT platform.

Lemay.ai’s AI expertise will be integrated directly into Pleora’s smart frame grabber and embedded interface products, and will be available as a configurable standalone SDK (software development kit) tuned for real-time vision applications. “Partnering with Lemay, we’re significantly lowering the barrier to entry so designers and end-users can quickly and cost-effectively leverage the benefits of AI, machine learning, and sensor networking through drop-in hardware and software solutions,” said Harry Page, President, Pleora.

Commenting on the Mission Control agreement, Page added: “Consulting with end-users throughout our product design, vehicle crews consistently highlight how uncharted, quickly changing terrain poses safety concerns and hinders mission planning. With our modular and scalable RuggedCONNECT Smart Video Switcher platform and plug-in Vehicle/Terrain AI Safety System from Mission Control, designers can more easily integrate advanced capabilities that help reduce accidents and heighten battlefield intelligence.”

Pleora’s RuggedCONNECT platform converts sensor data from multiple sources into a standardized feed that is transmitted over a low-latency, multicast Gigabit Ethernet (GigE) network to endpoints. Manufacturers can design straightforward camera-to-display systems and cost-effectively evolve to fully networked architectures integrating different sensor and display types, switching, processing, and recording units. MV

PHOTONEO NAMED AS ONE OF EUROPE’S ‘ONES TO WATCH’

European Business Awards, one of the world’s largest and longest running business competitions, has identified Photoneo as ‘One to Watch’ in Europe!

Adrian Tripp, CEO of the European Business Awards, said: “The companies chosen as ‘Ones to Watch’ are the most inspirational, successful and dynamic in Europe. The talent and tenacity at the heart of these businesses create jobs and drive Europe’s prosperity. This ‘Ones to Watch’ list of excellence is a benchmark of success for the rest of the European business community.”

Photoneo was chosen as it demonstrates exceptional achievement in one of the 18 European Business Awards’ categories and reflects the programme’s core values of innovation, success and ethics.

Companies on the ‘Ones to Watch’ list come from all sectors - from manufacturing to retail, agriculture to technology - and all sizes, from start-ups to billion-euro businesses. The ‘Ones to Watch’ lists for 33 countries across Europe can be found at www.businessawardseurope.com.

“We are honoured to have been nominated for the national award and we’re looking forward to competing for first place at the international level,” says Jan Zizka, CEO of Photoneo. MV



Six Essential Considerations for Machine Vision Lighting

2. Get the most from your LEDs

Accurate light intensity requires dependable current drive. LEDs are remarkably reliable, but light intensity changes with both age and temperature. These effects can be minimised by pulsing the light, which also permits safe overdrive at up to 10x the specified intensity.
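The arithmetic behind that overdrive figure is worth spelling out. As a rough back-of-envelope sketch (our illustration, not a Gardasoft formula - real controllers also enforce pulse-width and thermal limits), keeping the average LED current at or below the rated continuous current means the duty cycle must shrink in proportion to the overdrive factor:

```cpp
#include <iostream>

// Back-of-envelope rule: if average current must stay at or below the
// rated continuous drive current, then duty_cycle <= 1 / overdrive_factor.
// Real lighting controllers additionally cap pulse width and temperature.
double maxDutyCycle(double overdriveFactor) {
    return 1.0 / overdriveFactor;
}

int main() {
    // At 10x overdrive, the LED may be on for at most 10% of the time.
    std::cout << maxDutyCycle(10.0) * 100.0 << "% maximum duty cycle\n";
    return 0;
}
```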

Gardasoft Vision has used our specialist knowledge to help Machine Builders achieve innovative solutions for over 20 years. To read more about the Six Essential Considerations for Machine Vision Lighting see www.gardasoft.com/six-essential-considerations

Semiconductor | PCB Inspection | Pharmaceuticals | Food Inspection

Telephone: +44 (0) 1954 234970 | +1 603 657 9026
Email: vision@gardasoft.com
www.gardasoft.com


PRODUCT NEWS

BASLER EMBEDDED VISION SOLUTIONS NOW AVAILABLE FOR NXP’S I.MX 8

Basler is expanding its embedded vision solution portfolio with products that are compatible with NXP’s i.MX 8 applications processors. The combination of Basler’s camera know-how, together with powerful processors from NXP, can implement ideal embedded vision solutions, including for industrial applications.

NXP’s i.MX 8 applications processors impress with robust long-term availability and meet rigorous requirements for industrial temperature and operating ranges. The broad product portfolio offers the ideal processor for simple and complex image processing applications. Basler is launching two new dart camera modules with BCON for MIPI interface and integrated Image Signal Processor (ISP), with 5 MP and 13 MP resolution, as well as two new Add-on Camera Kits.

The kits are the perfect starting point to add vision to NXP’s evaluation boards without much effort. The Add-on Camera Kits support evaluation boards with the i.MX 8QuadMax, i.MX 8M Quad and i.MX 8M Mini processors. They are available with both the 5 MP and the 13 MP dart camera module. The corresponding driver makes commissioning as easy as possible and is optimised for each respective processor. In addition to the camera module, the kits also include a cable, lens, and a BCON for MIPI to Mini SAS adapter. MV

HIGH FLYERS

Smart industrial cameras for perfect images plus real added value for your applications. Get inspired at: www.mv-highflyers.com

MATRIX VISION GmbH · Talstr. 16 · 71570 Oppenweiler, Germany · Phone: +49 7191 9432-0


PRODUCT NEWS

IDS ANNOUNCES MORE THAN 100 NEW USB3 VISION CAMERA MODELS

IDS Imaging Development Systems is expanding its USB3 Vision camera range by more than 100 models. The company integrates the entire range of Sony sensors in several camera families, which it currently already offers with a GigE Vision interface.

With IDS peak, it also provides a new SDK for application development for vision cameras. IDS peak is entirely based on the standards of EMVA (GenICam) and AIA (GigE Vision, USB3 Vision) and simplifies handling and programming of these cameras. The USB3 Vision cameras will be available both as CP and SE family variants. For the latter, customers can choose between housing or board level versions with different lens holder options.

The first model available is the 29 x 29 x 29 mm U3-3890CP industrial camera with the light-sensitive 12 MP IMX226 rolling shutter sensor from the Sony STARVIS series. Its sensor size of 1/1.7” allows users to choose from a large selection of cost-effective lenses. The sensors IMX290 (2.1 MP) and IMX178 (6.4 MP) are now also available as U3V camera variants - each in colour and mono.

The USB3 Vision cameras can be used with any software that supports this interface. For an optimal user experience, the company now also offers a new, platform-independent SDK called IDS peak. “It comes with all the necessary components, from source code samples to transport layers, so that customers can start developing their own applications right away,” explains Maike Strecker, uEye product manager. “We have developed an easy-to-understand programming interface that provides a convenient alternative to direct access via GenTL and GenAPI. In addition, special convenience classes ensure that the amount of required code, and thus the programming effort, is reduced.” More information: www.ids-peak.com MV

THE FUTURE DEPENDS ON OPTICS™

NEW CA Series Fixed Focal Length Lenses

TECHSPEC® CA Series Fixed Focal Length Lenses are designed for high resolution large format sensors. Covering APS-C format sensors with a 28 mm diagonal image circle, these lenses feature a TFL mount. TFL mounts feature an M35 x 0.75 thread with a 17.5 mm flange distance, and offer the same flange distance, robustness, and ease of use as a C-mount. Find out more at www.edmundoptics.eu/CAseries

UK: +44 (0) 1904 788600 | GERMANY: +49 (0) 6131 5700-0 | FRANCE: +33 (0) 820 207 555 | sales@edmundoptics.eu


PRODUCT NEWS

E-CON SYSTEMS LAUNCHES LATEST 4K MULTI-FRAME BUFFER USB CAMERA

Embedded camera solution provider e-con Systems has launched the FSCAM_CU135 - a 4K fixed focus USB camera with an advanced multi-frame buffer for enhanced reliability. FSCAM_CU135 is the first frame buffer camera in the FRAMEsafe™ camera series. It is a 13 MP fixed focus USB camera with 2Gb of DDR3 SDRAM to store and retrieve entire frames without losing data, even at very high speeds. FRAMEsafe™ is the latest series of USB UVC cameras, with a one-of-a-kind inbuilt buffer mechanism. It ensures speed, flexibility and stability while transferring images over the USB interface. Powered by e-con’s proprietary FloControl technology, FRAMEsafe™ cameras support on-demand image capture, which can be seamlessly and simply controlled from the host application.

Key features:
• Houses the e-CAM131_CUMI1335_MOD 13 MP AR1335 colour camera module
• Fixed focus with M12 lens holder
• 2Gb DDR3 SDRAM
• Demand-based frame transfer mode to enable bandwidth sharing among multiple cameras
• Daisy-chain trigger support
• Electronic rolling shutter
• Output formats: uncompressed UYVY and compressed MJPEG

MV

The effective decoupling of camera image capture from the USB communication allows multiple FRAMEsafe™ cameras to be used on the same host processor - without losing the frames from any camera.

PIXELINK ADDS 20 MP ULTRA-HIGH DEFINITION CAMERA MODELS

Pixelink, a global provider of industrial cameras for the machine vision and microscopy markets, has released 20 MP ultra-high resolution, high quantum efficiency USB 3.0 camera models incorporating the Sony IMX183 CMOS rolling shutter sensor. Available in colour and monochrome, the Pixelink PL-D7620 one-inch sensor format camera is ideal for imaging applications where high resolution, improved sensitivity and low noise are key requirements. The Sony IMX183 back-illuminated image sensor technology realises a high QE at a small pixel size of 2.4 µm
making it perfect for metrology, research, surveillance/UAV and planetary imaging. Pixelink has also developed a microscopy camera model with the IMX183 sensor technology, the M20-CYL, which rivals the performance of many sCMOS cameras and is available at a fraction of the price. Julian Goldstein, co-president and owner of Navitar and Pixelink, said: “The release of the 20 MP models stems from requests for solutions that provide higher sensitivity and higher resolution. When paired with Navitar’s Resolv4K large-FOV lenses, which offer 400-600 per cent larger FOV compared to traditional zooms, the need for multiple camera and lens systems and post-acquisition processing software to capture and re-assemble images is eliminated.” MV



SPONSORED

INCREASING VISION PERFORMANCE BY PREPROCESSING ON THE FRAME GRABBER

As CPUs show the strain of processing high volumes of imaging data, the FPGAs on Euresys’ CXP-12 frame grabbers provide a solution.

Today’s cutting-edge machine vision and video monitoring applications are taking on much more difficult requirements than ever before, such as identifying a small number of cancer cells in a patient’s blood and tracking which items customers leave a store with so their accounts can be charged without having to check out. Camera manufacturers are addressing these applications with sensors that offer dramatic increases in image quality and frames per second. Vision interface standards have kept pace by offering higher connection bit rates, such as the new CoaXPress 2.0 standard, which offers data rates up to 12.5 gigabits per second (Gbps).

These advances have strained central processing units (CPUs) that are now tasked with the need to process dramatically higher volumes of imaging data. Some applications are using high-end CPUs or dividing processing among several personal computers but both of these approaches are expensive and the latter also adds considerable size and weight. The result is that many applications are operating at CPU loads just under 100%, limiting their ability to deliver further performance gains. Euresys has addressed these challenges by enabling camera manufacturers and vision integrators to upload code to the FPGA in the company’s Coaxlink Octo and Coaxlink Quad CXP-12 frame grabbers to handle image processing tasks that would otherwise have to be performed on the host. The Euresys CustomLogic FPGA design kit can handle virtually any repetitive image preprocessing task that is performed on every pixel of the image. Typical applications include transforming the
image based on a lookup table, such as by converting the image from the RGB to the YUV color space, implementing a noise reduction algorithm, or compensating for sensor defects such as black pixels. Another common application is flat field correction, compensating for differences in light intensity over the field of view.

Performing image processing on the frame grabber provides dramatic improvements in image quality and processing speed, particularly for cutting-edge vision systems that are currently bottlenecked by the processing power of the host computer. The FPGA processes the image in parallel with image transfer, so processing time savings on the host computer are achieved without adding any delays in the frame grabber. This means that vision systems can deliver higher resolution, higher image speeds and lower latency without increasing the cost of the host computer.

In the past, frame grabber suppliers have offered to incorporate their customers’ code into the FPGAs at the heart of their frame grabbers. This approach requires that users share their proprietary intellectual property so that it can be designed into the frame grabber. In the new CustomLogic approach, on the other hand, users create and compile their own FPGA code into an object file and then upload it to the FPGA using a tool provided by the frame grabber company. The vision integrator’s proprietary code never leaves its premises, and the resulting FPGA would be extremely difficult and expensive to reverse engineer.

Many large machine makers have developed their own frame grabbers based on FPGAs that also perform image processing and analysis tasks. These frame grabbers in virtually every case use the venerable Camera Link protocol. The machine makers that designed them are now faced with the difficult challenge of redesigning the frame grabbers to accommodate the current generation of high-speed interfaces such as CXP-12. Their task can be greatly
simplified by taking advantage of the built-in CXP interface and the ability to upload their proprietary image processing routines offered by the new Euresys frame grabbers. The CustomLogic FPGA design leaves up to 70% of the resources on the Xilinx Kintex UltraScale XCKU035 FPGAs used in the Coaxlink Octo and Coaxlink Quad CXP-12 frame grabbers free for image preprocessing.
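To make the preprocessing examples above concrete, here is a minimal sketch, in plain C++, of the per-pixel arithmetic behind flat field correction. The function name and the Q8.8 fixed-point gain format are our illustrative assumptions rather than Euresys code; the point is that each output pixel depends only on the matching input pixel, which is exactly the kind of work that maps onto a fully pipelined FPGA kernel.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Flat field correction: out = clamp((raw - dark) * gain).
// 'dark' is a reference frame captured with the lens capped (fixed-pattern
// offset); 'gainQ8' holds per-pixel gains in Q8.8 fixed point, precomputed
// from a reference white frame so the corrected image is uniform.
std::vector<uint8_t> flatFieldCorrect(const std::vector<uint8_t>& raw,
                                      const std::vector<uint8_t>& dark,
                                      const std::vector<uint16_t>& gainQ8)
{
    std::vector<uint8_t> out(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i) {
        int v = static_cast<int>(raw[i]) - static_cast<int>(dark[i]);
        if (v < 0) v = 0;                                  // clamp at black
        int c = (v * gainQ8[i]) >> 8;                      // apply Q8.8 gain
        out[i] = static_cast<uint8_t>(c > 255 ? 255 : c);  // clamp at white
    }
    return out;
}
```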

The design kit provides access to the CoaXPress camera pixel stream, on-board DDR4 memory and PCIe Gen3 connectivity. The design phase uses Xilinx Vivado development tools. The Coaxlink CustomLogic design kit is delivered with a reference design consisting of a Xilinx Vivado project that exposes all interfaces available to the user.

The Data Stream (pixel) interface is based on the AMBA AXI4-Stream protocol. On the source side, this interface provides images acquired from the camera. The interface transfers the images, after processing by the user logic, to the PCI Express DMA back-end channel. The Control/Status interface allows the user to read and write registers inside the user logic via the Coaxlink Driver API.

The event interface allows the user logic to send time-stamped events to the Memento logging tool with a precision of 1 µs. Memento provides the developer with a precise timeline of time-stamped events, along with context information and a logic analyzer view. It provides valuable assistance during application development and debugging, as well as during machine operation.

Vivado® High-Level Synthesis (HLS), included as a no-cost upgrade in all Vivado HLx Editions, accelerates IP creation by enabling C, C++ and SystemC specifications to be directly targeted into Xilinx programmable devices without the need to manually create register transfer level (RTL) code. In addition to C++ programming skills, FPGA design requires understanding the constraints and limitations of FPGAs. For example, random access memory (RAM) limitations in FPGAs usually rule out dynamic memory allocation.
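For a flavour of what such user logic can look like in C++ for Vivado HLS, the fragment below streams pixels through a lookup table, one per clock cycle. It is a sketch under stated assumptions - 8-bit grey pixels, one pixel per AXI4-Stream beat, illustrative names - not the actual CustomLogic reference design; swapping the transform line is all it takes to implement a different per-pixel operation instead.

```cpp
#include <ap_axi_sdata.h>
#include <hls_stream.h>

// One 8-bit pixel per AXI4-Stream beat; the TLAST/TUSER/TKEEP side
// channels (end-of-frame markers and friends) ride along untouched.
typedef ap_axiu<8, 1, 1, 1> pixel_t;

// Illustrative user logic: apply a lookup table to every pixel on the fly
// as it moves from the camera-side stream to the DMA-side stream.
void customlogic_lut(hls::stream<pixel_t>& src,
                     hls::stream<pixel_t>& dst,
                     const unsigned char lut[256],
                     int pixel_count)
{
#pragma HLS INTERFACE axis port=src
#pragma HLS INTERFACE axis port=dst
    for (int i = 0; i < pixel_count; ++i) {
#pragma HLS PIPELINE II=1
        pixel_t p = src.read();    // one pixel per clock when pipelined
        unsigned char v = p.data;  // 8-bit pixel value
        p.data = lut[v];           // the per-pixel transform
        dst.write(p);              // side channels pass through unchanged
    }
}
```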

CustomLogic - Your own FPGA logic | PCIe Interface

CustomLogic can parallelize many image processing tasks, processing pixels on the fly without any buffering or latency. By offloading repetitive and massively parallel tasks to the FPGA, the CPU is freed to focus on high-level tasks to meet the requirements of today’s cutting-edge imaging applications. The intellectual property included with CustomLogic, such as the CoaXPress interface and the Memento log system, helps get products to market in less time. MV

CONTACT DETAILS N: Virginie André | W: https://www.euresys.com E: virginie.andre@euresys.com



USB3 LONG DISTANCE CABLES FOR MACHINE VISION

Active USB3 long distance cables for USB3 Vision. CEI’s USB3 BitMaxx cables offer the industry’s first STABLE plug-and-play active cable solution for USB3 Vision, supporting full 5 Gbps USB3 throughput and power delivery up to 20 meters in length, with full USB2 backward compatibility.

1-630-257-0605 www.componentsexpress.com sales@componentsexpress.com


HOW TO GENERATE SALES IN THE TECHNOLOGY SECTOR

High-quality sales leads are the backbone of any thriving technology company; however, there are many definitions of what a “quality” lead is and how best to generate them. Sud Kumar, marketing director at digital marketing agency Origin, discusses this, and the key steps to generate high-quality leads.

CATEGORISING LEADS THE RIGHT WAY…

In many organisations, marketing professionals will generate leads and send them to their sales teams to close. However, this is not necessarily the best way to boost your business growth, as you may be sending low-quality leads to your sales team, which will mean that they’re wasting their efforts when trying to convert them. Leads can be categorised by their position in the sales funnel and their level of interest in your business. For technology businesses, consider how your offering will impact the lead’s business. Will it improve their operations and solve their problems? If it will, you should engage the lead further by communicating the benefits they’ll gain from your offering to push them down your sales funnel. To categorise your leads in the right way, be aware of the two types of leads.

1. Marketing Qualified Leads (MQLs) - they’ve shown interest by completing an action. This might be filling in a form, downloading a guide on your site or signing up to your updates. To nurture this lead effectively, send them more informative content. This could be based on the content they’ve already downloaded.

2. Sales Qualified Leads (SQLs) - they’ve shown immediate interest, or they’ve passed through the MQL stage and they’re ready to make a purchase.

LEAD GENERATION TIPS…

1. Align your sales and marketing

Ensure that your sales and marketing people are working together in unison so that prospects are passed to sales at the right time or are nurtured further through marketing until they are ready for sales interaction.

2. Keep your leads engaged

Once your leads sign up to your updates and you have their contact details, nurture them. Don’t just send them product information; send them industry insights and advice, and sprinkle in references to your business to position your brand as authoritative.

3. Create lead questions

Introduce forms to your website to gather information about your prospects, and use questions on the form that help you learn more about the prospect to identify whereabouts they are in the sales funnel.

4. Introduce lead scoring

Lastly, rank your leads based on their level of engagement with your brand to determine if they are the right “fit” for your business. This will enable you to maximise your lead generation efforts and gain the most return.

Learn how new technology companies have benefited from this approach at www.origingrowth.co.uk/work/loqate. MV


Sud Kumar



VISION AND THE GLOBAL ECONOMY

What is the true financial state of the machine vision industry? Technical Marketing Services writer Denis Bulgin looks at the numbers and the economic factors influencing the market.

The machine vision industry has been a real success story with rapid and sustained growth over a long period of time. A recent survey [1] estimates the global market value will reach $9.9 billion this year. Driven by technology developments, industry standardisation and falling prices in real terms, substantial sales of machine vision components and systems continue to be made into the traditional industry sectors such as automotive, electronics, packaging etc. Newer markets such as solar panel inspection, facial recognition and the sports sector have also regularly emerged to fuel further growth. The closer integration of vision with robotics for automated inspection, the commercialisation of newer techniques such as deep learning and embedded vision and the growing requirements of Industry 4.0 all serve to highlight the status of machine vision as an essential enabling technology in many applications in many market sectors worldwide. However, data from a variety of different sources would suggest that growth is rather hitting the buffers.

FACTS AND FIGURES

Statistics can be both informative and confusing in equal measure, and the cynics might feel that statistics can be twisted to suit any agenda. Nevertheless, they do give us a measure of the state of the market. According to the VDMA [2] in Germany, machine vision sales in Germany and Europe increased by an average of 13 per cent per year


between 2013 and 2017, reaching €2.6 billion in 2017. In 2018, however, there was no growth and sales remained static. In the USA, the downturn seems to have been delayed. According to the AIA [3], strong growth in North American sales in the early part of 2018 slowed towards the end of the year, with sales in the first quarter of 2019 showing a 4.5 per cent fall compared to the same quarter in 2018. In China, however, it is a different story, with sales growing by 21.6 per cent in 2018 compared to 2017, reaching €1.11 billion, with further growth expected in 2019, according to figures from the China Machine Vision Union (CMVU) [4].

Figures released by some leading machine vision companies reflect these market figures in general. For example, Cognex has reported a six per cent decrease in revenue in the second quarter of 2019, compared to the same period in 2018, due to weakness in the consumer electronics and automotive markets. Basler’s sales in 2018 remained static compared to 2017, yet IDS Imaging Development Systems enjoyed growth of 20 per cent in 2018.

CONSOLIDATING IN A VOLATILE ECONOMY

There is no doubt that volatility in the world economy is influencing the machine vision market. The downturns in the consumer electronics and automotive sectors have already been mentioned. Mobile phone sales are stagnant or declining overall as consumers in the western
world in particular turn toward SIM-only deals, and there is the impending ban on sales of new petrol and diesel cars in many countries, including the UK and France by 2040, India by 2030 and Norway by 2025. These are all influencing factors. Couple that with uncertainties in the Middle East, tension over trade between the USA and China and, of course, the potential effects of Brexit on European trade, and we have a backdrop that does not particularly encourage investment.

This doesn’t mean that the machine vision industry is sitting twiddling its thumbs. Many companies have grown or refined their portfolios or expanded into other markets or geographic sectors through acquisition. Some examples: Basler has acquired Silicon Software and embedded computing consulting firm Mycable; Japanese LED lighting manufacturer CCS has acquired its French competitor Effilux; STEMMER IMAGING has acquired Infaimon S.L., a provider of software and hardware for machine vision and robotics, as well as a stake in Perception Park, the Austrian hyperspectral software provider; Lakesight Technologies has been acquired by the TKH Group, the Netherlands-based company that owns Allied Vision, NET and LMI Technologies; Flir Systems has acquired Acyclica, a traffic analysis software company, and Cvedi, which provides solutions to transform raw images into training sets for deep learning algorithms; and Jadak has bought US firm Imaging Solutions Group.

CHINA – A LAND OF OPPORTUNITY?

China is an area still showing substantial growth when others are slowing or stagnant. Indeed, the CMVU predicts 20 per cent growth in sales in 2019, with a further average annual compound growth rate of 23.5 per cent through to 2021. This represents a fantastic opportunity for the world’s machine vision companies, and many are already either well established or becoming established in this region to take advantage. However, it should not be forgotten that China has its own indigenous machine vision component sector, with camera, sensor, lens, frame grabber and illumination manufacturers not only serving the home market but also becoming increasingly well known globally. There are now eight Chinese CMOS image sensor manufacturers, producing
very low-cost sensors which may be ready to challenge the performance of companies such as Sony. There may be excellent opportunities within China, but also increasing competition in the rest of the world.

WHAT DOES THE FUTURE HOLD?

What price a crystal ball? Of course, it is difficult to judge exactly what will happen going forward, but historically any dips or stagnation in the machine vision market have been followed by renewed growth. There is always a caveat: according to an AIA survey of industry experts, two-thirds of respondents believe the overall machine vision market in North America will remain flat in the next six months, although the picture would appear to be different in China.

However, the machine vision exhibition sector provides an alternative indicator, which reflects the levels of interest in the technology. The number of visitors to the Vision Stuttgart trade fair in 2018 increased by 14 per cent compared to the 2016 event, with the proportion travelling from abroad rising to a record 47 per cent. The 2019 Vision China Shanghai trade fair grew by 35 per cent compared to the 2018 show, according to the CMVU, and attendance at the 2019 UKIVA Machine Vision Conference and Exhibition in the UK was up 30 per cent compared to 2017.

All around the globe, people are taking time out to find out exactly what vision has to offer. That has to be an encouraging sign for the future, with the hope that there are plenty of businesses out there ready to invest in the machine vision industry. MV

References:
[1] https://www.marketsandmarkets.com/Market-Reports/industrial-machine-vision-market-234246734.html
[2] https://ibv.vdma.org/en/viewer/-/v2article/render/29394827
[3] https://www.visiononline.org/market-data.cfm
[4] http://www.china-vision.org



30 & 31 October 2019 | NEC, Birmingham

The UK’s largest annual gathering of engineering supply chain professionals

15,000 engineering professionals in attendance

"I found the event a great networking opportunity to meet industrial professionals from different backgrounds with different products." Kat Clarke, Wing Manufacturing Engineer, Airbus

500+ exhibitors showcasing their products/services

Over 200 hours of free-to-attend industry content

Benefit from co-location with: AERO ENGINEERING | AUTOMOTIVE ENGINEERING | PERFORMANCE METALS ENGINEERING | COMPOSITES ENGINEERING | CONNECTED MANUFACTURING | MEDICAL DEVICE ENGINEERING (NEW FOR 2019)

HEADLINE PARTNERS: NetComposites

SUPPORTING ASSOCIATIONS | HEADLINE MEDIA PARTNERS | SUPPORTING MEDIA

REGISTER FOR FREE USING THIS CODE 10026

T: +44 (0)20 3196 4300 | E: aeuk@easyfairs.com www.advancedengineeringuk.com


EDMUND OPTICS

EO - YOUR PARTNER FOR ADVANCED IMAGING OPTICS

WHAT IS THE BACKGROUND AND FORMATION OF THE COMPANY?

Edmund Optics (EO) has been a supplier of optics and optical components to industry since 1942, designing and manufacturing a wide array of multi-element lenses, lens coatings, imaging systems, and opto-mechanical equipment. Led by a staff of skilled optical engineers and scientists, EO holds great competency in imaging and imaging components. It provides complete imaging solutions with a large product line, technical expertise, design and manufacturing capabilities, and extensive technical support, and assists customers in every way possible to optimize the performance of their imaging systems. EO is application focused and pursues new ways to implement optical technology, enabling advancements in semiconductor manufacturing, industrial metrology, and medical instrumentation. Our precision products, ranging from stock items in the catalogue to customized versions of imaging lenses, from low to high volume, improve efficiencies and yields and are used in test and measurement quality assurance applications, the automation of manufacturing processes, and research: “EO - Your Partner for Advanced Imaging Optics!”

WHAT ARE THE LATEST INNOVATIVE PRODUCTS?

Our latest product launches involve integrated liquid lenses enabling autofocus applications, as well as ruggedized lenses allowing for robustly calibrated instruments in the robotics and 3D metrology space. We also have a new lens series for APS-C sensors featuring TFL mounts, which offers several advantages over the current alternatives.

HOW WILL THIS IMPACT THE MARKET?

Regarding liquid lenses, ease of integration has always been an obstacle. IDS and Pixelink integrating the electronics
to drive them in their cameras was a big step in making things easier for the user; we certainly expect a lot of traction here. Concerning the TFL mount we are trying to establish, we also get a lot of positive feedback from camera manufacturers. While today Lucid Vision is the only one offering a TFL mount camera, you will see a few more moving forward.

WHAT ARE THE PLANS FOR 2020?

EO will continue to bring innovative products to the market to address current needs in factory automation, semicon, 3D metrology and robotics - stay tuned for Vision 2020! MV

Boris Lange is responsible for Edmund Optics’ (EO) imaging business and business development in Europe. Since joining the company in January 2015 as an Imaging Solutions Engineer, he has been supporting EO’s European key accounts in all technical regards related to imaging and machine vision and was actively involved in forming a European imaging team. Before that, he gained valuable experience as an optical designer in the Analytics business unit at Polytec, developing VIS, NIR and Raman spectrometers. Boris holds a PhD in physics from Mainz University and the GSI Helmholtzzentrum for Heavy-Ion Research. His thesis examined coherent EUV light sources generated by high-energy lasers.



INSPEKTO SECURES $15M INVESTMENT

It has been an outstanding first year for Inspekto and its Autonomous Machine Vision System. With fresh investment and a blue-chip client list, Inspekto is preparing for more growth in 2020. Harel Boren, CEO and co-founder of Inspekto, reflects on the past year and what the future holds in this MVPro Machine Vision & Automation exclusive.

The global impact of Inspekto’s Autonomous Machine Vision system has ensured the company moves into its second year with renewed vigour. A fresh wave of investment to the tune of $15 million ensures that this Israeli/German start-up will continue its upward trend in 2020. The new investment is a mix of current and new investors including Germany-based Future Fund, Grazia Equity, Mahle, Steinbeis, plus South Korean industrial tech investor ACE Equity.

It is less than a year since the company showcased the Inspekto S70, but one in which it has truly made its mark, having won awards, secured an enviable list of blue-chip clients and made significant developments across the business to continue its global expansion. Inspekto CEO and co-founder Harel Boren, an experienced technology and start-up entrepreneur, has welcomed the fresh round of cash. The investment from South Korea is certainly significant, as it has the potential for the business to exploit new commercial opportunities in the Far East.

As he succinctly put it: “The $15m is going to be used in furthering our main objectives of enhancement and development, and enables us to scale up sales in specific geographic areas.”

Inspekto has been one of the major success stories of the year since it came to worldwide attention at Vision 2018 in Stuttgart last November.



The industry and press were taken by Inspekto’s Plug & Inspect™ technology and how it would change how quality assurance (QA) was being done. It has allowed employees in manufacturing facilities to take the Autonomous Machine Vision system out of the box and install it in minutes, without any help from a systems integrator and without the long process associated with setting up QA inspection solutions.

A simple solution using ‘game-changing’ technology, borne from a management team that packs more than 100 years of experience. With the first anniversary looming, Boren has reviewed Inspekto’s successful first year.

Rewinding 12 months, when Stuttgart and Vision 2018 were on the horizon, Inspekto took a deliberately softer approach, while some companies, especially start-ups, gasp for the oxygen of publicity. The intention was always to ‘shy away from the hot air of PR’, is how Boren explains the build-up to Stuttgart.

“The product was in beta stage going into Vision 2018 and the aim was to make the hit at the show. We wanted to deliver by stealth and surprise – and that succeeded. We received very good reviews from the media, with more than 200 clippings or mentions from all over the world, including Japan. Since Stuttgart, we have had a purposeful marketing and PR plan and that has also reaped rewards.

“More success followed as the Inspekto S70 won the Innovator’s Gold Award at one of the industry’s major events. To be judged by the machine vision industry in this way made us very happy.

“We were also shortlisted for a PR award in the UK. We had come a long way from launch to winning awards in just six months, so the product did resonate with the market.

“We were a tech start-up with just those few months behind us, but this publicity gave us worldwide attention. My management team and I share more than 100 years in the industry and it can take two, three, four years to get that level of attraction, if ever.”

This has enabled Inspekto to also get a firm footing on the global stage.

“We are working with some very well-known names,” said Boren. “Our active pipeline comprises 32 global firms who have a combined total of 2,669 plants all over the world. We are not in every one of those plants, but we are with those global firms, which means we can only see a bright future. There is the potential for greater volume.

“We are talking about world leaders in their fields such as Bosch and MAHLE, leaders in automotive manufacturing. We also work with BSH, the largest home appliance manufacturer, Geberit, a leader in sanitary products, PepsiCo, Schneider Electric and Daimler. These are big players and the lesson for machine vision is that this is a very big industry and the issue of inspection is much larger than we think.”

To ensure Inspekto can rise to the challenge, explore new markets and deliver enhanced but simple, frictionless solutions to machine vision quality assessment, the company has added a new European headquarters at Heilbronn, in the Baden-Wurttemberg region, which also has Stuttgart and Karlsruhe in its domain.

“This HQ is significant in our development, as it holds large portions of Germany’s manufacturing, including 52 per cent of the world’s automotive industry, and thousands of factory plants. Having this concentration of industry will help us with our innovation,” explained Boren. “To achieve this, we have established training and support at Heilbronn, and we have moved some assembly, which is another reason why it is a key location for us.

“Europe we saw as a prime market, but we have gone beyond those borders with a presence in North America, while our product has also been installed in Thailand, Turkey and Mexico. It was not something we had envisaged, but working with global firms, they have quality control and assurance issues wherever they have a plant.”

The Inspekto S70, which was developed using Blue Ocean thinking and strategies, has brought a new dimension to machine vision with its plug-and-play operation, and has shown the industry that there are operational challenges that can be overcome - ones Inspekto aims to capitalise on.

“Machine vision has had too many complicated solutions, ones which needed experts to install them,” said Boren. “We are addressing machine vision quality assurance 100 per cent. Plug and play for machine vision had not been thought of as possible, but it is possible. You can have a system up and running with no integration costs within an hour.

“Our vision now, with this simple solution for an independent and frictionless system, is to cross tiers, industries and geographies. The space is very large but unquestionably we are able to satisfy that need.” MV



SPONSORED

SMART CAMERA TECHNOLOGY ENHANCES SHIPMENT TRACEABILITY

While 1D and 2D codes already provide a great deal of traceability and automation in the area of merchandise shipments, the codes do not provide any information about, for example, the condition of a shipment. To close this final gap in verification, a company by the name of ivii, based in Graz, Austria, is now offering ivii.photostation - an optical image acquisition system that is suitable for retrofitting. It is based on the smart camera from MATRIX VISION.

The Internet has fundamentally changed consumer behaviour. Not only consumers, but also companies are ordering an ever greater amount of goods on the Internet. So it is no wonder that since the year 2000 (with the exception of 2009 due to the financial crisis), the number of packages sent by couriers, express services and regular parcel delivery services has risen continuously and reached a maximum in Germany in 2017 with 3.35 billion shipments. More recent numbers have yet to be released by the Bundesverband Paket und Expresslogistik (BIEK = Federal Association for Packages and Express Logistics), but it looks as if this number had probably been exceeded in 2018, too.

The number of deliverers or package service providers is not increasing, however, so there is of course a growing time pressure to handle the workload from the nearly 11 million merchandise shipments per shipping day. It is evident that handling of shipments is suffering from this, which of course can sometimes be compensated for by good packaging. Yet you need only send something fragile or edible, and then even the best packaging concept reaches its limits.

Damaged goods are a threefold nuisance for the shipper. First, the shipper’s reputation suffers. Second, out of goodwill, the shipping company usually sends the goods again at its own expense to save its reputation. Third, there will still be disputes with the deliverer to clarify the cause and the question of cost. The fact that these costs do not amount to peanuts is seen in the statistics for 2018 from the Retourenmanagement (“product return management”) research group. Last year, approximately 280 million packages were sent back, which incurred total costs of an estimated 5.46 billion euros. Thus a return shipment alone costs 19.51 euros on average, half of which is for the transport. Certainly many cases involve B2C shipments in which, for example, goods were tried out and the size wasn’t right, so then the shipments went back. However, goods that were picked incorrectly or damaged also make up a considerable portion of the returns.

ivii recognized the lack of an option for shippers to verify the shipment status and order picking, and resolved to find a practical solution. Practical means that the solution should
be compact and modular so that it can be conveniently retrofitted as an additional unit, for example, on a conveyor belt. The product named ivii.photostation takes a picture of the loading equipment (such as boxes and plastic containers), links this with a timestamp and routing label for the loading equipment and stores this information on a server.

For the estimated 1600 containers per hour that ivii assumed, it became clear relatively quickly that a traditional solution consisting of an industrial computer and a camera is too large for a modular solution, uses too much electricity and makes the entire structure too complex, which would naturally have an impact on the overall costs, too. ivii thought there must be something more clever and began researching the image processing market.

While researching, ivii came across the concept of the intelligent camera. Intelligent cameras are small, optimized, all-in-one solutions - that is, a camera, interfaces and a computer are combined in one housing. They deliver high performance in relation to their space-saving size and low power consumption. And yet, not all intelligent cameras are alike. Some cameras offer only a Linux OS, requiring the user to personally handle the image processing by doing everything from setup to programming. Others specify image processing software
whose syntax has to be learned and whose applications have to be configured appropriately while not allowing for any excessively deviating values. Neither of these would have been a challenge for ivii, but why make the effort when there are already solutions on the market that correspond to their requirements?

mvBlueGEMINI from MATRIX VISION is not just an intelligent camera; with the “mvIMPACT Configuration Studio” smart vision software it becomes a smart camera. The smart camera provides everything ivii was looking for in its compact solution. The hardware comes with the right interfaces for acquiring images when triggered and for controlling a lighting system, and with a network connection so that image data can be loaded onto a server. After a trigger event, the software can switch the lighting system and acquire an image, as well as store the image data on an FTP server along with a timestamp and routing label. With that, the technical basis for ivii.photostation had been found. Achieving the goal only required configuring the available tools.

With an elegantly designed mechanical system, which includes not only the smart camera but also the lighting system, ivii achieved its self-defined goal of a retrofittable solution with a total weight of about 50 kg and a low space requirement of 1200 x 1200 x 1140-2040 mm (L x W x H). ivii.photostation can be integrated into conveyor systems whose conveyor technology has a minimum top edge of 300 mm, a maximum top edge of 1000 mm and a nominal width from 270 to 450 mm. Loading equipment with a length from 250 to 650 mm, a width from 180 to 430 mm and a height from 50 to 310 mm is supported.
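As a small illustration of that trigger-to-server step, the sketch below uploads a captured image to an FTP server under a name that encodes the routing label and a timestamp. It assumes libcurl for the transfer; the server address, credentials and naming scheme are invented for illustration and are not ivii’s implementation.

```cpp
#include <curl/curl.h>
#include <cstdio>
#include <ctime>
#include <string>

// Upload one image file via FTP, naming it <routingLabel>_<timestamp>.jpg.
// Assumes curl_global_init() was called once at program start.
bool uploadImage(const std::string& localPath, const std::string& routingLabel)
{
    char stamp[32];
    std::time_t now = std::time(nullptr);
    std::strftime(stamp, sizeof(stamp), "%Y%m%dT%H%M%S", std::localtime(&now));

    // Illustrative server and credentials.
    std::string url = "ftp://user:secret@archive.example.com/"
                      + routingLabel + "_" + stamp + ".jpg";

    FILE* file = std::fopen(localPath.c_str(), "rb");
    if (!file) return false;

    CURL* curl = curl_easy_init();
    if (!curl) { std::fclose(file); return false; }
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);     // FTP STOR
    curl_easy_setopt(curl, CURLOPT_READDATA, file); // stream the file up

    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    std::fclose(file);
    return rc == CURLE_OK;
}
```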

CONCLUSION By providing ivii.photostation, ivii closes a gap in the documentation of shipments. ivii.photostation is suitable not only for logging outgoing goods (before the package is sealed) and the condition of the sealed package's exterior, to assist in disputes with package service providers after shipping damage, but it can also log how packages arrive at goods receiving and which articles they contain. Furthermore, additional checks in intralogistics can be run. For more information or further questions, please get in touch with our ivii colleagues at sales@ivii.eu. MV

CONTACT DETAILS N: Ulli Lansche | W: https://www.matrix-vision.com E: sales@ivii.eu



EMBEDDED VISION

THE FUTURE OF EMBEDDED VISION

This year has seen a rapid rise in the number of products entering the embedded vision market, which can only mean that demand is also increasing. In this edition of MVPro Machine Vision and Automation, we are delving deeper into the world of embedded vision to assess its future and how it is being used, through the eyes of the companies that are playing a key role in this technology. We begin our in-depth feature with the views of Teledyne Imaging's Steve Geraghty, who considers the current world of embedded vision and the impact it will have on machine vision.

New imaging applications are booming. From collaborative robots to drones fighting fires or monitoring farms, to biometric face recognition and point-of-care hand-held medical devices at home. Established machine vision applications for manufacturing are also being remade: initiatives like Industry 4.0 and Made in China 2025 were formulated to keep entire national industrial sectors competitive by introducing 'smarter' manufacturing and production – less centralized, more responsive, and more automated. And robots. Robotic manufacturers across the automotive, electronic, food packaging and energy sectors are creating a full spectrum of solutions and integrated technologies. The growing prevalence of these robots is not surprising, because industrial operations, both mundane and complex, are increasingly automated. Cost is a factor in their rise, since the average selling price for a robot has fallen by more than half over the past 30 years. Emerging markets have an additional imperative for automation: the need to improve product quality to compete effectively in the export market.


SMALLER = MORE OPPORTUNITIES A key enabler of these lower costs has been accessibility. For manufacturing OEMs, system integrators and camera manufacturers, machine vision is typically driven by a consistent set of priorities: size, weight, energy consumption, and unit cost. This is even truer for embedded vision applications, where the imaging and processing are combined in the same device, instead of the historically separate camera and PC.

Embedded vision means that everything extraneous is typically cut to meet requirements that are measured in milliwatts, millimetres, and minute fractions. Providing solutions that can deliver the performance these new high-growth applications demand, within the constraints of embedded vision, is critically important to companies in the imaging industry.

A second change fuelling the growth of embedded vision systems is the addition of machine learning. Neural networks can be trained in the lab and then loaded directly into an embedded vision system so it can autonomously identify features and make decisions in real time. Affordable hardware components developed for the consumer market have drastically reduced the price, size, and power requirements of increasingly capable compute platforms. Single-board computers and systems-on-module, such as the Xilinx Zynq or NVIDIA Jetson, pack impressive machine learning capabilities into tiny packages, while image signal processors such as the Qualcomm Snapdragon or Intel Movidius Myriad 2 can bring new levels of image processing to OEM products. At the software level, off-the-shelf software libraries have also made specific vision systems much faster to develop and easier to deploy, even in low quantities. One major way that embedded vision systems differ from PC-based systems is that they're typically built for specific applications, while PC-based systems are usually intended for general image processing. This is important, as it often increases the complexity of initial integration. However, this complexity is offset by a custom system perfectly suited for the application and for providing return on investment (ROI).
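A minimal sketch of that train-in-the-lab, deploy-to-the-edge workflow, assuming a PyTorch model exported to ONNX (the article names no specific toolchain, so this is purely illustrative):

```python
import torch
from torchvision.models import mobilenet_v2

# Hypothetical classifier trained "in the lab"; any torch.nn.Module works here.
model = mobilenet_v2(num_classes=10)
model.eval()

# Trace with one dummy RGB frame at the training resolution and export.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "classifier.onnx", opset_version=11)

# classifier.onnx can now be compiled by an embedded toolchain
# (e.g. for a Jetson or Zynq target) and run on-device for inference.
```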

The image sensor itself is a significant factor in this pursuit. Care is needed in the choice of image sensor to optimise the overall embedded vision system performance. The right image sensor will offer more freedom for an embedded vision designer to reduce not only the bill of materials but also the footprint of both illumination and optics. To meet market-acceptable price points, sensor manufacturers keep shrinking image sensor pixels, and even the sensors themselves. This cuts their costs because they can fit more chips on a wafer, but it also helps to reduce the rest of the system costs, including lower overall power requirements and lower-cost optics.

Embedded vision systems are used in applications like self-driving cars, autonomous vehicles in agriculture or digital dermascopes that help specialists make more accurate diagnoses. All of these require performance, but also near-perfect levels of reliability. Today, this requires customised solutions with carefully selected hardware matched to custom software. An even more extreme option would be a customised solution where an image sensor integrating all the processing functions is built into a single chip in a 3D stacked fashion to optimise performance and power consumption.

However, the cost of developing such a product could be tremendously high, and while it is not totally excluded that a custom sensor will reach that level of integration in the long term, today we are at an intermediary step, which consists of embedding particular functions directly into the sensor to reduce computational load and accelerate processing time.

WHERE DO WE GO FROM HERE? LOOKING TO THE FUTURE, WE CAN EXPECT NEW EFFICIENCIES AND NEW APPLICATIONS

To guess at the next steps embedded imaging will take, it's probably smart to look at another extremely size- and power-constrained imaging system that has high demands placed on it: the mobile phone. According to Gartner, 80 per cent of smartphones shipped will have on-device AI capabilities by 2022. While many of the expected applications have to do with audio processing, security, and system optimisation, several have to do with imaging. In 2019, mobile phone cameras are already doing a lot of image processing, including colour correction, face recognition, imitating lens properties like depth-of-field, HDR, and low-light imaging. Analysts expect the next generation of phone-embedded AI will start knowing more and making more decisions: face identification, content detection (and perhaps censorship), automatically beautified selfies, augmented reality, and even emotion recognition. While these applications are only tangentially related to most embedded vision applications (outside of security and entertainment), it's notable that so much effort is going into the computing side of imaging. Camera sensors are starting to hit a limit in how much resolution can be captured with such tiny sensors. The iPhone has had only small changes in sensor size since 2015 and has stuck to the same 12-megapixel resolution.

Even with phone designs sporting three or more cameras, or larger 64-megapixel sensors, it's expected that all of this data will simply serve to better feed the onboard computing resources to create 'better' or more useful images at the resolutions we're all seeing already. It's about getting smarter, rather than simply faster.

Researchers are already looking at what can be accomplished with vision systems that can better understand their environment and make reliable decisions based on that information: drones that can land themselves, computationally modelled insect eyes, autonomous vehicles that can recognize the thin objects that usually challenge lidar, sonar, and stereo camera systems. The PC-to-camera separation will continue to break down, with the possibility of deep neural net models designed specifically for embedded vision systems. Embedded vision opens up entirely new possibilities, and we have already seen that it has the potential to disrupt entire industries. Over the next few years, there will be a rapid proliferation of embedded vision technology. MV



EMBEDDED VISION

STEMMER IMAGING TECHNOLOGY FORUMS OFFER EMBEDDED VISION INSIGHTS

Embedded vision is one of the key themes being discussed at STEMMER IMAGING's pan-European series of Vision Technology Forums taking place this autumn. With a wide range of embedded vision technology and embedded solutions available in the marketplace, the six presentations by experts from STEMMER IMAGING, Allied Vision, The Imaging Source and Z-laser will provide valuable insights into a variety of capabilities. These include choosing the right combination of hardware, camera and software, and tips on setting up an industrial embedded vision system. There will be a performance comparison of different embedded processors and a discussion of how Windows IoT can be used to create a locked-down PC-based vision system.

Delegates can also find out about the use of MIPI sensor modules for multi-camera systems with subsequent data processing via AI and deep learning, as well as the integration of lasers into optical measurement systems such as 3D displacement sensors.

EMBEDDED SOLUTIONS OVERVIEW Embedded solutions can take a number of different forms and be deployed for a variety of different reasons. By eliminating the use of a PC, they can provide an advantage in areas such as size, power consumption, cost and security. However, this often comes at the expense of reduced flexibility, which limits their applicability. Four major embedded solutions are: smart cameras, embedded computers, systems on a chip (SoC) and deep embedded vision systems. Smart cameras have the camera, processing and I/O in one housing with no visible operating system. They can be connected to factory automation but have less functionality than a PC-based system. Embedded computers are freely programmable using machine vision libraries and can connect to factory automation via adapter cards, but use remote external cameras. Systems on a board or systems on a chip are typically smaller, lighter, cheaper and easier to replace or duplicate than a PC. Also requiring external cameras, they generally utilise ARM processors and Linux and are freely programmable using machine vision libraries or C/C++ programming. They are scalable for mass production but have limited connectivity to factory automation, and therefore need to fit the application well. Deep embedded, fully integrated machine vision systems could be based on these commercially available standard boards, but the lifetime is decided by the board manufacturer, as are the price and the features. The ideal solution is a system built specifically for the application. This involves high set-up costs and long development times and will usually use proprietary interfaces such as CSI-2, but it can lead to low production costs, provided there is a requirement for sufficient volumes.


MORE THAN JUST EMBEDDED VISION The STEMMER IMAGING Forums take place in Germany, the Netherlands, France and Sweden during October, and in the UK in November. In addition to embedded vision, the extensive program of technical presentations covers the fundamentals and future trends of machine vision and the specific areas of IIoT, 3D technology, machine learning and spectral imaging. The seminar program is supported by an exhibition of around 40 leading machine vision manufacturers. For more details on the autumn forums go to https://www.stemmer-imaging.com/en/technology-forum/ MV



EMBEDDED VISION

3D PRECISION IN LINE

Quality assurance plays a very important role in manufacturing companies. It is an important instrument for creating efficiency and transparency. In the factory of the future, an ever-increasing amount of measurement data will be generated for this purpose. Complex correlations can thus be recognised more quickly, and valuable quality data can be obtained and processed in order, for example, to avoid future errors. With the development of Industry 4.0, however, the demands placed on quality standards in the manufacturing industry are also growing. Steve Hearn of IDS shares this case study.

The Dutch technology company senseIT (https://senseit.nl/home-en/) specialises in the development of fully automatic 3D inspection cells that can be used directly on the production line. This increases autonomy and ensures enormous time and cost savings. These inline inspection cells make use of Ensenso stereo 3D cameras to take a close look at extremely complex components with the utmost precision and in less than 30 seconds. The system checks and validates the completeness of assembled products up to the size of a shoebox and is exceptionally precise in detecting faults: the software signals deviations in the tenth-of-a-millimetre range, which is beyond the capability of the human eye. This applies to all types of defects that can occur during production or transport: broken or missing parts, deformations, machining material, cavities or excessive burrs.

INSPECTION CELL CAMERA CONFIGURATION The val-IT Flex cell consists of a turntable and three Ensenso N35 3D stereo cameras, each containing two 2D IDS cameras to view the scene from different positions. The component to be inspected is placed on the rotary table, which has a diameter of 440 mm and a height of 240 mm, and is recorded from all sides. During a pre-programmed, complete 360° rotation, the cameras generate a high-resolution point cloud of the component. The object is captured with different integration times in order to accommodate the variance of the component properties. Although the cameras see the same scene content, the object appears in different positions according to each camera's projection rays. Special matching algorithms compare the two images, search for corresponding points and visualise all point displacements in a disparity map. This is then used to calculate depth information for the resulting point cloud. The high-intensity light projector of the Ensenso cameras ensures that the component to be inspected is captured as accurately, quickly and reliably as possible.
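For readers who want a feel for the matching step, here is a minimal generic sketch in OpenCV; Ensenso's own matching and projection pipeline is proprietary, so this only illustrates the disparity-to-depth principle described above, with hypothetical file names.

```python
import cv2

# Load a rectified stereo pair; the file names are hypothetical stand-ins
# for the two views delivered by one stereo head.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching searches for corresponding points along epipolar
# lines and records each point's pixel displacement in a disparity map.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # SGBM output is fixed-point x16

# Given the reprojection matrix Q from stereo calibration, the disparity
# map converts directly into a 3D point cloud:
# points = cv2.reprojectImageTo3D(disparity, Q)
```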

IMAGE OPTIMISATION AND PROCESSING Projection of a texture pattern onto the object enables high-contrast imaging of smooth or reflective objects, or objects with weak structures, even in difficult lighting conditions, which in turn increases accuracy during matching.

The Ensenso's integrated light projector features FlexView technology. This uses a piezo element to move the pattern mask in the light beam, which shifts the resulting texture on the object surface. Combining several images of the same scene, taken with different projected structures, increases the number of matched pixels and thus results in a higher resolution. All detected points are combined into a complete, high-resolution 3D point cloud representation of the component. Within the val-IT Flex inspection cell this is then compared with a CAD reference model and projections of the product. 3D machine vision algorithms from MVTec's HALCON image processing library are used by senseIT for the acquisition and processing of the point cloud. In addition, the company has developed specific measurement and processing algorithms and integrated them into HALCON via extension packages. Hardware acceleration ensures that the entire processing time stays within 30 seconds. All data on the recorded component, with any deviations, is displayed on a user-friendly interface. The deviations can, for example, be displayed per lot, article number or period. It is quick and easy to see whether the scanned components have repeated errors and which errors have occurred during production or transport. This enables the user to react quickly and readjust the manufacturing or transport process.


TIME SAVINGS AND INCREASED PRODUCTIVITY All information about individually validated parts is stored in a large database. In this way the val-IT Flex is able to carry out statistical analyses which provide information about recurring errors in the manufacturing process. The results of the analysis are clearly presented in a secure online portal. Information about detected defects, presented in real time, reduces the feedback time and improves delivery quality in the subsequent process. Insights into these repetitive errors help to optimise the entire manufacturing process. Quality costs, labour costs and penalties for non-compliance with quality standards are significantly reduced. In addition, the user can easily and intuitively teach the system new or modified components and adjust tolerance settings. This reduces product changeover times to a minimum and enables effective utilisation of the system. The return on investment is achieved in just a few years. Another major advantage is that the scanning and validation process is fully automated and requires no human interaction. This eliminates human interpretation errors caused by fatigue and distraction. Fully automated inline inspection saves time by increasing the measuring speed, and increases productivity by detecting faulty parts at an early stage. The manufacturing process can be modified early enough to prevent defects, which is particularly important in the production of large product batches. Such an optimisation of the quality assurance process will contribute significantly to the reduction of quality costs in the factory of the future. MV



EMBEDDED VISION

EMBEDDED VISION SOLUTIONS AS SMART SENSORS FOR IOT APPLICATIONS Basler is at the forefront of embedded vision solutions. From cameras to kits to software to components, Basler is using its knowledge to deliver a variety of PC-based image processing solutions and at the same time enable a number of new applications in which small size, low power consumption and low costs are important. In this use case study, Basler looks at smart sensors for IoT applications.

The Internet of Things (IoT) promises the smart linking of a wide range of sensors – particularly imaging sensors – with the cloud. There, the sensor data is evaluated and intelligent decisions are made which can then initiate further workflows. It is also possible to service the sensors from the cloud, e.g. by installing firmware updates or preparing the sensors for new tasks. At the embedded world 2019 trade show, Basler presented a live demonstration and showed how.

OVERVIEW Capturing image data by camera for subsequent classification plays a key role in an increasing number of applications. In the simplest scenario, it would be possible to just transfer the image data to the cloud and then run the classification analysis there. However, since IoT sensors such as cameras are generally connected to the cloud at very low bandwidths, transferring the huge image data volume would be a slow process. Such a bottleneck could be mitigated with low camera frame rates or extreme image data compression.

But in many cases, neither is a satisfactory solution: with low frame rates, for example, the sensor might miss decisive events, while heavy data compression results in a loss of information in the image. Not to mention that in many cases the available bandwidth would still be inadequate even with highly compressed data. This is compounded by the latency between capturing the image and its arrival in the cloud, which would make it impossible to react quickly to an event.

SOLUTION One interesting approach is to perform the image data analysis, e.g. the classification of certain image features, "on the edge", namely at the camera sensor itself. Then only the analysed data (e.g. the association of an image with a certain class) is transferred to the cloud. Usually a connection with a very low bandwidth is completely sufficient for such a process, and the transfer to the cloud and the reaction to an event can then take place more quickly.



HARDWARE AND APPLICATION SOFTWARE Basler has created an embedded vision solution in which a smart IoT sensor is realised as an edge device. This solution is based on Basler's award-winning Embedded Vision Kit and consists of:

• a dart BCON for MIPI camera module by Basler

• a 96 Boards™-compatible processing board with Qualcomm® Snapdragon™ SoC

• a 96 Boards™-compatible mezzanine board to directly connect the camera module to the processing board

This approach makes it possible to process the image data imported from the camera module at high frame rates directly on the processing board.

The goal was to perform the classification of various Lego figures (carpenter, astronaut, cook, etc.) or various traffic signs. These kinds of classification tasks are currently handled with particular efficiency by artificial intelligence (AI) methods, namely deep neural networks (deep learning) and, more specifically, a special case of deep learning, the so-called Convolutional Neural Network (CNN). For a neural network to perform its task, it must first be trained with sample images so that it can "learn" which class a particular image belongs to. This training requires highly efficient hardware (usually high-end graphics cards). Even with such hardware, the training might take days, if not weeks. As an alternative, platforms such as Amazon Web Services offer hardware clusters that can be rented for training purposes. Once the network has been trained, its configuration can be transferred to an embedded processing board with comparatively lower compute power (for deployment) so that the actual classification task can be performed there. This process, called inference, means the application of a fully trained neural network on the specific target hardware (e.g. an embedded processing platform). With its efficient heterogeneous architecture (quad-core Kryo CPU, Adreno 530 GPU and Hexagon 680 DSP), the Snapdragon SoC used here is a good platform for image processing in general. The GPU (Graphics Processing Unit) and DSP (Digital Signal Processor) in particular are ideal hardware blocks for inference with the aid of CNNs. Basler started by training two different CNNs, one for the classification of Lego figures and the other to classify traffic signs. At just a few megabytes, the trained CNNs are fairly small and can be transferred from the cloud to the edge device over a poor connection (with low bandwidth) in an acceptable time. After the Lego figure CNN was transferred, the edge device was able to reliably classify the figures and report the result to the cloud with low bandwidth requirements and low latency. All it took to "retrofit" the edge device for the classification of traffic signs was to transfer the corresponding traffic sign CNN from the cloud, after which the smart sensor was able to reliably detect different traffic signs. This approach can be expanded arbitrarily by connecting not just one but many such edge devices to the cloud. The edge devices can then be controlled from the cloud at the same time. This makes it possible, for example, to simultaneously configure the camera modules over the air (OTA) while concurrently performing maintenance tasks such as firmware updates or transferring different CNNs for different classification tasks.

ADVANTAGES This approach offers the following advantages:

• low bandwidth requirements for linking the sensor to the cloud

• low latency in the reaction of a cloud application to a sensor event

• ideal opportunity for simultaneous "remote maintenance" of multiple sensors OTA (sensor configuration, firmware updates or, for example, uploading a new CNN for a new classification task)
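To illustrate the inference step on the edge device, here is a minimal sketch using TensorFlow Lite. The model file name is hypothetical, and Basler's demo targets the Snapdragon's own heterogeneous blocks, so treat this as a generic illustration of the pattern rather than the demo's actual code.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="lego_classifier.tflite")  # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame):
    """Run one camera frame through the on-device CNN. Only the winning
    class index (a few bytes) needs to travel to the cloud, not the image."""
    x = np.expand_dims(frame.astype(np.float32), axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return int(np.argmax(interpreter.get_tensor(out["index"])))
```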

SUMMARY The Basler live demo shows how processing on the edge with CNNs makes it possible to achieve an efficient and intelligent sensor for image data classification with a connection to the cloud. With this demo, Basler proves the feasibility of a stable and lean system that is highly productive, scalable, and easy to parameterise and maintain. This complex system is a good example of Basler's expertise as an embedded vision solution provider. For more information on Basler's embedded vision products and solutions visit https://www.baslerweb.com/en/embedded-vision/ MV



EMBEDDED VISION

MVPRO PREVIEWS EVE, BEING HELD IN STUTTGART

Much has happened in the intervening months, with a great deal of research and product development in the embedded vision sector.

It continues to enjoy rapid growth, as the quest for easier and lower-cost solutions and applications shows no sign of abating.

Factor in the development of AI and deep learning over the same period, and there is a huge amount of 'catching up' to do when the doors open on this year's event at the ICS International Congress Center, Messe Stuttgart. This year's conference, over October 24-25, brings together a worldwide array of speakers from various fields, as well as those businesses at the forefront of embedded vision technology. Embedded VISION Europe is uniquely placed, as it is the only conference on the continent to focus exclusively on this disruptive technology, with the purpose of showing the capability of hardware and software platforms, presenting applications and markets for embedded vision, and creating a platform for the exchange of information. This year's keynote speaker is David Austin, Senior Principal Engineer at Intel Corp., who will give insights into flexible and practical AI for industrial deployment in his keynote presentation.

There is a glut of impressive speakers who cover the entire spread of the technical elements of embedded vision, its applications and the hardware now available on the market. Amongst them are Andrea Dunbar, Head of Embedded Vision Systems at CSEM, whose talk is 'Autonomous data-logger with ULP imager', and Jonathan Hou, Chief Technology Officer at Pleora, who will discuss 'Embedded Learning and the Evolution of Machine Vision'.

There is also Michael Engel, President at Vision Components, who developed the first intelligent camera more than 20 years ago. A true pioneer in machine vision and imaging, he will present 'MIPI Cameras: New Standard for Embedded Vision'.

Meanwhile, Gion-Pitschen Gross, Product Manager at Allied Vision with responsibility for the new Alvium Camera Series, which bridges the gap between machine and embedded vision, talks on 'How to set up an embedded system for industrial embedded vision - Requirements, components, and solutions'. The conference will be supplemented by a table-top exhibition enabling companies to show their embedded vision expertise through products and applications. Confirmed exhibitors this year include Active Silicon, Allied Vision Technologies, Baumer Optric, Framos, IDS, iniVation, Midopt, Pleora and SVS Vistek. For more information on this year's conference go to www.embedded-vision-emva.org MV



EMBEDDED VISION

THE VISION ACCELERATOR PROGRAM

Bringing better products with vision to market, faster

The Vision Accelerator Program helps companies quickly understand and navigate the technical and business complexity of incorporating visual perception capabilities so they can more quickly and confidently plan, develop and deliver their products. It is a service available only to members of the Embedded Vision Alliance, and targets those members who are developing end products and systems with visual perception capabilities. The Vision Accelerator Program helps companies:

• Make decisions in a fast-changing market where areas like deep learning and 3D sensing are rapidly moving from research into practical use

• Understand the trade-offs between low-power, low-cost devices and cloud processing

• Know which vision software standards, open source tools and algorithms are gaining traction

• Identify which startups, suppliers, partners and experts have relevant vision technologies and know-how

• Build skills and recruit the right talent

• Access and develop a network of experts, suppliers and partners


The Vision Accelerator Program consists of four components:

• Confidential Vision Accelerator sessions based on an assessment of an organisation's vision-related needs

• Online workshops tailored to grow the product team's knowledge of available options

• Free and discounted access to Embedded Vision Alliance educational events

• Membership in the Embedded Vision Alliance, including access to educational and networking resources

The Program is available in two versions:

• For product teams with a specific need or objective who need to be able to more quickly and confidently plan, develop and deliver products

• For innovation centres tasked with early prototyping, research integration and spearheading technology initiatives, as well as building skills and ensuring rapid knowledge transfer into multiple development teams

For more information on the Vision Accelerator Program, please email accelerate@embedded-vision.com. MV



THE 3D SENSOR THAT MAKES YOUR FACTORY SMART

3D Smart Sensors: Achieve Greater Automation, Inspection, and Optimization

Every Gocator® sensor comes with factory intelligence built-in. These easy-to-use, all-in-one, IIoT-ready devices deliver greater performance and results in your manufacturing systems and processes, and drive higher productivity and profitability for your business. Visit lmi3D.com/Gocator


SPONSORED

DEEP LEARNING OR MACHINE VISION? Which solution fits your company? In the last decade, the pace of technology change has been breathtaking. From mobile devices, big data, artificial intelligence (AI) and the internet of things, to robotics, blockchain, 3D printing and machine vision, industries have been thrust into a transformative era. Strategically planning for the adoption and leveraging of some or all of these technologies will be crucial in the manufacturing industry. The companies that can quickly turn their factories into intelligent automation hubs will be the ones that win long term from those investments. But AI, specifically deep learning-based image analysis (also known as example-based machine vision), combined with traditional rule-based machine vision, can give a factory and its teams superpowers. Take a process such as the complex assembly of a modern smartphone or other consumer electronic device. The combination of rule-based machine vision and deep learning-based image analysis can help robotic assemblers identify the correct parts, detect whether a part is present, missing or assembled incorrectly on the product, and more quickly determine whether those are problems. And they can do this at an unfathomable scale. The combination of machine vision and deep learning is the on-ramp for companies to adopt smarter technologies that will give them the scale, precision, efficiency and financial growth for the next generation. But understanding the nuanced differences between traditional machine vision and deep learning, and how they complement rather than replace each other, is essential to maximizing those investments.

At a fundamental level, machine vision systems rely on digital sensors protected inside industrial cameras with specialized optics to acquire images. Those images are then fed to a PC so specialized software can process, analyze, and measure various characteristics for decision making. Traditional machine vision systems perform reliably with consistent, well-manufactured parts. They operate via step-by-step filtering and rule-based algorithms that are more cost-effective than human inspection at scale. They can be executed at extremely fast speeds and with great accuracy. On a production line, a rule-based machine vision system can inspect hundreds, or even thousands, of parts per minute. The output of that visual data is based on a programmatic, rule-based approach to solving inspection problems.

Deep learning is a subset of artificial intelligence and part of the broader family of machine learning. Instead of humans programming task-specific computer applications, deep learning trains neural networks on data to produce more accurate outputs based on that training data. Simply put: deep learning allows specific tasks to be solved without being explicitly programmed. Rule-based machine vision and deep learning-based image analysis are a complement to each other instead of an either/or choice when adopting next-generation factory automation tools. In some applications, like measurement, rule-based machine vision will still be the preferred and cost-effective choice. For complex inspections involving wide deviation and unpredictable defects, too numerous and complicated to program and maintain within a traditional machine vision system, deep learning-based tools offer an excellent alternative.
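As a toy illustration of the rule-based side, a classic threshold-and-measure check in OpenCV might look like the sketch below; the threshold and size limit are arbitrary and would be tuned per application (this is a generic example, not a Cognex tool).

```python
import cv2

def rule_based_inspect(image_path, max_blob_area=50.0):
    """Toy rule-based check: reject a part if any dark blob (a scratch,
    hole or contamination) exceeds a fixed size limit."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    defects = [c for c in contours if cv2.contourArea(c) > max_blob_area]
    return len(defects) == 0  # True means the part passes
```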

More at www.cognex.com or via the QR code. MV



SPONSORED

SIX ESSENTIAL CONSIDERATIONS FOR MACHINE VISION LIGHTING Image quality is of paramount importance in machine vision because all subsequent processing and measurements are performed on the image data captured by the camera sensor. Even the most sophisticated software cannot deliver information that is not available in the original image. Many factors influence image quality, and system illumination is high on the list. Here, Jools Hudson of Gardasoft Vision looks at six essential considerations for optimising machine vision lighting.

1. OPTIMISE THE IMAGE AT SOURCE The image contrast and signal-to-noise ratio are key to producing an optimum image and these two measures are determined by the intensity and quality of light reaching the camera. To obtain a good, consistent image the light must be both sufficiently bright and stable, so both the lighting used and the way it is controlled should be carefully considered. Inadequate lighting control is likely to lead to significant underperformance from the machine vision system. Variations in light intensity can result in poor measurement repeatability with major repercussions in a manufacturing environment, leading to excessive waste, defective product reaching the end user or even line stoppages.



2. GET THE MOST FROM YOUR LEDS LED lighting is often specified by voltage, but brightness is actually determined by the current through the LEDs, not the supply voltage. This means that accurate control of light intensity requires good current control. LEDs are remarkably reliable devices, but light output falls as the device ages and, on shorter timescales, the rise in temperature after the light is switched on changes the brightness. Switching the light on only when an image is being acquired, by pulsing the light, minimises the amount of heat produced and significantly improves LED lifetime. Strobing an LED light also brings the major benefit of allowing the light to be 'overdriven' in short pulses to intensities much higher than the manufacturer's rating. Overdriving is a very powerful technique, but it is important to ensure the LED is not driven at too high a current for too long, which may damage the light. Gardasoft's SafeSense™ technology for lighting controllers imposes safe working limits on overdrive, based on manufacturers' specifications, pulse width and duty cycle.
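To make the trade-off concrete, here is an illustrative (and deliberately conservative) way a limit could be derived from duty cycle and pulse width. The rules and numbers are invented for this sketch; they are not Gardasoft's actual SafeSense logic.

```python
def max_safe_pulse_current(rated_current_a, duty_cycle, pulse_width_ms,
                           overdrive_limit=10.0):
    """Illustrative only: bound the pulse current by scaling the continuous
    rating with the inverse duty cycle, capped at a hard multiple, and
    derate long pulses that give the junction time to heat up."""
    factor = min(1.0 / duty_cycle, overdrive_limit)
    if pulse_width_ms > 10.0:  # long pulses: fall back towards the rating
        factor = min(factor, 2.0)
    return rated_current_a * factor

# Example: a 1 A-rated light pulsed at 1% duty for 100 µs could, under this
# toy rule, be driven at up to 10 A.
print(max_safe_pulse_current(1.0, 0.01, 0.1))  # -> 10.0
```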

3. BUILD IN FLEXIBILITY It is common for machine vision requirements to change. To guard against the unexpected, system flexibility should be built in so that existing systems can be adapted to changing requirements and environment. The inclusion of variable lighting control allows the intensity to be adjusted to suit an altered application so that the same vision system can be used with different light settings for different product batches. Making use of overdrive offers even more flexibility because, with an appropriate lighting controller, LEDs can safely be overdriven at up to 10x the published maximum. A common occurrence is where an established system becomes adversely affected by extraneous light sources. If the light is already running at maximum rating in continuous mode, then overdrive enables increased brightness to significantly reduce ambient light interference, without having to change to a higher intensity LED.

4. MAINTAIN STABLE ILLUMINATION LED brightness is proportional to the current running through the device; in fact, a very small change in LED drive voltage results in a much larger change in light output intensity. This is critical, since even for simple vision calliper measurements a 10% change in light level can cause a change of 0.5% in the measured values. All Gardasoft LED controllers regulate the LED current to produce a stable, tightly controlled and highly repeatable light output.

5. MAKE SETUP EASY A well-designed lighting controller can bring significant benefits to the setup and use of machine vision systems. Opto-isolated trigger inputs can connect to all common signal sources, a digital push-button front panel allows easy and instant configuration of the lighting, and the controller should incur minimal delay between the trigger signal and the light pulse. Ethernet compatibility with built-in web pages can provide live, measured values of current and voltage to determine the performance of the light.

6. UTILISE NETWORK CONNECTIVITY GigE Vision and GenICam compatibility bring enhanced connectivity between components from different manufacturers and enable the controller to be discoverable on the network. Ethernet communications provide remote access to all the lighting within a system, to indicate whether a light is connected, disconnected or short-circuited. It also offers the possibility of remote troubleshooting of a vision system, and allows the configuration of a lighting controller to be changed to suit different batches of product. Adjustments can be made to increase the current into a light whose intensity has reduced due to ageing. An application such as the Gardasoft Vision Utility allows the machine vision control software to select a batch type and communicate directly with the controller to set the configuration required for that batch.

FIND OUT MORE Getting machine vision lighting right is a critical step in the successful implementation of a machine vision system. More information is available in a freely downloadable white paper: www.gardasoft.com/Six-Essential-Considerations. MV

CONTACT DETAILS N: Jools Hudson W: www.gardasoft.com E: vision@gardasoft.com T: +44 1954 234970




IS NEUROMORPHIC TECHNOLOGY THE FUTURE OF MACHINE VISION? Standard machine vision systems solve problems by brute force. Frames are collected and processed without regard for the highly redundant information within and between frames. This approach is very inefficient and suffers from the speed and dynamic range issues inherent to standard machine vision. Swiss-based iniVation offers a different view on the future of machine vision.

iniVation AG, a spin-off from ETH Zurich and the University of Zurich, believes that its patented Dynamic Vision Sensor (DVS) technology will transform the future of machine vision. iniVation's neuromorphic vision systems offer unprecedented advantages over conventional machine vision systems, including the following:

• Low latency (<1 ms), since there is no waiting for fixed frame exposures, leading to much faster camera response times

• High dynamic range (>120 dB), due to the design of the pixel sensor

• Lower computation and data storage requirements

• Ultra-low power consumption, enabling always-on battery-powered devices

Neuromorphic vision systems are inspired by aspects of the design of the retina and brain. In particular, biological retinas do not send pictures – i.e. frames – to the brain. Rather, they pre-process the light, transmitting only changes in light intensity (to a highly simplified first approximation). It is this and other aspects of the human visual system that the founders of iniVation have replicated in technology, which they call the Dynamic Vision Sensor (DVS). In the DVS, every pixel works independently from the other pixels in analogue mode, providing an event-based stream of changes as the raw sensor output.

Together, these features provide an unprecedented combination of performance characteristics to enhance applications across a wide variety of markets including IoT, industrial vision, autonomous vehicles and aerospace.
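To give a feel for what an event-based output looks like in code, here is a minimal sketch that bins a stream of (timestamp, x, y, polarity) events into a signed image for visualisation. The tuple format and the resolution are assumptions for illustration, not iniVation's DV SDK types.

```python
import numpy as np

def events_to_frame(events, width=346, height=260, window_us=10_000):
    """Bin DVS events (timestamp_us, x, y, polarity) from one short time
    window into a signed 2D histogram for visualisation."""
    frame = np.zeros((height, width), dtype=np.int16)
    t0 = events[0][0]
    for t, x, y, polarity in events:
        if t - t0 > window_us:
            break
        frame[y, x] += 1 if polarity else -1  # ON events +1, OFF events -1
    return frame
```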



Processing visual information in this way has at least three distinct advantages:

1. The output data stream is very sparse, as only local intensity changes are encoded. This saves on compute and energy.

2. The system can respond very fast, typically within tens of microseconds, because the pixels operate continuously in analogue mode instead of waiting for successive frames to be captured.

3. The sensor has very high dynamic range – around 120 dB – because the changes being encoded are always relative intensities.

DVS technology evolved from over 20 years of research at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich in Switzerland, from where iniVation was founded in 2015. Dr. Kynan Eng, CEO of iniVation, is fully convinced of the application potential of the technology.

"We are finding applications almost everywhere in machine vision. Some key areas of interest include factory automation and robotics, where the emphasis is on fast response times. In other areas such as IoT, the benefits of DVS include low power consumption and high dynamic range. In the automotive industry there is high interest in vision both within and outside the car." With so many potential applications, will this new technology make traditional machine vision obsolete? Will it become a standard in machine vision, as the need to extract more compute per dollar (or watt) becomes more and more important? The long-term bet that iniVation is making is that the answers to these questions will be yes. However, the path to the future may involve hybrid systems. It is possible that future image sensors will combine existing pixels – which excel at taking static pictures – with iniVation DVS pixels, which excel at dealing with motion. Similar concepts are already starting to appear in some image sensors, where sensor manufacturers put autofocus pixels directly into the pixel array. It is possible that this trend will continue, incorporating more computation directly into the image sensor. Computational photography – currently a hot topic for capturing high-quality static images – may start to merge with the DVS methods being pioneered by iniVation. Computer vision will become more and more a mix of software and highly specialised hardware operating ever closer to the image plane. This trend will, in the long term, make image sensors more and more like our own retinas. MV

iniVation currently has a network of over 250 customers and partners across multiple industrial markets. Customers include global top-10 companies in automotive, consumer electronics and aerospace.



SPONSORED

PRECISE TO THE MICROSECOND: INDUSTRIAL CAMERAS WITH PRECISION TIME PROTOCOL Baumer is equipping 25 camera models of the LX and CX series with the Precision Time Protocol (PTP) according to the IEEE 1588 standard, in support of precise time synchronization in Ethernet networks. This allows applications to benefit from all the advantages of a PTP-supported inspection system: synchronized recording of images by multiple cameras, simplified allocation of images to triggers, as well as unambiguous identification and allocation of process data. As an increasing number of machine components must operate on a cross-system uniform time base, cameras with PTP support help to simplify system design and integration, potentially resulting in cost reductions, especially in multi-camera operations. Baumer PTP cameras can be synchronized to a precision of 1 µs and support a master and slave mode as well as Scheduled Action Commands.
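Under the hood, IEEE 1588 derives the clock offset from four timestamps exchanged between master and slave. A minimal sketch of that arithmetic (the cameras implement this in hardware and firmware; the function below is purely illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 delay request-response mechanism:
    t1 = master sends Sync, t2 = slave receives it,
    t3 = slave sends Delay_Req, t4 = master receives it.
    Returns the slave's clock offset and the mean path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay
```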

Master and slave modes with Scheduled Action Commands are particularly useful for applications in which the images of several cameras must be captured in sync from different perspectives, for example to generate a precisely composed image of a large object. Learn more at: www.baumer.com/cameras MV

Reliably picky: 100% quality control

Keeping an eye on quality. You spotted the square tomato right away? Us too! It should always be so simple. With VeriSens® vision sensors, it's possible. Thanks to 100% image-based inline quality control, they offer real added value for better quality and greater profitability. Learn more at: www.baumer.com/verisens

CUTTING-EDGE MACHINE VISION TECHNOLOGY

VISION. RIGHT. NOW.

Innovative products, intelligent consulting and extensive service. Solve your machine vision projects with speed and ease by working with STEMMER IMAGING, your secure partner for success.

WWW.STEMMER-IMAGING.COM


XILINX

TAKING EMBEDDED VISION TO THE NEXT LEVEL

Xilinx's Chetan Khona, director of Industrial, Vision, Healthcare and Sciences, shares his detailed insight into the company's origins, innovative technologies and future plans.

WHAT IS THE BACKGROUND AND FORMATION OF THE COMPANY? Xilinx was founded in 1984 as "The Programmable Logic Company." Inventors of the Field Programmable Gate Array and pioneers of the fabless semiconductor business model, Xilinx has had dozens of industry "firsts" in both technology and business areas. Thirty-five years ago, Xilinx sought to democratise custom logic design by enabling a broader customer set, one that could not afford the NREs of pricey custom ASICs. With FPGAs now clearly in the mainstream, Xilinx has focused on building a better SoC-based embedded platform by combining industry-standard Arm processors with customisable programmable logic fabric in a single small-footprint device. Imagine a common device that can support MIPI, LVDS and SLVS-EC, from a sub-1MP rolling shutter sensor to the largest global shutter sensor, with real-time image processing, AI, and a wide array of communications standards like GigE Vision, USB, CoaXPress, TSN and beyond to the factory network. This highly flexible programmable silicon, enabled by a suite of advanced software and tools, drives rapid innovation across a wide span of industries and technologies including

factories, hospitals, smart cities and the cloud. Xilinx powers scalable Industrial and Healthcare IoT platforms enabling intelligent and adaptive assets.

WHAT IS THE NEW/LATEST INNOVATIVE PRODUCT LAUNCH? Our latest product line, Versal, takes a quantum leap on the SoC concept to introduce the world’s first ACAP, Adaptive Compute Acceleration Platform. Versal is a heterogeneous compute platform that combines Scalar Engines, Adaptable Engines, and Intelligent Engines to achieve dramatic performance improvements of up to 20X over today’s fastest FPGA implementations and over 100X over today’s fastest CPU implementations.

HOW WILL THIS IMPACT THE MARKET? Versal is Xilinx's first product line designed with software engineers and data scientists in mind. While traditional hardware/firmware engineers won't be disappointed by the amount of innovation, Versal simplifies the development of programmable hardware accelerators that augment the Arm A72 embedded processors. In particular, a network-on-chip architecture and an ample quantity of AI engines take embedded vision applications to the next level, enabling all processing on the camera without the requirement for PC-based capture and processing.


WHY ARE FPGAS SO PREVALENT IN MACHINE VISION APPLICATIONS? Most machine vision and factory automation cameras beyond the entry-level models use FPGA technology, with the overwhelming majority of them using Xilinx. There is really no other available technology better suited to capturing image data from a high-resolution sensor, processing it in real time and shipping it out over an industry-standard or proprietary communications protocol without buffering or complex memory operations.

WHAT TECHNOLOGIES DO YOU SEE SHAPING THE FUTURE OF MACHINE VISION CAMERAS? Machine learning using neural networks offers the most potential for productivity gains in defect detection/sorting applications as well as a host of classification applications. Early adopters of this technology are already starting to realise the benefits today. Adoption of this technology is driving the transition from traditional camera architectures to smart cameras and is expected to accelerate in 2020.

WHAT TECHNOLOGIES DO YOU SEE SHAPING THE FUTURE OF MACHINE VISION FRAME GRABBERS? Not every application can move to smart cameras because of restrictions with existing infrastructure, so the machine learning is done on the host PC. Xilinx offers a line of accelerator cards called Alveo. In particular, the Alveo U50 is an ideal entry-level choice as a smart frame grabber for high-performance machine vision, able to analyse multiple high-resolution, high-frame-rate camera channels to accelerate industrial automated inspection and enable new insights into manufacturing processes. The Alveo U50 can extract intelligence from up to eight 10GigE (10 Gb Ethernet) high-speed, low-latency camera streams, greatly outperforming comparable single-input CPU/GPU-based frame grabbers. Alternatively, it can handle up to 96 GigE camera streams, compared to a typical conventional four-input frame grabber. The Alveo U50 simplifies system architecture compared to alternatives such as multi-x86 industrial PCs. Performance benchmarks indicate an 8x/2.5x advantage over a CPU or P4 GPU respectively. Power efficiency of 42 images/sec/W for the GoogLeNet v1 (int8) DNN in low-latency mode compares with only 28 images/sec/W for the P4 GPU.


WHAT ARE THE PLANS FOR 2020? Xilinx's 2020 plans for machine vision customers mean the continued roll-out of the Versal family, along with some radical new innovations reducing the size, weight and power of cameras that we'll share more details on in early 2020. We are also making big advancements in the design tool flows, opening up our devices to more abstract programming mechanisms beyond the traditional programming languages, to make this incredible technology accessible to a broader set of customers. We will talk about all these topics at SPS IPC Drives in Nuremberg on 26-28 November 2019, Xilinx stand #H4-558. MV

Name of Company: Xilinx The interviewee: Chetan Khona (Director, Industrial, Vision, Healthcare & Sciences) Contact details: chetan.khona@xilinx.com



VINESCOUT THE WINE HARVESTING ROBOT

Sundance Multiprocessor Technology has launched the VCS-1, a small, high-performance, low-power and lightweight embedded processor platform designed specifically for precision robotics incorporating complex, real-time vision, control and sensor applications.

The centuries-old tradition of wine making is now using precision robotics aimed at delivering even greater knowledge to the wine producers of Europe. The Sundance VCS-1 precision robotics platform has been developed in conjunction with the VineScout viticulture partners, which encompass the French agri-robotics manufacturer Wall-YE and, as the target end-user, Symington Estate, a leading producer of port wine in Portugal.

Utilising the PC/104 form factor, which measures just 90mm x 96mm, to provide industry-standard compatibility and expandability, the Sundance VCS-1 embedded processor module is optimised for computer vision, Edge AI and Deep Learning requirements. It weighs just 300g, has a low power consumption of typically 15W and is highly compatible with a wide range of commercially available sensors and actuators.

The project’s aim is to significantly improve the success factors for the European viticulture industry by developing a robot for vineyard monitoring to help wine producers measure key parameters of their vineyards, including water availability, the temperature of the leaves and plant robustness. Developed and proven as part of the European Union’s H2020 ‘Fast-Track-Innovation’ pilot program (FTI - Project ID: 737669), the VineScout delivers a precision robotics solution designed to better facilitate the collection of real-time data in vineyards from which improved grape maturation and harvesting strategies can be devised.



At the processing heart of the Sundance VCS-1 is a Xilinx Zynq MPSoC, mounted onto the PC/104 board using a System-on-Module (SoM). It incorporates an ARM Cortex-A53 64-bit quad-core processor combining real-time control with engines for graphics, video, waveform and acceleration on an FPGA. These include an ARM Mali-400 graphics processing unit (GPU) for graphics acceleration, an ARM Cortex-R5 real-time processing unit (RPU) for managing real-time events, and programmable FPGA logic for hardware acceleration of the AI algorithms used for on-the-fly image processing. The Sundance VCS-1 features extensive I/O capabilities made available through the Sundance External Interface Card (SEIC), including multiple USB3 interfaces for connecting various cameras and sensors such as the Intel RealSense T265 tracking camera, the Intel RealSense D435 and Stereo Labs Zed depth cameras, and the FLIR AX-8 thermal camera. It can also connect with most Arduino and Raspberry Pi actuators and sensors. A further interface enables it to mimic a PC, with HDMI display, SATA storage and Ethernet networking. An onboard ADC is available to gather data from external sensors, and an onboard DAC to control servos and the like. A large selection of I/O standards is also implemented directly in the programmable logic to reduce the latency between the various supported cameras, sensors and servos. Extensive software support is provided for precision robotics solutions, including the ROS Melodic Morenia (ROS-compatible and ROS2-ready) robotics platform, the MQTT machine-to-machine connectivity protocol, the OpenCV computer vision library of real-time programming functions, Xilinx's Edge-AI solutions and the Python scripting language. Also supported are the Ubuntu operating system, the Xilinx SDSoC environment, TULIPP's STHEM toolchain and the Xilinx DPU (deep learning processing unit) for convolutional neural networks.
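As a flavour of how telemetry might leave such a platform over MQTT, here is a minimal sketch. The broker address, topic and field names are invented for the example and are not part of the VineScout software.

```python
import json
import paho.mqtt.client as mqtt

# paho-mqtt 1.x style client; a 2.x client additionally takes a callback
# API version as its first argument.
client = mqtt.Client()
client.connect("broker.vineyard.local", 1883)  # hypothetical on-board broker

# One vineyard-monitoring reading; field names are invented for the example.
reading = {"leaf_temp_c": 24.6, "ndvi": 0.72, "gps": [41.17, -7.54]}
client.publish("vinescout/row12/sensors", json.dumps(reading))
```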

The Sundance VCS-1 is available in a custom enclosure, the PC/104-Blade, designed to remove the need for any fans and to provide a rugged environment for building embedded applications for resilient precision robotics. "The Sundance VCS-1 embedded processor module has been designed to provide the resilient processing power needed for the development of ruggedized precision robotics applications," said Flemming Christensen, managing director of Sundance Multiprocessor Technology. "Available on a fully reconfigurable and expandable, industry-standard PC/104 platform, it delivers high performance and extremely low power consumption. It provides compatibility with a wide range of commercially available sensors and actuators as well as being optimised for computer vision applications, Edge AI and Deep Learning." Capable of operating 24/7, with a battery life of six hours to mimic conventional tractor operations before refuelling, the VineScout is intended to eliminate the subjectivity involved in traditional winemaking by providing winemakers with comprehensive and reliable real-time data on vine and grape growth and maturation, so that they can more easily and less expensively optimise irrigation and harvesting strategies for their vineyards. "Grapes must be picked at the exact point of maturation, and the vines must have the appropriate intake of water during development so that the wine ends up with the desired properties," explains Pedro Machado, R&D Manager of Sundance Multiprocessor Technology. "Controlling these parameters using traditional techniques is complicated and expensive, and few vine-growers and winemakers can really afford it. Thus a majority of producers don't have real data about the grape's growth and maturation cycles that could help them. VineScout changes all this, bringing a new and valuable dimension to winemaking." MV



CRAV BRINGS TOGETHER INDUSTRY TRIO

Destination Silicon Valley for the dynamic two-day conference on Collaborative Robots, Advanced Vision & AI.

Silicon Valley – the cradle of the IT revolution – remains a centre for innovation, ideas and the meeting of minds. In November, Silicon Valley will be buzzing as it hosts the third CRAV – Collaborative Robots, Advanced Vision & AI Conference.

The aim of CRAV is to introduce delegates to the technologies, trends, challenges and people that are disrupting the status quo with revolutionary innovations. This two-day event is being held in San Jose on November 12-13 and is organised by A3 – the Association for Advancing Automation.

The event packs a punch with four keynote speakers across the two days, aside from those delivering insights in the three tracks. The keynotes are:

• Ben Wolff, Sarcos Robotics: Human Augmentation Robots for the Automation Age: An Extraordinary Category of Cobots

• Sankar "Jay" Jayaram, Intel Sports: Volumetric Technologies for Future Sports Experiences

• Vincent Vanhoucke, Google: Closing the Perception-Actuation Loop using Machine Learning: New Perspectives and Strategies

• Henrik Christensen, Robust.AI and UC San Diego: Present State and Future Directions for Intelligent Vision-Based Collaborative Robots

This dynamic conference has a unique place on the calendar, bringing together three key sectors - machine vision, robotics and AI - in one location. In addition, CRAV currently has more than 60 exhibitors showcasing their latest technology.

"CRAV is very unique as it brings together these three different industries, but there is a lot of cohesion between robotics, vision and AI," said Bob Doyle, A3 vice president. "AI has been around for a long time now, but there is still a lot of interest in how it can be used in machine learning and what options it can open up. With robotics and machine vision it comes together really well, and the three tracks we have at the event offer plenty of insight."

Doyle added: "For exhibitors it is a great opportunity to gain exposure in this exciting market. Our attendees last year included engineers and decision makers from companies like Google, Apple, Intel and Lockheed Martin.

"For delegates, and we had more than 500 last year, it is an opportunity not just to network but also to see what products are out there and are coming to the market.

"There is a lot to take in across the three tracks, as there are 30 different sessions. As a result, it is difficult to highlight particular speakers without missing others off the list. If you don't want to miss any of it, I would suggest that organisations bring three delegates!

"We certainly want people to embrace the show, and if anyone wants to attend as an exhibitor or delegate then we want to hear from them."


For more information about the event, go to https://crav.ai/


MV


MIDOPT: AN UNMISSABLE OPPORTUNITY

Georgy Das at MidOpt explains why the CRAV show in Silicon Valley is good for business.

1. WHY ARE YOU EXHIBITING AT CRAV?

We attend CRAV to see the cutting-edge innovations and trends in the industry. It's also a great place to introduce our new products, connect with our existing business partners and gain potential new business opportunities. CRAV's intimate setting allows us to meet and talk to numerous people in the industry. Being from the Midwest, we enjoy that the show is on the west coast, as it gives us an opportunity to meet with our partners, customers and potential new customers in the area.

2. WHAT ARE YOU EXHIBITING?

We are showcasing our full optical line, demonstrating the key features of a quality machine vision filter, and unveiling our new Backlight fluoreSHEET™. When illuminated with a blue LED light, the sheet emits an orange fluorescence. This is helpful when imaging with a monochrome camera and an orange Bandpass Filter, as it gives the effect of a diffuse backlight. This wavelength illumination paper is ideal for applications with space constraints where backlighting is otherwise not possible, and in systems where the background does not provide the desired contrast. We will also be showing our new line of oleophobic-coated Protection Windows, which are great for enclosures, outdoor applications or harsh environments. The oleophobic coating repels water and oils, making the windows easy to clean.

3. WHAT DO YOU HOPE TO ACHIEVE?

We hope to introduce the MidOpt brand and our machine vision filters to those who are not currently familiar with them, show our new products and innovations to the market, connect with our existing customers, and meet potential new customers and help find the best filter solution for their application.

BACKLIGHT FLUORESHEET™

Backlights provide sharp contrast to outline a shape, edges or an opening, but space constraints can rule out a conventional backlight. With the Backlight fluoreSHEET™ (BF590), illuminating the sheet from the front with a blue LED light makes it emit an orange fluorescence. A MidOpt orange Bandpass Filter (BP590) captures the orange emission and blocks the blue LED excitation, giving the appearance of a bright white diffuse background in a monochrome image.
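To illustrate why this matters downstream, here is a minimal sketch of the silhouette extraction that such a fluorescent backlight makes straightforward: against a bright, even background, a fixed threshold and a contour trace recover the part's outline. The file name and threshold value are illustrative assumptions, and the code is ours rather than MidOpt's, who supply the optics, not software.

    import cv2

    # Monochrome frame captured through an orange Bandpass Filter, so the
    # fluorescing sheet appears as a bright, even background behind the part.
    img = cv2.imread("backlit_part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

    if img is not None:
        # Against a bright diffuse background, a single fixed threshold
        # cleanly separates the dark part from the glow.
        _, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

        # Trace outer outlines and inner openings (OpenCV 4 return signature).
        contours, _ = cv2.findContours(mask, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)
        print(f"Found {len(contours)} outlines")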


4. WHAT ELEMENTS OF CRAV ARE YOU MOST LOOKING FORWARD TO?

Like everyone in attendance at CRAV, MidOpt looks forward to collaborating with new friends and catching up with seasoned automation professionals. Unlike most trade shows and conferences, however, CRAV represents a large concentration of companies and technologies that are best described as 'disruptive'. Robotics, MV and AI are rapidly being applied to new applications across all industries, and we are eager to better understand the imaging needs of companies in this space so we can support them with our filtering expertise.

5. CRAV IS BRINGING TOGETHER ROBOTICS, MACHINE VISION AND ARTIFICIAL INTELLIGENCE – HOW IMPORTANT IS THIS CONNECTION AND WHY?

At the heart of success in each of these areas is quality image acquisition. While robotics, MV and AI can each stand alone, solutions are often a combination of two or more of these technologies, and even the most robust vision software tools can be handicapped when starting from an unfiltered image. MidOpt is excited to participate in CRAV to promote the importance of image filtering and the high impact it has on overall application repeatability. The social networking aspect of CRAV is just as significant, in that it gives professionals from each of these segments an opportunity to collaborate, expand their application knowledge base and better understand industry challenges.

www.midopt.com

MV

CONTACT DETAILS N: Georgy Das | W: https://midopt.com | E: Gd@midopt.com



HOW TO MANUFACTURE A WORLD CLASS LEADER

An innovative and thought-provoking report reveals what makes a great leader in the industrial and manufacturing sectors.

There has always been debate as to whether great leaders are born or made, from the great military generals to visionary politicians and, latterly, those in business. What has enabled them to achieve greatness? The 'World Class Leader Report' seeks to provide the answer.

It has been produced by executive search and recruitment specialist TS Grale and draws on in-depth interviews with more than 20 board-level executives and directors across private, listed and private-equity-backed businesses, with turnovers ranging from less than £20m to more than £1bn.

The research highlights the traits that many great leaders, as well as successful businesses, share. It also answers a wide range of key questions relating to leadership. These include:

• How critical are the younger generation to the manufacturing and industrial sectors, and how valuable are their technical skills?

• Why should current leaders pay attention to their younger colleagues?

• What specific skills do women bring to leadership roles, and is business better for having different viewpoints round the table?

• How well are leaders embracing technology?

• Do people want leadership roles as much as they used to?

• What key traits will leaders of the future need?

• Do great leaders know it all, or are they hungry to keep learning?

• What makes a world class leader?

Jason Saunders, co-founder and director at TS Grale, said: "Talk to anyone in business and it won't be long before someone comments on the pace of change – whether it's political, technological or people's changing expectations of work. However, the big question we wanted to ask was whether the old models of leadership skills and roles are fit for purpose in this ever-changing world, or whether we need a new type of leader for the future.

"We've asked a wide range of business leaders what they think, and the results are fascinating and provocative in equal measure. We're now able to share this report with other companies working in the industrial and manufacturing sectors."

Saunders added: "The results give us a very clear set of qualities that make a great leader. We're now looking at how we can incorporate this research and identify the qualities highlighted during our selection processes, in order to help develop the world class leaders of the future."

The 'World Class Leader Report' can be downloaded from TS Grale's website at https://tsgrale.com/leader-report MV





