Artificial Intelligence | MVPro 25 | April 2021


ARTIFICIAL INTELLIGENCE: THE DEATH OF QUEUES

GAME-CHANGING ROBOTICS

PICTURE-FINDING AI

ISSUE 25 - APRIL 2021

mvpromedia.com MACHINE VISION & AUTOMATION


ADVANCED ILLUMINATION

MicroBrite™ Lights

WWW.ADVANCEDILLUMINATION.COM/MICROBRITE

AL295 MicroBrite™ Bar Light

LL232 MicroBrite™ Line Light

RL208 Series Bright Field Ring Light

DF196 Series Dark Field Ring Light

SL244 MicroBrite™ Spot Light

SL223 MicroBrite™ Spot/Coaxial Light

DF198 Series Diffuse Ring Light

BT Series MicroBrite™ Backlight

High Intensity Illumination in a Compact Form Factor

The Ai MicroBrite™ family of high-performance machine vision lights is ideal for applications requiring short working distances, with a light-weight, compact form factor as an added benefit for vision systems with strict spatial requirements.

Multiple Wavelengths

Pre-Engineered Scalability

High Intensity LEDs

1.802.767.3830 sales@advancedillumination.com


MVPRO TEAM

Alex Sullivan, Publishing Director - alex.sullivan@mvpromedia.com
Cally Bennett, Group Business Manager - cally.bennett@mvpromedia.com
Joel Davies, Writer - joel.davies@mvpromedia.com
Sam O’Neill, Senior Media Sales Executive - sam.oneill@cliftonmedialab.com
Jacqueline Wilson, Contributor - Jacqueline.wilson@mvpromedia.com
Becky Oliver, Graphic Designer

CONTENTS

4 EDITOR’S WELCOME
6 INDUSTRY NEWS - Who is making the headlines?
20 MOBIUS LABS - MVPro talks AI to Dr Appu Shaji, Mobius Labs
26 CHECKOUT TECH - Spotlight feature on contactless retail
32 FUZZY LOGIC - Game-changing robotics software
34 IDS - APREX Solutions and SOLOCAP
36 TELEDYNE - Bruno Menard examines 3D Machine Vision
39 ADVANCED ILLUMINATION - New linear backlight
40 AUTOMATE FORWARD - Overview and top three exhibitors
44 LYNX - Military grade security for Enterprise IT
46 WHITE PAPER - A fascinating insight into Colloidal Nanocrystals

Visit our website for daily updates

www.mvpromedia.com


MVPro Media is published by IFA Magazine Publications Ltd, 3 Worcester Terrace, Clifton, Bristol BS8 3JW Tel: +44 (0)117 3258328 © 2021. All rights reserved ‘MVPro Media’ is a trademark of IFA Magazine Publications Limited. No part of this publication may be reproduced or stored in any printed or electronic retrieval system without prior permission. All material has been carefully checked for accuracy, but no responsibility can be accepted for inaccuracies.



WELCOME

I’m sure I’m not alone in thinking that adaptation has become central to our lives over this past year. From home schooling to shopping, virtual software technology has enabled us to move to remote working, and not just to cope but, for some, to thrive in the new normal. Although the UK now sees light at the end of the tunnel, we are conscious that our friends and colleagues in Europe and farther afield remain in a difficult position. We send you all our best wishes.

Time to welcome the newest member of the MVPro team, Joel Davies, our new writer, who has already hit the ground running and reached out to many of you to discuss the latest innovative thinking and products.

This issue is focussed on AI, and the positive step-up that can be felt throughout the sector. With so much happening we have extended our industry news this month, highlighting important news and changes, along with the technology developments that have gathered pace through this last difficult year. Despite everything, it would appear that, if anything, the time has allowed researchers and developers to move forward rapidly in our industry.

Joel sat down with Dr Appu Shaji of Mobius Labs to discuss his take on all things AI, with a special emphasis on the ‘picture-finding’ kind. In a special feature, and one which is very topical at the moment, we explore the death of the supermarket queue, looking specifically at developments in the retail sector. MVPro also attended a brilliant virtual press conference hosted by Fuzzy Logic Robotics (FLR), the brainchild of Dr Ryan Lober and Antoine Hoarau. We give an overview of the first release of Fuzzy Studio™, a universal software platform the company say is “intuitive and simple like a video game”.

One of the biggest industry trade show events in North America, Automate Forward didn’t fail to provide a worthy platform for the world’s best companies and products this year. The event contained many highlights, including a final-day keynote speech from Andrew Ng on end-to-end workflows for building deep learning-powered visual inspection. Having explored every product and booth on display, Joel brings you a run-down of the top three exhibitors he visited.

Wherever you are, stay safe, and now grab a cup of coffee, sit back and enjoy the read.

Alex Sullivan
Publishing Director

Alex Sullivan Publishing Director alex.sullivan@mvpromedia.com 3 Worcester Terrace, Clifton, Bristol BS8 3JW MVPro B2B digital platform and print magazine for the global machine vision industry www.mvpromedia.com



Some Assemblies landed on Mars; some are waiting for you on Earth.

ALYSIUM ASSEMBLIES INSIDE. WE ARE YOUR SPACE QUALIFIED SUPPLIER.

MACHINE VISION ASSEMBLIES FOR YOUR DEMANDING APPLICATION. WHAT YOU EXPECT +MORE

www.alysium.com


INDUSTRY NEWS

3D VISION SOLUTIONS COMPANY HAILED AS “TOP STARTUP”

The International Business Review included Photoneo in its list of “Top AI Startups to Watch” last month, whilst TechRound recently featured it as its “Startup of the Week”. Photoneo is a leading provider of industrial 3D vision, AI-powered automation solutions, and robotic intelligence software. Since its foundation in 2013, the Slovakian startup has received multiple accolades, including the inVision Top Innovations award in 2019 and 2021, the IERA Award 2020 and, most recently, the inspect award 2021. The company’s mission is “to give vision and intelligence to robots all over the world so that they can see and understand.” Photoneo provides services for companies in fields including the automotive, logistics, e-commerce, food, and medical industries to improve the performance and efficiency of their manufacturing, fulfilment, and assembly processes.

In 3D vision, the reconstruction of scenes in motion has posed a great challenge to the developers of 3D vision systems, as none of the existing 3D sensing technologies has been fully able to overcome its limitations. Opting for one method or another, the customer is often left with a compromise between the quality and the speed of the device. Photoneo thinks its newest 3D camera changes that. Able to capture objects moving at up to 40 metres per second while delivering precise point clouds, the MotionCam-3D is, they say, the highest-resolution and highest-accuracy area scan 3D camera that can capture objects in motion.

The speed and precision of the camera are in part due to a new technology they invented called “Parallel Structured Light”, which they say “fundamentally shifts the limits of 3D vision and fills the gap among the existing 3D sensing methods.” The technology uses structured light in combination with a proprietary mosaic shutter CMOS image sensor to capture objects in motion at high quality. The sensor consists of superpixel blocks that are further divided into subpixels. The laser from a structured light projector is on the entire time, whilst the individual pixels are repeatedly turned on and off. The camera reconstructs its 3D image from one single shot of the sensor, capturing the scene as it passes. This is the core ingredient of the Parallel Structured Light technology. Co-founder and CTO of Photoneo, Tomas Kovacovsky, noted that the structured light method isn’t perfect, but they offer a solution. He said, “The big limitation of the structured light method is that the projector-encoded patterns are captured by the camera sequentially and, because the image acquisition of a 3D surface requires multiple frames, it cannot be used for dynamic objects or while the sensor is in motion, as the output would be distorted.” This limitation has been solved with the “Parallel Structured Light” technology, which enables the capture of a dynamic scene without motion blur. Gabriele Jansen, Managing Partner at Vision Ventures, is optimistic about the MotionCam-3D. She said that the camera “closed the gap“ amongst existing 3D technologies, enabling something “we have painfully missed so far - the high accuracy snap-shot area scan of large work areas in motion.” MV
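The sequential-capture limitation Kovacovsky describes can be sketched numerically. The toy simulation below is not Photoneo code; it is a generic binary-coded structured-light model in which a 1-D scene is decoded from three stripe patterns, once with all bits sampled from a single instant and once with the scene drifting one pixel between frames:

```python
import numpy as np

# 1-D toy scene: each camera pixel observes one projector column index.
n = 8
bits = 3  # a 3-bit binary code distinguishes 8 projector columns

def pattern(bit):
    """Binary stripe pattern: the given bit of each projector column index."""
    return (np.arange(n) >> bit) & 1

def observed_columns(shift):
    """Which projector column each pixel sees if the scene has moved by `shift`."""
    return (np.arange(n) + shift) % n

# Sequential coding: one bit per frame; the scene drifts 1 px between frames,
# so each pixel's code mixes bits from different surface points.
decoded_moving = np.zeros(n, dtype=int)
for b in range(bits):
    cols_now = observed_columns(shift=b)
    decoded_moving |= pattern(b)[cols_now] << b

# Single-shot coding: all bits sampled from the same instant.
cols0 = observed_columns(shift=0)
decoded_single = np.zeros(n, dtype=int)
for b in range(bits):
    decoded_single |= pattern(b)[cols0] << b

print(decoded_single)  # matches the true column indices
print(decoded_moving)  # corrupted codes -> a distorted 3D reconstruction
```

The single-shot decode recovers the true column at every pixel, while the sequential decode of the moving scene produces wrong codes, which is exactly the distortion a multi-frame structured-light system suffers on dynamic objects.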



INDUSTRY NEWS

SOFT ROBOT SURVIVES DEEPEST PART OF OCEAN

A recent report details how an experiment conducted by Guorui Li, Xiangping Chen, Fanghao Zhou et al. allowed a soft robot to survive the pressure at 10,900 metres (35,761 feet) deep. Modelled on the deep-sea snailfish and inspired by soft-bodied organisms like octopuses and jellyfish, the soft robot reached the bottom of the Mariana Trench and swam freely in the South China Sea at a depth of 3,224 m. The report says elegant soft robot designs present promising approaches to deep-sea exploration, but their performance depends heavily on their soft actuators, including dielectric elastomers (DEs), hydrogels and fluidic devices. The power and control electronics of such robots can require bulky and rigid vessels for protection against extreme pressure, despite their soft actuators and structural flexibility. A pressure-resilient soft robot with no rigid vessel that can swim at extreme ocean depths had yet to be developed.

The self-powered robot they developed eliminates the requirement for any rigid vessel. The robot, which is 22 cm long with a 28 cm wingspan, free-swam by flapping at a speed of 5.19 cm s⁻¹ in a field test in the South China Sea. It was carried by a deep-sea remotely operated vehicle (ROV) to a depth of 3,224 m and actuated by an onboard a.c. voltage of 8 kV at 1 Hz. The robot is “soft” because it is made of DE muscles located at the joints between the supporting frame and the flapping fins; a thin silicone flapping fin supported by a stiffer leading edge; and an elastic frame and decentralized electronics embedded in its soft body. Its DE muscles are made of a compliant electrode (carbon grease) sandwiched between two pre-stretched DE membranes. Elastic frames were glued to the pre-stretched DE muscle to provide support and to convert in-plane actuation of the DE membrane into fin-flapping motion. The electronics, including a battery, a micro control unit (MCU) and a voltage amplifier, are encapsulated in a polymeric matrix that protects them from the hydrostatic pressure of the sea. Tests were also run in a pressure chamber and a deep lake to demonstrate the swimming performance of the soft robot at lower levels of pressure.

The report concludes that the experiment highlights the potential of designing soft, lightweight devices for use in extreme conditions. They say this is especially apparent when compared to even well-designed rigid robots, which offer excellent manoeuvrability and functionality in underwater missions but still require pressure vessels or pressure-compensated systems that are at risk of structural failure under extreme conditions. They believe soft devices with sensing, actuating, power and control systems can be fully integrated to monitor and regulate complex tasks in mechanically abusive conditions (not only high pressure but also other difficult mechanical conditions such as vibration or impact). And integrating extra function units or rearranging the circuits could yield multiple additional functions, such as sensing and communication in the deep sea. Their future work will focus on developing new materials and structures to enhance the intelligence, versatility, manoeuvrability and efficiency of soft robots and devices. MV



INDUSTRY NEWS

SIEMENS DECARBONISES SWEDISH COCA-COLA FACILITY

The Coca-Cola European Partners (CCEP) production facility in Jordbro reached the new milestone thanks to a major energy efficiency project executed with Siemens Smart Infrastructure. The factory, which opened in 1997, produces more than 1 million litres of beverages a day, in different flavours and package sizes. CCEP partnered with Siemens to cut energy consumption and improve sustainability during production and, following an audit of the production plant’s energy use, defined several energy-saving measures. The energy efficiency project resulted in annual savings equivalent to the amount of energy needed to charge a hybrid car 400,000 times. Peter Halliday, Head of Building Performance and Sustainability at Siemens Smart Infrastructure, said: “Our energy and performance services are based on a strategic approach, utilizing value-stacking to exploit the full potential. This ensures we deliver a positive impact right from the start as well as in the long term for the entire organization”. Some of the measures included installing new fans and heat recovery from high-pressure compressors. They say this not only led to substantial energy savings but also improved air quality within the buildings and quietened operations, which enhanced the quality of life in the surrounding residential areas. Also, Siemens upgraded the existing building management system to the Desigo CC platform, facilitating continuous optimization of the production plant’s energy use.

“We are very satisfied with how the project went, seeing how the implemented measures are paying off in a short time. Now, we look forward to continuing to develop the project to achieve even greater efficiency gains,” said Kim Hesselius, Property Manager at Coca-Cola European Partners in Sweden. Siemens and Coca-Cola have collaborated on property automation for several years and are considering supporting CCEP’s net-zero target goals by 2040.

A multi-industry company, Siemens had around 69,600 employees worldwide as of September 30, 2020. A recent result of its size came as it was named the third-best microgrid controls vendor in a Guidehouse Insights report. The report profiled, rated, and ranked the top microgrid controls vendors to provide industry participants with an objective assessment of these companies’ relative strengths and weaknesses in the global market for UES integration. The companies were rated on 12 criteria, including vision, production strategy, technology, geographic reach and staying power. Siemens rivalled the likes of Schweitzer Engineering Labs (SEL), Schneider Electric, Tesla and Optimal Power Solutions in a comparison of 16 of the best utility-scale energy storage systems integrators. “SEL and Schneider offer contrasting strengths - SEL offers a low cost, market-leading technology for seamless islanding while Schneider Electric is pioneering new energy as a service (EaaS) business models for microgrids”, says Peter Asmus, research director with Guidehouse Insights. “The third market leader is Siemens, which has expanded its microgrid offerings and helped develop leading-edge microgrids across multiple geographies”. MV



INDUSTRY NEWS

ROBOT THAT DETECTS BREAST CANCER DEVELOPED

The project was led by Dr Christos Bergeles from King’s College London and Dr Daniel Leff from Imperial. Together, they invented “growing robots” that unroll when pressure is applied. The technology can grow inside the lumen - cavities like vessels - without causing harm. The robot couldn’t be steered originally, but Dr Hadi Sadati created a basic model of the growing and steerable elements, showing that the robot’s shape could be predicted well enough for the team to know how to steer it inside the anatomy. The engineers on the project then added steering capabilities to the growing robot, and Dr Pierre Berthet-Rayne created the first version of the MAMMOBOT that can both grow and steer, with an overall diameter of around 2 mm. “The 2mm robot elongates to conform to the ductal tree, and the steerable catheter bends to move the tip to the appropriate branch,” Dr Bergeles said. Researchers say the novelty in the development process lies in the bespoke manufacturing approach, the adaptation of the “growing robot” concept to incorporate elements from steerable catheters, and tailored controllers. There has been significant growth in the use of robots in medical settings to assist where human senses, including vision and touch, cannot match their precision or consistency. Areas include preventive medicine, scans and surgeries. The MAMMOBOT in particular is one of several robots being developed to aid against cancer.

Last year, a robot-assisted procedure took place at Maastricht University Medical Center in the Netherlands. It alleviated a common complication of breast cancer surgery by helping a specialist surgeon divert thread-like lymphatic vessels, as narrow as 0.3 mm, around scar tissue in the patients’ armpits and connect them to nearby blood vessels.

Not the only British university to hit the headlines recently, two students from Cardiff University won the Institute of Global Health Innovation’s (IGHI) annual Health Innovation Prize for their machine learning mattress. Out of 46 teams from 13 universities, the £10,000 prize fund, awarded by Imperial, who run the competition, was given to the two founders of Calidiscope. The mattress topper they designed integrates novel sensors and machine learning to reduce the incidence of pressure ulcers, an ailment that can mean a two-to-four-fold increase in the risk of death for older people in intensive care units. The solution can measure a marker of inflammation, allowing pressure ulcers to be detected at an early stage.

The MAMMOBOT research project was developed during a sandpit, co-organised by Cancer Research UK and EPSRC, which was designed to identify robotic technologies that can lead to early detection of cancer. The researchers are now working on recreating the setup more robustly, using better motors and components. The steering technology is the subject of a patent application, and the team is keen to explore licensing opportunities in the UK and internationally. MV



INDUSTRY NEWS

COGNEX ANNOUNCE EDGE INTELLIGENCE PLATFORM

The machine vision giant, which boasts $7 billion in cumulative revenue since the company’s founding in 1981, expands its range of products into the software side of the industry with its Cognex Edge Intelligence (EI) platform. The company says the platform provides barcode reading performance monitoring and device management to help customers prevent downtime and boost the productivity of manufacturing and logistics operations. “Cognex’s machine vision tools and barcode reading systems produce insight-rich data across manufacturing and logistics facilities”, said Carl Gerst, Executive Vice President, Products and Platforms. “With EI’s powerful visualization and diagnostics tools, our customers can now use that data to identify performance issues and take corrective action faster”. They report that within a few minutes of installation, the Cognex Edge Intelligence software begins securely collecting critical device data and displaying the results in visual dashboards. Customers can use this data to analyze performance trends, monitor configuration changes, and capture no-read and failed validation images for further analysis. The platform can monitor multiple devices and lines within a single site, as well as deploy configurations and firmware updates simultaneously to a large number of connected devices. It also includes audit trail capabilities that track and report any changes to device settings, and connectivity features for easy integration with other Industry 4.0 solutions.


Cognex state the EI is designed to help improve overall equipment effectiveness (OEE) and increase throughput across a range of industries including logistics, food and beverage, consumer products, packaging, automotive, medical devices, and electronics. Cognex certainly has been busy. The announcement of its Edge Intelligence platform followed the release of its new 3D vision system, the In-Sight® 3D-L4000. This vision system eliminates speckle by using a special laser in the blue light range, so the imager sees a clean laser line, resulting in higher-accuracy 3D images. The laser also provides the illumination for both 3D and 2D images, eliminating the need for external light. Cognex says the In-Sight® 3D-L4000 camera enables engineers to solve a range of inline inspection, guidance and measurement applications on automated production lines. It is also so technically advanced and user-friendly that it can be applied to a range of applications in many industries, including food and beverage, consumer products, packaging, automotive, medical devices and electronics. Headquartered in Natick, Massachusetts, USA, with offices and distributors located throughout the Americas, Europe, and Asia, the company have shipped more than 2.3 million image-based products since 1981. As a company that primarily designs, develops, manufactures, and markets a wide range of image-based products, all of which use artificial intelligence (AI) techniques, the announcement of the Cognex Edge Intelligence platform could mark a sign of growth for the company in the industry. MV
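For readers unfamiliar with the OEE metric the platform targets, the standard definition multiplies three factors: availability, performance and quality. The figures below are illustrative only, not Cognex data:

```python
# Standard OEE calculation (availability x performance x quality).
availability = 0.90  # run time / planned production time
performance = 0.95   # (ideal cycle time x total count) / run time
quality = 0.99       # good count / total count

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")
```

A line that runs 90% of planned time, at 95% of ideal speed, with a 99% first-pass yield therefore scores well below any single factor, which is why dashboards that surface each component separately (as EI-style monitoring tools aim to) are useful for pinpointing where throughput is lost.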



INDUSTRY NEWS

ROBOTIC PICKING AND THE CONVENIENCE OF HAND-EYE COORDINATION, BY PHOTONEO

Robotic picking of items is one of the most common tasks that humans leave to robots. Yet it is also one of the most challenging ones. To be able to perform complex picking tasks, the robot needs to see and understand. It must be equipped with exceptional 3D vision and intelligence to be able to recognize and localize randomly arranged objects, navigate its arm to approach them, and pick them one by one without colliding with its environment or other parts. Here, Andrea Pufflerova, PR Specialist at Photoneo, lays out what you need and why.

A powerful 3D vision system is crucial - yet it is also important to choose the right position for it. The scanner can either be mounted in a fixed manner in the cell, usually above the container that holds the parts (extrinsic calibration), or it can be attached directly to the robotic arm, behind the very last joint, for instance on the gripper (hand-eye calibration). Both approaches offer advantages, but there are many cases where hand-eye calibration is a must, as it significantly increases the accuracy of an application and surpasses extrinsic calibration in a number of respects.

Hand-eye calibration is the preferred option in case a customer has a large bin and a smaller scanner. This combination does not favour extrinsic calibration, as a small scanning range will not cover the required volume of the application. A small scanner mounted on the robotic arm, on the other hand, can approach the bin from a close distance and scan a specific part of the container. This approach is also very useful for scanning parts that are placed close to the bin walls, which may cast shadows. Making scans from the right angles eliminates the risk of not recognizing parts that are in the shade. The hand-eye approach is also a more effective solution in case the robotic cell consists of two bins. Extrinsic calibration would require a vision system for each bin. If the scanner is mounted on the robotic arm, it can cover a larger area, as it can move from one bin to the other and make scans from appropriate distances and angles. Another limitation of extrinsic calibration is that a scanner mounted above the bin in a fixed manner may cast shadows on the container and hide some parts. In that case, a compromise needs to be made in finding an optimal position for the scanner in relation to the bin, and sometimes the parts need to be manually rearranged. This problem will not occur if the scanner moves with the robotic arm and approaches the parts from various perspectives.

Hand-eye calibration also eliminates the need to make special adjustments to the environment of the robotic cell. Too much ambient light coming from a window may require darkening the room, but scanning one side of the bin first and then the other minimizes this need. While the hand-eye approach offers a number of advantages over extrinsic calibration, one should bear in mind that mounting a vision system on the robotic arm may limit the robot in its movements. It is therefore advisable to select a smaller scanner that will not restrict the robot’s ability to manoeuvre. Taking Photoneo PhoXi 3D Scanners as an example, models XS, S, M, and L are optimal choices for hand-eye picking, featuring a body length from 296 mm (XS) to 616 mm (L) and covering an overall scanning range from 16 cm up to 2 m. MV
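As a rough illustration of what hand-eye calibration buys, here is a generic sketch (not Photoneo software; all poses and names are hypothetical): once the camera-to-gripper transform has been estimated, a part detected in camera coordinates can be chained through the current robot pose into robot-base coordinates, wherever the arm has moved the scanner:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand-eye calibration result: camera pose in the gripper frame,
# normally estimated once by solving AX = XB from several robot/target poses.
T_gripper_cam = make_T(np.eye(3), [0.0, 0.0, 0.10])  # camera 10 cm past the flange

# Current robot pose: gripper in the robot-base frame (from the robot controller).
T_base_gripper = make_T(np.eye(3), [0.5, 0.0, 0.8])

# A part detected by the scanner, in camera coordinates (metres, homogeneous).
p_cam = np.array([0.02, -0.01, 0.30, 1.0])

# Chain the transforms: base <- gripper <- camera.
p_base = T_base_gripper @ T_gripper_cam @ p_cam
print(p_base[:3])  # pick target in robot-base coordinates
```

The same chain works for any arm pose, which is why one arm-mounted scanner can serve two bins: only `T_base_gripper` changes between scans, while the calibrated `T_gripper_cam` stays fixed.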



INDUSTRY NEWS

PATSNAP: INNOVATION INTELLIGENCE COMPANY SECURES $300 MILLION FUNDING

PatSnap, a pan European-Asian global leader in Innovation Intelligence, announced it had secured $300 million in Series E funding this week. The investment round was led by SoftBank Vision Fund 2 and Tencent Investment with participation from CPE Industrial Fund and existing investors Sequoia China, Shun Wei Capital, and Vertex Ventures. PatSnap’s flagship R&D Intelligence and IP Intelligence platforms provide machine learning (ML), computer vision, natural language processing (NLP), and other artificial intelligence (AI) technology to innovation teams at many of the world’s largest institutes as they take their products from ideation to commercialization. PatSnap plans to use the funds to further advance its innovation intelligence platform, accelerate product development, and acquire additional domain expertise in the industry sectors where its technology is used by research and development (R&D) and intellectual property (IP) teams. The funds will also enable PatSnap to expand its sales presence around the world and invest in the growth and professional development of its employees to ensure the company is well-positioned to address the complex needs of its customers. “PatSnap’s mission is to empower innovators to make the world a better place”, said Jeffrey Tiong, founder and CEO of PatSnap. “Our global footprint, leadership, and strategic position in the innovation economy have enabled us to attract top investors, customers, and talent. Adding Softbank Vision Fund 2 and Tencent to our notable


roster of investors will help solidify PatSnap as the industry standard for innovation intelligence. Both have deep investment expertise with AI-led companies and proven track records supporting sustainable company growth”. PatSnap says companies around the world are under pressure to increase the pace of innovation. Whilst more money is spent on R&D every year – $2.4 trillion in 2021, according to R&D World – the returns are dwindling. An article published in HBR also noted a 65% drop in R&D productivity. PatSnap offers a remedy to this issue via AI-powered technology that analyzes and connects the key relationships between millions of unstructured data points across disparate data sources to deliver insights that guide R&D decisions and help accelerate the time it takes to bring innovations to market. “We believe AI is radically changing industries, and PatSnap is a technology leader using AI to enable companies to innovate faster, using IP data and R&D analytics”, said Eric Chen, Managing Partner, SoftBank Investment Advisers. “We are pleased to partner with Jeffrey and the PatSnap team to support their mission of helping innovators make faster, more informed decisions through connected innovation intelligence”. PatSnap has more than 10,000 customers around the world, with headquarters in Singapore, London and Toronto. Customers include Dyson, Spotify, Oxford University Innovation, and The Dow Chemical Company. MV



AI, DISCRIMINATION & DIVERSITY: EU PROPOSES REGULATIONS

THE FUTURE DEPENDS ON OPTICS

Meeting to vote on the future of AI in Europe, the MEPs of the Culture and Education Committee decided on three major points, which were adopted by 25 votes in favour, none against and 4 abstentions.

The first point is to reduce gender, social or cultural bias in AI technologies. The use of AI technologies in education, culture and the audiovisual sector could have an impact on “the backbone of fundamental rights and values of our society”, says the Culture and Education Committee. It calls for all AI technologies to be regulated and trained to protect non-discrimination, gender equality, pluralism, as well as cultural and linguistic diversity.

The second key point is to regulate media algorithms to protect diversity. To prevent algorithm-based content recommendations, especially in video and music streaming services, from negatively affecting the EU’s cultural and linguistic diversity, MEPs ask for specific indicators to be developed to measure diversity and ensure that European works are being promoted. The aim is to establish a clear ethical framework for how AI technologies are used in EU media to ensure people have access to culturally and linguistically diverse content. Such a framework should also address the misuse of AI to disseminate fake news and disinformation, they add.

Teaching EU values to Artificial Intelligence is the third point. The use of biased data that reflect already existing gender inequality or discrimination should be prevented when training AI, the MEPs urge. Instead, inclusive and ethical data sets must be developed, with the help of stakeholders and civil society, to be used during the “deep learning” process. The MEPs stress that teachers must always be able to correct decisions taken by AI, such as students’ final evaluations. At the same time, they highlight the need to train teachers and warn that they must never be replaced by AI technologies, especially in early childhood education.
“We have fought for decades to establish our values of inclusion, non-discrimination, multilingualism and cultural diversity, which our citizens see as an essential part of European identity”, said rapporteur Sabine Verheyen (EPP, DE) after the vote. “These values also need to be reflected in the online world, where algorithms and AI applications are being used more and more. Developing quality and inclusive data systems for use in deep learning is vital, as is a clear ethical framework to ensure access to culturally and linguistically diverse content”. To make the three proposed regulations official, the full House will vote on the resolution in April (TBC). The Commission is also expected to propose a legislative framework for trustworthy AI in April 2021, as a follow-up to its white paper on AI. MV


NEW

Cw Series Fixed Focal Length Lenses

EO’s new Cw Series Fixed Focal Length Lenses are waterproof and ingress protected against moisture and debris, making them ideal for applications including food, pharma, automotive, and security.

• Meet IEC ingress protection ratings of IPX7 / IPX9K
• Resist high pressure & temperature water sprays
• Hydrophobic coated window protects front lens element
• Withstand exposure to water (30 minutes, 1 m depth)

Find out more at:

www.edmundoptics.eu/Cwseries

UK: +44 (0) 1904 788600 GERMANY: +49 (0) 6131 5700-0 FRANCE: +33 (0) 820 207 555 sales@edmundoptics.eu


INDUSTRY NEWS

ENGINEERING AND MANUFACTURING INDUSTRY “WORST AFFECTED” BY SKILLS SHORTAGE

Source: Search, Skill Shortage Report 2021

A report by Search has found that the engineering and manufacturing industry has been most affected, with 85% of senior managers explaining that their business is struggling. The reality of the issue presented by the report is emphasized by a 2019 statement from the British Chambers of Commerce, which said the manufacturing industry was facing the biggest skills shortage in 30 years. Richard Vickers, CEO of Search, said, “Three-quarters of businesses are impacted by skill shortages – an issue that is costing UK businesses £6.3 billion per year in temporary staff and training for workers who are not as experienced as required. The skills gap isn’t a problem that is going away without substantial effort and it is certainly not one we can ignore”. Search found that the most in-demand job by title is nurses, with COVID-19 being a significant contributing factor, closely followed by IT managers and engineers. Engineers in various disciplines including electronics, M&E and civils are featured on the official UK Government “Skilled Worker Shortage” database. Both the Engineering and Manufacturing and Scientific sectors identified being proactive, possessing a good attitude, resilience, work ethic and emotional intelligence as the most important skills lacking in their potential employees.

Search says the impact of the skills shortage doesn’t just disrupt internal processes in the short term but can have a long-term effect on relationships with clients and customers. 28% of managers surveyed admitted to poor quality of work being produced, and a further 26% were unable to fulfil work commitments to clients and customers. Search found that in science-based businesses, more than 50% of staff are working longer hours. The result of poor or unfulfilled service is detrimental to all sectors, but none more so than the Engineering and Manufacturing and Scientific sectors, where timing and consistency are essential to work.

Managing Director of Driving, Engineering & Manufacturing, Hospitality and Industrial, Richard Westhead sees the issue as particularly important for the UK’s manufacturing industry. He said, “The food & drink sector is the UK’s largest manufacturing industry and the demand for staff currently exceeds supply. The significant growth and continual focus on new products in this sector also mean there is an increase in food science and new product development roles, which are amongst the most difficult to recruit. Engineering roles still remain the key skill shortage within the industry”.

There is hope, as organisations across the UK implement measures to close the skills shortage gap. One-third of businesses say they have introduced an increasing amount of internal training for staff to elevate their skillsets. A further one in five has also invested in external training. The science industry is attempting to remedy the situation, as one-third say they have invested in new technology to help fill the skills shortage. The technologies include the use of AI and automation, as businesses explore software that can help streamline processes and reduce the need for physical intervention from humans. Companies like Teledyne e2v and Yumain are collaborating on new AI-based imaging solutions, whilst others like Herga Technology continue to develop footswitch receivers that emulate PC keystrokes and mouse clicks. It is these advancements that may, as Westhead says, “show people that a career in engineering & manufacturing is engaging, innovative and provides candidates with limitless potential to grow”. MV

mvpromedia.com


INDUSTRY NEWS

THE IMAGING & 3D PRINTING BEHIND THE WORLD’S FIRST FACE AND DOUBLE HAND TRANSPLANT

After a car accident in 2018, Joe DiMeo suffered third-degree burns over 80% of his body, severely limiting his ability to lead a normal life. Last month, after 23 hours of surgery, the world’s first face and double hand transplant was completed. The successful surgery was led by Dr Eduardo D. Rodriguez and an operating room team of 80 in NYU Langone’s Kimmel Pavilion. It involved six surgical teams - one for each hand and another for the face of both the donor and DiMeo. And it wouldn’t have been possible without the use of imaging and 3D printing technologies.

Although not present in the room, medical imaging and 3D printing were essential to the procedure. Materialise, the Belgian software solutions and 3D printing services company, coordinated the development of a surgical plan and created an onscreen 3D model based on CT scans. The 3D model allowed the surgeons and clinical engineers to virtually plan the procedure and visualize different scenarios in three dimensions, creating an in-depth understanding of the anatomical bone structure and determining the optimal surgical flow. Pre-surgical planning also made it possible for surgeons to virtually select and position various medical implants to predict the optimal anatomical fit. Once the surgical plan was finalized, Materialise 3D printed the personalized surgical guides, anatomical models and tools for use during the transplant surgery. During this momentous procedure, Rodriguez and his surgical team of sixteen used Materialise’s 3D printed cutting and drilling guides. The fully guided system for bone fragment repositioning and fixation was unique to the patient’s anatomy and helped position the medical tools with great precision, reducing the overall surgery time.

Additionally, Materialise created 3D printed sterilizable identification tags for nerves and blood vessels, 3D printed models used during donor transport, and 3D printed splints that enabled optimal donor hand positioning during soft tissue reconstruction. “Complex transplant surgery like this brings together a large team of specialists and presents new and unique challenges”, said Dr Rodriguez. “This demands careful planning and makes timing, efficiency and accuracy absolutely critical. Virtually planning the surgery in 3D and creating 3D printed, patient-specific tools offer additional insights in the pre-operative phase and increased levels of speed and accuracy during a time-critical surgery”.


Materialise has pioneered many leading medical applications of 3D printing and enables researchers, engineers, and clinicians to develop innovative, personalized treatments that help improve and save lives. The Materialise platform of software and services forms the foundation of certified medical 3D printing in clinical and research environments, offering virtual planning software tools, 3D-printed anatomical models, and personalized surgical guides and implants. “Image-based planning and medical 3D printing have completely revolutionized personalized patient care by providing surgeons with detailed insights and an additional level of confidence before entering the operating room,” says Bryan Crutchfield, Vice President and General Manager – North America. “As a result, leading hospitals are adopting 3D planning and printing services as part of their medical practices because they create a level of predictability that would be impossible to achieve without the use of 3D technologies.” Materialise recently announced it had added new technology to support left atrial appendage occlusion (LAAO) procedures to its Mimics Enlight cardiovascular planning software suite, making it possible to leverage the Mimics Enlight 3D planning technology for this procedure to mitigate risk and improve efficiency. MV




LARGEST EVENT-BASED VISION SOFTWARE LIBRARY RELEASED

Prophesee recently announced the release of OpenEB, a set of key open-source software modules, along with new Event-Based Machine Learning solutions. The new products are aimed at optimizing ML training and inference for event-based applications, including optical flow and object detection. The company is also offering the industry’s largest HD Event-Based dataset to developers as a free download. This latest release of the company’s Metavision® Intelligence Suite includes an expanded set of development tools and software for designing industrial vision systems that leverage the performance and efficiency of Event-Based Vision. The suite now includes close to 100 algorithms, 67 code samples and 11 use-case-specific application modules that accelerate the development process. The open-source modules of OpenEB are available through GitHub and allow designers to build custom plugins and ensure compatibility with the Metavision Intelligence Suite for developing event-based systems. It also provides a platform for developers to share software components across what they call the “machine vision ecosystem”. “We want to set an open technology standard in the machine vision ecosystem that enables new levels of accessibility and interoperability”, said Luca Verre, CEO and co-founder of Prophesee. “As the leader and technology pioneer in event-based vision systems, our role is to help proliferate its use and make critical development aids, data and tools more readily available to product developers. “Our approach provides the growing ecosystem around event-based technology with a rich, open foundation and a strong development framework. This includes extensive and reliable data that we have collected over several years, as well as application modules that leverage our expertise in a variety of specific uses to accelerate the development of customer-specific systems”.


The Metavision Intelligence Suite adds new applications for processes that can be enhanced with Event-Based Vision. These include “Particle Size Monitoring”, which counts and measures objects passing through a field of view at high speeds (up to 500,000 pixels/second) with up to 99.9% counting precision on a production line. There’s also “Jet Monitoring”, which monitors the speed and quality of liquid dispensing in real-time at up to 500Hz and automatically generates an alarm when errors occur on the dispenser. Finally, there’s “Edgelet Tracking”, which can achieve real-time tracking of 3D objects with low compute power by leveraging the low data rate and sparse information provided by event-based sensors. Beginning with access to the real-sequence dataset Prophesee has created over the past four years, developers can use a variety of tools to guide the development of neural network models and run inference on event-based data - both supervised training for object detection and self-supervised training for optical flow - all optimized for event-based vision. In addition, developers can create their own models, or leverage their existing frame-based datasets and models using the provided event-based simulator, and improve them with Event-Based Vision. OpenEB offers a standard Event-Based data format for camera makers and their customers, whilst the open-source model for the Metavision Intelligence Suite enables compatibility across the ecosystem of camera makers and their customers. The company says the release of key modules under the Open Source License will accelerate the creation of custom plugins whilst ensuring compatibility with the underlying hardware from camera manufacturers. Prophesee is based in Paris, with local offices in Grenoble, Shanghai, Tokyo and Silicon Valley.
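The appeal of event-based sensing is that each pixel reports only changes, as a sparse stream of (x, y, polarity, timestamp) tuples rather than full frames. As a rough illustration of that data model - a toy sketch in plain Python, not the OpenEB/Metavision API, with hypothetical event values - an application like particle counting reduces to detecting bursts of events on a virtual line:

```python
from collections import namedtuple

# Toy event record: pixel coordinates, polarity (1 = brighter, 0 = darker)
# and a timestamp in microseconds. Real sensors emit millions of these.
Event = namedtuple("Event", ["x", "y", "p", "t"])

def count_line_crossings(events, line_y, window_us=1000):
    """Count bursts of positive events on a virtual line; one burst is
    treated as one particle passing. Events are assumed time-ordered."""
    crossings = 0
    last_t = None
    for ev in events:
        if ev.y == line_y and ev.p == 1:
            if last_t is None or ev.t - last_t > window_us:
                crossings += 1  # gap since last burst => new particle
            last_t = ev.t
    return crossings

# Two particles crossing y=5, separated by ~5 ms, plus one unrelated event.
stream = [Event(3, 5, 1, 100), Event(4, 5, 1, 150),
          Event(9, 2, 1, 300),
          Event(3, 5, 1, 5200), Event(4, 5, 1, 5290)]
print(count_line_crossings(stream, line_y=5))  # -> 2
```

Because only the handful of pixels that change ever produce data, this kind of logic runs over a tiny fraction of the bandwidth a frame-based camera would need.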
The company is driven by a team of more than 100 engineers, holds more than 50 international patents and is backed by leading international investors including Sony, iBionext, 360 Capital Partners, Intel Capital, Robert Bosch Venture Capital, Supernova Invest, and European Investment Bank. MV




FRENCH COMPANY NAMED NO.1 FOR FACIAL RECOGNITION

IDEMIA has announced that its facial recognition algorithm, “1:N”, achieved the best accuracy in the latest Face Recognition Vendor Test (FRVT) from the National Institute of Standards and Technology (NIST). Among the 75 tested systems and 281 entrants in NIST’s latest FRVT, the algorithm came top. FRVT measures how well facial recognition systems work for civil, law enforcement and security applications, covering accuracy, speed, storage, and memory criteria. FRVT test results are acknowledged to be the gold standard of the global security industry.

NIST’s test results establish that IDEMIA has the best identification system on the market. Taking border control systems as an example, IDEMIA achieved the best accuracy score of 99.65% correct matches out of 1.6 million face images. One of the important aspects of AI-based automated facial recognition is to teach its algorithms not only to be accurate but also fast and optimized for fairness. IDEMIA’s facial recognition solutions work with or without masks, offer the best trade-off between speed, accuracy and demographic parity, and are able to process face profile images. FRVT evaluated IDEMIA’s core facial recognition algorithms, which underlie all its systems addressing access control, public security and border control needs.

IDEMIA’s Chief Technology Officer Jean-Christophe Fondeur said: “IDEMIA has always advocated for the responsible and ethical use and development of biometric technologies. The test results confirm IDEMIA’s long-standing expertise in facial recognition AI-based research and how advanced our technology is. We strive to demonstrate our leadership by regularly taking part in NIST tests. It’s very important to have government agencies check how our algorithms measure up against other algorithms based on large data volumes. We’re thrilled that our results consistently come out at the very top”.

The company also announced that it is collaborating with Microsoft to support its new Microsoft Azure Active Directory (Azure AD) verifiable credentials identity solution, now in public preview. Azure AD verifiable credentials enable organizations to confirm information about individuals, such as their education, professional or citizenship certifications, without collecting or storing their personal data. Verifiable credentials aim to replace hard-copy identity credentials such as physical badges, loyalty cards, and government-issued paper documents. The companies believe this digital representation of identity or credential allows individuals to take full ownership of that personal information, which is stored in a digital wallet and accessed with a mobile device. Within verifiable credentials, IDEMIA’s identity verification tools match the data against the system of record to provide authoritative proof of identity for an individual. Because the digital information is verified by IDEMIA, the individual is protected by layers of security, and verification will take minutes instead of days or weeks. IDEMIA is launching the service in the U.S. with plans to expand to additional markets.

“Mobile ID solutions such as Verifiable Credentials are the single most important security innovation since locking your front door”, said Matt Thompson, Senior Vice President, Civil Identity, at IDEMIA North America. “This technology that Microsoft is now supporting in Azure Active Directory increases access to personal information while improving the security of their identity against theft, making it easier and faster for governments and organizations to verify identities and credentials”. IDEMIA is a global leader in Augmented Identity, which the company says provides a trusted environment enabling citizens and consumers to perform their daily critical activities (such as pay, connect and travel), in the physical as well as digital space. The company provides Augmented Identity for international clients in the Financial, Telecom, Identity, Public Security and IoT sectors, with close to 15,000 employees around the world servicing clients in 180 countries. MV
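The “1:N” in the algorithm’s name refers to identification rather than 1:1 verification: a probe face is compared against every enrolled template in a gallery, and a match is only reported if the closest candidate clears a threshold. A minimal, purely illustrative sketch - toy vectors and a hypothetical `identify` helper, nothing from IDEMIA’s actual system:

```python
import math

def l2(a, b):
    """Euclidean distance between two face-template vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.5):
    """1:N identification: compare the probe against every enrolled template
    and return the closest identity, or None if no match is close enough."""
    best_id = min(gallery, key=lambda ident: l2(probe, gallery[ident]))
    return best_id if l2(probe, gallery[best_id]) <= threshold else None

# Toy 3-D "templates" standing in for real face embeddings.
gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}
print(identify([0.85, 0.15, 0.05], gallery))  # near alice -> "alice"
print(identify([3.0, 3.0, 3.0], gallery))     # impostor  -> None
```

The threshold is what FRVT-style benchmarks probe: set it too loose and impostors are accepted; too strict and genuine faces are missed.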




EUROPEAN SPACEX RIVAL PARTNERS WITH US DISTRIBUTOR

The day after its successful launch, OneWeb, the UK global communications network powered from Space, announced a Memorandum of Understanding with the US DoD satellite communications application specialist TrustComm Inc. The partnership brings the mission to deliver high-speed, low-latency global connectivity closer. The agreement, signed on 16 March, envisions OneWeb and TrustComm working together to deliver OneWeb’s high-speed, low-latency, beyond-line-of-sight communications services, with an initial focus on the northern latitudes. The partnership will enable Low Earth Orbit (LEO) satellites to deliver connectivity to government customers, bringing unprecedented opportunity to end-users. Supported by a global network of gateways and air, maritime and land user terminals, OneWeb’s Global Connectivity Platform will provide secure, high-bandwidth and low-latency data and internet connectivity to government customers across the globe. Initial services are expected to be available starting in the fourth quarter of 2021. OneWeb’s Head of Government Services, Dylan Browne, said: “The US DoD is OneWeb’s largest single customer and so we will ensure we have the tools and vehicles in place to contract for service this November when our network goes live above the 50th parallel. I’m delighted we can now count on the support of the TrustComm team, who are experts in satellite and terrestrial managed network services for DoD customers”.

TrustComm specializes in combining satellite and terrestrial communication systems into fully interoperable networks, providing customers with best-fit and customized end-to-end connectivity solutions in Ku, Ka, L, C and X-band frequency ranges. TrustComm operates a Teleport and Secure Managed Services Operations Center at Ellington Field Joint Reserve Base in Houston, Texas, and holds several DoD contract vehicles to provide managed satellite services. OneWeb’s partnership with TrustComm will focus on early adopters looking to take advantage of LEO technology, including the US Naval Research Lab, US Army Futures Research Lab and others. Solutions will be deployed initially into areas of operation including the Arctic, which continues to suffer from poor connectivity due to its high latitude and extreme terrain. OneWeb and TrustComm also expect to support the biennial Ice Exercise (ICEX) in 2022. TrustComm’s Chief Executive Officer, Bob Roe, said: “We are truly excited by the potential and advantages that OneWeb’s LEO system brings to the US and other government users on a global scale. With more OneWeb satellites being deployed monthly and the ground/service infrastructure coming online, we will be able to bring this capability to market quickly using our existing US Government contracts, especially CS3 and GSA. OneWeb’s unique architecture and focus on scalable solutions supported by clear SLAs make it a perfect fit for the TrustComm portfolio”. OneWeb is co-owned by a consortium of investors led by the UK Government and the global telecommunications provider Bharti Global Limited. Its mission is to provide a high-speed, low-latency network, supported through cloud computing and on-demand digital resources, that the company believes will offer mobility solutions to industries that rely on global connectivity, such as the aviation, maritime, automotive and train industries. MV




$220M FOR “WORLD-LEADER” OF SELF-DRIVING TRUCKS

The Series B funding was led by the private equity team of Guotai Junan International Holdings Limited (GTJAI), who joined with investors such as CPE and Hedosophia. The recipient of the major financial backing was Plus.ai, which GTJAI calls the “world-leader” in self-driving truck technology. The aim of the funding? To jointly support Plus in realising the mass production, deployment and global commercialisation of a new generation of highly automated, self-driving heavy trucks. At present, Plus has established in-depth strategic partnerships with a number of heavy-truck OEMs and logistics fleets. GTJAI says the injection of new capital will further assist the global commercialisation of Plus and promote the application of mass-produced, self-driving heavy trucks. In China, Plus has assisted FAW Jiefang, a leading commercial vehicle company, to launch a high-level self-driving heavy truck called the J7 L3, which will be mass-produced and launched in mid-2021. At the same time, Plus and SF Express, the Chinese logistics giant, have achieved normalised commercial trial operations. In the United States, Plus will also launch mass-produced automated driving products in 2021 to serve leading logistics customers.

Plus is an international technology company with Level 4 R&D capabilities - the level at which driving automation systems can perform all dynamic driving tasks within their designed operating conditions, including taking over if the human passenger does not respond. Focusing on the R&D and application of self-driving heavy trucks in expressway transportation, Plus was founded in 2016 in Silicon Valley, USA, and has R&D centres in California, Beijing, Shanghai and Suzhou. It says it is committed to “improving road safety, reducing fuel consumption, increasing fleet efficiency, and transforming the logistics and transportation industry”. Peter Chiu, Head of Private Equity Investment and Managing Director, said: “The world’s freight industry has a huge potential market. Self-driving truck technology can solve many pain points in the heavy truck industry by reducing manual control, reducing bad driving habits, achieving fuel saving and cost reduction and improving safety, which can help environmental protection and sustainable development. GTJAI values the huge truck freight market, Plus’s global team, excellent technology and in-depth cooperation with global strategic partners, and hopes that through the cooperation with Plus, it will support the accelerated implementation of truck-assisted driving in the truck logistics scene and the automated driving industry”.

In recent years, automated driving has become an important breakthrough in the transformation of the automotive industry. For GTJAI, it is also key to the technological transformation of the smart logistics industry; the company says that whilst supporting innovative businesses, it also “lays the foundation for providing its wealth management clients with a more high-quality product portfolio in the future”. With an A+ round of financing in 2018 and past shareholders that include Sequoia Capital, GoldenSand Capital, China Growth Capital, Lightspeed Capital, Mayfield and SAI, the commercial investment possibilities for Plus could be huge - and not only for them. Plus’ rival TuSimple filed for a U.S. IPO recently, whilst other self-driving companies such as Velodyne, Luminar Technologies and Aeva have also recently filed to go public, banking on a boom in U.S. capital markets. A European pilot begins in 2021. Guotai Junan International is a market leader and the first Chinese securities broker listed on the Main Board of The Hong Kong Stock Exchange by way of an initial public offering. Based in Hong Kong, the company provides “diversified integrated financial services”, with core services including wealth management, corporate finance, loans and financing, asset management and financial products. MV



MOBIUS LABS You can’t talk about current or future technologies without AI getting mentioned. It has earned its title as one of the most important and varied technologies to emerge in recent years for just about every industry. Fortunately, MVPro’s writer, Joel Davies, sat down with someone who knows a thing or two about AI, particularly the picture-finding kind: Dr Appu Shaji of Mobius Labs.

YOU’VE BEEN SUCCESSFUL OVER THE PAST 10 YEARS, HAVING CO-FOUNDED CROPPOLA AND FOUNDED SIGHT.IO, WHICH BECAME EYEEM, FOR WHOM YOU WERE HEAD OF R&D FOR 4 YEARS. NOW MOBIUS LABS IS 3 YEARS OLD, HOW HAS IT BEEN GOING AND WHAT DOES BEING CO-FOUNDER, CEO AND CHIEF SCIENTIST INVOLVE? Thank you very much. It has been a very interesting last three years, to say the least. There have been a lot of accelerating moments and some quite depressing moments, and we have successfully navigated through them. I’ve been in this field for the last 20 years and right now I feel the most ambitious and most excited. That’s due to two reasons: technology is maturing at a very rapid rate, and the market relevance of that technology being used in a commercial sense is also increasing at a rapid rate. So in the next three to four years, you will see a lot of computer vision applications popping up around the world. Being part of the journey is really exciting.

COULD YOU TELL US ABOUT WHAT MOBIUS LABS IS AND WHAT YOU’VE BEEN UP TO? So Mobius Labs’ mission is to enable enterprise companies around the world to deploy computer vision to power applications like visual search, visual analytics and visual recommendation systems at scale, without having to spend a lot of time setting up this very


complex piece of software. Usually, this requires a set of expertise and you need to have a lot of data. Instead, what we provide to our customers is an SDK (Software Development Kit) with a no-code interface on top of it, which allows non-technical teams to start using computer vision for their projects with very minimal effort. So what we’re trying to do is take some of the latest and most sophisticated computer vision research and productionise it as easy-to-use software.

So Mobius Labs’ mission is to enable enterprise companies around the world to deploy computer vision to power applications like visual search, visual analytics and visual recommendation systems at scale, without having to spend a lot of time setting up this very complex piece of software

YOU CO-FOUNDED MOBIUS LABS WITH HICHAM BADRI AND ALEKSANDR MOVCHAN, WHO YOU PREVIOUSLY WORKED WITH AT EYEEM. WAS ACHIEVING THE RIGHT CULTURE FOR THE COMPANY IMPORTANT TO YOU? Culture is everything. One more name I will mention there is Dominic (Rüfenacht). I had hired a very



great team while I was at EyeEm - some of the most talented computer scientists around the world. Then I got a chance to start Mobius and I brought them along on the journey, and that’s how it started. But over the last three years we added more people, so now the team is around twenty-five, twenty-six. And that all goes back to culture. Culture is one of the most significant things that you need to set up when you are starting an organisation, because it drives the day-to-day activity. By working on the culture you create an environment where people can do their best work. Now I get up early in the morning passionate and excited to work with the team that we have at Mobius Labs. It’s really gratifying.

It’s always people. There are many tools, but ultimately the goal should be how to communicate between people

HAS WORKING WITH THAT CULTURE MADE THIS COVID PERIOD OF TIME EASIER? It was a strange one for everyone in the world, to say the least. I thought we would be coming back in three to four weeks; it has now been a year of working remotely. We never had a remote culture, either. We were meeting everyone in person and a lot of around-the-water-cooler conversations were happening. Suddenly, when the first lockdown came, we realised we needed to embrace the digital way of working. So: document routinely, interact with people to keep the spirit of day-to-day interactions in your digital channels, and communicate more aggressively than in real life. Again, it’s always people. There are many tools, but ultimately the goal should be how to communicate between people. For that, you have to build up a lot of transparency and trust to work without silos. The fact that we could weather the storm is a testament to the great people who are working alongside me.


YOU’VE WORKED IN PICTURE-FINDING AI FOR OVER 10 YEARS. WHERE’S THE TECHNOLOGY GOING AND WHY IS IT EXCITING? I started working on image classification problems 15, 20 years back at university. These technologies were good for writing papers, but nothing practical could be done with them. That started changing roughly seven to eight years back, when the technology became powerful enough for some commercial applications. Still, it was data scientists or engineers that trained the systems and put them into production, so the ability to train the models was limited to a small set of people. Yet the true value of technology is never the technology per se, but what the business owners and product managers who use it can do with it. For technology to be truly accessible to them, it should be trainable by non-experts. This is what we have worked on. The algorithms can now be trained by people who are not necessarily experts in computer science. The next generation of algorithms is about self-supervised and unsupervised learning - machine learning algorithms that learn just by looking at data. Not with labels, or by being told an image is of a cat or a dog, but by making their own judgements from looking at a lot of data. That will be the status quo in the next three to four years. We want to play a big part in taking the algorithms to that level.

YOU HAVE A VISUAL SEARCH ENGINE CALLED SUPERHUMAN SEARCH™. SOME MIGHT WONDER, “WHY NOT GOOGLE IT?” So, how Google Images or any popular search engine works is that it never looks directly at the image or video content. It looks at the metadata attached to it. This metadata can be the text surrounding that image on the website, or sometimes it’s manually entered. If you think about applications like visual



Imagine the world in the next three to four years: you’ll find a lot of cameras, but more importantly, you will have intelligence sitting in these cameras - it will actually end up making sense of this data at scale

search or recommendation or analytics, the data itself is visual. So what we’re doing is building algorithms that look at visuals as first-class citizens, understanding what’s inside the data and powering the search experience. This will cut down costs for a lot of companies and make search much more accurate and much more proficient. This is especially true for the large and small enterprise clients we licence the technology to, who don’t necessarily have the bandwidth of Google, or don’t want to waste time translating this metadata. Our technology doesn’t require associated metadata. It just looks at the clip, understands what is inside and creates a search experience that is truly remarkable. Sometimes when we see the results, I realise I would never have imagined this kind of technology would be possible for years.
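The contrast with metadata-based search can be made concrete. In content-based visual search, each image is mapped to an embedding vector by a vision model, and queries are answered by nearest-neighbour ranking over those vectors; no surrounding text is consulted. A toy sketch with hand-made 3-dimensional vectors standing in for real embeddings (the embedding model itself, and all file names and values, are hypothetical):

```python
import math

# Toy index: image id -> embedding vector. In a real system these would be
# produced by a trained vision model; here they are hand-made stand-ins.
INDEX = {
    "beach.jpg":  [0.9, 0.1, 0.1],
    "sunset.jpg": [0.8, 0.3, 0.1],
    "office.jpg": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, index, k=2):
    """Rank images by similarity to the query embedding; no metadata or
    surrounding text is consulted at any point."""
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:k]

print(search([0.85, 0.2, 0.1], INDEX))  # -> ['beach.jpg', 'sunset.jpg']
```

A query can itself be an image (embed it and search), which is why no tags or captions ever need to be written.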

MOBIUS CLAIMS IT IS “A NEW GENERATION OF AI-POWERED COMPUTER VISION POISED TO DISRUPT EVERY MARKET”. WHERE IS YOUR TECH BEING APPLIED AND WHAT DEVELOPMENTS ARE YOU MOST EXCITED ABOUT? So there were a few markets that we decided to concentrate on from the inception of the company.


One was the media market, which is a supply and demand business. The way these people can build a successful business or product is by matching supply and demand effectively. Another that we recently started working in is the space sector, which is exciting because we build algorithms that are super-efficient. This means they can be deployed on low-footprint devices like mobile phones or Raspberry Pis, or now even satellites. Images from satellites are essentially being used by end-users like insurance companies, the government or mapping companies. What people are interested in is not the images themselves, but the analytics inside them. Computer vision comes in and can figure out instantly if there is something like a wildfire happening at a particular location. This is giving real value to end-user clients. If you imagine the world in the next three to four years, you’ll find a lot of cameras, but more importantly, you will have intelligence sitting in these cameras - it will actually end up making sense of this data at scale. Similar to what happened with the mobile camera explosion, you will find computer vision technology exploding in scope.

ARE THERE ANY APPLICATIONS YOU’RE INVOLVED IN MORE RELATED TO MACHINE VISION? Off the top of my head, I remember there was a company that wanted to find out if there were cracks in the turbines of their windmills. And they asked, “can we put algorithms directly in the drones, and can the drones actually go and inspect to find out if there are cracks?” Even four or five years ago we thought this was not possible with machine learning systems. Suddenly, we’re able to address these problems with high accuracy, but

mvpromedia.com


also with a very low memory and processing footprint. So we can actually embed this software in tiny devices and get due diligence on the fly. [Joel] It sounds absolutely wonderful when you look at the scope of AI technology. It covers everything that we could possibly ever need to cover, because vision is such a fundamental part of our human existence, right? We’re seeing things every second and we’re making judgements based on what we see every second. Now we have what we call “outsourced” visual eyes and visual brains that we’re able to develop. Our slogan is “Superhuman Vision™ for every application”. As a child, I used to read a lot of science fiction. So you read a book by Isaac Asimov and you see that robots are just going around doing all kinds of tasks as second nature. What not many people appreciate is that these robots have full-fledged vision capabilities. They are looking at and perceiving the world and doing things. This is how the world will evolve. You’ll have machine vision in robots, machine vision in drones, machine vision everywhere.

SO THERE’S A LOT OF CONCERN THAT COMES WITH AI ABOUT DATA PRIVACY. HOW DO YOU PROTECT YOUR DATA FOR YOUR CLIENTS AND YOURSELVES? That’s something that we pride ourselves on and make a promise to our clients that we never see our clients’ data. We distribute our software on an on-premises basis where once these models are deployed, we never see any of the data that it is crunching. So privacy is inbuilt into our sort of product. That’s something that we hold dear at this moment.


MOBIUS LABS AIMS TO BE ONE OF THE LEADING COMPANIES TAKING COMPUTER VISION AND A.I. TO "ANOTHER LEVEL". WHAT DO YOU DO DIFFERENTLY OR BETTER THAN OTHER A.I.-BASED TECHNOLOGY COMPANIES? There are three things you have to be really good at when you start any venture. One is the core innovation side: you have to keep inventing the future, and we decided to work on two primary things. The first is making systems trainable by end-users who are not necessarily technical, by working heavily on weakly-supervised, self-supervised and unsupervised learning. The second is making systems that have very low processing needs, are of very small size, and don't consume too much compute.

The second pillar the company works on is productisation. You can build the best technology possible, but if you don't have the proper channels to take it in front of end-users, it's not useful. That's why we work very hard on our SDKs, with AI built in that allows people to use the product easily. Last but not least, you are a business, so you have to commercialise and find the value proposition that is



giving really strong advantages to end-user clients. These are the three pillars around which we structure our company, and each one feeds the others. The core value to remember in any business is that it's the people who do the work. So you need a team that understands the 360-degree view and collaboratively exchanges ideas to move towards this goal of putting superhuman vision in every application.

A RECENT FORBES ARTICLE CLAIMED THAT "A WAVE OF BILLION-DOLLAR COMPUTER VISION START-UPS IS COMING". DO YOU THINK MOBIUS LABS HAVE THE POTENTIAL TO BE ONE? Yeah, absolutely. That's what makes me wake up every day. I'm pretty sure that if you remain passionate and execute, you will get there. Mobius will be one of the top computer vision providers in the world. More important is the value that we can unlock. When we provide for a press agency, we're helping put the most impactful photograph on the front page of various newspapers; when we help a space company capture photos and do analysis, we're helping create amazing end-user value. That's why we're a kind of platform that becomes an amplifier, where we get to collaborate with our clients in actually building applications we never would have initially thought possible within Mobius. We just become the layer that allows other companies to express themselves and build products. It's fascinating.

SO WHAT'S NEXT AND WHAT DO YOU NEED TO GET THERE? As I mentioned, we're sticking to our three pillars. In particular, we're working hard on the no-code AI experiences that we have. For example, two or three weeks back, I gave that no-code AI to my nine-year-old daughter to train a computer vision classifier, and she was able to train an interesting one. It's really fascinating for me, in the sense that I trained my first classifiers in 2005, 15 or 16 years back, when I was doing my PhD. It took me eight or nine months and a lot of late nights to get them to work. Suddenly we have made the software, the technology, so easily accessible that everyone in the world can act as an advanced data scientist. The potential of that is something I'm looking forward to seeing.

SO JUST TO FINISH UP, IS THERE ANYTHING YOU WANT TO SAY TO THE READERS OF MVPRO? OR IS THERE ANYTHING THAT YOU WISH I'D ASKED YOU? Oh, that's a tricky question, there are a lot of things [laughs], but I will say that I've always been very interested in an open dialogue about what is possible when humans and machines interact. It's always a pity when a lot of these basic blueprints or rule books get written without that dialogue, so we'd be very curious to know your thoughts about that. Coming from the technology side, some of my most fascinating discussions happen when I talk with non-technologists, because it gives a completely different perspective. Ideas about that would also be really, really interesting for me to learn. MV






THE DEATH OF THE QUEUE &

THE RISE OF CONTACTLESS RETAIL

It’s 1921. To do your weekly shop; you visit the greengrocer, butcher, bakery, fishmonger, dry goods and general store. You wait as the attendant collects your items. A decade later the self-service, all-in-one supermarket arrives, bringing along the shopping trolley with it. Still, you wait as the attendant checks your items. More recently self-scan checkouts, click & collect and home delivery have become the norm. Quicker and easier, but the issue remains: you wait. A century or so later, the solution may have finally arrived. If the opening of Amazon’s 29th Go store on its second continent didn’t convince you, MVPro’s Writer, Joel Davies, has five companies that might change your mind.

require any special shelves or sensors like an Amazon Go system does. And Standard does not use any facial recognition or biometrics; rather, our cameras monitor the movement of shoppers’ hands as they traverse the store, picking items up and putting them in a bag or pocket or back on a shelf”, said Alex Plant, Vice President of Marketing at Standard.

STANDARD COGNITION AI

Focusing on the convenience store space, it has agreed with Circle K, AKA Couche-Tard, the Global

Checkout-free shopping, enabled by machine vision, has the potential to have a larger impact on retail than mobile tech or even the internet

Perhaps Amazon’s biggest competitor, if only for the sheer financial backing it’s had. The Silicon Valley startup recently announced a $150m funding round to become the first autonomous checkout “unicorn” – a start-up company valued at over $1 billion, so named for the near-mythical rarity of such successful ventures. With plans to open over 50,000 stores in the next five years, Standard’s solution is to retrofit its technology into existing store layouts. “We are camera-first – our AI-powered cameras are mounted on store ceilings and bring digital properties to a physical space – not just checkout, but also features like instant, accurate inventory snapshots. We don’t

26

mvpromedia.com


LIFVS It It would be incorrect to classify Lifvs as a brick-andmortar store like Amazon’s Go stores since its 27 stores are made entirely of wood and can be put on a flatbed truck and transported to where they’re needed. With this unique idea, the Swedish retailer focuses on rural locations rather than urban ones, filling the convenience store gap for small communities. It’s an approach with a set of challenges completely different to those installing the technology in urban areas. “To be able to sustain on a much smaller customer base we have to look at our cost of operations, said Daniel Lundh, founder and COO of Lifvs. “One is the staffing cost. So by levering the power of technology we can operate the store unmanned. Second, we use our

Fortune 500 company with 2,350 stores worldwide, to retrofit their first contactless store in Phoenix, Arizona. It has also acquired the Milan-based, contactless tech company, Checkout Technologies. The success of those endeavours will surely inform their future, which Plant is confident about. “There are so many amazing applications for machine vision – but retail may be one of the first areas where it impacts the average person’s daily life. Checkout-free shopping, enabled by machine vision, has the potential to have a larger impact on retail than mobile tech or even the internet. We think within two years, checkoutfree shopping will be fairly common and in five years, it will be ubiquitous.” If you’re San Francisco based and you want to experience Standard AI’s version of what autonomous retail is like, you can visit its 1,100 square foot convenience store, which is fully equipped with the company’s technology.

mvpromedia.com

platform and let data predict the order of goods and the supply chain. So by being an unmanned store everything is done in our app - open door, scan items, access your personalized coupons and check out”. Unlike other companies, Lifvs doesn’t retrofit existing stores. They own and operate the stores on their own, relying on the unstaffed aspect to work with the benefit that the store is open 24 hours a day. But they fill any lack of social interaction via the app.

27


his young child. Avoiding a long line at his favourite store, he settled for different milk elsewhere. The result wasn’t well-received, but he realised that there had to be a better way. Seven years on and they might have found it. “A typical lunchtime trip to a convenience or grocery store for a quick grab-and-go sandwich takes 4.5 minutes”, said Motilal Agrawal, Co-Founder and Chief Scientist of Zippin. “In a Zippin-powered convenience store, the average shopping trip is only 45 seconds. Shoppers get done in one-sixth of the time, but more importantly, they also avoid close contact with other people”.

“One key to our success is that through the app we can communicate with the customer at the point of purchase, even if we are an unmanned store. The customers scan chicken, we can then in real-time populate a curry that goes well with the chicken in the app, or even provides a recipe based on chicken and the purchase history of that customer. All this while the customer is in the physical store sizing up the ripeness of an avocado”, continued Lundh.

it ’s now a matter of ‘ when’ to deploy frictionless shopping technology, not ‘ if ’

The company’s flexible strategy has allowed it to grow rapidly in recent years and the company has received interest in using its platform in a variety of ways. “Looking ahead we will have Lifvs stores outside Sweden but also stores that are white label and our tech platform that will power other types of retailers as well,” finished Lifvs’ COO.

ZIPPIN The idea for San Francisco start-up Zippin was born when its co-founder and CEO went out to buy milk for

28

mvpromedia.com


Yokohama Techno Tower Hotel in Japan, another in the Sacramento Kings stadium in Denver and an aisle of Azbuka Vkusa - one of Russia’s top 50 retailers – it’s easy to see why he thinks that.

SENSEI

Retrofitting stores, the company combines inputs from overhead cameras with product tracking—using smart shelf sensors— for accuracy. The proprietary AI helps keep track of aggregate data and insights, such as how many customers are in the store and which products are being picked off of the shelves. It also tracks details such as which products customers are hesitating to buy, or which end-cap products are selling fastest. It’s not just contactless shops that will be popularised, either. Agrawal predicts, “the rise of dark stores that solely exist for online order fulfilment as well as existing grocery stores being additionally used as micro fulfilment centres (MFC)”. Zippin’s technology may be directly involved in the success of those stores, as their inventory tracking capabilities could fix the issue of wrong stock information and subsequent incorrect click-and-collect orders, which he calls the “nut to crack”.

Oddly enough, the Portuguese company Sensei was co-founded in 2017 by a man named Portugal. The company similarly retrofit to stores as the others do and aim to have, “An autonomous store on every corner”. Approaching the idea from an analytical point of view as a tech provider, rather than a full retail provider in the vein of Grabango, Sensei focus on what they can do as a solution for businesses. They achieve this by translating the captured data of a shopper into metrics for businesses to interpret using computer vision. “This is a frustration I have had for years”, said Sensei COO and co-founder, Joana Rafael. “I often go to the supermarket but the waiting time in line always baffled me. Also, when I tried shopping online, I just get lost in the thousands of different websites and the zillions of

Agrawal says Zippin’s projected global growth for 2021, “signals that for all existing retailers, it’s now a matter of ‘when’ to deploy frictionless shopping technology, not ‘if’”. Having most recently opened a store in the

mvpromedia.com

29


of data with predictive analysis to make better, more informed decisions.

GRABANGO

hours deciding what to have for dinner based on images”.

Another Silicon Valley company dipping its toe in the checkout-free technology pool, Grabango says that whilst the average shopper spends 32.89 days of their lives waiting in queues, its technology brings an average checkout to 1.3 seconds. That’s 97% less time

In this day and age, no matter the company or industry, metrics or data is gold dust. As Sensei put it, retail shops can “turn millions of customers into millions of insights”. With data analysis then, you can understand customers, their preferences and their in-store behaviour. Sensei advertise this as a way to “improve your relationships, business and increase your customer’s basket size”. It can also provide real-time in-store inventory analytics that tracks the products on the shelves. Misplaced and out-of-the-shelf products are detected in real-time and information is constantly gathered to optimize operations and supply-chain decisions. “Our mission is to digitize physical stores; we want to create smart stores that intuitively understand the needs of every customer and employee to deliver a shopping experience that is more convenient and personalized than ever before”, said co-founder Vasco Portugal. The company provide its business-minded take on autonomous shops via its Business Intelligence Platform (BI) that processes in-store data that has been mined. This, they say, is about harnessing the power

30

than a traditional shopper without the app. Grabango’s technology is retrofitted to the ceilings of stores but establishes its point of difference in that it extends from convenience stores to supermarkets.

mvpromedia.com


checkout process for shoppers whilst simultaneously improving in-store operations and inventory analysis tools. The company has over 30 patents (issued & pending) related to its checkout-free technology, underscoring Grabango’s broad ownership of not only its core IP but also general IP ownership throughout the retail technology industry.

It’s able to do so because it retrofits to work with the store. “As a mandate, we don’t want the world to change for us. We want to make our technology resilient so our partners don’t need to change their operations”, said Ryan Smith, Chief Technology Officer. “That statement sounds easy but there’s a lot of complexity on the tech side to make that a reality. We can learn a can of Coca-Cola across the board, but we need to be very fast at detecting a new specific product for a given retailer. That’s why we operate with top 30 grocery and convenience stores and we’re targeting large-scale chains that we know can roll out rapidly across the stores.”

Already established for a startup, the company has been in commercial service with an existing retail partner, Giant Eagle, since September 2020. However, working in a large, real-world situation brought one of the biggest questions surrounding AI and modern tech to the company: will autonomous AI replace people? “Retailers are having trouble staffing to the needs required as it is, so the question is can we even keep up?” replied Smith. “Many of them have additional value-creation opportunities for employees outside of checkout areas. I don’t see this as a full elimination of people at the checkout zone, it’s more an efficiency improvement for the people they have.” MV

Grabango says the technology is 99% accurate, does not use facial recognition, and accelerates the




“GAME-CHANGING”

ROBOTICS

Robots on the silver screen, like Rocky's beeping, bug-eyed robot butler Sico, might seem a far cry from where the robotics industry is today, but there is a connection for the Franco-American start-up Fuzzy Logic Robotics. MVPro's Writer, Joel Davies, explores how the company is pushing the next generation of robotics into the industry with its "game-changing" software. In a virtual press conference, Fuzzy Logic Robotics (FLR) - the brainchild of Dr Ryan Lober and Antoine Hoarau - announced the first release of Fuzzy Studio™, a universal software platform the company says is "intuitive and simple like a video game". The simplicity of the software exists in part because of the cinema. "The company originally provided artists… with an interface which allowed them to programme industrial robots, specifically for the cinema industry," said Ryan Lober, co-founder and CEO. "The challenge here was that artists who had never even seen a robot before needed to reprogram these complex things 20 or 30 times a day, but without a single line of code. That sort of constraint forced us to find a solution that was far simpler than if we had worked directly with engineers and industry in the first place... and gave us a completely different view on making robotics simpler than everyone else who started in the industry." Noting that both industrial and collaborative robotics are too costly for truly flexible production due to the complexity of software and integration, the company says only a few handling applications, such as pick-and-place, have been made truly accessible to non-experts and thus


cost-effective for flexible production. The vast majority of robotic and cobotic applications require complex and heterogeneous software tools and brand experts, and these tools require significant training and expertise. As a result, Fuzzy Logic Robotics says, more than 75% of the Total Cost of Ownership (TCO) of a robot is related to software, training and services in standard mass production. In flexible production, this number can skyrocket to over 90% of the TCO and "kill the potential return on investment of the robotic system". Instead, it says Fuzzy Studio™ is an easy-to-use and universal software platform that can help factories automate with robotics quickly, simply and cost-effectively - even for complex applications in processing, dispensing and welding. A few of these tasks come up in an industry as complicated and varied as aerospace, but for Fuzzy Logic Robotics, it doesn't have to be rocket science. "In the aerospace industry… we're working on applications for a specific manufacturer who has over 1,000 different types of references that need to be treated in a very time-consuming and manual application. What we're showing with our software for this client is that an operator in the factory can programme the robot visually via our software and have a robot do the same process. Not only are they liberating two operators for their whole day of work, but they're also removing these very time-consuming and labour-intensive tasks that none of them appreciate. It's an application that just wasn't possible with robotics before, because it would have cost them something in the order of



€300,000 to programme all of the references. Now an operator can do it on a day-to-day basis, so that's what's truly amazing about the application and what you can do with flexible production." Some of the most impressive features include drag-and-drop of Computer-Aided Design (CAD) parts into the 3D digital twin, offline simulation to real-time control, no-code real-time digital twin technology, two-click swapping between any make and model of robot, and one standard interface for all brands of robots.

What all of these features mean is that Fuzzy Studio™ covers all of the steps in the integration of a robotic work cell, from pre-project, design and commissioning to real-time production control, inline reprogramming and maintenance. It's designed to accelerate robotic uptake and usage for all stakeholders, from major manufacturers to small and medium-sized businesses, to systems integrators and even robot OEMs. Where FLR nails its colours to the mast as a unique provider is in its software-only solution. "Our competitors are some of the top minds in robotics," said Lober. "The biggest difference between what we do and the majority of the other competitors or start-ups in the space is that we are a 100% software solution. Our software is designed to handle applications from the CAD of the models we're trying to treat in production all the way to the control of the robot, just through software. I think that's important because there are many applications where you can respond with a hybrid hardware-software solution, and it can definitely accelerate things, but we made the decision to make our solution a pure software solution."

The software supports FANUC, ABB, Yaskawa, KUKA and Kawasaki robots, which Lober claims cover more than 90 per cent of the robot market. It also covers 40+ software formats, including industrial CAD, STEP and IGES. The range of compatible robots also allows users to browse a collection of robot models from the supported brands and filter by specification. The result is software that saves labour time, avoids collisions, enhances performance and reinforces safety.

"We are a start-up that is going to push the next generation of robotics into the industry. Our solution is unique in its software-only nature, and this is something that is important for integrating into the existing value chain of stakeholders. We think this is going to be a real game-changer for production because we can truly attain real, flexible production... and we can drive cost down dramatically so it is accessible to anyone," concluded Lober. MV



SPONSORED

CAP CLOSED! Strong price pressure combined with high-quality requirements - the beverage and bottle industry faces the classic dilemma of many industries. This is also the case in the quality control department of a French manufacturer of plastic caps. Reliably detecting cracks and micro-cracks on plastic caps in 40 different colours and shades, running at high speed on a production line, is a real challenge. APREX Solutions from Nancy, France, has successfully achieved this goal with the help of image processing technology and artificial intelligence. The basic images are provided by a USB 3 industrial camera from IDS Imaging Development Systems GmbH, who explain how it was done.

SOLOCAP is a subsidiary of La Maison Mélan Moutet, "flavour conditioner since 1880", and manufactures all types of plastic caps for the food sector at its industrial site in Contrexéville. Among them is a top-class screw cap suitable for any glass or PET bottle. Thanks to a clampable lamella ring arranged around the bottle collar, it enables a simple, fast, absolutely tight and secure seal. However, the lamellas must be reliably and extremely carefully checked for cracks, tears and twists during production. This is the only way to guarantee absolute tightness. The previous inspection system could not meet these high requirements. APREX Solutions realised the new solution individually, with artificial intelligence based on in-house software algorithms. The necessary specifications were developed in advance in cooperation with the customer. This also included several inspection stages, one of which, for example, was the reject control to avoid false reports. The introduction took place in two phases. First, the specific "SOLOCAP application" was trained with the help of the intelligent APREX Track AI solution. The software includes various object detector, classifier and standard methods that operate at different levels. Networked accordingly, they ultimately deliver the desired result, tailored to the customer. Four control levels with several test points guarantee a reliability rate of over 99.99%.

In the second step, this application was implemented in the production line right after the first assembly run with APREX Track C&M. The latter was specially developed for the diverse image processing requirements of the industrial sector. This includes, among other things, the control and safeguarding of a production line, up to the measurement, identification and classification of defects in the production environment. The software suite delivers the desired results quickly and efficiently, without time-consuming development processes. After a short training of the AI methods, the complete system is ready for use at the customer's site.

In the case of SOLOCAP, it combines an IDS UI-3280CP-C-HQ industrial camera, powerful ring illumination and a programmable logic controller (PLC) to provide comprehensive control over all inspection processes. At the same time, it records all workflows in real-time and ensures complete traceability. Only one camera is needed for this; however, APREX Track C&M can handle up to five cameras. "The difficulty of this project consisted mainly in the very subtle expression of the defects we were looking for and in the multitude of colours. With our software suite, it was possible to quickly set up an image processing application despite the complexity," explains Romain Baude, founder of APREX Solutions.

The image from the camera provides the basis for the evaluations. It captures every single cap directly in the production line at high speed and makes the smallest details visible to the software. 40 different colours and shades are reliably detected. The UI-3280CP-C-HQ industrial camera integrated into the system, with the 5 MP IMX264 CMOS sensor from Sony, sets new standards in terms of light sensitivity, dynamic range and colour reproduction. The USB 3 industrial camera provides excellent image quality with extraordinarily low-noise performance - at frame rates up to 36 fps. CP stands for "Compact Power": the tiny powerhouse for industrial applications of all kinds is fast, reliable and enables a high data rate of 420 MByte/s with low CPU load.

Thanks to the IDS-characteristic plug & play principle, the cameras are automatically recognised by the system and are immediately ready for use, as Romain Baude confirms: "The excellent colour reproduction of the UI-3280CP-C-HQ and its high resolution of 5 MP were decisive factors for us in choosing the camera. At the same time, the model enabled quick, uncomplicated integration into our system." Users can choose from a large number of modern CMOS sensors from manufacturers such as Sony, CMOSIS, e2v and ON Semiconductor, with a wide range of resolutions. The camera's innovative, patented housing design, with dimensions of only 29 x 29 x 29 millimetres, makes it suitable for tasks in the fields of automation, automotive, medical technology and life sciences, agriculture, logistics, as well as traffic and transport, among others. Screwable cables ensure a reliable electrical connection.

Anthony Vastel, Head of Technology and Industry at SOLOCAP, sees a lot of potential in the new inspection system: "APREX's AI-based approach has opened new doors for our 100% vision-based quality control. Our requirements for product safety, but also reject control, especially in the case of false reports, were quickly met. We are convinced that we can go one step further by continuing to increase the efficiency of the system at SOLOCAP and transferring it to other production lines." AI offers quality assurance - but also all other industries in which image processing technology is used - new, undreamed-of fields of application. It makes it possible to solve tasks where classic, rule-based image processing reaches its limits. Thus, high-quality results can be achieved with comparatively little effort - quickly, creatively and efficiently.

APREX Solutions and IDS have recognised this and offer solutions with intelligent products that make it easier for customers to enter this new world. Image processing and AI - a real dream team on course for growth. Since its foundation in 1997 as a two-man company, IDS has developed into an independent, ISO-certified family business with more than 330 employees internationally. Headquartered in Obersulm, Germany, the industrial camera manufacturer has developed high-performance, easy-to-use USB, GigE and 3D cameras with a wide spectrum of sensors and variants. The almost unlimited range of applications covers multiple non-industrial and industrial sectors in the fields of equipment, plant and mechanical engineering. In addition to its successful CMOS cameras, the company is expanding its portfolio with vision app-based, intelligent cameras. MV



3D MACHINE VISION IS DRIVING

EXCITING NEW POSSIBILITIES FASTER AND CHEAPER, WITH MORE ACCURATE INSPECTION

3D machine vision, or the precise three-dimensional measurement of complex free-formed surfaces, is a truly disruptive technology. As 3D imaging gains ground, how can companies get up and running quickly with 3D applications? Bruno Menard, Software Director for Teledyne DALSA Vision Solutions, examines what it takes to make 3D work in your imaging applications. Teledyne DALSA designs, develops, manufactures and markets digital imaging products and solutions, in addition to providing semiconductor products and services. Our core competencies are in specialised integrated circuit and electronics technology, software, and highly engineered semiconductor wafer processing.

3D MACHINE VISION TECHNOLOGIES Compared to more conventional 2D imaging, three-dimensional imaging is hard. Commonly used 3D machine vision technologies include stereo vision, Time-of-Flight (ToF) scanning and 3D triangulation. With stereo vision, images from two cameras are processed to measure the difference in the images caused by the displacement between the two cameras,


enabling the system to accurately judge distances. This takes more processing time than 2D systems, but today's multi-core processors can easily handle real-time 3D machine vision. Time-of-Flight (ToF) scanning determines the depth, length and width of an object by measuring the time it takes light from a laser to travel between the camera and the object. Typical ToF 3D scanning systems can measure the distances to 10,000 to 100,000 points on an object every second. 3D triangulation systems use lasers that shine thousands of dots on an object and a camera that precisely locates each dot. The dots, the camera and the laser form triangles, allowing the system to use trigonometry to calculate the object's depth, even for non-standard-sized objects such as the multiple parts of an engine component.
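The geometry behind two of these techniques can be sketched in a few lines. This is only an illustration of the underlying formulas, not Teledyne code; the focal length, baseline and timing values below are invented for the example.

```python
# Illustrative depth formulas for stereo vision and Time-of-Flight.
# All numeric values are made up for demonstration purposes.

C = 299_792_458.0  # speed of light, m/s


def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo vision: depth from the pixel disparity between two cameras.

    A feature that shifts by `disparity_px` pixels between the left and
    right images of a pair with focal length `focal_px` (in pixels) and
    baseline `baseline_m` lies at Z = f * B / d.
    """
    return focal_px * baseline_m / disparity_px


def tof_distance(round_trip_s: float) -> float:
    """Time-of-Flight: the laser pulse travels out and back, so halve it."""
    return C * round_trip_s / 2.0


# A feature with 40 px disparity, seen by a 1200 px focal-length pair 10 cm apart:
print(stereo_depth(1200.0, 0.10, 40.0))  # 3.0 m; larger disparity means closer
# A ToF return arriving 20 nanoseconds after the pulse left:
print(round(tof_distance(20e-9), 3))     # roughly 3 m as well
```

Note how the two techniques trade off differently: stereo accuracy degrades with distance (disparity shrinks), while ToF accuracy depends on how precisely the round-trip time can be measured.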

FASTER AND CHEAPER PROCESSING ENABLES MORE 3D APPLICATIONS Human eyes and brains often have a hard time processing the extra information available with 3D imaging – just think



of watching a 3D movie. However, computers today can do that quite well. That hasn’t always been the case. In the 1980s, dimensional imaging was merely a research curiosity, only making its way out of the laboratories and into application demonstration in the 1990s. As computers became more powerful in the 2000s, they were able to handle more sophisticated algorithms and process data more quickly. Finally, there were applications where 3D imaging could produce more reliable and accurate results than 2D. Initially, 3D laser scanners found their way into the oil industry. Geological features could be measured, and oil refineries and other large industrial plants could keep track of geographical shifts or other threats to pipelines and equipment. The potential of discovery or the huge expenses of a failure meant that the companies were willing to put up with high costs and complexity. Later, manufacturing industries—particularly automotive— found ways to use 3D imaging for quality control. The construction industry uses 3D scanners to survey and model buildings and city sites. As processing became easier, and prices came down, 3D imaging found its way into more markets. The medical field began exploring dental applications. Real estate and construction companies could create threedimensional models of houses and other structures to build, repair, and even sell.

3D IMAGING LEVERAGING MULTIPLE SENSORS

In certain 3D applications, the use of multiple sensors is required. Here are two examples:

Covering a large field of view (FOV) with high resolution. Scanning large objects requires a 3D sensor with a large FOV. This is possible, but in practice it can result in insufficient resolution for the application and may therefore reduce the precision of the measurements. In this case, setting up multiple 3D sensors next to each other can solve the problem. One example would be the inspection of the foam panels of a car's interior roof, which involves a very large FOV for a rather small depth.


3D is required for any precision depth measurement. For example, solder inspection can only be achieved with precision via 3D. Another example is the inspection of seals on food packaging.

Scanning integral objects and eliminating shadows.

Scanning complex, integral objects is not possible with a single 3D sensor, as the laser cannot reach all parts of the object. An integral object is a 3D model of a real object with data points on all surfaces of the object. For example, this could be a machined metal part on which we want to measure various subparts (widths, heights, angles), or a part with cavities not seen by a single laser. In order to successfully scan these complex parts, we need multiple profilers to reconstruct the total surface. Several configurations of 3D sensors – side-by-side, back-to-back, or opposing – make it possible to scan all parts of the object, and combining them provides an integral object output. To do this, we scan a calibration object (e.g., a prism) and compute a transformation via a specialized algorithm. We end up with one affine transformation for each profiler, to be applied at runtime in order to produce data in a unified coordinate system. The system takes all scans and outputs a 3D image of the object.
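The runtime merge step described above amounts to applying each profiler's stored 4x4 transformation to its point cloud and concatenating the results. A minimal numpy sketch (the function name and toy transforms are ours, not from the article):

```python
import numpy as np

def merge_scans(scans, transforms):
    """Map each profiler's points into the unified coordinate system.

    scans:      list of (N_i, 3) point arrays, one per profiler
    transforms: list of (4, 4) affine matrices obtained from calibration
    """
    unified = []
    for pts, T in zip(scans, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        unified.append((homo @ T.T)[:, :3])              # apply affine transform
    return np.vstack(unified)

# Two profilers: one at the origin, one mounted 100 mm along x.
shift = np.eye(4)
shift[0, 3] = 100.0
cloud = merge_scans([np.zeros((2, 3)), np.zeros((2, 3))], [np.eye(4), shift])
print(cloud.shape)  # (4, 3)
```

In a real system the transforms come from the calibration-object scan; here they are hard-coded purely to show the mechanics.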



SOFTWARE REQUIREMENTS

When multiple 3D sensors are used, the software needs to support the following requirements:

1. Timing Synchronization. When multiple sensors are involved, timing synchronization is required to avoid interference between sensors (i.e., to prevent one sensor from seeing the laser of an adjacent sensor). To do this, the sensors are configured to strobe their lasers and start exposing alternately in time (one after the other). Ideally, this synchronization is organized so that the primary sensor is electrically triggered while sending a software command to all other sensors via an Ethernet link. To ensure precise, non-drifting synchronization, the PTP (Precision Time Protocol) standard is used.

2. Unified Coordinate System. In a multiple-sensor configuration, each individual sensor has its own coordinate system, which needs to be calibrated and transformed into a unified coordinate system for the application. Calibration is performed using a known calibration object on which an algorithm extracts features of interest before computing a transformation (typically a rigid transformation with six degrees of freedom). Once the transformations are applied to all sensors and redundant 3D points are eliminated, a unified view of the scanned object is produced and ready for measurements. Calibration objects may be of different shapes, sizes and layouts. They need to be easy to manufacture and easy to mount (customers often produce their own calibration object) while being suitable for extracting features such as corners, lines and vertices.

3. Graphical Software. The software provides the ability to manipulate the parameters of the system – timing, layout scenarios, calibration objects, transformations and so on – in a graphical way, to ensure an optimal balance between ease of use and flexibility.

4. Measurements. Measurements are performed on the resulting data in the unified coordinate system. Measurements are defined on primitives such as points, lines and circles. These primitives are fitted to the real data points for maximum precision.
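One standard way to estimate the six-degree-of-freedom rigid transformation used in sensor calibration (the article does not specify the exact algorithm) is the Kabsch/Procrustes method: align the centroids of the matched calibration features, then recover the rotation from a singular value decomposition. A hedged sketch with numpy (function name and test points are ours):

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t such that dst ~ src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Sensor-frame corners of a calibration prism vs. their unified-frame positions:
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
dst = src @ Rz.T + np.array([5.0, 0.0, 0.0])
R, t = rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```

With noisy real-world features the same code returns the least-squares best fit rather than an exact match, which is exactly what is wanted for calibration.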

A RAPIDLY GROWING GLOBAL MARKET

3D machine vision was developed to enable automated quality control, but it is now being applied much more widely, in concert with automation and machine learning. Quality control has evolved into production optimization: systems that detect potential problems at a very early stage, identify the causes, and automatically fix them on the fly. Analysts are predicting double-digit growth across the industry: 3D modelling, scanning, layout and animation, 3D rendering, and image reconstruction. The 3D camera industry is forecast to reach $17.6 billion by 2025, and the 3D scanning segment is expected to reach $14.3 billion by 2025. The growing use of 3D imaging in smartphones, cameras, and televisions continues to drive demand, and the use of 3D imaging software in the automation industry continues to propel adoption as deployment becomes less burdensome. MV



SPONSORED

ADVANCED ILLUMINATION RELEASES

NEW LINEAR BACKLIGHT FOR MACHINE VISION

Rochester, VT – Advanced illumination releases its newest LED light for machine vision, the BL313 Medium Intensity Linear Backlight. This Linear Backlight delivers intense, diffuse illumination in a scalable design. Ideal for backlighting line scan applications and object silhouetting, the BL313 is customizable by peak wavelength and emitting length to provide tailored solutions for user applications. The BL313 features 17x higher output intensity than the existing BL193, with 226 klx in a highly uniform dispersion. Pre-engineered scalability translates to emitting lengths in 2” increments, eight peak wavelength options, and an optional washdown feature for harsh environments. The Medium Intensity Linear Backlight is also available with inline or external controls, providing varying power options for both strobed and continuous operation.

This new Linear Backlight is also an ideal diffuse light projector for applications requiring adequate light dispersion to minimize hot-spot reflection on specular surfaces, particularly where space constraints exclude standard diffuse lighting. As with every product from Advanced illumination, the Medium Intensity Linear Backlight has a standard five-year warranty from the original date of purchase.

Founded in 1993, Advanced illumination was the first lighting company to develop and sell an LED lighting product and has continued to be a global leader in the machine vision industry ever since. Ai combines innovation in product development and process control to deliver tailored lighting solutions to its customers. Ai has Stock products that ship in 1-3 days and hundreds of thousands of Build-to-Order lights ready to ship in 1-3 weeks. Their customers face unique challenges with their ever-evolving inspection systems; Ai is here to innovate with them. For more information, please visit our website: www.advancedillumination.com. MV




AUTOMATE FORWARD:

TOP THREE EXHIBITORS

One of the biggest industry trade show events in North America, Automate Forward didn't fail to provide a worthy platform for the world's best companies and products this year. The event contained many highlights, including a final-day keynote speech from Andrew Ng on an end-to-end workflow for building deep learning-powered visual inspection, which was probably worth its own article. Having scoured every product and booth on display, MVPro's writer, Joel Davies, gives you the top three exhibitors he visited.

PROPHESEE

Inspired by human vision, Paris-based Prophesee says its technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was, until now, invisible to standard frame-based technology. They call this design event-based vision, offering technology that is "fundamentally different from the traditional image sensors" and a "paradigm shift in computer vision." Prophesee's machine vision systems can operate in areas such as autonomous vehicles, industrial automation, IoT, security and surveillance, and AR/VR. One early application was in medical devices that restore vision to the blind. Using the booth as a base from which to showcase its event-based vision technology, some of the products on display included


the Metavision® Packaged Sensor, the Century Arks SilkyEvCam and the Imago VisionCam EB, the latter two powered by Prophesee. "We are a disruptor of sorts, and it can be easy to get discouraged by people's resistance to change", said Luca Verre, co-founder and CEO of Prophesee. "We have overcome that by showing very real, tangible improvements that can be realized with our technology. Looking forward, we want to continue to build an entire ecosystem around event-based vision. Great strides have been made in making AI and machine learning more accessible, and we envision more machines and processes that can perform better by using characteristics of the human brain and eye to improve safety and productivity in manufacturing". In 2019, inVISION added Prophesee's event-based vision reference system, Onboard, to its "Top



Innovation” list. More recently, it was announced that FRAMOS, a leading supplier of embedded vision solutions and 3D cameras for industrial applications, had become a global distribution and ecosystem partner for Prophesee’s Metavision® line of advanced vision products.

ZIVID Founded in 2015, Zivid is a Norwegian, “pure-play” provider of industrial 3D machine vision cameras and vision software for autonomous industrial robot cells, collaborative robot (cobot) cells and other industrial automation systems. The company’s primary hardware products are the Zivid Two and Zivid One+ 3D colour cameras. They are supported by companion software products like the Zivid Software Development Kit (SDK) and the Zivid Studio, a graphical user interface (GUI).


Primarily on display at the booth was the Zivid Two, which the company calls “probably the best 3D camera in the world”. It achieves high resolution and precision point clouds of small, densely packed or highly detailed objects, distinguishing features smaller than 5 mm. Palm-sized, 880g in weight, with a 60ms acquisition time and native colour 3D point clouds, it’s easy to see why the company is so confident. Its quality lends itself to applications such as bin picking, piece picking and machine tending. “High-end companies such as BMW, ThyssenKrupp, Amazon and Ocado are now using this technology, and are a lot less forgiving – demanding 100 per cent pick rates rather than 80 to 90 per cent”, said Mikkel Orheim, Senior Vice President of sales and business development at Zivid. “We see across the board that customers want more automation and to be able to pick more products. They are not looking for 10



systems that need to be monitored by five people; they are looking for 20 systems that can be monitored by one person".

Taking on the challenge set by top-tier companies like those mentioned by Orheim, Zivid won inVISION's "Top Innovation" award for the Zivid 3D camera in 2018 and the Red Dot "Product Design Winner" award in 2018, as well as being named a gold-level honouree for the Vision Systems Design Innovators Award in 2018. With the Zivid Two released in November 2020, there will be high hopes for it to follow in its predecessor's footsteps.

KITOV AI

Kitov AI is an Israeli company that develops fully automated visual inspection solutions for a broad range of product lines and markets. Founded in 2014 as a spin-off of RTC Vision, which had been developing advanced computer vision algorithms for over a decade, Kitov aimed to develop a universal system that can be intuitively trained by a non-expert to inspect almost any product and effectively replace humans at the tedious task of finding and judging defects across production lines.

Products on show at its booth included Kitov One, a smart 3D, universal inspection robot that the company claims can inspect "virtually any product". Leveraging advanced 3D computer vision and deep-learning algorithms, Kitov One supports complex 3D structures, numerous materials, and complete inspection specifications. Inspecting all sorts of features – labels, screws, connectors, ports – and reading 1D and 2D barcodes with a cycle time ranging from a few seconds to minutes, it's worth watching in action. "Many OEMs currently use automation and quality assurance systems, but leveraging these traditional machine vision technologies, customized for a specific product, may not be possible when dealing with highly variable, multicomponent products, for example," said Corey Merchant, Vice President of Kitov Americas. "Kitov One allows manufacturers to easily deploy a trainable system that leverages both deep learning and traditional machine vision techniques to inspect any type of product."



If my word isn't enough, Kitov One received a platinum-level award for innovation from Vision Systems Design magazine in 2018 in recognition of its breakthrough technology. The company recently announced it was collaborating with Capvidia NA (Houston, TX) to develop CAD2SCAN, a process for using a product's digital 3D CAD model to automate visual inspection in production. The project is part of a $7.45 million Israel-U.S. industrial R&D foundation project. Whilst there were many wonderful exhibitors and products on show during Automate Forward, we thought Kitov AI, Zivid and Prophesee were the most exciting. It is no coincidence that they are all involved in the vision side of the industry – perhaps they know how to paint a picture better than others. They probably ought to. MV




MILITARY

GRADE SECURITY FOR ENTERPRISE IT

The world increasingly relies on remote working for people to simply go about their normal lives, and the need for technology to support this shift has grown accordingly. Apart from anti-virus software or an ad-blocker, thoughts about security can go by the wayside. Yet most modern companies at least employ security guards, use cameras and require their employees to carry key cards to get into an office building. So why wouldn't you do the same at work, even if work is in the comfort of your home? Lynx Software Technologies has something to say about that.

Lynx Software Technologies, an innovator in modern platform software technologies, recently announced the availability of LynxSafe, a family of products that allows enterprise IT teams to apply Lynx's long-standing expertise in secure mission-critical systems to end-points. The first LynxSafe product, which is focused on secure laptops, creates isolated partitions that can run multiple security functions and secure operating systems. This enables architects to create products that deliver different levels of security for business and personal domains, which remains vitally important as people continue to work remotely and in zero-trust environments.

During the COVID-19 pandemic, enterprise and IT security has faced a crisis of unprecedented proportions as people continue to work remotely. According to a Lynx study conducted in Q1 2021:

• Over half of respondents (51%) believed their organization's cybersecurity efforts hadn't been strengthened since the start of the pandemic


• 60% of respondents hadn’t been notified of any apps and tools that did not meet the security standards set by their employers during the pandemic • Nearly 4 in 10 (36%) respondents were impacted by a cybersecurity hack during the pandemic - or knew someone who had been

As employers continue to offer remote and hybrid work options, LynxSafe provides a proven alternative to perimeter-based security, answering the unprecedented challenges of extended, fragmented workforces on top of the mounting threats IT teams were already facing. “Organizations must deliver a failproof way to secure end-points such as laptops, edge servers, networking cards and other devices in a work-from-home, zerotrust environment,” said Pavan Singh, VP of Product



Management at Lynx. “Effectively, what LynxSafe enables is the ability to extend the company firewall to the place where you are working, be that a house, a coffee shop, or an aeroplane. Lynx has a long history of securing mission-critical applications onboard vehicles for companies including Airbus, Collins Aerospace and NASA, where application failure is not an option, and we’re confident in LynxSafe’s ability to emulate this success for IT organizations.”

Unlike other enterprise-class hypervisors, which typically create a software abstraction layer between the hardware and the virtual equipment that runs on it, LynxSafe uses a separation kernel-based approach. The LynxSecure separation kernel is inherently more effective for creating secure end-points running multiple isolated functions, including domains for different security and classification levels. Two variants of the product exist:

• For high-threat environments, there is mandatory support for secure boot functionality and support for multiple levels of VPN. Use cases include the U.S. Department of Defense, and LynxSafe is aligned to support mandatory compliance requirements such as those called out in the Commercial Solutions for Classified (CSfC) specification.

• For moderate-threat environments, where traditional methods of securing a laptop are no longer sufficient for the enterprise, LynxSafe's manageability features allow IT departments to secure and manage an entire organization's laptops: separating security functions from the user operating systems, delivering patches and updates, and deleting important data, securely and remotely.

Other features and benefits of LynxSafe include:

• The ability of an organization to run one or multiple VPNs in isolated partitions for security and the prevention of network tampering
• Support for Windows® and Linux, enabling existing applications to run unmodified
• The ability for VMs to be managed remotely while people are working outside the office, saving operational costs
• Compatibility with industry-standard mobile device management and mobile app management frameworks
• Secure encryption of data in the end-point through a remote key mechanism
• Protection for the user's sensitive data so that it is not compromised even if the laptop is lost
• The opportunity for organizations to take their products through security certification programs with a focus on mobility and security

Both solutions have been proven and are immediately available for the Dell Latitude 5410 class of laptops, with broader device support planned through 2021.

Lynx Software Technologies is the premier Mission Critical Edge company that enables safe, secure and high-performance environments for global customers in the aerospace and automotive, enterprise and industrial markets. Since 1988, companies have trusted Lynx's real-time operating system, virtualization and system certification experience, which uniquely enables mixed-criticality systems to be harnessed and deliver deterministic real-time performance and intelligent decision-making. Together with a growing set of technology partners, Lynx is realizing a new class of Mission Critical Edge systems that keep people and valuable data protected, at every moment. For more information, visit www.lynx.com. MV



WHITE PAPER

THE MATERIALS BEHIND

THE IMAGE

It's easy to forget about the naturally occurring structures and combinations that are always around us, allowing life to exist and humans to thrive and advance as we do. Invisible to the eye, and neglected even when seen and used by incredible tools like those in the automation, machine vision and robotics industries, they provide us with a basic but wonderful platform. This is especially true for imaging technology, and MVPro is glad to present a white paper that reminds us there's more to a picture than just taking it.

COLLOIDAL NANOCRYSTALS: FROM SYNTHESIS TO APPLICATION

Colloidal nanocrystals (NCs) are semiconductor nanoparticles synthesized in solution. Thanks to quantum confinement, it is possible to tune their optical spectrum and therefore the colour of these materials. In the case of CdSe, the most investigated nanocrystal material, it is possible to shift its colour from red (the colour of the bulk material) to blue by shrinking the particle size. In addition, these nanoparticles are fluorescent, and their light emission has driven much of the interest in such particles. Academically, they are studied as single-photon emitters and as fluorescent markers for biological applications. Finally, they are one of the few nanotechnologies to have reached a mass market, through their use in displays as red and green light sources. Interest in NCs is not limited to the visible range. It is also possible to synthesize infrared (IR) optically active particles. In particular, a large research effort has been

dedicated to near-IR-absorbing NCs (notably PbS) for use in solar cells.1,2 Thanks to quantum confinement, the spectra of these nanocrystals are continuously tunable with size. The bandgap energy can easily be adjusted to around 1.3 eV, which is optimal for a single-junction solar cell. The colloidal growth of NCs is easy and inexpensive to set up compared with semiconductors grown using ultra-high-vacuum technology. In addition, it releases the constraint of epitaxial growth, so no lattice-matched substrate is required. This drastic reduction in manufacturing costs makes NCs serious competitors to the historical technologies dedicated to infrared detection. This is particularly the case for emerging applications, such as autonomous vehicles or night-driving assistance, where the device cost is extremely constrained. For the past ten years, there has been a strong effort towards (i) obtaining NCs that absorb over a wide spectral range in the infrared1,2 and (ii) integrating these materials into high-performance devices.3–5

NANOCRYSTALS AND INFRARED

In addition to PbS, which is well suited to address the near-infrared, II–VI materials such as HgTe are the most mature from the growth point of view. In addition, HgTe NCs benefit from their vicinity to the HgCdTe alloy already





widely used for infrared sensing. As for bulk HgCdTe, the cut-off wavelength of HgTe NCs is tunable from 1 to 100 µm.6 This spectral tunability is enabled by quantum confinement and the semi-metallic nature of bulk HgTe (i.e., no bulk band gap). It is nevertheless observed that the shape of the spectrum changes considerably over this broad spectral range. This results from the change of the optical transition involved: the smallest particles (the most confined) exhibit interband absorption between the valence and conduction bands, while the largest particles present an intraband transition occurring only within the conduction band.7,8 Thanks to this spectral tunability, HgTe NCs can be used over a broad range of applications in the short-wave IR (SWIR, λ < 1.7 µm), extended SWIR (λ < 2.5 µm) and mid-wave IR (MWIR, 3–5 µm).

The integration of these materials into photoconductive devices is first challenged by the electronic transport mechanism. The latter occurs through hopping and is inherent to the polycrystallinity of NC films. After their synthesis, the nanocrystals are capped by organic ligands whose role is to passivate electronic surface states and ensure the colloidal stability of the particles in a non-polar solvent. Although they are initially necessary, the presence of these ligands is detrimental to transport because they behave as tunnel barriers (typically 2 eV in height, 2 nm in length). In recent years, notably thanks to the work of Talapin's group at the University of Chicago,9 significant progress has been made in modifying the NC surface chemistry in order to increase electronic coupling between particles. This has led to an increase in the charge carrier mobility of the film by more than six orders of magnitude (from <10-6 cm2.V-1.s-1 to ≈1 cm2.V-1.s-1).

PHOTOCONDUCTION IN NANOCRYSTAL ARRAYS

In parallel with the efforts made on the material and its surface chemistry, similar work has been carried out on integrating the material into devices with increasingly complex geometries.4,10 In particular, this includes the demonstration of photodiodes,4,11–14 phototransistors, and the integration of nanocrystals into imagers.15,16 We have designed photodiodes whose cut-off wavelength matches that of InGaAs (≈1.7 µm). To reach this cut-off wavelength, we tune the particle size to around 6 nm. Within the photodiode, the HgTe NC layer is coupled to a hole-extraction layer made from Ag2Te nanocrystals.11 A gold film is deposited on this layer of Ag2Te and eases the hole





extraction thanks to its large work function value, and acts as a mirror to enable a second pass of the light through the active layer. The bottom electrode is made of glass coated with FTO (fluorine-doped tin oxide). The latter is used to collect the photoelectrons while providing good optical transmission in the SWIR.

THE TRADEOFF BETWEEN ELECTRONIC TRANSPORT AND ABSORPTION

The design of a photoconductive device from nanocrystals requires a tradeoff between charge transport and light absorption. Because of hopping transport, the carrier diffusion length is short (typically 10–100 nm), while the absorption length remains larger than a micron. The deposition of thick films (≈300–500 nm) is experimentally difficult, as cracks can form in the film, leading to electrical shorts. In addition, the collection of photocharges becomes inefficient over such thicknesses. For the sake of illustration, a 200 nm thick film absorbs about 10% of the incident light.

The absorption spectra of the diode are given for different thicknesses of the absorbent layer. Several lines can be observed on this graph; they correspond to Fabry-Perot resonances within the layer of nanocrystals. Furthermore, it is observed that to obtain an absorption of the order of 80% of the incident light at 1.7 µm, films thicker than 500 nm are necessary. We built a series of diodes while varying the HgTe film thickness. The performance of the diode increases for thicknesses up to 300 nm, which is consistent with an increase in absorption. Above this thickness, the film's response (i.e., its ability to transform a flow of photons into an electrical signal) saturates. This effect reflects the poor collection of photocharges for devices thicker than the diffusion length. In addition, the open-circuit voltage of the diode is greatly reduced. This is the signature of a short circuit through the film, reflecting the difficulty of depositing very thick, conductive layers from colloidal nanocrystals.
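Single-pass absorption figures of this kind follow from the Beer-Lambert law, A = 1 − exp(−αt) with α = 4πk/λ. A quick sketch, assuming the extinction coefficient k ≈ 0.1 that the paper reports for HgTe films near the first excitonic peak (an assumption carried over here purely for illustration):

```python
import math

def single_pass_absorption(thickness_nm: float, k: float = 0.1,
                           wavelength_um: float = 1.7) -> float:
    """Fraction of incident light absorbed in one pass (Beer-Lambert law)."""
    alpha_per_um = 4 * math.pi * k / wavelength_um   # absorption coefficient
    return 1 - math.exp(-alpha_per_um * thickness_nm / 1000)

# A 200 nm film at 1.7 um absorbs on the order of 10% of the light.
print(round(single_pass_absorption(200), 2))  # ~0.14
```

The same formula shows why simply thickening the film is a dead end: reaching ~80% in a single pass would require well over a micron of material, far beyond the carrier diffusion length, which is why mirrors and resonators are used instead.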


CHARACTERIZATION OF THE OPTICAL INDEX FOR THE RESONATOR DESIGN To overcome this material limitation, a possible approach is to introduce optical resonators17–19 in order to trap the light within the absorbing film. We have chosen to focus on a guided-mode resonator (GMR). The introduction of




a metallic grating modifies the propagation of the light and generates a mode that propagates along the substrate. The interaction length of the incident light with the film is thereby increased, which increases its absorption. This type of resonator has already been widely studied to increase the absorption of solar cells20 and of infrared detectors made from III–V semiconductors. Its application to nanocrystals nevertheless requires a modification of the design steps21 to avoid exposing the nanocrystals to annealing steps or to bad solvents that could damage the quality of the film. Moreover, the electromagnetic design of these resonators faces a difficulty: the poor knowledge of the optical indices of these materials. We therefore systematically characterized the optical index of nanocrystal films, with various particle sizes and surface chemistries, using spectrally resolved ellipsometric measurements.13 A particularly important result is that the spectral dependence of the refractive index is low; it is therefore possible to replace n(λ) by its mean value. Moreover, the index depends little on the size of the particles: n = 2.35 ± 0.1 captures the refractive index of most HgTe NC films. This value is clearly lower than that of the bulk material, for which an index of 3 to 4 is generally measured, and reflects the presence of voids and organic ligands in the NC film. The extinction coefficient, on the other hand, has a strong spectral dependence that follows the absorption spectrum. A value of k = 0.1 is typically observed at the first excitonic peak.

EFFECT OF THE RESONATOR ON THE PERFORMANCE OF THE DIODE

It is then possible to use this determined complex optical index to simulate a photodiode structure with a resonator. To avoid damaging the nanocrystals, the grating was fabricated before the film was deposited. In order to generate a resonance in the SWIR range (1.7 µm), with the grating period given by λ/n (λ being the wavelength of the resonance and n the refractive index), a period of about 700 nm is required. The manufacture of this grating therefore requires an electron-beam lithography step. In addition to the period, the size of the grating patterns has been optimized in order to maximize the absorption of the component. It is also necessary to ensure that the absorption is localized within the film of nanocrystals (thus generating the photocurrent) and not in the contacts, to avoid thermal losses. The geometric parameters of the grating are therefore optimized so that the electromagnetic field is concentrated at the heart of the semiconductor film.13

The presence of the grating brings an obvious signature to the spectral photoresponse: two distinct peaks appear between 1 and 1.5 µm. They are associated with the excitation of guided modes in the layer of NCs. We also see that the response is enhanced even outside these resonances. Indeed, the diffraction grating, made of aluminium, lowers the work function of the FTO electrode, facilitating the extraction of electrons. In contrast, if the grating is made of gold, the work function of the electrode is increased and the photoresponse is significantly lowered. This diode absorbs 80% of the incident light between 1 and 1.6 µm, which makes it possible to obtain a response of 0.2 A.W-1 under blackbody illumination and a detectivity of 2x1010 Jones at room temperature, under 0 V bias and for a signal at 1 kHz. Finally, the response time of this diode is short, around 110 ns.13
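The quoted grating period follows directly from the λ/n rule: with the film index n ≈ 2.35 measured by ellipsometry and a target resonance at 1.7 µm, the required pitch comes out at roughly 700 nm. As a sanity check:

```python
# Guided-mode resonance condition: grating period = wavelength / refractive index
wavelength_nm = 1700   # target SWIR resonance
n_film = 2.35          # mean refractive index of the HgTe NC film (from ellipsometry)

period_nm = wavelength_nm / n_film
print(round(period_nm))  # ~723 nm, i.e. the ~700 nm period used in the paper
```

A pitch this far below optical-lithography comfort zones is also why the fabrication requires an electron-beam lithography step.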

CONCLUSION

Infrared colloidal nanocrystals have matured, allowing them to be integrated into complex devices. It has become possible to design SWIR detectors (photodiodes and imagers) whose active layer is made of nanocrystals. Doing so requires addressing an important compromise between the diffusion length of the carriers (<100 nm) and the absorption depth (a few µm). This compromise can be addressed through the introduction of optical resonators, which allow the

incident field to be concentrated in a thin layer (<200 nm) of nanocrystals. The next challenge will be the introduction of such resonators at the focal plane array level.

THE AUTHORS

Originally titled "Infrared sensing using nanocrystal toward on-demand light-matter coupling", the white paper was a French collaboration written by the following authors:

Eva Izquierdo of Sorbonne Université, CNRS, Institut des NanoSciences de Paris, INSP, 75005 Paris, France.

Audrey Chu of Sorbonne Université, CNRS, Institut des NanoSciences de Paris, INSP, 75005 Paris, France, and ONERA - The French Aerospace Lab, 6 chemin de la Vauve aux Granges, BP 80100, 91123 Palaiseau, France.

Charlie Gréboval of Sorbonne Université, CNRS, Institut des NanoSciences de Paris, INSP, 75005 Paris, France.

Gregory Vincent of ONERA - The French Aerospace Lab, 6 chemin de la Vauve aux Granges, BP 80100, 91123 Palaiseau, France.

David Darson of Laboratoire de physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France.

Victor Parahyba of New Imaging Technologies SA, 1 impasse de la Noisette, 91370 Verrières-le-Buisson, France.

Pierre Potet of New Imaging Technologies SA, 1 impasse de la Noisette, 91370 Verrières-le-Buisson, France.

Emmanuel Lhuillier of Sorbonne Université, CNRS, Institut des NanoSciences de Paris, INSP, 75005 Paris, France.

WHITE PAPER

REFERENCES

1. S. Keuleyan, E. Lhuillier, V. Brajuskovic, and P. Guyot-Sionnest, Nature Photonics 5, 489 (2011).
2. S.E. Keuleyan, P. Guyot-Sionnest, C. Delerue, and G. Allan, ACS Nano 8, 8676 (2014).
3. X. Tang, M.M. Ackerman, M. Chen, and P. Guyot-Sionnest, Nature Photonics 13, 277 (2019).
4. U.N. Noumbé, C. Gréboval, C. Livache, A. Chu, H. Majjad, L.E. Parra López, L.D.N. Mouafo, B. Doudin, S. Berciaud, J. Chaste, A. Ouerghi, E. Lhuillier, and J.-F. Dayen, ACS Nano 14, 4567 (2020).
5. A. Chu, C. Gréboval, Y. Prado, H. Majjad, C. Delerue, J.-F. Dayen, G. Vincent, and E. Lhuillier, Nature Communications 12, 1794 (2021).
6. N. Goubet, A. Jagtap, C. Livache, B. Martinez, H. Portalès, X.Z. Xu, R.P.S.M. Lobo, B. Dubertret, and E. Lhuillier, Journal of the American Chemical Society 140, 5033 (2018).
7. A. Jagtap, C. Livache, B. Martinez, J. Qu, A. Chu, C. Gréboval, N. Goubet, and E. Lhuillier, Opt. Mater. Express 8, 1174 (2018).
8. J. Kim, D. Choi, and K.S. Jeong, Chem. Commun. 54, 8435 (2018).
9. M.V. Kovalenko, M. Scheele, and D.V. Talapin, Science 324, 1417 (2009).
10. A. Chu, C. Gréboval, Y. Prado, H. Majjad, C. Delerue, J.-F. Dayen, G. Vincent, and E. Lhuillier, Nature Communications (2021).
11. M.M. Ackerman, X. Tang, and P. Guyot-Sionnest, ACS Nano 12, 7264 (2018).
12. A. Jagtap, B. Martinez, N. Goubet, A. Chu, C. Livache, C. Gréboval, J. Ramade, D. Amelot, P. Trousset, A. Triboulin, S. Ithurria, M.G. Silly, B. Dubertret, and E. Lhuillier, ACS Photonics 5, 4569 (2018).
13. P. Rastogi, A. Chu, T.H. Dang, Y. Prado, C. Gréboval, J. Qu, C. Dabard, A. Khalili, E. Dandeu, B. Fix, X.Z. Xu, S. Ithurria, G. Vincent, B. Gallas, and E. Lhuillier, Advanced Optical Materials 2002066 (2021).
14. M.M. Ackerman, M. Chen, and P. Guyot-Sionnest, Appl. Phys. Lett. 116, 083502 (2020).
15. A.J. Ciani, R.E. Pimpinella, C.H. Grein, and P. Guyot-Sionnest, Infrared Technology and Applications XLII 9819, 981919 (2016).
16. A. Chu, B. Martinez, S. Ferré, V. Noguier, C. Gréboval, C. Livache, J. Qu, Y. Prado, N. Casaretto, N. Goubet, H. Cruguel, L. Dudy, M.G. Silly, G. Vincent, and E. Lhuillier, ACS Appl. Mater. Interfaces 11, 33116 (2019).
17. X. Tang, M.M. Ackerman, G. Shen, and P. Guyot-Sionnest, Small 15, 1804920 (2019).
18. X. Tang, M.M. Ackerman, and P. Guyot-Sionnest, Laser & Photonics Reviews 13, 1900165 (2019).
19. X. Tang, M. Chen, M.M. Ackerman, C. Melnychuk, and P. Guyot-Sionnest, Advanced Materials 32, 1906590 (2020).
20. H.-L. Chen, A. Cattoni, R. De Lépinau, A.W. Walker, O. Höhn, D. Lackner, G. Siefer, M. Faustini, N. Vandamme, J. Goffard, B. Behaghel, C. Dupuis, N. Bardou, F. Dimroth, and S. Collin, Nature Energy 4, 761 (2019).
21. A. Chu, C. Gréboval, N. Goubet, B. Martinez, C. Livache, J. Qu, P. Rastogi, F.A. Bresciani, Y. Prado, S. Suffit, S. Ithurria, G. Vincent, and E. Lhuillier, ACS Photonics 6, 2553 (2019).



FILTERS: A NECESSITY, NOT AN ACCESSORY.

INNOVATIVE FILTER DESIGNS FOR INDUSTRIAL IMAGING

MIDOPT.COM

