James F. Kenefick — Azafran Capital INSIGHTS



Azafran Capital Partners

INSIGHTS Issue Three Focus

Evolution and Innovation

At Azafran Capital Partners, we focus on investing in end-to-end solutions that solve real-world problems and derive from a scientific or engineering innovation in machine learning. Today, we concentrate on machine learning solutions built on voice and acoustic data, as well as language and imagery data. Issue Three of INSIGHTS reflects this focus, highlighting the voice and acoustic opportunity.

“Think of deep learning, machine learning and artificial intelligence as a set of Russian dolls nested within each other.” - Skymind AI.Wiki

Technology is above us, underneath us, around us and inside us, advancing at a pace that was unthinkable even 25 years ago, when the Internet was just making its way into everyday life. It is now fed by AI, machine learning, big data, sensors, nanotech - the list is endless. One dominant element, however, has quietly made its way into almost every home and office: end devices with voice recognition as a core technology. Hundreds of millions of devices are suddenly in our midst, with voice recognition technology at a level of accuracy that was unthinkable even a few years ago.

The FAANG companies (Facebook, Apple, Amazon, Netflix and Alphabet's Google) have their platforms established, as do other players, including health and wellness companies such as Johnson & Johnson, Bayer and even Under Armour. They have all realized that voice is the UI and gateway to deep machine learning, and they are rushing to figure out how to build it into their products and services. The next step is making voice the entry point and connection to those products: voice is becoming ubiquitous, no keypad is needed, and the consumer is now always connected, from the living room to the car. When privacy is needed, a keyboard or other means of access remains available.

Portfolio Focus: Aspinity (Edge of Network + Sensors)

“Everything in the technology world went to digital processing with everything being in the digital domain. What I started doing in grad school was looking at biology for inspiration for how to do more efficient processing of information, and that led me to going down to do things, actually, in very old-fashioned analog ways. Well, it turns out that the analog way of building things is going through a Renaissance.” - David Graham, Aspinity co-founder

Aspinity solves the power and data challenges of integrating always-on sensing functionality into portable, battery-operated devices. Aspinity has developed a proprietary analog, ultra-low-power, always-on sensing chip that eliminates the digitization of irrelevant data and allows the higher-power system processors to stay asleep until an event has actually been detected. The Aspinity solution is highly power- and data-efficient, extending battery life and enabling a whole new generation of portable, always-on sensing applications such as always-on audio/voice wake-up and vibration sensing for smart home, consumer, industrial, and medical applications. In 2017, Aspinity was among the nine start-up companies selected for the inaugural Alexa Accelerator, a joint program between Amazon and Techstars Seattle.

“Companies that aren’t paying attention to voice are already getting burned. For example... major publishers could be losing as much as $46,000 per day — $17M over the course of 2019 — due to voice tech failing to help consumers buy the books they want. This loss could balloon to upwards of $50M in 2020. Don’t wait for a similar report to come out for your industry — now is the time to invest, while voice-enabled purchasing is still new and not as widely adopted.” Source: Medium, Growing Artificial Intelligence with Blockchain

Excerpted from Your Company Needs a Strategy for Voice Technology by Bradley Metrock, HBR.org, April 29, 2019

Azafran INSIGHTS © Azafran Capital Partners 2019 - All Rights Reserved

As for the aha moment and genesis of Aspinity, CEO Tom Doyle notes, “During university research and development, we were trying to monitor audio with early IoT devices and realized we could be more efficient by extracting intelligence from the signal at very low power levels. When rolled up to the product level, we saw that this offered significant power savings for battery-operated devices.”
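The analyze-first, digitize-later pattern Doyle describes can be sketched in a few lines. This is a toy simulation, not Aspinity's actual design: the function names, threshold, and data below are invented purely for illustration.

```python
# Sketch of an "analyze-first" pipeline: a cheap, always-on detector
# inspects the raw signal and only wakes the power-hungry processing
# stage when an event looks likely.

def always_on_detector(sample: float, threshold: float = 0.5) -> bool:
    """Stands in for the low-power analog stage: is this sample interesting?"""
    return abs(sample) > threshold

def process_stream(samples):
    """Only 'digitize' (expensively process) samples flagged by the detector."""
    wake_ups = 0
    events = []
    for s in samples:
        if always_on_detector(s):
            wake_ups += 1               # main processor wakes up
            events.append(round(s, 2))  # stand-in for full digitization/ML
        # otherwise the main processor stays asleep, saving power
    return wake_ups, events

# Mostly quiet signal with two loud excursions:
stream = [0.02, -0.05, 0.9, 0.01, -0.03, -0.8, 0.04]
print(process_stream(stream))  # (2, [0.9, -0.8])
```

The power saving comes from the same place Aspinity's chip finds it: the expensive stage runs twice instead of seven times, because irrelevant samples never reach it.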

Volume 1 Issue 3 - Page One

Artificial Intelligence vs. Machine Learning vs. Deep Learning

The Azafran team scoured the web and other publications to precisely define and segment the branches of AI, ML and DL. The best description we found is excerpted from the Skymind AI.Wiki (link):

You can think of deep learning, machine learning and artificial intelligence as a set of Russian dolls nested within each other, beginning with the smallest and working out. Deep learning is a subset of machine learning, and machine learning is a subset of AI, which is an umbrella term for any computer program that does something smart. In other words, all machine learning is AI, but not all AI is machine learning, and so forth.

John McCarthy, widely recognized as one of the godfathers of AI, defined it as “the science and engineering of making intelligent machines.” A few other definitions of artificial intelligence: 1) a branch of computer science dealing with the simulation of intelligent behavior in computers; 2) the capability of a machine to imitate intelligent human behavior; and 3) a computer system able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Machine learning is a subset of AI. That is, all machine learning counts as AI, but not all AI counts as machine learning. For example, symbolic logic – rules engines, expert systems and knowledge graphs – could all be described as AI, and none of them are machine learning.
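The distinction between symbolic AI and machine learning can be made concrete with a toy example. The task, names, and data below are invented for illustration and are not from the Skymind wiki: the same classification is done once with a hand-written rule (AI, but not ML) and once with a threshold learned from labeled examples (ML).

```python
# 1) Symbolic AI, not machine learning: a hand-written rules engine.
#    The cutoff is fixed by a human expert, not learned from data.
def rules_engine_is_hot(temp_c: float) -> bool:
    return temp_c >= 30.0

# 2) Machine learning (a subset of AI): the cutoff is learned from data.
def learn_threshold(samples):
    """Pick the candidate cutoff that misclassifies the fewest examples."""
    candidates = sorted(t for t, _ in samples)
    return min(
        candidates,
        key=lambda c: sum((t >= c) != label for t, label in samples),
    )

labeled = [(18, False), (24, False), (29, False), (31, True), (35, True)]
cutoff = learn_threshold(labeled)     # 31, recovered from the examples
print(cutoff, rules_engine_is_hot(35))
```

Both programs "do something smart" and so fall under the AI umbrella; only the second improves its behavior from data, which is what places it inside the machine learning doll.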

Market Predictions: 5G Expected to Add Trillions to the Global Economy by 2035

In 2035, when 5G’s full economic benefit should be realized across the globe, a broad range of industries – from retail to education, transportation to entertainment, and everything in between – could produce up to $12.3 trillion worth of goods and services enabled by 5G. The 5G value chain itself is seen as generating up to $3.5 trillion in revenue in 2035, supporting as many as 22 million jobs. Over time, 5G will boost real global GDP by a cumulative $3 trillion from 2020 to 2035, roughly the equivalent of adding an economy the size of India to the world in today’s dollars. Source: The 5G Economy, Qualcomm Technologies

Deep learning is a subset of machine learning. Usually, when people use the term deep learning, they are referring to deep artificial neural networks, and somewhat less frequently to deep reinforcement learning.

NEWSWORTHY… New Report - Deep Learning Market Set to Explode: The deep learning market is expected to exceed US$18 billion by 2024, growing at a CAGR of 42% over the forecast period. The market is segmented by end-user type, application type, solution type and region. By end-user type, it is segmented into automotive, aerospace & defense, healthcare, manufacturing and others. Browse the full report (link).

Azafran Perspective: Voice in Automobiles Accelerates

Believe it or not, voice assistants are more popular and more prevalent in cars than in homes. According to recent data from Voicebot.AI, 77 million adults in the U.S. use voice assistants in the car, compared with 45 million adults using them on in-home smart speakers. Almost every car rolling off a major automaker's line has voice-first technology integrated, from Mercedes-Benz and BMW to Tesla, Chevrolet and Ford. Source: Your Company Needs a Strategy for Voice Technology by Bradley Metrock, HBR.org

Excerpted from the Azafran Capital white paper, The Voice of Deep Machine Learning: Why Now?

At Azafran Capital, we see this moment resembling the mad rush of 1994-'95, when we were all connecting over dial-up and DSL; then came broadband, and the Internet exploded. Siri and the Amazon Echo Dot are the equivalent of that dial-up and DSL stage of the early Internet, which was replaced by broadband, bringing broader adoption, a richer experience and many, many more companies, products and services to market. The focus of Azafran Capital Fund One on acoustics as the overlay is rooted in the transformational nature of the technology: it is already in almost every home and business, and the market is set to grow exponentially. Because we choose to be experts rather than generalists, our strategy is already paying off with our early investments, Yobe and Aspinity, and a number of new investments in transformational companies and technologies are coming soon.


Volume 1 Issue 3 - Page Two

Investment Segment Highlight: Edge of the Network

Component: sensors or embedded devices; Internet connectivity not required.

An edge device is any piece of hardware that controls data flow at the boundary between two networks. Edge devices fulfill a variety of roles, depending on what type of device they are, but they essentially serve as network entry (or exit) points. Cloud computing and the internet of things (IoT) have elevated the role of edge devices, ushering in the need for more intelligence, computing power and advanced services at the network edge. But edge devices do not need to be connected to the network, and new network topologies are emerging. This concept, where processes are decentralized and occur in a more logical physical location, is referred to as edge computing. From a security and privacy standpoint, it is important to recognize that with edge devices the data stays on the device(s).

The Azafran Take: We see this category taking off as lower-power solutions enter the market and the need for edge devices to be connected to power and/or the network quickly fades. Our focus in this segment, and where we see the greatest opportunity, is on devices providing self-contained functionality with no need for cloud services. They may be connected to a local mesh network, which provides “minions” that accomplish many different tasks working as a team.

Two factors are often overlooked and are very important for bringing AI into our lives: first, a reduction in the cost, and an increase in the performance, of chips doing machine learning inference “at the edge”; and second, the development of middleware allowing a broader range of applications to run seamlessly on a wider variety of chips. It is these two developments that will allow AI to enhance our lives in countless new ways and put AI in our pockets, cars, houses, and a host of other places.

What we have witnessed in recent years is a natural migration of machine learning from the central, powerful computers where an algorithm or application has historically been built, trained, and used, to an edge model.
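That migration can be sketched as a split between server-side training and on-device inference. A minimal toy sketch, assuming a simple linear model; all function names and data are invented for illustration:

```python
# Train centrally, infer at the edge: the "model" shipped to the device
# is just a couple of numbers, and inference needs no network at all.

def train_on_server(data):
    """Least-squares fit of y = a*x + b; stands in for heavyweight training."""
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return {"a": a, "b": b}   # the exported "model" is just these numbers

def edge_inference(model, x):
    """Runs entirely on-device: no cloud round-trip required."""
    return model["a"] * x + model["b"]

model = train_on_server([(0, 1), (1, 3), (2, 5)])  # learns y = 2x + 1
print(edge_inference(model, 10))  # 21.0
```

The same split underlies real edge ML stacks: expensive training happens once on central hardware, while the small exported artifact runs locally, which is also why the data can stay on the device.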

IN THE KNOW: It's Time to Stop Calling It ‘Artificial' Intelligence

“Ultimately, the biggest impediment to AI is not the technology but the very term we use to describe it, Artificial. It's not artificial. Instead it is an extension of our own intelligence. Perhaps it's time to start calling it what it is, Augmented Intelligence that we need in order to deal with and survive the increasing complexity of the machines and the world we inhabit. In that sense, it's no different than the long history of tools we have created to extend human abilities to cope and evolve in an ever changing, ever more challenging world.” - Thomas Koulopoulos, Inc. Magazine

Quote of the Month: “It won’t be long before every company will be expected to own and manage its own voice-first presence and capabilities, much like every company is expected to own and manage its web presence and capabilities. In fact, every time you see someone asking Siri to give them information, or someone asking Google Assistant for directions, you’ll realize that your customers are already way ahead of you.” - Bradley Metrock, writing in HBR.org

Volume 1 Issue 3 - Page Three

Feedback, Going Forward

Thank you for the work you are doing in the world and for your continued support of Azafran INSIGHTS’ monthly journey into the intersection of machine learning driven by voice, acoustic, language and image data. Our intention is to use this publication as a vehicle to open a dialogue with each of you, together as a group, and we strongly encourage and welcome your feedback. We’ve made leaving feedback simple: you can quickly and securely leave us a voice message by clicking here. If you are reading in print, please visit the contact section of our website at AzafranCapitalPartners.com. In either case, just click the “Start Recording” button and leave your thoughts and suggestions. Or you can always email us at insights@azafranpartners.com - thank you.

We will be publishing INSIGHTS each month going forward, exploring the opportunity at the intersection of voice tech and AI. We look forward to building this sector together, with all the benefits for humanity that are coming down the road. From the Azafran team, we wish you all the best and a successful year ahead.

Voice-Tech Industry at a Glance: Top 5 Markets & Global


413 West 14th St. - Suite 200, New York, NY 10014 p: +1.212.913.0700 insights@azafranpartners.com AzafranCapitalPartners.com

Volume 1 Issue 3 - Page Four