Customer Value Proposition

Future Fragment has the technical skill set to capture a voice clip (see Mobile UI and Workflows) and index it in a manner that allows the correct model algorithms to be brought into play, ensuring accuracy of interpretation. Our team of full-stack developers and data scientists has spent the last 18 months building our own proprietary repository for image and voice data. On the back of our cutting-edge software and platforms, we have just been invited to become a “Standard and Validated AWS Technical Partner” in the AWS framework, recognition that adds credibility to the quality of the technology stack we have built and deployed.

Off the back of the Voice Exchange, the opportunity for product development is immense: the Future Fragment product-development blueprint has Voice to Text and Emotion Detection Services (EDS) as its key next-step developments, neither of which is really feasible in an African development context without the Voice Exchange. We have patented our EDS process in the interim.

In my continual research on opportunities in the voice space I subscribe to the International Voice Technology Review blog, and this article by Carl Robinson identifies a list of existing European and US-based companies whose offerings could be re-packaged by FF for our African conditions on the back of the exchange:

Voice Emotion Analytics Companies
By Carl Robinson

This blog post is a roundup of voice emotion analytics companies. It is the first in a series that aims to provide a good overview of the voice technology landscape as it stands. Through a combination of online searches, industry reports and face-to-face conversations, I have assembled a long list of companies in the voice space, and divided these into categories based on their apparent primary function. The first of these categories is voice emotion analytics.
These are companies that can process an audio file containing human speech, extract its paralinguistic features, interpret these as human emotions, and then provide an analysis report or other service based on this information.

audEERING
https://www.audeering.com

audEERING is an audio analysis company based just outside of Munich, Germany, that specialises in emotional artificial intelligence. Their team are experts in voice emotion analytics, machine learning and signal processing, and many of their founders have PhDs. Since 2012, they have carried out projects for major brands in many industry verticals, including market research, call centres, social robotics, health and many more.
Their product portfolio comprises software systems for automatic emotion and speaker-state recognition from speech signals, and methods for music signal analysis. They offer a range of commercial web APIs, mobile SDKs, and embedded Linux and Windows SDKs.

A very research-oriented company, audEERING are also the developers of openSMILE, an open-source research toolkit for audio feature extraction. It is the most widely used tool for emotion recognition tasks in research and industry, and is considered the state of the art in affective computing for audio. audEERING are also responsible for creating the GeMAPS standard acoustic parameter recommendation, a research project that aimed to identify the most effective audio features for use in emotion recognition tasks. The feature sets defined in GeMAPS are easily imported within openSMILE, which standardises their implementation across research projects.

audEERING produce a number of packaged products too, including:

Audiary – a voice-enabled diary that allows patients with chronic diseases to record the state of their health and log their medical adherence. The sensAI technology it incorporates offers a complete analysis of the user’s emotional state.

CallAIser – call-centre speech analysis software that reports the parameters of telephone conversations, such as duration and relative share of the dialogue, along with the speakers’ mood and the atmosphere of the conversation. This can detect and prevent escalations before they happen, allowing a more experienced call-centre agent to take over and calm the situation down.

sensAI-Music – software that automatically detects tempo, meter, tune and vocals, and calculates the genre of a track as well as its emotional setting. sensAI-Music helps DJs with planning set lists and dealing with large music databases, and allows music tracks to be synchronised with videos, lighting effects and animated avatars.
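To make the feature-extraction idea concrete: toolkits like openSMILE compute low-level acoustic descriptors (the kind standardised in GeMAPS, such as energy- and pitch-related measures) from short frames of audio, and trained classifiers then map those descriptors to emotional states. The sketch below is purely illustrative — it is not openSMILE's implementation, and the single-threshold "arousal" rule stands in for what would really be a trained model — but it shows two classic descriptors, frame-wise RMS energy and zero-crossing rate, computed from raw samples:

```python
import math

def frame_signal(samples, frame_len, hop):
    """Split a waveform into overlapping fixed-length frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def rms_energy(frame):
    """Root-mean-square energy: a crude proxy for vocal intensity."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of sign changes between adjacent samples."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def toy_arousal_label(samples, frame_len=400, hop=200, energy_thresh=0.3):
    """Toy decision rule: high mean frame energy -> 'aroused', else 'calm'.
    (Illustrative only; real systems feed thousands of descriptors
    into trained classifiers.)"""
    frames = frame_signal(samples, frame_len, hop)
    mean_e = sum(rms_energy(f) for f in frames) / len(frames)
    return "aroused" if mean_e > energy_thresh else "calm"

# Synthetic stand-ins for loud vs. quiet speech: 1 s of a 200 Hz tone at 16 kHz
sr = 16000
loud = [0.8 * math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
quiet = [0.05 * math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
print(toy_arousal_label(loud))   # -> aroused
print(toy_arousal_label(quiet))  # -> calm
```

A production system would extract the full GeMAPS parameter set (plus deltas and statistical functionals over frames) and pass it to a trained model rather than a single threshold; the point here is only the shape of the pipeline — frames in, descriptors out, a decision on top.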
“When the interaction is frictionless and seamless, you’re actually more happy with it, you’re less stressed, because it just feels natural.” – Florian Eyben, CTO of audEERING

Beyond Verbal
http://www.beyondverbal.com

Beyond Verbal was founded in 2012 in Tel Aviv, Israel, by Yuval Mor. Their patented voice emotion analytics technology extracts various acoustic features from a speaker’s voice, in real time, giving insights on personal health condition, wellbeing and emotional understanding. The technology does not analyze the linguistic context or content of conversations, nor does it record a speaker’s statements. It detects changes in vocal range that…
SYNAPSE
1ST QUARTER 2019
African Voices need to be Heard