From hype to reality



Artificial intelligence in market research: hype, trend, overrated or game changer? It's time to experiment and find out. Meet Galvin, the intelligent Insight Activation Studio assistant, and Minority Report in research communities. These two successful case studies prove that we no longer have to talk about what AI could mean for the market research industry: we can actually show how it works and demonstrate real results. Build, measure, learn … share!

INSITES CONSULTING



To whatever conference we go, whichever industry magazine we read or business podcast we listen to, the message is the same everywhere: Artificial Intelligence (AI) is the next big thing. Artificial intelligence will disrupt every industry! "But is this actually the case for the MR industry? Or is this perhaps hype?" But what is hype in fact? There are two important things to say about hype. A first characteristic of hype is that everybody uses different words to say the same thing. This is the case for AI! We have heard all the buzzwords (big data, pattern recognition, predictive analytics, machine learning, deep learning, etc.), but in a certain way we are all saying the same thing. Bernard Marr, a leading data expert, can help us clarify the concept. In a recent Forbes article (Marr 2016), he describes Artificial Intelligence as an umbrella term: the broader concept of machines being able to carry out tasks in a way that we would consider "smart". A second characteristic of hype is that everybody talks about it, but nobody, or only a few individuals, walks the talk. This is also the case for AI! In the 2016 GRIT report (Greenbook 2016), approximately 77% of industry professionals say that AI is an interesting trend or that it is still too early to tell what it can actually mean for us, while 23% believe that AI is a game changer. These numbers suggest that few in the business have actually implemented or even considered it. "But does this hype help the MR industry any further without actual results?" Since we have been hyping AI for too many years, we need to stop talking. We need to start experimenting to see what AI actually means. We need to implement AI experiments within business environments and get our hands dirty. We need to go from hype to reality to move the industry forward. With this paper, we aim to increase the MR industry's understanding of how to adopt AI within MR projects, by presenting a framework for how companies can adopt AI within their business projects and by discussing two case studies that prove the added value. This paper is the result of a top-notch collaboration between academia (IÉSEG School of Management) and a research agency (InSites Consulting).


© GRIT 2016



HOW CAN WE TACKLE THIS INDUSTRY CHALLENGE? To explore the added value of AI for MR projects, we follow the framework popularized by The Lean Startup of Eric Ries (Ries 2011), whose ideas have been worshipped within startup contexts. To effectively explore and introduce innovation, Eric Ries states that one needs to iteratively execute the following cycle of actions: 1) build, 2) measure, 3) learn. One starts by building a Minimum Viable Product that addresses the most basic product assumptions. One tests it with real consumers and measures the results. Using the feedback, one identifies the strong points and weaknesses and learns from the mistakes. One then follows the cycle again, starting from a better product into which the feedback of the previous iteration has been integrated. Whenever the innovation meets the expectations or the criteria that have been agreed upon in advance, one can stop the loop and go from experimentation to real business adoption. To introduce AI experimentation within the MR industry, we have adapted the framework and introduced a fourth step, i.e. share. As AI has both many advocates and opponents, it is crucial to share tips and


results, both internally within the company and externally with industry peers. Internal sharing is important to make other employees aware of the project and allow them to be curious or get involved. When the AI project is finished, adoption will be easier, as employee buy-in will be higher than if AI experimentation had been done in secret. External sharing is important to move the industry forward. Only by staying at the forefront of AI innovation will MR agencies be able to offer better services to clients, while clients can stay up to date. Furthermore, it is important to mention that one can share the results of the loop in every iteration of the project. Only then can external knowledge and feedback be incorporated into the experimentation loop, increasing the chances of better outcomes. Figure 1 presents this new framework and structure for AI experimentation.

Fig. 1. The adapted AI experimentation framework: build → test → learn → share.



AI: FROM HYPE TO REALITY - CASE STUDIES To explore the potential benefit of AI for the MR industry, we investigate AI within two future-proof MR environments that are important for two phases within the insight value chain, i.e. Market Research Online Communities (insight generation) and Insight Activation Environments (insight activation).

We adopt AI within these environments to take on critical challenges:

• The impact threat: as companies increasingly work towards the generation of multiple consumer insights, how can these insights effectively be activated within the company?

• The participation threat: as research communities increasingly evolve towards long-term projects, how can we effectively sustain these communities and increase the long-term value of the participant?

This paper presents two case studies that are the result of tackling the impact threat within insight activation environments and the participation threat within market research online communities. As AI is often viewed as fiction, Hollywood has already explored the concept in many movies, and for our case studies many analogies can be made with two movies, "Her" and "Minority Report". In the movie "Her", a lonely writer develops an unlikely relationship with an intelligent operating system designed to meet his every need, while "Minority Report" describes a future in which a special police unit is able to arrest murderers before they commit their crimes by relying on psychics known as precogs.







GALVIN, THE CONSUMER IN YOUR POCKET The market research industry is built upon consumer insights. Companies are looking for consumer insights that they can use for any purpose, e.g. marketing campaigns, new product development, product design, rebranding, etc. Today, many methodologies exist to obtain and generate these consumer insights, both online (eye tracking, online communities, social media listening, etc.) and offline (focus groups, in-store observation, etc.).




As a result, companies have lots of consumer insights at their disposal. However, do they effectively monetize this flood of information? In particular, when a customer-centric decision needs to be made, can all consumer insights available to the company be found, and when they are actually found, do they matter? A recent study shows that there is still an important problem with activating consumer insights within a company. The Market Research Impact study (Schillewaert and Pallini 2014) shows that 45% of marketers believe MR leads to change in attitudes and decisions, while only 50% of MR projects lead to change. Hence, the MR industry is facing a major challenge due to this impact threat. The problem is two-fold. First, you may know that certain consumer insights exist, but it takes too long to find them, as they are hidden in all the reports and slide decks (low efficiency). Second, you may identify an insight, but it is not the right insight for the intended purpose (low effectiveness). Hence, companies are under-utilizing their consumer insights. New solutions that deal with this impact gap, such as insight activation environments, increasingly emerge, but the challenge can also be tackled in another way: the MR industry can use chatbots.


BUILD Galvin is a chatbot and the personal assistant for market research. It provides marketers with the insights they are looking for. Galvin allows companies to have direct access to all their consumer research. It gives marketers the right answer anywhere, anytime and inspires them with new insights. Furthermore, by using predetermined consumer segments, Galvin is able to impersonate the consumer, giving users the chance to have a simulated 'chat' with 'their consumer'. "How to use Galvin?" We identified three ways for Galvin to be the perfect market research assistant. Figure 2 illustrates these with different cases.

Figure 2. The use cases of Galvin




1. Meet the consumer As many employees want to step into the shoes of their consumers to better understand them and improve decision-making outcomes, Galvin allows employees to have a simulated chat with their consumer. Using predetermined consumer segments, Galvin impersonates different consumer personas. By choosing a persona and asking it to tell a little bit more about its life, or by asking specific questions, employees can almost feel like they are having a conversation with a real consumer.

2. Galvin as a coach As employees always look for inspiration on new topics, Galvin can give them a five-minute update with daily fresh inspiration. As a consequence, Galvin coaches employees to create a consumer-connected mindset and start the consumer habit.

3. Personal assistant Galvin helps employees whenever they are in a meeting and/or in urgent need of a specific insight. Galvin provides employees with the right insights anywhere, anytime. They can just ask what they need, and they will be informed of all the insights that are available on the topic.


"How does it work?" Galvin is connected to an Insight Activation Studio, an online platform where all sustainable consumer insights and research are collected and shared. By starting a conversation, Galvin can be asked to bring up specific consumer insights that are relevant for the query. Galvin then searches through the Studio, creates connections between the insights and gives you the answers you need: not by sending you the full PowerPoint document, but by sharing them in a short, sharp and visual way. Galvin uses LUIS AI to understand the language used in the queries and Cortana to provide smart personal assistance. Figure 3 displays the technology stack of Galvin.
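The query flow described above (understand the query, search the insight store, return a short answer) can be sketched roughly as follows. The function names and the toy insight store are hypothetical stand-ins: the real system relies on LUIS for language understanding and on the Insight Activation Studio as back end.

```python
# Minimal sketch of Galvin's query flow. All names here are hypothetical
# placeholders for the actual LUIS / Insight Activation Studio integrations.

# A toy insight store: topic keywords mapped to short, shareable insights.
INSIGHT_STORE = {
    "snacking": "Consumers snack to bridge energy dips, not out of hunger.",
    "packaging": "Resealable packaging drives repeat purchase among families.",
}

def recognize_intent(query: str) -> str:
    """Stand-in for LUIS: map free text to a known topic keyword."""
    for topic in INSIGHT_STORE:
        if topic in query.lower():
            return topic
    return "unknown"

def answer(query: str) -> str:
    """Return a short, sharp insight instead of a full report."""
    topic = recognize_intent(query)
    if topic == "unknown":
        return "I couldn't find an insight on that topic yet."
    return INSIGHT_STORE[topic]

print(answer("What do we know about snacking occasions?"))
```

The point of the sketch is the division of work: intent recognition in front, the insight store behind it, and a short answer instead of a slide deck.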



"Maximize the impact of your insights!" With Galvin, we enter a new era of using consumer research within a company. First of all, we kill the PowerPoint reports. Instead of sharing these heavy documents via e-mail, everyone inside the company has instant access to all consumer research via their mobile. But this type of interaction and communication is more than just sharing. Through Galvin, we can bring the insights to life in a new, conversational way, giving employees the chance to really get to know the consumer. All of this helps to effectively activate the consumer insights within the company, resulting in a stronger consumer connection in the hearts and minds of the employees and leading to better market decisions and a higher ROI.


TEST We implemented the chatbot to allow users to effectively activate consumer insights using the three use cases. We ran three experiments with Galvin and saw that it really eases the insight activation process. We evaluated the benefit of the chatbot along two dimensions:

• Adoption: companies have been adopting Galvin for a simple reason: using Galvin is similar to natural human behavior. It's just chatting; you chat with the chatbot as you would with a human.

• Satisfaction: although it is not perfect yet, clients really use it, as it is far better than trying to find the one insight in multiple PowerPoint decks of 100 slides.


The value of this new technological innovation can be evaluated according to the framework of Mooney et al. (Mooney, Gurbaxani and Kraemer 1996), which investigates the impact on business processes along three dimensions: process changes related to the use of ICT as a means for directly substituting labor (automational), facilitating the use of information (informational) and supporting process innovation (transformational). We evaluate the impact of Galvin as follows:

• Automational: because it is a digital chatbot that you can access anytime and that delivers what you need, it saves you time, as you no longer have to search through PowerPoint reports or e-mails.

• Informational: it is mobile, readily available in your pocket, so you have access to insights anytime, anywhere.

• Transformational: because of the low barriers to using it and the focus on insights and the consumer, it improves your consumer-centric decision making.




LEARN During the process, we gathered three lessons that are important to share: 1. Logic: it is straightforward to implement functionality that responds to core human behavior, but it is more challenging to define and allow small talk, so that the chatbot is perceived as being able to deal with any question. The chatbot must be smart, but also human. 2. Relevance: you first need to map out and define all the use cases that you will respond to, so that what you respond to is relevant and not useless. The three core functions of the chatbot are the ones that are relevant for people. Instead of creating a chatbot that can do "everything", we only looked at those use cases that are important to the users. It is sometimes difficult for users to know what to do with new technology, but focusing on a limited number of use cases helps users get along with the chatbot. 3. Adoption: by associating a personality with your chatbot, in our case Galvin, you humanize it, which increases the chances that users will adopt and use it. It is in fact a machine, but the user may not experience it that way.






A BACK-END COMMUNITY MODERATION SUPPORT SYSTEM If you have been active in the market research industry over the past ten years, you have probably experienced the important evolution of Market Research Online Communities (MROCs), or as we like to call them, Consumer Consulting Boards. By going online, we could easily reach many consumers and switch between geographical areas. Going mobile allowed us to better immerse ourselves in consumers' daily lives. And going structural resulted in ongoing conversations with anyone, anytime and anywhere. Communities helped market research step up to a higher level and are enormously popular in the industry.




The 2016 GRIT Report indicates that 58% of clients and 59% of suppliers adopt communities within their business (Greenbook 2016). This popularity will only continue to grow in the future. Industry watcher Ray Poynter expects that, whereas online communities only took up 5% of the market research budget in 2016, this will grow to 70% by 2026 (Poynter 2016). "But are we ready to guarantee this success in the future? Are we well prepared?" It is important to mention that we can rely on industry experience and best practices to pursue community success. For example, we already know how to use different recruitment platforms to identify those members who are both interested in and interesting for our communities. We have expert knowledge on how to manage and moderate community dynamics to achieve favorable conditions for market research. Moreover, we have done extensive research on gamification and engagement techniques to encourage participation. "But can we use the same techniques and follow similar successful past practices in the future as well?" The answer is "maybe not", mainly due to two challenges, which will only become more important as community adoption and volume increase in the future. First, MROCs are in essence data-loaded environments that accumulate additional data every day. This big-data characteristic puts pressure on the moderator's resources to deal with and analyze community content. Second, member disengagement


is a fundamental problem for healthy research communities. When members participate insufficiently in the topics posted in a community (low quantity), or when what they say does not contain anything valuable (low quality), the moderator may be unable to derive useful consumer insights from the community. Additionally, as more communities are organized, we may all end up drawing from the same pool of participants, putting pressure on members' motivation to participate. Hence, the viability of these communities is threatened by a participation threat. It is therefore important to explore new approaches to increase the participant's long-term value and to effectively deal with the problem of member disengagement.


BUILD Proactive community management is a moderation practice that anticipates predicted member disengagement and takes proactive actions to prevent disengagement behavior from negatively impacting the community. It leverages the data-rich environment of the community and relies on technological innovations to support the moderator in managing the community more effectively. Proactive community management can be considered the real-life realization of the movie Minority Report in research communities and consists of a three-step approach: detect, predict and prevent.



1. Detect In research communities, moderators are usually on their own and have to rely on their own efforts to manage the community and combat member disengagement. But this is rather crazy. On the one hand, communities are data-loaded environments, yet we use this data only in a limited way, to derive consumer insights from it. On the other hand, already available technologies such as text mining, Natural Language Processing and behavioral analysis allow us to exploit this data effectively and get more out of it. So why not adopt these techniques and apply them to community data to identify community insights that can support the moderator in managing the community?


We can use text mining and behavioral analysis to analyze community behavior and detect member disengagement. Member disengagement can be measured along quantity and quality dimensions, respectively by calculating the percentage of community topics a member actively participates in and the number of cognitive words a member uses per post. Cognitive words such as 'because' and 'think' reflect the effort that has been put into a post and are a reliable indicator of a post's quality. Now, how can the detection of member disengagement be made practical for the moderator in a community context? By using a cut-off value to distinguish between high and low activation levels of participation and combining the quantity and quality dimensions, we obtain a four-quadrant framework to classify community members and identify four different community behavior profiles, i.e. the community stars, the high-potentials, the passivists and the annoyers. Figure 4 visualizes this four-quadrant framework.

Figure 4. Moderation framework for the participant profile: quality (low/high) × quantity (low/high) quadrants containing the high-potentials, community stars, passivists and annoyers.
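The quadrant logic above can be sketched in a few lines. The cut-off values below are illustrative assumptions; the paper does not report the actual thresholds used.

```python
# Sketch of the four-quadrant classification of community members.
# The cut-off values are illustrative assumptions, not the real thresholds.
QUANTITY_CUTOFF = 0.5   # share of community topics actively participated in
QUALITY_CUTOFF = 3.0    # cognitive words ("because", "think", ...) per post

def classify(quantity: float, quality: float) -> str:
    """Map a member's quantity/quality scores to a behavior profile."""
    high_quantity = quantity >= QUANTITY_CUTOFF
    high_quality = quality >= QUALITY_CUTOFF
    if high_quantity and high_quality:
        return "community star"
    if high_quality:            # high quality, low quantity
        return "high-potential"
    if high_quantity:           # low quality, high quantity
        return "annoyer"
    return "passivist"          # low on both dimensions

print(classify(quantity=0.8, quality=5.2))
```

In practice the two inputs would come from the text-mining and behavioral-analysis pipeline rather than being passed in by hand.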

2. Predict But why only detect and look at the past, when it is possible to consider the future? Why only detect, when we can predict? We can also do this in real life, as has been proven by many successful applications ranging from Facebook to the Obama campaign. Predictive analytics and Artificial Intelligence allow the prediction of future events. We can adopt this in a community context by creating prediction models to predict member disengagement, i.e. low-quantity and low-quality behavior. The output is a probability that reflects the risk that a member will demonstrate disengagement behavior in the future.


How can we make predictions? We leverage historical data and use machine learning techniques to identify patterns in past data that explain future behavior. Intuitively, we try to find habits that explain future disengagement behavior. Human behavior is very predictable, and this also goes for the community context. We can then adopt the output of the two prediction models in our four-quadrant framework to gain insight into the future behavior of each participant. The moderator can use the prediction models, historical data and this framework to identify what each participant's future profile will be. Figure 5 visualizes this framework for identifying the future profile of the participant.

Figure 5. Moderation framework for the participant future profile: the quality × quantity quadrants are populated with each member's predicted disengagement probabilities.
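As a rough sketch of this prediction step, one could train a model on historical behavior and read the disengagement risk from its probability output. The two features and the toy data below are illustrative, and scikit-learn's logistic regression stands in for the actual models.

```python
# Sketch of the prediction step: fit a model on historical community behavior
# and output a disengagement probability. Features and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy historical data: [past participation share, cognitive words per post]
X = np.array([[0.9, 5.1], [0.8, 4.7], [0.2, 1.0], [0.1, 0.8],
              [0.7, 3.9], [0.3, 1.5], [0.6, 4.2], [0.2, 0.9]])
# 1 = member disengaged in the following period, 0 = stayed engaged
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The output is a probability: the risk that this member will disengage.
risk = model.predict_proba([[0.15, 1.1]])[0, 1]
print(f"predicted disengagement risk: {risk:.0%}")
```

Two such models, one for the quantity dimension and one for the quality dimension, would feed the future-profile framework of Figure 5.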


3. Prevent Now that we can detect and predict member disengagement, why not act on our predictions, so that we can anticipate expected member disengagement and prevent negative community impact? In particular, we can follow a three-stage approach, in which we combine the strengths of the first two steps with those of the moderator in the third step. Figure 6 summarizes this proactive moderation strategy.

• Identify: the prediction model predicts each member's future profile; the framework allows the classification of each member into one of the four quadrants to identify their future participant behavior.

• Contextualize: historical community data and CRM info can be retrieved to provide the right context for the moderator. Moreover, actions that have worked successfully in the past can even be recommended to proactively correct disengagement behavior.

• Finalize: the moderator interprets all the information from the previous steps and uses intuition and creativity to finalize the prevention campaign by deciding which action needs to be taken.

Figure 6. Proactive community management framework: 1. Identify (prediction model + framework; machine), 2. Contextualize (CRM info + prevention suggestion; machine), 3. Finalize (interpretation + action; human).
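The three-stage flow can be sketched as a tiny pipeline. Everything here (the risk thresholds, the CRM store, the suggestion rule) is a hypothetical illustration of the division of labor: the first two steps are automated, the last one is explicitly left to the moderator.

```python
# Hypothetical sketch of the identify -> contextualize -> finalize pipeline.
# Steps 1 and 2 are machine work; step 3 is left to the human moderator.

def identify(predicted):
    """Step 1 (machine): map predicted risks to a future quadrant."""
    low_quantity = predicted["quantity_risk"] >= 0.5
    low_quality = predicted["quality_risk"] >= 0.5
    if low_quantity and low_quality:
        return "passivist"
    if low_quantity:
        return "high-potential"   # quality stays high, quantity drops
    if low_quality:
        return "annoyer"          # quantity stays high, quality drops
    return "community star"

def contextualize(member_id, crm):
    """Step 2 (machine): attach history and a suggested prevention action."""
    return {"history": crm.get(member_id, []),
            "suggestion": "send motivational email"}

def finalize(profile, context, moderator_decision):
    """Step 3 (human): the moderator reviews and picks the final action."""
    return {"profile": profile, **context, "action": moderator_decision}

crm = {"m42": ["joined 2015", "12 posts last month"]}
plan = finalize(identify({"quantity_risk": 0.2, "quality_risk": 0.7}),
                contextualize("m42", crm),
                moderator_decision="personal check-in message")
print(plan["profile"], "->", plan["action"])
```

The design choice the sketch illustrates is that the machine never sends anything on its own; it only narrows the moderator's attention to the right members with the right context.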




You may wonder which corrective actions we should take. To answer that question, we can rely on industry experience with engagement actions. The only differences are that, instead of using a one-size-fits-all approach for all members and using it reactively once negative impact has already been recognized, in our approach we personalize the prevention action for the individual member and use it proactively to prevent destructive behavior from impacting the community.

TEST We followed the three steps and implemented them in two research projects to evaluate the prediction and prevention steps.

In a first project, we explored whether we can make these predictions. We explored 150,000 posts from three years of data, resulting from 10 communities for seven brands; we applied text mining and behavioral analysis techniques to construct about seven million data points and unravel valuable community insights. We then used these variables to detect member disengagement and identify relevant predictors. We used logistic regression as the prediction model. This set-up allows us to evaluate the detection ability of the model and make the comparison with the moderator:

• Detection ability: the results show that we have reliable prediction models. Evaluating our models on unseen data allows us to assess their quality. The accuracy is rather good: for low quantity, we make correct predictions in 78% of the cases, while also making correct classifications for low quality 78% of the time. To put these numbers in perspective: randomly deciding between high and low activation levels corresponds to a 50% prediction accuracy, so our models already perform clearly better than that.

• Model versus moderator: as mentioned above, the model performs better than random choice, but the moderator's expert knowledge may rival the model's ability to make accurate predictions. However, this is the only argument in favor of the moderator. When you want to make predictions for all community members, quickly and at scale, the moderator can never be as effective as the model. Hence, in a long-term community context, it is wise to let the model make the initial predictions in a first step and to let the moderator step in during a second step, focusing on the members who are predicted to demonstrate unconstructive community behavior.
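The evaluation logic, accuracy on held-out data compared against the 50% random baseline, can be sketched as follows, using made-up held-out labels rather than the study's actual data:

```python
# Sketch of the evaluation: accuracy of classifications on unseen data,
# to be compared against a 50% random baseline. Labels are made up.

def accuracy(predictions, actuals):
    """Share of correct classifications on held-out data."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

# Toy held-out labels: 1 = low-quantity disengagement, 0 = engaged
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]

acc = accuracy(predicted, actual)
print(f"model accuracy: {acc:.0%} vs. 50% random baseline")
```

With these toy labels, 8 of 10 classifications match, i.e. 80% accuracy, the same kind of comparison against the 50% baseline that the study reports.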

In a second project, using a field test in four communities, we explored how we can prevent member disengagement. We relied on an e-mail campaign that used three different e-mails, each appealing to a different type of benefit of participating in the community: functional ("I learn new things"), hedonic ("I like the experience") and social ("I make new friends"). Using members' predicted future profiles, directly determined by the two prediction models for the quantity and quality dimensions, we evaluated the impact of sending out motivational e-mails to proactively anticipate member disengagement. This set-up allows us to evaluate the prevention capability:




• Prevention capability: we saw that we can actually prevent member disengagement. The field test shows interesting results, which Figure 7 summarizes. First, we see that we can convert predicted high-potentials into community stars by appealing to their 'functional' benefit of participating in the community. Second, we identified that sending out motivational e-mails to predicted annoyers has a negative impact, as it increases their participation quantity in the community. Hence, we need to avoid a motivational campaign for this type of participant; other engagement techniques need to be explored to see how we can activate these members. Third, there is no impact on predicted passivists. Thus, they just need a break; after a while, we can aim to reactivate them to continue participating in the community.

Figure 7. Proactive community management field test results (quality × quantity quadrants). High-potentials: increased quantity impact of the mail with 'functional' participation motivation. Go! Passivists: no community impact. Give them a break! Annoyers: increased quantity impact of the email campaign. Avoid motivation; human moderation needed!
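Summarized as moderation logic, the field test findings might be sketched like this; the profile names and actions follow the text, but the mapping itself is an illustration, not production code:

```python
# Illustrative mapping from predicted future profile to prevention action,
# following the field test findings (functional mail works for
# high-potentials, emails backfire for annoyers, passivists need a break).

MOTIVATIONAL_EMAILS = {
    "functional": "I learn new things",
    "hedonic": "I like the experience",
    "social": "I make new friends",
}

def prevention_action(predicted_profile: str) -> str:
    """Personalized, proactive action per predicted future profile."""
    if predicted_profile == "high-potential":
        # Functional motivation converted high-potentials to community stars.
        return f"email: {MOTIVATIONAL_EMAILS['functional']}"
    if predicted_profile == "annoyer":
        # Emails backfired for annoyers: more low-quality posts.
        return "human moderation"
    if predicted_profile == "passivist":
        # No measurable impact: give them a break, reactivate later.
        return "no action (for now)"
    return "no action"  # community stars: already healthy

print(prevention_action("high-potential"))
```

This is exactly the "personalize and act proactively" idea: the action depends on the predicted profile instead of one campaign for everyone.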


The impact can be evaluated according to the framework of Mooney et al. (Mooney, Gurbaxani and Kraemer 1996):

• Automational: time and money are saved because the moderator is automatically alerted to expected destructive behavior. As a consequence, the moderator can spend more time on the research task at hand and focus on the analyses, instead of trying to do the identification exercise herself.

• Informational: the prediction models reveal subtleties and community patterns that the human eye may not see. Additionally, a member may be healthy today, but not in the future! Our prediction models detect this. The moderator may be able to predict some behavior, but not all.

• Transformational: we move in time, something we could not do before. The prediction models allow moderators to anticipate the expected future behavior of participants. This means we can go from reactive community management, when the damage is already done, to proactive, preventive community management.




LEARN During the process, we gathered three lessons that are important to share: 1. Adoption: to make sure that the tool will be adopted within the business, it is better to give up some predictive accuracy to make the model more believable and actionable for the user. Moderators will only use something that they understand and believe works. So it is better to use a model that works slightly less well but is more believable, instead of a black-box model that works very well. In our case, we opted for the popular logistic regression model instead of black-box alternatives, which revealed interesting insights into the predictor variables that can be communicated to the moderators. 2. Database: you need both sufficient volume and qualitative data points to identify useful predictors. It is therefore important to store data: the more, the better. 3. Insights: by using a white-box prediction model that gives insight into the predictors, subtleties and community insights the moderator was not aware of can be revealed. For example, we saw that a moderator's narcissistic writing style signals future member disengagement. It is therefore important for the moderator to talk especially about others, and not about himself.


CONCLUSION THE FUTURE OF AI CONSISTS OF HUMAN-MACHINE COLLABORATION This study has shown that AI should no longer be viewed as hype, but that it can actually be seen as reality. We have shown, through two case studies, that AI can be used to take on critical challenges. To finalize, we would like to stress the important common denominator in the case studies, which will also be important for future AI projects: we believe that the future of AI involves heavy human-machine collaboration. We need to evaluate the strengths and weaknesses of both humans and machines in order to allocate each task to the actor best suited to execute it. Only by embracing AI can we as humans move to the next level of augmented human intelligence.




Steven Debaere PhD Candidate at IÉSEG School of Management, France Steven Debaere is a PhD candidate in marketing analytics at IÉSEG School of Management (LEM-CNRS) of the Catholic University of Lille in France. His ongoing research focuses on the exploitation of social media data to improve a company's product strategy. Predictive modeling methodology is the central topic within his research projects. Steven holds a master's degree in Computer Science Engineering from Ghent University and a master's degree in General Management from Vlerick Business School, both located in Belgium.

@steven_debaere s.debaere@ieseg.fr

Tom De Ruyck Managing Partner Tom is Managing Partner and global expert in consumer & employee collaboration, supporting InSites Consulting's efforts to make companies more consumer-connected. He loves leading in-depth workshops and chairing events, and has given more than 500 speeches all around the world. Next to that, he is Adjunct Professor at IÉSEG School of Management.

@tomderuyck Tom@insites-consulting.com


Kristof Coussement, dr. Professor at IÉSEG School of Management, France Kristof teaches several marketing-related courses, including Strategic Marketing Research, Customer Relationship Management and Database Marketing. He also publishes in international peer-reviewed journals such as Computational Statistics & Data Analysis, Decision Support Systems, European Journal of Operational Research, Information & Management and Expert Systems with Applications, among others. Moreover, his work has been presented at various conferences around the world.

@kcoussement K.Coussement@ieseg.fr


REFERENCES
Greenbook. 2016. "GRIT Report." Q3-Q4. https://www.greenbook.org/grit.
Marr, Bernard. 2016. "What Is The Difference Between Artificial Intelligence And Machine Learning?" Forbes, December. https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/.
Mooney, John G., Vijay Gurbaxani, and Kenneth L. Kraemer. 1996. "A Process Oriented Framework for Assessing the Business Value of Information Technology." ACM SIGMIS Database 27 (2): 68-81.
Poynter, Ray. 2016. "Why Are Communities Taking Over The Insights World?" December 3. https://www.linkedin.com/pulse/why-communities-taking-over-insights-world-ray-poynter.
Ries, Eric. 2011. The Lean Startup. Penguin Books Limited.
Schillewaert, Niels, and Katia Pallini. 2014. "What Do Clients Think about the Impact of Market Research?" November 25. http://www.insites-consulting.com/what-do-clients-think-about-the-impact-of-market-research/.



Marketing@insites-consulting.com @insites


To whatever conference we go, whichever industry magazine we read or business podcast we listen to, the message is the same everywhere: Artificial Intelligence (AI) is the next big thing. Artificial intelligence will disrupt every industry! But is this actually the case for the market research industry? Is AI hype, a trend, just overrated or a game changer? It's time to experiment and find out. Based on two successful case studies, this paper aims to increase the MR industry's understanding of how to adopt AI within MR projects, by presenting a framework for how companies can adopt AI within their business projects and by discussing two case studies that prove the added value. ABOUT THE AUTHORS By Steven Debaere (PhD candidate at IÉSEG School of Management), Tom De Ruyck (Managing Partner at InSites Consulting) and dr. Kristof Coussement (Professor at IÉSEG School of Management).

ABOUT INSITES CONSULTING From the start of InSites Consulting in 1997 until today, there has been only one constant: we are continuously pushing the boundaries of marketing research. With a team of academic visionaries, passionate marketers and research innovators, we empower people to create the future of brands. As one of the top 5 most innovative market research agencies in the world (GRIT), we help our clients connect with consumers all over the world.

www.insites-consulting.com