Philip M. Napoli
Professor and Area Chair, Communication & Media Management
Co-Director, Center for Communications
Graduate School of Business, Fordham University

This research was conducted in partnership with, and with the assistance of, The Collaborative Alliance


Plan
ž  Background: Audience Evolution and the Post-Exposure Audience Marketplace
ž  Research Questions and Method
ž  Overview/Assessment of Social TV Analytics Methodologies
ž  Comparative Analysis
ž  Implications


Audience Evolution
ž  Stakeholder Resistance and Negotiation
ž  Transformation of Media Consumption
ž  New Audience Information Systems
ž  Evolved Audience


The Post-Exposure Audience Marketplace
[Diagram: audience dimensions beyond exposure: exposure, interest, appreciation, response, recall]


Research Questions
ž  What are the key methodological issues that arise around social media-based constructions of television audiences?
ž  How do competing representations of social media-constructed television audiences compare to each other and to traditional ratings?
ž  How might the institutionalization of social TV metrics affect programming?


Methodology
ž  Textual analysis
ž  Interviews (21)
ž  Participant observation
—  Industry events/meetings
—  Social TV analytics training sessions (3)
ž  Limited analysis of social TV and traditional ratings data


What Do Social TV Analytics Do?
ž  Program Performance Assessment
—  Quantity of online comments
—  Share of online comments
—  Sentiment
—  Involvement
ž  Content Performance Assessment
—  Plot/characters
ž  Advertisement Performance Assessment
ž  Affinity Tracking
—  Brand ↔ Program
—  Program ↔ Program
ž  Trend Analysis
ž  Audience Analysis
—  Demographics (limited)
—  Influence/reach


How Do They Do It?
ž  Data Gathered from Online Social Media Sources
—  Twitter
—  Facebook (public pages)
—  TV check-in platforms
—  Other online communities
—  Online news media
ž  Analyzed via Language Processing Algorithms
ž  Synchronized with Program Schedule/Content Data
ž  To Produce a Wide Range of Analytical Outputs
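The gather/analyze/synchronize pipeline described above can be sketched in miniature. Everything here is a hypothetical stand-in: the program names, airing windows, and keyword lexicon are invented for illustration, and the real services use proprietary NLP models rather than keyword matching.

```python
from collections import Counter

# Hypothetical airing schedule: program -> (start, end) in minutes since midnight.
SCHEDULE = {"Show A": (1200, 1260), "Show B": (1260, 1320)}

# Toy sentiment lexicon; commercial services use proprietary language models.
POSITIVE = {"love", "great", "amazing"}
NEGATIVE = {"hate", "boring", "awful"}

def classify(text):
    """Crude keyword sentiment: +1 positive, -1 negative, 0 neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

def tally(posts):
    """posts: iterable of (minute, program, text) tuples.

    Returns per-program comment volume and net sentiment, counting only
    posts that fall inside the program's airing window (the
    "synchronized with program schedule" step)."""
    volume, sentiment = Counter(), Counter()
    for minute, program, text in posts:
        start, end = SCHEDULE.get(program, (None, None))
        if start is not None and start <= minute < end:
            volume[program] += 1
            sentiment[program] += classify(text)
    return volume, sentiment

# Made-up posts for demonstration.
posts = [
    (1210, "Show A", "I love this show"),
    (1215, "Show A", "so boring tonight"),
    (1270, "Show B", "great episode"),
]
vol, sent = tally(posts)
print(vol["Show A"], sent["Show A"])  # 2 0
```

Even this toy version surfaces the two core outputs the slide lists: comment volume (quantity/share) and sentiment per program.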


Points of Differentiation
ž  Data sources
ž  Algorithm/search terms
ž  Measurement period
ž  Granularity


Methodological Issues: The Redistribution of Cultural Influence
ž  Layers of possible misrepresentation
—  Access
—  Participation
○  10% of Twitter accounts produce 90% of tweets
○  Services generally don’t include Facebook
—  Overrepresentation of young males
—  Overrepresentation of television enthusiasts
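The participation skew cited above (roughly 10% of accounts producing 90% of tweets) is a simple concentration statistic that can be checked on any sample of per-account tweet counts. The counts below are invented to mirror that 10/90 pattern, not drawn from the study's data:

```python
def share_from_top(counts, top_fraction=0.1):
    """Share of total tweets produced by the most active `top_fraction`
    of accounts (counts: one tweet total per account)."""
    ordered = sorted(counts, reverse=True)
    k = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# Hypothetical per-account tweet counts: one heavy user among ten accounts.
counts = [900, 20, 15, 12, 10, 10, 10, 8, 8, 7]
print(share_from_top(counts))  # 0.9
```

When the top 10% of accounts contribute a share this large, program-level tallies largely reflect the enthusiasts, which is exactly the misrepresentation risk the slide flags.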


Methodological Issues: “Black Box” Audiences
ž  “They never tell you what’s in the black box. There’s no transparency.”
ž  “People who are qualified to come up with solutions are coming from a completely different direction from traditional media research people. There’s no overlap in terms of skill sets, pedigree, etc.”


Methodological Issues: Compatibility with Established Practices
ž  Demographics
ž  Dayparts
ž  Local markets


A Crowded Marketplace


% Overlap in Top 25 Programs

                      Bluefin   Trendrr.tv   General Sentiment
Trendrr.tv              40%
General Sentiment       40%        32%
Nielsen HH              32%        36%          28%
Nielsen 12-34           40%        16%          32%

Bluefin-Trendrr-General Sentiment Comparison: Top 25 Programs
[Line chart: % overlap (0-100%) by program ranking (1-25) for Bluefin-Trendrr, GS-Bluefin, and GS-Trendrr]


Bluefin-Nielsen Comparison: Top 25 Programs
[Line chart: % overlap (0-100%) by program ranking (1-25) for Bluefin-Nielsen HH and Bluefin-Nielsen 12-34]


Trendrr-Nielsen Comparison: Top 25 Programs
[Line chart: % overlap (0-90%) by program ranking (1-25) for Trendrr-Nielsen HH and Trendrr-Nielsen 12-34]


General Sentiment-Nielsen Comparison: Top 25 Programs
[Line chart: % overlap (0-100%) by program ranking (1-25) for GS-Nielsen HH and GS-Nielsen 12-34]


Bluefin-Trendrr-General Sentiment Comparison: Top 25 Networks
[Line chart: % overlap (0-100%) by network ranking (1-25) for Bluefin-Trendrr, GS-Bluefin, and GS-Trendrr]


Implications: More or Less Diversity of Content?
ž  + Greater diversity of success criteria
—  Away from the “tyranny of 18-34”
—  Beyond exposure
○  Conversation/appreciation
ž  - Programming primarily to/for those active on social media?


Thank You! For more, visit: http://audienceevolution.wordpress.com

