
Nine Key Ways Media Impact Evaluations Differ From Research Projects … And Why We Care

11/15/2017

Erica Watson-Currie, PhD: November 2017

At the Media Impact Project, our mission is to understand the effects of media on viewers. We also strive to apply what we learn to projects that serve the social good, and to be a thought leader on research issues in our field. This means we assume two distinctly different roles: the evaluator's "critical friend" and the researcher's impartial observer.

By evaluating the work of individual documentarians, journalists, and other media makers, we help find ways to improve engagement opportunities with audiences on a micro level. In studying the general effects of media on society, we contribute to the greater body of research on a topic that touches us all. These are important distinctions to make: evaluations of specific programs or films provide us with information about the whats (audience demographics, how audiences responded, what actions they took, and whether their behaviors changed), while research on that same show can help us understand the whys of media impact (what was it about the program that sparked the response, spurred the action, or shifted behaviors?).

Myriad scholars have weighed in on the distinctions between evaluation and research. Michael Scriven's oft-referenced explanation of the differences between the two disciplines posits that although both practices apply social science tools to conduct empirical investigations and analyze data, evaluators do so to assess the value of what is being examined, with an eye to whether predetermined standards are met, while researchers collect data to test hypotheses and reach conclusions based on "factual results." Into this fray we contribute our own nine distinctions as they relate to media evaluation projects and our overarching research program here at the USC Annenberg Norman Lear Center's Media Impact Project.

Nine Key Differences Between Evaluation and Research

1.  Value: Evaluation focuses on the effectiveness and/or value of a program, message campaign, or other communications; Research strives to be value-free or at least value-neutral in pursuit of increasing knowledge.

2.  Role: Evaluators work with stakeholders to understand a program's objectives and goals, and develop agreements on the relevant (and obtainable) "Key Performance Indicators" that constitute evidence these are being achieved. Researchers develop an initial question and design their study (e.g., intervention, experiment), deciding what variables will be tested, on whom, under what conditions, and over what period of time.

3.  Application of Critical Thinking Skills: Evaluators engage as a "critical friend" to program leaders, helping them understand analyses, determine effective changes, and refine data collection as the program evolves (a posteriori/ad hoc). Researchers engage in critical thinking at the outset of a study to implement procedures that prevent them from biasing data collection or the interpretation of findings (a priori).

4.  Use and Timing of Operationalization: Evaluators work with project leaders to operationalize terms and agree upon methods at the outset of a project; however, these may shift, expand, or evolve along with the project. Thus, the process of evaluation is responsive, incorporating what is learned along the way. Researchers operationalize variables and set methods and procedures at the outset, which are to be followed until the project is completed.

5.  Role in Avoiding Potential Pitfalls: External evaluators may be better positioned to recognize barriers and threats to a program's success, as well as unintended effects. Thus, evaluators often play a role in mediating collaborations between program leaders and key stakeholders to increase effective participation and encourage the development of effective procedures. Researchers strive to be mindful of the possibility that confounding variables may affect their results, in order to exclude or control for these at the outset, or to statistically eliminate their effects in analysis.
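
To make this concrete for readers who work with quantitative data, here is a minimal sketch in Python (using numpy and statsmodels) of what "statistically eliminating" a confound's effects in analysis can look like. The scenario, variable names, and effect sizes below are invented for illustration; they are not drawn from any MIP study.

    # Hypothetical sketch: controlling for a confound with regression.
    # All data here are simulated for illustration only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 500

    # Invented confound: prior interest in the topic drives both
    # watching the program and taking action afterward.
    prior_interest = rng.normal(size=n)
    watched_program = (prior_interest + rng.normal(size=n) > 0).astype(float)
    took_action = 0.5 * prior_interest + 0.2 * watched_program + rng.normal(size=n)

    # Naive model: exposure only, so the estimate absorbs the confound.
    naive = sm.OLS(took_action, sm.add_constant(watched_program)).fit()

    # Adjusted model: include the confound as a covariate.
    X = sm.add_constant(np.column_stack([watched_program, prior_interest]))
    adjusted = sm.OLS(took_action, X).fit()

    print("naive exposure coefficient:   ", round(naive.params[1], 3))
    print("adjusted exposure coefficient:", round(adjusted.params[1], 3))

In this toy example, the naive model overstates the program's effect because prior interest drives both viewing and action; adding the confound as a covariate recovers an estimate close to the simulated true effect of 0.2.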

6.  Review of Academic Literature: In Evaluation, the purposes of a literature review vary depending upon the stage of the program:
  • Front end and developmental - the evaluator may review literature on behalf of a non-academic/applied program leader to suggest best practices for design
  • Formative - review literature in light of findings to investigate probable causes of initial findings, in order to make recommendations for modifying procedures, adapting materials, and improving effectiveness
  • Summative - search literature to make sense of unintended effects of the program and unanticipated findings
In Research, by contrast, it is customary to review the relevant academic literature primarily when proposing the study, in order to formulate hypotheses and ways of testing them.

7.  Purposes and Procedures: Evaluations are conducted to discover strategies and tactics to improve a program. Thus, evaluation reports are provided at regular intervals as part of an ongoing process, in order to encourage reflection and stimulate discussion among project leaders and key stakeholders, and to help discover and implement effective adjustments to materials and procedures (e.g., to enhance innovations) while the program is underway. Research seeks evidence that a program had the hypothesized effect and/or support for a theory, with data analyzed and findings reported in full at the end of the study. Researchers would require approval from an Institutional Review Board to make substantial procedural changes to a research plan while it is in progress.

8.  Dissemination of Findings: Evaluators report findings to program leaders and stakeholders to provide a record of how the program developed and evolved over time; document the program's effects; and help articulate best practices for institutionalizing a program and/or implementing it more widely. Researchers disseminate new knowledge for peer review as a contribution to an ongoing academic narrative.

9.  Tables, Charts, and Graphs: Evaluators often get to use more engaging quantitative and qualitative data visualization techniques in reports to clients than are permitted in academic journals. Research is reported in academic journals, which often limit the number of tables and figures and most often display only black-and-white or grayscale images.

Thus, at MIP, our evaluations of entertainment projects, documentaries, films, and other programs function as a vital component of our overall research mission to study the influence of news and entertainment on viewers. In my next blog post, I will discuss how our evaluations of news and entertainment programs play an important role within our research.


