Erica Watson-Currie, PhD: November 2017
At the Media Impact Project, our mission is to understand the effects of media on viewers. We also strive to apply acquired knowledge to projects that serve the social good, and to be a thought leader on research issues in our field. This means we assume two distinct roles: the evaluator's "critical friend" and the researcher's impartial observer.
By evaluating the work of individual documentarians, journalists, and other media makers, we help find ways to improve engagement opportunities with audiences on a micro level. By studying the general effects of media on society, we contribute to the greater body of research on this topic that touches us all. These are important distinctions to make: evaluations of specific programs or films provide us with information about the whats -- who the audiences were, how they responded, what actions they took, and whether their behaviors changed -- while research on those same shows can help us understand the whys of media impact -- what it was about the program that sparked the response, spurred the action, or shifted behaviors.
Myriad scholars have weighed in on the distinctions between evaluation and research. Michael Scriven's oft-referenced explanation of the differences between the two disciplines posits that although both practices apply social science tools to conduct empirical investigations and analyze data, evaluators do so to assess the value of what is being examined, with an eye to whether predetermined standards are met, while researchers collect data to test hypotheses and reach conclusions based on "factual results." Into this fray we contribute our own nine distinctions as they relate to media evaluation projects and our overarching research program here at the USC Annenberg Norman Lear Center's Media Impact Project.
Nine Key Differences Between Evaluation and Research
1. Value: Evaluation focuses on the effectiveness and/or value of a program, message campaign, or other communications; Research strives to be value-free or at least value-neutral in pursuit of increasing knowledge.
2. Role: Evaluators work with stakeholders to understand a program's objectives and goals, and to develop agreements on the relevant (and obtainable) "Key Performance Indicators" that constitute evidence these are being achieved. Researchers develop an initial question and design their study (e.g., intervention, experiment), deciding what variables will be tested, on whom, under what conditions, and over what period of time.
3. Application of Critical Thinking Skills: Evaluators engage as a "critical friend" to program leaders, helping them understand analyses, determine effective changes, and refine data collection as the program evolves (a posteriori/ad hoc). Researchers engage in critical thinking at the outset of a study to implement procedures that prevent them from biasing data collection or the interpretation of findings (a priori).
4. Use and Timing of Operationalization: Evaluators work with project leaders to operationalize terms and agree upon methods at the outset of a project; however, these may shift, expand, or evolve along with the project. Thus, the process of evaluation is responsive, incorporating what is learned along the way. Researchers operationalize variables and set methods and procedures at the outset, and these are followed until the project is completed.
5. Role in Avoiding Potential Pitfalls: External evaluators may be better positioned to recognize barriers and threats to a program's success, as well as unintended effects. Thus, evaluators often play a role in mediating collaborations between program leaders and key stakeholders to increase effective participation and encourage the development of effective procedures. Researchers strive to be mindful of the possibility that confounding variables may affect their results, in order to exclude or control for these at the outset, or to statistically eliminate their effects in analysis.
6. Review of Academic Literature: In Evaluation, the purposes of a literature review vary depending upon the stage of the program; in Research, the literature review is conducted at the outset to ground the study's hypotheses in existing theory and findings.
7. Purposes and Procedures: Evaluations are conducted to discover strategies and tactics to improve a program. Thus, Evaluation Reports are provided at regular intervals as part of an ongoing process, encouraging reflection and stimulating discussion among project leaders and key stakeholders to help discover and implement effective adjustments to materials and procedures (e.g., to enhance innovations) while the program is underway. Research seeks evidence that a program had the hypothesized effect and/or support for a theory, with data analyzed and findings reported in full at the end of the study. Researchers would require approval from an Institutional Review Board to make substantial procedural changes to a research plan while it is in progress.
8. Dissemination of Findings: Evaluators report findings to program leaders and stakeholders to: provide a record of how the program developed and evolved over time; document the program's effects; and help articulate best practices for institutionalizing the program and/or implementing it more widely. Researchers disseminate new knowledge for peer review as a contribution to an ongoing academic narrative.
9. Tables, Charts, and Graphs: Evaluators often get to use more engaging quantitative and qualitative data visualization techniques in reports to clients than academic journals permit. Research is reported in academic journals, which often limit the number of tables and figures and most often display only black-and-white or grayscale images.
Thus, at MIP, our evaluations of entertainment projects, documentaries, films, and other programs function as a vital component of our overall research mission to study the influence of news and entertainment on viewers. In my next blog post, I will discuss how our evaluations of news and entertainment programs play an important role within our research.