Media Literacy Evaluation Framework

Run the Media Literacy Evaluation Framework MicroSim Fullscreen
Edit in the p5.js Editor

About This MicroSim

This MicroSim presents scholars with five distinct AI media examples — a corporate press release, a major news headline, a LinkedIn post, a conference keynote quote, and an academic research paper abstract — and asks them to evaluate each one using a structured five-step framework. The five evaluation dimensions are: source type, claim category, primary beneficiary, key omission, and historical precedent. After submitting an evaluation, the scholar receives an expert analysis revealing how the example was designed to be read versus what it actually demonstrates.
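The five-dimension evaluation record described above can be sketched as a small data structure. This is an illustrative sketch only; the variable and function names below are hypothetical and are not taken from the simulation's source code.

```javascript
// The five evaluation dimensions the framework asks about.
// (Names are illustrative, not the simulation's own identifiers.)
const DIMENSIONS = [
  "sourceType",
  "claimCategory",
  "primaryBeneficiary",
  "keyOmission",
  "historicalPrecedent",
];

// Build one scholar evaluation of one media example: exactly one
// selection per dimension, with missing dimensions rejected.
function makeEvaluation(choices) {
  const evaluation = {};
  for (const dim of DIMENSIONS) {
    if (!(dim in choices)) {
      throw new Error(`Missing dimension: ${dim}`);
    }
    evaluation[dim] = choices[dim];
  }
  return evaluation;
}

// Example: a plausible evaluation of the press release scenario.
const pressReleaseEvaluation = makeEvaluation({
  sourceType: "Corporate press release",
  claimCategory: "Aspirational",
  primaryBeneficiary: "The company issuing the claim",
  keyOmission: "Testing conditions and evaluator independence",
  historicalPrecedent: "Previous technology hype cycle",
});
```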

The five examples were selected because they represent the full range of AI media formats a scholar is likely to encounter before lunch on any given weekday. The press release announces 99.7% satisfaction in internal testing, which is a number generated by a company about its own product, evaluated by the company, under conditions set by the company. The Goldman Sachs headline about 300 million jobs is real. The LinkedIn post about a course that "changes EVERYTHING" is a composite of approximately 4.7 million actual posts. The conference keynote asserting zero human oversight required is the kind of claim that sounds exciting until you consider what "after initial setup" is doing in that sentence. The academic abstract reporting a 2.3% improvement on a benchmark is the media example most scholars are least equipped to evaluate, which is why it is included last.

The framework is not a fact-checker. It does not tell scholars whether AI claims are true. It tells scholars what questions to ask, who benefits from the claim, what the claim omits, and whether they have heard this particular announcement before under a different product name. These are the same questions a siren asks you not to ask.

How to Use

  1. The simulation opens with the first media example displayed in the top panel. Read it carefully.
  2. For each of the five evaluation steps in the center panel (Source, Claim Type, Primary Beneficiary, Key Omission, and Historical Precedent), select the most appropriate radio button option.
  3. Click Submit Evaluation to reveal the expert analysis card in the bottom panel. Read the analysis and compare it to your selections. Note any dimension where your evaluation differed from the expert's.
  4. Click Next Example to advance to the next media scenario. You may also click Reset at any time to clear your current evaluation and start the current example again.
  5. After completing all five examples, attempt to evaluate a sixth media example of your own choosing — a real AI headline or post from the past 30 days — using the same five-step framework without the simulation's guidance.
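The comparison in step 3 (noting where your evaluation differed from the expert's) can be sketched as a one-function helper. This is a minimal sketch assuming both evaluations are stored as plain objects keyed by dimension; the names are hypothetical, not the simulation's code.

```javascript
// Return the list of dimensions where the scholar's selection
// disagrees with the expert analysis.
function compareToExpert(scholar, expert) {
  const differing = [];
  for (const dim of Object.keys(expert)) {
    if (scholar[dim] !== expert[dim]) {
      differing.push(dim);
    }
  }
  return differing;
}

// Usage: only the claim type disagrees here.
const scholar = { sourceType: "Corporate", claimCategory: "Evidence-based" };
const expert = { sourceType: "Corporate", claimCategory: "Aspirational" };
const disagreements = compareToExpert(scholar, expert);
```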

Iframe Embed Code

You can embed this MicroSim in any web page by adding this snippet to your HTML:

<iframe src="https://dmccreary.github.io/unicorns/sims/media-literacy-framework/main.html"
        height="450"
        width="100%"
        scrolling="no"></iframe>

Lesson Plan

Grade Level

9-12 (High School)

Duration

10-15 minutes

Prerequisites

  • Familiarity with the concept of source bias and the difference between primary and secondary sources
  • Exposure to at least one AI-related press release, news headline, or LinkedIn post, which in the current media environment requires only that the scholar has been awake and adjacent to a screen for any continuous 15-minute period in the past three years
  • A working definition of "omission" as distinct from "lie" — a distinction that is the subject of Chapter 10 and the professional practice of a significant portion of the communications industry

Activities

  1. Exploration (5 min): Complete the evaluations for all five media examples. For each example, identify the dimension you found most difficult to assess (source, claim, beneficiary, omission, or precedent) and write one sentence explaining why that dimension was harder than the others for that specific example.
  2. Guided Practice (5 min): Return to the press release example (Example 1) and the research abstract example (Example 5). Compare your evaluations of their "Claim Type" and "Primary Beneficiary" dimensions. Describe in two sentences why a corporate press release and an academic abstract can both contain aspirational claims while serving entirely different institutional purposes.
  3. Assessment (5 min): Apply the five-step framework to one real AI media item of your choice from the past 30 days. Write your evaluation as five labeled answers — one per dimension — and include a one-sentence conclusion about whether the item's claim is better described as evidence-based, aspirational, or vaporware. Do not use the simulation for this exercise.

Assessment

  • The scholar can apply all five evaluation dimensions to an unfamiliar media example with sufficient accuracy to identify at least three of the five dimensions correctly when compared to an expert evaluation.
  • The scholar can articulate the difference between a claim that is false and a claim that is technically accurate but structured to omit its most important qualifying information — a distinction the framework labels as "Key Omission" and that the communications industry refers to as a press release.
  • The scholar can identify when a claim cites historical precedent from a previous technology hype cycle and can name a specific prior technology that made a structurally similar claim, demonstrating that pattern recognition across hype cycles is a transferable analytical skill and not merely a parlor trick.
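The first assessment criterion (at least three of five dimensions matching the expert evaluation) can be sketched as a simple scoring check. As above, this is a hypothetical helper for illustration, not the simulation's own scoring code.

```javascript
// Count matching dimensions and apply the three-of-five threshold
// from the first assessment criterion.
function meetsFirstCriterion(scholar, expert) {
  let matches = 0;
  for (const dim of Object.keys(expert)) {
    if (scholar[dim] === expert[dim]) {
      matches += 1;
    }
  }
  return matches >= 3;
}

const expertAnswers = { a: 1, b: 2, c: 3, d: 4, e: 5 };
const threeRight = { a: 1, b: 2, c: 3, d: 0, e: 0 };
const twoRight = { a: 1, b: 2, c: 0, d: 0, e: 0 };
```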

References

  1. Wineburg, S., McGrew, S., Breakstone, J., & Ortega, T. (2022). Evaluating information: The cornerstone of civic online reasoning. Stanford History Education Group. (This one is real, cited without irony, and remains one of the most important practical media literacy frameworks available, which is why ed-tech has largely ignored it in favor of interactive drag-and-drop activities.)
  2. Carrol, F. J., & Pemberton, L. (2023). Who benefits? A structural analysis of incentive framing in AI corporate communications, 2018–2023. Journal of Technology Media Studies, 5(2), 103–119.
  3. Harrington, D. M. (2024). The omission as argument: How absence of cost, timeline, and failure data functions rhetorically in AI product announcements. Quarterly Journal of Institutional Communication Analysis, 8(4), 55–72.

Instructional Design Commentary

A competent instructional designer would have pointed out that the expert analysis feature — in which the simulation reveals the "correct" evaluation after the scholar submits their own — implicitly frames media literacy as a matter of arriving at predetermined correct answers rather than developing transferable critical judgment. This is a foundational instructional design error that the ed-tech industry has been making since the advent of the multiple-choice quiz, and it is worth naming here: a scholar who learns to select "Aspirational" for this specific press release has not necessarily learned to recognize aspirational claims in the next press release they encounter. A scholar who understands why the internal testing statistic in Example 1 is structured to sound more rigorous than it is has learned something durable. The simulation attempts to teach the latter and may accidentally teach only the former. This tension was not resolved before deployment. It is left as an exercise for the instructional designer the textbook did not hire.

The selection of five media examples also deserves scrutiny. Five examples, presented sequentially without variation in difficulty or scaffolding, is a design choice that implies the framework is equally applicable to all five formats at equal depth. It is not. Evaluating an academic abstract requires substantially different background knowledge than evaluating a LinkedIn post, and no amount of radio button selection will bridge that gap for a scholar who has never read a methods section. A formal instructional design process would have sequenced the examples from most to least accessible, beginning with the LinkedIn post and ending with the research abstract rather than the reverse. The current ordering is presented without explanation, which is consistent with the simulation's format being designed by the same process that produced the content it teaches scholars to critique.