Content Moderation Decision Framework
Run the Content Moderation Decision Framework MicroSim Fullscreen
Edit in the p5.js Editor
About This MicroSim
This interactive MicroSim helps students evaluate content moderation decisions by weighing competing values (truth, free expression, harm prevention, and cultural context) in realistic scenarios. It supports the learning objectives in the chapter Misinformation and the Information Age.
How to Use
Use the interactive controls below the drawing area to work through each content scenario: read the description, choose a moderation action (allow, remove, label with a warning, or restrict distribution), and review the framework's analysis of the competing values at stake. Hover over elements for additional information and click for detailed descriptions.
Iframe Embed Code
You can embed this MicroSim in any web page with an HTML iframe.
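Below is a minimal embed sketch. The src URL is a placeholder, so replace it with the address where this MicroSim is actually hosted; the width and height are reasonable defaults you can adjust to fit your page:

```html
<!-- Embed the Content Moderation Decision Framework MicroSim.
     The src below is a placeholder; replace it with the URL where
     this MicroSim is actually hosted. -->
<iframe src="https://example.com/sims/content-moderation-framework/main.html"
        width="100%" height="600" scrolling="no"
        style="border: none; overflow: hidden;"></iframe>
```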
Lesson Plan
Grade Level
9-12 (High School / IB TOK)
Duration
15-20 minutes
Prerequisites
- Understanding of the tension between free expression and harm prevention as an ethical knowledge question
- Familiarity with the concept of competing values (e.g., truth vs. safety, individual rights vs. collective well-being)
- Basic awareness of how digital platforms shape the flow of shared knowledge
Learning Objectives
- Evaluate content moderation decisions by weighing competing epistemic and ethical values and justifying a position
Activities
- Exploration (5 min): Work through the first two content scenarios presented in the sim. For each piece of content, read the description carefully and make your moderation decision: allow, remove, label with a warning, or restrict distribution. After deciding, review the framework's analysis of the competing values at stake (e.g., freedom of expression vs. prevention of harm, right to information vs. risk of misinformation). Notice how different values lead to different "correct" decisions.
- Guided Practice (10 min): Work through at least three more scenarios, then compare your decisions with a classmate. For each case where you disagreed, identify the specific value you prioritized that your partner did not. Discuss: Is there a "right" answer to content moderation, or is this an inherently contested knowledge question? How do different ethical frameworks (utilitarian, deontological, virtue ethics) lead to different moderation policies? Consider the TOK knowledge question: "Who should have the authority to decide what counts as acceptable knowledge in public discourse?"
- Assessment (5 min): Choose the scenario you found most difficult. Write a brief position statement (4-5 sentences) defending your moderation decision. Your statement must: (a) name the competing values, (b) explain which value you prioritized and why, and (c) acknowledge what is lost or risked by your decision. This mirrors the kind of balanced evaluation expected in a TOK essay.
Assessment
- Students can identify at least three competing values relevant to content moderation decisions
- Students can defend a moderation decision with explicit reference to an ethical framework or epistemic principle
- Students can articulate the trade-offs inherent in their decision, demonstrating awareness of multiple perspectives
Quiz
Test your understanding with this review question.
1. A social media platform must decide whether to remove a post claiming "Climate change is a natural cycle, not caused by humans." From a TOK perspective, which consideration is MOST relevant to this content moderation decision?
- A. Whether the claim is popular among the platform's users
- B. Whether the claim contradicts the current scientific consensus and what epistemic harm unchallenged misinformation may cause
- C. Whether the person who posted it has a large number of followers
- D. Whether similar content has been removed on other platforms
Answer
The correct answer is B. Content moderation as an epistemological question requires weighing the reliability of the knowledge claim against potential harms. The claim's relationship to the scientific consensus, together with the epistemic consequences of its spread, is the most relevant consideration. Option A appeals to popularity (the ad populum fallacy). Option C is irrelevant to the truth or harm of the claim itself. Option D appeals to precedent on other platforms rather than engaging with the epistemological merits.
Concept Tested: Evaluating Knowledge Claims in Digital Public Discourse