
Metadata Fundamentals

Summary

This chapter introduces the metadata concepts and standards that provide the foundation for MicroSim discoverability. You'll learn about metadata standards including the Dublin Core framework, taxonomies and classification systems, and how subject normalization enables consistent searching. The chapter also covers MicroSim-specific standards, schema compliance, and tagging approaches, including both folksonomies and controlled vocabularies. After completing this chapter, you will understand how metadata enables search and discovery of educational resources.

Concepts Covered

This chapter covers the following 19 concepts from the learning graph:

  1. Metadata
  2. Metadata Standards
  3. Dublin Core
  4. Dublin Core Elements
  5. Taxonomies
  6. Classification Systems
  7. Subject Normalization
  8. MicroSim Standards
  9. Pedagogical Metadata
  10. Pedagogical Patterns
  11. Bloom Verb Alignment
  12. Schema Compliance
  13. Tags
  14. Folksonomies
  15. User-Generated Tags
  16. Keywords
  17. Controlled Vocabulary
  18. Technical Metadata
  19. Template Matching

Prerequisites

This chapter builds on concepts from:


The Invisible Force Behind Findability

You've probably had this experience: you know that perfect teaching resource exists somewhere on the internet, but after twenty minutes of increasingly creative search queries, you're ready to give up and create it yourself. That frustration? It's almost always a metadata problem.

Metadata is data about data—information that describes, explains, and categorizes your content so that both humans and machines can find it. When you search for "interactive pendulum simulation for high school physics," you're searching through metadata. The simulation itself doesn't contain those words in its JavaScript code. Instead, someone thoughtfully added that descriptive information, and a search engine matched your query against it.

Think of metadata as the ultimate wingman for your MicroSim. The simulation might be brilliant, but if nobody can find it, it's doing its groundbreaking educational work in complete isolation. Metadata introduces your MicroSim to the world.

The Findability Equation

Quality MicroSim + Zero Metadata = Hidden Treasure Nobody Finds

Average MicroSim + Excellent Metadata = Widely Used Teaching Tool

Quality MicroSim + Excellent Metadata = Educational Superpower

Understanding Metadata Standards

Creating metadata sounds simple enough—just describe what your MicroSim does, right? But here's the challenge: if everyone describes their resources differently, search becomes chaos. One person calls their subject "Physics," another calls it "Physical Science," a third uses "Mechanics." Same topic, three different labels, and now your search only finds one-third of the relevant results.

Metadata standards solve this problem by establishing agreed-upon ways to describe resources. They define:

  • Which fields to include (title, creator, subject, etc.)
  • What values are acceptable (standardized subject lists, date formats)
  • How fields relate to each other (hierarchies, dependencies)
  • Required versus optional information

When everyone follows the same standard, magic happens. A teacher in Tokyo can find a MicroSim created by a professor in Toronto because they both used "Trigonometry" from the same standardized subject list. The machines can compare apples to apples instead of apples to "circular fruit of the genus Malus."

Major metadata standards you'll encounter include:

Standard | Primary Use | Complexity
-------- | ----------- | ----------
Dublin Core | General resources | Low
IEEE LOM | Learning objects | High
Schema.org | Web content | Medium
LRMI | Learning resources | Medium
Custom schemas | Domain-specific | Varies

For MicroSims, we build on Dublin Core as our foundation and add educational and technical extensions specific to interactive simulations.

Dublin Core: The Universal Language of Metadata

Dublin Core is the lingua franca of metadata—a simple, flexible standard that emerged from a 1995 workshop in Dublin, Ohio (not Ireland, despite what you might assume). Its genius lies in its simplicity: just 15 core elements that can describe virtually any resource.

Why Dublin Core became dominant:

  • Simple enough to use: You don't need a PhD to fill it out
  • Flexible enough to extend: Add your own fields as needed
  • Widely adopted: Libraries, museums, and digital repositories worldwide
  • Machine-readable: Easy for software to parse and index

Dublin Core operates on a principle of "dumb-down": even if you don't understand specialized fields, you can always fall back to simple descriptions. This means a MicroSim's metadata remains useful even to systems that don't understand educational extensions.

The 15 Dublin Core Elements form the backbone of resource description:

Element | Description | MicroSim Example
------- | ----------- | ----------------
Title | Name of the resource | "Pendulum Period Explorer"
Creator | Who made it | "Dr. Maria Santos"
Subject | Topic or keywords | "Physics, Harmonic Motion"
Description | Summary of content | "Interactive simulation demonstrating..."
Publisher | Who distributes it | "OpenEd Simulations"
Contributor | Others who helped | "Reviewed by Physics Dept."
Date | When created/modified | "2026-01-15"
Type | Nature of resource | "Interactive Simulation"
Format | File format | "text/html"
Identifier | Unique ID | "https://example.com/sims/pendulum"
Source | Derived from | "Based on PhET simulation"
Language | Language used | "en-US"
Relation | Related resources | "Part of Physics series"
Coverage | Scope (time/place) | "Classical mechanics era"
Rights | Usage permissions | "CC BY-NC-SA 4.0"

Not every MicroSim needs every element. At minimum, you want Title, Creator, Subject, Description, Date, Format, and Rights. The others add richness when relevant.
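A minimal record using just those seven elements might look like this in JSON (the field names mirror the full example at the end of this chapter; the values are illustrative):

{
  "dublinCore": {
    "title": "Pendulum Period Explorer",
    "creator": "Dr. Maria Santos",
    "subject": "Physics, Harmonic Motion",
    "description": "Interactive simulation exploring how length and gravity affect a pendulum's period",
    "date": "2026-01-15",
    "format": "text/html",
    "rights": "CC BY-NC-SA 4.0"
  }
}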

Diagram: Dublin Core Element Relationships

Dublin Core Element Relationships Diagram

Type: infographic

Bloom Level: Understand (L2)
Bloom Verb: classify

Learning Objective: Students will classify Dublin Core elements into categories (content description, intellectual property, instantiation) and explain how they relate to MicroSim metadata.

Purpose: Visualize the 15 Dublin Core elements organized by category with examples

Layout: Three-column layout representing Dublin Core's three categories

Categories (columns):

1. Content (left column, blue): Title, Subject, Description, Type, Source, Relation, Coverage
2. Intellectual Property (center column, green): Creator, Publisher, Contributor, Rights
3. Instantiation (right column, orange): Date, Format, Identifier, Language

Interactive elements:
- Hover over any element to see its definition, an example value for a MicroSim, and whether it is required or optional
- Click to see an extended description
- Color intensity indicates how commonly the element is used in MicroSims

Visual style: Card-based layout with clear category headers
Color scheme: Blue for content, green for IP, orange for instantiation
Animation: Cards lift slightly on hover with shadow effect

Implementation: HTML/CSS/JavaScript with card components

Taxonomies: Organizing Knowledge Hierarchically

A taxonomy is a hierarchical classification system—a tree structure that organizes concepts from broad to specific. You use taxonomies constantly without thinking about it: the biological taxonomy (Kingdom → Phylum → Class → Order → Family → Genus → Species) is perhaps the most famous example.

For educational resources, taxonomies help answer questions like:

  • Is this MicroSim about "Science" or specifically about "Physics"?
  • Is "Physics" part of "Physical Sciences" or separate?
  • Where does "Astrophysics" fit relative to "Physics"?

Here's a simplified educational subject taxonomy:

STEM
├── Science
│   ├── Physical Sciences
│   │   ├── Physics
│   │   │   ├── Mechanics
│   │   │   ├── Thermodynamics
│   │   │   └── Electromagnetism
│   │   └── Chemistry
│   ├── Life Sciences
│   │   ├── Biology
│   │   └── Ecology
│   └── Earth Sciences
├── Technology
├── Engineering
└── Mathematics
    ├── Algebra
    ├── Geometry
    └── Calculus

Taxonomies enable powerful search features:

  • Hierarchical search: Find all "Science" MicroSims, including Physics, Chemistry, and Biology
  • Specificity: Search narrowly for just "Mechanics" simulations
  • Browse navigation: Explore categories without knowing exact terms
  • Automatic expansion: A search for "Physical Sciences" automatically includes "Physics"
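Automatic expansion is straightforward to implement once the taxonomy is stored as a parent-to-children map. Here's a minimal sketch assuming that structure (the data and function name are illustrative, not from any particular MicroSim codebase):

// Hypothetical taxonomy stored as a parent -> children map
const taxonomy = {
  "Science": ["Physical Sciences", "Life Sciences", "Earth Sciences"],
  "Physical Sciences": ["Physics", "Chemistry"],
  "Physics": ["Mechanics", "Thermodynamics", "Electromagnetism"],
  "Life Sciences": ["Biology", "Ecology"]
};

// Expand a subject into itself plus all of its descendants,
// so a search for "Physical Sciences" also matches "Mechanics"
function expandSubject(subject) {
  const children = taxonomy[subject] || [];
  return [subject, ...children.flatMap(expandSubject)];
}

console.log(expandSubject("Physical Sciences"));
// ["Physical Sciences", "Physics", "Mechanics", "Thermodynamics",
//  "Electromagnetism", "Chemistry"]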

Taxonomy vs. Folksonomy

Taxonomies are created top-down by experts who define the structure in advance. Folksonomies (covered later) emerge bottom-up from user-generated tags. Both have their place in MicroSim discovery.

Classification Systems: Categorizing with Purpose

Classification systems are broader frameworks that assign resources to categories based on defined criteria. While taxonomies focus on hierarchical relationships, classification systems can use multiple dimensions simultaneously.

Consider how you might classify MicroSims:

By Subject Area: Mathematics, Physics, Chemistry, Biology, Computer Science...

By Grade Level: Elementary, Middle School, High School, Undergraduate, Graduate

By Bloom's Taxonomy Level: Remember, Understand, Apply, Analyze, Evaluate, Create

By Difficulty: Beginner, Intermediate, Advanced

By Visualization Type: Animation, Simulation, Chart, Diagram, Game, Quiz

A single MicroSim can be classified along all these dimensions simultaneously. A pendulum simulator might be:

  • Subject: Physics
  • Grade Level: High School, Undergraduate
  • Bloom's Level: Understand, Apply
  • Difficulty: Intermediate
  • Type: Simulation, Animation

This multi-dimensional classification is what makes faceted search possible—the ability to filter by subject AND grade level AND difficulty all at once. It's like having multiple file folders that a single document can live in simultaneously.
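In code, a faceted filter is just a logical AND of per-dimension checks. A minimal sketch, assuming a simple record shape (the sample data and helper name are illustrative):

// Sample catalog records (illustrative)
const sims = [
  { title: "Pendulum Lab", subject: "Physics", gradeLevel: ["High School"], difficulty: "Intermediate" },
  { title: "Fraction Builder", subject: "Mathematics", gradeLevel: ["Elementary"], difficulty: "Beginner" }
];

// Keep only sims that match every active facet; an unset facet matches everything
function facetedFilter(items, facets) {
  return items.filter(sim =>
    (!facets.subject || sim.subject === facets.subject) &&
    (!facets.gradeLevel || sim.gradeLevel.includes(facets.gradeLevel)) &&
    (!facets.difficulty || sim.difficulty === facets.difficulty)
  );
}

console.log(facetedFilter(sims, { subject: "Physics", difficulty: "Intermediate" }));
// [{ title: "Pendulum Lab", ... }]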

Diagram: Multi-Dimensional Classification

Multi-Dimensional Classification Interactive

Type: microsim

Bloom Level: Apply (L3)
Bloom Verb: demonstrate

Learning Objective: Students will demonstrate how multi-dimensional classification enables faceted search by exploring different filter combinations and observing how result sets change.

Canvas layout:
- Left panel (200px): Filter controls for 4 dimensions
- Center panel (400px): Visualization showing MicroSim cards
- Right panel (150px): Results count and active filters display

Visual elements:
- Grid of MicroSim cards (20 sample items)
- Each card shows title and icons for its classifications
- Cards fade out when filtered, remain bright when matching
- Venn-diagram-style overlay showing intersection

Interactive controls:
- Subject dropdown: All, Physics, Math, Chemistry, Biology
- Grade Level checkboxes: Elementary, Middle, High, College
- Difficulty radio buttons: All, Beginner, Intermediate, Advanced
- Type checkboxes: Animation, Simulation, Chart, Quiz

Default parameters:
- All filters set to "All" (showing all 20 items)
- Pre-populated sample data with varied classifications

Behavior:
- Changing any filter immediately updates visible cards
- Count display shows "X of 20 MicroSims match"
- Active filter badges appear in right panel
- Clear All button resets filters
- Smooth fade animations on filter changes

Implementation notes:
- p5.js for card rendering
- JSON data array with sample MicroSims
- Real-time filter function combining all dimensions
- Responsive layout adapts to container width

Subject Normalization: Speaking the Same Language

Here's a real problem: one creator tags their MicroSim as "Math," another uses "Mathematics," a third uses "Maths," and a fourth uses "mathematical sciences." All mean the same thing, but a search for "Mathematics" only finds one of them.

Subject normalization is the process of mapping variant terms to standard forms. It's like having a universal translator that knows "Math" = "Mathematics" = "Maths" and routes all searches to the same canonical term.

Normalization typically works through:

  1. Synonym mapping: "Math" → "Mathematics"
  2. Abbreviation expansion: "CS" → "Computer Science"
  3. Spelling variants: "Colour" → "Color" (for English-US systems)
  4. Hierarchical mapping: "Quantum Physics" → "Physics > Quantum Physics"
  5. Common misspellings: "Mathmatics" → "Mathematics"

Here's a normalization table excerpt:

Input Term | Normalized To
---------- | -------------
Math | Mathematics
Maths | Mathematics
Mathematical Sciences | Mathematics
Arithmetic | Mathematics > Arithmetic
Physics | Physics
Physical Science | Physics
Mechanics | Physics > Mechanics
CS | Computer Science
CompSci | Computer Science
Programming | Computer Science > Programming
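In practice, normalization is often just a lookup table applied both when metadata is indexed and when a query arrives. A minimal sketch using an excerpt of the table above (the function name is illustrative):

// Variant terms mapped to canonical subjects (excerpt; illustrative)
const subjectMap = {
  "math": "Mathematics",
  "maths": "Mathematics",
  "mathematical sciences": "Mathematics",
  "cs": "Computer Science",
  "compsci": "Computer Science",
  "physical science": "Physics"
};

// Normalize a raw subject string to its canonical form;
// unknown terms pass through unchanged so emerging vocabulary isn't lost
function normalizeSubject(raw) {
  const key = raw.trim().toLowerCase();
  return subjectMap[key] || raw.trim();
}

console.log(normalizeSubject("  Maths ")); // "Mathematics"
console.log(normalizeSubject("Physics")); // "Physics" (already canonical)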

For MicroSim search to work across thousands of resources from different creators, subject normalization is essential. Without it, teachers would need to try every possible spelling and synonym to find what they need.

Write Once, Find Many Ways

Good normalization means creators can use familiar terms while searchers find everything relevant. The system does the translation work, not the humans.

MicroSim Standards: Purpose-Built Metadata

While Dublin Core provides a solid foundation, educational simulations have unique needs that generic standards don't address. MicroSim standards extend the basic metadata framework with fields specifically designed for interactive learning resources.

MicroSim-specific metadata includes:

Educational Fields:
- Learning objectives (what students should achieve)
- Bloom's taxonomy level (cognitive complexity)
- Prerequisites (what students should know first)
- Grade level (target audience)
- Subject area (curriculum alignment)
- Estimated time to complete

Technical Fields:
- JavaScript framework (p5.js, D3.js, etc.)
- Browser compatibility
- Device requirements
- Responsive behavior
- File size

Interaction Fields:
- Visualization type (animation, simulation, chart)
- Interaction level (view-only, adjustable, explorable)
- Control types (sliders, buttons, drag-and-drop)
- Accessibility features

Quality Fields:
- Completeness score
- Peer review status
- Usage statistics
- User ratings

This extended schema transforms a simple "description of a web page" into a rich educational profile that search systems can leverage for sophisticated matching.

Pedagogical Metadata: Matching Patterns to Learning

One of the most important—and most often overlooked—categories of MicroSim metadata is pedagogical metadata. This describes not just what a MicroSim teaches, but how it teaches, enabling systems to match interaction patterns to learning objectives.

Why does this matter? Consider this scenario: a chapter specification requests a workflow diagram with Bloom Level "Understand" and verb "explain." A template-matching system finds a visually similar animated flow diagram and recommends it. The result looks impressive—smooth particle animations, beautiful colors, engaging motion.

But it fails instructionally.

The animation obscures the actual data transformations students need to see. Continuous motion prevents the predict-then-observe cycle that builds understanding. Students say "wow" but can't explain the process afterward.

Pedagogical metadata prevents this mismatch by describing how a MicroSim supports learning:

{
  "pedagogical": {
    "pattern": "worked-example",
    "bloomsTaxonomy": ["Remember", "Understand", "Apply"],
    "bloomAlignment": ["understand", "apply"],
    "bloomVerbs": ["explain", "demonstrate", "illustrate"],
    "pacing": "self-paced",
    "supportsPrediction": true,
    "dataVisibility": "high",
    "interactionStyle": "manipulate"
  }
}

Bloom's Taxonomy vs Bloom Alignment

The pedagogical section contains two related but distinct Bloom fields:

Field | Purpose | Scope
----- | ------- | -----
bloomsTaxonomy | All cognitive levels the content can address | Broader (content scope)
bloomAlignment | Levels the interaction pattern effectively supports | Narrower (pattern effectiveness)

For example, a pendulum simulation's content might touch Remember, Understand, Apply, Analyze, and Create. But its exploration pattern most effectively supports Understand, Apply, and Analyze. The bloomsTaxonomy captures the full content scope, while bloomAlignment captures what the pattern actually enables.

Pedagogical Pattern Types

The pattern field classifies how the MicroSim structures the learning experience:

Pattern | Best For | Characteristics
------- | -------- | ---------------
worked-example | Understand objectives | Step-through with concrete data at each stage
exploration | Analyze, Create objectives | Open-ended parameter manipulation
practice | Apply objectives | Repeated skill application with feedback
demonstration | Understand objectives | Showing a process with explanations
guided-discovery | Apply, Analyze objectives | Scaffolded exploration with hints
assessment | Evaluate objectives | Testing with feedback
reference | Remember objectives | Static information display

Bloom Verb Alignment

The bloomVerbs field captures the specific action verbs from Bloom's Taxonomy that the MicroSim supports. This enables precise matching—a specification with verb "experiment" should match templates tagged with "experiment," "investigate," and "test," not templates tagged with "explain" or "define."

The system maps each supported Bloom verb to its cognitive level:

Level | Verbs
----- | -----
Remember | define, identify, list, recall, recognize, state
Understand | classify, compare, describe, explain, interpret, summarize
Apply | apply, calculate, demonstrate, illustrate, implement, solve, use
Analyze | analyze, differentiate, examine, experiment, investigate, test
Evaluate | assess, critique, evaluate, judge, justify, predict
Create | construct, create, design, develop, formulate, generate
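Verb-level matching can be implemented by inverting this table: look up each verb's level, then treat verbs at the same level as compatible. A minimal sketch with abbreviated data (the names are illustrative):

// Bloom verbs grouped by cognitive level (abbreviated from the table above)
const bloomLevels = {
  Understand: ["classify", "compare", "describe", "explain", "interpret", "summarize"],
  Analyze: ["analyze", "differentiate", "examine", "experiment", "investigate", "test"]
};

// Invert to a verb -> level lookup
const verbToLevel = {};
for (const [level, verbs] of Object.entries(bloomLevels)) {
  for (const verb of verbs) verbToLevel[verb] = level;
}

// Two verbs are compatible when they sit at the same cognitive level
function verbsCompatible(a, b) {
  return verbToLevel[a] !== undefined && verbToLevel[a] === verbToLevel[b];
}

console.log(verbsCompatible("experiment", "investigate")); // true
console.log(verbsCompatible("experiment", "explain"));     // false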

Pacing and Prediction

Two fields capture timing characteristics:

  • pacing: How the MicroSim controls time flow
      - self-paced: Learner controls progression (Next/Previous buttons)
      - continuous: Automatic animation loop
      - step-through: Discrete steps with explicit advancement
      - timed: Time-limited activities

  • supportsPrediction: Whether learners can predict outcomes before observing them

For "Understand" level objectives, self-paced or step-through pacing with supportsPrediction: true is almost always more effective than continuous animation. The ability to pause, think, and predict before seeing the next step is pedagogically powerful.

Impact on Template Matching

With pedagogical metadata, template-finding systems can score matches on both semantic similarity (similar content/topic) and pedagogical alignment (appropriate interaction pattern):

Final Score = (60% × Semantic Score) + (40% × Pedagogical Score)

This prevents the common failure of recommending visually impressive but instructionally inappropriate templates. An animated data flow might be 85% semantically similar to a workflow specification, but if the specification has Bloom verb "explain" and the template has pacing: continuous, the pedagogical score drops, and a step-through worked example rises in the rankings.
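A sketch of that weighted combination, assuming both scores are normalized to the 0-1 range (the example numbers are illustrative):

// Combine semantic similarity and pedagogical alignment into one score
// Both inputs are assumed to be normalized to the range 0..1
function matchScore(semantic, pedagogical) {
  return 0.6 * semantic + 0.4 * pedagogical;
}

// Animated flow diagram: semantically close, pedagogically mismatched
console.log(matchScore(0.85, 0.30).toFixed(2)); // "0.63"

// Step-through worked example: slightly less similar, well aligned
console.log(matchScore(0.75, 0.95).toFixed(2)); // "0.83"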

The Pedagogical Alignment Question

Before accepting a template recommendation, ask: "Does this interaction pattern support what learners need to DO with the content?"

For "explain" → they need to trace steps with concrete data (worked-example) For "experiment" → they need to manipulate parameters freely (exploration) For "calculate" → they need practice with feedback (practice)

For a detailed case study of how pedagogical misalignment led to a failed MicroSim and how it was fixed, see Chapter 15: Pedagogical Pattern Alignment.

Schema Compliance: Playing by the Rules

A schema defines the structure and rules for valid metadata. Schema compliance means your metadata follows those rules—using the right field names, correct data types, and valid values.

Think of a schema like a form at the DMV. It specifies:

  • Which fields are required (you can't skip "Name")
  • What format each field accepts (date must be YYYY-MM-DD)
  • Which values are valid (state must be from the approved list)
  • How fields relate (if Country is "USA," State must be a US state)

For MicroSim metadata, schema compliance ensures:

{
  "educational": {
    "subjectArea": ["Physics"],       // Must be from approved list
    "gradeLevel": ["High School"],    // Must be from approved list
    "bloomsTaxonomy": ["Apply"],      // Must be from: Remember, Understand, Apply, Analyze, Evaluate, Create
    "difficulty": "Intermediate"       // Must be: Beginner, Intermediate, or Advanced
  }
}

Why does compliance matter?

  • Search accuracy: Systems can trust field values are valid
  • Interoperability: Different platforms can exchange metadata reliably
  • Aggregation: Large-scale indexing works when data is consistent
  • Validation: Errors caught early, before they cause search problems
  • Future-proofing: Structured data adapts to new features

Modern MicroSim systems use JSON Schema validation to automatically check compliance when metadata is submitted or updated.
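A fragment of such a schema might look like the following (a hedged sketch; a real MicroSim schema would differ in field names and allowed values):

{
  "type": "object",
  "properties": {
    "educational": {
      "type": "object",
      "required": ["subjectArea", "gradeLevel"],
      "properties": {
        "subjectArea": {
          "type": "array",
          "items": { "type": "string", "enum": ["Mathematics", "Physics", "Chemistry", "Biology", "Computer Science"] }
        },
        "gradeLevel": {
          "type": "array",
          "items": { "type": "string", "enum": ["Elementary", "Middle School", "High School", "Undergraduate", "Graduate"] }
        },
        "bloomsTaxonomy": {
          "type": "array",
          "items": { "type": "string", "enum": ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"] }
        },
        "difficulty": { "type": "string", "enum": ["Beginner", "Intermediate", "Advanced"] }
      }
    }
  }
}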

Tags: Flexible Labels for Discovery

Tags are simple labels attached to content for categorization and discovery. Unlike taxonomies with their rigid hierarchies, tags are flat, flexible, and often user-generated.

Tags work well for:

  • Cross-cutting concepts: "climate-change" applies to Biology, Chemistry, and Economics simulations
  • Pedagogical approaches: "worked-example," "discovery-learning," "gamified"
  • Technical features: "responsive," "mobile-friendly," "offline-capable"
  • Temporal relevance: "2026-curriculum," "NGSS-aligned"

Example tags for a photosynthesis MicroSim:

{
  "tags": [
    "biology",
    "photosynthesis",
    "plants",
    "chloroplast",
    "light-reactions",
    "carbon-dioxide",
    "middle-school",
    "interactive",
    "animation"
  ]
}

Tags complement structured metadata. While subjectArea: "Biology" provides official classification, tags like "chloroplast" and "light-reactions" capture specific details that enable long-tail searches.

The Tag Explosion Problem

Unrestricted tagging leads to inconsistency. Ten creators might use "Physics," "physics," "PHYSICS," "Phys," "PhySci," and "physical-science." This is where controlled vocabularies and folksonomies come in.

Folksonomies: Wisdom of the Crowd

A folksonomy is a classification system that emerges from user-generated tags rather than expert-defined categories. The term combines "folk" (people) and "taxonomy" (classification)—it's taxonomy by the masses.

Folksonomies have real advantages:

  • Low barrier: Anyone can add tags, no training required
  • Evolving vocabulary: New terms emerge as fields evolve
  • User language: People use terms they actually search for
  • Serendipitous discovery: Unexpected tag connections lead to useful finds
  • Community building: Shared tags create implicit relationships

Popular examples include hashtags on social media, tags on Stack Overflow, and bookmarking tags on sites like Pinboard.

For MicroSims, folksonomies might reveal:

  • Emerging topics ("quantum-computing" before it's in official curricula)
  • Teaching contexts ("flipped-classroom," "homework-helper")
  • Student perspectives ("confusing-topic," "test-prep")
  • Regional terminology differences

The challenge with folksonomies is inconsistency. User-generated tags reflect how individuals think, which varies wildly. Solutions include:

  • Tag suggestion systems (autocomplete with existing tags)
  • Synonym mapping (treating similar tags as equivalent)
  • Popularity weighting (commonly-used tags get priority)
  • Moderation (curating the most useful tags)
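The first of these is simple to build: a tag-suggestion feature can be little more than prefix matching against existing tags, weighted by popularity. A minimal sketch (the tag counts are invented for illustration):

// Existing folksonomy tags with usage counts (illustrative data)
const tagCounts = { "physics": 120, "photosynthesis": 45, "phys-ed": 8, "physical-science": 30 };

// Suggest tags that start with the typed prefix, most popular first
function suggestTags(prefix, limit = 5) {
  return Object.entries(tagCounts)
    .filter(([tag]) => tag.startsWith(prefix.toLowerCase()))
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([tag]) => tag);
}

console.log(suggestTags("phy"));
// ["physics", "physical-science", "phys-ed"]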

Keywords: The Search Connection

Keywords are specific terms that connect content to search queries. While related to tags, keywords are specifically chosen to match what users type into search boxes.

Effective keywords consider:

  • What users search for: Not what experts call things, but what regular people type
  • Synonyms and variants: Including common alternative terms
  • Related concepts: Terms that searchers might also want
  • Long-tail phrases: Specific multi-word queries

For a projectile motion MicroSim, keywords might include:

{
  "keywords": [
    "projectile motion",
    "trajectory",
    "ballistics",
    "throwing",
    "angle of launch",
    "parabola",
    "physics simulation",
    "kinematics",
    "gravity effect",
    "range equation"
  ]
}

Notice how this includes both technical terms ("kinematics," "ballistics") and everyday language ("throwing," "gravity effect"). The goal is to catch searches from physics professors AND from students googling "how far does a ball go when you throw it."
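The matching step itself can be sketched simply: tokenize the query, expand synonyms, and count overlaps with each MicroSim's keywords. This mirrors the workflow diagram that follows (the synonym table and scoring are illustrative):

// Tiny synonym table used for query expansion (illustrative)
const synonyms = { "throwing": ["projectile", "launch"] };

// Tokenize, lowercase, and expand each query word with its synonyms
function expandQuery(query) {
  return query.toLowerCase().split(/\s+/)
    .flatMap(word => [word, ...(synonyms[word] || [])]);
}

// Score a MicroSim by how many expanded query terms appear in its keywords
function keywordScore(queryTerms, keywords) {
  const keywordText = keywords.join(" ").toLowerCase();
  return queryTerms.filter(term => keywordText.includes(term)).length;
}

const terms = expandQuery("physics ball throwing simulation");
console.log(keywordScore(terms, ["projectile motion", "physics simulation", "trajectory"]));
// 3 (matches "physics", "projectile", and "simulation")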

Diagram: Keywords to Search Results Flow

Keywords to Search Results Flow Diagram

Type: workflow

Bloom Level: Understand (L2)
Bloom Verb: explain

Learning Objective: Students will explain how keywords in metadata connect user search queries to relevant MicroSim results through matching and ranking processes.

Purpose: Visualize the journey from user query through keyword matching to ranked results

Steps (left to right flow):

1. User Query (person icon with search box)
   - Example: "physics ball throwing simulation"
   - Shows query being tokenized into words

2. Query Processing (gear icon)
   - Tokenization: ["physics", "ball", "throwing", "simulation"]
   - Normalization: lowercase, stemming
   - Synonym expansion: "throwing" → ["throwing", "projectile", "launch"]

3. Keyword Matching (database icon with arrows)
   - Compare query terms against MicroSim keywords
   - Calculate match scores
   - Show multiple MicroSims being evaluated

4. Ranking (sorting icon)
   - Score: keyword matches + metadata quality
   - Factors: exact match > partial match > synonym match
   - Freshness, popularity bonuses

5. Results (list with thumbnails)
   - Ordered list of matching MicroSims
   - Match score badges
   - Highlighted matching keywords

Interactive elements:
- Enter a custom search query and watch the matching process
- Hover over each stage for a detailed explanation
- Click to see sample data at each stage

Visual style: Horizontal flow diagram with stages as rounded rectangles
Color scheme: Blue for user side, green for processing, orange for results
Animation: Data flows between stages with subtle particle effects

Implementation: HTML/CSS/JavaScript with animated flow visualization

Controlled Vocabulary: Curated Consistency

A controlled vocabulary is a predefined list of terms that must be used for a particular metadata field. Unlike free-form tags where anything goes, controlled vocabularies enforce consistency by limiting choices to approved options.

Examples of controlled vocabularies in MicroSim metadata:

Subject Areas (controlled):

Mathematics, Physics, Chemistry, Biology, Computer Science,
Earth Science, Engineering, Economics, History, Language Arts

Grade Levels (controlled):

K-2, 3-5, 6-8, 9-12, Undergraduate, Graduate, Professional

Bloom's Taxonomy (controlled):

Remember, Understand, Apply, Analyze, Evaluate, Create

Difficulty (controlled):

Beginner, Intermediate, Advanced

Visualization Types (controlled):

animation, simulation, chart, diagram, game, quiz,
timeline, map, graph-model, infographic

Controlled vocabularies provide:

  • Consistency: Everyone uses the same terms
  • Completeness: No missing variations
  • Searchability: Exact matching works reliably
  • Faceted filtering: Clean categories for filter UI
  • Analytics: Meaningful aggregation and reporting

The tradeoff is flexibility. New concepts (like "machine learning" becoming a subject area) require updating the vocabulary. Good systems balance controlled fields for core categories with free-form tags for emerging concepts.

Technical Metadata: Under the Hood

Technical metadata describes the implementation details that affect how a MicroSim runs, displays, and performs. This information helps both automated systems and human users understand compatibility requirements.

Key technical metadata fields:

Field | Purpose | Example Values
----- | ------- | --------------
framework | JavaScript library used | p5.js, D3.js, Three.js, Chart.js
libraryVersion | Specific version | 1.9.0, 7.8.5
fileSize | Total size of assets | 45KB, 1.2MB
browserSupport | Compatible browsers | Chrome, Firefox, Safari, Edge
mobileSupport | Works on mobile? | true, false, partial
accessibility | A11y features | screen-reader, keyboard-nav
offline | Works without internet? | true, false
performance | Resource requirements | low, medium, high

Why technical metadata matters:

  1. Compatibility filtering: Teachers can find MicroSims that work on school Chromebooks
  2. Performance expectations: Know before loading a 10MB simulation on slow WiFi
  3. Accessibility compliance: Schools can verify WCAG compliance
  4. Development insights: Creators can learn from similar implementations
  5. Maintenance planning: Track which simulations need library updates
{
  "technical": {
    "framework": "p5.js",
    "libraryVersion": "1.9.0",
    "fileSize": "67KB",
    "browserSupport": ["Chrome", "Firefox", "Safari", "Edge"],
    "mobileSupport": true,
    "accessibility": {
      "keyboardNavigable": true,
      "screenReaderSupport": "partial",
      "highContrastMode": false
    },
    "performance": "low"
  }
}
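With this structure in place, compatibility filtering (point 1 above) becomes a simple predicate over the technical block. A minimal sketch, with an illustrative requirements object:

// Check whether a MicroSim's technical metadata meets a classroom's needs
function meetsRequirements(technical, needs) {
  return (!needs.browser || technical.browserSupport.includes(needs.browser)) &&
         (!needs.requireMobile || technical.mobileSupport === true);
}

const technical = {
  framework: "p5.js",
  browserSupport: ["Chrome", "Firefox", "Safari", "Edge"],
  mobileSupport: true
};

// A classroom of Chromebooks and student phones:
console.log(meetsRequirements(technical, { browser: "Chrome", requireMobile: true })); // true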

Putting It All Together: The Metadata Ecosystem

Let's see how all these concepts combine in a real MicroSim metadata profile:

{
  "microsim": {
    "dublinCore": {
      "title": "Wave Interference Simulator",
      "creator": "Dr. Sarah Chen",
      "subject": "Physics",
      "description": "Interactive simulation showing constructive and destructive interference patterns when two wave sources interact",
      "date": "2026-01-10",
      "type": "Interactive Simulation",
      "format": "text/html",
      "language": "en-US",
      "rights": "CC BY-SA 4.0"
    },
    "educational": {
      "subjectArea": ["Physics"],
      "gradeLevel": ["High School", "Undergraduate"],
      "difficulty": "Intermediate",
      "topic": "Wave Physics",
      "learningObjectives": [
        "Explain how wave interference creates patterns",
        "Predict constructive vs destructive interference locations",
        "Analyze the effect of wavelength on interference patterns"
      ],
      "prerequisites": ["Basic wave concepts", "Understanding of wavelength and frequency"]
    },
    "technical": {
      "framework": "p5.js",
      "libraryVersion": "1.9.0",
      "fileSize": "52KB",
      "browserSupport": ["Chrome", "Firefox", "Safari", "Edge"],
      "mobileSupport": true
    },
    "pedagogical": {
      "pattern": "exploration",
      "bloomsTaxonomy": ["Remember", "Understand", "Apply", "Analyze", "Create"],
      "bloomAlignment": ["understand", "apply", "analyze"],
      "bloomVerbs": ["explain", "predict", "analyze", "experiment"],
      "pacing": "self-paced",
      "supportsPrediction": true,
      "dataVisibility": "high",
      "interactionStyle": "manipulate"
    },
    "search": {
      "visualizationType": ["simulation", "animation"],
      "interactionLevel": "high",
      "keywords": [
        "wave interference",
        "constructive interference",
        "destructive interference",
        "superposition",
        "wave patterns",
        "physics simulation",
        "ripple tank"
      ]
    },
    "tags": [
      "waves",
      "interference",
      "physics",
      "interactive",
      "NGSS-aligned",
      "college-prep"
    ]
  }
}

This metadata enables:

  • Dublin Core: Basic discoverability in any system
  • Educational fields: Precise curriculum matching
  • Technical fields: Compatibility verification
  • Pedagogical fields: Appropriate template matching based on learning objectives
  • Keywords: Natural language search
  • Tags: Community categorization

The Superpower Unlocked

With comprehensive metadata, a teacher searching "wave interference simulation for AP Physics" instantly finds this MicroSim. The search matches "wave interference" from keywords, "Physics" from subject, "High School" from grade level, and ranks it highly due to learning objectives that match AP curriculum expectations.

Key Takeaways

  1. Metadata is data about data that makes resources findable and usable
  2. Metadata standards like Dublin Core provide consistent, interoperable frameworks
  3. Dublin Core's 15 elements cover content, intellectual property, and instantiation
  4. Taxonomies organize subjects hierarchically for browse and search
  5. Classification systems enable multi-dimensional categorization and faceted search
  6. Subject normalization maps variant terms to standard forms
  7. MicroSim standards extend generic metadata with educational and technical fields
  8. Pedagogical metadata matches interaction patterns to learning objectives, preventing visually impressive but instructionally ineffective designs
  9. Schema compliance ensures data quality and interoperability
  10. Tags provide flexible, informal categorization
  11. Folksonomies emerge from user-generated tags, capturing real-world language
  12. Keywords optimize content for search query matching
  13. Controlled vocabularies enforce consistency in critical fields
  14. Technical metadata describes compatibility and performance characteristics

Ready to learn about the Dublin Core standard? Continue to Chapter 4: Dublin Core Elements.