Collection Quality Dashboard

About This MicroSim

This interactive dashboard provides a comprehensive view of MicroSim collection quality. Multiple visualization panels display different aspects of the collection, enabling educators and administrators to assess coverage, identify gaps, and prioritize improvement efforts.

Dashboard Panels

| Panel | Purpose | Key Metrics |
|-------|---------|-------------|
| Collection Overview | Total count and trend | 432 MicroSims, +14 from last crawl |
| Subject Distribution | Balance across subjects | Mathematics leads with 145 |
| Quality Score | Overall quality breakdown | 76% average, donut chart |
| Grade Levels | Coverage by education level | Grades 9-12 most represented |
| Field Completeness | Metadata quality | 9 fields tracked, color-coded |
| Repository Contributions | Source breakdown | Top 5 repos as treemap |
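
The Quality Score donut in the table above could be rendered with p5.js arcs. This is a minimal sketch, assuming the panel's geometry, palette, and label placement; only the 76% average comes from the table:

```js
// Minimal p5.js sketch of a quality-score donut. Only the 76% figure comes
// from the dashboard; geometry and colors are illustrative assumptions.
const qualityScore = 0.76; // 76% average from the Quality Score panel

function setup() {
  createCanvas(200, 200);
  noLoop();
}

function draw() {
  background(255);
  const cx = width / 2, cy = height / 2;
  const outer = 140;
  // Background ring representing the remaining share
  noStroke();
  fill(230);
  ellipse(cx, cy, outer, outer);
  // Filled arc for the average score, starting at 12 o'clock
  fill(46, 160, 67); // assumed green for the scored portion
  arc(cx, cy, outer, outer, -HALF_PI, -HALF_PI + TWO_PI * qualityScore, PIE);
  // Punch out the center to turn the pie into a donut
  fill(255);
  ellipse(cx, cy, outer * 0.6, outer * 0.6);
  // Centered percentage label
  fill(0);
  textAlign(CENTER, CENTER);
  textSize(24);
  text(round(qualityScore * 100) + '%', cx, cy);
}
```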

Key Features

  • 6 coordinated visualization panels
  • Color-coded quality indicators (red/yellow/green) - see the sketch after this list
  • Trend indicators showing changes since last crawl
  • Field completeness grid with progress bars
  • Repository treemap sized by contribution
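
The color-coded indicators and completeness progress bars listed above could follow a simple threshold rule. The sketch below assumes the thresholds (50% / 80%), geometry, and example value; none are taken from the actual dashboard:

```js
// Hedged sketch of a red/yellow/green rule and a completeness bar.
// Thresholds and all geometry are assumptions.
function qualityColor(value) {
  if (value < 0.5) return color(220, 53, 69);  // red: needs attention
  if (value < 0.8) return color(255, 193, 7);  // yellow: acceptable
  return color(46, 160, 67);                   // green: healthy
}

function drawProgressBar(x, y, w, h, value, label) {
  noStroke();
  fill(230);
  rect(x, y, w, h, h / 2);           // track
  fill(qualityColor(value));
  rect(x, y, w * value, h, h / 2);   // filled portion
  fill(0);
  textAlign(LEFT, CENTER);
  text(label + ' ' + round(value * 100) + '%', x + w + 8, y + h / 2);
}

function setup() {
  createCanvas(420, 40);
  noLoop();
}

function draw() {
  background(255);
  // 62% is an illustrative value, not a real completeness figure
  drawProgressBar(10, 12, 240, 16, 0.62, 'learningObjectives');
}
```

The Field Completeness grid would then call drawProgressBar once per tracked field inside draw().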

Learning Objectives

After using this simulation, students will be able to:

  1. Assess overall collection quality from multiple metrics
  2. Identify areas needing improvement (low completeness fields)
  3. Evaluate balance across subjects and grade levels

Lesson Plan

Grade Level

Undergraduate / Graduate (Data Quality, Educational Technology)

Duration

20-25 minutes

Materials Needed

  • This dashboard visualization
  • Understanding of data quality metrics

Procedure

  1. Introduction (3 min): Discuss why data quality matters for search systems

  2. Panel Exploration (10 min):

     • Start with Collection Overview - what's the overall health?
     • Examine Subject Distribution - is coverage balanced?
     • Check Quality Score distribution - what percentage is high quality?
     • Review Grade Levels - are all educational levels served?
     • Analyze Field Completeness - which fields need attention?
     • Study Repository Contributions - who are the top contributors?

  3. Quality Assessment (5 min):

     • Which metrics indicate good collection health?
     • Which areas need improvement?
     • What's the relationship between repository quality and count?

  4. Discussion (5 min):

     • How would you prioritize improvement efforts?
     • What's the cost of low completeness in learningObjectives vs. title?
     • How often should quality metrics be reviewed?

Assessment

Students should be able to:

  • Interpret the quality score donut chart
  • Identify the field with lowest completeness
  • Explain why required fields (*) have 100% completeness
  • Recommend 3 specific improvement actions
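
For the completeness questions above, the sketch below shows one way per-field completeness might be computed over metadata records. The record shape, field subset, and toy data are assumptions, not the collection's actual schema:

```js
// Hypothetical sketch: compute per-field completeness across records and
// report the weakest field. Field names and record shape are assumed.
const fields = ['title', 'description', 'subject', 'gradeLevel',
                'learningObjectives']; // assumed subset of the 9 tracked fields

function fieldCompleteness(records) {
  const stats = {};
  for (const f of fields) {
    const filled = records.filter(r => r[f] != null && r[f] !== '').length;
    stats[f] = filled / records.length;
  }
  return stats;
}

function lowestField(stats) {
  // Pick the [field, completeness] pair with the smallest completeness
  return Object.entries(stats)
    .reduce((min, cur) => (cur[1] < min[1] ? cur : min));
}

// Toy data, not real collection records. A required field like title is
// present in every record, which is why required fields show 100%.
const records = [
  { title: 'Pendulum Lab', subject: 'Physics', gradeLevel: '9-12' },
  { title: 'Fraction Bars', description: 'Visual fractions', subject: 'Mathematics' },
];
console.log(lowestField(fieldCompleteness(records))); // ['learningObjectives', 0]
```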

Technical Details

Framework: p5.js

Canvas Size: Responsive width, 560px height

Visualization Types: Bar charts, donut chart, grid, treemap
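
The responsive-width, fixed-height behavior noted above is typically achieved in p5.js with windowResized and resizeCanvas. A minimal sketch, assuming a 'dashboard' container id (hypothetical):

```js
// Responsive-canvas pattern: full container width, fixed 560px height.
// The 'dashboard' container id is an assumption for illustration.
const CANVAS_HEIGHT = 560;

function setup() {
  const container = document.getElementById('dashboard');
  createCanvas(container ? container.offsetWidth : windowWidth, CANVAS_HEIGHT);
}

function windowResized() {
  const container = document.getElementById('dashboard');
  resizeCanvas(container ? container.offsetWidth : windowWidth, CANVAS_HEIGHT);
}
```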
