# Concept Validator Skill

## Summary

This skill validates that every concept in the learning graph is properly integrated throughout the textbook. It checks coverage in chapters, glossary definitions, quiz questions, and MicroSims, identifies gaps, and verifies pedagogical consistency across the entire educational resource.
## Order

This skill should be executed:

- After the learning graph is complete
- After initial chapter content generation
- Before glossary finalization
- Periodically during content development (every 10-15 chapters)
- As final validation before publication

The validator cross-references all textbook components, so it requires the learning graph and at least some content to produce meaningful results. It is most valuable as a quality gate at key milestones.
## Inputs

### Primary Input Files

- **Learning Graph** (`docs/learning-graph/03-concept-dependencies.csv`) - Complete list of concepts to validate, with prerequisite relationships for scaffolding checks
    - Quality check: Valid DAG structure, no cycles
- **Concept Taxonomy** (`docs/learning-graph/04-concept-taxonomy.csv`) - Concept categorization for organizational checks
    - Quality check: All concepts categorized
- **All Chapter Content** (`docs/**/*.md`) - Source of concept explanations
    - Quality check: Meaningful content exists (not just stubs)
- **Glossary** (`docs/glossary.md`) - Concept definitions
    - Quality check: Alphabetically ordered, valid markdown
- **Quiz Files** (`docs/**/quiz.md` or `*-quiz.md`) - Assessment coverage
    - Quality check: Valid quiz format
- **MicroSims** (`docs/sims/*/index.md`) - Interactive concept demonstrations
    - Quality check: All MicroSims functional
### Optional Input Files

- **Course Description** (`docs/course-description.md`) - Learning objectives for alignment checking
    - Quality check: Contains Bloom's Taxonomy outcomes
- **FAQ** (`docs/faq.md`) - Student support coverage
    - Quality check: Questions mapped to concepts
- **Previous Validation Reports** (`docs/learning-graph/validation-reports/`) - Historical data for trend analysis
    - Quality check: Valid JSON format
### Input Quality Metrics (Scale 1-100)

Validation Readiness Score:

- 90-100: All inputs present, substantial content for all concepts
- 70-89: Most inputs present, some concepts have limited coverage
- 50-69: Basic inputs present, many concepts lack comprehensive coverage
- Below 50: Critical inputs missing or insufficient content

Pre-Validation Checks:

- Learning graph validity (required: no cycles)
- Minimum content threshold (recommended: 50+ markdown files)
- Glossary exists (recommended)
- At least 1 quiz (recommended)
- At least 1 MicroSim (recommended)

User Dialog Triggers:

- Score < 50: "Insufficient content for meaningful validation. Minimum requirements not met. Continue with limited validation?"
- No glossary: "No glossary found. Concept definition validation will be skipped. Proceed?"
- No quizzes: "No quizzes detected. Assessment coverage validation will be skipped. Continue?"
- Learning graph invalid: "Learning graph has cycles or errors. Fix before validation?"
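The pre-validation checks above amount to a few filesystem probes. A minimal sketch, assuming the standard `docs/` layout described in the Inputs section; the function name and returned keys are illustrative, not part of the skill's actual interface:

```python
from pathlib import Path

def prevalidation_checks(docs_dir: str) -> dict:
    """Probe the docs tree for the recommended pre-validation inputs.
    All names here are illustrative assumptions, not the skill's API."""
    docs = Path(docs_dir)
    md_files = list(docs.rglob("*.md"))
    return {
        # Recommended: 50+ markdown files of content
        "min_content": len(md_files) >= 50,
        # Recommended: glossary present at docs/glossary.md
        "glossary_exists": (docs / "glossary.md").exists(),
        # At least one quiz file (loosely matches quiz.md / *-quiz.md)
        "has_quiz": any("quiz" in f.name for f in md_files),
        # At least one MicroSim under docs/sims/
        "has_microsim": (docs / "sims").is_dir(),
    }
```

Each `False` entry would then map to one of the user dialog triggers listed above.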
## Outputs

### Primary Validation Report

`docs/learning-graph/validation-report.md` - Comprehensive human-readable report:

- Executive summary with overall health score
- Concept coverage matrix
- Gap analysis by category
- Prioritized remediation recommendations
- Trend charts (if historical data available)

### Detailed Analysis Files

- `docs/learning-graph/validation-reports/validation-[YYYY-MM-DD].json` - Machine-readable results
    - Timestamp and version
    - Per-concept validation scores
    - Coverage metrics across all components
    - Gap identification
    - Comparison to previous validations
- `docs/learning-graph/concept-coverage-matrix.csv` - Detailed matrix
    - Columns: Concept, Chapter, Glossary, Quiz, MicroSim, FAQ, Overall_Score
    - Rows: One per concept from the learning graph
    - Values: ✓ (covered), ✗ (missing), ⚠ (partial), or coverage percentage
- `docs/learning-graph/gap-analysis.md` - Actionable gaps list
    - Critical gaps: Concepts with <40% coverage
    - High priority gaps: Concepts with 40-60% coverage
    - Medium priority gaps: Concepts with 60-80% coverage
    - Enhancement opportunities: Concepts with >80% coverage but room for improvement
- `docs/learning-graph/scaffolding-validation.md` - Prerequisite compliance
    - Prerequisites taught before dependent concepts?
    - Concept spacing appropriate (not too dense)?
    - Forward/backward references proper?
    - Circular dependency warnings
- `docs/learning-graph/consistency-report.md` - Terminology consistency
    - Concept labels used consistently across components
    - Synonym usage patterns
    - Terminology conflicts or ambiguities
    - Recommendations for standardization
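The coverage matrix is a flat CSV, so producing it is straightforward with the standard library. A sketch using the column names listed above; the function name and row values are illustrative:

```python
import csv

def write_coverage_matrix(rows: list[dict], path: str) -> None:
    """Write one row per concept using the matrix convention above:
    "✓" covered, "✗" missing, "⚠" partial, plus an overall score."""
    columns = ["Concept", "Chapter", "Glossary", "Quiz",
               "MicroSim", "FAQ", "Overall_Score"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        writer.writerows(rows)
```

Keeping the file as plain CSV makes it easy to diff between validation runs and to load into a spreadsheet for review.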
 
## Output Quality Metrics (Scale 1-100)

### Overall Concept Health Score

The validator generates a comprehensive health score based on multiple dimensions:

**1. Coverage Completeness (40 points)**

Per-concept scoring:

- Chapter coverage (15 pts): Concept explained in chapter content
    - Substantial explanation (500+ words): 15 pts
    - Moderate explanation (200-499 words): 10 pts
    - Brief mention (<200 words): 5 pts
    - Not mentioned: 0 pts
- Glossary coverage (10 pts): Concept defined in glossary
    - ISO 11179 compliant definition: 10 pts
    - Adequate definition: 7 pts
    - Weak definition: 4 pts
    - Missing: 0 pts
- Assessment coverage (10 pts): Concept tested in quizzes
    - Multiple quiz questions: 10 pts
    - One quiz question: 7 pts
    - Not tested: 0 pts
- Interactive coverage (5 pts): Concept demonstrated in MicroSim
    - Dedicated MicroSim: 5 pts
    - Included in MicroSim: 3 pts
    - Not demonstrated: 0 pts

**2. Pedagogical Quality (30 points)**

- Scaffolding compliance (15 pts): Prerequisites taught before concept
    - 100% prerequisites covered earlier: 15 pts
    - 80-99% prerequisites covered: 12 pts
    - 60-79% prerequisites covered: 8 pts
    - <60% prerequisites covered: 0 pts (critical issue)
- Bloom's Taxonomy alignment (10 pts): Concept taught at appropriate cognitive level
    - Perfect alignment with course objectives: 10 pts
    - Good alignment: 7 pts
    - Misaligned (too simple or too complex): 3 pts
- Example quality (5 pts): Concrete examples provided
    - Multiple diverse examples: 5 pts
    - One good example: 3 pts
    - No examples: 0 pts

**3. Consistency & Integration (20 points)**

- Terminology consistency (10 pts): Same labels used across components
    - 100% consistent: 10 pts
    - Minor variations: 7 pts
    - Significant inconsistencies: 3 pts
- Cross-referencing (5 pts): Concept linked between components
    - Well-integrated with links: 5 pts
    - Some links: 3 pts
    - Isolated: 0 pts
- Context appropriateness (5 pts): Concept appears in the right sections
    - Optimal placement: 5 pts
    - Acceptable placement: 3 pts
    - Misplaced: 0 pts

**4. Support & Accessibility (10 points)**

- FAQ coverage (5 pts): Common questions addressed
    - FAQ entry exists: 5 pts
    - Partially addressed: 3 pts
    - Not in FAQ: 0 pts
- Multiple modalities (5 pts): Concept explained in different ways
    - 3+ modalities (text, visual, interactive, assessment): 5 pts
    - 2 modalities: 3 pts
    - 1 modality: 1 pt
### Concept Coverage Categories

Based on the overall score, concepts are categorized:

- Excellent (85-100): Comprehensive coverage, ready for publication
- Good (70-84): Solid coverage, minor enhancements possible
- Adequate (55-69): Basic coverage, improvements recommended
- Insufficient (40-54): Significant gaps, remediation needed
- Critical Gap (<40): Major deficiency, immediate attention required
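The four-dimension rubric and the category bands above combine into a single per-concept score. A minimal sketch; the weights (40 + 30 + 20 + 10) and band boundaries come from the rubric, while the function and parameter names are illustrative:

```python
def health_score(coverage: int, pedagogy: int, consistency: int, support: int) -> tuple[int, str]:
    """Sum the four rubric dimensions (max 40 + 30 + 20 + 10 = 100)
    and map the total onto the coverage categories above."""
    total = coverage + pedagogy + consistency + support
    if total >= 85:
        category = "Excellent"
    elif total >= 70:
        category = "Good"
    elif total >= 55:
        category = "Adequate"
    elif total >= 40:
        category = "Insufficient"
    else:
        category = "Critical Gap"
    return total, category
```

For example, a concept scoring 30/25/15/5 across the four dimensions totals 75 and lands in the "Good" band.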
 
## Validation Checks Performed

**1. Coverage Validation:**

- ✓ Every concept from the learning graph mentioned in at least one chapter
- ✓ Every concept has a glossary definition
- ✓ Core concepts (high centrality in graph) tested in quizzes
- ✓ Complex concepts have MicroSim demonstrations
- ✓ Difficult concepts addressed in FAQ

**2. Scaffolding Validation:**

- ✓ Prerequisite concepts always introduced before dependent concepts
- ✓ No circular dependencies in presentation order
- ✓ Concept density appropriate (not too many new concepts per chapter)
- ✓ Forward references include links to upcoming content
- ✓ Review of prerequisites at chapter beginnings

**3. Consistency Validation:**

- ✓ Concept labels match exactly across all components
- ✓ Definitions align between glossary and chapter explanations
- ✓ Examples don't contradict each other
- ✓ Terminology usage consistent
- ✓ No concept presented with conflicting difficulty levels

**4. Quality Validation:**

- ✓ Glossary definitions meet ISO 11179 standards
- ✓ Chapter explanations have appropriate depth for concept importance
- ✓ Quiz questions at the right Bloom's level for the concept
- ✓ MicroSims effectively demonstrate the concept
- ✓ Examples clear and relevant

**5. Integration Validation:**

- ✓ Cross-references between components exist
- ✓ Glossary terms linked from chapters
- ✓ Quiz questions reference chapter sections
- ✓ MicroSims embedded in relevant chapters
- ✓ FAQ answers link to detailed explanations

**6. Accessibility Validation:**

- ✓ Multiple learning pathways to each concept
- ✓ Varied presentation styles (visual, textual, interactive)
- ✓ Support materials for difficult concepts
- ✓ Alternative explanations available
- ✓ Practice opportunities for key concepts
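The core of scaffolding validation is checking that every prerequisite is introduced before the concept that depends on it. A minimal sketch, assuming the dependency CSV has been parsed into `(concept, prerequisite)` pairs and each concept has been mapped to the chapter where it first appears; all names are illustrative:

```python
def scaffolding_violations(deps, first_chapter):
    """Return (concept, prerequisite) pairs where the prerequisite is
    introduced strictly later than the dependent concept, or never.

    deps: iterable of (concept, prerequisite) pairs
    first_chapter: maps concept -> chapter number of first introduction
    Same-chapter introductions need finer (section-level) ordering
    and are not flagged here.
    """
    violations = []
    for concept, prereq in deps:
        # A prerequisite missing from first_chapter is an orphaned
        # concept and counts as a violation (float("inf") > anything).
        if first_chapter.get(prereq, float("inf")) > first_chapter.get(concept, 0):
            violations.append((concept, prereq))
    return violations
```

An empty result means every prerequisite edge respects the presentation order; cycle detection on the raw dependency graph would be a separate pass.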
 
## Gap Analysis Categories

**Critical Gaps (Immediate Action Required):**

- Orphaned concepts: In learning graph but not in any chapter
- Undefined concepts: No glossary definition
- Prerequisite violations: Concept introduced before its prerequisites
- Circular dependencies: Concepts depending on each other

**High Priority Gaps:**

- Untested core concepts: Important concepts without quiz questions
- Unexplained concepts: Mentioned but not explained
- Missing examples: Abstract concepts without concrete examples
- Inconsistent terminology: Same concept with different labels

**Medium Priority Gaps:**

- Limited interactivity: Complex concepts without MicroSims
- Single modality: Concepts only explained one way
- Weak definitions: Glossary entries not meeting ISO standards
- Missing cross-references: Isolated content without links

**Enhancement Opportunities:**

- Add alternative explanations: Multiple approaches to difficult topics
- Increase quiz coverage: More assessment for better learning
- Create MicroSims: Interactive demonstrations for abstract concepts
- Expand examples: More diverse, real-world examples
 
## Detailed Reports

**Per-Concept Analysis:**

For each concept, the report includes:
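One way such an entry could be structured is sketched below, combining the rubric's four dimension scores with the concept's identified gaps. Every field name here is an assumption for illustration, not the validator's actual schema:

```python
def concept_entry(name, scores, gaps):
    """Build one per-concept report entry; field names are illustrative."""
    coverage, pedagogy, consistency, support = scores
    return {
        "concept": name,
        "scores": {
            "coverage_completeness": coverage,       # max 40
            "pedagogical_quality": pedagogy,         # max 30
            "consistency_integration": consistency,  # max 20
            "support_accessibility": support,        # max 10
        },
        "overall": coverage + pedagogy + consistency + support,
        "gaps": gaps,
    }
```

Entries in this shape serialize directly into the dated JSON results file described under Outputs.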
### Trend Analysis (if historical data available)

Tracks improvement over time:

- Overall health score trajectory
- Gap reduction rate
- Coverage improvements by category
- Component completion progress
- Concept integration velocity
 
## Success Criteria

**Publication Ready Thresholds:**

- Overall health score > 75
- No critical gaps
- <10% of concepts with insufficient coverage (score below 40)
- 100% prerequisite compliance
- 90%+ glossary coverage
- 70%+ quiz coverage for core concepts
- 30%+ of concepts with MicroSims
- Terminology consistency > 95%

**Quality Gates:**

- Alpha release: Health score > 60, no critical gaps
- Beta release: Health score > 70, <5% insufficient concepts
- Production release: Health score > 75, all criteria above
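The quality gates reduce to a threshold check over three metrics. A sketch using the gate values listed above; the function name and the simplification to three inputs (ignoring the other publication-ready thresholds) are illustrative:

```python
def release_gate(health_score, critical_gaps, pct_insufficient):
    """Return the highest release gate cleared, or None if even the
    alpha gate fails. pct_insufficient is the share of concepts
    scoring below 40, as a percentage (0-100)."""
    if critical_gaps > 0 or health_score <= 60:
        return None  # fails the alpha gate
    if health_score > 75 and pct_insufficient < 10:
        return "production"
    if health_score > 70 and pct_insufficient < 5:
        return "beta"
    return "alpha"
```

A fuller implementation would also verify the remaining publication-ready thresholds (glossary coverage, quiz coverage, MicroSim share, terminology consistency) before reporting "production".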
 
## Remediation Planning

The validator generates prioritized action items:

**Phase 1 - Critical Fixes (Week 1):**

- Add missing concepts to chapters
- Create glossary definitions for undefined concepts
- Fix prerequisite violations
- Resolve circular dependencies

**Phase 2 - High Priority (Weeks 2-3):**

- Add quiz questions for untested core concepts
- Expand brief concept explanations
- Create examples for abstract concepts
- Standardize inconsistent terminology

**Phase 3 - Medium Priority (Weeks 4-6):**

- Develop MicroSims for complex concepts
- Add FAQ entries for difficult topics
- Create cross-references between components
- Enhance weak glossary definitions

**Phase 4 - Enhancements (Ongoing):**

- Add alternative explanations
- Increase quiz coverage
- Create more MicroSims
- Expand example variety
 
Additional Outputs
docs/learning-graph/concept-importance-analysis.md- Ranks concepts by centrality in learning graph
 - Identifies foundational vs. advanced concepts
 - 
Suggests where to focus quality improvements
 - 
docs/learning-graph/terminology-standardization.md - Lists all term variations found
 - Recommends standard labels
 - 
Find-and-replace suggestions
 - 
docs/learning-graph/cross-reference-suggestions.json - Recommended links to add between components
 - Structured for automated link insertion
 - 
Prioritized by pedagogical value
 - 
docs/learning-graph/coverage-visualization.html - Interactive visualization of concept coverage
 - Learning graph colored by coverage level
 - Clickable nodes showing detailed coverage info
 
## Configuration Options

**Validation Depth:**

- Quick scan (coverage check only)
- Standard validation (coverage + scaffolding + consistency)
- Deep validation (all checks + quality assessment)

**Reporting Detail:**

- Summary only (executive overview)
- Standard (summary + gap list)
- Detailed (per-concept analysis)
- Comprehensive (all reports + visualizations)

**Scope:**

- All concepts (complete validation)
- Critical concepts only (fast check)
- Changed concepts (incremental validation)
- Specific category (targeted validation)

**Thresholds:**

- Strict (publication-ready standards)
- Standard (recommended thresholds)
- Lenient (early development)
- Custom (user-defined thresholds)
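These four option groups could be captured in a small configuration object. A sketch; every field name and default below is an assumption, not the skill's actual configuration format:

```python
from dataclasses import dataclass, field

@dataclass
class ValidatorConfig:
    # Each field mirrors one option group above; names and defaults are assumptions.
    depth: str = "standard"        # "quick" | "standard" | "deep"
    reporting: str = "standard"    # "summary" | "standard" | "detailed" | "comprehensive"
    scope: str = "all"             # "all" | "critical" | "changed" | "category"
    thresholds: str = "standard"   # "strict" | "standard" | "lenient" | "custom"
    # Only consulted when thresholds == "custom"
    custom_thresholds: dict = field(default_factory=dict)
```

For example, an early-development run might use `ValidatorConfig(depth="quick", thresholds="lenient")`, while a pre-publication run would use strict thresholds with comprehensive reporting.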