Leveraging Generative AI
Summary
This chapter explores how to effectively use generative AI tools to accelerate MicroSim development. You will learn about large language models including ChatGPT and Claude, and develop prompt engineering skills for generating simulation code. The chapter covers iterative refinement techniques, AI-assisted debugging, and important concepts like token limits and context windows. You will also learn about AI limitations and hallucinations to use these tools responsibly, plus advanced techniques using rules files and skills development.
Concepts Covered
This chapter covers the following 16 concepts from the learning graph:
- Generative AI
- Large Language Models
- ChatGPT
- Claude
- AI Prompting
- Prompt Engineering
- Iterative Refinement
- AI Code Generation
- Code Debugging with AI
- Token Limits
- Context Window
- AI Hallucinations
- AI Limitations
- Rules Files
- Skills Development
- Claude Code
Prerequisites
This chapter builds on concepts from:
Meet Your Development Partner
Welcome to one of the most transformative chapters in this course. Here, you'll meet the partner who will help you build MicroSims faster than you ever imagined possible: the Large Language Model (LLM).
But here's the crucial insight that separates successful AI-assisted developers from frustrated ones: an LLM is a partner, not a replacement. Like any good partnership, success depends on understanding each other's strengths and limitations, communicating clearly, and developing shared patterns of collaboration.
Your task in this chapter is to learn how to bring out the best in your AI partner—to ask questions that elicit brilliant responses, to provide context that enables creative solutions, and to guide iterations that refine rough drafts into polished simulations.
The Partnership Mindset
Think of working with an LLM like collaborating with a brilliant but literal-minded colleague who has read millions of code examples but has never actually run a program. They can suggest solutions you'd never think of, but they need your judgment to evaluate what actually works.
What is Generative AI?
Generative AI refers to artificial intelligence systems that can create new content—text, images, code, music—rather than simply analyzing or classifying existing data. These systems learn patterns from massive datasets and use those patterns to generate novel outputs.
For MicroSim development, generative AI offers remarkable capabilities:
- Generate complete p5.js sketches from natural language descriptions
- Suggest solutions to visual layout problems
- Debug code by analyzing error messages
- Refactor existing simulations to add new features
- Explain complex code in plain language
The generative AI systems most useful for coding are Large Language Models (LLMs)—neural networks trained on vast amounts of text, including billions of lines of source code.
Understanding Large Language Models
Large Language Models are the engines behind tools like ChatGPT and Claude. They work by predicting what text should come next, given some input context. This simple mechanism—next-token prediction—gives rise to surprisingly sophisticated behaviors.
How LLMs Work (Simplified)
When you type a prompt, the LLM:
- Converts your text into numerical tokens
- Processes tokens through billions of neural network parameters
- Calculates probabilities for what token should come next
- Samples from those probabilities to generate output
- Repeats until the response is complete
This process happens incredibly fast—generating hundreds of tokens per second.
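The core loop is simple enough to sketch in a few lines of JavaScript. Everything in the example below is invented for illustration (the tiny vocabulary, the probabilities, the stop token); a real model recomputes its probabilities from billions of parameters and the full conversation at every step:

```javascript
// Toy next-token loop: pick tokens by weighted random sampling until "END".
// The vocabulary and probabilities are made up for illustration only.
const vocabulary = [
  { token: 'ball',   p: 0.4 },
  { token: 'slider', p: 0.3 },
  { token: 'canvas', p: 0.2 },
  { token: 'END',    p: 0.1 }
];

function sampleNextToken(vocab) {
  let r = Math.random();                 // random point in [0, 1)
  for (const { token, p } of vocab) {
    if (r < p) return token;             // r landed in this token's probability slice
    r -= p;
  }
  return vocab[vocab.length - 1].token;  // guard against floating-point rounding
}

const output = [];
let next;
do {
  next = sampleNextToken(vocabulary);    // a real LLM recomputes probabilities
  output.push(next);                     // from all prior tokens at this point
} while (next !== 'END' && output.length < 20);

console.log(output.join(' '));
```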
Diagram: LLM Token Processing Flow
Key LLM Characteristics
| Characteristic | Implication for MicroSim Development |
|---|---|
| Pattern-based | Generates code similar to training examples |
| Probabilistic | Same prompt can produce different outputs |
| Context-dependent | Quality depends heavily on input quality |
| No execution | Cannot run or test the code it generates |
| No memory | Each conversation starts fresh (mostly) |
ChatGPT vs Claude: Choosing Your Partner
The two leading LLMs for code generation are ChatGPT (from OpenAI) and Claude (from Anthropic). Both are excellent for MicroSim development, with some differences in style and capability.
ChatGPT
ChatGPT, powered by GPT-4 and its variants, offers:
- Broad knowledge across many programming languages
- Strong JavaScript and p5.js understanding
- Creative problem-solving approaches
- Wide availability (free tier available)
Claude
Claude, created by Anthropic, offers:
- Precise adherence to detailed instructions
- Strong reasoning about complex requirements
- Careful handling of edge cases and potential issues
- Extended context windows for larger projects
- Claude Code for integrated development
Which Should You Use?
For MicroSim development, both work well. This course uses Claude examples because:
- Claude Code integrates directly with your development environment
- Extended context windows handle larger simulations
- Strong instruction-following suits structured MicroSim patterns
- Consistent formatting for educational content
Tool Agnostic Skills
The prompt engineering skills you learn here apply to any LLM. Once you master communicating with one, you can easily adapt to others.
The Art of AI Prompting
AI prompting is the skill of crafting inputs that elicit useful outputs from an LLM. It's part communication, part psychology, and part technical specification.
The Anatomy of an Effective Prompt
A good prompt for MicroSim development typically includes:
- Context: What kind of simulation are you building?
- Specification: What exactly should it do?
- Constraints: What patterns or standards must it follow?
- Examples: What does good output look like?
- Format: How should the response be structured?
Basic Prompting Example
Here's a simple prompt and its improvement:
Weak prompt:
Make a bouncing ball
Strong prompt:
Create a p5.js sketch with a ball that bounces within the canvas boundaries. Use these specifications:

- Canvas size: 400x400 pixels
- Ball diameter: 40 pixels
- Ball color: Blue (#4488FF)
- Background: Alice blue (240, 248, 255)
- Include velocity variables for x and y movement
- Reverse direction when hitting any wall
- Add a slider to control ball speed (range 1-10, default 3)
The second prompt provides specific, actionable details that dramatically improve output quality.
Prompt Engineering: From Art to Science
Prompt engineering takes basic prompting to a systematic level. It's the discipline of designing prompts that reliably produce high-quality outputs.
Prompt Engineering Techniques
| Technique | Description | Example Use |
|---|---|---|
| Few-shot learning | Provide examples of desired output | Show 2-3 similar MicroSims |
| Chain of thought | Ask for step-by-step reasoning | "First, outline the approach..." |
| Role prompting | Assign a persona to the AI | "As a p5.js expert..." |
| Constraints first | State limitations upfront | "Using only these functions..." |
| Structured output | Request specific format | "Return as: 1) Code 2) Explanation" |
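These techniques combine well. A single prompt that uses role prompting, constraints first, and structured output might look like this; the wording is only one possibility:

```
As a p5.js expert, create a bouncing-ball MicroSim.
Use only these functions: createCanvas, createSlider, background, fill, circle.
Return your answer as:
1) The complete sketch code
2) A one-sentence explanation of each control
```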
The MicroSim Prompt Template
For consistent results, use a structured prompt template when requesting new MicroSims.
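A template along these lines covers the essentials. Every bracketed field is a placeholder you fill in, and you can add or remove sections to match your own standards:

```
Create a p5.js MicroSim with the following specification.

Title: [name of the simulation]
Learning goal: [what the user should understand after exploring it]

Canvas: [width] x [height] pixels, background [color]

Visual elements:
- [element 1, with size and color]
- [element 2, with size and color]

Controls:
- [slider name]: range [min]-[max], default [value]
- [button name]: [what it does]

Behavior:
- [rule 1, for example "reverse direction at the walls"]
- [rule 2]

Code standards:
- Declare all state variables at the top of the sketch
- Comment each function
- Follow any rules file provided
```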
Diagram: Prompt Engineering Workflow
Iterative Refinement: The Key to Quality
Iterative refinement is the process of improving AI-generated code through successive conversations. Rarely does an LLM produce perfect code on the first try—but it can get there quickly with good feedback.
The Refinement Cycle
- Generate: Get initial code from your prompt
- Test: Run the code and observe behavior
- Identify: Note what's wrong or missing
- Specify: Describe the issue precisely to the AI
- Regenerate: Get updated code
- Repeat: Continue until satisfied
Effective Refinement Prompts
When refining, be specific about what needs to change:
Vague (less effective):
"The slider isn't working right"
Specific (more effective):
"The speed slider (line 45) currently ranges from 0-100 but the ball moves too fast even at low values. Please change the range to 1-10 and multiply the slider value by 0.5 when applying it to velocity."
Before and After Example
First attempt output (problems):
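Exact output varies from run to run, but a typical first attempt looks something like the sketch below: the slider range is far too wide, its raw value is applied directly to the position, and there is no bounce check.

```javascript
let ballX = 200;
let ballY = 200;
let speedSlider;

function setup() {
  createCanvas(400, 400);
  speedSlider = createSlider(0, 100, 50);   // range is far too wide
}

function draw() {
  background(240, 248, 255);
  ballX += speedSlider.value();             // raw value: the ball races off screen
  circle(ballX, ballY, 40);                 // no bounce check on any wall
}
```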
After refinement prompt:
"The ball moves too fast. Please: 1) Change slider range to 1-10, 2) Multiply slider value by 0.3 for smoother movement, 3) Add wall bouncing"
Refined output:
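After that refinement request, the regenerated sketch should come back looking roughly like this; again, the exact code will differ:

```javascript
let ballX = 200;
let ballY = 200;
let dirX = 1;
let dirY = 1;
let speedSlider;

function setup() {
  createCanvas(400, 400);
  speedSlider = createSlider(1, 10, 3);   // range 1-10, as requested
}

function draw() {
  background(240, 248, 255);

  // Scale the slider value by 0.3 for smoother movement
  let speed = speedSlider.value() * 0.3;
  ballX += dirX * speed;
  ballY += dirY * speed;

  // Reverse direction when the ball (diameter 40) touches any wall
  if (ballX < 20 || ballX > width - 20) dirX *= -1;
  if (ballY < 20 || ballY > height - 20) dirY *= -1;

  circle(ballX, ballY, 40);
}
```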
AI Code Generation: What LLMs Do Well
AI code generation is remarkably capable for MicroSim development. Here's where your AI partner truly shines:
Strengths of AI Code Generation
- Boilerplate generation: Setup code, standard patterns, repetitive structures
- API translation: Converting descriptions to correct function calls
- Pattern application: Using established coding patterns consistently
- Syntax accuracy: Generating syntactically correct code
- Variation creation: Making similar versions of existing code
- Documentation: Adding comments and explanations
Example: Generating a Complete MicroSim
Given a detailed prompt, an LLM can generate a complete, working MicroSim in seconds:
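The example below is representative rather than a transcript of model output: it implements the bouncing-ball specification from the strong prompt earlier in this chapter, which is the kind of result a single detailed prompt can produce.

```javascript
// Bouncing Ball MicroSim
// Implements the bouncing-ball specification from the "strong prompt" example.

let ballX, ballY;          // ball position
let velocityX, velocityY;  // velocity components
let speedSlider;           // controls how fast the ball moves

function setup() {
  createCanvas(400, 400);
  ballX = width / 2;
  ballY = height / 2;
  velocityX = 1;
  velocityY = 1.3;   // slightly different so the path is not a plain diagonal

  // Speed slider: range 1-10, default 3
  speedSlider = createSlider(1, 10, 3);
  speedSlider.position(10, 410);
}

function draw() {
  background(240, 248, 255);   // alice blue

  const speed = speedSlider.value();

  // Move the ball
  ballX += velocityX * speed;
  ballY += velocityY * speed;

  // Reverse direction when hitting any wall (ball diameter is 40)
  if (ballX < 20 || ballX > width - 20) velocityX *= -1;
  if (ballY < 20 || ballY > height - 20) velocityY *= -1;

  // Keep the ball inside the canvas even at high speeds
  ballX = constrain(ballX, 20, width - 20);
  ballY = constrain(ballY, 20, height - 20);

  // Draw the ball in blue (#4488FF)
  noStroke();
  fill('#4488FF');
  circle(ballX, ballY, 40);

  // Show the current slider value
  fill(0);
  textSize(14);
  text('Speed: ' + speed, 10, height - 12);
}
```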
A complete, functional MicroSim like this can come from a single well-crafted prompt.
Code Debugging with AI: Your Error-Solving Partner
Code debugging with AI transforms frustrating error messages into learning opportunities. When your code doesn't work, your AI partner can help diagnose and fix the problem.
How to Debug with AI
- Copy the error message exactly as shown
- Include relevant code (the function with the error)
- Describe expected vs actual behavior
- Ask for explanation, not just a fix
Debugging Prompt Template
```
I'm getting this error in my p5.js sketch:
[paste the exact error message]

Here is the relevant code:
[paste the function where the error occurs]

Expected behavior: [What should happen]
Actual behavior: [What actually happens]

Please explain why this error occurs and how to fix it.
```
Using Rules Files Effectively
- Start conversations by referencing or including rules
- Keep rules updated as your standards evolve
- Be specific—vague rules get vague compliance
- Provide examples of correct implementation, as in the rules file sketched below
- Organize by topic for easy reference
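To make that concrete, here is a sketch of what a MicroSim rules file might contain. The file name, headings, and specific values are only an example; encode whatever standards your own project uses:

```
# MicroSim rules

## Layout
- Default canvas size: 400 x 400 pixels unless the request says otherwise
- Drawing background: alice blue, background(240, 248, 255)
- Place controls below the drawing region

## Controls
- Create sliders with createSlider(min, max, default)
- Give every control a text label that shows its current value

## Code style
- Declare all state variables at the top of the sketch
- Use descriptive names (ballX, speedSlider) rather than single letters
- Add a one-line comment above each function
```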
Skills Development: Building Reusable AI Workflows
Skills development refers to creating structured, reusable prompts and workflows that reliably produce high-quality outputs. As you gain experience, you'll develop skills that can be shared and refined.
Components of an AI Skill
A well-developed skill includes:
- Purpose statement: What does this skill accomplish?
- Input requirements: What information is needed?
- Prompt template: The structured prompt that works
- Output format: What the result should look like
- Quality checks: How to verify the output
- Refinement patterns: Common adjustments needed
Example: MicroSim Generation Skill
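Here is one way such a skill might be written down. The structure follows the components listed above; the wording, the rules.md file name, and the specific checks are only examples to adapt:

```
Skill: Generate a standard MicroSim

Purpose: Produce a complete p5.js MicroSim from a short description.

Inputs needed:
- Concept to simulate (for example, a bouncing ball or a pendulum)
- Canvas size and color scheme
- Controls, with ranges and default values

Prompt template:
  Create a p5.js MicroSim of [concept].
  Canvas: [width] x [height], background [color].
  Controls: [list of sliders and buttons].
  Follow the standards in rules.md.

Output format: a single sketch.js file plus a short explanation of the controls.

Quality checks:
- The sketch runs without console errors
- Every control changes the behavior it claims to control
- The layout matches the requested canvas size

Common refinements:
- Adjust slider ranges after the first test run
- Tune colors and spacing for readability
```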
Claude Code: Integrated Development Partnership
Claude Code is a command-line tool that integrates Claude directly into your development workflow. Instead of copying code between a chat interface and your editor, Claude Code can read and write files directly.
Claude Code Capabilities
| Capability | Benefit for MicroSim Development |
|---|---|
| Read project files | Claude sees your existing code and patterns |
| Write/edit files | Changes applied directly to your project |
| Run commands | Execute tests, start servers, check errors |
| Multi-file context | Understands relationships between files |
| Persistent session | Maintains context across multiple requests |
Working with Claude Code
Instead of:
"Here's my code [paste 200 lines]. Please add a reset button."
You can say:
"Read sketch.js and add a reset button that returns all values to their defaults."
Claude Code will:

1. Read the actual file
2. Understand the existing structure
3. Add the button with proper integration
4. Write the updated file
Claude Code for MicroSim Iteration
The iterative refinement cycle becomes much faster:
- Request: "Add a slider to control the pendulum length"
- Claude Code reads your existing file
- Claude Code writes the updated version
- You test in the browser
- Request refinement: "The pendulum is too long at maximum. Limit the range to 50-150."
- Repeat until perfect
This tight integration loop dramatically accelerates development.
Bringing Out the Best in Your Partner
Throughout this chapter, we've emphasized the partnership model. Here are the key principles for successful collaboration:
The Partnership Principles
- Be specific: Vague requests get vague results
- Provide context: Include relevant standards and examples
- Iterate willingly: First drafts are starting points
- Verify everything: You are the final quality check
- Learn together: Note what prompts work best
- Share knowledge: Use rules files to maintain consistency
Communication Best Practices
| Do | Don't |
|---|---|
| Specify exact values (pixels, ranges, colors) | Use vague terms ("bigger", "nicer") |
| Describe the problem you see | Just say "it's broken" |
| Include error messages verbatim | Paraphrase errors |
| Reference existing patterns | Explain everything from scratch |
| Ask for explanations | Accept code blindly |
| Test before assuming | Trust outputs without verification |
Key Takeaways
You've learned how to work effectively with your AI development partner. Here are the essential insights:
- Generative AI creates new content, including code for MicroSims.
- LLMs like ChatGPT and Claude predict text based on patterns, making them powerful but imperfect code generators.
- Prompt engineering transforms vague requests into specific instructions that produce quality code.
- Iterative refinement is normal and expected; plan for 2-4 cycles per MicroSim.
- AI code generation excels at boilerplate, pattern application, and syntax, but needs human judgment for visual design.
- Debugging with AI works best when you provide exact error messages and describe expected behavior.
- Token limits and context windows constrain how much information the AI can process at once.
- AI hallucinations are common; always verify function names and syntax against documentation.
- AI limitations mean you must test all code and evaluate all visual output yourself.
- Rules files encode your standards for consistent AI behavior across sessions.
- Skills development creates reusable workflows that improve over time.
- Claude Code integrates AI assistance directly into your development environment.
Challenge: Write a prompt for a pendulum simulation
Try writing a complete prompt for a pendulum MicroSim using the template from this chapter. Include: canvas size, visual elements (string, bob, pivot point), controls (angle, length, gravity sliders), physics behavior, and code standards. Compare your prompt to a classmate's—how do they differ?
Next Steps
You now understand both the power and limitations of your AI partner. In the next chapter, we'll apply these skills to create increasingly sophisticated simulations, using the prompt engineering and iterative refinement techniques you've learned here.
Remember: the goal isn't to have AI write all your code—it's to accelerate your development while deepening your understanding. The best MicroSims come from the collaboration between human creativity and AI capability.
Go forth and prompt wisely!
References
- Claude Documentation - Official Claude guides and API reference
- OpenAI Documentation - ChatGPT and GPT-4 documentation
- Prompt Engineering Guide - Comprehensive prompting techniques
- Claude Code - Integrated development tool
- p5.js Reference - Official p5.js documentation for verification