# Power Iteration Algorithm
Run Power Iteration Fullscreen
Edit the MicroSim with the p5.js editor
## About This MicroSim
Power iteration is a simple yet powerful algorithm for finding the dominant eigenvalue (largest in absolute value) and its corresponding eigenvector. It works by repeatedly multiplying a vector by the matrix and normalizing.
Key Features:
- Step-by-step iteration: Watch each multiply-and-normalize step
- Convergence visualization: Plot of angle error over iterations
- Rayleigh quotient: Real-time eigenvalue estimate
- Theoretical comparison: Green line shows expected convergence rate
- Editable matrix: Try different matrices
## The Algorithm
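Below is a minimal JavaScript sketch of the multiply-and-normalize loop described above, including the Rayleigh quotient estimate the simulation reports. It is an illustration of the method, not the MicroSim's actual p5.js source.

```javascript
// Power iteration for a square matrix A (array of rows) - illustrative sketch.
// Returns an estimate of the dominant eigenvalue and its eigenvector.
function powerIteration(A, maxIter = 100, tol = 1e-10) {
  const n = A.length;
  // Start from a random vector
  let v = Array.from({ length: n }, () => Math.random());
  let lambda = 0;

  for (let k = 0; k < maxIter; k++) {
    // Multiply: w = A v
    const w = A.map(row => row.reduce((s, a, j) => s + a * v[j], 0));

    // Normalize: v = w / ||w||
    const norm = Math.hypot(...w);
    const vNext = w.map(x => x / norm);

    // Rayleigh quotient: lambda ≈ vᵀ A v  (v is unit length after normalizing)
    const Av = A.map(row => row.reduce((s, a, j) => s + a * vNext[j], 0));
    const lambdaNext = vNext.reduce((s, x, i) => s + x * Av[i], 0);

    // Stop when the eigenvalue estimate settles
    if (Math.abs(lambdaNext - lambda) < tol) {
      return { eigenvalue: lambdaNext, eigenvector: vNext };
    }
    lambda = lambdaNext;
    v = vNext;
  }
  return { eigenvalue: lambda, eigenvector: v };
}

// Example: the dominant eigenvalue of [[2, 1], [1, 3]] is (5 + √5)/2 ≈ 3.618
console.log(powerIteration([[2, 1], [1, 3]]));
```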
## Convergence Rate
The error decreases proportionally to |λ₂/λ₁|^k, where:

- λ₁ is the dominant eigenvalue
- λ₂ is the second-largest eigenvalue
- k is the iteration number
Faster convergence when |λ₂/λ₁| is small!
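As a quick check of this rate, the illustrative snippet below (not part of the MicroSim) runs the iteration on the diagonal matrix diag(3, 1), where |λ₂/λ₁| = 1/3, and compares the measured angle error to the predicted (1/3)^k decay.

```javascript
// For A = diag(3, 1) the dominant eigenvector is [1, 0] and |λ₂/λ₁| = 1/3,
// so the angle error should shrink by roughly a factor of 3 each iteration.
const A = [[3, 0], [0, 1]];
let v = [1, 1];                       // deliberately far from [1, 0]
const ratio = 1 / 3;

for (let k = 1; k <= 6; k++) {
  const w = A.map(row => row[0] * v[0] + row[1] * v[1]); // w = A v
  const norm = Math.hypot(w[0], w[1]);
  v = [w[0] / norm, w[1] / norm];                        // normalize

  const angleError = Math.abs(Math.atan2(v[1], v[0]));   // angle to [1, 0]
  console.log(`k=${k}  measured=${angleError.toExponential(2)}  ` +
              `predicted≈${Math.pow(ratio, k).toExponential(2)}`);
}
```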
## How to Use
- Click "Step" to perform one iteration
- Click "Run" for continuous iteration
- Adjust speed slider to control animation speed
- Click "Reset" to start with a new random vector
- Edit matrix cells to try different matrices
## Embedding
To embed this MicroSim in another page, use an iframe that points to the fullscreen version linked above.
## Lesson Plan
### Learning Objectives
Students will be able to:
- Explain how power iteration finds the dominant eigenvector
- Connect the eigenvalue ratio |λ₂/λ₁| to convergence speed
- Use the Rayleigh quotient to estimate eigenvalues
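A small helper like the following illustrative sketch (not part of the MicroSim's code) makes the Rayleigh quotient estimate λ ≈ (xᵀAx)/(xᵀx) concrete:

```javascript
// Rayleigh quotient: given a matrix A and a (not necessarily unit) vector x,
// returns xᵀ A x / xᵀ x, the best eigenvalue estimate for a vector near an
// eigenvector.
function rayleighQuotient(A, x) {
  const Ax = A.map(row => row.reduce((s, a, j) => s + a * x[j], 0));
  const num = x.reduce((s, xi, i) => s + xi * Ax[i], 0); // xᵀ A x
  const den = x.reduce((s, xi) => s + xi * xi, 0);       // xᵀ x
  return num / den;
}

// For an exact eigenvector the quotient is exact: [1, 1] is an eigenvector
// of [[2, 1], [1, 2]] with eigenvalue 3.
console.log(rayleighQuotient([[2, 1], [1, 2]], [1, 1])); // 3
```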
### Suggested Activities
- Predict convergence: Before running, predict if convergence will be fast or slow based on eigenvalue ratio
- Worst case: Find a matrix where power iteration converges very slowly
- Best case: Find a matrix where it converges in one step
- Complex eigenvalues: What happens with [[0, -1], [1, 0]]?
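For the complex-eigenvalue activity, the illustrative experiment below (again, not MicroSim code) shows why the iteration never settles for the rotation matrix in the prompt.

```javascript
// The rotation matrix [[0, -1], [1, 0]] has eigenvalues ±i, so there is no
// dominant real eigenvector: each multiply rotates the vector by 90°, and the
// normalized iterate cycles through four directions forever.
const R = [[0, -1], [1, 0]];
let v = [1, 0];

for (let k = 1; k <= 8; k++) {
  v = [R[0][0] * v[0] + R[0][1] * v[1],
       R[1][0] * v[0] + R[1][1] * v[1]];   // v = R v (already unit length)
  console.log(`k=${k}: [${v[0]}, ${v[1]}]`);
}
// Output cycles: [0, 1], [-1, 0], [0, -1], [1, 0], [0, 1], ...
```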
### Assessment Questions
- Why does power iteration find the dominant eigenvalue specifically?
- If |λ₁| = |λ₂|, what goes wrong with power iteration?
- How does the Rayleigh quotient provide a better eigenvalue estimate than just looking at vector scaling?