# Optimizer Comparison Arena
Edit the MicroSim with the p5.js editor
## About This MicroSim
This arena lets you race four popular optimization algorithms against each other on different loss landscapes. Watch as they navigate toward the minimum (green dot) with varying strategies.
## The Optimizers
| Optimizer | Color | Key Feature |
|---|---|---|
| SGD | Blue | Basic gradient descent |
| Momentum | Green | Accumulates velocity |
| RMSprop | Purple | Adapts per-parameter rates |
| Adam | Orange | Combines momentum + adaptive rates |
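The four update rules can be sketched in plain JavaScript. This is a minimal, self-contained sketch; the function names and hyperparameter defaults are illustrative, not taken from the MicroSim's source. Each factory returns a step function that maps a point `p = [x, y]` and its gradient `g` to the next point.

```javascript
// Plain gradient descent: step opposite the gradient
function makeSGD(lr = 0.01) {
  return (p, g) => p.map((pi, i) => pi - lr * g[i]);
}

// Momentum: accumulate a velocity that smooths and accelerates
// steps taken in a consistent direction
function makeMomentum(lr = 0.01, beta = 0.9) {
  let v = [0, 0];
  return (p, g) => {
    v = v.map((vi, i) => beta * vi + g[i]);
    return p.map((pi, i) => pi - lr * v[i]);
  };
}

// RMSprop: divide by a running RMS of each coordinate's gradient,
// so steep and shallow directions get comparable step sizes
function makeRMSprop(lr = 0.01, decay = 0.9, eps = 1e-8) {
  let s = [0, 0];
  return (p, g) => {
    s = s.map((si, i) => decay * si + (1 - decay) * g[i] * g[i]);
    return p.map((pi, i) => pi - lr * g[i] / (Math.sqrt(s[i]) + eps));
  };
}

// Adam: momentum on the gradient plus an RMSprop-style denominator,
// with bias correction for the zero-initialized averages
function makeAdam(lr = 0.01, b1 = 0.9, b2 = 0.999, eps = 1e-8) {
  let m = [0, 0], s = [0, 0], t = 0;
  return (p, g) => {
    t += 1;
    m = m.map((mi, i) => b1 * mi + (1 - b1) * g[i]);
    s = s.map((si, i) => b2 * si + (1 - b2) * g[i] * g[i]);
    return p.map((pi, i) => {
      const mHat = m[i] / (1 - Math.pow(b1, t)); // bias-corrected mean
      const sHat = s[i] / (1 - Math.pow(b2, t)); // bias-corrected variance
      return pi - lr * mHat / (Math.sqrt(sHat) + eps);
    });
  };
}
```

Because each factory owns its state (`v`, `s`, `m`), four racers can share one landscape while stepping independently.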
## How to Use
- **Select Landscape**: Choose from Quadratic, Rosenbrock, Beale, or Saddle
- **Enable/Disable Optimizers**: Check which optimizers to include
- **Race!**: Click to start the competition
- **Observe**: Watch which optimizer reaches the minimum first
## Loss Landscapes
- **Quadratic**: Elongated elliptical contours that test how optimizers handle a large condition number
- **Rosenbrock**: The famous banana-shaped valley, with its global minimum at (1, 1)
- **Beale**: Multiple local valleys with tricky gradients
- **Saddle Point**: Tests optimizer behavior near a saddle point, where the gradient vanishes without a minimum
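The landscapes above are standard test functions and can be written down directly. The definitions below are illustrative (the MicroSim's exact scaling constants may differ), paired with a central-difference gradient that works for any of them:

```javascript
const landscapes = {
  // Elongated bowl: the 20x curvature gap between x and y makes it
  // ill-conditioned (an illustrative constant, not the MicroSim's)
  quadratic: ([x, y]) => x * x + 20 * y * y,
  // Rosenbrock "banana" valley, global minimum at (1, 1)
  rosenbrock: ([x, y]) => (1 - x) ** 2 + 100 * (y - x * x) ** 2,
  // Beale function, global minimum at (3, 0.5)
  beale: ([x, y]) =>
    (1.5 - x + x * y) ** 2 +
    (2.25 - x + x * y * y) ** 2 +
    (2.625 - x + x * y ** 3) ** 2,
  // Saddle: minimum along x, maximum along y, stationary point at (0, 0)
  saddle: ([x, y]) => x * x - y * y,
};

// Central-difference gradient, usable with any landscape above
function numGrad(f, p, h = 1e-5) {
  return p.map((_, i) => {
    const hi = p.slice(), lo = p.slice();
    hi[i] += h;
    lo[i] -= h;
    return (f(hi) - f(lo)) / (2 * h);
  });
}
```

A numerical gradient keeps the sketch short; an implementation racing many particles per frame would more likely hard-code each landscape's analytic gradient.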
## Key Observations
- SGD may oscillate wildly on ill-conditioned problems
- Momentum accelerates in consistent directions
- RMSprop handles varying gradient scales well
- Adam often provides good all-around performance
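The first observation can be made concrete. On an ill-conditioned quadratic, the steepest direction caps the usable learning rate, so plain SGD either bounces across the valley or diverges while the shallow direction crawls. A small sketch, with hypothetical constants chosen to show the effect:

```javascript
// SGD on f(x, y) = x^2 + 20*y^2. The stability limit comes from the
// steep y direction: each step multiplies y by (1 - 40*lr), so SGD is
// stable only for lr < 2/40 = 0.05, while x shrinks slowly by (1 - 2*lr).
function sgdTrace(lr, steps) {
  let [x, y] = [1, 1];
  const trace = [];
  for (let i = 0; i < steps; i++) {
    x -= lr * 2 * x;  // gradient of x^2 is 2x
    y -= lr * 40 * y; // gradient of 20*y^2 is 40y
    trace.push([x, y]);
  }
  return trace;
}

// lr = 0.045: y's multiplier is 1 - 1.8 = -0.8, so y flips sign every
// step (the zig-zag seen in the arena) while x barely moves.
// lr = 0.06: the multiplier is -1.4, so |y| grows without bound.
```

Momentum damps exactly this zig-zag because the alternating y-components of velocity cancel, while the consistent x-components accumulate.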
## Lesson Plan
### Learning Objectives
- Compare convergence behavior of different optimizers
- Understand when different optimizers excel
- Visualize adaptive learning rate mechanisms
### Suggested Activities
- **Quadratic Landscape**: Which optimizer handles ill-conditioning best?
- **Rosenbrock Challenge**: Watch optimizers navigate the narrow valley
- **Saddle Point**: Observe behavior when there's no local minimum
- **Selective Racing**: Disable optimizers one by one to see their individual paths
## References
- Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv:1609.04747.
- Kingma, D. P., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv:1412.6980.