# Perceptron Decision Boundary
Run the Perceptron Decision Boundary Fullscreen
Edit the MicroSim with the p5.js editor
## About This MicroSim
This interactive visualization demonstrates how a perceptron, the simplest neural network, creates a linear decision boundary that classifies data points into two classes.
### Key Concepts
- Decision Boundary: The line where \(\mathbf{w}^T\mathbf{x} + b = 0\)
- Weight Vector: Perpendicular to the decision boundary, determines its orientation
- Bias: Shifts the boundary away from the origin
- Linear Separability: A dataset is linearly separable if a single line can split the two classes; some datasets (like XOR) cannot be separated this way
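The classification rule behind these concepts can be sketched in a few lines of plain JavaScript (a standalone illustration, not the MicroSim's source code): a point \(\mathbf{x}\) is assigned a class by the sign of \(\mathbf{w}^T\mathbf{x} + b\).

```javascript
// A perceptron classifies a point x by the sign of w·x + b.
// Points with w·x + b >= 0 fall on the +1 (blue) side of the
// boundary; the rest fall on the -1 (red) side.
function classify(w, b, x) {
  const activation = w[0] * x[0] + w[1] * x[1] + b;
  return activation >= 0 ? 1 : -1;
}

// With w = [1, 1] and b = -1, the boundary is the line x + y = 1.
console.log(classify([1, 1], -1, [2, 2])); // above the line → 1
console.log(classify([1, 1], -1, [0, 0])); // below the line → -1
```

Changing `w` rotates this line (it stays perpendicular to `w`), while changing `b` slides the line parallel to itself, which is exactly what dragging the arrow and moving the slider do in the MicroSim.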
### Interactive Features
- Drag the Weight Vector: Click and drag the purple arrow to rotate the decision boundary
- Adjust Bias: Use the slider to shift the boundary parallel to itself
- Add Custom Points: Click "Add Points" then click on the plot to add data points
- Switch Classes: Press SPACE while in add-point mode to toggle between blue (+1) and red (-1)
- Run Learning: Watch the perceptron learning algorithm find a solution
- Preset Datasets: Compare linearly separable data with the XOR pattern
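The learning animation follows the classic perceptron update rule: whenever a point is misclassified, the weights and bias are nudged toward its correct side. A minimal sketch (with a made-up toy dataset; the MicroSim's internals may differ):

```javascript
// Perceptron learning rule: on each misclassified point (x, y),
// update w ← w + lr·y·x and b ← b + lr·y. On linearly separable
// data this is guaranteed to converge to a separating line.
function trainPerceptron(points, labels, epochs = 100, lr = 1.0) {
  let w = [0, 0];
  let b = 0;
  for (let epoch = 0; epoch < epochs; epoch++) {
    let mistakes = 0;
    for (let i = 0; i < points.length; i++) {
      const [x1, x2] = points[i];
      const predicted = (w[0] * x1 + w[1] * x2 + b) >= 0 ? 1 : -1;
      if (predicted !== labels[i]) {
        // Misclassified: move the boundary toward this point's class.
        w[0] += lr * labels[i] * x1;
        w[1] += lr * labels[i] * x2;
        b += lr * labels[i];
        mistakes++;
      }
    }
    if (mistakes === 0) break; // all points classified correctly
  }
  return { w, b };
}

// Linearly separable toy data: +1 in the upper right, -1 in the lower left.
const points = [[2, 2], [3, 1], [-1, -2], [-2, -1]];
const labels = [1, 1, -1, -1];
const { w, b } = trainPerceptron(points, labels);
```

On this data the algorithm converges after a single correcting update; on a non-separable set like XOR it would keep cycling, which is what the learning animation lets you observe.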
### Visual Indicators
- Blue region: Points classified as +1
- Red region: Points classified as -1
- Yellow outline: Misclassified points
- Accuracy: Shown as percentage in the control area
## Lesson Plan

### Learning Objectives
After using this MicroSim, students will be able to:
- Explain how weight vectors and bias define a linear decision boundary
- Identify whether a dataset is linearly separable
- Describe the perceptron learning algorithm
- Recognize why the XOR problem motivated multilayer networks
### Suggested Activities
- Explore Linear Separability: Load different datasets and observe which can achieve 100% accuracy
- Manual Classification: Try to manually position the boundary to classify all points correctly
- XOR Challenge: Attempt to classify XOR data and discover why it fails
- Learning Animation: Run the learning algorithm and observe how weights update
### Discussion Questions
- Why does the weight vector always point perpendicular to the decision boundary?
- What happens when you increase the bias? What about when you make it negative?
- Why can't a single perceptron solve the XOR problem?
- How could you modify the network to handle non-linear boundaries?
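The XOR question can be checked directly rather than taken on faith. The sketch below (a standalone illustration, independent of the MicroSim) brute-forces many weight/bias combinations over the four XOR points and finds that no line classifies more than three of the four correctly:

```javascript
// XOR truth table as points in the plane: matching inputs → -1,
// mismatched inputs → +1. No single line separates these classes.
const xorPoints = [[0, 0], [0, 1], [1, 0], [1, 1]];
const xorLabels = [-1, 1, 1, -1];

// Brute-force a grid of candidate lines w1·x + w2·y + b = 0.
// (0.25 steps are exact in binary floating point, so the grid is exact.)
let best = 0;
for (let w1 = -2; w1 <= 2; w1 += 0.25) {
  for (let w2 = -2; w2 <= 2; w2 += 0.25) {
    for (let b = -2; b <= 2; b += 0.25) {
      let correct = 0;
      for (let i = 0; i < 4; i++) {
        const z = w1 * xorPoints[i][0] + w2 * xorPoints[i][1] + b;
        const pred = z >= 0 ? 1 : -1;
        if (pred === xorLabels[i]) correct++;
      }
      best = Math.max(best, correct);
    }
  }
}
console.log(best); // 3 — at most 75% accuracy on XOR with one line
```

This is the limitation that motivated multilayer networks: stacking perceptrons with a nonlinearity lets the network carve out regions no single line can.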
## References

- Rosenblatt, F. (1958). "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain." *Psychological Review*, 65(6), 386–408.
- Minsky, M., & Papert, S. (1969). *Perceptrons: An Introduction to Computational Geometry*. MIT Press.