Repository: https://github.com/DFin/Neural-Network-Visualisation

MNIST digit classification – Inference visualization

Interactive neural network visualization

This app shows a compact Multi-Layer Perceptron (MLP) trained on MNIST. Draw a digit and watch activations propagate in real time through all fully connected layers.

How it works:

  • Draw: Click and drag in the 2D grid (top left) to sketch a digit; right-click to erase
  • Watch: See your sketch flow through the network layers in 3D
  • Predict: Check the probability for each digit (0–9) in the chart (top right)

Network architecture (default export):

  • Input layer: 28×28 pixel grid (your drawing)
  • Dense layer 1: 784 → 64 neurons with ReLU
  • Dense layer 2: 64 → 32 neurons with ReLU
  • Output layer: 32 → 10 logits → softmax probabilities
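The inference pass described above can be sketched in a few lines of NumPy. The layer shapes match the architecture listed here; the weight names and random initialization are placeholders just to demonstrate the data flow, not the trained model.

```python
import numpy as np

def relu(x):
    # ReLU zeroes out negative pre-activations
    return np.maximum(0.0, x)

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def forward(pixels, W1, b1, W2, b2, W3, b3):
    """One inference pass through the 784 -> 64 -> 32 -> 10 MLP."""
    x = pixels.reshape(784)           # flatten the 28x28 drawing
    h1 = relu(W1 @ x + b1)            # dense layer 1: 784 -> 64
    h2 = relu(W2 @ h1 + b2)           # dense layer 2: 64 -> 32
    logits = W3 @ h2 + b3             # output layer: 32 -> 10
    return softmax(logits)            # class probabilities for digits 0-9

# Random weights, just to demonstrate the shapes involved
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 784)), np.zeros(64)
W2, b2 = rng.normal(size=(32, 64)), np.zeros(32)
W3, b3 = rng.normal(size=(10, 32)), np.zeros(10)
probs = forward(rng.random((28, 28)), W1, b1, W2, b2, W3, b3)
```

The intermediate vectors h1, h2, and the final probabilities are exactly the per-neuron values the 3D view renders as spheres and the bar chart plots.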

3D controls:

  • Rotate: Hold left mouse button and drag
  • Pan: Hold right mouse button and drag
  • Zoom: Use mouse wheel

Color coding:

  • Nodes: Color shows activation strength (dark blues for low/negative values, strong coral for high positive activations).
  • Connections: Warm colors for strong positive contributions, cool tones for negative, faint lines near zero.
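A node colormap like the one described can be implemented as a simple linear interpolation between two RGB endpoints. The specific colors and the [-1, 1] activation range below are illustrative assumptions, not the app's exact palette.

```python
def activation_color(a, lo=-1.0, hi=1.0):
    """Map an activation value to an RGB tuple, dark blue -> coral.

    Endpoints and range are illustrative, not the app's exact palette.
    """
    dark_blue = (20, 40, 90)    # low / negative activations
    coral = (255, 127, 80)      # high positive activations
    # Normalize the activation into [0, 1] and clamp out-of-range values
    t = max(0.0, min(1.0, (a - lo) / (hi - lo)))
    return tuple(round(c0 + t * (c1 - c0)) for c0, c1 in zip(dark_blue, coral))
```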

Training your own model:

  • Run python training/mlp_train.py to train the MLP (with Apple Metal acceleration if available).
  • The script writes exports/mlp_weights.json, which the visualizer loads on startup.
  • Change hidden neuron count, epochs, or export paths via the CLI options documented in training/mlp_train.py.

Real-time features:

  • Layer activations: Spheres show per-neuron activations with color-coded strength.
  • Strong connections: Each target neuron highlights its strongest incoming weights for readability.
  • Live probabilities: The bar chart updates logits → softmax values in real time.
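Highlighting only the strongest incoming weights per neuron can be done with a top-k selection by absolute magnitude over each row of a layer's weight matrix. The value of k and the selection rule here are illustrative assumptions, not the app's exact criterion.

```python
import numpy as np

def strongest_incoming(W, k=5):
    """For each target neuron (a row of W), return the column indices
    of its k incoming weights with the largest absolute magnitude.

    Drawing only these connections keeps the 3D view readable; k is
    an illustrative choice.
    """
    # argsort ascending by |w| along each row, then keep the last k columns
    return np.argsort(np.abs(W), axis=1)[:, -k:]
```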

The network is kept compact for smooth real-time visualization. You can retrain with other layer sizes; keep the architecture lean so the 3D view stays responsive.