MNIST Digit Classification – Visualization of Inference

Interactive Visualization of a Neural Network

This application shows a compact Multi-Layer Perceptron (MLP) trained on MNIST. Draw a digit and observe how activations propagate through all fully connected layers in real time.

How it works:

  • Draw: Click and drag in the 2D grid (top left) to sketch a digit; right-click to erase
  • Observe: Watch your sketch move through the network layers in 3D space
  • Prediction: Check the probability for each digit (0–9) in the chart (top right)

Network Architecture (default export):

  • Input Layer: 28×28 pixel grid (your drawing)
  • Dense Layer 1: 784 → 64 neurons with ReLU
  • Dense Layer 2: 64 → 32 neurons with ReLU
  • Output Layer: 32 → 10 logits → Softmax probabilities
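
For reference, a minimal PyTorch-style sketch of this default architecture could look like the snippet below. It is illustrative only; the class name and structure are assumptions, and the actual definition lives in training/mlp_train.py.

    import torch.nn as nn

    # Illustrative sketch of the default architecture described above:
    # 784 -> 64 -> 32 -> 10, with ReLU after each hidden dense layer.
    class DigitMLP(nn.Module):
        def __init__(self, hidden1=64, hidden2=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),                  # 28x28 drawing -> 784-dim input vector
                nn.Linear(28 * 28, hidden1),   # Dense Layer 1
                nn.ReLU(),
                nn.Linear(hidden1, hidden2),   # Dense Layer 2
                nn.ReLU(),
                nn.Linear(hidden2, 10),        # Output layer: 10 logits
            )

        def forward(self, x):
            # Softmax is applied afterwards to turn the logits into probabilities.
            return self.net(x)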

3D Controls:

  • Rotate: Hold left mouse button and drag
  • Move: Hold right mouse button and drag
  • Zoom: Use the mouse wheel

Color Coding:

  • Nodes: Color represents activation strength (dark blues for low/negative values, bright coral for strong positive activations)
  • Connections: Warm colors indicate strong positive contributions, cool tones indicate negative influences, muted lines are near zero.
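
One simple way to realize such a mapping is linear interpolation between two endpoint colors. The sketch below is illustrative only; the visualizer's exact palette and normalization may differ.

    import numpy as np

    # Illustrative activation-to-color mapping (not the visualizer's exact palette):
    # interpolate from a dark blue (low/negative) to a bright coral (strong positive).
    DARK_BLUE = np.array([30, 40, 90], dtype=float)    # RGB for weak activations
    CORAL     = np.array([255, 127, 80], dtype=float)  # RGB for strong activations

    def activation_to_rgb(value, lo=-1.0, hi=1.0):
        """Map an activation value to an RGB triple by linear interpolation."""
        t = np.clip((value - lo) / (hi - lo), 0.0, 1.0)  # normalize to [0, 1]
        return tuple((DARK_BLUE + t * (CORAL - DARK_BLUE)).astype(int))

    print(activation_to_rgb(0.8))  # close to coral for a strong positive activation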

Train your own model:

  • Run python training/mlp_train.py to train the MLP (Apple Metal acceleration is used when available).
  • The script writes exports/mlp_weights.json, which the visualizer loads at startup.
  • Change hidden neurons, epochs, or export paths using the CLI options documented in training/mlp_train.py.
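
The JSON schema is defined by the training script itself; purely as an illustration of the idea, an export step for the dense layers might look like the sketch below (the "layers" layout shown is an assumption, not the actual format).

    import json
    import torch

    # Rough illustration of exporting trained weights to JSON for the visualizer.
    # The real schema is whatever training/mlp_train.py writes.
    def export_weights(model, path="exports/mlp_weights.json"):
        layers = []
        for module in model.modules():
            if isinstance(module, torch.nn.Linear):
                layers.append({
                    "weights": module.weight.detach().cpu().tolist(),
                    "biases": module.bias.detach().cpu().tolist(),
                })
        with open(path, "w") as f:
            json.dump({"layers": layers}, f)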

Real-time Features:

  • Layer Activations: Spheres represent activations per neuron with color-coded strength.
  • Key Connections: Each target neuron highlights its strongest input weights for readability.
  • Live Probabilities: The bar chart converts the output logits to softmax probabilities and updates in real time.
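
The logits-to-probabilities step is the standard softmax; a minimal NumPy version for reference:

    import numpy as np

    def softmax(logits):
        """Convert raw output logits into the probabilities shown in the bar chart."""
        z = logits - np.max(logits)   # subtract the max for numerical stability
        exp = np.exp(z)
        return exp / exp.sum()

    print(softmax(np.array([2.0, 0.5, -1.0])))  # ten-class case works the same way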

The network is intentionally compact for smooth real-time rendering. You can retrain with other layer sizes—just keep the architecture lightweight for a responsive 3D view.