Brain 3D

Visualize neural network graphs as interactive 3D structures

Understanding the use case

Why 3D visualization? - SpikyPanda represents neural networks as explicit graphs where every neuron is a node and every synapse is a directed link. Unlike tensor frameworks that hide the topology inside matrix operations, this graph structure can be traversed, inspected, and rendered. 3D visualization makes the "brain" tangible - you can see how information flows through the network.

Tetrahedrons - Each neuron is rendered as a tetrahedron (4 faces, 4 vertices) - the polyhedron with the fewest possible faces and vertices. This minimizes GPU load: a tetrahedron is 4 triangles vs 12 for a cube or 80+ for even a low-poly sphere. Using BabylonJS thin instances, thousands of neurons are rendered in a single draw call.
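
As a rough sketch of the setup (the @babylonjs/core package and a canvas with id "view" are assumptions, not documented here), the base tetrahedron comes from BabylonJS's built-in polyhedra:

```ts
import { Engine, Scene, ArcRotateCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

// Minimal scene containing a single tetrahedron.
const canvas = document.getElementById("view") as HTMLCanvasElement; // assumed canvas id
const engine = new Engine(canvas, true);
const scene = new Scene(engine);
new ArcRotateCamera("cam", Math.PI / 4, Math.PI / 3, 10, Vector3.Zero(), scene).attachControl(canvas, true);
new HemisphericLight("light", new Vector3(0, 1, 0), scene);

// type: 0 selects the tetrahedron from BabylonJS's built-in polyhedra.
const neuronMesh = MeshBuilder.CreatePolyhedron("neuron", { type: 0, size: 0.5 }, scene);

engine.runRenderLoop(() => scene.render());
```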

Thin instances - Instead of creating one mesh per neuron (which would mean one draw call per neuron), we create a single tetrahedron mesh and stamp it at different positions using thin instances. The GPU renders all neurons in one batch. This scales to thousands of neurons without frame rate issues.
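
A sketch of the batching, assuming per-neuron positions are already computed - the helper name and position format are illustrative, but thinInstanceSetBuffer is the actual BabylonJS call:

```ts
import { Matrix, MeshBuilder, Scene } from "@babylonjs/core";

// One base mesh, many thin instances: the GPU renders all neurons in one draw call.
function renderNeurons(scene: Scene, positions: { x: number; y: number; z: number }[]) {
  const base = MeshBuilder.CreatePolyhedron("neuron", { type: 0, size: 0.5 }, scene);

  // Pack one 4x4 world matrix per neuron into a flat buffer (16 floats each).
  const matrices = new Float32Array(positions.length * 16);
  positions.forEach((p, i) => {
    Matrix.Translation(p.x, p.y, p.z).copyToArray(matrices, i * 16);
  });

  // Register the buffer; the tetrahedron is stamped once per matrix.
  base.thinInstanceSetBuffer("matrix", matrices, 16);
  return base;
}
```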

Color encoding - Neuron colors encode activation values: cyan/green for positive activations, red for negative, dark for zero. Synapse colors encode weights: blue/cyan for positive weights, red/orange for negative. After training, you can visually read the network's learned structure.
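
The exact ramp used by the demo isn't specified, so the mapping below is only illustrative; the per-instance "color" buffer (stride 4 = RGBA) is the standard thin-instance mechanism for coloring each neuron individually:

```ts
import { Mesh } from "@babylonjs/core";

// Illustrative ramp: cyan/green for positive, red for negative, dark near zero.
function activationColor(a: number): [number, number, number, number] {
  const t = Math.min(Math.abs(a), 1); // clamp magnitude to [0, 1]
  return a >= 0
    ? [0.05, 0.15 + 0.85 * t, 0.15 + 0.85 * t, 1] // zero is dark, +1 is bright cyan
    : [0.15 + 0.85 * t, 0.05, 0.05, 1];           // -1 is bright red
}

// Upload one RGBA color per thin instance (stride 4).
function colorNeurons(base: Mesh, activations: number[]): void {
  const colors = new Float32Array(activations.length * 4);
  activations.forEach((a, i) => colors.set(activationColor(a), i * 4));
  base.thinInstanceSetBuffer("color", colors, 4);
}
```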

XOR as a starting point - The XOR problem (exclusive OR) is the simplest non-linear classification task: its two classes are not linearly separable, so it requires a hidden layer to solve. That makes it the perfect minimal test case: 2 inputs, 2 hidden neurons, 1 output = 5 tetrahedrons, plus 2×2 input-to-hidden and 2×1 hidden-to-output synapses = 6 lines. Once this works, the same architecture scales to CNNs with thousands of neurons.
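
SpikyPanda's actual graph types aren't shown on this page; as an illustration, the 2-2-1 XOR graph written out explicitly accounts for the 5 tetrahedrons and 6 lines:

```ts
// Illustrative types - not SpikyPanda's real data structures.
type Neuron = { id: string; layer: "input" | "hidden" | "output" };
type Synapse = { from: string; to: string; weight: number };

const neurons: Neuron[] = [
  { id: "x0", layer: "input" }, { id: "x1", layer: "input" },   // 2 inputs
  { id: "h0", layer: "hidden" }, { id: "h1", layer: "hidden" }, // 2 hidden
  { id: "y", layer: "output" },                                 // 1 output
]; // 5 neurons -> 5 tetrahedrons

const synapses: Synapse[] = [
  { from: "x0", to: "h0", weight: 0 }, { from: "x0", to: "h1", weight: 0 },
  { from: "x1", to: "h0", weight: 0 }, { from: "x1", to: "h1", weight: 0 },
  { from: "h0", to: "y", weight: 0 }, { from: "h1", to: "y", weight: 0 },
]; // 2x2 + 2x1 = 6 synapses -> 6 lines
```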

Controls

Color legend: positive activation (cyan/green), negative activation (red), positive weight (blue/cyan), negative weight (red/orange).

Neuron Info

Click a neuron in the 3D view to see its details.

Log

Status messages from the demo appear here, starting with "Initializing...".

Understanding the results

XOR truth table - XOR(0,0)=0, XOR(0,1)=1, XOR(1,0)=1, XOR(1,1)=0. After training, the output neuron should produce values close to these targets. An output > 0.5 is interpreted as 1 (green), < 0.5 as 0 (red).
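
A sketch of how the thresholded readout works - forward is a hypothetical stand-in for the demo's inference call, not a documented function:

```ts
// Hypothetical stand-in for running the trained network on one input pair.
declare function forward(a: number, b: number): number;

const cases: [number, number, number][] = [
  [0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0], // [input a, input b, XOR target]
];
for (const [a, b, target] of cases) {
  const out = forward(a, b);
  const predicted = out > 0.5 ? 1 : 0; // > 0.5 reads as 1 (green), otherwise 0 (red)
  console.log(`XOR(${a},${b}) -> ${out.toFixed(3)} (predicted ${predicted}, target ${target})`);
}
```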

Weight visualization - After training, the synapse colors reveal the learned solution. A typical XOR solution has symmetric positive/negative weight patterns: both inputs excite one hidden neuron and inhibit the other, creating the non-linear decision boundary.
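
To make the sign pattern concrete, here is one hand-picked solution: the inputs excite one hidden neuron (an OR gate) and inhibit the other (a NAND gate), and the output fires only when both agree. The weights are illustrative, not what the trainer will find, but trained solutions show the same excite/inhibit split:

```ts
const step = (v: number) => (v > 0 ? 1 : 0);

function xor(x0: number, x1: number): number {
  const h0 = step( x0 + x1 - 0.5); // OR:   both inputs excitatory (+1, +1)
  const h1 = step(-x0 - x1 + 1.5); // NAND: both inputs inhibitory (-1, -1)
  return step(h0 + h1 - 1.5);      // AND of the two hidden units
}
// xor(0,0)=0, xor(0,1)=1, xor(1,0)=1, xor(1,1)=0
```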

Training convergence - XOR typically converges in 2000-5000 epochs with the Adam optimizer. The loss should drop below 0.001. If it doesn't converge, the random initial weights may have landed in a bad local minimum - click Train again to restart with new random weights.
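
The restart advice, written as a loop - trainXor is a hypothetical stand-in for the Train button, assumed to re-initialize weights randomly and return the final loss:

```ts
// Hypothetical: runs the given number of epochs from fresh random weights.
declare function trainXor(epochs: number): number;

let loss = Infinity;
for (let attempt = 1; attempt <= 10 && loss >= 1e-3; attempt++) {
  loss = trainXor(5000); // fresh random initialization on each call
  console.log(`attempt ${attempt}: final loss ${loss.toExponential(2)}`);
}
```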

Scaling up - This same thin-instance architecture can render CNN graphs. A CNN Tiny (16x16x6) has ~9000 neurons - still very manageable with thin instances. The main challenge for larger networks is layout: arranging thousands of neurons in 3D so the structure is readable.
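
A generic layered layout of the kind this requires - each layer on its own plane along z, with neurons arranged in a square grid on that plane. This is a sketch, not SpikyPanda's actual layout code:

```ts
// Positions for layerSizes[i] neurons in layer i, laid out as square grids.
function layeredLayout(layerSizes: number[], spacing = 1.5, layerGap = 4) {
  const positions: { x: number; y: number; z: number }[] = [];
  layerSizes.forEach((count, layer) => {
    const side = Math.ceil(Math.sqrt(count)); // grid side length for this layer
    for (let i = 0; i < count; i++) {
      positions.push({
        x: ((i % side) - (side - 1) / 2) * spacing,          // centered columns
        y: (Math.floor(i / side) - (side - 1) / 2) * spacing, // centered rows
        z: layer * layerGap,                                  // one plane per layer
      });
    }
  });
  return positions;
}

// e.g. layeredLayout([2, 2, 1]) for the XOR demo above
```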