
Journey inside the AI Brain

Deep Learning is revolutionizing the world. But what exactly happens inside a Neural Network? Scroll down to visualize every mathematical step, from a single neuron to a learning brain.

1. The Artificial Neuron

Just like in our brains, the fundamental unit is the Neuron. It receives signals (Inputs), scales them by importance (Weights), and shifts them (Bias). Equation: Z = (Inputs × Weights) + Bias
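The weighted-sum equation can be checked in a few lines of NumPy (the input, weight, and bias values below are arbitrary, chosen only for illustration):

```python
import numpy as np

# Three input signals and their importance (weights)
inputs = np.array([1.0, 2.0, 3.0])
weights = np.array([0.5, -0.2, 0.1])
bias = 0.4

# Z = (Inputs × Weights) + Bias
z = np.dot(inputs, weights) + bias
print(z)  # ≈ 0.8
```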

2. The Spark (Activation)

A neuron needs to decide: 'Is this signal important?' Activation functions transform the output: Sigmoid squashes it into a fixed range (0 to 1), while ReLU keeps positive values and zeroes out negatives. Either way, they add non-linearity, which lets the network learn complex patterns, not just straight lines.

3. The Architecture

Power comes from connection. Neurons are stacked into layers.
• Input Layer: The raw senses (eyes, ears, data).
• Hidden Layers: The brain. Extract features (edges -> shapes -> faces).
• Output Layer: The final decision.

4. Forward Propagation

Imagine data flowing like water. The inputs ripple through the hidden layers, getting transformed by weights and activations at each step, until they reach the output.
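That ripple through the layers is just repeated matrix multiplication plus an activation. A minimal sketch with one hidden layer (the layer sizes and random weights here are illustrative, not from the demo):

```python
import numpy as np

def sigmoid(x):
    # Smooth S-curve (0 to 1)
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)

# 2 inputs -> 3 hidden neurons -> 1 output
W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)
W2, b2 = rng.standard_normal((3, 1)), np.zeros(1)

x = np.array([0.5, -1.0])           # Input Layer: the raw senses
hidden = sigmoid(x @ W1 + b1)       # Hidden Layer: transform the signal
output = sigmoid(hidden @ W2 + b2)  # Output Layer: the final decision
print(output.shape)  # (1,)
```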

5. Measuring Failure (Loss)

At first, the network is dumb. It guesses randomly. We measure how wrong it is using a 'Loss Function' (e.g., Mean Squared Error). High Loss = Bad Prediction. Low Loss = Good Prediction.
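Mean Squared Error, for example, is just the average squared gap between the guesses and the truth (the numbers below are made up):

```python
import numpy as np

predictions = np.array([0.9, 0.2, 0.6])  # the network's guesses
targets     = np.array([1.0, 0.0, 1.0])  # the correct answers

# MSE = mean((prediction - target)^2)
loss = np.mean((predictions - targets) ** 2)
print(loss)  # (0.01 + 0.04 + 0.16) / 3 ≈ 0.07
```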

6. Backpropagation

This is the 'Learning' in Machine Learning. We calculate the gradient of the Loss with respect to every weight. We then send this error signal BACKWARDS through the network to blame the neurons that contributed most to the mistake.
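For a single sigmoid neuron with squared-error loss, that error signal can be worked out by hand with the chain rule. A minimal sketch (inputs, weights, and target are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.array([1.0, 2.0])   # inputs
w = np.array([0.5, -0.3])  # weights
target = 1.0

# Forward pass
z = np.dot(x, w)
pred = sigmoid(z)

# Backward pass: dLoss/dw = dLoss/dpred * dpred/dz * dz/dw
d_pred = 2 * (pred - target)      # derivative of (pred - target)^2
d_z = d_pred * pred * (1 - pred)  # sigmoid'(z) = pred * (1 - pred)
grad_w = d_z * x                  # dz/dw = x
print(grad_w.shape)  # (2,)
```

The input that contributed more (here x[1] = 2.0) receives twice the blame, which is exactly the "send the error backwards" idea.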

7. The Training Loop

We repeat the cycle: Forward -> Loss -> Backprop -> Update. See the 'Loss Map' in the top right? Watch it drop as we train. The network is essentially rolling down a hill to find the lowest error.

Interactive Demo: Predict Pass/Fail

Let's put it to the test.
• Inputs: Study Hours & Sleep Hours.
• Goal: Predict whether a student Passes (Green) or Fails (Red).
Adjust the sliders. Can you see how 'Study' has a strong positive connection (Thick Blue) to the outcome?
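The demo's logic can be sketched as a single two-input neuron. The weights and bias below are made-up stand-ins for what the sliders control, not the demo's actual values:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical weights: studying helps a lot, sleep helps a little
w_study, w_sleep, bias = 0.9, 0.3, -5.0

def predict(study_hours, sleep_hours):
    z = w_study * study_hours + w_sleep * sleep_hours + bias
    return "Pass" if sigmoid(z) >= 0.5 else "Fail"

print(predict(6, 8))  # strong study signal -> "Pass"
print(predict(1, 4))  # -> "Fail"
```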

Math Model

Everything boils down to matrix multiplication.

neuron.py
import numpy as np

from activations import sigmoid  # see activations.py below

class Neuron:
    def __init__(self, num_inputs):
        # Initialize weights randomly, bias to 0
        self.weights = np.random.randn(num_inputs)
        self.bias = 0

    def forward(self, inputs):
        # Dot product of inputs and weights, plus bias
        z = np.dot(inputs, self.weights) + self.bias
        return sigmoid(z)
Neuron Class Structure

Activation Functions

Sigmoid

Squishes values between 0 and 1. Great for probabilities.

ReLU

If positive, keep it. If negative, zero it. Helps mitigate vanishing gradients.

activations.py
import numpy as np

def sigmoid(x):
    # Smooth S-curve (0 to 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # The most popular activation function
    # Returns x if x > 0, else 0
    return np.maximum(0, x)
Activation Implementations

Training (Gradient Descent)

We update weights in the opposite direction of the gradient to minimize loss.
w_new = w_old - (learning_rate * gradient)

train.py
# The Training Step: gradient descent on a single sigmoid neuron
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy data: two input features, binary targets
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

weights = np.random.randn(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(1000):
    # 1. Forward Pass
    predictions = sigmoid(X @ weights + bias)

    # 2. Calculate Loss (Mean Squared Error)
    loss = np.mean((predictions - y) ** 2)

    # 3. Backward Pass (Calculate gradients via the chain rule)
    d_z = 2 * (predictions - y) * predictions * (1 - predictions) / len(y)
    gradients = X.T @ d_z

    # 4. Update Weights
    weights -= learning_rate * gradients
    bias -= learning_rate * d_z.sum()
The Optimization Loop