The Atomic Unit

Deep Learning is complex, but its fundamental building block is surprisingly simple: The Artificial Neuron (or Perceptron). Let's break down how a single neuron makes a decision.

1. Inputs (x)

Imagine the neuron is deciding: 'Should I buy this house?' Inputs are the raw data: Price, Location Score, Size.
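In code, the inputs are just a list of numbers. A minimal sketch (the feature values and their scaling are hypothetical):

```python
# Hypothetical house features, scaled to comparable ranges:
# price (in $ millions), location score (0-1), size (thousands of sq ft)
inputs = [1.0, 0.7, 1.5]
```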

2. Weights (w)

Not all inputs matter equally. Weights determine importance. • High Weight = Very Important (e.g., Price). • Negative Weight = Reduced Chance (e.g., Crime Rate).
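In code, weights are just numbers paired with the inputs. The values below are invented to mirror the examples above:

```python
# Hypothetical importance values (hand-picked for illustration, not learned)
weights = {
    "price": 0.8,        # high weight = very important
    "location": 0.4,     # moderately important
    "crime_rate": -0.5,  # negative weight = reduces the chance of firing
}
```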

3. The Weighted Sum (Σ)

The neuron multiplies the value of each input by its importance (Weight), then adds the results together. Example: Price ($1M) × Weight (0.8) + Crime (High) × Weight (-0.5)
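Worked out with hypothetical numbers (scaling the $1M price to 1.0 and encoding 'High' crime as 1.0), the example above becomes:

```python
# value × weight for each input, then add the results together
price_term = 1.0 * 0.8     # Price ($1M, scaled to 1.0) × weight 0.8
crime_term = 1.0 * (-0.5)  # Crime ("High", encoded as 1.0) × weight -0.5
weighted_sum = price_term + crime_term  # 0.8 - 0.5 ≈ 0.3
```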

4. The Bias (b)

Think of Bias as the neuron's 'Base Mood'. • A High Bias means the neuron is optimistic (likely to fire even with weak inputs). • A Negative Bias means it's pessimistic (needs very strong inputs to fire). Output = Sum + Bias
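The 'base mood' effect is plain arithmetic. A sketch using a made-up weighted sum of 0.3:

```python
weighted_sum = 0.3

# Optimistic neuron: a high positive bias lifts the output...
optimistic = weighted_sum + 1.0

# ...while a negative bias drags it down, even below zero
pessimistic = weighted_sum - 1.0
```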

5. Activation

Finally, we squash the result into a narrow range (usually between 0 and 1) using an Activation Function, like Sigmoid. This squashed value is the neuron's final 'firing rate', or confidence.
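Sigmoid itself is a one-liner. A minimal sketch of how it squashes any real number into the range (0, 1):

```python
import math

def sigmoid(x):
    # Squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Strong positive input -> confidence near 1
# Strong negative input -> confidence near 0
# Zero input -> exactly 0.5 (undecided)
```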

You are the Neuron

Try adjusting the Weights and Bias below. Notice how increasing the Bias (making it more positive) forces the neuron to output a higher value, even when the inputs are low.

The Math

It's just a dot product plus a bias.

neuron_math.py
from numpy import dot, exp

inputs, weights, bias = [1.0, 0.7, 1.5], [0.8, 0.4, -0.5], -0.2  # example values

# 1. Calculate Weighted Sum
linear_output = dot(inputs, weights) + bias

# 2. Apply Activation (Sigmoid)
final_output = 1.0 / (1.0 + exp(-linear_output))

Implementation

A raw Python implementation without libraries.

perceptron.py
import math

class Neuron:
    def __init__(self, size):
        self.weights = [0.0] * size
        self.bias = 0.0

    def forward(self, inputs):
        # Sum of inputs * weights (the dot product)
        total = sum(i * w for i, w in zip(inputs, self.weights))

        # Add bias
        total += self.bias

        # Sigmoid activation
        return 1.0 / (1.0 + math.exp(-total))
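A quick usage sketch (the class is repeated so the snippet runs standalone; the weights, inputs, and bias are hand-picked for illustration, not learned):

```python
import math

class Neuron:
    def __init__(self, size):
        self.weights = [0.0] * size
        self.bias = 0.0

    def forward(self, inputs):
        total = sum(i * w for i, w in zip(inputs, self.weights))
        total += self.bias
        return 1.0 / (1.0 + math.exp(-total))

# Hypothetical 'house' neuron: inputs are price, location score, crime rate
n = Neuron(3)
n.weights = [0.8, 0.4, -0.5]
n.bias = -0.2  # slightly pessimistic base mood

confidence = n.forward([1.0, 0.7, 1.0])
# 0.8*1.0 + 0.4*0.7 - 0.5*1.0 - 0.2 = 0.38, and sigmoid(0.38) ≈ 0.59
```

Note that because all weights start at zero, a freshly constructed Neuron always outputs sigmoid(bias) — 0.5 by default — until its weights are set.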
AlgoAnimator: Interactive Data Structures