Learning from Mistakes

How does a machine 'know' if it's right or wrong? We need a mathematical way to measure failure. This is called the 'Loss Function' (or Cost Function).
1. The Guess (Prediction)

Let's say the AI is trying to predict a probability (e.g., 'Is this a cat?'). It might output 0.8 (80% sure).
2. The Truth (Target)

But in reality, it was NOT a cat. The true value (Target) is 0.0 (or 0.2 if we are being generous). The AI was wrong.
3. Calculating the Error

The simplest measure is the difference: Error = Prediction - Truth. If the Prediction is 0.8 and the Truth is 0.2, the Error is 0.6.
4. Mean Squared Error (MSE)

Usually, we SQUARE the error. Why?
1. It removes negatives ((-5)² is 25).
2. It punishes BIG mistakes more severely than small ones.
3. It's easier to use in calculus (gradients).
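Point 2 can be checked with quick arithmetic. A tiny sketch (the error values 0.1 and 0.6 are just illustrations):

```python
small_error = 0.1
big_error = 0.6  # 6x larger than small_error

# Squaring amplifies the gap: the big error's penalty
# is 36x the small one's, not just 6x.
ratio = (big_error ** 2) / (small_error ** 2)
print(ratio)
```

This quadratic growth is exactly why a single wildly wrong prediction can dominate the total loss.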
Interactive Playground

Move the sliders below. See how being 'confident but wrong' (e.g., Prediction 0.9, Truth 0.1) explodes the Squared Error!
Defining Loss

Mean Squared Error (MSE) is common for regression tasks.

loss.py
import numpy as np

def mean_squared_error(y_pred, y_true):
    # 1. Calculate the difference between prediction and target
    error = y_pred - y_true

    # 2. Square it (removes negatives, punishes big mistakes more)
    squared_error = error ** 2

    # 3. Average it (for multiple examples)
    mse = np.mean(squared_error)

    return mse
MSE Implementation
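A quick sanity check of MSE on the lesson's numbers (the function is repeated here so the snippet runs standalone; the batch values are illustrative):

```python
import numpy as np

def mean_squared_error(y_pred, y_true):
    error = y_pred - y_true
    return np.mean(error ** 2)

# Single example from the lesson: prediction 0.8, truth 0.2
print(mean_squared_error(np.array([0.8]), np.array([0.2])))  # ~0.36

# A batch of three predictions: the average squared error
y_pred = np.array([0.8, 0.4, 0.9])
y_true = np.array([0.2, 0.5, 1.0])
print(mean_squared_error(y_pred, y_true))  # ~0.127
```

Note how the one bad prediction (0.8 vs 0.2) contributes most of the batch loss.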

Cross Entropy Loss

For classification (Yes/No), we use Log Loss instead.

bce_loss.py
import numpy as np

def binary_cross_entropy(y_pred, y_true):
    # Penalize confident wrong answers heavily
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
Binary Cross Entropy
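A small sketch of how Log Loss explodes for confident wrong answers (the function is repeated so the snippet runs standalone; the probabilities are illustrative):

```python
import numpy as np

def binary_cross_entropy(y_pred, y_true):
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Truth: it was NOT a cat (0.0)
print(binary_cross_entropy(0.1, 0.0))  # unsure and right: ~0.105
print(binary_cross_entropy(0.9, 0.0))  # confident and wrong: ~2.303
```

In practice, predictions are usually clipped away from exactly 0 or 1 (e.g. with np.clip) before taking the log, since log(0) is undefined.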