# Simple Linear Regression

In machine learning terms, we write the linear equation as:

```python
# y = mx + b
# y_pred = weight * input + bias
def predict(x, weight, bias):
    return weight * x + bias
```

To find the best line, we minimize the Mean Squared Error (MSE).
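To see `predict` in action, here is a quick call with made-up values (slope 2, intercept 1 are purely illustrative):

```python
def predict(x, weight, bias):
    return weight * x + bias

# A line with weight 2 and bias 1 evaluated at x = 3
print(predict(3, 2, 1))  # 2 * 3 + 1 = 7
```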
```python
def compute_cost(X, y, weight, bias):
    n = len(y)
    total_error = 0
    for i in range(n):
        y_pred = weight * X[i] + bias
        error = (y_pred - y[i]) ** 2
        total_error += error
    return total_error / n
```

We use the derivatives of the cost function to update our weight and bias iteratively.
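As a quick sanity check on `compute_cost`, points that lie exactly on the line should give zero cost (the data values below are made up for illustration):

```python
def compute_cost(X, y, weight, bias):
    n = len(y)
    total_error = 0
    for i in range(n):
        y_pred = weight * X[i] + bias
        error = (y_pred - y[i]) ** 2
        total_error += error
    return total_error / n

# These points lie exactly on y = 2x + 1, so the MSE is zero
X = [1, 2, 3]
y = [3, 5, 7]
print(compute_cost(X, y, 2, 1))  # 0.0
```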
```python
import numpy as np

learning_rate = 0.001
n = len(y)

# Gradients of the MSE with respect to weight and bias
y_pred = weight * X + bias
weight_gradient = (-2 / n) * np.sum(X * (y - y_pred))
bias_gradient = (-2 / n) * np.sum(y - y_pred)

# Update rules
weight = weight - (learning_rate * weight_gradient)
bias = bias - (learning_rate * bias_gradient)
```

Note that the vectorized gradient expressions assume `X` and `y` are NumPy arrays rather than plain Python lists.
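Putting the pieces together, a minimal training loop might look like the sketch below. The dataset, learning rate, and epoch count are illustrative assumptions, not values from the text:

```python
import numpy as np

def train(X, y, learning_rate=0.001, epochs=10000):
    """Fit y = weight * x + bias by gradient descent on the MSE."""
    weight, bias = 0.0, 0.0
    n = len(y)
    for _ in range(epochs):
        y_pred = weight * X + bias
        weight_gradient = (-2 / n) * np.sum(X * (y - y_pred))
        bias_gradient = (-2 / n) * np.sum(y - y_pred)
        weight -= learning_rate * weight_gradient
        bias -= learning_rate * bias_gradient
    return weight, bias

# Noiseless data on y = 2x + 1; the loop should recover weight close to 2
# and bias close to 1
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * X + 1
weight, bias = train(X, y)
print(weight, bias)
```

With real, noisy data the recovered parameters will only approximate the true line, and both the learning rate and the number of epochs usually need tuning.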