Linear Regression with Ordinary Least Squares (OLS)
Linear regression is one of the simplest and most widely used machine learning algorithms. It helps us find the best-fitting line that explains the relationship between the independent variable x and the dependent variable y.
What is Linear Regression?
The model assumes that the dependent variable y can be expressed as a linear function of the independent variable x:
y = mx + c
where:
m = slope of the line (how much y changes with x)
c = intercept (value of y when x=0)
When we have n data points (x1, y1), (x2, y2), …, (xn, yn), we want to find m and c that minimize the difference between the actual data and our predicted line.
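As a minimal sketch of the model itself (the function name `predict` is my own, not from the text), prediction is just the line equation applied to an input:

```python
def predict(x, m, c):
    """Predict y for input x using slope m and intercept c."""
    return m * x + c

# With slope m = 2 and intercept c = 1, x = 3 gives y = 2*3 + 1 = 7.
print(predict(3, m=2, c=1))  # 7
```

Fitting the model means choosing the m and c that make these predictions as close as possible to the observed y values.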
Error Function (Least Squares Method)
For each point, the error (residual) is:

ei = yi − (m xi + c)

We want to minimize the sum of squared errors (SSE):

E(m, c) = Σ ei² = Σ (yi − (m xi + c))²
Why squared error instead of absolute error?
Squaring makes all errors positive.
It gives more weight to larger errors.
It is differentiable, making calculus-based optimization possible.
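To make the error function concrete, here is a small sketch (the name `sse` and the sample numbers are my own) that computes the sum of squared errors for a candidate line:

```python
def sse(xs, ys, m, c):
    """Sum of squared residuals for the line y = m*x + c."""
    return sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))

xs = [1, 2, 3]
ys = [2, 4, 6]

# The line y = 2x passes through every point, so the SSE is 0.
print(sse(xs, ys, m=2, c=0))  # 0

# Shifting the intercept up by 1 makes every residual -1, so the SSE is 3.
print(sse(xs, ys, m=2, c=1))  # 3
```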
Derivation of m and c
To minimize E(m, c), we take the partial derivatives with respect to m and c, and set them to zero.
Step 1: Differentiate with respect to c

∂E/∂c = −2 Σ (yi − m xi − c) = 0  ⟹  Σ yi = m Σ xi + n c

Step 2: Differentiate with respect to m

∂E/∂m = −2 Σ xi (yi − m xi − c) = 0  ⟹  Σ xi yi = m Σ xi² + c Σ xi

Solving these two normal equations gives:

m = (n Σ xi yi − Σ xi Σ yi) / (n Σ xi² − (Σ xi)²)

c = (Σ yi − m Σ xi) / n
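The two normal equations can be solved directly in code. A sketch of the resulting closed-form fit (the function name `fit_line` and the sample data are my own):

```python
def fit_line(xs, ys):
    """Solve the normal equations for slope m and intercept c."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1
m, c = fit_line(xs, ys)
print(m, c)  # 2.0 1.0
```

Because this data lies exactly on a line, the recovered slope and intercept match it exactly; with noisy data the same formulas return the least-squares best fit.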
Example Calculation
Suppose we have the dataset:
Step 1: Compute required sums
Step 2: Compute slope m
Step 3: Compute intercept c
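The original dataset for this example did not survive in the text, so here is a hypothetical dataset (my own, purely for illustration) worked through the same three steps:

```python
# Hypothetical dataset (not from the original text).
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
n = len(xs)

# Step 1: compute the required sums.
sx  = sum(xs)                            # 15
sy  = sum(ys)                            # 20
sxy = sum(x * y for x, y in zip(xs, ys)) # 2 + 8 + 15 + 16 + 25 = 66
sxx = sum(x * x for x in xs)             # 1 + 4 + 9 + 16 + 25 = 55

# Step 2: slope m = (n*sxy - sx*sy) / (n*sxx - sx**2)
m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)  # (330 - 300) / (275 - 225) = 0.6

# Step 3: intercept c = (sy - m*sx) / n
c = (sy - m * sx) / n  # (20 - 0.6*15) / 5 = 2.2

print(m, c)  # 0.6 2.2
```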
Applications of Linear Regression
Predicting house prices based on size.
Estimating sales from advertising spend.
Understanding relationships in economics and biology.

