Why subtract learning rate * gradient from the old weight to get the new weight, and not add?!

Dipanwita Mallick
2 min read · Sep 29, 2020


I have been thinking about this ever since I started poking around the concept of gradient descent and the way weights are updated. So here's my understanding, and hopefully it can help you answer that very question.

Let’s take the example of simple linear regression,

Y = m*X + c

For the sake of simplicity we will not consider the intercept c, and will focus only on the weight m. So the equation becomes:

Y = m*X

While performing gradient descent, three primary steps are followed:

  1. Forward pass: where the prediction is computed
  2. Backward pass: where the gradients are computed
  3. Finally, updating the weights (W_new = W_old - learning_rate * gradient)

The key objective here is minimizing the loss, which in the case of linear regression is just the MSE (Mean Squared Error). In the backward pass we calculate the gradient, which is nothing but the derivative of the loss with respect to the weight, i.e., dLoss/dW.
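
To make this concrete, here is a minimal sketch (in plain Python with NumPy) of these three steps for the simplified model Y = m*X with an MSE loss. The data, the starting weight, and the learning rate are made-up values chosen just for illustration:

import numpy as np

# Toy data generated from y = 2x, so the weight we want to recover is 2
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = 2.0 * X

m = 5.0               # initial weight, deliberately too large
learning_rate = 0.01

for step in range(200):
    # 1. Forward pass: compute the prediction
    Y_pred = m * X

    # 2. Backward pass: for Loss = mean((Y - m*X)**2),
    #    dLoss/dm = -2 * mean(X * (Y - m*X))
    grad = -2.0 * np.mean(X * (Y - Y_pred))

    # 3. Update the weight: subtract learning_rate * gradient
    m = m - learning_rate * grad

print(m)  # m moves from 5.0 towards 2.0, the weight that minimizes the loss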

Now let's quickly look at the following image:

[Image: a U-shaped loss curve plotted against the weight, with the global cost minimum at the bottom]

From the diagram, it is quite evident that the global cost minimum is the point where the loss is smallest, and that is where we want to end up: we want the weight value at which the cost is minimum. Now let's consider the weight update equation:

W_new = W_old - learning_rate * (dLoss/dW_old)   # learning_rate = 0.01

Now, if the initial weight is to the right of the minimum (as in the diagram), the gradient (slope) is positive, so when we subtract, our new weight will be less than the initial weight, meaning we are moving towards the weight where the global cost is minimum (at the bottom).

But if the initial weight is to the left of the minimum, the gradient will be negative, so subtracting a negative value makes our new weight larger, meaning we are again moving towards the weight where the global cost is minimum.
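
To see both cases with concrete numbers, take a hypothetical bowl-shaped loss Loss(W) = (W - 2)**2, whose minimum sits at W = 2 (the 2 is just an illustrative choice); its gradient is dLoss/dW = 2*(W - 2):

# Gradient of the toy loss Loss(W) = (W - 2)**2
def grad(W):
    return 2.0 * (W - 2.0)

learning_rate = 0.1

# Start to the right of the minimum: the gradient is positive, so W decreases
W_right = 5.0
print(grad(W_right))                            # 6.0 (positive)
print(W_right - learning_rate * grad(W_right))  # 4.4 -> moved left, towards 2

# Start to the left of the minimum: the gradient is negative, so W increases
W_left = -1.0
print(grad(W_left))                             # -6.0 (negative)
print(W_left - learning_rate * grad(W_left))    # -0.4 -> moved right, towards 2

Either way, subtracting the gradient term pushes the weight towards the minimum.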

So the gradient gives us the sense of direction, and the learning rate defines the step size, meaning how fast or how slowly we get to the point of the global minimum.
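
On the same toy loss, here is how the learning rate changes the size of a single step (again, purely illustrative values):

# One update step from W = 5 on Loss(W) = (W - 2)**2, for different learning rates
W = 5.0
for lr in (0.01, 0.1, 0.5):
    W_new = W - lr * 2.0 * (W - 2.0)
    print(lr, W_new)
# 0.01 -> 4.94  (a tiny step)
# 0.1  -> 4.4   (a bigger step)
# 0.5  -> 2.0   (reaches the minimum in one step here; in general, too large a rate can overshoot)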

Takeaway: minimizing the loss and getting to the correct weight is the goal, and the gradient helps us steer in the right direction.

I hope this is helpful !!

