Gentle Introduction to Gradient Descent and Momentum

Jino Rohit
Jul 18, 2021


In this article, we will talk about a fundamental concept in machine learning called gradient descent.

Gradient descent is one of the most popular algorithms for reducing the error in prediction, i.e. minimizing your cost function.

This might sound confusing, but that’s okay. Before we jump into more details, I’ll give a very small gist of where it is mostly used. In deep learning, we have a concept called backpropagation.

Wikipedia says “backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naïve direct computation of the gradient with respect to each weight individually.”

I had a brain-freeze when I read this, so let me give you an intuitive example to help you understand better. Let’s say you’re a javelineer (the dude who throws a javelin) and you’re taking part in the Olympics. You have three attempts, and your first attempt records 50m. The second time, you launch the javelin again and this time it hits 65m. I see backpropagation here! HOW!? The first time you hit 50m, you aren’t happy with it and obviously you want to improve. So the next time, you see how far you are from where you want to be and aim for a better throw. In the second attempt, you aimed to throw the javelin farther rather than higher, and it worked! Your distance improved. This is exactly what backprop does: it calculates the error, sees how far you are from your goal, and tries to improve. The first step towards this is gradient descent.
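
To make that Wikipedia definition a bit more concrete, here is a minimal sketch of computing the gradient of a squared-error loss with respect to a single weight for one input–output example, then nudging the weight with it. The tiny one-weight “network”, the numbers, and the learning rate are all made up for illustration:

```python
# Minimal sketch: gradient of a squared-error loss with respect to one
# weight, for a single input-output example (all numbers are made up).

x, y_true = 2.0, 10.0   # one input-output example
w = 1.0                 # the single weight of our tiny "network"
lr = 0.05               # learning rate

for _ in range(20):
    y_pred = w * x                      # forward pass
    loss = (y_pred - y_true) ** 2       # how wrong we are
    grad = 2 * (y_pred - y_true) * x    # chain rule: d(loss)/d(w)
    w -= lr * grad                      # nudge the weight against the gradient

print(w)   # approaches 5.0, the weight that makes the error zero
```

Real backpropagation does the same thing, just applying the chain rule layer by layer through many weights at once.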

A typical definition of gradient descent is “gradient descent is used to find a local minimum of a function.”

As you can see from the image, the global minimum is the smallest value of the function, while the other valleys are only local minima. The global minimum is the ideal point we want to reach, since we want to minimize the error as much as we can, but gradient descent is only guaranteed to find a local minimum. To best put it into a picture, think of someone descending a mountain: they have to take many small, careful steps to get down safely.
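
To make those “small steps” concrete, here is a minimal sketch of plain gradient descent on a toy convex function f(x) = x², whose minimum sits at x = 0. The starting point, learning rate, and number of steps are just assumptions chosen for illustration:

```python
# Plain gradient descent on the toy function f(x) = x**2 (minimum at x = 0).

def grad(x):
    return 2 * x        # derivative of x**2

x = 5.0                 # arbitrary starting point on the "mountain"
lr = 0.1                # learning rate: the size of each small step

for _ in range(50):
    x = x - lr * grad(x)   # step in the direction of the negative gradient

print(x)   # very close to 0, the lowest point of the function
```

The learning rate controls how big each step down the mountain is: too large and you overshoot the valley, too small and the descent takes forever.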

The gradient is simply the vector that points in the direction of the maximum rate of change, so to minimize your function you step along its negative. Two important points about the function you need to minimize are:

A) It should be differentiable, and what I mean by that is you should be able to find the slope at each point and not get stuck anywhere.

Look at those images for some examples of non-differentiable functions (sharp corners, jumps) where, once you get stuck there, you ain’t going nowhere.

B) Like I said before, gradient descent strives to reach the lowest point, right? So if you don’t provide a convex function, it can get stuck somewhere in the middle and think it has already reached the lowest point.
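
Here is a small sketch of that behaviour on a made-up non-convex function, f(x) = x⁴ − 3x² + x, which has one deep valley and one shallower one; which minimum you end up in depends entirely on where you start. The starting points, learning rate, and step count are purely illustrative:

```python
# Gradient descent on a non-convex toy function: f(x) = x**4 - 3*x**2 + x.
# It has a global minimum near x = -1.30 and a shallower local minimum
# near x = 1.13.

def grad(x):
    return 4 * x**3 - 6 * x + 1   # derivative of f

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(-2.0))   # ends near -1.30: the true lowest point
print(descend(+2.0))   # ends near +1.13: stuck, thinking it has reached the bottom
```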

MOMENTUM

So where does momentum fit in? Remember I said that in gradient descent, as soon as you reach a local minimum you end up getting stuck, since the gradients become too small to move forward?
This is where momentum helps: instead of using only the current gradient, you keep a running, exponentially weighted record of the previous steps, controlled by a coefficient β, and use it to power your way through.
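
Here is a minimal sketch of one common formulation, the “heavy ball” update used in many SGD implementations, where the velocity is an exponentially discounted accumulation of past gradients controlled by β. The toy function is the same non-convex one as above, and the starting point, β, learning rate, and step count are all assumptions chosen for illustration:

```python
# Gradient descent with momentum on the same toy non-convex function,
# f(x) = x**4 - 3*x**2 + x, starting where plain gradient descent got stuck.

def grad(x):
    return 4 * x**3 - 6 * x + 1   # derivative of f

x = 2.0            # same starting point as before
v = 0.0            # velocity: discounted accumulation of past gradients
lr, beta = 0.01, 0.9

for _ in range(2000):
    v = beta * v + grad(x)   # remember (a decaying share of) previous steps
    x = x - lr * v           # move using the accumulated direction

print(x)   # ends near -1.30, the deeper valley, instead of stalling near 1.13
```

The speed the ball picks up rolling down the first slope carries it over the small bump that stopped plain gradient descent; a larger β means a longer memory and more inertia.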

I hope this article helped you in your machine learning journey and if you still need some help, let me know in the comments and I’ll do my best to help you out.
