Machine Learning Optimization - Gradient Descent
Table of Contents
0: Introduction
1: Gradient Descent
2: Stochastic Gradient Descent
3: Variance Reduction Techniques
4: SARAH and ADAM Algorithms
Crash Course: Gradient Descent
This section gives some entry-level background on machine learning concepts, so feel free to skip ahead if you already have a grasp of them.
The goal in machine learning optimization is to train the parameters of whatever model we choose to use, which ultimately depends on the application. We will use the simple but very commonly used linear model as an example.
Example: Least Squares Regression
We are interested in training a model using data samples with features and known outputs for each data sample. The model we will use as an example is the simple linear system $Ax = b$.
The matrix $A \in \mathbb{R}^{n \times d}$ represents our data, and $b \in \mathbb{R}^n$ is the response vector with entries $b_i$ corresponding to the known output of each row of data $a_i$, $i = 1, \dots, n$. Our model uses a vector of parameters $x$, $x \in \mathbb{R}^d$, where $d$ is the number of features. This is a linear model, where the product $Ax$ should ideally produce exactly $b$.
In a typical application, however, this system is overdetermined ($n > d$, i.e. more data samples than there are features that describe the data; $A$ has full column rank), and is inconsistent ($Ax = b$ has no exact solution).
Since we cannot find an exact solution to the model, we want to optimize our parameters such that the product $Ax$ produces the best approximate solution to the system. In other words, we want to train the model to predict the known outputs as well as it can by adjusting each of the $x_j$.
To do so, we aim to minimize a quantity we call the "loss" of the model: a measure of error between the predictions and the actual label values, as a function of however the $x_j$ are selected (or in our case, how we optimize them).
"Loss" will be quantified as a function of the parameters $x$, expressed as $f(x)$, also sometimes called the "objective function". So, the loss will depend on the parameters that we choose.
A common loss function that we see in machine learning is a simple least-squares regression fit, where $f(x)$ is the following.

$$f(x) = \|Ax - b\|_2^2$$

Here, $Ax$ generates a vector of predictions, and the loss function calculates the squared Euclidean distance between the prediction vector and the true labels vector $b$.
Our optimized model with our trained parameters $x^*$ satisfies the following.

$$x^* = \arg\min_{x \in \mathbb{R}^d} f(x) = \arg\min_{x \in \mathbb{R}^d} \|Ax - b\|_2^2$$
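To make this concrete, here is a minimal NumPy sketch of the setup above; the random data, dimensions, and the `loss`/`grad` helper names are illustrative assumptions, not code from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 100, 2                  # n samples, d features (overdetermined: n > d)
A = rng.normal(size=(n, d))    # data matrix
b = rng.normal(size=n)         # response vector of known outputs

def loss(x):
    """Least-squares loss f(x) = ||Ax - b||^2."""
    r = A @ x - b
    return r @ r

def grad(x):
    """Gradient of the least-squares loss: 2 A^T (Ax - b)."""
    return 2 * A.T @ (A @ x - b)

x = np.zeros(d)                # an (untrained) choice of parameters
print("loss at x = 0:", loss(x))
```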
Gradient Descent in Machine Learning
The gradient descent algorithm is ideal for optimization of this kind of problem. It is an iterative algorithm that is designed to update parameters by changing each parameter by some amount, in such a way that the "loss" decreases.
We must note that the least-squares objective function is convex, from which it follows that $f$ must have a global minimum (it must reach a "smallest" value). Not all problems in machine learning utilize convex loss functions, but many common applications do in practice.
NOTE
This post will assume, unless otherwise stated, that the objective function is convex (i.e. we will be covering convex optimization).
Common examples of convex objective functions include logistic regression, data fitting, SVMs, Lasso and Group Lasso. Convexity is detailed more formally in the appendix.
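For quick reference (the appendix has the formal treatment), a function $f$ is convex if every chord lies on or above its graph:

$$f\big(\lambda x + (1-\lambda)\, y\big) \;\le\; \lambda f(x) + (1-\lambda) f(y) \qquad \text{for all } x, y \text{ and } \lambda \in [0, 1].$$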
In basic terms, the gradient descent algorithm uses the gradient of the loss function (denoted $\nabla f(x)$), i.e. the vector of partial derivatives with respect to each parameter component, evaluated at the parameter values $x$, to determine the "steepest" direction in which the loss is increasing. It's worth taking a look at the details of the gradient operation in the appendix if you are not familiar with it.
The negative gradient $-\nabla f(x)$ indicates the steepest direction in which our loss function is decreasing, and if we move the values of the $x_j$s in that direction, we will decrease the loss $f(x)$. Ultimately, we intend to optimize $f$ by minimizing it, so we hope to adjust the $x_j$s until our loss is minimized ("minimized" is usually defined by criteria we select).
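Written out with the notation above, the gradient is simply the vector of partial derivatives of the loss with respect to each parameter:

$$\nabla f(x) = \left( \frac{\partial f}{\partial x_1}(x), \; \frac{\partial f}{\partial x_2}(x), \; \dots, \; \frac{\partial f}{\partial x_d}(x) \right)$$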
So, if we start with some initial parameters $x_0$, we can subtract $\nabla f(x_0)$ to "step" the parameter tuning in the direction that will minimize the loss function. However, we need to control the size of the steps that we take, so we include a parameter $\eta$ that we can tune on our own (called a "hyperparameter") to scale and control the size of the step we take in that direction. The parameter $\eta$ is commonly referred to as the "learning rate".
Since we likely will not arrive at the solution after one step, we can use the resulting weights $x_1$ and evaluate $\nabla f(x_1)$ to continue and take another step. We continue onward by recursively plugging parameter values back into the update, and in general after $t$ steps, we have the gradient descent algorithm.

$$x_{t+1} = x_t - \eta \, \nabla f(x_t)$$

The algorithm iterates over discrete timesteps $t = 0, 1, 2, \dots$, stepping toward the minimum of $f$ with steps proportional to the learning rate $\eta$.
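A minimal sketch of this loop (the quadratic test function, learning rate, and step count here are arbitrary illustrative choices, not values from the post):

```python
import numpy as np

def gradient_descent(grad, x0, eta=0.1, num_steps=100):
    """Plain gradient descent: x_{t+1} = x_t - eta * grad(x_t)."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(num_steps):
        x -= eta * grad(x)   # step against the gradient, scaled by the learning rate
    return x

# Sanity check on f(x) = ||x - 3||^2, whose gradient is 2(x - 3) and minimum is x = 3.
print(gradient_descent(lambda x: 2 * (x - 3.0), x0=np.zeros(1)))
```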
Now, picking up from the previous section, we will use a simple least-squares regression on some data we generate as an example. We will generate 100 random points $a_i$, and assign labels $b_i$ to these points from a true underlying line plus a standard Gaussian noise term $\epsilon_i$ to make our values noisy. We only want to fit a first-order polynomial (a line) to the data in this example, so the only two "features" that our parameters will fit are the slope and the intercept.
We then initialize a slope and intercept $x_0$, select a learning rate $\eta$ (we will discuss how to select this hyperparameter more carefully later), and perform the gradient descent algorithm for 100 steps (iterations). Below we animate the process, where the top plot shows the line being fitted, the middle plot shows the value of our loss gradually decreasing, and the last plot shows the values of the slope and intercept.
Since we know how we generated the data, we know the true optimal slope and intercept $x^*$, which we can see visually in the first plot.
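The experiment can be sketched roughly as below; the true slope/intercept, the initialization, and the learning rate are placeholder choices, since the post's exact values are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy data from a line (slope and intercept values are placeholders).
true_slope, true_intercept = 2.0, 1.0
a = rng.uniform(-1, 1, size=100)
b = true_slope * a + true_intercept + rng.normal(size=100)   # standard Gaussian noise

# Design matrix: one column for the slope feature, one column of ones for the intercept.
A = np.column_stack([a, np.ones_like(a)])

def grad(x):
    """Gradient of f(x) = ||Ax - b||^2."""
    return 2 * A.T @ (A @ x - b)

x = np.zeros(2)           # initial slope and intercept
eta = 0.005               # learning rate (placeholder choice)
for t in range(100):      # 100 gradient descent steps
    x -= eta * grad(x)

print("fitted slope and intercept:", x)
```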
There are a few observations we can note here.
- The parameters are converging to the optimal solution, taking large steps early on but slowing down as they approach the optimum.
- The loss also falls steeply initially, but after around 10 steps it levels off around 1 (the variance of the noise in the data itself).
- 100 steps is not enough to get all the way to the true optimal solution.
In short, we see the parameters converge toward the optimal solution as the algorithm iterates, but it doesn't get us quite to the exact solution underlying the noise. When training a machine learning model, this isn't necessarily a bad thing. In fact, models that are not "overfit", i.e. not fit too closely to the training data, often generalize better to unseen data. We may only care that the model weights we use are within some guaranteed tolerance of the true solution (an example is included in the third plot), which we typically express through convergence rates in terms of a tolerance $\epsilon$. We will see convergence properties expressed either in terms of accuracy $\epsilon$ or in terms of total iterations $T$ for the algorithms covered in this post.
Limitations of Regular Gradient Descent
In a theoretical sense, regular gradient descent is robust: for convex objective functions (with a suitably chosen step size) it is guaranteed to converge, and it guarantees we are within a tolerance $\epsilon$ of the true solution (or that we know how close we are after some number of iterations). However, in practice it has limitations.
Regular gradient descent works well for a very simple example like this (small sample size, only two features), but in practical machine learning applications with large datasets, it has glaring computational deficiencies.
We turn our attention to the gradient calculation itself, and consider the gradient $\nabla f(x_t)$ at timestep $t$ for a dataset with $d$ features and $n$ samples. At each timestep, we are computing the full gradient. It is helpful to write out what the gradient calculation really looks like for our least-squares loss example.

$$\nabla f(x_t) = 2 A^\top (A x_t - b) = \sum_{i=1}^{n} 2\,(a_i^\top x_t - b_i)\, a_i$$
See appendix for more details on full and stochastic gradient calculation.
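To make the cost concrete, here is a small sketch contrasting the full gradient with a single-sample gradient (the per-sample form and the dimensions are assumptions for illustration); the full gradient touches every row of $A$, which is where the $O(nd)$ per-step cost discussed next comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 50
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
x = np.zeros(d)

def full_gradient(x):
    """Full gradient of f(x) = ||Ax - b||^2: uses all n samples, O(n*d) work per step."""
    return 2 * A.T @ (A @ x - b)

def single_sample_gradient(x, i):
    """Gradient of one term f_i(x) = (a_i^T x - b_i)^2: only O(d) work."""
    return 2 * (A[i] @ x - b[i]) * A[i]

g_full = full_gradient(x)
g_one = single_sample_gradient(x, rng.integers(n))
print(g_full.shape, g_one.shape)
```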
In terms of complexity, each step requires $O(nd)$ calculations. It can be shown that when $f$ is convex and its gradient is Lipschitz continuous, for a learning rate $\eta$ selected carefully using what is called the Lipschitz constant $L$ (you can learn more about Lipschitz continuity here or in the appendix), after $T$ iterations the following will hold.

$$f(x_T) - f(x^*) \le \frac{L \, \|x_0 - x^*\|_2^2}{2T}$$

Where $x^*$ is the true optimal solution (the true slope and intercept from our earlier example), and $x_0$ is our initialization (also from our earlier example).
We say that this converges at a rate $O(1/T)$. In other words, we can choose the number of gradient descent iterations $T$ and know for certain that the gap between the loss at the current parameters and the loss at the optimal parameters is bounded above by a quantity proportional to $1/T$ (more on "big O notation" here).
This means that we can guarantee we are within some tolerance $\epsilon$ of the true optimal solution after $T = O(1/\epsilon)$ iterations. But if we are calculating the full gradient $\nabla f(x_t)$, at a cost of $O(nd)$, at each of those iterations, and our dataset is large, it can become computationally prohibitive to get a solution within a desirable tolerance.
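Putting the two pieces together, this is just the bookkeeping implied by the bound above: requiring the suboptimality gap to be at most $\epsilon$ fixes the number of iterations, and multiplying by the per-iteration cost gives the total work.

$$\frac{L \, \|x_0 - x^*\|_2^2}{2T} \le \epsilon \;\;\Longrightarrow\;\; T \ge \frac{L \, \|x_0 - x^*\|_2^2}{2\epsilon} = O\!\left(\frac{1}{\epsilon}\right), \qquad \text{total work} = O\!\left(\frac{nd}{\epsilon}\right).$$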
In machine learning optimization, we are always seeking faster ways to train our models. When approaching an optimization problem we might ask ourselves questions like,
- How can I optimize my parameters (train the model) in the fewest steps?
- How can I improve the computational cost of my steps?
- When can I terminate the algorithm, and what are my criteria for stopping?
These are the kinds of questions that might motivate a search for more efficient algorithms. A common approach to addressing them is to use approximations, and stochasticity turns out to have some desirable theoretical properties that help us here.