8.4 Optimization


In The Mean and Slopes, we used a simple but slow way to find the slope that best predicted one vector of values from another vector of values.

First we go back to find that slope.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make plots look a little bit more fancy
plt.style.use('fivethirtyeight')
# Print to 2 decimal places, show tiny values as 0
np.set_printoptions(precision=2, suppress=True)
import pandas as pd

Download the gender_stats.csv file to the same directory as this notebook, if you are running on your own computer.

Fetch and process the data:

# Load the data file
gender_data = pd.read_csv('gender_stats.csv')
# Make new data frame with only MMR and FR
just_mmr_fr = gender_data[['mat_mort_ratio', 'fert_rate']]
# Drop the NaN values, and make into arrays.
clean_mmr_fr = just_mmr_fr.dropna()
mmr = np.array(clean_mmr_fr['mat_mort_ratio'])
fert = np.array(clean_mmr_fr['fert_rate'])

Our criterion is the sum of squared error:

def sos_error(slope):
    fitted = fert * slope  # 'fert' comes from the top level
    error = mmr - fitted     # 'mmr' comes from the top level
    return np.sum(error ** 2)

We found the best slope by trying a very large number of slopes, and recording, for each slope, the sum of squared error. From the slopes that we tried, we chose the one that gave us the lowest sum of squared error.

# Slopes to try
some_slopes = np.arange(50, 110, 0.1)
n_slopes = len(some_slopes)
# Try all these slopes, calculate and record sum of squared error
sos_errors = np.zeros(n_slopes)
for i in np.arange(n_slopes):
    slope = some_slopes[i]
    sos_errors[i] = sos_error(slope)
# The slope minimizing the sum of squared error
best_slope = some_slopes[np.argmin(sos_errors)]
best_slope
77.8000000000004

At the end of the Mean and Slopes notebook, we saw that a function in Scipy called minimize can do this work for us, relatively quickly.

from scipy.optimize import minimize
minimize(sos_error, 100)
      fun: 5901406.206272444
 hess_inv: array([[0.]])
      jac: array([0.])
  message: 'Optimization terminated successfully.'
     nfev: 15
      nit: 3
     njev: 5
   status: 0
  success: True
        x: array([77.84])

What we are doing, with our slow, dumb technique, and with the minimize function, is something called mathematical optimization. We use optimization when we have some function that takes one or more parameters. We want to choose, or optimize, the parameters to give us some desired output from the function.

In our case, our function is the sum of squared error, sos_error. The parameter is the slope. We are trying to find the value for the parameter that minimizes the result of calling the function sos_error.

One way of doing this minimization is the slow, dumb way: we just try a huge number of values for the parameter (the slope), and choose the value that gives us the lowest output value (the sum of squared error).

This is such a common problem that there has been an enormous amount of theoretical and practical work on building algorithms to make the process of searching for the minimum value more efficient.

This notebook gives you an idea of how you might do this more efficient search, and therefore, of the kind of things that minimize can do to find the best parameter quickly.
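To make the idea concrete, here is a toy example that is not part of the original analysis. We define a simple function of one parameter, (p - 3) squared, and ask minimize to find the parameter value giving the smallest output; the function name simple_error and the starting guess of 0 are just for illustration.

def simple_error(p):
    # A simple function of one parameter, smallest when p is 3.
    return (p - 3) ** 2

# Ask minimize to search for the parameter value giving the lowest
# output value.  The answer should be very close to 3.
minimize(simple_error, 0)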

Let’s look again at the shape of the curve relating the slope to the sum of squared error:

plt.plot(some_slopes, sos_errors)
plt.xlabel('Candidate slopes')
plt.ylabel('Sum of squared error')
plt.title('SSE as a function of slope')
[Plot of the sum of squared error (y axis) against the candidate slopes (x axis)]

This is the function we are trying to minimize. Specifically, we are trying to optimize the function that gives the SSE as a function of the slope parameter.

We want to avoid trying every possible value for the slope.

So, we are going to start with one value for the slope, say 100, then see if there is a good way to choose the next value to try.

Looking at the graph, we see that, when the slope is far away from the minimum, the sum of squared error (on the y axis) changes very quickly as the slope changes. That is, the function has a steep gradient.

Maybe we could check what the gradient is at our starting value of 100, by calculating the sum of squared error (the y value) there, and then calculating the sum of squared error when we increase the slope by a tiny amount. This gives us the change in y for a very small change in x. We divide the change in y by the change in x, to get the gradient:

def sos_error_gradient(x, dx=0.0001):
    # Gradient of the sos_error at this value of x
    # sos_error at this x value.
    sos_0 = sos_error(x)
    # sos_error a tiny bit to the right on the x axis.
    sos_1 = sos_error(x + dx)
    # gradient is y difference divided by x difference.
    return (sos_1 - sos_0) / dx
# The y value of the function.
sos_error(100)
7038632.75610837
# The gradient of the function at this point.
sos_error_gradient(100)
102629.55091893673
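As a sanity check, and not part of the original notebook, we can compare this numerical estimate to the exact derivative. The sum of squared error is np.sum((mmr - fert * slope) ** 2), so its derivative with respect to the slope is -2 * np.sum(fert * (mmr - fert * slope)). The function name sos_error_exact_gradient is just for this illustration; the two values should agree closely.

def sos_error_exact_gradient(slope):
    # Exact derivative of the sum of squared error with respect to the slope.
    fitted = fert * slope
    error = mmr - fitted
    return -2 * np.sum(fert * error)

# Compare the exact derivative to the numerical estimate at slope 100.
sos_error_exact_gradient(100), sos_error_gradient(100)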

A large positive gradient means the x value (slope) that we tried is still far to the right of the minimum. This might encourage us to try an x value that is well to the left. We could call this a large step in x, and therefore a large step size.

A large negative gradient means the x value (slope) that we tried is still far to the left of the minimum. This might encourage us to try an x value that is well to the right.

As the gradients get small, we want to take smaller steps, so we don’t miss the minimum.

The general idea, then, is to choose our step sizes in proportion to the gradient of the function.

This is the optimization technique known as gradient descent.
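Before running the full loop, here is a minimal sketch of a single gradient descent step, using the same starting slope of 100 and the same step size multiplier as the loop below:

x = 100  # Starting guess for the slope.
gamma = 0.0001  # Step size multiplier.
# Take a step in the opposite direction to the gradient, with a size
# proportional to the gradient.
x - gamma * sos_error_gradient(x)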

Here it is in action, using code modified from the Wikipedia page on gradient descent.

We try new x (slope) values by making big jumps when the gradient is steep, and small jumps when the gradient is shallow.

next_x = 100 # We start the search at x=100
gamma = 0.0001 # Step size multiplier
precision = 0.00001 # Desired precision of result
max_iters = 1000 # Maximum number of iterations

for i in np.arange(max_iters):
    # Go to the next x value
    current_x = next_x
    # Estimate the gradient
    gradient = sos_error_gradient(current_x)
    # Use gradient to choose the next x value to try.
    # This takes negative steps when the gradient is positive
    # and positive steps when the gradient is negative.
    next_x = current_x - gamma * gradient
    step = next_x - current_x
    print('x: {:0.5f}; step {:0.5f}; gradient {:0.2f}'.format(
        current_x, step, gradient))
    # When the step size is equal to or smaller than the desired
    # precision, we are near enough.
    if abs(step) <= precision:
        # Break out of the loop.
        break

print("Minimum at", next_x)
x: 100.00000; step -10.26296; gradient 102629.55
x: 89.73704; step -5.51028; gradient 55102.78
x: 84.22677; step -2.95852; gradient 29585.20
x: 81.26825; step -1.58846; gradient 15884.57
x: 79.67979; step -0.85286; gradient 8528.58
x: 78.82693; step -0.45791; gradient 4579.07
x: 78.36902; step -0.24585; gradient 2458.55
x: 78.12317; step -0.13200; gradient 1320.02
x: 77.99117; step -0.07087; gradient 708.73
x: 77.92029; step -0.03805; gradient 380.52
x: 77.88224; step -0.02043; gradient 204.31
x: 77.86181; step -0.01097; gradient 109.69
x: 77.85084; step -0.00589; gradient 58.90
x: 77.84495; step -0.00316; gradient 31.62
x: 77.84179; step -0.00170; gradient 16.98
x: 77.84009; step -0.00091; gradient 9.12
x: 77.83918; step -0.00049; gradient 4.89
x: 77.83869; step -0.00026; gradient 2.63
x: 77.83843; step -0.00014; gradient 1.41
x: 77.83829; step -0.00008; gradient 0.76
x: 77.83821; step -0.00004; gradient 0.41
x: 77.83817; step -0.00002; gradient 0.22
x: 77.83815; step -0.00001; gradient 0.12
x: 77.83814; step -0.00001; gradient 0.06
Minimum at 77.8381314817816

As you can see, by doing this, we save ourselves from trying a very large number of other potential x values (slopes), and we home in on the minimum very quickly.
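For a rough comparison, we can count how many slopes the brute-force search tried against how many steps the loop above needed. This is only a quick check, and it relies on the loop counter i left over from the loop above:

# Number of slopes tried by the slow, dumb search.
print('Brute force tried', n_slopes, 'slopes')
# Number of gradient descent steps (i is left over from the loop above).
print('Gradient descent took', i + 1, 'steps')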

This is just one method among many for optimizing our search for the minimum of a function. Now that you know what kind of thing it is doing, we will just let minimize do its job for us:

# Use float to show us the final result in higher precision
result = minimize(sos_error, 100)
float(result.x)
77.83818669386743
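The value that minimize returns is the same kind of result object that we saw printed above. For example, we can pull out the lowest sum of squared error it found, and the number of times it called our function; the exact numbers may differ a little between Scipy versions:

# The lowest sum of squared error that minimize found, and the number
# of times it called the sos_error function.
result.fun, result.nfev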