Optimizing code#

Author: Gaël Varoquaux

This chapter deals with strategies to make Python code go faster.

Prerequisites: the line_profiler package, used below for line-by-line profiling.

Optimization workflow#

  1. Make it work: write the code in a simple, legible way.

  2. Make it work reliably: write automated test cases, make really sure that your algorithm is right and that if you break it, the tests will capture the breakage.

  3. Optimize the code by profiling simple use cases to find the bottlenecks and by speeding up these bottlenecks, for instance with a better algorithm or implementation. Keep in mind that a trade-off should be found between profiling on a realistic example and the simplicity and speed of execution of the code. For efficient work, it is best to work with profiling runs lasting around 10 s.

Profiling Python code#

No optimization without measuring!

  • Measure: profiling, timing

  • You’ll have surprises: the fastest code is not always what you think

Timeit#

In Jupyter or IPython, use timeit (https://docs.python.org/3/library/timeit.html) to time elementary operations:

import numpy as np

a = np.arange(1000)

%timeit a ** 2
962 ns ± 90 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
%timeit a ** 2.1
21.5 μs ± 718 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
%timeit a * a
983 ns ± 35.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

Use this to guide your choice between strategies.

Note

For long-running calls, use %time instead of %timeit; it is less precise but faster.
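For instance, a minimal sketch (%time runs the statement once and reports CPU and wall-clock time; the output is omitted here, as it depends on your machine):

In [1]: rng = np.random.default_rng()

In [2]: %time u, s, v = np.linalg.svd(rng.random((2000, 2000)))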

Profiler#

Useful when you have a large program to profile, for example the following file:

# For this example to run, you also need the 'ica.py' file

import numpy as np
import scipy as sp

from ica import fastica


# @profile  # uncomment this line to run with line_profiler
def test():
    rng = np.random.default_rng()
    data = rng.random((5000, 100))
    u, s, v = sp.linalg.svd(data)
    pca = u[:, :10].T @ data
    results = fastica(pca.T, whiten=False)


if __name__ == "__main__":
    test()

Note

This is a combination of two unsupervised learning techniques, principal component analysis (PCA) and independent component analysis (ICA). PCA is a technique for dimensionality reduction, i.e. an algorithm to explain the observed variance in your data using fewer dimensions. ICA is a source separation technique, for example to unmix multiple signals that have been recorded through multiple sensors. Doing a PCA first and then an ICA can be useful if you have more sensors than signals. For more information, see the FastICA example from scikit-learn.

To run it, you also need to download the ica module. In IPython we can time the script:

In [1]: %run -t demo.py
IPython CPU timings (estimated):
    User  :    14.3929 s.
    System:   0.256016 s.

and profile it:

In [2]: %run -p demo.py
        916 function calls in 14.551 CPU seconds
Ordered by: internal time
ncalls  tottime  percall  cumtime  percall filename:lineno (function)
    1   14.457   14.457   14.479   14.479 decomp.py:849 (svd)
    1    0.054    0.054    0.054    0.054 {method 'random_sample' of 'mtrand.RandomState' objects}
    1    0.017    0.017    0.021    0.021 function_base.py:645 (asarray_chkfinite)
    54    0.011    0.000    0.011    0.000 {numpy.core._dotblas.dot}
    2    0.005    0.002    0.005    0.002 {method 'any' of 'numpy.ndarray' objects}
    6    0.001    0.000    0.001    0.000 ica.py:195 (gprime)
    6    0.001    0.000    0.001    0.000 ica.py:192 (g)
    14    0.001    0.000    0.001    0.000 {numpy.linalg.lapack_lite.dsyevd}
    19    0.001    0.000    0.001    0.000 twodim_base.py:204 (diag)
    1    0.001    0.001    0.008    0.008 ica.py:69 (_ica_par)
    1    0.001    0.001   14.551   14.551 {execfile}
    107    0.000    0.000    0.001    0.000 defmatrix.py:239 (__array_finalize__)
    7    0.000    0.000    0.004    0.001 ica.py:58 (_sym_decorrelation)
    7    0.000    0.000    0.002    0.000 linalg.py:841 (eigh)
    172    0.000    0.000    0.000    0.000 {isinstance}
    1    0.000    0.000   14.551   14.551 demo.py:1 (<module>)
    29    0.000    0.000    0.000    0.000 numeric.py:180 (asarray)
    35    0.000    0.000    0.000    0.000 defmatrix.py:193 (__new__)
    35    0.000    0.000    0.001    0.000 defmatrix.py:43 (asmatrix)
    21    0.000    0.000    0.001    0.000 defmatrix.py:287 (__mul__)
    41    0.000    0.000    0.000    0.000 {numpy.core.multiarray.zeros}
    28    0.000    0.000    0.000    0.000 {method 'transpose' of 'numpy.ndarray' objects}
    1    0.000    0.000    0.008    0.008 ica.py:97 (fastica)
    ...

Clearly the svd (in decomp.py) is what takes most of our time, a.k.a. the bottleneck. We have to find a way to make this step go faster, or to avoid this step (algorithmic optimization). Spending time on the rest of the code is useless.

Profiling outside of IPython, running cProfile

Similar profiling can be done outside of IPython, by simply calling the built-in Python profilers cProfile and profile.

$  python -m cProfile -o demo.prof demo.py

The -o switch writes the profiler results to the file demo.prof instead of printing them to the screen. This is useful if you wish to process or visualize the output with an external tool.
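The saved profile can then be loaded and inspected with the standard pstats module, for instance (a minimal sketch):

import pstats

# Load the saved profile, sort by cumulative time and show the 10 costliest entries.
p = pstats.Stats("demo.prof")
p.sort_stats("cumulative").print_stats(10)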

Line-profiler#

The profiler tells us which function takes most of the time, but not where, inside that function, the time is spent.

For this, we use line_profiler: in the source file, we decorate a few functions that we want to inspect with @profile (no need to import it):

@profile
def test():
    rng = np.random.default_rng()
    data = rng.random((5000, 100))
    u, s, v = sp.linalg.svd(data)
    pca = u[:, :10].T @ data
    results = fastica(pca.T, whiten=False)

Then we run the script using the kernprof command, with switches -l, --line-by-line and -v, --view to use the line-by-line profiler and view the results in addition to saving them:

$ kernprof -l -v demo.py

Wrote profile results to demo.py.lprof
Timer unit: 1e-06 s

Total time: 1.27874 s
File: demo.py
Function: test at line 9

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     9                                           @profile
    10                                           def test():
    11         1         69.0     69.0      0.0      rng = np.random.default_rng()
    12         1       2453.0   2453.0      0.2      data = rng.random((5000, 100))
    13         1    1274715.0 1274715.0     99.7      u, s, v = sp.linalg.svd(data)
    14         1        413.0    413.0      0.0      pca = u[:, :10].T @ data
    15         1       1094.0   1094.0      0.1      results = fastica(pca.T, whiten=False)

The SVD is taking all the time. We need to optimize this line.

Making code go faster#

Once we have identified the bottlenecks, we need to make the corresponding code go faster.

Algorithmic optimization#

The first thing to look for is algorithmic optimization: are there ways to compute less, or better?

For a high-level view of the problem, a good understanding of the maths behind the algorithm helps. However, it is not uncommon to find simple changes, like moving computation or memory allocation outside a for loop, that bring in big gains.

Example of the SVD#

In both examples above, the SVD - Singular Value Decomposition - is what takes most of the time. Indeed, the computational cost of this algorithm is roughly \(n^3\) in the size of the input matrix.

However, in both of these examples, we are not using all the output of the SVD, but only the first few columns of its first return argument. If we use the svd implementation of SciPy, we can ask for an incomplete version of the SVD. Note that the linear algebra implementations in SciPy are richer than those in NumPy and should be preferred.

In [3]: %timeit np.linalg.svd(data)
1 loops, best of 3: 14.5 s per loop

In [4]: import scipy as sp

In [5]: %timeit sp.linalg.svd(data)
1 loops, best of 3: 14.2 s per loop

In [6]: %timeit sp.linalg.svd(data, full_matrices=False)
1 loops, best of 3: 295 ms per loop

In [7]: %timeit np.linalg.svd(data, full_matrices=False)
1 loops, best of 3: 293 ms per loop

We can then use this insight to optimize the previous code:

def test():
    rng = np.random.default_rng()
    data = rng.random((5000, 100))
    u, s, v = sp.linalg.svd(data, full_matrices=False)
    pca = u[:, :10].T @ data
    results = fastica(pca.T, whiten=False)
In [1]: import demo


In [2]: %timeit demo.test()
ica.py:65: RuntimeWarning: invalid value encountered in sqrt
  W = (u * np.diag(1.0/np.sqrt(s)) * u.T) * W  # W = (W * W.T) ^{-1/2} * W
1 loops, best of 3: 17.5 s per loop

In [3]: import demo_opt

In [4]: %timeit demo_opt.test()
1 loops, best of 3: 208 ms per loop

Truly incomplete SVDs, e.g. computing only the first 10 singular vectors, can be computed with ARPACK, available in scipy.sparse.linalg.svds.
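A minimal sketch, reusing the same random data matrix as in the examples above:

import numpy as np
import scipy.sparse.linalg

rng = np.random.default_rng()
data = rng.random((5000, 100))

# Compute only the 10 largest singular triplets with ARPACK.
u, s, vt = scipy.sparse.linalg.svds(data, k=10)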

Computational linear algebra

For certain algorithms, many of the bottlenecks will be linear algebra computations. In this case, using the right function to solve the right problem is key. For instance, an eigenvalue problem with a symmetric matrix is easier to solve than with a general matrix. Also, most often, you can avoid inverting a matrix and use a less costly (and more numerically stable) operation.

Know your computational linear algebra. When in doubt, explore scipy.linalg, and use %timeit to try out different alternatives on your data.
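As an illustration, a sketch with random matrices (the variable names are only for this example):

import numpy as np
import scipy as sp

rng = np.random.default_rng()
m = rng.random((1000, 1000))
sym = m + m.T                      # a symmetric matrix
b = rng.random(1000)

# Symmetric eigenvalue problem: eigh exploits the symmetry and beats the general eig.
w, v = sp.linalg.eigh(sym)

# Linear system: solve() is cheaper and more numerically stable than forming the inverse.
x = sp.linalg.solve(m, b)          # preferred
# x = sp.linalg.inv(m) @ b         # avoid: explicit inverse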

Writing faster numerical code#

A complete discussion on advanced use of NumPy is found in the chapter Advanced NumPy, or in the article “The NumPy array: a structure for efficient numerical computation” by van der Walt et al. Here we discuss only some commonly encountered tricks to make code faster.

Vectorizing for loops#

Find tricks to avoid for loops using NumPy arrays. For this, masks and index arrays can be useful.
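For instance, a conditional update written as an explicit Python loop can usually be replaced by a boolean mask; a minimal sketch:

import numpy as np

rng = np.random.default_rng()
a = rng.random(1_000_000)

# Loop version: zero out all values below 0.5, one element at a time.
result = np.empty_like(a)
for i in range(a.size):
    result[i] = a[i] if a[i] >= 0.5 else 0.0

# Vectorized version: a boolean mask does the same work in a single pass.
result = np.where(a >= 0.5, a, 0.0)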

Broadcasting#

Use broadcasting to do operations on arrays as small as possible before combining them.
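For instance, pairwise differences between two 1-D arrays can be computed by broadcasting a column against a row, without tiling either array; a minimal sketch:

import numpy as np

x = np.arange(5)
y = np.arange(3)

# Broadcasting: x is treated as a (5, 1) column, y as a (1, 3) row.
diff = x[:, np.newaxis] - y[np.newaxis, :]   # shape (5, 3)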

In place operations#

a = np.zeros(10_000_000)

%timeit global a ; a = 0*a
18.8 ms ± 623 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit global a ; a *= 0
7.51 ms ± 631 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Note: we need global a in the %timeit call so that it works as expected: because the statement assigns to a, a would otherwise be treated as a local variable.
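The same idea applies beyond multiplication: augmented assignments and the out= argument of NumPy ufuncs avoid allocating temporary arrays; a minimal sketch:

import numpy as np

a = np.ones(10_000_000)
b = np.ones(10_000_000)

a = a + b            # allocates a temporary array for the sum
a += b               # in place: no new allocation
np.add(a, b, out=a)  # explicit equivalent of the in-place form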

Be easy on the memory: use views, and not copies#

Copying big arrays is as costly as making simple numerical operations on them:

a = np.zeros(10_000_000)

%timeit a.copy()
19 ms ± 572 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit a + 1
19.5 ms ± 688 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
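Slicing returns views, while fancy indexing and .copy() return copies; np.may_share_memory helps check which is which. A minimal sketch:

import numpy as np

a = np.zeros(10_000_000)

view = a[::2]           # slicing creates a view: no data is copied
copy = a[::2].copy()    # an explicit copy allocates new memory

np.may_share_memory(a, view)   # True
np.may_share_memory(a, copy)   # False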

Beware of cache effects#

Memory access is cheaper when it is grouped: accessing a big array in a contiguous way is much faster than random access. This implies, amongst other things, that smaller strides are faster (see CPU cache effects):

c = np.zeros((5000, 5000), order='C')

# Row elements are far apart in memory, for C ordering.
%timeit np.median(c, axis=0)
448 ms ± 4.26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# Column elements are contiguous in memory, for C ordering.
%timeit np.median(c, axis=1)
116 ms ± 7.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
c.strides
(40000, 8)

This is the reason why Fortran ordering or C ordering may make a big difference in the speed of operations:

rng = np.random.default_rng()

a = rng.random((20, 2**18))

b = rng.random((20, 2**18))

%timeit b @ a.T
11.5 ms ± 563 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
c = np.ascontiguousarray(a.T)

%timeit b @ c
12.5 ms ± 1.2 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

Note that copying the data to work around this effect may not be worth it:

%timeit c = np.ascontiguousarray(a.T)
11.2 ms ± 34.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Using numexpr can be useful to automatically optimize code for such effects.
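For instance, with the third-party numexpr package installed (a sketch; numexpr is not part of NumPy or SciPy):

import numpy as np
import numexpr as ne

rng = np.random.default_rng()
a = rng.random(10_000_000)
b = rng.random(10_000_000)

# numexpr evaluates the expression blockwise, in cache-sized chunks, avoiding
# the large temporary arrays created by the equivalent pure-NumPy expression.
result = ne.evaluate("2*a + 3*b**2")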

Use compiled code#

The last resort, once you are sure that all the high-level optimizations have been explored, is to transfer the hot spots, i.e. the few lines or functions in which most of the time is spent, to compiled code. For compiled code, the preferred option is to use Cython: it is easy to transform existing Python code into compiled code, and with good use of the NumPy support it yields efficient code on NumPy arrays, for instance by unrolling loops.

Warning

For all the above: profile and time your choices. Don’t base your optimization on theoretical considerations.