Here is an example where a grid walk algorithm is implemented three times with progressively better run time, timed with %timeit and graphed using bokeh:
Code:
# Bottom-up dynamic programming: fill in the number of paths to each cell.
def num_paths(n):
    M = [[0] * n for i in range(n)]
    for i in range(n):
        M[n-1][i] = 1
    for r in range(n-2, -1, -1):
        for c in range(n-r-1, n):
            M[r][c] = M[r][c-1] + M[r+1][c]
    return M[0][n-1]

# Recursion with memoization: results are cached in M.
def num_paths_from(r, c, n, M):
    if M[r][c] > 0:
        return M[r][c]
    if r == 0 and c == n-1:
        return 1
    paths = [(x, y) for (x, y) in [(r-1, c), (r, c+1)] if y >= n-x-1 and y < n]
    npaths = 0
    for x, y in paths:
        npaths += num_paths_from(x, y, n, M)
    M[r][c] = npaths
    return npaths

# Plain recursion, no memoization.
def num_pathz_from(r, c, n):
    if r == 0 and c == n-1:
        return 1
    paths = [(x, y) for (x, y) in [(r-1, c), (r, c+1)] if y >= n-x-1 and y < n]
    npaths = 0
    for x, y in paths:
        npaths += num_pathz_from(x, y, n)
    return npaths

def num_paths_slow(n):
    M = [[0] * n for i in range(n)]
    return num_paths_from(n-1, 0, n, M)

def num_paths_super_slow(n):
    return num_pathz_from(n-1, 0, n)

for sz in range(5, 15):
    %timeit num_paths(sz)
    %timeit num_paths_slow(sz)
    %timeit num_paths_super_slow(sz)
Timing:
100000 loops, best of 3: 7.74 µs per loop
10000 loops, best of 3: 26.2 µs per loop
10000 loops, best of 3: 62.1 µs per loop
100000 loops, best of 3: 9.27 µs per loop
10000 loops, best of 3: 32.9 µs per loop
10000 loops, best of 3: 200 µs per loop
100000 loops, best of 3: 11.3 µs per loop
10000 loops, best of 3: 43 µs per loop
1000 loops, best of 3: 615 µs per loop
100000 loops, best of 3: 13.9 µs per loop
10000 loops, best of 3: 56.9 µs per loop
100 loops, best of 3: 2.05 ms per loop
100000 loops, best of 3: 16.6 µs per loop
10000 loops, best of 3: 70.9 µs per loop
100 loops, best of 3: 6.67 ms per loop
100000 loops, best of 3: 19.4 µs per loop
10000 loops, best of 3: 97.4 µs per loop
10 loops, best of 3: 23.7 ms per loop
10000 loops, best of 3: 22.1 µs per loop
10000 loops, best of 3: 105 µs per loop
10 loops, best of 3: 80.2 ms per loop
10000 loops, best of 3: 25.6 µs per loop
10000 loops, best of 3: 135 µs per loop
1 loop, best of 3: 287 ms per loop
10000 loops, best of 3: 29.8 µs per loop
10000 loops, best of 3: 149 µs per loop
1 loop, best of 3: 1.05 s per loop
10000 loops, best of 3: 32.7 µs per loop
10000 loops, best of 3: 171 µs per loop
1 loop, best of 3: 3.78 s per loop
Chart:
Code for the plot:
from bokeh.palettes import Spectral11
from bokeh.plotting import figure, show, output_file

p = figure(plot_width=300, plot_height=300)

# timings in microseconds for n = 5..14, taken from the %timeit runs above
slowest = [62, 200, 615, 2050, 6670, 23700, 80200, 287000, 1050000, 3780000]
slower = [26, 32, 43, 56, 70, 97, 105, 135, 149, 171]
fast = [7, 9, 11, 13, 16, 19, 22, 25, 29, 32]

st = 5
end = 8
mypalette = Spectral11[0:3]
p.multi_line(xs=[list(range(st, end)),
                 list(range(st, end)),
                 list(range(st, end))],
             ys=[slowest[:end-st],
                 slower[:end-st],
                 fast[:end-st]],
             line_color=mypalette,
             line_width=5)
show(p)
This shows how the algorithm with exponential time complexity deteriorates for higher values of n:
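The plot above only covers the first three sizes (st = 5, end = 8) on a linear y-axis. One possible variant, just a sketch using the same data, is to plot the whole range on a log-scale y-axis so the exponential series does not flatten the other two curves:

from bokeh.palettes import Spectral11
from bokeh.plotting import figure, show

# Same timing data as above, all ten sizes, on a log-scale y-axis.
slowest = [62, 200, 615, 2050, 6670, 23700, 80200, 287000, 1050000, 3780000]
slower = [26, 32, 43, 56, 70, 97, 105, 135, 149, 171]
fast = [7, 9, 11, 13, 16, 19, 22, 25, 29, 32]
sizes = list(range(5, 15))

p = figure(plot_width=300, plot_height=300, y_axis_type="log")
p.multi_line(xs=[sizes, sizes, sizes],
             ys=[slowest, slower, fast],
             line_color=Spectral11[0:3],
             line_width=5)
show(p)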
Now that I've shown you a bunch of performance numbers and a visualization, here is the algorithm itself, in case you are curious. It is a contrived example of counting the number of paths from one corner of a grid to the other. The squares north of the diagonal running from the top-right corner to the bottom-left corner are out of bounds; that is, the path is restricted to the cells on or to the right of that diagonal. In this image, we show the problem for n = 5.
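As a text-only stand-in for that picture, here is a small sketch (the print_board helper is my addition, not part of the original code) that prints which cells are in bounds, using the same check, c >= n-r-1, as the functions above:

def print_board(n):
    # '.' marks cells the path may visit (on or right of the diagonal
    # from the top-right to the bottom-left corner), 'x' marks out-of-bounds cells.
    for r in range(n):
        row = ['.' if c >= n - r - 1 else 'x' for c in range(n)]
        print(' '.join(row))

print_board(5)
# x x x x .
# x x x . .
# x x . . .
# x . . . .
# . . . . .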
The exponential algorithm recursively finds the number of paths from each point to the end point (the top-right corner). But a single point can be reached along many different paths, and this number grows exponentially with n, so the same computation, finding the number of paths from that point to the grid corner, is repeated over and over, causing the slowdown.
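To make that duplication visible, here is a hypothetical instrumented variant of the plain recursion (count_calls and its Counter are my additions) that tallies how many times each cell gets visited:

from collections import Counter

def count_calls(r, c, n, counter):
    # Same recursion as num_pathz_from, but counting visits per cell.
    counter[(r, c)] += 1
    if r == 0 and c == n - 1:
        return 1
    moves = [(x, y) for (x, y) in [(r - 1, c), (r, c + 1)]
             if y >= n - x - 1 and y < n]
    return sum(count_calls(x, y, n, counter) for x, y in moves)

for n in (5, 8, 11):
    counter = Counter()
    count_calls(n - 1, 0, n, counter)
    print(f"n={n}: {sum(counter.values())} calls to cover {len(counter)} distinct cells")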
The next improvement is to remember the number of paths once it has been calculated. Say we are at [4][2]: we calculate the number of paths from here to the grid end and store it in M[4][2]. The next time we reach [4][2], we no longer need to calculate anything; the result can simply be looked up from M[4][2].
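The code above does this with the explicit M table. An equivalent sketch using the standard library's functools.lru_cache (an alternative, not the original implementation) looks like this:

from functools import lru_cache

def num_paths_memo(n):
    @lru_cache(maxsize=None)
    def paths_from(r, c):
        # Cached: each (r, c) is computed once, later calls are lookups.
        if r == 0 and c == n - 1:
            return 1
        moves = [(x, y) for (x, y) in [(r - 1, c), (r, c + 1)]
                 if y >= n - x - 1 and y < n]
        return sum(paths_from(x, y) for x, y in moves)

    return paths_from(n - 1, 0)

print(num_paths_memo(5))  # should match num_paths_slow(5)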
The last algorithm uses dynamic programming to do even less work. It is based on the simple observation that a cell (i, j) can be reached from only two cells: the cell to its immediate left, (i, j-1), and the cell directly below it, (i+1, j). From each of those two cells there is exactly one step into (i, j), so if we know the number of paths reaching those two cells, we can add them up to get the number of paths reaching (i, j). We keep calculating the paths to each cell, walking from the bottom row up and moving right along the columns, until we eventually fill the cell at the top right, (0, n-1).
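For a small n it helps to see the table that num_paths builds. This sketch re-runs the same fill and prints M (the num_paths_table helper is mine); for n = 5 it should produce the table shown in the comments, with the answer in the top-right cell:

def num_paths_table(n):
    # Same bottom-up fill as num_paths, but returning the whole table.
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[n - 1][i] = 1
    for r in range(n - 2, -1, -1):
        for c in range(n - r - 1, n):
            M[r][c] = M[r][c - 1] + M[r + 1][c]
    return M

for row in num_paths_table(5):
    print(row)
# [0, 0, 0, 0, 14]
# [0, 0, 0, 5, 14]
# [0, 0, 2, 5, 9]
# [0, 1, 2, 3, 4]
# [1, 1, 1, 1, 1]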