Computational Complexity

When analyzing the complexity of an algorithm, the goal is to bound its growth rate from above and below. These bounds are expressed with big-O and big-Omega notation.

Big-O Notation

Big-O notation, also called a Landau symbol after the German mathematician Edmund Landau who popularized it, describes how fast a function (algorithm) grows or declines with the change in input size. In more mathematical terms, it describes the asymptotic behavior of a function, where asymptotic means approaching a value or curve (a mathematical limit) as the input n increases towards infinity.

For example, when analyzing some algorithm, we might find that it takes f(n) = 4n² − 2n + 2 steps to complete a problem of size n. Let us look at the function graphically, plotting it against the simpler function 5n²:

Asymptotic here means that 5n² (the green line) is an asymptotic upper bound for the algorithm, and mathematically it is written as

f(n) = O(g(n)),

meaning there exist a constant c > 0 and a threshold n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀, where f(n) is 4n² − 2n + 2 and g(n) is n². In plain language it means that, for sufficiently large n, the graph of f(n) lies below a constant multiple of g(n).
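As a quick worked check of the definition, one valid choice of constants is c = 5 and n₀ = 1:

\[
4n^2 - 2n + 2 \;\le\; 4n^2 + n^2 \;=\; 5n^2 \qquad \text{for all } n \ge 1,
\]

because n² + 2n − 2 ≥ 0 for every n ≥ 1. Hence f(n) ≤ 5·n² everywhere past n₀ = 1, which is exactly f(n) = O(n²).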

It is very important to grasp here that f(n) = O(g(n)) does not measure the time at which an algorithm executes; instead it shows the pattern at which the function changes for sufficiently large n. This 'for sufficiently large n' is key in the big-O definition: in order to describe an algorithm we look for a term that dominates f(n) as n grows towards infinity. For that reason we can get rid of the constants and slower-growing terms, leaving only the fastest-growing term, in this example 4n², and conclude that the function f(n) grows on the order of n², or O(n²).

Let’s have an example:
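Here is a minimal sketch of such a function: a DoSomething with two nested loops of n iterations around a constant-time body (the exact body is an assumption and does not affect the growth rate):

/* DoSomething: two nested loops of n iterations each
   around a constant-time body. */
long DoSomething(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {        /* runs n times */
        for (int j = 0; j < n; j++) {    /* runs n times for every i */
            count++;                     /* constant-time work */
        }
    }
    return count;                        /* n * n executions in total */
}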

Summing all the steps up (one initialization, n iterations of the outer i loop, and n iterations of the inner j loop for each of them), we will get a function of the form f(n) = c₂n² + c₁n + c₀ for some constants c₂, c₁ and c₀.

Getting rid of constants and slower-growing terms, we can say that the function DoSomething has quadratic growth, at the rate of O(n²). We did not have to do all this computation; instead we could have just observed that the body inside the j loop executes n times for each of the n values of i, or n · n = O(n²) in total.
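To see the quadratic pattern empirically, one can count the body executions for growing n (a sketch reusing the DoSomething above; each tenfold increase in n should multiply the count by a hundred, printing 100, 10000 and 1000000):

#include <stdio.h>

int main(void) {
    for (int n = 10; n <= 1000; n *= 10) {
        printf("n = %4d -> inner body ran %7ld times\n", n, DoSomething(n));
    }
    return 0;
}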

Common classes of functions encountered in computer science are listed in the table below, with slower-growing functions shown first:

Notation     Name
O(1)         constant
O(log n)     logarithmic
O(n)         linear
O(n log n)   linearithmic
O(n²)        quadratic
O(n³)        cubic
O(2ⁿ)        exponential
O(n!)        factorial

The base of the logarithm is irrelevant, just like the constants in our example, because the base itself only contributes a constant factor: a logarithm in one base can be converted into a logarithm in another base. For example,

log₂ n = log₁₀ n / log₁₀ 2 ≈ 3.32 · log₁₀ n,

so log₂ n and log₁₀ n differ only by the constant multiplier 1/log₁₀ 2.

One of the criticisms of big-O notation is that it is imprecise and only serves as guidance for selecting an algorithm. For small n an exponential function could execute faster than a linear one if the latter has a huge constant or large slower-growing terms: with costs of 1000n and 2ⁿ, the exponential algorithm is cheaper for every n up to 13. But the purpose of the notation is not to tell the time of the execution, only to indicate the upper bound of the complexity for large n.
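A small sketch that finds the crossover for these two hypothetical cost functions (the constants are made up for illustration):

#include <stdio.h>

int main(void) {
    /* Hypothetical cost models: a linear algorithm with a huge
       constant factor vs. an exponential algorithm. */
    double exponential = 1.0;
    for (int n = 1; n <= 16; n++) {
        double linear = 1000.0 * n;
        exponential *= 2.0;                       /* 2^n */
        printf("n = %2d   1000n = %6.0f   2^n = %6.0f   cheaper: %s\n",
               n, linear, exponential,
               exponential < linear ? "exponential" : "linear");
    }
    return 0;
}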

To summarize, big-O notation describes computational complexity. It is often (somewhat imprecisely) referred to as time complexity because running time is usually the primary concern in programming. Space complexity, the growth rate of the memory required by an algorithm, can be expressed in exactly the same notation.
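For instance, here is a sketch (the function names are illustrative) of two ways to sum an array: both run in O(n) time, but the first needs O(1) extra space while the second allocates O(n):

#include <stdlib.h>

/* O(n) time, O(1) extra space: one running total. */
long sum_in_place(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* O(n) time, O(n) extra space: also stores every prefix sum. */
long sum_with_prefix(const int *a, size_t n) {
    long *prefix = malloc(n * sizeof *prefix);   /* O(n) memory */
    long total = 0;
    if (prefix == NULL)
        return -1;                               /* allocation failed */
    for (size_t i = 0; i < n; i++) {
        total += a[i];
        prefix[i] = total;
    }
    free(prefix);
    return total;
}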

Big-Omega Notation

As big-O notation defines an upper bound of a function, big-Omega (Ω) defines its lower bound. When assessing an algorithm, we typically first establish the upper bound, for example O(n log n) for comparison-based sorting, and then search for a class of inputs that forces Ω(n log n) work. If the lower bound we establish matches the upper bound, the bound is called tight, and an algorithm achieving it is called asymptotically optimal. Remember that this asymptotic relationship only holds for sufficiently large n. When a gap remains between the upper and lower bounds, it indicates that a complete understanding of the algorithm has not yet been achieved.
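Formally, mirroring the big-O definition above:

\[
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \ f(n) \ge c \cdot g(n) \ \text{ for all } n \ge n_0.
\]

When both f(n) = O(g(n)) and f(n) = Ω(g(n)) hold, the bound is tight and is commonly written f(n) = Θ(g(n)).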

Applications of Arithmetic Intensity

Arithmetic intensity measures the number of arithmetic (floating-point) operations per byte of memory accessed, and answers the following question:

Is the program compute intensive or data intensive on a particular architecture?
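As a formula (the example below counts memory accesses rather than raw bytes, which differs only by a constant factor when every access moves a fixed-size element):

\[
\text{AI} = \frac{\text{arithmetic (floating-point) operations}}{\text{bytes of memory accessed}}
\]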

Following is an example: a sequential RGB-to-gray conversion algorithm and the arithmetic-intensity calculation for it:

// Convert each RGB pixel to a grayscale value using the
// standard luma weights (0.30 R + 0.59 G + 0.11 B).
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        Pixel pixel = RGB[y][x];
        gray[y][x] = 0.30 * pixel.R
                   + 0.59 * pixel.G
                   + 0.11 * pixel.B;
    }
}

 

Per pixel:
2 additions + 3 multiplications = 5 floating-point operations
3 reads (R, G, B) + 1 write (gray) = 4 memory accesses
AI = 5 / 4 = 1.25


References

http://en.wikipedia.org/wiki/Big_O_notation

http://classes.soe.ucsc.edu/cmps102/Spring04/TantaloAsymp.pdf

http://web.mit.edu/16.070/www/lecture/big_o.pdf

Advanced CUDA by Rob van Nieuwpoort
