Complexity Theory

Complexity theory concerns itself with two kinds of measures: time and space.

Time complexity is a measure of how long a computation takes to execute. For a Turing machine, this could be measured as the number of moves required to perform a computation. For a digital computer, it could be measured as the number of machine cycles required for the computation.
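
As a rough illustration (a sketch written for this note, not part of the original text), the Python function below instruments insertion sort with a step counter: each comparison and each element move adds one step, standing in for Turing-machine moves or machine cycles.

```python
def insertion_sort_with_steps(items):
    """Sort a copy of items, counting comparisons and element moves
    as a crude stand-in for Turing-machine moves or machine cycles."""
    a = list(items)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            steps += 1                 # one comparison
            if a[j] > key:
                a[j + 1] = a[j]        # shift the larger element to the right
                steps += 1             # one element move
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, steps

print(insertion_sort_with_steps([5, 2, 4, 6, 1, 3]))            # sorted list and its step count
print(insertion_sort_with_steps(list(range(100, 0, -1)))[1])    # a reversed input costs far more
```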

Space complexity is a measure of how much storage is required for a computation. For a Turing machine, the obvious measure is the number of tape squares used; for a digital computer, the number of bytes used.
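
On a real machine the space measure can be approximated by counting bytes. The sketch below (Python; sys.getsizeof reports the size of one object, not a complete accounting of memory) contrasts a computation that stores all n intermediate results with one that keeps only a single running value.

```python
import sys

def prefix_sums_stored(n):
    """Keep every partial sum of 1..n: extra storage grows with n."""
    sums = []
    total = 0
    for i in range(1, n + 1):
        total += i
        sums.append(total)
    return total, sys.getsizeof(sums)    # bytes used by the list object itself

def running_total(n):
    """Keep only one number: extra storage is essentially constant."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total, sys.getsizeof(total)   # bytes used by a single integer

for n in (10, 1000, 100000):
    print(n, prefix_sums_stored(n)[1], running_total(n)[1])
```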

Both of these measures are functions of a single parameter: the size of the input. Again, the input size can be measured in tape squares or in bytes.

For any given input size, different inputs typically require different amounts of space and time. Hence we can discuss either the average-case or the worst-case complexity. Usually we are interested in worst-case complexity, because it gives a guarantee that holds for every input of that size, whereas average-case analysis requires assumptions about how likely each possible input is.
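
For example, the sketch below (Python, written for this note, using linear search as the algorithm) counts comparisons for a fixed input size n = 100: the worst case examines every element, while the average over all successful searches is only about half as many.

```python
def search_cost(items, target):
    """Number of comparisons a linear search makes to find target (or to give up)."""
    for count, value in enumerate(items, start=1):
        if value == target:
            return count
    return len(items)          # examined everything and failed

data = list(range(100))        # input size n = 100

worst = search_cost(data, -1)                                   # absent target: 100 comparisons
average = sum(search_cost(data, t) for t in data) / len(data)   # about 50.5 comparisons
print(worst, average)
```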

In complexity theory we generally subject our equations to some extreme simplifications. For example, if a given algorithm takes exactly 5n³ + 2n² - n + 1003 machine cycles, where n is the size of the input, we will simplify this to O(n³) (read: Order n-cubed). This is called an order statistic. Specifically, we discard all but the fastest-growing term and ignore its constant coefficient.
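
To see why this simplification is reasonable, the short sketch below (Python, written for this note) evaluates the exact cycle count from the example against the bare n³ term as n grows; the ratio settles toward the constant factor 5, so for large inputs only the cubic term matters.

```python
def exact_cycles(n):
    """The hypothetical exact cost from the example above."""
    return 5 * n**3 + 2 * n**2 - n + 1003

for n in (10, 100, 1000, 10000):
    ratio = exact_cycles(n) / n**3
    print(f"n = {n:>6}   exact cycles = {exact_cycles(n):>16}   ratio to n^3 = {ratio:.4f}")
```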

Justifications for this procedure are: