Time Complexity

Time complexity describes the amount of processor time required to run an algorithm. Processor time may differ from real (wall-clock) time, since an algorithm may be parallelized across many independent processors working together. The run time of an algorithm generally varies with the size of its input(s), so algorithm complexity is described using Big O notation, which captures the asymptotic behaviour of a function as its input grows.

For example, if an algorithm loops exactly once over every element of an input containing \(n\) entries, then we say the complexity of the algorithm is \(O(n)\).
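
As a rough sketch (the function and names here are illustrative, not taken from any particular library), a linear scan touches each of the \(n\) entries at most once:

```python
def contains(items, target):
    # Visits each of the n entries at most once, so the running
    # time grows linearly with the size of the input: O(n).
    for item in items:
        if item == target:
            return True
    return False
```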

Exponential Hardness

When comparing the complexity of algorithms there is an important distinction between one algorithm being “faster” than another and one algorithm being exponentially faster. Although algorithms vary in complexity in many ways, it is usually exponential differences that determine whether an algorithm is feasible to execute at all.

For example, the difference between \(O(\log n)\) and \(O(\sqrt{n})\) is vastly smaller than the difference between either of them and \(O(n)\), which is in turn vastly smaller than the difference to \(O(2^n)\). See the chart below for a visualization of this.
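
As a rough numerical illustration (the specific input sizes are arbitrary), evaluating each growth function for a few values of \(n\) makes the gaps clear:

```python
import math

# Illustrative comparison of growth rates for a few input sizes n.
for n in (10, 20, 40, 80):
    print(f"n={n:>3}  log2(n)={math.log2(n):6.2f}  "
          f"sqrt(n)={math.sqrt(n):6.2f}  n={n:>3}  2^n={2 ** n}")
```

By \(n = 80\), \(\log_2 n\) is still under 7 while \(2^n\) already exceeds \(10^{24}\).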

Generally, polynomial time is considered the cutoff between “easy” and “hard” problems: any problem that can be solved in \(O(n^k)\) time for some constant \(k\) is considered “easy”. That said, input data sets can grow so large that even polynomial-time algorithms become unrealistic for real-world use.
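
As a hedged illustration of a polynomial-time algorithm (the function below is a sketch, not a reference implementation), a pairwise duplicate check runs in \(O(n^2)\) time, which is “easy” in this sense yet can still be too slow for very large inputs:

```python
def has_duplicate(items):
    # Compares every pair of entries: about n*(n-1)/2 comparisons,
    # i.e. O(n^2). Polynomial, but already impractical once the
    # input grows to millions of entries.
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False
```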

Quantum Computing

There exist computational techniques that are only made possible through the use of quantum phenomena such as [[ superposition ]]. These techniques can allow for an exponential speedup of computation when compared to their classical counterparts. For example, factoring a semiprime appears to require super-polynomial time on a classical computer (no polynomial-time classical algorithm is known), but can be completed in polynomial time on a quantum computer.
