Chapter 12 Computational Complexity
Computational complexity considers all possible algorithms for
solving a given problem, taken together.
Complexity theory looks for a lower bound on the complexity of the
problem itself, rather than of any one algorithm for it.
For example, this part of the text proves the following
proposition:
Any deterministic algorithm that sorts by comparisons must make
at least ⌈lg(n!)⌉ comparisons in the worst case when sorting
n items.
This shows that the worst-case work function T(n) of any
comparison-based sorting algorithm A must be Ω(lg(n!)). Since
lg(n!) = Θ(n lg n) by Stirling's approximation, this proves, for
example, that no one will ever find an O(n) sorting algorithm
based on comparisons.
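To make the bound concrete, here is a small computational check
(an illustrative Python sketch, not part of the text) that
evaluates ⌈lg(n!)⌉ exactly for a few values of n and compares it
with n lg n:

    import math

    def comparison_lower_bound(n: int) -> int:
        # Exact ceil(log2(n!)): for any integer m >= 1,
        # ceil(log2(m)) equals (m - 1).bit_length().
        return (math.factorial(n) - 1).bit_length()

    for n in (2, 4, 8, 16, 32, 64):
        print(f"n={n:3d}  ceil(lg n!)={comparison_lower_bound(n):4d}  "
              f"n lg n={n * math.log2(n):8.1f}")

The output shows ⌈lg(n!)⌉ tracking n lg n closely, as Stirling's
approximation predicts.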
12.1 Introduction: A simple example
In the game of twenty questions, person A chooses a positive
integer no greater than one million. Person B tries to guess the
number by asking "yes/no" questions. Up to 20 questions are
allowed.
Is there an algorithm by which B can be assured of guessing the
number with no more than 19 questions?
The first question, whatever it is, will rule out some of the
numbers, and leave some still possible. By choosing the
question, B can control what the two sets of numbers are.
However, B cannot control A's answer. Therefore, no matter what
question B asks, the set of numbers not ruled out after the first
question could contain 500,000 or more numbers.
Similarly, after 2 questions, there could be 250,000 numbers
still not ruled out, and so forth.
The numbers progress like this:

    Numbers that could    Questions
    remain possible       asked
    ------------------    ---------
         1,000,000            0
           500,000            1
           250,000            2
           125,000            3
            62,500            4
            31,250            5
            15,625            6
             7,813            7
             3,907            8
             1,954            9
               977           10
               489           11
               245           12
               123           13
                62           14
                31           15
                16           16
                 8           17
                 4           18
                 2           19
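The table can be reproduced mechanically: in the worst case, each
answer leaves at least the ceiling of half of the surviving
numbers, so the counts follow the recurrence r(q) = ⌈r(q-1)/2⌉
with r(0) = 1,000,000. A short Python sketch (illustrative, not
from the text):

    remaining = 1_000_000
    for q in range(1, 20):
        remaining = (remaining + 1) // 2   # ceiling of remaining / 2
        print(f"after {q:2d} questions: {remaining:9,d} could remain possible")

The loop ends with 2 numbers still possible after 19 questions.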
After 19 questions, then, two numbers could still remain
possible, so B cannot be certain of the answer. This establishes
a very elementary result in complexity theory: 19 questions are
not always enough in the game of 20 questions.
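By contrast, 20 questions always suffice, since 2^20 = 1,048,576
is at least 1,000,000: B can halve the surviving range with each
question. A sketch of this binary-search strategy (the function
name and bounds below are illustrative assumptions):

    def questions_needed(secret: int, lo: int = 1, hi: int = 1_000_000) -> int:
        # Count questions of the form "is the number <= mid?"
        # until only one candidate remains.
        count = 0
        while lo < hi:
            mid = (lo + hi) // 2
            count += 1
            if secret <= mid:      # A answers "yes"
                hi = mid
            else:                  # A answers "no"
                lo = mid + 1
        return count

    assert max(questions_needed(s)
               for s in (1, 2, 500_000, 999_999, 1_000_000)) <= 20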