To simplify, assume the pivot is not included in either “half” that is sorted recursively by quicksort.

If splits are optimal (as near as possible to equal halves), then after k divisions the size max_size of the largest sublist satisfies max_size <= N/2^k. Therefore 2^k * max_size <= N. When the partitioning is completed and max_size is 1, we have 2^k <= N, i.e. k <= log2 N.

That shows that the number of levels of partitioning is not more than log2 N, and since the amount of work done in each level of partitioning is O(N), the total amount of partitioning work is O(N log N). Therefore quicksort does O(N log N) work if splits are always perfect.
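The halving argument above can be checked with a small sketch (not from the text): repeatedly halve N, dropping the pivot each time, and count the levels until the largest sublist has size 1. The count matches floor(log2 N).

```python
def perfect_split_levels(n):
    """Levels of perfect halving until the largest sublist has size 1.
    The pivot is dropped at each split, so each half is at most n // 2 long."""
    levels = 0
    size = n
    while size > 1:
        size //= 2   # a perfect split leaves halves of at most size // 2
        levels += 1
    return levels

for n in (8, 1000, 10**6):
    # n.bit_length() - 1 is floor(log2 n), computed without floating point
    print(n, perfect_split_levels(n), n.bit_length() - 1)
```

Each row prints the same two counts, confirming that perfect splits need only about log2 N levels.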

We will see that the average case of quicksort is also O(N log N). This can be proved as a consequence of the fact that a random binary search tree has expected depth O(log N).
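You can get a feel for the O(log N) depth claim empirically. The sketch below (my own illustration, not code from the text) measures the recursion depth quicksort would reach on shuffled data, using the first element as pivot and excluding the pivot from both halves as assumed above; on random input the depth stays a small multiple of log2 N rather than anywhere near N.

```python
import random

def quicksort_depth(a):
    """Recursion depth of quicksort on list a, with the first element
    as pivot and the pivot excluded from both halves."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x >= pivot]
    return 1 + max(quicksort_depth(left), quicksort_depth(right))

random.seed(1)
for n in (100, 1000, 10000):
    data = random.sample(range(n * 10), n)
    print(n, quicksort_depth(data))
```

Note that on already-sorted input the same function hits depth N - 1, which is the worst case the pivot-selection discussion below is trying to avoid.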


Quicksort tends to be about twice as fast as heapsort on average but that is not a “big edge.” A quicksort that is not implemented very carefully may not be as efficient as a heapsort on average.

Take as an example the partition algorithm of our text. It is very elegant and easy to understand, but it probably does about twice as many swaps as the two-pointer version of the partition algorithm, which is trickier to code correctly. Thus I would guess the version of quicksort in our text is not likely to be twice as fast as a heapsort on average.
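I do not have the text's exact code in front of me, but the contrast can be illustrated with the two standard schemes: the single-scan "Lomuto" partition and the two-pointer "Hoare" partition. Counting swap statements on random input shows the single-scan version doing noticeably more of them.

```python
import random

def lomuto_partition(a, lo, hi):
    """Single-scan partition (Lomuto style). Returns (pivot_index, swaps)."""
    pivot = a[hi]
    i = lo
    swaps = 0
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            swaps += 1
            i += 1
    a[i], a[hi] = a[hi], a[i]   # place pivot between the two regions
    return i, swaps + 1

def hoare_partition(a, lo, hi):
    """Two-pointer partition (Hoare style). Returns (split_index, swaps).
    Elements a[lo..split] are <= elements a[split+1..hi]."""
    pivot = a[lo]
    i, j = lo - 1, hi + 1
    swaps = 0
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j, swaps
        a[i], a[j] = a[j], a[i]   # swap only when both scans stall
        swaps += 1

random.seed(2)
data = [random.random() for _ in range(10000)]
_, s1 = lomuto_partition(data[:], 0, len(data) - 1)
_, s2 = hoare_partition(data[:], 0, len(data) - 1)
print("Lomuto swaps:", s1, "Hoare swaps:", s2)
```

The two-pointer version swaps only when both scans have stalled on an out-of-place pair, which is why it tends to do far fewer swaps; the price is that the loop bounds and the returned split index are easy to get subtly wrong.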

As another example, if we shuffle the list before sorting, or do anything very involved to ensure a good pivot value, we tend to slow the quicksort down to the point where its average performance is not likely to be better than that of a heapsort.
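A common compromise (one possibility, not necessarily what our text does) is median-of-three pivot selection: it costs only a constant amount of extra work per partition, unlike an O(N) shuffle, yet protects against the sorted-input worst case.

```python
def median_of_three(a, lo, hi):
    """Return the index of the median of a[lo], a[mid], a[hi].
    O(1) extra work per partition; cheap insurance against sorted input."""
    mid = (lo + hi) // 2
    # sort the three (value, index) pairs by value and take the middle one
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]
```

A partition routine would swap the chosen element into the pivot position and proceed as before, so the per-call overhead is just two or three comparisons.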