Binary Search



template<class T>
int BinarySearch(T a[], const T& x, int n)
{      // Search a[0] <= a[1] <= ... <= a[n-1] for x
       // Return position if found; return -1 otherwise
    int left = 0, right = n-1;
         
    while (left <= right)
    {
       int middle = (left + right) / 2;
       if (x == a[middle]) return middle;
       if (x > a[middle]) left = middle + 1;
       else right = middle - 1;
    }
    return -1; // x not found
}

Let R be the number of locations left to probe in the binary search algorithm. Let N be the size of the original list. R will be no more than N/2 after the first pass through the loop. Noting that a second pass through the loop is just like a first pass with a list of size no more than N/2, we can conclude also that R will be no more than N/4 after a second pass, and no more than N/(2^k) after k passes.

If N = 1, then since R is no more than N/2 after one pass, and R must be an integer, R = 0 after one pass; so the search requires no more than one pass in the case N = 1.

In the case N = 2 or 3, N/4 < 1, so the search does not take more than 2 passes.

In the case N = 4, 5, 6, or 7, N/8 < 1, so the search does not take more than 3 passes.

In the case N = 8, 9, 10, ..., or 15, N/16 < 1, so the search does not take more than 4 passes. In general, if N is any of 2^k, 2^k + 1, ..., 2^(k+1) - 1, then since N/(2^(k+1)) < 1, the search does not take more than k+1 passes.

Another way of saying this is that when 2^k <= N <= 2^(k+1) - 1, the search takes at most k+1 passes. For these values of N, k = floor(log(N)) (log base 2, here and throughout), and k+1 = floor(log(N)) + 1.

Thus we may conclude that binary search requires no more than floor(log(N)) + 1 passes through the loop.

work(N) <= floor(log(N)) + 1 <= log(N) + 1 < 2*log(N) (when N > 2)

Therefore work(N) is O(log(N)).

What is the ratio of these upper limits if we make N twice as large?

Since log is base 2, log(N) + 1 = log(2N), so (log(2N) + 1)/(log(N) + 1) = (log(2N) + 1)/log(2N) = 1 + (1/log(2N)).

So the work increases by only a small factor: much less than twice the work to process twice the data. In fact, the result is quite striking, in that the multiplier 1 + (1/log(2N)) converges to 1 as N --> infinity. In other words, when N is large, doubling N increases the bound on the work hardly at all!