- Old Reliable: The Selection Sort
- The Merge-Sort Algorithm
- Why Is One Algorithm Faster Than Others?
- Other Algorithms
Why Is One Algorithm Faster Than Others?
Suppose you have an array 5,000 elements long. Dealing with data sets of that size is not rare today; in fact, it's a small data set, by some standards. Now consider two algorithms whose running times grow at these rates:
- O(n²)
- O(n log n)
Plugging the number 5,000 into these expressions, and using logarithm base 2, we get comparative figures of 25,000,000 and 60,000 (roughly speaking), suggesting that the merge-sort algorithm is 400 times faster than a selection sort.
The difference won't be nearly that great, however, because we've put aside some constant factors. For example, the merge-sort algorithm copies elements back and forth to a temporary array during each merge, so for each comparison it does several times as much work as the selection sort. Let's divide by a constant factor of 10, then; we'd expect a merge sort to be about 40 times faster than a selection sort. (The actual figure, as it turns out, is around 50 times faster.) Being 40 times faster is a 4,000% increase in speed, and as N grows to 10,000 or more, the difference in speed becomes far greater still.
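If you want to check these figures yourself, here's a minimal sketch (not from the chapter) that evaluates both expressions for n = 5,000 and prints the raw ratio along with the ratio after dividing out the assumed constant factor of 10:

    #include <iostream>
    #include <cmath>
    using namespace std;

    int main() {
        long n = 5000;
        long quadratic = n * n;                           // 25,000,000
        long linearithmic = (long)(n * log2((double)n));  // Roughly 61,000.

        cout << "n squared      = " << quadratic << endl;
        cout << "n log2 n       = " << linearithmic << endl;
        cout << "raw ratio      = " << quadratic / linearithmic << endl;       // About 400.
        cout << "divided by 10  = " << quadratic / linearithmic / 10 << endl;  // About 40.
        return 0;
    }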
This naturally raises a question: Why is one way so much faster than the other? Well, for a general-purpose sorting algorithm that works by comparing elements, computer scientists have proven that the best you can do is O(n log n). Algorithms that achieve this complexity are close to optimal. If a sorting algorithm achieves only O(n²), clearly it's doing something wasteful. But what?
It takes N(N-1)/2 comparisons to compare each element to every other element exactly once; for N = 5,000, that's 12,497,500 comparisons. That gives us O(n²), and this result is nowhere near optimal. Thanks to transitivity, you shouldn't need to make that many comparisons. For example, if element A[0] is less than A[1], and A[1] is less than A[2], you shouldn't need to test whether A[0] is less than A[2]. In the best case, transitivity means you'd need to make only N-1 comparisons. The problem is that you'd have to know which comparisons to make, and there's no way to know that in advance. Therefore, unless we get lucky, the optimal algorithm is going to be neither O(n) nor O(n²), but somewhere in between.
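To confirm the N(N-1)/2 figure, here's a small sketch (not from the chapter) whose loop structure pairs each element position with every later one, just as an exhaustive comparison of all pairs would:

    #include <iostream>
    using namespace std;

    int main() {
        const int N = 5000;
        long count = 0;

        // Pair position i with every position j that comes after it,
        // so each pair of elements is counted exactly once.
        for (int i = 0; i < N - 1; i++)
            for (int j = i + 1; j < N; j++)
                count++;

        cout << "All pairs: " << count << endl;                  // 12,497,500
        cout << "N(N-1)/2:  " << (long)N * (N - 1) / 2 << endl;  // 12,497,500
        return 0;
    }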
Look again at the selection sort. Let's take the case of an array that's in reverse order:
10 9 8 7 6 5 4 3 2 1
If you look at the C++ code (a sketch appears after the example below), you should see that the inner loop, which sets iMin, actually compares some of the neighboring array elements, with their original values, over and over again. (Ask yourself how many times 5 is compared to 4.) So there is significant redundancy. About halfway through, the entire array is sorted, but the algorithm must continue. Why? Because it has no way to detect that the last half of the array is already sorted:
5 6 7 8 9 10
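For reference, here is a minimal sketch of the kind of selection sort being described; the name sel_sort matches the timing code later in this section, though the version shown earlier in the chapter may differ in its details:

    // Selection sort: on each pass, scan the unsorted region for the
    // smallest remaining element, then swap it into place.
    void sel_sort(int a[], int n) {
        for (int i = 0; i < n - 1; i++) {
            int iMin = i;                      // Position of smallest value so far.
            for (int j = i + 1; j < n; j++) {  // Re-scan everything after i,
                if (a[j] < a[iMin])            // re-comparing elements that earlier
                    iMin = j;                  // passes have already examined.
            }
            int temp = a[i];                   // Swap the minimum into position i.
            a[i] = a[iMin];
            a[iMin] = temp;
        }
    }

Even when the tail of the array is already in order, the inner loop re-scans it on every remaining pass; nothing in the code can detect that the work is finished.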
In contrast, smart algorithms create more and more order at each stage, with later stages preserving and taking advantage of the intermediate ordering.
Not convinced? Let's use C/C++ library functions to clock algorithm speed.
Clocking the Algorithms
The standard C library provides an easy way to measure running times; on the system used here, it has a granularity of a millisecond (1/1000th of a second). This library is, of course, inherited by C++. You can prepare to use the timing functions with these declarations:
    #include <iostream>
    #include <ctime>
    using namespace std;
The following statements create arrays of 5,000 elements and initialize them to be out of order. (Actually, each is in perfect reverse order; you may want to experiment with arrays of random numbers, as sketched after the listing.) The program calls the C/C++ clock function to record begin and end times, and then it prints the differences.
    #define SZ 5000

    int a1[SZ];
    int a2[SZ];
    int temp_array[SZ];

    for (int i = 0; i < SZ; i++) {    // Reverse order.
        a1[i] = a2[i] = SZ - i;
    }

    long t1 = clock();
    sel_sort(a1, SZ);
    long t2 = clock();

    long t3 = clock();
    merge_sort(a2, temp_array, 0, SZ);
    long t4 = clock();

    cout << "Time taken by selection sort = " << t2 - t1 << endl;
    cout << "Time taken by merge sort = " << t4 - t3 << endl;
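One caution about reading these timings: clock returns ticks, and the listing above prints raw tick counts. On the Microsoft compiler used here a tick happens to be a millisecond, but the C standard guarantees only that CLOCKS_PER_SEC ticks make up one second, so a portable version converts explicitly. And if you want to try the random-data experiment mentioned above, the standard rand function will do. Both refinements are sketched below (the seed value 42 is an arbitrary choice):

    #include <cstdlib>    // For rand and srand.

    // Portable conversion from clock ticks to milliseconds.
    double ms = (t2 - t1) * 1000.0 / CLOCKS_PER_SEC;
    cout << "Selection sort took " << ms << " milliseconds" << endl;

    // Filling the arrays with random data instead of reverse order:
    srand(42);                     // Arbitrary fixed seed, for repeatable runs.
    for (int i = 0; i < SZ; i++) {
        // Give both arrays identical random contents, so the two
        // sorts face the same input.
        a1[i] = a2[i] = rand();
    }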
By varying the value of SZ (size) and recompiling, you can get a series of comparative measures. The following table summarizes the (approximate) results I got, using Microsoft Visual Studio Express Edition.
| Array Size | Selection-Sort Duration (milliseconds) | Merge-Sort Duration (milliseconds) | Difference |
|------------|----------------------------------------|------------------------------------|------------|
| 5,000      | 50                                     | 1                                  | 50x        |
| 10,000     | 200                                    | 2                                  | 100x       |
| 50,000     | 5,000                                  | 10                                 | 500x       |
For the largest size I tested (50,000 elements), the merge-sort algorithm is 500 times faster than the selection sort!