Merge Sort Algorithm


Hello people…! In this post I will talk about another sorting algorithm, Merge Sort, which many consider to be among the fastest comparison-based sorting techniques. Merge Sort was invented by the famous computer scientist John von Neumann. It is a stable sort, so the relative order of equal elements is preserved. It uses O(n) auxiliary memory to sort the elements in O(n log n) time. It is the classic example of a Divide-and-Conquer algorithm, and it is very easy to understand and code. So, let’s get started..!

In the Merge Sort, we divide the array into halves recursively until we cannot divide further. Then, we merge them into a single array which would be sorted. The partitioning is similar to that in Quick Sort. We can put the merge part of the algorithm in a step-by-step process –

  • Create an array large enough to accommodate all the elements of the two arrays to be merged.
  • Compare the first elements of both arrays. Pick the smaller one and put it into the array we just created.
  • Now compare the next element of the array from which we just picked against the current element of the other array. Again, put the smaller one into the large array.
  • Repeat until we run out of elements. Then, the large array is sorted.

For a better understanding of the merge procedure, look at the sketch below –

[Image: the merge procedure on two sorted arrays]

This is the way we merge two arrays. As you can see, the prerequisite for this procedure to work is that the two smaller arrays we are given must already be sorted. How do we ensure this in our algorithm? Remember, at the lowest level, the partitioning stops when the array has only one element, which is trivially sorted. Then, we merge two arrays of size 1 into a sorted array of size 2. That lets us merge two arrays of size 2 to form a sorted array of size 4, and so the recursion goes. So the smaller arrays are always sorted before we merge them.

Now that we know how the algorithm works, let’s put it in terms of pseudo-code. It consists of two functions –

mergeSort(Array, low, high)
	if low < high
		mid = (low + high + 1) / 2

		for i = low to mid - 1
			LeftPartition[i - low] = Array[i]

		for i = mid to high
			RightPartition[i - mid] = Array[i]

		mergeSort(LeftPartition, 0, mid - low - 1)
		mergeSort(RightPartition, 0, high - mid)
		mergePartitions(Array, low, high, LeftPartition, RightPartition)

mergePartitions(Array, low, high, LeftPartition, RightPartition)
	i = j = 0
	k = low
	mid = (low + high + 1) / 2

	while i < mid - low and j <= high - mid
		if LeftPartition[i] <= RightPartition[j]
			Array[k] = LeftPartition[i]
			++i
		else
			Array[k] = RightPartition[j]
			++j
		++k

	// Copy whatever is left over in either partition
	while j <= high - mid
		Array[k] = RightPartition[j]
		++j
		++k

	while i < mid - low
		Array[k] = LeftPartition[i]
		++i
		++k

The pseudo-code should be clear. If you have any doubts, feel free to comment. Using the pseudo-code, try to code the Merge Sort algorithm yourself. If you get stuck, read the explanation again. I’m sure you’ll get it if you give it your best. If you didn’t, don’t fret, my code is below –

/*
 * Merge Sort Algorithm
 *
 * Authored by,
 * Vamsi Sangam
 */

#include <stdio.h>

// Merges two sorted partitions into a bigger sorted array
void mergePartitions(int Array[], int low, int high, int LeftPartition[], int RightPartition[])
{
	int i, j, k;
	int mid = (low + high + 1) / 2;

	// Initialise carefully
	i = j = 0;
	k = low;

	while (i < mid - low && j <= high - mid) {
		// '<=' keeps equal elements in their original order (stability)
		if (LeftPartition[i] <= RightPartition[j]) {
			Array[k] = LeftPartition[i];
			++i;
		} else {
			Array[k] = RightPartition[j];
			++j;
		}

		++k;
	}

	// Copying any leftover elements
	if (i == mid - low && j <= high - mid) {
		while (j <= high - mid) {
			Array[k] = RightPartition[j];
			++j;
			++k;
		}
	} else {
		while (i < mid - low) {
			Array[k] = LeftPartition[i];
			++i;
			++k;
		}
	}
}

// Partitions the array into two
// Does nothing if array has only one element
void mergeSort(int Array[], int low, int high)
{
	if (low < high) {
		int mid = (low + high + 1) / 2;
		int LeftPartition[mid - low];
		int RightPartition[high - mid + 1];
		int i;

		// Copying elements
		for (i = low; i < mid; ++i) {
			LeftPartition[i - low] = Array[i];
		}

		// Copying elements
		for (i = mid; i <= high; ++i) {
			RightPartition[i - mid] = Array[i];
		}

		// Recursive call
		mergeSort(LeftPartition, 0, mid - low - 1);
		mergeSort(RightPartition, 0, high - mid);
		// Merge the two partitions
		mergePartitions(Array, low, high, LeftPartition, RightPartition);
	}
}

int main()
{
	int n, i;

	printf("Enter the size of input -\n");
	scanf("%d", &n);

	int arr[n];

	for (i = 0; i < n; ++i) {
		scanf("%d", &arr[i]);
	}

	mergeSort(arr, 0, n - 1);

	printf("\nThe Sorted Array -\n");
	for (i = 0; i < n; ++i) {
		printf("%d ", arr[i]);
	}
	printf("\n");

	return 0;
}

I hope you get the intuition behind the Merge Sort algorithm. It is the best example to explain the Divide-and-Conquer technique. Its runtime follows the recurrence T(n) = 2T(n/2) + cn, which solves to O(n log n). This can be proved easily by using a Recursion Tree –

[Image: recursion tree of Merge Sort]

Optimisation – Merge Sort can be optimised to perform better. The idea behind this lies in the asymptotic complexities of various sorting algorithms. When we say that Merge Sort is better than Insertion Sort because its worst case of O(n log n) beats O(n²), that difference can be seen only for large values of ‘n’. This is because terms like O(n log n) and O(n²) describe behaviour as n → ∞. So, for small values of ‘n’, like 100, we may not find any significant performance difference between Merge Sort and Insertion Sort. For smaller ‘n’, an optimised Bubble Sort can actually outperform Merge Sort, because Merge Sort comes with a lot of overhead: the partitioning, the merging, the recursive calls and so on. Bubble Sort has no such overhead. So, how about we apply an optimised Bubble Sort to the smaller partitions inside Merge Sort…? That is how we optimise Merge Sort.

But how do we decide how small these “smaller sized partitions” should be..? One way is to do an analysis. You can test your optimised Merge Sort for various cut-off lengths: run your code on the same input while varying the ‘threshold’ value below which partitions are sorted by the optimised Bubble Sort instead of being partitioned further. You could plot a graph of threshold against runtime, where you will find a minimum runtime at a certain threshold value. But remember, if you want to see significant results you need to input as many as 10⁵ or 10⁶ values.

The next best thing you can do is make a guess by looking at your resources. The idea is to look at your cache. Let’s say your cache is 2 MB. Then it can accommodate an integer array of about 5 × 10⁵ elements (at 4 bytes per int). But the whole cache will not be available to you, so let’s say that when you operate on an array of size 100, it is very likely to fit entirely in the cache. So, if we do an optimised Bubble Sort on arrays of size 100, we are most likely working in cache, and as the process has less overhead, performance improves.

It’s okay if you don’t get the whole threshold concept. The gist of the discussion is that we will use an optimised Bubble Sort for the smaller partitions in Merge Sort. Try to code this technique yourself. You only have to make minor adjustments to your code. I’m sure you’ll get it if you give it a little serious thought. I will post the code very shortly, until then keep trying!

This is the Merge Sort Algorithm. If you have any doubts or anything to say, feel free to comment them. I hope my post helped you in getting an idea of the Merge Sort. If it did, let me know by commenting…! Keep practicing… Happy Coding…! 🙂


Quick Sort Algorithm


Hello people…! In this post I will talk about the Quick Sort algorithm. This is an in-place sorting algorithm, which means it works on the given array itself and does not need an auxiliary array, which means less overhead. But it is an unstable sorting algorithm, which means the relative order of equal elements may not be maintained. So, when unstable sorting is fine, Quick Sort is a very good and robust solution. So, let’s get started with Quick Sort…!

Quick Sort is a Divide-and-Conquer algorithm. What this means is that the given problem is divided into many sub-problems, and we solve the sub-problems to get the solution of the actual problem. Here, we will partition the array into two, work our algorithm on the two parts, and then recursively repeat the same for the partitioned sub-arrays. The process is –

  1. Choose any element in the array. Make it what is called the pivot.
  2. Swap the pivot to the last element.
  3. Make a two-way scan of the remaining elements.
  4. Swap elements such that all the elements less than pivot are to the left-end and all the elements greater than pivot are to the right end.
  5. Elements equal to the pivot can go to the either side.
  6. Swap the pivot to the appropriate position.
  7. Apply the same technique to the left sub-array and the right sub-array.
  8. This is done in a recursive manner and the base case is when our array is of size 0 or 1. Arrays of size 0 or 1 are considered to be sorted. So we do nothing in that case.

For a better understanding of the algorithm, look at the sketch below –

[Image: Quick Sort partitioning]

The technique of Quick Sort is rather weird but it is straightforward. Go through the step-by-step process a few more times and try to code the Quick Sort algorithm. You need to be confident with recursion if you want to get this right. If you have succeeded, well done…! If not, don’t worry, look at my code below and try to figure out the small, silly things you have missed…!

/*
 * Quick Sort Algorithm
 * in C
 *
 * Authored by,
 * Vamsi Sangam
 */

#include <stdio.h>

void quick_sort(int arr[], int low, int high)
{
	if (low >= high) {
		// A single element is
		// considered to be sorted
		return;
	}

	int mid = low + ((high - low) / 2);

	// Put the pivot at the last
	int temp = arr[mid];
	arr[mid] = arr[high];
	arr[high] = temp;

	int pivot = arr[high];
	int i = low;
	int j = high - 1;

	// The partition
	while (j >= i) {
		if (arr[j] < pivot && arr[i] > pivot) {
			temp = arr[i];
			arr[i] = arr[j];
			arr[j] = temp;

			++i;
			--j;
		} else if (arr[j] < pivot && arr[i] <= pivot) {
			++i;
		} else if (arr[j] >= pivot && arr[i] > pivot) {
			--j;
		} else if (arr[j] >= pivot && arr[i] <= pivot) {
			++i;
			--j;
		}
	}

	// Bring the pivot back to its
	// appropriate place
	temp = arr[i];
	arr[i] = arr[high];
	arr[high] = temp;

	quick_sort(arr, low, i - 1);
	quick_sort(arr, i + 1, high);
}

int main()
{
	int size, i;

	printf("Enter the size of the array -\n");
	scanf("%d", &size);

	int arr[size];

	printf("Enter %d Integers -\n", size);

	for (i = 0; i < size; ++i) {
		scanf("%d", &arr[i]);
	}

	quick_sort(arr, 0, size - 1);

	printf("\nThe Sorted Array -\n");
	for (i = 0; i < size; ++i) {
		printf("%d ", arr[i]);
	}

	return 0;
}

Integer Overflow – I would like to point out one thing in the code above. In quick_sort, we wrote “int mid = low + ((high - low) / 2)”. One could have written “mid = (low + high) / 2” as well, but there is a reason why we specifically write it the former way. Suppose the value of low is INT_MAX - 5 (where INT_MAX is the maximum value of an int) and the value of high is INT_MAX. Upon adding them, we get a value greater than INT_MAX, greater than what a 32-bit integer can hold. What happens is that the bits go from “1111111…” to “0000000…”, that is, they sort of “reset” themselves (strictly speaking, signed overflow is undefined behaviour in C, but this wrap-around is what typically happens in practice). This is similar to the odometer in a car: when the odometer is maxed out, i.e., it has all 9’s, it resets itself to all 0’s. The same thing happens here too. There is a good depiction of this on Wikipedia. So, when we add such large values we can actually get a negative value, from which we would then compute the mean. And when you give a negative index to an array, it is most likely that your code will crash with a Segmentation Fault. Although this concern is more pressing when array indices are 16-bit integers, every programmer must keep it in the back of their mind.

The main concern about this post is not only to give you an idea about Quick Sort Algorithm, but we will discuss the complexity analysis and how to make quick sort faster. Quick Sort algorithm has the following complexities –

  • Worst Case Time Complexity – O(n²)
  • Average Case Time Complexity – O(n log₂ n)
  • Best Case Time Complexity – O(n log₂ n)
  • Space Complexity – O(n)

The O(n) space complexity of Quick Sort is due to the O(n) recursive calls, in the worst case, that consume the stack. Although the complexities are clear, the runtime of Quick Sort depends on a few factors –

  1. Choice of Pivot – The optimal choice of pivot in Quick Sort has troubled researchers for quite some time. Generally, the pivot should be selected so that it partitions the array into two halves, or at least into nearly equal halves. But we know nothing about the data when we partition, so we are forced to pick a pivot of our choice. Sometimes, due to the pivot, we may end up doing more partitions than we would expect. Consider the case of a sorted array: if we always choose the first element as the pivot, we end up doing O(n) partitions. Robert Sedgewick, a computer scientist, suggested taking the median of the first, middle and last elements as the pivot. When we implement this, along with tail-call elimination (recursing into the smaller partition and looping over the larger one), the space complexity drops to O(log n), which is much less than O(n).
  2. Method of Partition – What we have seen till now is a 2-way partition, i.e., we partition the array into two and recursively apply our algorithm. We can instead do a 3-way partition, which can reduce the best case time complexity to O(n). We will discuss this shortly.

Dual Pivot Partitioning – Until now, we did a 2-way partition, that is, we partitioned the array into 2 parts. Now, we will partition the array into 3 parts and recursively solve the partitioned sub-arrays. When we had to divide the array into two, we simply took an element and put all the elements less than it into one part and those greater than it into another. What should we do if we want to partition the array into 3 parts? If you think for a minute or two you can easily figure out the principle. We simply pick 2 elements, say A and B (with A ≤ B), and partition our array into 3 parts such that –

  • First part has all elements < A.
  • Second part has all elements > A and < B.
  • Third Part has elements > B.

We can put this technique in a step-by-step procedure –

  1. Choose the first element, array[low], and the last element, array[high], as the pivots. First, check if array[low] < array[high]; if not, swap them.
  2. Take two variables lt and ht, which store indices and act as pointers. Set (lt ← low + 1) and (ht ← high - 1).
  3. Start iterating i from (low + 1) towards (high - 1).
  4. If array[i] is less than array[low], swap array[i] and array[lt] and increment both i and lt.
  5. If array[i] is greater than array[high], swap array[i] and array[ht] and decrement ht only, since the element swapped in still needs examining.
  6. If array[i] lies between array[low] and array[high], simply continue to the next iteration.
  7. Break out of this procedure once i crosses ht. This is because when i > ht, all elements beyond ht are greater than array[high]; the partitioning guarantees they are in their proper place.
  8. Lastly, swap array[low] with array[lt - 1], and swap array[high] with array[ht + 1].
  9. Recursively apply the algorithm to the partitioned sub-arrays.
  10. The base case is when there is at most one element in the array, which needs no further sorting.

It is a simple procedure. You just need to be careful with the index variables, and the base case must be handled appropriately. Try to code this technique. I’m sure you’ll get it. If you didn’t, I have put my code below –

/* Quick Sort with Dual
 * Pivoting 3-way Partition
 * Algorithm
 *
 * Authored by,
 * Vamsi Sangam.
 */

#include <stdio.h>

void quickSortDualPivot(int arr[], int low, int high)
{
	if (low >= high) {
		return;
	}

	int lt = low + 1, ht = high - 1, i, temp;

	// Swapping 'low' and 'high' elements if necessary
	if (arr[low] > arr[high]) {
		temp = arr[low];
		arr[low] = arr[high];
		arr[high] = temp;
	}

	// If only two elements are there, by this
	// point they are already sorted
	if (low + 1 == high) {
		return;
	}

	for (i = low + 1; i <= high; ++i) {
		if (i > ht) {
			// Whatever value a[i] holds beyond
			// will have a value > a[high]
			break;
		}

		if (arr[i] < arr[low]) {
			// a[i] < pivot1 so, falls in
			// first sub-array
			temp = arr[lt];
			arr[lt] = arr[i];
			arr[i] = temp;
			++lt;
		} else if (arr[i] > arr[high]) {
			// a[i] > pivot2 so, falls in
			// third sub-array
			temp = arr[ht];
			arr[ht] = arr[i];
			arr[i] = temp;
			--ht;
			--i;
		}

		// The value of a[i] that does not go to either
		// of the cases falls in second sub-array
	}

	// Swapping the two-pivots to correct places
	++ht;
	temp = arr[ht];
	arr[ht] = arr[high];
	arr[high] = temp;
	--lt;
	temp = arr[lt];
	arr[lt] = arr[low];
	arr[low] = temp;

	// Recursively applying the algorithm to sub-arrays
	quickSortDualPivot(arr, low, lt - 1);
	quickSortDualPivot(arr, lt + 1, ht - 1);
	quickSortDualPivot(arr, ht + 1, high);
}

int main()
{
	int n, i;

	printf("Enter the size -\n");
	scanf("%d", &n);

	printf("Enter %d elements -\n", n);

	int arr[n];

	for (i = 0; i < n; ++i) {
		scanf("%d", &arr[i]);
	}

	quickSortDualPivot(arr, 0, n - 1);

	printf("The Sorted Array -\n");
	for (i = 0; i < n; ++i) {
		printf("%d ", arr[i]);
	}
	printf("\n");
}

I hope you get the fundamental idea behind the partition. The recursion now has about log₃ n levels instead of log₂ n, so the complexity is sometimes written as O(n log₃ n) rather than O(n log₂ n); for a given n, log₃ n is smaller than log₂ n, though only by a constant factor. This difference is not significant for small values of n, and some consider the 3-way partition to add more overhead than the standard procedure. You must choose the technique that best fits your demands.

Random Pivot – There is another optimisation technique that is often implemented in Quick Sort: making the choice of the pivot random. Instead of picking the first, last, or middle element as the pivot, we pick a random element in the given range and partition the array as usual. So, how do we pick a random element? By using the rand() function from the stdlib.h header, which generates a random number. But we want it to lie in our range, so we take the remainder modulo the size of the partition. As the indices start from ‘low’, we add ‘low’ to the result. Then we get a random index in the given range. So, to implement the random pivot technique, you only have to change one line in the standard Quick Sort code: the assignment of the variable ‘mid’. The modification would be –

	int mid = low + rand() % (high - low + 1);  // '+ 1' so 'high' itself can be picked too

Don’t forget to include the required header file. If you have any doubts, feel free to comment. Now that we have learnt so many Quick Sort variants, which one do you think is fastest? I did a comparison of the Quick Sort variants for a Weekly Discussion on the blog’s FB page, and this is what I got –

[Graph: comparison of Quick Sort variants]

Don’t panic on seeing Dijkstra’s 3-way Partitioning in the graph, we’ll be discussing it soon. As you can see, the Dual-Pivot variant worked faster than the rest. You might get a doubt: if dual-pivot partitioning gave O(n log₃ n), would an n-way partitioning Quick Sort give O(n logₙ n), which is O(n)? Can we really achieve a linear time Quick Sort? It is a nice question, but remember that when we did dual-pivot partitioning, we first put the two pivots in sorted order. Because there were only two of them, that hardly counted as sorting. If you wanted an n-pivot partitioning, you could do it, but you would need all the pivots in sorted order first, which is nothing but sorting the elements itself. You see? O(n) complexity sounded awesome a few moments back, but now it sounds like nonsense because we know it won’t happen!

I hope my post helped you in learning the Quick Sort algorithm and the principles behind it. If it did, let me know by commenting. Commenting is super easy if you are a Facebook, Twitter or a Google+ user…! So, don’t hesitate..! 😉 … Keep practising… Happy Coding…! 🙂
