Friday 26 September 2014

Heap Sort Part Two

In part one of heap sort we covered what a max heap is and how it works. That groundwork is needed for part two to make sense. In this part I will describe how a max heap can be utilised to produce a sorting algorithm that works in a similar way to selection sort but uses an efficient ADT, the max heap.

Firstly, why do I say that max heap sort works in a similar way to selection sort? Selection sort works by finding the index of the maximum value in the current problem domain. It swaps that element with the element at the domain's current end point, then reduces the problem domain by one and repeats the process with the new end point. Each pass takes O(n) time and there are n passes, so the running time is O(n^2). Max heap sort uses the max heap to get the largest value. Retrieving that value always takes O(1) because it is always at the front of the array; it is swapped with the last element of the array and then the problem domain is reduced by one. The last element of the current problem domain corresponds to the last leaf node of the max heap tree. When the above swap is done, that leaf node becomes the new root and breaks the "heap order". This order is restored by applying a method known as "heapify": essentially the root is percolated down until the heap order is restored for a max heap. Since heapify takes O(log n) time and is called n times, the total cost for heap sort is "n * (1 + log n)", which is O(n log n). That is a much better result than selection sort.
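For comparison, a minimal selection sort along the lines described above might look like this. It is a sketch for illustration only, not code from this series:

```csharp
// Selection sort sketch for comparison with heap sort (illustration only,
// not code from this series). Each pass scans the unsorted region for the
// maximum, O(n), and swaps it to the end of the current problem domain;
// n passes give the O(n^2) running time discussed above.
public static class SelectionSortDemo
{
    public static void SelectionSort(int[] array)
    {
        for (int end = array.Length - 1; end > 0; end--)
        {
            int max = 0; // Index of the largest value found in this pass.
            for (int i = 1; i <= end; i++)
                if (array[i] > array[max]) max = i;

            // Swap the maximum into the last cell of the current domain.
            int tmp = array[end];
            array[end] = array[max];
            array[max] = tmp;
        }
    }
}
```

Note how the expensive part is the inner scan for the maximum; the max heap replaces exactly that scan with an O(1) lookup plus an O(log n) repair.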

In part one I only showed code for getting the child and parent indices; I did not show the code for building the max heap. The reason is that the MaxHeap method itself is very small, and to explain it properly I wanted to discuss heapify at the same time. That discussion is essential to understanding what happens as max heap sort progresses, but it also prevented me from showing the code for creating a max heap, because it uses the heapify method. Without further ado, let us look first at the max heap code:

private void MaxHeap(ref int[] array)
{
  for (int i = (array.Length - 2) / 2; i >= 0; i--)
    MaxHeapify(ref array, i, array.Length);
}
Fig. 1.
In the above code we first find the index of the last root in the array sequence. This is given by (array.Length - 2) / 2. Since the array is zero-indexed, the parent of the node at index i is (i - 1) / 2; the last node sits at index array.Length - 1, so its parent, the last root, is at ((array.Length - 1) - 1) / 2 = (array.Length - 2) / 2. "Last root" here means the last parent node in a level order traversal, i.e. the last node that is not a leaf. For example, in Fig. 1 the node labeled 20 is the last root. The level order array for that tree is: 55, 50, 20, 30, 25, 17, 18. Using the above equation we can see that it accurately picks the node labeled 20 as the last root: the length is seven, thus (7 - 2) / 2 = 5 / 2 = 2, because C# integer division discards the fractional part. Index two does indeed contain 20, which is the last root of the max heap tree in Fig. 1. However, please be aware that when we run MaxHeap we are actually building the heap from an unsorted array. This means the array is neither sorted nor in heap order at that point. Remember from part one that the first step is to build the heap, and that is what MaxHeap is doing, in-place. You could build the heap in other ways, for example by inserting the elements one at a time and bubbling each up as described in part one, but heapifying from the last root down to the first works directly on the array, which feels more intuitive when you work through it on paper and draw a tree. For each root, the heapify method is called to ensure that each processed subtree has its largest value at its root.
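The last-root arithmetic can be checked in isolation. LastRoot is a hypothetical helper name used only for this demonstration, not part of the series' code:

```csharp
// Hypothetical helper that isolates the last-root calculation used by MaxHeap.
public static class LastRootDemo
{
    // The parent of the node at index i is (i - 1) / 2. The last node sits at
    // index length - 1, so its parent - the last root - is at (length - 2) / 2.
    public static int LastRoot(int length)
    {
        return (length - 2) / 2; // Integer division discards the fraction.
    }
}
```

For the seven-element heap in Fig. 1 this returns (7 - 2) / 2 = 2, the index of the node labeled 20.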

Let's look at the code for MaxHeapify:

private void MaxHeapify(ref int[] array, int root, int length)
{
  int left = LeftChild(root);
  int right = RightChild(root);
  int j = root;
  if (left < length) j = array[left] > array[root] ? left : root;
  if (right < length) j = array[right] > array[j] ? right : j;
  if (root != j)
  {
    Swap(ref array, root, j);
    MaxHeapify(ref array, j, length);
  }
}
Let's look at how this works, line by line.
First we retrieve the left and right child indices of the root. Next we create a new variable, j, and assign it the index of the root. What follows is a transitive relation: if the left child is bigger than the root, and the right child is bigger than the left child, then the right child is bigger than the root. Formally, if A > B > C then by transitivity A > C. The result of the two "if" statements is that j will either contain the index of a new root whose value is bigger than the current root, or the same index as the current root. The conditions left < length and right < length should be obvious, but I will explain: if a node has no left child then the left child index returned will fall outside the current heap boundary (the length parameter), and similarly for the right child. An interesting note to think about: in a complete binary tree, if a node does not have a left child it most certainly will not have a right child.

Once the new root index is found, the code checks whether it is the same as or different from the current root. If it is different, we apply a swap, because the new root will certainly have a larger value than the starting root. We then recurse to ensure the affected subtree is traversed and that we end up with the maximum value at the root of this subtree. When the recursion completes, control returns to the MaxHeap method, which moves on to the next root, and so on until all roots are heapified and the tree is in heap order.
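The same percolate-down can also be written without recursion. This iterative variant is an alternative sketch of my own, not code from the series, but it behaves identically to MaxHeapify:

```csharp
// Iterative percolate-down, equivalent in behaviour to the recursive
// MaxHeapify shown above (an alternative sketch, not code from the series).
public static class HeapifyDemo
{
    public static void MaxHeapify(int[] array, int root, int length)
    {
        while (true)
        {
            int left = root * 2 + 1;   // Left child index.
            int right = root * 2 + 2;  // Right child index.
            int largest = root;

            if (left < length && array[left] > array[largest]) largest = left;
            if (right < length && array[right] > array[largest]) largest = right;
            if (largest == root) return; // Heap order restored.

            int tmp = array[root];       // Swap the root with the larger child
            array[root] = array[largest];
            array[largest] = tmp;
            root = largest;              // and percolate down from there.
        }
    }
}
```

Some prefer this form because it avoids the recursive call overhead; the logic is otherwise the same.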

Using The Max Heap to perform Sorting.
We have now thoroughly covered the max heap in terms of building it, but we have not fully covered what happens when we pop off the root. In fact I saved that so I could write it under this heading, because that action is the reason max heap sort works! Let's see it graphically, as shown in Fig. 2.
Fig. 2.

As seen in Fig. 2, we start with the initial tree, which is in max heap order. We remove the root, which we know will be the largest value, 55 in this tree, and we replace it with the last leaf node, the node with value 18. We then percolate 18 down until heap order is restored. Now 50 is the root and the largest value, so we remove it and replace it with the last leaf, which is now the node with value 17, and percolate 17 down until max heap order is restored. This continues until all the values are popped and we are left with the resulting array: 17, 18, 20, 25, 30, 50, 55. This is the array sorted in ascending order! If you have not guessed already, the process of percolating down is known as heapify, because it restores the heap order.

Let's look at what is happening from the array's perspective.
Our initial array in max heap order: 55, 50, 20, 30, 25, 17, 18.
I will not label the steps; follow them alongside the corresponding steps in the tree in Fig. 2.
18, 50, 20, 30, 25, 17, 55.
50, 18, 20, 30, 25, 17, 55.
50, 30, 20, 18, 25, 17, 55.
17, 30, 20, 18, 25, 50, 55.
30, 17, 20, 18, 25, 50, 55.
30, 25, 20, 18, 17, 50, 55.
17, 25, 20, 18, 30, 50, 55.
25, 17, 20, 18, 30, 50, 55.
25, 18, 20, 17, 30, 50, 55.
17, 18, 20, 25, 30, 50, 55.
20, 18, 17, 25, 30, 50, 55.
17, 18, 20, 25, 30, 50, 55.
18, 17, 20, 25, 30, 50, 55.
17, 18, 20, 25, 30, 50, 55. Done.
As a reminder, the tree in max heap sort is converted to the array by a level-order traversal. So to follow the arrays above against the tree, convert each tree to its respective array. The elements at the end of each array that no longer appear in the tree are the sorted elements. The arrays above show how max heap sort is easily implemented in-place. The most interesting thing to notice is that as we remove the root, what we are actually doing in the array is swapping it with the last element of the current problem domain. Our problem domain starts with seven elements; after the first swap and heapify it shrinks to six, then five, and so on. Each time we swap the first element with the last element of the current domain, which directly translates to removing the root and replacing it with the last leaf node.
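That single "pop" step can be seen in isolation. PopMax is a hypothetical helper name used only for this demonstration:

```csharp
// One "pop" step in isolation: swap the root (the maximum) with the last
// element of the current problem domain. PopMax is a hypothetical helper
// name for illustration; the full sort follows below.
public static class PopStepDemo
{
    public static void PopMax(int[] array, int tail)
    {
        int tmp = array[0];
        array[0] = array[tail]; // The last leaf becomes the new root...
        array[tail] = tmp;      // ...and the maximum lands in its final cell.
    }
}
```

Applied to 55, 50, 20, 30, 25, 17, 18 with tail index 6, this produces exactly the first array in the walkthrough above: 18, 50, 20, 30, 25, 17, 55.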

I will now show the code that does max heap sorting:

public void MaxHeapSort(ref int[] array)
{
  MaxHeap(ref array);
  MaxHeapSort(ref array, array.Length-1);
}

private void MaxHeapSort(ref int[] array, int tail)
{
  while (tail > 0)
  {
    Swap(ref array, 0, tail--); //Pop max value.
    MaxHeapify(ref array, 0, tail + 1); //Sift the new root down to restore heap order.
  }
}
The first MaxHeapSort method, with one parameter, takes the array and calls MaxHeap to build the max heap. It then calls the overloaded MaxHeapSort method with two parameters, passing it the reference to the array together with array.Length - 1, the index of the last element in the zero-indexed array.

In the overloaded MaxHeapSort method the first parameter is the reference to the array and the second parameter is the last cell of the current problem domain. This domain is represented by the "tail" variable, which is decremented as the problem domain shrinks. The while loop runs as long as more than one cell remains to pop; once only one remains, the array is sorted. Inside the loop, element zero is swapped with the element at tail. This is the O(1) step I mentioned, because the zeroth index always holds the largest value in a max heap. After the swap, the heap property is broken at the root, so we call MaxHeapify on index zero to percolate the new root down until heap order is restored. Note that we do not need to rebuild the whole heap here: only the root is out of place, so a single sift-down, costing O(log n), is enough. The while loop runs n - 1 times, reducing the problem domain each time, until we have an array sorted in ascending order.
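To tie both parts together, here is a self-contained sketch with the helper methods from both posts inlined into one class. The names follow the article's code; making the methods static and dropping "ref" (arrays are reference types in C#, so the keyword is not needed here) are my own adjustments:

```csharp
// Self-contained sketch combining the methods from both parts into one class.
// Static methods without "ref" are my own adjustment; the logic follows the
// article's code, with a single sift-down after each pop.
public static class HeapSortDemo
{
    private static void Swap(int[] a, int i, int j)
    {
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }

    private static void MaxHeapify(int[] a, int root, int length)
    {
        int left = root * 2 + 1;
        int right = root * 2 + 2;
        int j = root;
        if (left < length && a[left] > a[j]) j = left;
        if (right < length && a[right] > a[j]) j = right;
        if (j != root)
        {
            Swap(a, root, j);
            MaxHeapify(a, j, length); // Continue percolating down.
        }
    }

    public static void MaxHeapSort(int[] a)
    {
        // Build the max heap from the last root down to the first.
        for (int i = (a.Length - 2) / 2; i >= 0; i--)
            MaxHeapify(a, i, a.Length);

        // Repeatedly pop the max to the tail and shrink the problem domain.
        for (int tail = a.Length - 1; tail > 0; )
        {
            Swap(a, 0, tail--);         // Pop max value.
            MaxHeapify(a, 0, tail + 1); // Sift the new root down.
        }
    }
}
```

Calling HeapSortDemo.MaxHeapSort on the example array 30, 50, 17, 55, 25, 20, 18 yields 17, 18, 20, 25, 30, 50, 55, matching the walkthrough.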

I left max heap sort for last in this part one of "Sort and search algorithms using C#" because it needs the longest explanation and because it combines a number of concepts. However, like mergesort, this algorithm is always O(n log n). Heap sort is a good choice when you want an in-place algorithm and want the ability to get the maximum value in O(1) time.

This concludes the sorting algorithms, but there is one more topic to cover: searching. I will cover only one searching algorithm, binary search.

Next: Binary Search.
Previous: Heap Sort part one.
Landing page: Part One

Tuesday 16 September 2014

Heapsort

NOTE: Because the explanation of heap sort involves introducing several concepts, it will be broken into two parts. This is the first part, which explains the concepts one needs to understand in order to appreciate how heap sort works.

Heap Sort Part One.
The heap sort algorithm is based on a tree structure called the heap. The algorithm comes in two flavours, "max heap sort" and "min heap sort". They work exactly the same way, except that each does the opposite of the other depending on whether it is "max" or "min". The algorithm is also an in-place algorithm, because it requires little to no extra memory to perform its sorting. It is one of the more interesting algorithms because it uses a tree structure for its sorting, which reduces the time of its "heapify" method to O(log n); overall it is O(n log n) because heapify is called n times. Even more remarkable is how fast it can get the maximum value (or minimum value) in a max or min heap respectively. The speed is in fact O(1), and as I explain how the structure works this will become clear. At the most basic level, max heap sort works by first inserting every element to build a max heap, then repeatedly deleting the maximum and placing it at the end, filling the output in reverse order. Note that getting the maximum takes only O(1) time. If you have seen the previous algorithms I posted, you will notice that this looks similar to selection sort, but it uses an efficient ADT, the tree structure.


How does the heap work
First, what is a complete binary tree.
To understand how heap sort works we must first understand how a heap works, and to understand that we must first understand what a "complete binary tree" is. The heap relies upon the rules of a complete binary tree in order to function.
A complete binary tree is one that is completely filled at every level bar the bottom level, which is filled from left to right. In a complete binary tree of height h, each leaf is at depth h or h - 1. Since the bottom level fills from left to right, this property also means no leaf at depth h - 1 may sit to the left of a leaf at depth h.

Let's take a look at Fig. 1, which shows a correct and an incorrect complete binary tree.

Fig. 1.
In Fig. 1, the reason the bottom tree is not a complete binary tree is as follows. At the bottom there are two leaf nodes on the left, labeled "8" and "9", and moving along the bottom we see another leaf node labeled "10". However, while there is yet another, fourth, leaf node at the bottom, labeled "11/12", it does not first fill the parent node (5) of the previous node; i.e. the placement of the fourth node should entirely fill the left side first, as in the top tree. This is why I labeled the last node in the bottom tree "11/12": this node should be 11, not 12, compared with the top tree. Additionally, the top tree is labeled in the traversal order with which an array representing this tree is structured. This type of traversal is known as 'level order traversal' and is a breadth-first traversal method.

For the heap to be a max heap the above completeness must be maintained, but there is another order that must be maintained, and it is what separates a max heap from a plain binary tree. This new order is hinted at by the name "max heap". Each node is added at the end of the tree. That node is then "bubbled up" while it is larger than its parent; the bubbling stops once the node is smaller than (or equal to) its parent, at which point it is still larger than any of its children. Once that condition is achieved, the tree is in "heap order".

Let's start by inserting the number 50 into the tree and see what happens:
  • It is the only node.
  • As the only node all branches are full therefore it is a complete tree.
  • The tree is in heap order as it is the largest parent node.
Now let's add the number 55 and see what happens:
  • The node is added as the last node which is the left most bottom leaf.
  • It is bigger than its parent(50) so it swaps position with its parent.
  • Node with data 55 is now the parent and node with data 50 is its left child.
  • The tree is a complete tree.
  • The tree is in heap order for a max heap.
The steps outlined above continue for each node added, maintaining the heap order and constructing a complete binary tree.
Let's see an example of this at work. We have the following unsorted array: 30, 50, 17, 55, 25, 20, 18.
After placing it in Heap order we have an array that looks like: 55, 50, 20, 30, 25, 17, 18.
The question on the reader's mind might be: "OK, but it's not sorted, and how did this new order appear?" First let's look at a property of the max heap visible in the new result. In array index zero (assuming zero-indexing here) we have the maximum value. Index zero will always hold the maximum value in a max heap, so we know exactly where to get it and no comparison needs to take place, hence O(1) time.
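The bubble-up insertion described in the steps above can be sketched as code. This InsertDemo class and its backing List&lt;int&gt; are illustrative assumptions of mine; the series itself builds its heap in place inside an array rather than by repeated insertion:

```csharp
using System.Collections.Generic;

// Bubble-up insertion sketch (an illustrative assumption; the series builds
// its heap in place with heapify rather than by repeated insertion).
public class InsertDemo
{
    private readonly List<int> heap = new List<int>();

    public void Insert(int value)
    {
        heap.Add(value); // The new node becomes the last leaf.
        int child = heap.Count - 1;
        while (child > 0)
        {
            int parent = (child - 1) / 2;           // Parent in a zero-indexed array.
            if (heap[child] <= heap[parent]) break; // Heap order holds; stop bubbling.
            int tmp = heap[child];                  // Otherwise swap with the parent
            heap[child] = heap[parent];             // and continue bubbling up.
            heap[parent] = tmp;
            child = parent;
        }
    }

    public IReadOnlyList<int> Items { get { return heap; } }
}
```

Inserting 30, 50, 17, 55, 25, 20, 18 in this order yields exactly the heap array shown above: 55, 50, 20, 30, 25, 17, 18.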

Let's look at Fig. 2. and see what happens as we add each item to the heap from the initial array.
Fig. 2.




In Fig. 2 we start at the top left and observe the process taking place up to the top right. Then we move down to the bottom left and follow the continued process to the bottom right. The final tree at the bottom right, when converted to an array in the order labeled in the top tree of Fig. 1, looks like the second array shown above. Immediately we can see that the root is the maximum value.

How to tell, looking at the array, where the parent and its children are.
Looking at the zero-indexed array that represents the max heap, we must be able to tell where the parent is for any given node and, conversely, where the children are. The solution is typical for any binary tree stored in an array, and rather trivial. For a node at index n, its children are given by (n*2)+1 for the left child and (n*2)+2 for the right child. To find the parent we simply reverse the calculation. For a child at index n we first determine whether it is a left or a right child: careful observation of how the tree maps into the array shows that a right child always sits at an even index in a zero-indexed array. There are two ways to determine the parent. The first is to take n mod 2: if n mod 2 = 0 it is a right child, otherwise it is a left child. We then apply the appropriate formula: for a left child the parent is n/2, and for a right child it is n/2 - 1, using integer division.
The second way is to convert to a one-indexed view, which makes the left child always even and the right child always odd. We then use the floor function to get the correct index by simply dividing the child's index by 2. A floor function takes the lower integer of a fraction; for example the values 7.0, 7.1, 7.2, 7.3 etc. will always return 7. Thus, in a zero-indexed array, using this second method the parent of any node n is floor((n+1)/2) - 1. The second method is simpler but relies on a floor function (or integer division) being available in the language. Let's see an example. In our heap array above, the node with the value 18 has as its parent the node with the value 20; 18 is at index 6 and 20 is at index 2. Using the second method to find 18's parent we have: (6+1)/2 - 1 = 7/2 - 1 = 3.5 - 1. Applying the floor function, 3.5 becomes 3, hence 3 - 1 = 2. Index 2 certainly contains the node with the value 20, which is our parent. Let's do the same with the node with the value 17, which is at index 5: (5+1)/2 - 1 = 6/2 - 1 = 3 - 1 = 2. Once again we get index two, which points to the node with the value 20 and hence the correct parent of 17.

This is the time to introduce some code that deals with finding the parent and the children of a parent.

private int GetRoot(int child)
{
  return child % 2 == 0 ? child / 2 - 1 /*Right child*/:
                          child / 2; /*Left child*/
}

private int LeftChild(int root)
{
  return ((root*2)+1); //left child index
}

private int RightChild(int root)
{
  return ((root * 2) + 2); //right child index
}
The method for getting the parent from a given child is called "GetRoot" because at each step we will be dealing with a subtree, and for that subtree the parent is the root. This method uses the mod version, even though C# provides the Math.Floor method. It returns the index where the parent of the child is. The LeftChild and RightChild methods take the index of a node and return the index of its left or right child respectively. Once again the parameter is called "root" because we will be dealing with subtrees, for which the parent is the root of everything that follows after it. This concludes the basics of a heap and how to find the parent from the child and vice versa.
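As a quick sanity check of these helpers against the worked example, here they are again, made static (my own adjustment, since the originals are private instance methods) so they can be exercised standalone:

```csharp
// The parent/child index helpers from above, made static (my adjustment)
// so they can be exercised standalone.
public static class HeapIndexDemo
{
    public static int GetRoot(int child)
    {
        return child % 2 == 0 ? child / 2 - 1  /*Right child*/
                              : child / 2;     /*Left child*/
    }

    public static int LeftChild(int root) { return root * 2 + 1; }
    public static int RightChild(int root) { return root * 2 + 2; }
}
```

GetRoot(6) and GetRoot(5) both return 2, confirming that 18 (index 6) and 17 (index 5) share the parent 20 (index 2) in our heap array.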
Next up is how we can use the Heap to make Heap Sort work.

Next: Heap Sort part two.
Previous: Merge Sort.
Landing page: Part One.