Friday 26 September 2014

Heap Sort Part Two

In part one of heap sort we covered what a max heap is and how it works; that background is what part two builds on. In this part I will describe how a max heap can be utilised to produce a sorting algorithm that works in a similar way to selection sort but uses an efficient ADT, the max heap.

Firstly, why do I say max heap sort works in a similar way to selection sort? Selection sort works by finding the index of the maximum value in the unsorted portion, swapping that element with the element at the current end point, reducing the problem domain by one, and repeating the process with the new end point. Each scan for the maximum takes O(n) time and there are n passes, hence the O(n^2) running time. Max heap sort instead uses the max heap to get the largest value. Retrieving that value is O(1), because it is always at the front of the array; it is swapped with the last element of the current problem domain and the domain is reduced by one. The last element of the current problem domain is the last leaf node of the max heap tree, so when the swap is done that leaf becomes the new root and breaks the "heap order". The order is restored by a method known as "heapify": essentially the new root is percolated down until the max heap order holds again. Since heapify takes O(log n) time and is called n times, while each pop itself is O(1), the total cost for heap sort is n * (1 + log n), which is O(n log n). That is a much better result than selection sort.

In part one I only showed code for getting the child and parent; I did not show the code for building the max heap. The reason is that the max heap sort routine itself is very short, and to aid its explanation I wanted to discuss heapify at the same time. That discussion is essential to understanding what happens as max heap sort progresses, but it also stopped me from showing the code for creating a max heap, because that code uses the heapify method. Without further ado, let us look first at the max heap code:

private void MaxHeap(ref int[] array)
{
  //Start at the last root (last parent) and heapify every subtree
  //down to the root of the whole tree at index zero.
  for (int i = (array.Length - 2) / 2; i >= 0; i--)
    MaxHeapify(ref array, i, array.Length);
}
Fig. 1.

In the above code we first find the index of the last root in the array sequence, given by (array.Length - 2) / 2. The parent of a node at index i is (i - 1) / 2 in a zero-indexed array; the last node sits at index array.Length - 1, so its parent, the last root, is at ((array.Length - 1) - 1) / 2, which simplifies to (array.Length - 2) / 2. The last root in this context means the last parent node in a level order traversal, as opposed to a leaf node. For example, in Fig. 1 the node labelled 20 is the last root. The level order array for that tree is: 55, 50, 20, 30, 25, 17, 18. The equation accurately picks the node labelled 20: the length is seven, thus (7 - 2) / 2 = 5 / 2 = 2, because C# integer division takes the floor of 2.5. Index two does indeed contain 20, the last root of the max heap tree in Fig. 1. Please be aware, however, that when we run MaxHeap we are actually building the heap from an unsorted array; at that point the array is neither sorted nor in heap order. Remember from part one that the first step is to build the heap, and that is what MaxHeap does in an in-place fashion. You can arrange this loop in various ways, but the roots must be processed bottom-up, from the last root back to index zero, because each heapify call assumes the subtrees beneath the current root are already in heap order; working through it on paper with a drawn tree makes that order feel intuitive. For each root, the heapify method is called to ensure that the root of each processed subtree is larger than everything below it.

Let's look at the code for MaxHeapify:

private void MaxHeapify(ref int[] array, int root, int length)
{
  int left = LeftChild(root);
  int right = RightChild(root);
  int j = root;
  //Pick the largest of root, left child and right child (bounds checked).
  if (left < length) j = array[left] > array[root] ? left : root;
  if (right < length) j = array[right] > array[j] ? right : j;
  if (root != j)
  {
    Swap(ref array, root, j);
    //The moved value may still break heap order further down.
    MaxHeapify(ref array, j, length);
  }
}
Let's look at how this works line by line.
First we retrieve the left and right child indices of the root. Next we create a new variable, j, and assign it the index of the root. What follows exploits a transitive relation: if the left child is bigger than the root, and the right child is bigger than the left child, then the right child is bigger than the root; formally, if A > B > C then by transitivity A > C. The result of the two "if" statements is that j will either hold the index of a child whose value is bigger than the current root's, or remain the root's own index. The conditionals left < length and right < length should be obvious, but I will explain: if a node has no left child, the computed left child index falls outside the bounds of the array, and similarly for the right child. As an interesting note to think about, in a complete binary tree, if a node has no left child it most certainly has no right child either.

Once the larger index is found, the code checks whether it is the same as the root or different. If it is different we apply a swap, because the new index certainly holds a larger value than the starting root, and then recurse to ensure every node of the subtree is traversed and the maximum value ends up at this subtree's root. When the recursive method completes, control returns to the MaxHeap method, which looks at the next root, and so on until all roots are heapified and the tree is in heap order.

Using the Max Heap to Perform Sorting.
We have now thoroughly covered building the max heap; however, we have not fully covered what happens when we pop off the root. In fact I saved that for this heading, because that action is the reason max heap sort works! Let's see it graphically, as shown in Fig. 2.
Fig. 2.

As seen in Fig. 2, we start with the initial tree, which is in max heap order. We remove the root, which we know holds the largest value, 55 in this tree, and replace it with the last leaf node, value 18. We then percolate 18 down until heap order is restored. Value 50 is now the root and the largest value, so we remove it and replace it with the new last leaf, the node with value 17, and percolate 17 down until max heap order is restored. This continues until all the values are popped and we are left with the resulting array: 17, 18, 20, 25, 30, 50, 55. That is the sorted array in ascending order! If you have not guessed already, the process of percolating down is exactly the heapify operation, because it restores the heap order.

Let's look at what is happening from the array's perspective.
Our initial array in max heap order: 55, 50, 20, 30, 25, 17, 18.
I will not label the steps; follow them against the corresponding steps in the tree in Fig. 2.
18, 50, 20, 30, 25, 17, 55.
50, 18, 20, 30, 25, 17, 55.
50, 30, 20, 18, 25, 17, 55.
17, 30, 20, 18, 25, 50, 55.
30, 17, 20, 18, 25, 50, 55.
30, 25, 20, 18, 17, 50, 55.
17, 25, 20, 18, 30, 50, 55.
25, 17, 20, 18, 30, 50, 55.
25, 18, 20, 17, 30, 50, 55.
17, 18, 20, 25, 30, 50, 55.
20, 18, 17, 25, 30, 50, 55.
17, 18, 20, 25, 30, 50, 55.
18, 17, 20, 25, 30, 50, 55.
17, 18, 20, 25, 30, 50, 55. Done.
As a reminder, the tree in max heap sort is converted to the array by level order traversal. So, to follow the arrays above against the tree, convert each tree to its respective array; the sorted elements at the tail of each array sit outside the problem domain and will be missing from the tree. The arrays above show how max heap sort is easily implemented in-place. The most interesting thing to notice is that removing the root actually means, in the array, swapping it with the last element of the current problem domain. Our problem domain starts with seven elements; after the first swap and heapify it shrinks to six, then five, and so on. Each time we swap the first element with the last element of the current domain, which translates directly to removing the root and replacing it with the last leaf node.

I will now show the code that does max heap sorting:

public void MaxHeapSort(ref int[] array)
{
  MaxHeap(ref array);
  MaxHeapSort(ref array, array.Length - 1);
}

private void MaxHeapSort(ref int[] array, int tail)
{
  while (tail > 0)
  {
    Swap(ref array, 0, tail--); //Pop max value.
    //Restore heap order over the shrunken problem domain.
    for (int i = (tail - 1) / 2; i >= 0; i--) MaxHeapify(ref array, i, tail + 1);
  }
}
The first MaxHeapSort method, with one parameter, takes the array and calls MaxHeap to build the max heap. It then calls the overloaded two-parameter MaxHeapSort, passing the array reference and the index of the last element, array.Length - 1, taking the zero-indexed array into account.

In the overloaded two-parameter MaxHeapSort, the first parameter is the array reference and the second is the index of the last cell of the current problem domain. The domain is represented by the "tail" variable and is decremented as it shrinks. The while loop runs while more than one element remains to pop, and stops otherwise, since at that point the array is sorted. Inside the loop, element zero is swapped with the element at tail. This is the O(1) step I mentioned, because in a max heap the zeroth index always holds the largest value. After the swap the heap property is lost, so we perform heapify again, starting from the last root of the reduced problem domain and working back to index zero until heap order is restored. The while loop runs n times, reducing the problem domain each time, until the array is sorted in ascending order.
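
To tie everything together, here is a minimal usage sketch. The containing class name, Sorter, is my own assumption, since the post shows the methods but never a surrounding class:

//Minimal usage sketch; "Sorter" is a hypothetical class holding the
//methods shown above.
int[] data = { 30, 50, 17, 55, 25, 20, 18 };
var sorter = new Sorter();
sorter.MaxHeapSort(ref data);
//data is now: 17, 18, 20, 25, 30, 50, 55

As a side note, after each swap only the root is out of place, so a single MaxHeapify(ref array, 0, tail + 1) call would also restore heap order; the loop over every root shown above does the same job with some extra, harmless work.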

I left max heap sort until last in part one of "Sort and search algorithms using C#" because it needs the longest explanation and because it combines a number of concepts. Like merge sort, this algorithm is always O(n log n). Heap sort is best used where you want an in-place algorithm along with the ability to read the maximum value in O(1) time.

This concludes the sorting algorithms, but there is one more topic to cover: searching. I will only cover one searching algorithm, binary search.

Next: Binary Search.
Previous: Heap Sort part one.
Landing page: Part One

Tuesday 16 September 2014

Heapsort

NOTE: Because the explanation of heap sort involves introducing several concepts, it will be broken into two parts. This first part explains the concepts one needs in order to appreciate how heap sort works.

Heap Sort Part One.
The heap sort algorithm is based on a tree structure called the heap. It comes in two flavours, "max heap sort" and "min heap sort"; they work exactly the same except that each does the opposite of the other, depending on whether it is "max" or "min". The algorithm is also an "in-place" algorithm, because it requires little to no extra memory to perform its sorting. It is the most interesting of the algorithms covered so far because it uses a tree structure for its sorting, which reduces the "heapify" step to O(log n); overall it is O(n log n) because heapify is called n times. Even more remarkable is how fast it can get the maximum value (or minimum value, in a min heap): the speed is in fact O(1), and as I explain how the structure works this will become clear. At the most basic level, max heap sort works by first inserting every element to build a max heap, then repeatedly deleting the maximum and placing it in reverse order. Note that getting the maximum takes only O(1) time. If you have seen the previous algorithms I posted, you will notice this looks similar to selection sort, but it uses an efficient ADT, the tree structure.


How does the heap work?
First, what is a complete binary tree?
To understand how heap sort works we must first understand how a heap works, and to understand that we must first understand what a "complete binary tree" is, since the heap relies upon its rules in order to function.
A complete binary tree is one that is completely filled at all levels bar the bottom level, which is filled from left to right. In a complete binary tree each leaf is at depth h or h-1, and since the bottom level fills from left to right, every leaf at depth h lies to the left of any leaf at depth h-1.

Let's take a look at Fig. 1, which shows a correct and an incorrect complete binary tree.

Fig. 1.
In Fig. 1, the bottom tree is not a complete binary tree. At the bottom we can see two leaf nodes on the left, labelled "8" and "9"; moving along the bottom we see another leaf labelled "10". There is a fourth leaf at the bottom, labelled "11/12", but it does not first fill the parent node (5) of the previous node; that is, the placement of the fourth node should entirely fill the left side first, as in the top tree. This is why I labelled the last node of the bottom tree "11/12": the node should be 11, not 12, when compared with the top tree. Additionally, the top tree is labelled in the traversal order with which an array representing the tree is laid out. This type of traversal is known as "level order traversal" and is a breadth first traversal method.

For the heap to be a max heap the completeness above must be maintained, but there is another order that must hold, one which separates a max heap from a plain binary tree and is hinted at by the name "max heap". Each node is added at the end of the tree and is then "bubbled up" while it is larger than its parent. It continues to be bubbled up until it is larger than any of its children and smaller than its parent. Once that condition is achieved for every node, the tree is in "heap order".

Let's start by inserting the number 50 into the tree and see what happens:
  • It is the only node.
  • As the only node all branches are full therefore it is a complete tree.
  • The tree is in heap order as it is the largest parent node.
Now let's add the number 55 and see what happens:
  • The node is added as the last node which is the left most bottom leaf.
  • It is bigger than its parent(50) so it swaps position with its parent.
  • Node with data 55 is now the parent and node with data 50 is its left child.
  • The tree is a complete tree.
  • The tree is in heap order for a max heap.
The steps outlined above continue for each node added, maintaining the heap order and constructing a complete binary tree; a code sketch of this bubble-up insertion follows below.
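As promised, here is a sketch of that bubble-up insertion for an array-backed max heap. It is my own illustration rather than code from this series; part two builds the heap in place with heapify instead.

//Bubble-up insertion sketch. "count" is the current number of
//elements; the array is assumed to have room for one more.
private void Insert(int[] heap, ref int count, int value)
{
  int i = count++;                  //Add the value as the last leaf.
  heap[i] = value;
  while (i > 0 && heap[i] > heap[(i - 1) / 2]) //Bigger than its parent?
  {
    int parent = (i - 1) / 2;
    int temp = heap[i];             //Swap the node with its parent.
    heap[i] = heap[parent];
    heap[parent] = temp;
    i = parent;                     //Continue bubbling up from there.
  }
}
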
Let's see an example of this at work. We have the following unsorted array: 30, 50, 17, 55, 25, 20, 18.
After placing it in Heap order we have an array that looks like: 55, 50, 20, 30, 25, 17, 18.
The question on the reader's mind might be: "OK, but it's not sorted, and how did this new order appear?" First let's look at the property of a max heap in the new result. Array index zero (assuming zero indexing here) holds the maximum value. It will always be the maximum value in a max heap, hence we know where to get it and no comparison need take place, therefore O(1) time.

Let's look at Fig. 2 and see what happens as we add each item to the heap from the initial array.
Fig. 2.

In Fig. 2 we start at the top left and observe the process taking place up to the top right, then move down to the bottom left and follow the continued process up to the bottom right. The final tree at the bottom right, when converted to an array in the order labelled in the top tree of Fig. 1, looks like the second array shown above. Immediately we can see that the root is the maximum value.

How to tell, looking at the array, where the parent and its children are.
Looking at the zero-indexed array that represents the max heap, we must be able to tell, for any given node, who the parent is and, conversely, who the children are. The solution is typical for any array-backed binary tree and rather trivial. For a node at index n, the children are given by the formula (n*2)+1 for the left child and (n*2)+2 for the right child. To find the parent, we simply reverse the calculation. For a child at index n we first determine whether it is a left or a right child: careful observation of how the tree maps into the array shows that a right child always sits at an even index in a zero-indexed array. There are two ways to determine the parent. The first is to take n mod 2: if n mod 2 = 0 the node is a right child, otherwise it is a left child. We then apply the appropriate formula, using integer division: n/2 for a left child, n/2 - 1 for a right child.
The second way is to convert to one-based indexing, which makes the left child always even and the right child always odd. We then get the correct index by simply dividing the child's index by 2 and taking the floor. A floor function takes the lower integer of a fraction; for example, the values 7.0, 7.1, 7.2, 7.3 and so on all return 7. Thus, in a zero-indexed array, the parent of any node n by the second method is (n+1)/2 - 1. The second method is simpler but relies on a floor function being available in the language. Let's see an example. In our heap array above, the node with value 18 has as parent the node with value 20; 18 is at index 6 and 20 is at index 2. Using the second method to find 18's parent: (6+1)/2 - 1 = 7/2 - 1 = 3.5 - 1. Applying the floor, 3.5 becomes 3, hence 3 - 1 = 2. Index 2 certainly holds the node with value 20, which is the parent. Let's do the same with the node with value 17, at index 5: (5+1)/2 - 1 = 6/2 - 1 = 3 - 1 = 2. Once again we get index two, which points to the node with value 20, the correct parent of 17.

This is the time to introduce some code that deals with finding the parent and the children of a parent.

private int GetRoot(int child)
{
  return child % 2 == 0 ? child / 2 - 1 /*Right child*/:
                          child / 2; /*Left child*/
}

private int LeftChild(int root)
{
  return ((root*2)+1); //left child index
}

private int RightChild(int root)
{
  return ((root * 2) + 2); //right child index
}
The method for getting the parent from a given child is called "GetRoot" because each time we will be dealing with a "subtree", and for that subtree the parent is the root. This method uses the mod version even though C# provides the Math.Floor method. It returns the index where the parent of the child is. The LeftChild and RightChild methods take the index of a node and return the index of its left or right child respectively. Once again the parameter is called "root" because we will be dealing with subtrees, for which the parent is the root of all that follows. This concludes the basics of a heap and how to find the parent from the child and vice versa.
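As an aside, the second, floor-based method maps to a one-liner, because C# integer division already truncates, which acts as the floor for non-negative values. This variant is my own illustration, not part of the series' code:

private int GetRootAlt(int child)
{
  //One formula covers both left and right children in a zero-indexed
  //array: floor((child + 1) / 2) - 1.
  return (child + 1) / 2 - 1;
}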
Next up is how we can use the Heap to make Heap Sort work.

Next: Heap Sort part two.
Previous: Merge Sort.
Landing page: Part One.  

Friday 22 August 2014

Merge Sort

Merge sort is another divide and conquer algorithm; as presented here it is also not in-place, and that is where its similarity with the quicksort algorithm ends. Merge sort's running time is always O(n log n) regardless of the order of the data, which gives it an edge over quicksort wherever the data order must not influence the time cost. The algorithm is useful when you need to know beforehand how long it will take to sort n items. Like quicksort, merge sort can also be modified to be in-place; however, the code I will provide does not cover that method.
Merge sort works by first dividing the array in two. The two sides are further divided, and the division continues until no further divisions are possible. The respective divided portions are then merged, in ascending order. To see how this process works, let's take an arbitrary set of data of size n. After several division steps we have a set of divided arrays on the left side of the first split and a similar set on the right. Since the process of splitting and merging is exactly the same everywhere, we only need to look at a small portion to see how it works. Let's take a set so far down the split, at a depth h, that only 4 elements remain in the array on the left side of the original divide.

Array: 2 9 3 1
Split further
2 9      3 1
Split further
2        9                3        1
Can't split any further as each split has only one element.
Now we call the "Merge" algorithm to merge the elements in the right order.
We create a new array sized for the total of the two pieces being merged, hence of size 2.
Merge:
2 < 9 = true. 2 goes to first element of empty array.
No elements are left in the first array, so the rest of the second array, in theory already in sorted form, is copied to the new array, giving the final new array:
Array: 2 9
We do the same with the right side.
3 < 1 = false.
Store 1 into the first element of the new array. What's left is 3, so copy that into the last cell of the new array.
New array  now looks like:
Array: 1 3
Now we have two sorted arrays [2, 9] and [1, 3] to merge.
Same process takes place.
new array = [,,,];
2 < 1 = false.
new array[1,,,]
2 < 3 = true
new array[1,2,,]
9 < 3 = false.
new array[1, 2, 3,]
What's left is 9, hence new array [1, 2, 3, 9].
In the next merge attempt, array [1, 2, 3, 9] will be merged with another array of the same or similar length using exactly the same method. Hence the whole process can be achieved through a recursive algorithm.

We can see that coding this requires three methods: a MergeSort method which wraps the merge sort algorithm; a RecursiveMergeSort which performs the recursive dividing of the array down to one-element pieces; and lastly a Merge method which takes the two arrays to be merged. In our code we use two arrays the same size as the original array, and use indices to "simulate" creating arrays at each merge. This reduces the arrays needed per division from three to two. Merge then applies the merging algorithm so that the final result of each merge is a sorted array of the given elements.

The Code:
MergeSort method:
public void MergeSort(ref int[] array)
{
  int[] buffer = new int[array.Length];
  RecursiveMergeSort(ref array, ref buffer, 0, array.Length-1);
}

The above code works simply by creating a new array, 'buffer', the same length as the original 'array'. The two arrays are then passed to the RecursiveMergeSort method, with the original array as the first parameter, along with the start index, which is zero here, and the zero-indexed end position of the array.

RecursiveMergeSort method:
private void RecursiveMergeSort(ref int[] sequence, ref int[] buffer, int start, int end)
{
  int n = (end - start) + 1;
  if (n < 2) return;
  int m = start + (n / 2);

  //Copy values from original array into buffer which will be the left side of the divide.
  for (int i = start; i < m; i++)
    buffer[i] = sequence[i];

  //Left side divide.
  RecursiveMergeSort(ref buffer, ref sequence, start, m - 1);
  //Right side divide.
  RecursiveMergeSort(ref sequence, ref buffer, m, end);
  //Merge.
  Merge(ref sequence, ref buffer, start, end, m);
}
The RecursiveMergeSort method is the divide portion of the algorithm, and it calls Merge at the end. Because the method is recursive, by the time Merge is called the respective left and right partitions have returned with sorted data ready to be merged. The method first computes n, the size of the current partition (which clearly shrinks the deeper the recursion divides), then checks whether only a one-element array is present and returns if so, since a single element is already considered sorted. For an array with more than one element, it finds the midpoint m at which to divide. The data between the start mark and the midpoint in the original array, sequence, is copied into the second array, buffer. The division then commences: the left recursive call passes the copied buffer as the first array and the original as the second, along with the start of the range and a point one less than the midpoint; the right call passes the original array first, followed by the buffer, with the midpoint m as the new start position and end unchanged. Merge is called once the recursive operations return.

Merge method:
private void Merge(ref int[] sequence, ref int[] Left, int start, int end, int m)
{
  int i = start;
  int j = m;
  int index = start;
  while (i < m && j <= end)
  {
    if (Left[i] > sequence[j])
      sequence[index++] = sequence[j++];
    else
      sequence[index++] = Left[i++];
  }

  //Order is important here. Since our left side is in the Left array,
  //it should hold the smallest remaining values, so we copy it first.
  //Left side might have something left over.
  while (i < m)
  {
    sequence[index++] = Left[i++];
  }
  //Right side might have something left over.
  while (j <= end)
  {
    sequence[index++] = sequence[j++];
  }
}

The Merge method is the most complicated of the three. Three indices are used: i and j are set to the start of the left portion and to the midpoint m (the start of the right portion) respectively, while index marks where merged elements are written back, starting at start. The first while loop performs the merge of the two portions, comparing the current element of Left against the current element of sequence to decide which goes into the conjoined array next. The two while loops that follow copy any remaining, theoretically already sorted, data into the conjoined array. The cycle repeats as each recursive call returns in the earlier RecursiveMergeSort method.

The running time for merge sort is O(n log n), and this is maintained regardless of the sorted condition of the array. In choosing between quicksort and merge sort, the first question to ask is: what will the sorted condition of the data be in the majority of cases? Bear in mind that sorted data in quicksort, assuming no improvement modifications to the algorithm, draws a quadratic time cost.
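
For readers who want the reasoning behind that bound, the cost follows the standard divide and conquer recurrence, T(n) = 2T(n/2) + cn with T(1) = c: every level of division halves the array, giving log2 n levels, and each level does a total of cn work merging, so T(n) = cn log2 n + cn, which is O(n log n) whatever the input order.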

Next: HeapSort.
Previous: Quicksort.
Landing page: Part One.

Wednesday 6 August 2014

Quicksort

This is one of the divide and conquer algorithms, and it is sometimes hard to work out on paper what is going on. A naive quicksort is not an in-place algorithm, because it requires additional memory to process the partitioned sub-arrays; however, quicksort can be made in-place through some tricks, and the code I present today will be an in-place algorithm.

The idea of quicksort is simple: you choose a pivot and arrange the elements so that each element bigger than the pivot is on its right and each smaller one on its left. At this point the pivot is, in theory, in its final sorted position. We then take all the elements on the left and perform the same operation on them, and similarly for all the elements on the right of the pivot. Eventually every partition shrinks to a single element and the array is sorted.

So how does the algorithm work in detail? The first step is to decide how the pivot is chosen. In this example I will simply use the last element of the current partition as the pivot; other techniques exist, such as median of three and random selection. The importance of the choice will become clear later, because it does make a difference to the worst case running time. Once the pivot is selected we swap it with the element at the beginning of the partition, and then we can start testing. We have a left index starting at the element just after the pivot, and a right index starting at the last element of the partition. We increment the left index, comparing each element, until it rests on an element bigger than the pivot. We then decrement the right index, comparing each element it indexes, until it reaches an element smaller than the pivot. At this point we swap the elements pointed to by the left and right indices. This step is repeated until the left index passes the right index, at which point we know where the pivot belongs: we swap the pivot with the element pointed to by the right index. Here is a small scale example with 5 elements.

Array "q"
initial. Pivot = 5 at index 4.
6 3 1 2 5:
5 3 1 2 6: Swapped pivot with first element. Pivot 5 is now at index 0.
left index l = 1, right index r = 4. (Zero index based).
Start:
q[l] > q[pivot]? no. l = 1 + 1 = 2.
q[l] > q[pivot]? no. l = 2 + 1 = 3.
q[l] > q[pivot]? no. l = 3 + 1 = 4
q[l] > q[pivot]? yes, stop.
q[r] > q[pivot]? yes. r = 4 -1 = 3.
left index is bigger than right index, swap q[r] with q[pivot]:
2 3 1 5 6.
Pivot 5 is now in its final position, so now we partition.
Left side                              5                          Right side
2 3 1                                                                    6
Pivot = 1 at index 2.                               One element so
                                                              already sorted.  
"q" = new array.
1 3 2: Swapped pivot with first element. Pivot 1 is now at index 0.
l = 1, r = 2.
Start:
q[l] > q[pivot]? yes, stop.
q[r] > q[pivot]? yes. r = 2 -1 = 1.
q[r] > q[pivot]? yes. r = 1 -1 = 0.
left index is bigger than right index, swap q[r] with q[pivot]:
1 3 2: no change because r = pivot index.
Pivot 1 is now in its final position, so now we partition:
1      Left                  Right           5                     6
         none                 3 2
"q" = new array 3 2.
Pivot = 2 at index 1.
3 2 Swap Pivot with first element. Pivot 2 is now at index 0.
2 3
l = 1, r = 1.
Start.
q[l] > q[pivot]? yes, stop.
q[r] > q[pivot]? yes. r = 1 -1 = 0.
left index is bigger than right index, swap q[r] with q[pivot]:
2 3: no change because r = pivot index.

Pivot 2 is now in its final position so now we partition:
1       2         Left side         Right side      5            6       
                       None                   3
                                          One element so
                                          already sorted.
Final:
1 2 3 5 6


As you can see from the above step-through of a short array, it is a lot of work on paper; thankfully the running time is more forgiving. Quicksort's average case is O(n log n) with a worst case of O(n^2). The worst case occurs when the array is already sorted or reverse sorted, but as mentioned earlier we can improve this by changing the way we choose and use the pivot. To develop this into code, careful analysis indicates we need 5 methods.
QuickSort: Passes an array and the initial conditions to QuickSortAlgorithm.
QuickSortAlgorithm: Performs the quicksort division and partitioning.
Partition: Performs the quicksort operation on the partition.
GetPivot: Chooses a pivot, moves it to the front of the partition, and returns its position.
Swap: Swaps two elements in an array.
Swap method has already been explained on the main page.

The code for the QuickSort method:
public void QuickSort(ref int[] array)
{
  QuickSortAlgorithm(ref array, 0, array.Length - 1);
}
The code for the QuickSortAlgorithm method:
public void QuickSortAlgorithm(ref int[] array, int left, int right)
{
  if (left < right)
  {

     //Partition by finding and returning final pivot position.
     int pivotIndex = Partition(ref array, left, right); 

    QuickSortAlgorithm(ref array, left, pivotIndex - 1);
    QuickSortAlgorithm(ref array, pivotIndex + 1, right);
  }
}

The code for the Partition method:
public int Partition(ref int[] array, int left, int right)
{
  //Pick the pivot and move it to the front. Note the left++: GetPivot
  //receives the original left, then the scan starts just after the pivot.
  int pivotIndex = GetPivot(ref array, left++, right);
  int newPivotIndex = pivotIndex;
  while (left <= right)
  {
    while (left <= right && array[left] <= array[pivotIndex]) left++; //move left pointer.
    while (left <= right && array[right] >= array[pivotIndex]) right--; //move right pointer.
    if(left < right) Swap(ref array, left, right); //Swap elements at left and right indices.
  }
  //Swap pivot from current position to final position.
  if (left > right) newPivotIndex = right;
  Swap(ref array, pivotIndex, newPivotIndex);
  return newPivotIndex;
}
The code for the GetPivot method:
private int GetPivot(ref int[] array, int left, int right)
{
  Swap(ref array, left, right);
  return left;
}

From the above code it is quite clear that a lot of work is done in the Partition method.
How the code works follows the walkthrough in the third paragraph above. In the QuickSortAlgorithm method the statement if (left < right) ensures that recursion only runs when there is more than one element. You will also notice why this method is in-place: the same array is used throughout, so no additional memory is needed to store the array. In C# this is achieved through passing by reference. This is a good all-round algorithm to use, and many programming languages' built-in sorts use some form of quicksort.
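
As an illustration of the pivot improvement mentioned above, here is one possible median-of-three variant of GetPivot. This is a sketch of my own, not code from this series; it keeps the existing convention of parking the pivot at the front of the partition:

private int GetPivotMedianOfThree(ref int[] array, int left, int right)
{
  //Order the first, middle and last elements so that the median of the
  //three ends up at the left index, where Partition expects the pivot.
  int mid = left + (right - left) / 2;
  if (array[mid] < array[left]) Swap(ref array, mid, left);
  if (array[right] < array[left]) Swap(ref array, right, left);
  if (array[right] < array[mid]) Swap(ref array, right, mid);
  Swap(ref array, left, mid); //The median now sits at the left index.
  return left;
}

Sorted and reverse-sorted inputs then no longer force the worst case, because the sampled median lands near the middle of the value range.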

Next: MergeSort.
Previous: Insertion Sort.
Landing page: Part One.

Tuesday 29 July 2014

New Zealand Space Program by 2015

"The world is going to change".

The United States, Russia, India and China are some of the countries that have one thing in common right now: they all have a space program. Even more countries have space programs, even Australia, but New Zealand does not. Why not New Zealand? That is about to change by 2015, and it is an exciting time, for me at least!
I present to you my latest news find on the internet, the New Zealand space program, which you can read about at:
http://www.nzherald.co.nz/business/news/article.cfm?c_id=3&objectid=11300831

I present to you a video, also found at the above link.

The company behind this is Rocket Lab and I like the name of their rocket "Electron".

"The world is going to change"


Saturday 26 July 2014

Insertion Sort

The insertion sort algorithm is interesting because it feels more natural. If you had a randomly ordered deck of cards and picked up one card, it would already be sorted. If you picked up another card, you would "insert" it into sorted order, either before or after the card you already have. You now have two cards and pick up a third. At this point you "traverse" the cards you have and insert it at the appropriate point, repeating the process until you have the desired number of cards in your hand in sorted order. This process is an iterative search and should not be confused with the a priori placement a human being is likely to apply; in short, a human will likely "just know where to place the card" upon looking at what is already in hand.

Insertion sort starts by indexing the first element, which it deems to be in correct order since the current index is the same as the first index. The index is then moved up one, and the element at the new index is compared backwards, iterating until a suitable position is found to insert the current element into. A good way to implement this is to store the current element in a temporary buffer, then, traversing backwards, shift each larger element one position right until you find where the stored element belongs, and copy it into that location, as sketched below.
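Here is a sketch of that buffer-based insert, my own illustration of the description above; the post's actual Insert code, shown further below, uses swaps instead.

//Buffer-based insert sketch: hold the current element, shift larger
//neighbours one slot right, then drop the held element into the gap.
private void InsertWithBuffer(ref int[] array, int i)
{
  int current = array[i];            //Temporary buffer.
  int j = i - 1;
  while (j >= 0 && array[j] > current)
  {
    array[j + 1] = array[j];         //Shift the larger element right.
    j--;
  }
  array[j + 1] = current;            //Place the held element.
}
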
To build this into code we need two methods. The first is the InsertionSort method, which traverses the array of n elements. At each incremental step a second method, Insert, is called, which inserts the element currently being traversed into the appropriate slot.

InsertionSort code:

public int InsertionSort(ref int[] array)
{
  int swapcount = 0;
  for (int i = 1; i < array.Length; i++)
  {
    swapcount += Insert(ref array, i);
  }
  return swapcount;
}
In the above code, inside the for loop, the Insert method is called and is passed the array and the index of the current element being looked at.

Insert code:
private int Insert(ref int[] array, int i)
{
  int swapcount = 0;
  for(int j = i - 1; j >= 0; j--)
  {
    if (array[j] >= array[i])
    {
      Swap(ref array, i, j);
      i = j;
      swapcount++;
    }
   }
  return swapcount;
}
In the above Insert code the array is traversed backwards from the index just before the current index 'i', and each item is compared with the element at the 'current' index 'i'. If the current element is smaller than the one before it, a swap is made and the current index 'i' becomes the previous index 'j'. The loop stops when no more elements can be traversed backwards. One thing I did not include is storing the current element in a temp variable: the swap-based solution is so trivial that no temp storage is needed, and the buffer was only mentioned above to simplify the explanation. In the code above, optionally, a swap count is kept and returned. It is also important to notice that, unlike bubble sort, where detecting that no swap has taken place guarantees the array is sorted, you cannot do the same here: if the current element is larger than the element before it, we cannot stop the entire insertion sort, since we cannot tell whether the elements after the current one are sorted.

This brings us to the running time, which is also quadratic at O(n^2) if none of the data is sorted. However, there is one exception: insertion sort can achieve O(n) time if all the data is already sorted. This is because the InsertionSort method will traverse the n elements while each Insert call returns in O(1) time, the loop terminating as soon as the first comparison reveals that no swap should take place in the already sorted portion of the array. We can detect this with a simple modification to our Insert algorithm:
private int Insert(ref int[] array, int i)
{
  int swapcount = 0;
  for(int j = i - 1; j >= 0; j--)
  {
    if (array[j] >= array[i])
    {
      Swap(ref array, i, j);
      i = j;
      swapcount++;
    }
    else break;
  }
  return swapcount;
}
The extra code is the else break statement, which detects, much like bubble sort, that no swap needs to take place and exits the for-loop. Unlike bubble sort it does not abort the entire algorithm, only the current pass's insert. Like bubble sort and selection sort, insertion sort is also an in-place algorithm, because it requires little to no additional memory to perform its operation.

The best use for the insertion sort algorithm is on data that is already mostly sorted, in order to reduce the quadratic time. For data that is completely unsorted, a better option among the algorithms covered so far would be selection sort, because although it also costs O(n^2), it spends considerably less on swapping data.

Next: QuickSort.
Previous: Selection Sort.
Landing page: Part One.

Saturday 19 July 2014

Selection Sort

Previously we looked at the bubble sort algorithm, which has O(n^2) quadratic time regardless of the order of the original collection. The problem with bubble sort is compounded by the number of swaps it has to do within each pass: every time the algorithm encounters an out-of-order pair, it does a swap, which is costly. To avoid that we have selection sort, which produces considerably fewer swaps.

Selection sort works by performing a number of passes, identical to bubble sort; the difference is how the inner loop works. Unlike bubble sort, selection sort does not perform a swap every time it encounters an out-of-order pair. Instead, the index of the largest element is sought, and once obtained, at the conclusion of the inner loop, the last element of the unsorted portion is swapped with that largest element. The problem domain is then reduced by one, since after each pass the largest remaining value is in its sorted position. The cycle repeats up to the final pass. The obvious advantage is less data swapping compared to bubble sort, but the worst case running time is still O(n^2). The best case is also O(n^2), since every element is scanned in every pass.

Here is the sequence of events for selection sort.
Initial: 6 1 3 2 7 5 4
6 1 3 2 4 5 7 //Largest element: 7 at index 4. Swap with end element, 4.
5 1 3 2 4 6 7 //Largest element: 6 at index 0. Swap with end element, 5.
4 1 3 2 5 6 7 //Largest element: 5 at index 0. Swap with end element, 4.
2 1 3 4 5 6 7 //Largest element: 4 at index 0. Swap with end element, 2. 
2 1 3 4 5 6 7 //Largest element: 3 at index 2. No change last element == end element.
1 2 3 4 5 6 7 //Largest element: 2 at index 0. Swap with end element, 1.  
Done.

Like bubble sort, the selection sort algorithm is also an in-place algorithm, meaning it uses little extra memory to complete its sorting operation. The source code for selection sort:
public int SelectionSort(ref int[] array)
{
  int swapcount = 0;
  for (int i = array.Length-1; i >= 0; i-- )
  {
    int index = 0;
    for (int j = 0; j <= i; j++)
    {
      if (array[j] >= array[index]) index = j;
    }
    if (index != i)
    {
      Swap(ref array, i, index);
      swapcount++;
    }
  }
  return swapcount;
}

How does the above code work:
The outer loop controls the passes, with index 'i' running from the last index down to zero. The inner loop runs from 0 to i using index 'j', comparing the item at the stored index with the item currently being looked at; if the item at the stored index is smaller, the index is updated. Once the inner loop terminates for the current pass, a swap is made if the index of the largest item is not the same as the index of the final element. Every time a swap is made a count is kept, and this is returned at the conclusion of the algorithm.

Next: Insertion Sort.
Previous: Bubble Sort.
Landing page: Part One.

Sunday 13 July 2014

Bubble sort algorithm

First section of part one, this covers bubble sort algorithm.
Largest values are "bubbled up" on each pass by swapping the current element with the next element whenever they are out of order. Each pass makes up to n-1 comparisons and there are up to n-1 passes, leading to a time complexity of O((n-1)^2), which is O(n^2).
We can detect whether the data was already sorted by checking if any swaps took place. If there was no swap, we stop the algorithm, and this leads to O(n) time on data that is already sorted.

Let's look at how this works by using an unsorted array 6 1 3 2 7 5 4.
In the first pass we scan the entire array, "bubbling up" the largest value to the end; indices start at zero and go up to six. Starting at index zero, we compare the element at the index with the element at index + 1. In our example we check whether 6 > 1; it is, so we swap and have a new array: 1 6 3 2 7 5 4.
We increment the index, still within the first pass, and now check if 6 > 3. Yes it is, so we swap and get 1 3 6 2 7 5 4. Carrying on we get:
1 3 2 6 7 5 4
1 3 2 6 7 5 4
1 3 2 6 5 7 4
1 3 2 6 5 4 7
First pass done, and the largest value in this pass, seven, is now at the end.

The second pass will start from zero again but only go up to five, because the previous pass left the largest value of the array in the 6th element, its final location.

For completeness here is what the second pass will look like:
initial: 1 3 2 6 5 4 7
1 3 2 6 5 4 7 //no change as 1 > 3 is not true.
1 2 3 6 5 4 7 //swapped 3 and 2 as 3 > 2.
1 2 3 6 5 4 7 //no change as 3 > 6 is not true.
1 2 3 5 6 4 7 //swapped 6 and 5 as 6 > 5.
1 2 3 5 4 6 7 //swapped 6 and 4 as 6 > 4.
Done second pass.

If we look at this carefully we can see that, in any given pass, if no swap takes place then the data must already be in its correct sorted order. If we detect that no swap has taken place, we can reduce the time complexity on already sorted data to a single pass, O(n). In the above example, at the completion of pass two, the algorithm only needs to run pass three before all the data is sorted and no more passes are required. Without this check, bubble sort is always an O(n^2) algorithm regardless of the condition of the collection.

The bubble sort algorithm code is shown below, complete with a check for swaps. It returns the number of passes that were done.
public int BubbleSort(ref int[] array)
{
  int swapped = 0;
  int i;
  for (i = array.Length-1; i >= 0; i--)
  {
    swapped = 0;
    for (int j = 0; j < i; j++)
    {
      if (array[j] > array[j + 1])
      {
        Swap(ref array, j, j + 1);
        swapped++;
      }
    }

    //No swap, data is already sorted, no need to continue.
    if (swapped == 0) break; 

  }
  return array.Length - i; //Returns the pass count.
}

How does the above code work:
There are two for-loops. The first determines the number of passes, running index 'i' from the last index downwards. The second loop, using index 'j', runs from zero up to 'i'; since 'i' reduces by one at each pass, 'j' always starts at zero and runs to the end of the current pass's range each time the second loop executes. The second loop counts how many swaps have occurred; if zero swaps occurred, the data must already be sorted and the algorithm terminates. In this second loop each pair of items at indices j and j+1 is compared and swapped as necessary. As an option, the method returns the number of passes that occurred during the run.

This concludes the bubble sort algorithm.
Next: Selection Sort.
Previous: Landing page Part One.

Friday 11 July 2014

Sort and search algorithms using C# Part one

Welcome to the introduction of searching and sorting algorithms in C#, part one. This will be a two part series. The first part will introduce and discuss six sorting algorithms and one searching algorithm. These algorithms will be built using a single data type, an int. In part two I will modify the algorithms from part one to allow more data types. Neither part will attempt to teach C#; that information is left for the reader to investigate. Each algorithm will be posted once every week.



Much of what a computer is expected to do is searching and sorting. Search engines use searching algorithms; looking for a word in a text editor uses searching. Sorting is performed quite extensively, and sometimes it happens without being noticed. A list of music you have may be sorted in various ways, such as by album name, artist name, or even song name. Looking for a file in a directory, you might be tempted to use one of its ordering functions. In many things we do, some of which we don't even notice, powerful algorithms are working hard to provide a great user experience and, of course, to do it in the fastest way possible.

So why have I written this blog post when there are so many books and websites that already feature all of the algorithms I will present? The answer is that I will make every attempt to produce understandable explanations and also provide complete example code. Many of the algorithms follow a paradigm that will be immediately obvious if you have already searched for how to write them. Furthermore, this is a technology blog, so it really wouldn't be complete without some sort of searching and sorting algorithms present.

The index below will show what is to be expected in the coming weeks as well as serve as a landing page to jump straight to the appropriate algorithm. These links will be updated as new algorithms are added to part one.

Algorithms covered:
  1. Bubble Sort.
  2. Selection Sort.
  3. Insertion Sort.
  4. Quick Sort.
  5. Merge Sort.
  6. Heap Sort.
  7. Binary Search.
I will begin right now with a method that we will use for all the sorting algorithms. It is a trivial method and ordinarily would not need to be mentioned; however, for completeness, with care for any learner programmers, and to get things started, I will show the code for the "Swap" method.

The Swap method takes a reference to an array of ints and two indices, 'i' and 'j', of the elements to swap. Inside the method, the element at index i is stored in a temporary variable and its slot in the array is overwritten by the element at index j. Finally, the element at index j is overwritten by the int we stored in the temporary variable.
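The code block itself did not survive in this copy of the post, so here is a minimal sketch matching that description:

//Swap the elements at indices i and j using a temporary variable.
private void Swap(ref int[] array, int i, int j)
{
  int temp = array[i];
  array[i] = array[j];
  array[j] = temp;
}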

Next: Bubble Sort.

Tuesday 24 June 2014

Regular expressions in C#

Recently I got an excuse to learn and utilise regular expressions in C#, and while I found that interesting at first from the sheer challenge of learning it, I am uncertain as to whether they should be leaned on so heavily within a development language.

To put it simply, it works and it is wonderful, but code readability seems to suffer immensely. Let's look at an example.

We have a string variable called 's' that contains "The quick brown fox jumped over the lazy dog.", and we want a word count of that.
If we think about it, we can immediately see that each word is separated by a space. So all we do is split the string using the space as a delimiter and count the array's size, right?

//s is the string.
var stringarray = s.Split(' ');
int count = stringarray.Length; //Arrays expose Length, not Count.


So in two lines of code we have our word count, and the code is reasonably readable. Given the names used for the string and the methods, it is hard to imagine any programmer worth their salt having difficulty reading that code.

But what if we used regular expressions? Before I use one in the next block of code, please be aware that there are, in all likelihood, better expressions for this; my implementation reflects a beginner's understanding of C# regular expressions.


//s is the string. Requires: using System.Text.RegularExpressions;
Regex r = new Regex(@"\w+");
int count = r.Matches(s).Count;

This regular expression works too. \w matches a word character, equivalent to [A-Za-z0-9_], and the + makes it match one or more occurrences, so \w+ matches whole words.
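
As a side note, the same count can be obtained without constructing a Regex instance, via the static overload; this small variant is mine, not from the original post:

//Equivalent count using the static Regex.Matches overload.
int count = Regex.Matches(s, @"\w+").Count;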

Both of these methods work, but which one is more readable, that is the question. It is clear that the first version, with Split, is more readable; however, regular expressions are more powerful. On MSDN and other sources you can find a list of regular expression constructs for many different operations, including backtracking, and they are quite useful for analysing text files. C# provides a multitude of ways to perform operations, and its extensive .NET capability is amazing; however, I would recommend that, rather than using regular expressions to impress, you only use them when absolutely necessary.

Wednesday 4 June 2014

I have experienced Oculus Rift

Today was an amazing day. It would be an amazing day for any Oculus Rift enthusiast, because today, for the very first time, I got to use an Oculus Rift. First impression: "WOW".
Before I discuss this I would like to express my thanks to the team at CoLab at AUT (Auckland University of Technology) for demoing the work they are doing with the Oculus Rift. The place is amazing, a techie's dream, because there is so much going on in there. If you have a strong interest in working on great new technology, that is the place to be.

My impressions of the Oculus Rift can only be compared to the very first VR technology I experienced, and I have to say this is leaps and bounds better. I did not feel disorientated at all. I turned my head and the scene moved as expected, and it works so well that it truly felt like I was in the room and had turned my head inside the room. I could not detect any perceivable delay between moving my head and the scene change the system provided in response. It was explained to me that the system can anticipate human movement, as it is more predictable than we realise. "Like cache hits and cache misses for humans," I joked, although that is probably the best way to describe it.

The only issue was the grainy image quality through the lens and some distortion; however, unless you are specifically looking to pull every little thing apart, it didn't seem to affect immersion. I am so impressed by it that I can't wait to have one of my own.

Once again, thanks to everyone at CoLab at AUT for this opportunity to experience the Oculus Rift; you have made me very happy!

Friday 23 May 2014

Wolfenstein: The New Order

Wolfenstein: The New Order was released a couple of days ago. I did my pre-order for one reason and one reason only: to get my hands on the Doom 4 beta when it comes out. Just in case you are wondering, yes, I do love Wolfenstein.

I have not had a chance to play through it, only fiddled with it for a couple of minutes, and I can say the first impressions are pretty good. I am playing on Uber difficulty, because why wouldn't you? I made it to a castle whose exact details I omit in an attempt to avoid spoilers. In the castle it feels like Wolfenstein 3D, the classic Wolfenstein 3D of course.

Since I cannot review this yet, I will suggest that you must see the free classic Wolfenstein 3D game made available online to play inside your web browser. Click the Wolfenstein 3D link to get to it.

A link on the free Wolfenstein 3D page to a YouTube video is also worth a look (embedded below), as it has John Carmack playing through the original DOS version while telling the story and history of the games. Of particular interest are the technical aspects of the game engine and design he touches on, as well as the attempts to improve it when the game was made open source.
I am hoping to discuss Wolfenstein: The New Order at some point, but there is a lot going on at the moment, so priorities, priorities. What I will talk about most, though, is the Doom 4 beta, beta conditions permitting of course.

Thursday 22 May 2014

Mantle Intel -Marriage or infatuation and a jealous Direct3D -the other woman?


A couple of weeks ago I wrote a blog post on AMD's Mantle in which I distilled information from a number of sources. Some time later I wrote another blog post on Mantle discussing nVidia's response to it. From those and other news articles at the time, one could conclude that AMD was in for a fight with its nemesis Intel, with nVidia, and with the API competitor, Microsoft's DirectX. Now it seems AMD is marrying Intel, and Direct3D might be the jealous other woman; or perhaps I am getting a little ahead of myself here.

An interesting set of measured statistics has been published by techradar comparing the performance of Mantle-enabled AMD GPUs running on AMD CPUs versus Intel CPUs. Techradar reports the interesting finding that the Intel and Mantle pairing resulted in better frame rates than the AMD and Mantle pairing. We have to be careful when we say this, because the pairing advantage appears in CPU-intensive workloads: where DirectX leans heavily on the CPU, Mantle helps AMD CPUs come closer to Intel CPUs running DirectX.

What does all of this mean, then? If you are a gamer, you are better off with an Intel/Mantle combination, as techradar infers, and I agree. The impression we could get, also touched on by techradar, is that AMD is not focusing on the CPU comparison; instead it aims to take out nVidia altogether, which would then cause problems for DirectX. If the Mantle and Intel combination succeeds in a hypothetical world with many game developers in support, it spells dire straits for DirectX, and let's not forget the TrueAudio part of the AMD package either. Given that all this is in the early stages and DirectX is coming out in flavour number 12, it may actually mean a hard time for Mantle rather than the other way around. For me, I am continuing to develop my 3D engine in Direct3D (a component of DirectX) and have no intention of changing that yet. It does mean I will most likely try my hand at Mantle and will likely provide that support, time permitting, because I like Mantle and I like Intel.

So is Direct3D the jealous other woman? If Mantle keeps growing strong then I guess she is, except we haven't seen her 12th incarnation yet. There is also the question of whether Mantle supports every technology of Direct3D, and that I cannot answer. What I can say is that none of the reviewers of games running on Mantle have yet mentioned any missing graphical functions compared to DirectX, so at least the features needed to maintain the quality of the original DirectX version are present. In all of this we have forgotten about nVidia; is this a sign of things to come? I have another dilemma now: I really like nVidia too! So now what do I do? There is only one thing to do: watch the market, see how things turn out, adjust and adapt. The beauty of it is that whatever goes is replaced with something even better, so there is really no loss to think about.

Tuesday 13 May 2014

What a week!

Hi everyone!

Wow, what a week it has been! Graduation day was amazing, and it did not rain despite the weather forecast for Friday May 9th. For me at least, it didn't feel like I had achieved my degree until I was present at the ceremony. Thank you, University of Auckland, for an excellent ceremony and the street procession.

It was also a reminder of how important hard work is and of its benefits, but most importantly it was an experience, all of which is embodied in the certificate. The photo shoot session with Timeless Images Photography was amazing, the photographer having a great sense of humor. I am sure that alone will probably result in a lot of my portraits showing off my pearly whites.

Haven't done much catching up with the news lately, but it is time to get back into it.

Monday 5 May 2014

Reminder about this blog

Hi All,

A reminder about this blog.

This is not a blog I intend to update on a regular schedule; rather, I will post on topics I feel I need to discuss. These will mainly be about programming, which will be my major pillar, but this is currently a new blog and there are some interesting things going on in the technology world. There's AMD's Mantle, and of course Oculus VR's Oculus Rift. I am following these technologies and will post more details as and when more information comes out about them, and of course as I learn more about them.

This week is an extremely busy one, with a lot of things going on. The most important of these is my graduation on May 9th, when I get capped!

If you have been following my blog I thank you and please remain tuned in!

-Michael Chand.

Tuesday 29 April 2014

Firefox version 29.0

I am enjoying the new Firefox version 29.0, but not for its performance. The fact of the matter is, I cannot comment on performance because I haven't done enough useful surfing with it to validate any such comments. I am enjoying the new version for its user interface.

Up to version 28.0 the interface felt too common, with some colour for the Firefox button on the top left. While the old version attempted to look new, the interface mechanics were the same, with the same confusion as always. When I say confusion, I am not suggesting that I was unable to do something; my meaning is that it wasn't very intuitive: you had to click on a few things to get to something, and sometimes scroll through a long list in a menu. It was a nice interface, let's not criticise that; it was certainly better than previous attempts, but nowhere near as good as it could be.

My biggest dislike in the older versions was the bookmarking system. It was OK in even older versions, but in the more recent ones it just got too irritating to use. If you bookmarked something, instead of just bookmarking it you got a dialog where you either clicked "Done" or fiddled some more. I guess it was an attempt to let you sort bookmarks out immediately rather than later. The problem with that: I wanted to sort them out later, and usually that is much, much later. Some bookmarks I made were only temporary, because an article was too long and I wanted to read it another time. Every time that dialog came up I would get frustrated and click Done, causing the list in the main bookmarks menu to grow extremely long before I got a chance to clean it out. At that point, instead of being useful it became a hindrance, and I had to scroll and scroll to find the things I wanted.

How does the Firefox 29.0 user interface solve these problems, or does it solve them at all? Well, actually and surprisingly, it does, and at least for now it solves them very well. Since the bookmark issue is important to me, let us journey into that first.

The first thing you will notice about the bookmark feature is its icon, shown as a star-separator-clipboard image (Fig. 1). If you click on that star to bookmark a page, it will immediately add the bookmark without displaying that annoying dialog. The star will then change colour to indicate you have that page bookmarked. If you click the star again, a dialog will come up that lets you decide where to put the bookmark, or even delete it. What I noticed is that if you go to a site that is already in your bookmarks, the star changes colour just as it does when adding a new bookmark. This is useful because it avoids duplicate bookmarks for when you have forgotten that a site is already bookmarked. But wait, there's more! You can actually click on the star and move the old bookmark to a new place through the dialog box that appears, or delete it. Of course, if you want to traverse your bookmarks list, it is a simple matter of clicking on the clipboard picture of the star/clipboard combo icon, and this is where you notice another anti-clutter feature. Bookmarks you added by clicking on the star, but have not yet sorted, do not end up cluttering the main bookmark list. They are added to a sub-list called "Recently Bookmarked", which is superb, and it shows that the Mozilla team really put a lot of thought into the bookmarking system.

Fig. 1. Firefox version 29.0's new user interface.
Fig. 1 shows a screen grab of the new interface. Notice the tabs: they look modern, and their usage, retained from the older tabbed versions, is very intuitive. What's different is the aesthetics; it is very pleasing to the eye. It looks like Pareto's 80/20 principle, the law of the vital few, has been applied quite effectively. For software this would mean that 20% of the features are used 80% of the time, and you can certainly see that the most used features are immediately displayed and placed in a manner that reduces the gulf of evaluation. You have the back arrow, which affords the back action, placed intuitively before the URL entry field. The home button is prominent and to the left; there is a search field to the right of the URL field; and a down arrow has been kept from previous versions to further reduce the gulf of execution -users know from previous versions that this is what shows the downloads. At the extreme right, in a position that supports Fitts's law, is the triple-bar menu button. Once clicked, it produces Fig. 2.
Fig. 2. Menu for Firefox version 29.0.

Gone is the coloured Firefox button that was on the top left of the earlier versions, replaced by a nice, clean, and very intuitive icon-based menu. Clicking on Customize allows you to add or remove what you want displayed, so you can keep more of what you use and less of what you do not.

While I cannot yet comment on the speed and performance of Firefox, I can say that Mozilla has outdone itself with the user interface in Firefox 29.0. There is clear evidence of good Human-Computer Interaction (HCI) principles being applied here. If you have never used Firefox, I'd strongly recommend giving this version a try just for the user interface alone; but of course, being Firefox, you get an amazing browser as an added bonus.

Monday 28 April 2014

Marvel at Marvel

Seen The Amazing Spider-Man 2. Seen Captain America: The Winter Soldier. I can't help but marvel at Marvel; it has been a good 2-3 years at the movies. Marvel has done an excellent job bringing its comic book heroes to life, and in creating a TV series, Agents of S.H.I.E.L.D., that fills in a lot of the gaps, as was the case with Captain America. From what I understand, Marvel is now in Phase 2, and from the trailers I have seen (yes, you HAVE to wait until all the credits have rolled in a Marvel movie!) it is going to be an AMAZING Marvel year. For comic lovers like myself, it is going to be an amazing year all around, with DC Comics also hard at work bringing its heroes to life.

I have to confess I am loving Marvel heroes a lot more, even though I initially grew up on Superman. Both universes (DC and Marvel) have inspired generations with imaginative superheroes everyone wants to be. As a kid I sometimes wished I was one of the Fantastic Four, or Superman, or even Iron Man; in fact, as a kid I was a big Iron Man fan. This was probably driven by the fact that my first comic was an Iron Man comic -and I still have it!

So excited at all these marvelous times!

Friday 25 April 2014

Why I love C#

C#, pronounced See-Sharp, a play on the musical note C-sharp, is a strongly typed, object-oriented programming language developed by Microsoft and integrated with the .NET Framework. Its chief language designer is Anders Hejlsberg, but more on C# and Anders Hejlsberg later.

My first introduction to programming was BASIC back in high school, and it was fascinating. You typed a bit of instruction and something happened on screen; those were the basics of it, no pun intended. Later in the same high school I was introduced to Turbo Pascal, and while I do not remember the version, I suspect it might have been version 5.5. Turbo Pascal was love at first sight for me because it was so simple yet seemed so powerful. I was able to do more things in a structured manner, and why not: it is a procedural language, but it did have some object-oriented features such as classes, and this was called Object Pascal. Later, when I studied Engineering at MIT (Manukau Institute of Technology, New Zealand), I bought a package of Borland's Turbo Pascal version 7.0, and it ran on the first computer I ever had, a 486 DX2 66 MHz with 4 MB of RAM. WOW!

I still have my copy of Turbo Pascal and C++ Builder Professional: Turbo Pascal purchased in 1995/1996 and C++ Builder Professional purchased in 2002/2003.
I did a lot of programming in Turbo Pascal because it was very easy to develop in and to understand, and I became very good at it. I was able to remember most of its standard library units such as CRT, DOS, GRAPH, etc., and it wasn't case sensitive, so a variable declared as var something : integer; was the same as var Something : Integer;. I wrote a lot of programs in it, including one that connected to bulletin boards and displayed the result in ANSI colour, lots of utilities, and a 3D engine that was a modification of some existing code where the rasterizer and so on was already done. There was even a level editor for the 3D engine, a file transfer program, and an attempt to pack data using the 400% text compression of the dial-up modem protocols V.42bis and MNP 5, with no flow control. Of course that caused data errors, because with no flow control there was no way to control the data. A friend and I used this Turbo Pascal program I developed to send a sample test MP3 file. When he received it, he telephoned me and said, "I received it but it sounds like turkeys!", and we have been joking about it ever since!

Regardless of what I developed, the point is that it was a lovely language to work with and program in. It wasn't as fast as C or C++, but it was lovely, and it even included the ability to directly inject assembly code, something I used when developing a sound application which played MIDI files on the OPL3 FM chip of the Sound Blaster as well as output to the MIDI port. It was an absolute joy to work with Turbo Pascal; however, when Delphi came out, based on Object Pascal but a fully object-oriented language, I unfortunately did not pursue developing in Pascal until many years later, when I developed a CD cataloguing system in Delphi. I reached a point in my engineering study where I was working with C code, and I was beginning to enjoy it. You see, after developing on the Sound Blaster using the assembly capability of Turbo Pascal, I was already beginning to enjoy assembly language. I was beginning to enjoy the power it gave me, with direct access to hardware registers opening an era of optimization in my programming life. C was fast and close to assembly language, and it was powerful because it felt like I could do absolutely anything with it. A lecturer at MIT back then best described C code as looking "like Pascal printed at the wrong baud rate". Later, and in no chronological order, I started using and loving C++. To this day I love programming in C++ because it provides the power of C with object orientation, and with the new C++11 standard it now has threading capability. My current 3D engine, being developed so I can build a 3D game for Windows Phone, uses C++, although it is the C++/CX flavour from Microsoft. Later of course came Java and C#, but I am coming to why I love C# soon.

Developing in this variety of languages helped me understand programming language concepts better, and it has helped me acquire the skill of picking up new development languages fairly quickly; for those languages I have not used in a while, such as Turbo Pascal, I am able to refresh my knowledge extremely fast. All of these have come and gone, but I still loved Turbo Pascal -until one day I used C#.

Why do I love C#? It combines the best parts of C++, Delphi, and Java into one beautiful language that is music to me, and C# is very well played. It is built for the CLI/CLR and makes it seamless and easy to use the .NET Framework. Its later iterations have included LINQ and functional programming, which have opened up some amazing capabilities in C#. It is so much easier to implement enumerators, function caching, and functors. The language is absolutely beautiful, but the main reason I love C# is Anders Hejlsberg. He is my favourite language designer: he was the chief architect of Turbo Pascal when he worked for Borland, and he is the chief language designer of C# at Microsoft. I love C# because it is influenced by the person I consider the best language designer ever. In fact, I do not know why the keyword for implicitly typed local variables in C# is "var", but I'll venture a guess that it is his trademark perhaps -Pascal and Delphi, after all, declare variables with var. If you know why, leave a comment. C++ uses the keyword auto for the same kind of type inference.
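To give a quick taste of what I mean, here is a minimal sketch of my own for this post (the names Evens, Memoize, and cachedSquare are invented for the example, not from any library) showing an enumerator, function caching, and a functor-style delegate, with the implicitly typed var thrown in:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
  // An enumerator written as an iterator block: yield return builds
  // the state machine for us, no hand-written IEnumerator class needed.
  static IEnumerable<int> Evens(int max)
  {
    for (int i = 0; i <= max; i += 2)
      yield return i;
  }

  // Function caching (memoization): wraps any Func so repeated calls
  // with the same argument hit a dictionary instead of recomputing.
  static Func<TIn, TOut> Memoize<TIn, TOut>(Func<TIn, TOut> f)
  {
    var cache = new Dictionary<TIn, TOut>();
    return x =>
    {
      TOut result;
      if (!cache.TryGetValue(x, out result))
      {
        result = f(x);
        cache[x] = result;
      }
      return result;
    };
  }

  static void Main()
  {
    // A functor in the C# sense: a delegate object you can pass around.
    Func<int, int> square = x => x * x;
    var cachedSquare = Memoize(square);

    // A LINQ query over the enumerator; note the implicitly typed 'var'.
    var bigSquares = Evens(10).Select(cachedSquare).Where(n => n > 20);

    foreach (var n in bigSquares)
      Console.WriteLine(n); // prints 36, 64, 100
  }
}

Notice how yield return removes all the ceremony of writing an enumerator class by hand, and how naturally the delegate, the cache, and the LINQ query compose. That kind of thing is exactly why the language feels so well played to me.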

C# is a beautiful language to learn and to develop in; it has amazing and powerful features, and it is a higher-level language. These alone are reasons to love it.

I will leave you with a YouTube video of Anders Hejlsberg that I found, and I strongly encourage viewing this one-hour video, titled Behind The Code - Anders Hejlsberg, C# Language Creator.



Wednesday 23 April 2014

Oculus Rift and Star Trek Voyager

Some people have all the fun, and wow, don't I want to be the person featured in the YouTube video below, wearing an Oculus Rift and walking through the bridge of Voyager!

Download this Demo

Looking at the demo, which was built with the Unreal Engine, and comparing it to the early attempts at virtual reality, which I have actually used, I'd have to borrow a variation of Scotty's words from the opening scenes of Star Trek: Generations: "Damn fine VR device, if you ask me", and that pretty much sums it up.
This demo spectacularly demonstrates the potential of the Oculus Rift as a mechanism to 'enter' another world. Neither the events of Star Trek: Voyager nor the ship itself are real, but to be able to walk around the ship as if they were is absolutely incredible. I think it is time to start saving for one, and the best part is that the Oculus Rift is aimed at providing virtual reality at a more affordable cost.

Just to tease you some more, here is another video I found on YouTube.


Well done to the team at Oculus VR.