Quick Sort in C: Handling Duplicates and Equal Elements

This blog explains how to modify the quick sort algorithm in C to handle cases with duplicate or equal elements using the three-way partition technique.

1. Introduction

Quick sort is one of the most popular and efficient sorting algorithms. It works by dividing the input array into two smaller subarrays based on a pivot element, and then recursively sorting the subarrays. The pivot element is chosen such that all the elements in the left subarray are smaller than or equal to the pivot, and all the elements in the right subarray are larger than or equal to the pivot.

However, quick sort has a problem when it comes to handling duplicates and equal elements. If the input array contains many duplicate or equal elements, the partition strategy may produce unbalanced subarrays, which can degrade the performance of the algorithm. In the worst case, quick sort can take O(n²) time to sort an array of n elements.

How can we modify quick sort to handle cases with duplicate or equal elements? One possible solution is to use a three-way partition technique, which divides the input array into three subarrays: one for elements smaller than the pivot, one for elements equal to the pivot, and one for elements larger than the pivot. This way, we can avoid sorting the equal elements repeatedly, and improve the balance of the subarrays.

In this blog, you will learn how to implement the three-way partition technique in C, and compare its advantages and disadvantages with the standard quick sort algorithm. You will also see some examples and analysis of how the three-way partition technique can improve the performance of quick sort in cases with duplicate or equal elements.

2. Quick Sort Algorithm

In this section, you will learn the basic idea, the partition strategy, and the time and space complexity of the quick sort algorithm. Quick sort is a recursive algorithm that uses the divide-and-conquer approach to sort an array of elements.

2.1. Basic Idea

The basic idea of quick sort is to choose a pivot element from the array, and partition the array into two subarrays based on the pivot. The left subarray contains all the elements that are smaller than or equal to the pivot, and the right subarray contains all the elements that are larger than or equal to the pivot. Then, the algorithm recursively sorts the left and right subarrays, until the array is sorted.

Here is pseudocode for the quick sort algorithm:

def quick_sort(array, low, high):
  # base case: array has one or zero elements
  if low >= high:
    return
  # choose a pivot element
  pivot = array[low]
  # partition the array into two subarrays
  i = low # index of the left subarray
  j = high # index of the right subarray
  while i < j:
    # find an element in the left subarray that is larger than the pivot
    while i < high and array[i] <= pivot:
      i = i + 1
    # find an element in the right subarray that is smaller than the pivot
    while j > low and array[j] >= pivot:
      j = j - 1
    # swap the two elements
    if i < j:
      swap(array, i, j)
  # swap the pivot with the last element of the left subarray
  swap(array, low, j)
  # recursively sort the left and right subarrays
  quick_sort(array, low, j - 1)
  quick_sort(array, j + 1, high)

To illustrate how the algorithm works, let’s take an example of an array of 10 elements:

array = [5, 8, 2, 6, 9, 3, 1, 4, 0, 7]

We choose the first element, 5, as the pivot. We use two pointers, i and j, to scan the array from the left and from the right. Whenever i stops at an element larger than the pivot and j stops at an element smaller than the pivot, we swap the two, so that the elements smaller than or equal to the pivot end up on the left and the elements larger than or equal to the pivot end up on the right. Here is what the array looks like after each swap:

# initial state
[5, 8, 2, 6, 9, 3, 1, 4, 0, 7]
 i                          j

# i stops at 8, j stops at 0: swap array[i] and array[j]
[5, 0, 2, 6, 9, 3, 1, 4, 8, 7]
    i                    j

# i stops at 6, j stops at 4: swap array[i] and array[j]
[5, 0, 2, 4, 9, 3, 1, 6, 8, 7]
          i           j

# i stops at 9, j stops at 1: swap array[i] and array[j]
[5, 0, 2, 4, 1, 3, 9, 6, 8, 7]
             i     j

# the pointers cross (i = 6, j = 5): swap array[low] and array[j]
[3, 0, 2, 4, 1, 5, 9, 6, 8, 7]
                j

Now, the pivot element, 5, is in its correct position (index 5), and the array is partitioned into two subarrays: [3, 0, 2, 4, 1] and [9, 6, 8, 7]. We recursively sort the two subarrays using the same algorithm, until the whole array is sorted.
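
Since this blog is about C, here is a minimal C translation of the pseudocode above, offered as a compilable sketch rather than a reference implementation. It keeps the same structure as the pseudocode (first element as the pivot, two indices scanning inward) and adds a small main function so you can run it on the example array:

#include <stdio.h>

// A function to swap two elements in an array
void swap(int array[], int i, int j) {
  int temp = array[i];
  array[i] = array[j];
  array[j] = temp;
}

// Two-way quick sort, mirroring the pseudocode above:
// the first element of the subarray is used as the pivot.
void quick_sort(int array[], int low, int high) {
  // base case: the subarray has one or zero elements
  if (low >= high) {
    return;
  }
  int pivot = array[low];
  int i = low;   // scans from the left
  int j = high;  // scans from the right
  while (i < j) {
    // move i right past elements that are <= pivot
    while (i < high && array[i] <= pivot) {
      i++;
    }
    // move j left past elements that are >= pivot
    while (j > low && array[j] >= pivot) {
      j--;
    }
    if (i < j) {
      swap(array, i, j);
    }
  }
  // put the pivot into its final position
  swap(array, low, j);
  // recursively sort the left and right subarrays
  quick_sort(array, low, j - 1);
  quick_sort(array, j + 1, high);
}

int main(void) {
  int array[10] = {5, 8, 2, 6, 9, 3, 1, 4, 0, 7};
  quick_sort(array, 0, 9);
  for (int i = 0; i < 10; i++) {
    printf("%d ", array[i]);  // prints 0 1 2 3 4 5 6 7 8 9
  }
  printf("\n");
  return 0;
}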

2.2. Partition Strategy

The partition strategy is the key to the quick sort algorithm. It determines how to choose the pivot element, and how to partition the array into two subarrays based on the pivot. There are different ways to implement the partition strategy, such as:

  • Choosing the first, the last, or the middle element as the pivot.
  • Choosing a random element as the pivot.
  • Choosing the median of the first, the last, and the middle element as the pivot.

The choice of the pivot element affects the performance of the quick sort algorithm. Ideally, the pivot element should be close to the median of the array, so that the subarrays are balanced. Computing the exact median is possible in linear time, but it is too expensive to be worthwhile in practice, so cheap heuristics such as the median-of-three method are used to approximate it.
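
To make the median-of-three heuristic concrete, here is a small C sketch; the function name median_of_three is introduced here for illustration. It returns the index of the median of the first, middle, and last elements, which the caller can then swap into whichever position its partition routine expects the pivot to start in (array[low] in the pseudocode above):

// Return the index of the median of array[low], array[mid], and array[high].
int median_of_three(int array[], int low, int high) {
  int mid = low + (high - low) / 2;
  int a = array[low], b = array[mid], c = array[high];
  if ((a <= b && b <= c) || (c <= b && b <= a)) {
    return mid;   // the middle element is the median
  }
  if ((b <= a && a <= c) || (c <= a && a <= b)) {
    return low;   // the first element is the median
  }
  return high;    // otherwise the last element is the median
}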

The partition strategy also determines how to swap the elements in the array to form the subarrays. One common way is to use two pointers, one from the left and one from the right, to scan the array and swap the elements that are out of order. Another way is to use a single pointer, called the partition index, to mark the boundary between the elements smaller than or equal to the pivot and the rest, while a second index scans the array from left to right. Each time the scan finds an element that belongs on the left, it is swapped to the partition index, which then moves one step to the right. When the scan reaches the end of the array, the pivot is swapped with the element at the partition index, and the two subarrays are formed.
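
As a concrete illustration of the single-pointer scheme described above (commonly known as the Lomuto partition), here is a short C sketch; the function name partition_index is introduced here for illustration, and this variant uses the last element as the pivot. A quick sort built on it would recurse on (low, p - 1) and (p + 1, high), where p is the returned position:

// Partition array[low..high] around the last element and
// return the pivot's final position.
int partition_index(int array[], int low, int high) {
  int pivot = array[high];
  int p = low;  // partition index: the next slot for an element <= pivot
  for (int k = low; k < high; k++) {
    if (array[k] <= pivot) {
      // move the small element into the left part
      int temp = array[k];
      array[k] = array[p];
      array[p] = temp;
      p++;
    }
  }
  // place the pivot between the two parts
  int temp = array[p];
  array[p] = array[high];
  array[high] = temp;
  return p;
}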

How do you choose the best partition strategy for your quick sort algorithm? There is no definitive answer to this question, as different strategies may have different advantages and disadvantages depending on the input array and the implementation details. However, some general guidelines are:

  • Avoid choosing a fixed position as the pivot, such as the first or the last element, as this may lead to unbalanced subarrays if the array is already sorted or nearly sorted.
  • Use a random or a median-of-three method to choose the pivot, as this may reduce the chances of unbalanced subarrays and improve the average performance.
  • Use a two-pointer or a partition index method to swap the elements, as this may reduce the number of swaps and improve the space efficiency.

2.3. Time and Space Complexity

The time complexity of the quick sort algorithm depends on the size of the array and the balance of the subarrays. In the best case, the subarrays are equally balanced, and the algorithm takes O(n log n) time to sort an array of n elements. In the worst case, the subarrays are extremely unbalanced, and the algorithm takes O(n²) time to sort an array of n elements. In the average case, the subarrays are moderately balanced, and the algorithm takes O(n log n) time to sort an array of n elements.

The space complexity of the quick sort algorithm depends on the depth of the recursion. In the best case, the recursion depth is O(log n), and the algorithm takes O(log n) space. In the worst case, the recursion depth is O(n), and the algorithm takes O(n) space. In the average case, the recursion depth is O(log n), and the algorithm takes O(log n) space.

How do you analyze the time complexity of the quick sort algorithm? One way is to write down its recurrence relation and solve it, for example with the master theorem, a formula that gives asymptotic bounds for divide-and-conquer recurrences. The recurrence relations for the quick sort algorithm are:

T(n) = 2T(n/2) + O(n) # best and average case
T(n) = T(n-1) + O(n) # worst case

The master theorem solves the first recurrence directly. The second does not fit the master theorem, but expanding it gives the sum cn + c(n-1) + ... + c = c·n(n+1)/2. The resulting bounds are:

T(n) = O(n log n) # best and average case
T(n) = O(n²) # worst case

Another way is to use the recurrence tree method, which is a graphical representation of the recurrence relation. The recurrence tree for the quick sort algorithm looks like this:

[Figure: a recurrence tree for the quick sort algorithm (source: Wikipedia)]

By summing up the costs of each level of the tree, we can obtain the same bounds as the master theorem.

3. Problems with Duplicates and Equal Elements

As we have seen in the previous section, the quick sort algorithm works well when the subarrays are balanced and the pivot element is close to the median of the array. However, what happens when the input array contains many duplicate or equal elements? In this section, you will learn how duplicates and equal elements can cause problems for the quick sort algorithm, such as unbalanced partitions and degraded performance. You will also see some examples and analysis of how these problems affect the time and space complexity of the algorithm.

3.1. Unbalanced Partitions

One of the main problems with duplicates and equal elements is that they can create unbalanced partitions, which means that one subarray is much larger than the other. This can happen when the pivot element is equal to many other elements in the array, and the partition strategy does not handle them properly. For example, consider the following array of 10 elements, where all the elements are equal to 5:

array = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

If we use the same partition strategy as in the previous section, where we choose the first element as the pivot and use two pointers to swap the elements, we will end up with the following partition:

# initial state
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
 i                          j

# i scans all the way to the right and j all the way back to the left,
# so they cross (i = 9, j = 0) without a single swap; the final
# swap of array[low] and array[j] is a no-op because j == low
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
 j

As you can see, the pivot element, 5, is in its correct position, but the subarrays are extremely unbalanced. The left subarray has zero elements, and the right subarray has nine elements. This means that the quick sort algorithm will have to sort a subarray of size n-1 in the next recursive call, which is the worst case scenario. This can lead to a quadratic time complexity, as we will see in the next subsection.
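
To see this concretely, here is a small, self-contained experiment; the function quick_sort_depth and the counter max_depth are introduced here just for the demonstration. It runs the same two-way partition as in Section 2.1 on the all-equal array and reports how deep the recursion goes, which grows linearly with the number of elements:

#include <stdio.h>

static int max_depth = 0;

static void swap(int array[], int i, int j) {
  int temp = array[i];
  array[i] = array[j];
  array[j] = temp;
}

// The two-way quick sort from Section 2.1, with a depth parameter added
// so we can record how deep the recursion gets.
static void quick_sort_depth(int array[], int low, int high, int depth) {
  if (depth > max_depth) {
    max_depth = depth;
  }
  if (low >= high) {
    return;
  }
  int pivot = array[low];
  int i = low, j = high;
  while (i < j) {
    while (i < high && array[i] <= pivot) i++;
    while (j > low && array[j] >= pivot) j--;
    if (i < j) swap(array, i, j);
  }
  swap(array, low, j);
  quick_sort_depth(array, low, j - 1, depth + 1);
  quick_sort_depth(array, j + 1, high, depth + 1);
}

int main(void) {
  int array[10] = {5, 5, 5, 5, 5, 5, 5, 5, 5, 5};
  quick_sort_depth(array, 0, 9, 1);
  // With 10 equal elements the recursion reaches a depth of 10:
  // every partition peels off only the pivot itself.
  printf("maximum recursion depth: %d\n", max_depth);
  return 0;
}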

3.2. Degraded Performance

Another problem with duplicates and equal elements is that they can degrade the performance of the quick sort algorithm, meaning that it takes more time and more space to sort the array. This can happen when the input array has many duplicate or equal elements, and the pivot element is not close to the median of the array. For example, consider the following array of 10 elements, where half of the elements are equal to 5:

array = [5, 8, 2, 5, 9, 5, 1, 5, 0, 5]

If we use the same partition strategy as in the previous section, where we choose the first element as the pivot and use two pointers to swap the elements, we will end up with the following partition:

# initial state
[5, 8, 2, 5, 9, 5, 1, 5, 0, 5]
 i                          j

# i stops at 8, j stops at 0: swap array[i] and array[j]
[5, 0, 2, 5, 9, 5, 1, 5, 8, 5]
    i                    j

# i stops at 9, j stops at 1: swap array[i] and array[j]
[5, 0, 2, 5, 1, 5, 9, 5, 8, 5]
             i     j

# the pointers cross (i = 6, j = 4): swap array[low] and array[j]
[1, 0, 2, 5, 5, 5, 9, 5, 8, 5]
             j

As you can see, the pivot element, 5, ends up at index 4, and this particular split happens to be reasonably balanced: the left subarray [1, 0, 2, 5] has four elements and the right subarray [5, 9, 5, 8, 5] has five. The real problem is that the copies of the pivot are scattered across both subarrays, so they will be compared and swapped again in every later recursive call. As we will see next, this extra work does not change the asymptotic time complexity, but it does slow the algorithm down.

How does this affect the time and space complexity of the quick sort algorithm? To answer this question, we need to analyze the recurrence relation of the algorithm. The recurrence relation for the quick sort algorithm with half equal elements is:

T(n) = T(n/2) + T(n/2) + O(n) # average case

Using the master theorem, we can derive the following bound for the quick sort algorithm with half equal elements:

T(n) = O(n log n) # average case

This means that the quick sort algorithm with half equal elements has the same time complexity as the quick sort algorithm with no equal elements, which is O(n log n). However, this does not mean that the performance is the same. In fact, the quick sort algorithm with half equal elements takes more time than the quick sort algorithm with no equal elements, because it has to do more comparisons and swaps to partition the array. This can be seen by comparing the number of operations performed by the two algorithms on the same input array:

# quick sort algorithm with no equal elements
array = [5, 8, 2, 6, 9, 3, 1, 4, 0, 7]
# number of comparisons: 25
# number of swaps: 9

# quick sort algorithm with half equal elements
array = [5, 8, 2, 5, 9, 5, 1, 5, 0, 5]
# number of comparisons: 36
# number of swaps: 14

As you can see, the quick sort algorithm with half equal elements performs more operations than the quick sort algorithm with no equal elements, even though they have the same time complexity. This means that the quick sort algorithm with half equal elements has a higher constant factor, which affects the performance of the algorithm.
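
The exact counts depend on the pivot rule and on how comparisons are tallied, so treat the numbers above as illustrative. If you want to measure such counts yourself, here is a self-contained sketch that instruments the two-way quick sort from Section 2.1 with global counters (the names quick_sort_counted, comparisons, and swaps are introduced here for the experiment):

#include <stdio.h>

static long comparisons = 0;
static long swaps = 0;

static void swap(int array[], int i, int j) {
  int temp = array[i];
  array[i] = array[j];
  array[j] = temp;
  swaps++;
}

// The two-way quick sort from Section 2.1; every element-vs-pivot
// comparison bumps the comparisons counter via the comma operator.
static void quick_sort_counted(int array[], int low, int high) {
  if (low >= high) {
    return;
  }
  int pivot = array[low];
  int i = low, j = high;
  while (i < j) {
    while (i < high && (comparisons++, array[i] <= pivot)) i++;
    while (j > low && (comparisons++, array[j] >= pivot)) j--;
    if (i < j) swap(array, i, j);
  }
  swap(array, low, j);
  quick_sort_counted(array, low, j - 1);
  quick_sort_counted(array, j + 1, high);
}

int main(void) {
  int no_duplicates[10]   = {5, 8, 2, 6, 9, 3, 1, 4, 0, 7};
  int half_duplicates[10] = {5, 8, 2, 5, 9, 5, 1, 5, 0, 5};

  quick_sort_counted(no_duplicates, 0, 9);
  printf("no duplicates:   %ld comparisons, %ld swaps\n", comparisons, swaps);

  comparisons = 0;
  swaps = 0;
  quick_sort_counted(half_duplicates, 0, 9);
  printf("half duplicates: %ld comparisons, %ld swaps\n", comparisons, swaps);
  return 0;
}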

The same argument applies to the space complexity. The asymptotic bound is still O(log n) on average, the same as with no equal elements, but the equal elements tend to produce more skewed splits deeper in the recursion, which can increase the actual recursion depth and therefore the stack usage of the algorithm.

Therefore, we can conclude that duplicates and equal elements can degrade the performance of the quick sort algorithm, even though they do not change the asymptotic bounds of the time and space complexity. This is why we need to use a different partition strategy, such as the three-way partition technique, which we will learn in the next section.

3.3. Examples and Analysis

To illustrate how duplicates and equal elements can affect the time and space complexity of the quick sort algorithm, let’s look at some examples and analysis. We will use the same partition strategy as in the previous sections, where we choose the first element as the pivot and use two pointers to swap the elements. We will also use the recurrence tree method to visualize the recurrence relation of the algorithm.

Example 1: An array of 10 elements, where all the elements are equal to 5.

array = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

The recurrence tree has n levels, and each level has a cost of O(n). Therefore, the total cost of the algorithm is O(n²), which is the worst case time complexity. The space complexity is also O(n), which is the worst case space complexity.

Example 2: An array of 10 elements, where half of the elements are equal to 5.

array = [5, 8, 2, 5, 9, 5, 1, 5, 0, 5]

The recurrence tree has log n levels, and each level has a cost of O(n). Therefore, the total cost of the algorithm is O(n log n), which is the average case time complexity. The space complexity is also O(log n), which is the average case space complexity.

Example 3: An array of 10 elements, where no elements are equal.

array = [5, 8, 2, 6, 9, 3, 1, 4, 0, 7]

The recurrence tree has log n levels, and each level has a cost of O(n). Therefore, the total cost of the algorithm is O(n log n), which is the best case time complexity. The space complexity is also O(log n), which is the best case space complexity.

From these examples, we can see that duplicates and equal elements can cause problems for the quick sort algorithm, such as unbalanced partitions and degraded performance. However, these problems can be solved by using a different partition strategy, such as the three-way partition technique, which we will learn in the next section.

4. Three-Way Partition Technique

A possible solution to the problems caused by duplicates and equal elements is to use a different partition strategy, called the three-way partition technique. The three-way partition technique is a variation of the quick sort algorithm that divides the input array into three subarrays, instead of two. The three subarrays are:

  • The left subarray, which contains all the elements that are smaller than the pivot.
  • The middle subarray, which contains all the elements that are equal to the pivot.
  • The right subarray, which contains all the elements that are larger than the pivot.

The three-way partition technique can handle duplicates and equal elements more efficiently, because it avoids sorting the equal elements repeatedly, and improves the balance of the subarrays. The three-way partition technique can also handle cases where the pivot element is not close to the median of the array, because it can reduce the size of the subarrays more effectively.

In this section, you will learn the motivation and concept behind the three-way partition technique, how to implement it in C, and its advantages and disadvantages compared to the standard quick sort algorithm. You will also see some examples and analysis of how the three-way partition technique can improve the performance of quick sort in cases with duplicate or equal elements.

4.1. Motivation and Concept

The motivation behind the three-way partition technique is to use a different way of swapping the elements in the array to form the subarrays. Instead of using two pointers, one from the left and one from the right, the three-way partition technique uses three pointers, called low, mid, and high. The low pointer marks the beginning of the left subarray, the high pointer marks the end of the right subarray, and the mid pointer scans the array from left to right. The three-way partition technique works as follows:

  1. Choose a pivot element from the array, initialize the low and mid pointers to the first element, and initialize the high pointer to the last element of the array.
  2. Scan the array from left to right using the mid pointer, and compare each element with the pivot element.
  3. If the element is smaller than the pivot, swap it with the element at the low pointer, and increment both the low and mid pointers.
  4. If the element is equal to the pivot, do nothing, and increment the mid pointer.
  5. If the element is larger than the pivot, swap it with the element at the high pointer, and decrement the high pointer.
  6. Repeat steps 2 to 5 until the mid pointer crosses the high pointer.
  7. The array is now partitioned into three subarrays: the left subarray from the first element up to (but not including) the low pointer, the middle subarray (the elements equal to the pivot) from the low pointer to the high pointer inclusive, and the right subarray from just after the high pointer to the last element.

Here is pseudocode for the three-way partition technique:

def three_way_partition(array, low, high):
  # choose a pivot element
  pivot = array[low]
  # initialize the pointers
  i = low # low pointer
  j = low # mid pointer
  k = high # high pointer
  # scan the array from left to right
  while j <= k:
    # compare the element with the pivot
    if array[j] < pivot:
      # swap the element with the low pointer
      swap(array, i, j)
      # increment the low and mid pointers
      i = i + 1
      j = j + 1
    elif array[j] == pivot:
      # do nothing
      # increment the mid pointer
      j = j + 1
    else:
      # swap the element with the high pointer
      swap(array, j, k)
      # decrement the high pointer
      k = k - 1
  # return the indices of the subarrays
  return i, k

To illustrate how the three-way partition technique works, let’s take an example of an array of 10 elements, where half of the elements are equal to 5:

array = [5, 8, 2, 5, 9, 5, 1, 5, 0, 5]

We choose the first element, 5, as the pivot. We use three pointers: i marks the end of the "smaller than pivot" region, j scans the array from left to right, and k marks the start of the "larger than pivot" region. Elements smaller than the pivot are swapped down to position i, elements larger than the pivot are swapped up to position k, and elements equal to the pivot are simply left in place as j passes over them. Here is what the array looks like after each step:

# initial state: i = 0, j = 0, k = 9; array[0] == pivot, so j simply advances
[5, 8, 2, 5, 9, 5, 1, 5, 0, 5]

# array[1] = 8 > pivot: swap array[j] and array[k], k becomes 8
[5, 5, 2, 5, 9, 5, 1, 5, 0, 8]

# array[1] = 5 == pivot: j advances to 2
# array[2] = 2 < pivot: swap array[i] and array[j], i becomes 1, j becomes 3
[2, 5, 5, 5, 9, 5, 1, 5, 0, 8]

# array[3] = 5 == pivot: j advances to 4
# array[4] = 9 > pivot: swap array[j] and array[k], k becomes 7
[2, 5, 5, 5, 0, 5, 1, 5, 9, 8]

# array[4] = 0 < pivot: swap array[i] and array[j], i becomes 2, j becomes 5
[2, 0, 5, 5, 5, 5, 1, 5, 9, 8]

# array[5] = 5 == pivot: j advances to 6
# array[6] = 1 < pivot: swap array[i] and array[j], i becomes 3, j becomes 7
[2, 0, 1, 5, 5, 5, 5, 5, 9, 8]

# array[7] = 5 == pivot: j advances to 8, which is past k = 7, so the scan stops
[2, 0, 1, 5, 5, 5, 5, 5, 9, 8]

Now, all the copies of the pivot, 5, are in their final positions (indices 3 through 7), and the array is partitioned into three subarrays: [2, 0, 1], [5, 5, 5, 5, 5], and [9, 8]. We recursively sort only the left and right subarrays using the same technique, until the array is sorted; the middle subarray never has to be touched again.

4.2. Implementation in C

In this subsection, you will learn how to implement the three-way partition technique in C, using the same pseudocode as in the previous subsection. You will also see some code examples and outputs of the three-way partition technique on different input arrays.

The implementation of the three-way partition technique in C is very similar to the implementation of the standard quick sort algorithm, with some minor changes. The main difference is that the partition function uses three pointers (low, mid, and high in the description above; k, l, and m in the code below) instead of two. It also reports two indices instead of one: the last index of the "smaller than pivot" block and the first index of the "larger than pivot" block, so that the recursive calls can skip the equal elements in the middle entirely. Here is the C code for the three-way partition technique:

// A function to swap two elements in an array
void swap(int array[], int i, int j) {
  int temp = array[i];
  array[i] = array[j];
  array[j] = temp;
}

// A function to perform the three-way partition technique
void three_way_partition(int array[], int low, int high, int *i, int *j) {
  // choose a pivot element
  int pivot = array[low];
  // initialize the pointers
  int k = low; // low pointer
  int l = low; // mid pointer
  int m = high; // high pointer
  // scan the array from left to right
  while (l <= m) {
    // compare the element with the pivot
    if (array[l] < pivot) {
      // swap the element with the low pointer
      swap(array, k, l);
      // increment the low and mid pointers
      k++;
      l++;
    }
    else if (array[l] == pivot) {
      // do nothing
      // increment the mid pointer
      l++;
    }
    else {
      // swap the element with the high pointer
      swap(array, l, m);
      // decrement the high pointer
      m--;
    }
  }
  // return the indices of the subarrays
  *i = k - 1;
  *j = m + 1;
}

// A function to perform the quick sort algorithm using the three-way partition technique
void quick_sort(int array[], int low, int high) {
  // base case: array has one or zero elements
  if (low >= high) {
    return;
  }
  // perform the three-way partition technique
  int i, j;
  three_way_partition(array, low, high, &i, &j);
  // recursively sort the left and right subarrays
  quick_sort(array, low, i);
  quick_sort(array, j, high);
}

To test the code, let's sort the three example arrays from Section 3.3. Each snippet below is meant to be compiled separately together with the functions above (a combined test program follows at the end of this subsection), and the expected result is shown as a comment:

// Example 1: An array of 10 elements, where all the elements are equal to 5
int array[] = {5, 5, 5, 5, 5, 5, 5, 5, 5, 5};
int n = 10;
quick_sort(array, 0, n - 1);
// Output: [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]

// Example 2: An array of 10 elements, where half of the elements are equal to 5
int array[] = {5, 8, 2, 5, 9, 5, 1, 5, 0, 5};
int n = 10;
quick_sort(array, 0, n - 1);
// Output: [0, 1, 2, 5, 5, 5, 5, 5, 8, 9]

// Example 3: An array of 10 elements, where no elements are equal
int array[] = {5, 8, 2, 6, 9, 3, 1, 4, 0, 7};
int n = 10;
quick_sort(array, 0, n - 1);
// Output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
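
If you prefer to run everything as one program, here is a minimal test harness. It assumes the swap, three_way_partition, and quick_sort functions shown earlier in this section are defined in the same file; the helper print_array is introduced here just for printing:

#include <stdio.h>

// Print an array in the [a, b, c] style used throughout this blog.
void print_array(const int array[], int n) {
  printf("[");
  for (int i = 0; i < n; i++) {
    if (i > 0) {
      printf(", ");
    }
    printf("%d", array[i]);
  }
  printf("]\n");
}

int main(void) {
  int example1[10] = {5, 5, 5, 5, 5, 5, 5, 5, 5, 5};
  int example2[10] = {5, 8, 2, 5, 9, 5, 1, 5, 0, 5};
  int example3[10] = {5, 8, 2, 6, 9, 3, 1, 4, 0, 7};

  quick_sort(example1, 0, 9);
  quick_sort(example2, 0, 9);
  quick_sort(example3, 0, 9);

  print_array(example1, 10);  // expected: [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
  print_array(example2, 10);  // expected: [0, 1, 2, 5, 5, 5, 5, 5, 8, 9]
  print_array(example3, 10);  // expected: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
  return 0;
}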

As you can see, the code works correctly and sorts the array in ascending order, regardless of the presence or absence of duplicates and equal elements. In the next subsection, you will learn the advantages and disadvantages of the three-way partition technique compared to the standard quick sort algorithm.

4.3. Advantages and Disadvantages

In this subsection, you will learn the advantages and disadvantages of the three-way partition technique compared to the standard quick sort algorithm. You will also see some comparisons and trade-offs between the two algorithms in terms of time and space complexity, stability, and adaptability.

Advantages

The three-way partition technique has some advantages over the standard quick sort algorithm, such as:

  • It can handle duplicates and equal elements more efficiently, because it avoids sorting the equal elements repeatedly, and improves the balance of the subarrays.
  • It can handle cases where the pivot element is not close to the median of the array, because it can reduce the size of the subarrays more effectively.
  • It can achieve the O(n log n) running time in more scenarios, in particular when the array has only a few distinct values; when every element is equal, a single partition finishes the whole job in linear time.
  • Combined with a random pivot, it makes the O(n²) worst case very unlikely in practice: the expected running time is O(n log n), and inputs made up of a few repeated values no longer trigger the worst case at all.

Disadvantages

The three-way partition technique also has some disadvantages over the standard quick sort algorithm, such as:

  • It requires more code and logic to implement, because it uses three pointers instead of two, and it returns two indices instead of one.
  • It may perform worse than the standard quick sort algorithm in some cases, such as when the array has no duplicates or equal elements, or when the array has many distinct elements.
  • Like the standard algorithm, its recursion depth can still reach O(n) on unfavorable inputs if the recursion stack is not managed carefully; a common mitigation, recursing into the smaller subarray first, is sketched after this list.
  • It does not improve the stability of the quick sort algorithm, which means that it does not preserve the relative order of the elements with the same value.
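
A common way to keep the recursion depth at O(log n), regardless of how unbalanced the partitions are, is to recurse only into the smaller of the two outer subarrays and loop on the larger one. Here is a sketch of that idea; the function name quick_sort_small_first is introduced here for illustration, and it assumes the swap and three_way_partition functions from Section 4.2 are defined in the same file:

// Quick sort that always recurses into the smaller outer subarray
// and iterates on the larger one, so the call stack stays shallow.
void quick_sort_small_first(int array[], int low, int high) {
  while (low < high) {
    int i, j;
    three_way_partition(array, low, high, &i, &j);
    // array[low..i] holds the elements smaller than the pivot,
    // array[j..high] the elements larger than the pivot
    if (i - low < high - j) {
      quick_sort_small_first(array, low, i);   // smaller side: recurse
      low = j;                                 // larger side: keep looping
    } else {
      quick_sort_small_first(array, j, high);  // smaller side: recurse
      high = i;                                // larger side: keep looping
    }
  }
}

Because each recursive call handles at most half of the remaining range, the call stack never grows deeper than O(log n), even when the partitions are lopsided.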

Comparisons and Trade-offs

The three-way partition technique and the standard quick sort algorithm have different strengths and weaknesses, depending on the characteristics of the input array and the choice of the pivot element. Here are some comparisons and trade-offs between the two algorithms:

Scenario | Standard Quick Sort | Three-Way Partition
Array with many duplicates or equal elements | May produce unbalanced subarrays and degrade the performance to O(n²) | Can produce balanced subarrays and keep the performance at O(n log n)
Array with no duplicates or equal elements | Can produce balanced subarrays and achieve the best performance of O(n log n) | May perform worse than the standard quick sort due to the extra swaps and comparisons
Array with only a few distinct elements | May produce unbalanced subarrays and degrade the performance to O(n²) | Can group each run of equal elements in one pass and approach linear time
Array with many distinct elements | Can produce balanced subarrays and achieve the average performance of O(n log n) | May perform worse than the standard quick sort due to the extra swaps and comparisons
Pivot element close to the median of the array | Can produce balanced subarrays and achieve the best performance of O(n log n) | Can produce balanced subarrays and achieve the best performance of O(n log n)
Pivot element far from the median of the array | May produce unbalanced subarrays and degrade the performance to O(n²) | Can shrink the subarrays more effectively when the pivot value is repeated, which limits the slowdown
Pivot element chosen randomly | Makes the worst case unlikely and achieves an expected performance of O(n log n) | Makes the worst case unlikely, achieves an expected performance of O(n log n), and runs in O(n) when all elements are equal
Pivot element chosen deterministically | May encounter the worst case and degrade the performance to O(n²) | May encounter the worst case and degrade the performance to O(n²)
Space complexity | O(log n) in the best and average cases, O(n) in the worst case | O(log n) in the best and average cases, O(n) in the worst case
Stability | Not stable | Not stable

As you can see, the three-way partition technique and the standard quick sort algorithm have different advantages and disadvantages, and there is no definitive answer to which one is better. The choice of the algorithm depends on the characteristics of the input array and the preference of the programmer. In general, the three-way partition technique is more suitable for arrays with many duplicates or equal elements, or arrays with only a few distinct elements. The standard quick sort algorithm is more suitable for arrays with no duplicates or equal elements, or arrays with many distinct elements.

In the next section, you will learn the conclusion of this blog, and some key takeaways from this tutorial.

5. Conclusion

In this blog, you have learned how to modify the quick sort algorithm in C to handle cases with duplicate or equal elements using the three-way partition technique. You have also learned the advantages and disadvantages of the three-way partition technique compared to the standard quick sort algorithm, and some comparisons and trade-offs between the two algorithms in terms of time and space complexity, stability, and adaptability.

Here are some key takeaways from this tutorial:

  • Quick sort is one of the most popular and efficient sorting algorithms, but it has a problem when it comes to handling duplicates and equal elements.
  • The three-way partition technique is a variation of the quick sort algorithm that divides the input array into three subarrays: one for elements smaller than the pivot, one for elements equal to the pivot, and one for elements larger than the pivot.
  • The three-way partition technique can handle duplicates and equal elements more efficiently, because it avoids sorting the equal elements repeatedly, and improves the balance of the subarrays.
  • The three-way partition technique can also handle cases where the pivot element is not close to the median of the array, because it can reduce the size of the subarrays more effectively.
  • The three-way partition technique and the standard quick sort algorithm have different strengths and weaknesses, depending on the characteristics of the input array and the choice of the pivot element.
  • The choice of the algorithm depends on the characteristics of the input array and the preference of the programmer.

We hope you enjoyed this blog and learned something new and useful. If you have any questions, comments, or feedback, please feel free to leave them in the comment section below. Thank you for reading!
