Quick Sort in C: Time and Space Complexity Analysis

This blog explains how to implement quick sort in C and analyze its time and space complexity in different scenarios.

1. Introduction

Quick sort is one of the most popular and efficient sorting algorithms. It is based on the idea of dividing and conquering, where a large problem is broken down into smaller subproblems that are easier to solve. In this blog, you will learn how to implement quick sort in C and analyze its time and space complexity in different scenarios.

Quick sort works by choosing a pivot element from the array and partitioning the array into two subarrays, such that all the elements that are less than or equal to the pivot are in the left subarray, and all the elements that are greater than the pivot are in the right subarray. Then, quick sort recursively sorts the left and right subarrays until the entire array is sorted.

But how do you choose the pivot element? And how do you partition the array efficiently? And how do you measure the performance of quick sort in terms of time and space complexity? These are some of the questions that you will answer in this blog. By the end of this blog, you will have a clear understanding of how quick sort works and how it compares to other sorting algorithms.

2. Quick Sort Algorithm

In this section, you will learn how to implement quick sort in C and understand how it works. You will also see some examples of quick sort in action and how it compares to other sorting algorithms.

The quick sort algorithm consists of two main steps: partitioning and recursion. Let’s look at each step in detail.

2.1. Partitioning Strategy

The partitioning strategy is the key to the quick sort algorithm. It determines how the array is divided into two subarrays around the pivot element. There are different ways to choose the pivot element and partition the array; one of the oldest and most widely used is the Hoare partition scheme, named after the inventor of quick sort, C. A. R. Hoare. The variant described below additionally swaps the pivot into its final position at the end, which keeps the recursive calls simple.

The Hoare partition scheme works as follows:

  • Choose the first element of the subarray as the pivot.
  • Initialize two pointers, one at the left end and one at the right end of the subarray.
  • Move the left pointer to the right until it finds an element that is greater than the pivot (or it reaches the right pointer).
  • Move the right pointer to the left until it finds an element that is less than or equal to the pivot.
  • If the left pointer is still to the left of the right pointer, swap the elements at these two positions.
  • Repeat steps 3 to 5 until the left pointer meets or crosses the right pointer.
  • Swap the pivot with the element at the right pointer and return that position as the partition index.

The partition index is the point where the array is split into two subarrays. All the elements to the left of the partition index are less than or equal to the pivot, and all the elements to the right of it are greater than the pivot. Thanks to the final swap, the pivot element sits exactly at the partition index, which is its correct position in the sorted array, so it can be excluded from both recursive calls.

Here is an example of how the Hoare partition scheme works on an array of 7 elements:

// The array to be sorted
int arr[] = {10, 80, 30, 90, 40, 50, 70};

// The pivot element is the first element, 10
int pivot = arr[0];

// The left and right pointers are initialized at the ends of the array
int left = 0;
int right = 6;

// The partitioning process begins
while (left < right) {
  // Move the left pointer to the right until it finds an element greater than the pivot
  // (without letting it run past the right pointer)
  while (left < right && arr[left] <= pivot) {
    left++;
  }
  // Move the right pointer to the left until it finds an element less than or equal to the pivot
  while (arr[right] > pivot) {
    right--;
  }
  // If the left pointer is still to the left of the right pointer, swap the elements at these positions
  if (left < right) {
    int temp = arr[left];
    arr[left] = arr[right];
    arr[right] = temp;
  }
}
// The partitioning process ends when the left pointer meets or crosses the right pointer
// Swap the pivot into the position of the right pointer, which becomes the partition index
int temp = arr[0];
arr[0] = arr[right];
arr[right] = temp;
int partition_index = right;

// Because the pivot, 10, is the smallest element, no swaps take place and
// the array after partitioning is unchanged:
// {10, 80, 30, 90, 40, 50, 70}
// The partition index is 0
// The pivot element, 10, is already at its correct position in the sorted array

As you can see, the Hoare partition scheme is a simple and efficient way to partition the array for quick sort. However, it is not the only way. There are other partition schemes, such as the Lomuto partition scheme, which scans the subarray from left to right with a single loop and typically uses the last element as the pivot; each scheme has its own advantages and disadvantages.
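
Before moving on, it helps to wrap this logic into a reusable function, because the quick_sort function in the next section calls it as partition(arr, start, end). The sketch below mirrors the walk-through above; the small swap helper is our own addition, not part of any standard library.

// Swap two integers in place
void swap(int *a, int *b) {
  int temp = *a;
  *a = *b;
  *b = temp;
}

// Partition arr[start..end] around its first element and
// return the final position of the pivot
int partition(int arr[], int start, int end) {
  int pivot = arr[start];
  int left = start;
  int right = end;

  while (left < right) {
    // Advance over elements that are less than or equal to the pivot
    while (left < right && arr[left] <= pivot) {
      left++;
    }
    // Move back over elements that are greater than the pivot
    while (arr[right] > pivot) {
      right--;
    }
    if (left < right) {
      swap(&arr[left], &arr[right]);
    }
  }

  // Place the pivot at its final position and report it
  swap(&arr[start], &arr[right]);
  return right;
}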

2.2. Recursive Calls

After partitioning the array, quick sort recursively sorts the left and right subarrays. This is done by calling the quick sort function on each subarray, passing the start and end indices as parameters. The base case of the recursion is when the subarray has only one or zero elements, in which case it is already sorted and no further action is required.

The recursive calls follow a divide and conquer approach, where a large problem is divided into smaller and simpler subproblems that are solved independently. In quick sort, the subproblems are the two subarrays on either side of the pivot. Because the partitioning step already places every element on the correct side of the pivot, no extra work is needed to combine the results: once both subarrays are sorted in place, the entire array is sorted.

Here is an example of how the recursive calls work on the same array of 7 elements that we used in the previous section:

// The array to be sorted
int arr[] = {10, 80, 30, 90, 40, 50, 70};

// The quick sort function
void quick_sort(int arr[], int start, int end) {
  // The base case is when the subarray has one or zero elements
  if (start >= end) {
    return;
  }
  // The partitioning step returns the partition index
  int partition_index = partition(arr, start, end);
  // The recursive calls sort the left and right subarrays
  quick_sort(arr, start, partition_index - 1);
  quick_sort(arr, partition_index + 1, end);
}

// The initial call to the quick sort function
quick_sort(arr, 0, 6);

// The recursive calls tree is as follows:

// quick_sort(arr, 0, 6) -> partition(arr, 0, 6) -> partition_index = 0
//   quick_sort(arr, 0, -1) -> base case, return
//   quick_sort(arr, 1, 6) -> partition(arr, 1, 6) -> partition_index = 5
//     quick_sort(arr, 1, 4) -> partition(arr, 1, 4) -> partition_index = 3
//       quick_sort(arr, 1, 2) -> partition(arr, 1, 2) -> partition_index = 2
//         quick_sort(arr, 1, 1) -> base case, return
//         quick_sort(arr, 3, 2) -> base case, return
//       quick_sort(arr, 4, 4) -> base case, return
//     quick_sort(arr, 6, 6) -> base case, return

// The array after sorting is
// {10, 30, 40, 50, 70, 80, 90}

As you can see, the recursive calls sort the subarrays in a top-down manner: each call partitions its subarray, which fixes one pivot in its final position, and then sorts the two remaining pieces. Notice that because the first pivot, 10, happens to be the smallest element, the very first split is maximally unbalanced: the left subarray is empty and the right subarray contains everything else. This is exactly the kind of split that leads to the worst case analyzed in the next section.
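
If you want to run the example yourself, the article's snippets can be assembled into a small program. The driver below assumes the swap and partition functions from Section 2.1 and the quick_sort function above are defined in the same file.

#include <stdio.h>

// Assumes swap(), partition(), and quick_sort() from the snippets above
// are defined earlier in this file.
int main(void) {
  int arr[] = {10, 80, 30, 90, 40, 50, 70};
  int n = sizeof(arr) / sizeof(arr[0]);

  quick_sort(arr, 0, n - 1);

  // Expected output: 10 30 40 50 70 80 90
  for (int i = 0; i < n; i++) {
    printf("%d ", arr[i]);
  }
  printf("\n");
  return 0;
}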

3. Time Complexity Analysis

The time complexity of an algorithm is a measure of how fast it runs as a function of the input size. It is usually expressed using big O notation, which gives an upper bound on the growth rate of the running time. For example, if an algorithm has a time complexity of O(n), the number of steps it performs grows at most proportionally to n, where n is the input size.

The time complexity of quick sort depends on how well the partitioning step divides the array into two subarrays of roughly equal size. If the partitioning is balanced, quick sort can achieve a time complexity of O(n log n), which is optimal for comparison-based sorting algorithms. However, if the partitioning is unbalanced, quick sort can degrade to a time complexity of O(n²), which is very inefficient.

Let's see how these two scenarios can happen and how they affect the performance of quick sort.

3.1. Best Case Scenario

The best case scenario for quick sort is when the partitioning step always divides the array into two subarrays of equal or nearly equal size. This means that the pivot element is always the median of the array, or close to it. In this case, the recursive calls will have a logarithmic depth, and each level of recursion will have a linear amount of work. Therefore, the total time complexity will be O(n log n), where n is the size of the array.
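
One way to make this concrete is the recurrence for the running time, under the idealized assumption that the pivot always splits the subarray exactly in half and that partitioning n elements costs about c*n steps (c is just a constant):

T(n) = 2*T(n/2) + c*n
     = 4*T(n/4) + 2*c*n
     = 8*T(n/8) + 3*c*n
     = ...
     = n*T(1) + c*n*log2(n)
     = O(n log n)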

To illustrate this, let's consider an example of an array of 16 elements that is sorted using quick sort in the best case scenario. We will assume that the pivot element is always the median of the subarray, and that the partitioning step takes O(n) time.

// The array to be sorted
int arr[] = {15, 9, 8, 1, 4, 11, 7, 12, 13, 6, 5, 3, 16, 2, 10, 14};

// The quick sort function
void quick_sort(int arr[], int start, int end) {
  // The base case is when the subarray has one or zero elements
  if (start >= end) {
    return;
  }
  // The partitioning step returns the partition index
  int partition_index = partition(arr, start, end);
  // The recursive calls sort the left and right subarrays
  quick_sort(arr, start, partition_index - 1);
  quick_sort(arr, partition_index + 1, end);
}

// The initial call to the quick sort function
quick_sort(arr, 0, 15);

// The recursive calls tree is as follows:

// quick_sort(arr, 0, 15) -> partition(arr, 0, 15) -> partition_index = 7
//   quick_sort(arr, 0, 6) -> partition(arr, 0, 6) -> partition_index = 3
//     quick_sort(arr, 0, 2) -> partition(arr, 0, 2) -> partition_index = 1
//       quick_sort(arr, 0, 0) -> base case, return
//       quick_sort(arr, 2, 2) -> base case, return
//     quick_sort(arr, 4, 6) -> partition(arr, 4, 6) -> partition_index = 5
//       quick_sort(arr, 4, 4) -> base case, return
//       quick_sort(arr, 6, 6) -> base case, return
//   quick_sort(arr, 8, 15) -> partition(arr, 8, 15) -> partition_index = 11
//     quick_sort(arr, 8, 10) -> partition(arr, 8, 10) -> partition_index = 9
//       quick_sort(arr, 8, 8) -> base case, return
//       quick_sort(arr, 10, 10) -> base case, return
//     quick_sort(arr, 12, 15) -> partition(arr, 12, 15) -> partition_index = 13
//       quick_sort(arr, 12, 12) -> base case, return
//       quick_sort(arr, 14, 15) -> partition(arr, 14, 15) -> partition_index = 14
//         quick_sort(arr, 14, 13) -> base case, return
//         quick_sort(arr, 15, 15) -> base case, return

// The array after sorting is
// {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}

As you can see, the recursive call tree has a depth of log2(16) = 4, and the calls on each level partition at most 16 elements in total. Therefore, the total work is on the order of 16 * 4 steps, which generalizes to O(n log n).

The best case scenario for quick sort is very desirable, as it makes the algorithm run as fast as possible. However, it is also very rare, as it depends on the choice of the pivot element and the distribution of the elements in the array. In the next section, we will see what happens when the partitioning step is not balanced and how it affects the time complexity of quick sort.

3.2. Worst Case Scenario

The worst case scenario for quick sort is when the partitioning step always divides the array into two subarrays of very unequal size. This means that the pivot element is always the smallest or the largest element of the subarray, or close to it. In this case, the recursion will be about n levels deep, and each level still does work proportional to the size of its subarray, so the total work adds up to a quadratic number of steps. Therefore, the total time complexity will be O(n²), where n is the size of the array.
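
Using the same kind of idealized accounting, but with the pivot always landing at one end of the subarray, the recurrence telescopes into a quadratic sum (again, c is just a constant for the per-element partitioning cost):

T(n) = T(n-1) + c*n
     = T(n-2) + c*(n-1) + c*n
     = ...
     = c*(1 + 2 + ... + n)
     = c*n*(n+1)/2
     = O(n²)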

To illustrate this, let's consider an example of an array of 16 elements that is sorted using quick sort in the worst case scenario. We will assume that the pivot element is always the first element of the subarray, and that the partitioning step takes O(n) time.

// The array to be sorted
int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};

// The quick sort function
void quick_sort(int arr[], int start, int end) {
  // The base case is when the subarray has one or zero elements
  if (start >= end) {
    return;
  }
  // The partitioning step returns the partition index
  int partition_index = partition(arr, start, end);
  // The recursive calls sort the left and right subarrays
  quick_sort(arr, start, partition_index - 1);
  quick_sort(arr, partition_index + 1, end);
}

// The initial call to the quick sort function
quick_sort(arr, 0, 15);

// The recursive calls tree is as follows:

// quick_sort(arr, 0, 15) -> partition(arr, 0, 15) -> partition_index = 0
//   quick_sort(arr, 0, -1) -> base case, return
//   quick_sort(arr, 1, 15) -> partition(arr, 1, 15) -> partition_index = 1
//     quick_sort(arr, 1, 0) -> base case, return
//     quick_sort(arr, 2, 15) -> partition(arr, 2, 15) -> partition_index = 2
//       quick_sort(arr, 2, 1) -> base case, return
//       quick_sort(arr, 3, 15) -> partition(arr, 3, 15) -> partition_index = 3
//         quick_sort(arr, 3, 2) -> base case, return
//         quick_sort(arr, 4, 15) -> partition(arr, 4, 15) -> partition_index = 4
//           quick_sort(arr, 4, 3) -> base case, return
//           quick_sort(arr, 5, 15) -> partition(arr, 5, 15) -> partition_index = 5
//             quick_sort(arr, 5, 4) -> base case, return
//             quick_sort(arr, 6, 15) -> partition(arr, 6, 15) -> partition_index = 6
//               quick_sort(arr, 6, 5) -> base case, return
//               quick_sort(arr, 7, 15) -> partition(arr, 7, 15) -> partition_index = 7
//                 quick_sort(arr, 7, 6) -> base case, return
//                 quick_sort(arr, 8, 15) -> partition(arr, 8, 15) -> partition_index = 8
//                   quick_sort(arr, 8, 7) -> base case, return
//                   quick_sort(arr, 9, 15) -> partition(arr, 9, 15) -> partition_index = 9
//                     quick_sort(arr, 9, 8) -> base case, return
//                     quick_sort(arr, 10, 15) -> partition(arr, 10, 15) -> partition_index = 10
//                       quick_sort(arr, 10, 9) -> base case, return
//                       quick_sort(arr, 11, 15) -> partition(arr, 11, 15) -> partition_index = 11
//                         quick_sort(arr, 11, 10) -> base case, return
//                         quick_sort(arr, 12, 15) -> partition(arr, 12, 15) -> partition_index = 12
//                           quick_sort(arr, 12, 11) -> base case, return
//                           quick_sort(arr, 13, 15) -> partition(arr, 13, 15) -> partition_index = 13
//                             quick_sort(arr, 13, 12) -> base case, return
//                             quick_sort(arr, 14, 15) -> partition(arr, 14, 15) -> partition_index = 14
//                               quick_sort(arr, 14, 13) -> base case, return
//                               quick_sort(arr, 15, 15) -> base case, return

// The array after sorting is
// {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}

As you can see, the recursive call tree has a depth of 16, and the subarray being partitioned shrinks by only one element at each level. The total work is therefore 16 + 15 + ... + 1 = 16 * 17 / 2 steps, which generalizes to n * (n + 1) / 2 = O(n²).

The worst case scenario for quick sort is very undesirable, as it makes the algorithm as slow as simple quadratic sorts. Unfortunately, with a first-element pivot it is not as rare as it might seem: an array that is already sorted, or nearly sorted, triggers exactly this behaviour, and such inputs are common in practice. For this reason, practical implementations usually choose the pivot differently, for example by picking a random element or the median of three elements, as sketched below. In the next section, we will see how quick sort behaves on typical, unordered input and how that affects its time complexity.
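
As a concrete illustration of the random-pivot idea, here is a minimal sketch. The name randomized_partition is a hypothetical helper of our own, not part of the code shown so far; it moves a randomly chosen element to the front of the subarray and then reuses the partition function from Section 2.1, so quick_sort only needs to call randomized_partition instead of partition.

#include <stdlib.h>

// Hypothetical helper: pick a random pivot, move it to the front,
// then reuse the partition function from Section 2.1.
// (Seed the generator once with srand() at program start.)
int randomized_partition(int arr[], int start, int end) {
  int random_index = start + rand() % (end - start + 1);

  // Move the chosen element into the pivot position
  int temp = arr[start];
  arr[start] = arr[random_index];
  arr[random_index] = temp;

  return partition(arr, start, end);
}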

3.3. Average Case Scenario

The average case scenario for quick sort is when the partitioning step divides the array into two subarrays of reasonable size, but not necessarily equal. This means that the pivot element is somewhere in the middle of the array, but not exactly the median. In this case, the recursive calls will have a logarithmic depth, and each level of recursion will have a linear amount of work. Therefore, the total time complexity will be O(n log n), where n is the size of the array.
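
For the mathematically inclined, the standard way to make this precise is the expected-cost recurrence, in which every split position is equally likely:

T(n) = c*n + (1/n) * (T(0) + T(n-1) + T(1) + T(n-2) + ... + T(n-1) + T(0))

Its solution is roughly 2*n*ln(n), or about 1.39*n*log2(n) comparisons, so the average case is only about 39% more expensive than the perfectly balanced best case and is still O(n log n).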

To illustrate this, let's consider an example of an array of 16 elements that is sorted using quick sort in an average case scenario. The recursion tree below is a schematic illustration of moderately unbalanced splits rather than an exact trace of the partition function, and we will assume that each partitioning step takes O(n) time.

// The array to be sorted
int arr[] = {10, 80, 30, 90, 40, 50, 70, 15, 9, 8, 1, 4, 11, 7, 12, 13};

// The quick sort function
void quick_sort(int arr[], int start, int end) {
  // The base case is when the subarray has one or zero elements
  if (start >= end) {
    return;
  }
  // The partitioning step returns the partition index
  int partition_index = partition(arr, start, end);
  // The recursive calls sort the left and right subarrays
  quick_sort(arr, start, partition_index - 1);
  quick_sort(arr, partition_index + 1, end);
}

// The initial call to the quick sort function
quick_sort(arr, 0, 15);

// The recursive calls tree is as follows:

// quick_sort(arr, 0, 15) -> partition(arr, 0, 15) -> partition_index = 4
//   quick_sort(arr, 0, 3) -> partition(arr, 0, 3) -> partition_index = 2
//     quick_sort(arr, 0, 1) -> partition(arr, 0, 1) -> partition_index = 0
//       quick_sort(arr, 0, -1) -> base case, return
//       quick_sort(arr, 1, 1) -> base case, return
//     quick_sort(arr, 3, 3) -> base case, return
//   quick_sort(arr, 5, 15) -> partition(arr, 5, 15) -> partition_index = 10
//     quick_sort(arr, 5, 9) -> partition(arr, 5, 9) -> partition_index = 7
//       quick_sort(arr, 5, 6) -> partition(arr, 5, 6) -> partition_index = 5
//         quick_sort(arr, 5, 4) -> base case, return
//         quick_sort(arr, 6, 6) -> base case, return
//       quick_sort(arr, 8, 9) -> partition(arr, 8, 9) -> partition_index = 8
//         quick_sort(arr, 8, 7) -> base case, return
//         quick_sort(arr, 9, 9) -> base case, return
//     quick_sort(arr, 11, 15) -> partition(arr, 11, 15) -> partition_index = 13
//       quick_sort(arr, 11, 12) -> partition(arr, 11, 12) -> partition_index = 11
//         quick_sort(arr, 11, 10) -> base case, return
//         quick_sort(arr, 12, 12) -> base case, return
//       quick_sort(arr, 14, 15) -> partition(arr, 14, 15) -> partition_index = 14
//         quick_sort(arr, 14, 13) -> base case, return
//         quick_sort(arr, 15, 15) -> base case, return

// The array after sorting is
// {1, 4, 7, 8, 9, 10, 11, 12, 13, 15, 30, 40, 50, 70, 80, 90}

As you can see, the recursive call tree has a depth of about log2(16) = 4, and the calls on each level partition at most 16 elements in total. Therefore, the total work is still on the order of 16 * 4 steps, which generalizes to O(n log n).

The average case scenario for quick sort is the most common and realistic one, as it reflects the typical distribution of the elements in the array. It also shows that quick sort can achieve a good performance in most cases, as long as the partitioning step is not too unbalanced. In the next section, we will see how the space complexity of quick sort is affected by the recursive calls and the partitioning scheme.

4. Space Complexity Analysis

The space complexity of an algorithm is a measure of how much extra memory it uses as a function of the input size. It is also expressed using big O notation, which gives an upper bound on the growth rate of the memory usage. For example, if an algorithm has a space complexity of O(n), the amount of extra memory it uses grows at most proportionally to n, where n is the input size.

The space complexity of quick sort is determined almost entirely by the recursion, because the partition scheme shown in this blog rearranges the elements in place and needs only a constant amount of extra memory. Each recursive call pushes a stack frame that stores the subarray boundaries, so the extra memory is proportional to the depth of the recursion: O(log n) in the best and average cases, when the splits are roughly balanced, and O(n) in the worst case, when every split is maximally unbalanced. The worst-case stack growth can be avoided by always recursing into the smaller subarray first and handling the larger one iteratively, which caps the recursion depth at O(log n), as sketched below.
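
The sketch below shows this technique. The function name quick_sort_small_stack is our own label for this variant, and it assumes the partition function from Section 2.1.

// Recurse only into the smaller subarray and loop over the larger one,
// so the call stack never grows beyond about log2(n) frames.
// Assumes the partition function from Section 2.1.
void quick_sort_small_stack(int arr[], int start, int end) {
  while (start < end) {
    int partition_index = partition(arr, start, end);
    if (partition_index - start < end - partition_index) {
      // Left part is smaller: sort it recursively, then continue with the right part
      quick_sort_small_stack(arr, start, partition_index - 1);
      start = partition_index + 1;
    } else {
      // Right part is smaller: sort it recursively, then continue with the left part
      quick_sort_small_stack(arr, partition_index + 1, end);
      end = partition_index - 1;
    }
  }
}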

5. Conclusion

In this blog, you have learned how to implement quick sort in C and analyze its time and space complexity in different scenarios. You have seen how the choice of the pivot element and the partitioning scheme affect the performance of the algorithm, and how simple changes, such as picking the pivot at random or limiting the recursion depth, make it more robust.

Quick sort is a powerful and versatile sorting algorithm that sorts in place and runs in O(n log n) time on average. However, it also has some drawbacks: it is not stable, its performance is sensitive to the pivot choice and the input order, and its worst-case time complexity is quadratic. Therefore, it is important to understand these trade-offs and limitations before using quick sort in your applications.

We hope that this blog has helped you gain a deeper understanding of quick sort and its properties. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy coding!
