
Quicksort in Java: A Step-by-Step Guide
Introduction
What is sorting and why is it important?
Sorting is the process of arranging elements in a specific order, usually ascending or descending. It plays an important role in computer science, as many algorithms and data structures rely on sorted data to work efficiently. Common applications of sorting include:
- Search algorithms (binary search only works with sorted data)
- Data retrieval and indexing in databases
- Optimization of numerical calculations
- Improving efficiency in applications such as scheduling and ranking
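For example, binary search only works because its input is sorted. A minimal sketch using the standard library (the class name SortedSearchDemo is just for illustration):

```java
import java.util.Arrays;

public class SortedSearchDemo {
    public static void main(String[] args) {
        int[] data = {7, 2, 9, 4, 1};
        Arrays.sort(data); // binary search requires sorted input: {1, 2, 4, 7, 9}
        int index = Arrays.binarySearch(data, 4); // O(log n) lookup
        System.out.println(index); // prints 2
    }
}
```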
What is Quicksort?
Quicksort is one of the most efficient sorting algorithms, often used for sorting large amounts of data. It follows the divide-and-conquer strategy, in which a problem is broken down into smaller sub-problems and solved recursively.
The core idea of Quicksort is:
- Select a pivot element from the array.
- Split the array so that all elements that are smaller than the pivot element move to the left and all elements that are larger move to the right.
- Apply the same logic recursively to the left and right subarrays.
Why Quicksort?
Quicksort is preferred over many other sorting algorithms because:
Fast performance
- It has an average time complexity of O(n log n), making it faster than many simple sorting algorithms such as bubble sort and selection sort.
- The partitioning step ensures that the elements are placed correctly in fewer swaps than many other sorting algorithms.
In-place sorting
- Unlike Merge Sort, Quicksort does not require additional memory for merging as it sorts the array in-place.
- It is space efficient with an additional space complexity of O(log n) due to recursive calls.
Versatility
- Quicksort is well suited to real-world applications such as database indexing and language libraries (e.g. Java’s Arrays.sort() uses a dual-pivot variant of Quicksort for primitive types).
- It is adaptable and can be optimized with different pivot selection strategies.
Disadvantages of Quicksort
Quicksort is a powerful sorting algorithm, but it also has some disadvantages:
Worst case performance
- In the worst case, if the pivot is always the smallest or largest element, the complexity of quicksort degrades to O(n²).
- This happens when sorting an already sorted or almost sorted array with a bad pivot choice.
Recursive overhead
- Since quicksort is a recursive algorithm, a stack overflow can occur with large data sets due to the deep recursion.
- Iterative implementations can help to alleviate this problem.
How Quicksort works
The divide-and-conquer approach
Quicksort follows the divide-and-conquer strategy, where a problem is broken down into smaller sub-problems, solved independently and then combined to get the final sorted result.
The individual steps in Quicksort are:
- Divide: Choose a pivot element and partition the array so that elements smaller than the pivot go to the left and elements larger than the pivot go to the right.
- Conquer: Apply Quicksort recursively to the left and right subarrays.
- Combine: Since partitioning already places the elements in their correct positions, no additional merging is necessary.
Understanding the partitioning process
Partitioning is the most important step in Quicksort. It ensures that:
- Elements smaller than the pivot are moved to the left.
- Elements larger than the pivot are moved to the right.
- The pivot itself is set to the correct position in the sorted array.
Example of partitioning
Consider the following array:
[8, 3, 1, 7, 0, 10, 2]
Using 7 as the pivot:
- Elements smaller than 7: [3, 1, 0, 2]
- Pivot element: 7
- Elements greater than 7: [8, 10]
After partitioning, the array is reordered as follows:
[3, 1, 0, 2, 7, 8, 10]
Now Quicksort is applied recursively to [3, 1, 0, 2] and [8, 10].
Recursive application of Quicksort
After partitioning, Quicksort processes the left and right subarrays independently:
- Left subarray [3, 1, 0, 2]: select a pivot (e.g. 1), partition the subarray and sort it recursively.
- Right subarray [8, 10]: as it is already sorted, no further changes are necessary.
The process continues until all subarrays contain a single element or no more elements, then the array is completely sorted.
Handling edge cases
Sorted or almost sorted arrays
If the array is already sorted or almost sorted, the default behavior of Quicksort may result in poor performance (O(n²)). Using a random pivot selection or a median-of-three pivot selection can improve efficiency.
Arrays with many duplicates
If an array contains many duplicate elements, standard Quicksort may process the same values repeatedly. Three-way partitioning (treating elements smaller than, equal to and larger than the pivot separately) can optimize the sorting process.
Selection of the pivot element
Meaning of the pivot element
The pivot element plays a decisive role in the efficiency of Quicksort. A well-chosen pivot divides the array into two almost equal halves, resulting in an average time complexity of O(n log n). However, a poorly chosen pivot can lead to very unbalanced partitions, which in the worst case degrades the performance to O(n²).
Strategies for pivot selection
First element as pivot
One of the simplest strategies is to always select the first element of the array as the pivot.
- Advantages:
- Easy to implement.
- Works well if the array is randomly distributed.
- Disadvantages:
- Poor performance for already sorted or reverse sorted arrays, leading to O(n²) performance in the worst case.
- Can lead to very unbalanced partitions.
Last element as pivot
Similar to the strategy for the first element, this approach selects the last element as the pivot.
- Advantages:
- Easy to implement.
- Works well in randomly distributed arrays.
- Disadvantages:
- Has the same disadvantages as the first element strategy and results in poor performance on sorted arrays.
Random element as pivot
A more effective approach is to randomly select a pivot element from the array.
- Advantages:
- Reduces the probability of encountering the worst-case scenario.
- Works well for most input distributions.
- Disadvantages:
- Requires additional logic to randomly select an element.
- Does not guarantee avoiding the worst case, although it makes it extremely unlikely.
Median-of-three pivot selection
In this method, the pivot is selected as the median of three elements:
- The first element.
- The last element.
- The middle element.
This strategy tends to produce better-balanced partitions than a fixed pivot choice.
- Advantages:
- Reduces the likelihood of worst-case performance.
- More balanced partitions improve efficiency.
- Disadvantages:
- Slightly more calculations are required to determine the median.
- Implementation is more complex than simply selecting the first or last element.
Effect of pivot selection on performance
Best-case scenario
If the pivot divides the array evenly into two nearly equal halves at each step, the algorithm runs in O(n log n) time.
Worst-case scenario
If the pivot always leads to very unbalanced partitions, quicksort degrades to O(n²). This happens when:
- The array is already sorted and the first or last element is used as the pivot.
- The pivot is always the smallest or largest element in the subarray.
Optimized pivot selection
To avoid the worst case, many implementations use:
- Randomized Quicksort to shuffle the array before sorting.
- Hybrid approaches such as switching to insertion sort for small subarrays.
- Three-way partitioning for arrays with many duplicate values.
Implementing Quicksort in Java
Recursive approach to Quicksort
Quicksort is typically implemented recursively. The algorithm follows these steps:
- Select a pivot element.
- Partition the array around the pivot.
- Apply Quicksort recursively to the left and right subarrays.
Below you can see the basic recursive implementation of Quicksort in Java.
Java implementation
public class QuickSort {
    public static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int pivotIndex = partition(arr, low, high);
            quickSort(arr, low, pivotIndex - 1);  // Sort left subarray
            quickSort(arr, pivotIndex + 1, high); // Sort right subarray
        }
    }

    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[high]; // Choosing last element as pivot
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);
        return i + 1; // Return pivot index
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {8, 3, 1, 7, 0, 10, 2};
        quickSort(arr, 0, arr.length - 1);
        System.out.println("Sorted Array: " + java.util.Arrays.toString(arr));
    }
}
Explanation of the code
Base case
The recursive call quickSort(arr, low, high) only runs while low < high, which ensures that sorting stops once a subarray contains at most one element.
Partitioning
The partition method:
- Selects the last element as the pivot.
- Moves all smaller elements to its left.
- Moves all larger elements to its right.
- Places the pivot in its correct position and returns its index.
Swapping elements
The helper method swap exchanges two elements when required.
Execution example
Input
[8, 3, 1, 7, 0, 10, 2]
Step-by-step execution
- Pivot = 2 (last element)
- Partitioning rearranges the elements: [1, 0, 2, 7, 3, 10, 8]
- Quicksort is applied recursively to [1, 0] and [7, 3, 10, 8]
- The process is repeated until the array is completely sorted.
Output
Sorted array: [0, 1, 2, 3, 7, 8, 10]
Edge cases to consider
Sorted or reverse-sorted arrays
If the input is already sorted ([1, 2, 3, 4, 5]) or reverse sorted ([5, 4, 3, 2, 1]), Quicksort can perform poorly with a naive pivot choice (e.g. always selecting the first or last element).
Array with duplicates
For an array like [4, 4, 4, 4, 4], the default implementation of Quicksort may make unnecessary recursive calls. Three-way partitioning can optimize performance in such cases.
Small arrays
For very small arrays (e.g. with a length ≤ 10), switching to Insertion Sort instead of Quicksort can improve performance, as Insertion Sort has a lower overhead.
Understanding the partitioning logic
What is partitioning in Quicksort?
Partitioning rearranges the elements in an array like this:
- Elements smaller than the pivot are moved to the left.
- Elements that are larger than the pivot are moved to the right.
- The pivot point itself is placed in its correct position in the sorted array.
This step ensures that the pivot is in its final sorted position after partitioning, so that Quicksort can recursively sort the left and right subarrays independently.
Types of partitioning schemes
Lomuto partitioning scheme
The Lomuto partitioning scheme is simple and easy to implement, but can perform additional swaps. It selects a pivot (usually the last element) and partitions the array in a single pass.
Algorithm steps
- Choose the last element as the pivot.
- Keep an index i marking the boundary of elements smaller than the pivot.
- Iterate through the array, swapping elements smaller than the pivot into the region up to i.
- Finally, swap the pivot with the first larger element to put it in its correct place.
Java implementation
private static int partitionLomuto(int[] arr, int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr, i, j);
        }
    }
    swap(arr, i + 1, high);
    return i + 1;
}
Execution example
Input: [8, 3, 1, 7, 0, 10, 2] (pivot = 2)
Partitioned output: [1, 0, 2, 7, 3, 10, 8] (the pivot 2 ends up at index 2)
Advantages
- Easy to implement.
- Works well if additional swaps are not a problem.
Disadvantages
- Performs O(n) swaps in the worst case, which increases execution time.
- Not the most efficient solution for large datasets.
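The Lomuto scheme can be sanity-checked by running the partition step on the sample array. A self-contained sketch (the class name LomutoDemo is just for illustration):

```java
public class LomutoDemo {
    public static int partitionLomuto(int[] arr, int low, int high) {
        int pivot = arr[high]; // last element as pivot
        int i = low - 1;       // boundary of the "smaller than pivot" region
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high); // place the pivot in its final position
        return i + 1;
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {8, 3, 1, 7, 0, 10, 2};
        int p = partitionLomuto(arr, 0, arr.length - 1);
        // pivot 2 lands at index 2: [1, 0, 2, 7, 3, 10, 8]
        System.out.println(p + " " + java.util.Arrays.toString(arr));
    }
}
```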
Hoare partition scheme
The Hoare partition scheme is more efficient as it minimizes the swaps. It uses two pointers that start at the two ends and move towards the middle.
Algorithm steps
- Choose a pivot point (usually the first or middle element).
- Use two pointers:
- One moves from left to right, looking for elements larger than the pivot.
- The other moves from right to left and searches for elements that are smaller than the pivot.
- Swap the elements that are on the wrong side.
- Repeat the process until the pointers intersect.
Java implementation
private static int partitionHoare(int[] arr, int low, int high) {
    int pivot = arr[low];
    int left = low - 1, right = high + 1;
    while (true) {
        do {
            left++;
        } while (arr[left] < pivot);
        do {
            right--;
        } while (arr[right] > pivot);
        if (left >= right)
            return right;
        swap(arr, left, right);
    }
}
Execution example
Input: [8, 3, 1, 7, 0, 10, 2] (pivot = 8)
Partitioned output: [2, 3, 1, 7, 0, 10, 8] (the method returns index 4; all elements up to index 4 are ≤ 8, all elements after it are ≥ 8)
Advantages
- More efficient, as fewer swaps are performed.
- Works well with large data sets.
Disadvantages
- Slightly more difficult to implement compared to Lomuto.
- The recursive calls must be bounded differently than with Lomuto, since the returned index is not the pivot’s final position.
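A self-contained sketch showing the correct recursion bounds for the Hoare scheme (sort arr[low..p] and arr[p+1..high], not arr[low..p-1]; the class name HoareQuickSort is illustrative):

```java
public class HoareQuickSort {
    public static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int p = partitionHoare(arr, low, high);
            quickSort(arr, low, p);      // note: p, not p - 1
            quickSort(arr, p + 1, high);
        }
    }

    private static int partitionHoare(int[] arr, int low, int high) {
        int pivot = arr[low]; // first element as pivot (required for these bounds to terminate)
        int left = low - 1, right = high + 1;
        while (true) {
            do { left++; } while (arr[left] < pivot);
            do { right--; } while (arr[right] > pivot);
            if (left >= right) return right;
            swap(arr, left, right);
        }
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {8, 3, 1, 7, 0, 10, 2};
        quickSort(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr)); // prints [0, 1, 2, 3, 7, 8, 10]
    }
}
```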
Choosing the right partitioning scheme
- Lomuto is simpler and works well when swap operations are not costly.
- Hoare is more efficient as it reduces the number of swaps.
- For arrays with many duplicates, Three-way partitioning can be used to separate elements that are smaller, equal and larger than the pivot.
Edge Cases in Partitioning
Already sorted or reverse-sorted arrays
If the input array is already sorted ([1, 2, 3, 4, 5]) or reverse sorted ([5, 4, 3, 2, 1]), partitioning can produce very unbalanced splits that hurt Quicksort’s performance.
Arrays with many duplicates
For arrays like [4, 4, 4, 4, 4], traditional partitioning can lead to unnecessary recursive calls. Three-way partitioning optimizes sorting by handling duplicates separately.
Small arrays
For small datasets, Quicksort’s partitioning overhead may not be justified. To improve performance, Insertion Sort can be used instead.
Iterative implementation of Quicksort
Why an iterative approach?
Quicksort is naturally implemented with recursion, but in some cases an iterative implementation is preferable:
- Avoids stack overflow: Recursive quicksort can lead to stack overflow errors with large datasets due to deep recursion.
- Better control over recursion depth: The iterative approach replaces recursion with an explicit stack, which reduces memory usage.
- More efficient memory usage: Recursive calls require additional memory, while an iterative version can manage stack usage efficiently.
How iterative quicksort works
The iterative version of quicksort uses an explicit stack instead of function calls to keep track of the subarrays to be sorted.
Steps:
- Create a stack to store the indices of the subarrays.
- Push the initial range (low, high) onto the stack.
- While the stack is not empty:
- Pop a range (low, high).
- Partition the subarray and determine the pivot index.
- Push the left and right subranges (if they contain more than one element) onto the stack.
- Repeat until all subarrays have been processed.
Iterative quicksort implementation in Java
import java.util.Stack;

public class IterativeQuickSort {
    public static void quickSort(int[] arr) {
        Stack<int[]> stack = new Stack<>();
        stack.push(new int[]{0, arr.length - 1});
        while (!stack.isEmpty()) {
            int[] range = stack.pop();
            int low = range[0], high = range[1];
            if (low < high) {
                int pivotIndex = partition(arr, low, high);
                // Push right subarray
                if (pivotIndex + 1 < high) {
                    stack.push(new int[]{pivotIndex + 1, high});
                }
                // Push left subarray
                if (low < pivotIndex - 1) {
                    stack.push(new int[]{low, pivotIndex - 1});
                }
            }
        }
    }

    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);
        return i + 1;
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {8, 3, 1, 7, 0, 10, 2};
        quickSort(arr);
        System.out.println("Sorted Array: " + java.util.Arrays.toString(arr));
    }
}
Explanation of the code
Stack-based iteration
- Instead of recursive calls, a stack keeps track of the (low, high) indices of the subarrays.
- The stack stores the subarrays that still need to be sorted.
- The while loop processes each subarray iteratively.
Partitioning logic
- The partition method works in the same way as in the recursive version.
- It places the pivot in its correct position and returns the pivot index.
- The left and right subarrays are pushed onto the stack for further sorting.
Swapping elements
- The swap method exchanges elements during partitioning.
Execution example
Input
[8, 3, 1, 7, 0, 10, 2]
Step-by-step execution
- Push (0, 6) onto the stack.
- Partition with pivot 2, resulting in [1, 0, 2, 7, 3, 10, 8].
- Push the right partition (3, 6), then the left partition (0, 1).
- Process (0, 1), then process (3, 6).
- Continue until the stack is empty and the array is completely sorted.
Output
Sorted array: [0, 1, 2, 3, 7, 8, 10]
Edge cases with iterative Quicksort
Large datasets
With extremely large datasets, recursion depth can become a problem. The iterative approach avoids call-stack overflow because the subarray ranges are kept on an explicit, heap-allocated stack.
Already sorted or reverse-sorted arrays
Since Quicksort performs poorly on already sorted or reverse-sorted data with a poor pivot choice, a random pivot strategy can improve performance.
Small subarrays
For very small subarrays, switching to insertion sort instead of quicksort can reduce overhead and improve efficiency.
Time and space complexity analysis
Time complexity of Quicksort
The time complexity of quicksort depends on how the pivot divides the array. The efficiency is analyzed in best-case, average-case and worst-case scenarios.
Best-case time complexity
- The best case occurs when the pivot always splits the array into two equal halves.
- This leads to a balanced recursion tree that minimizes the number of recursive calls.
- The recurrence relation for this case is
T(n) = 2T(n/2) + O(n)
- Solving this recurrence using the master theorem, we obtain
T(n) = O(n log n)
- This scenario ensures optimal performance and makes Quicksort faster than many other sorting algorithms.
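The master-theorem step can be written out explicitly:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + cn
\quad\Rightarrow\quad a = 2,\; b = 2,\; f(n) = cn = \Theta\!\left(n^{\log_b a}\right) = \Theta(n)
\quad\Rightarrow\quad T(n) = \Theta(n \log n) \quad \text{(master theorem, case 2)}
```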
Average-case time complexity
- On average, the pivot divides the array into unequal, but not extremely unequal, partitions.
- This still leads to O(n log n) complexity, as the recursion tree remains logarithmic in depth.
- The recurrence is similar to the best case, but accounts for slightly unbalanced partitions:
T(n) = T(n/3) + T(2n/3) + O(n)
- Solving this recurrence gives
T(n) = O(n log n)
- This is the expected performance for most randomly ordered arrays.
Worst-case time complexity
- The worst case occurs when the pivot is always the smallest or largest element, producing very unbalanced partitions.
- Instead of splitting the array in half, one partition has n-1 elements and the other has 0.
- The recurrence relation for this case is:
T(n) = T(n - 1) + O(n)
- Expanding this gives
T(n) = O(n²)
- This happens when:
- The array is already sorted (in ascending or descending order).
- A bad pivot selection strategy (e.g. always selecting the first or last element) is used.
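Expanding the worst-case recurrence term by term shows where the quadratic bound comes from:

```latex
T(n) = T(n-1) + cn = T(n-2) + c(n-1) + cn = \cdots
     = c\sum_{k=1}^{n} k = c\,\frac{n(n+1)}{2} = O(n^2)
```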
Space complexity of Quicksort
Quicksort is an in-place sorting algorithm, which means it does not require additional memory for merging like Merge Sort. However, it does require additional memory for recursion.
Best-case space complexity
- In the best case, Quicksort makes logarithmically many recursive calls before reaching the base cases.
- Since each recursive call requires O(1) auxiliary memory, the total memory usage is O(log n).
- This is much better than the O(n) space complexity of Merge Sort.
Worst-case space complexity
- In the worst case (very unbalanced partitions), Quicksort makes O(n) recursive calls, leading to a space usage of O(n).
- This happens when the pivot is always the smallest or largest element.
- This space complexity can be improved with an iterative approach.
Comparison of time and space complexity
Case | Time Complexity | Space Complexity |
---|---|---|
Best case | O(n log n) | O(log n) |
Average case | O(n log n) | O(log n) |
Worst case | O(n²) | O(n) |
Improvement in complexity
Preventing O(n²) complexity in the worst case
- Use Randomized Quicksort to randomly select a pivot instead of the first or last element.
- Use Median-of-Three Pivot Selection to avoid bad pivot selections.
- Implement Hybrid Sorting and switch to Insertion Sort for small subarrays (typically with a size ≤ 10).
Reduction of the spatial complexity
- Use Tail Recursion Elimination by always sorting the smaller subarray first and thus reducing the recursion depth.
- Use Iterative Quicksort with an explicit stack to avoid recursive function calls.
Quicksort vs other sorting algorithms
Comparison of Quicksort with Merge Sort
Quicksort and Merge Sort are both divide-and-conquer sorting algorithms, but they have significant differences in terms of performance and memory usage.
Time complexity
Case | Quicksort | Merge Sort |
---|---|---|
Best Case | O(n log n) | O(n log n) |
Average case | O(n log n) | O(n log n) |
Worst case | O(n²) | O(n log n) |
- Quicksort is usually faster in practice due to better cache locality.
- Merge Sort guarantees a performance of O(n log n) even in the worst case, which makes it more predictable.
Space complexity
Algorithm | Space Complexity |
---|---|
Quicksort | O(log n) (best case), O(n) (worst case) |
Merge Sort | O(n) |
- Quicksort is in-place, which means it requires less additional memory.
- Merge Sort requires O(n) additional memory for merging, which makes it less memory-efficient for large arrays.
Stability
Algorithm | Stable? |
---|---|
Quicksort | No |
Merge Sort | Yes |
- Merge Sort is stable, i.e. the relative order of equal elements is preserved.
- Quicksort is not stable, as elements can be swapped between partitions.
Use cases
- Use Quicksort when:
- You need an in-place sorting algorithm with low memory usage.
- You are sorting large amounts of data where worst-case performance is unlikely.
- High speed is more important than stability.
- Use Merge Sort when:
- Stable sorting is required.
- You are sorting linked lists (Merge Sort works efficiently with linked structures).
- Quicksort’s worst-case O(n²) performance is a concern.
Quicksort vs. Heap Sort
Heap Sort is another O(n log n) sorting algorithm, but it differs significantly from Quicksort.
Time complexity
Case | Quicksort | Heap Sort |
---|---|---|
Best Case | O(n log n) | O(n log n) |
Average case | O(n log n) | O(n log n) |
Worst case | O(n²) | O(n log n) |
- Heap Sort guarantees O(n log n) performance, unlike Quicksort.
- Quicksort is usually faster in practice due to cache efficiency.
Space complexity
Algorithm | Space Complexity |
---|---|
Quicksort | O(log n) (best case), O(n) (worst case) |
Heap Sort | O(1) |
- Heap Sort is truly in-place: it needs only O(1) auxiliary space.
- Quicksort can take O(n) space in the worst case, but iterative quicksort reduces this.
Stability
Algorithm | Stable? |
---|---|
Quicksort | No |
Heap Sort | No |
- Both algorithms are not stable by default.
Use cases
- Use Quicksort when:
- You need fast, universal sorting with minimal additional memory.
- Cache performance is a priority.
- Use Heap Sort when:
- Guaranteed O(n log n) time complexity is required.
- Predictable performance matters, e.g. in real-time systems or priority-queue-based applications.
Quicksort vs. bubble sort
Bubble Sort is a simple sorting algorithm, but it is significantly slower than Quicksort.
Time complexity
Case | Quicksort | Bubble Sort |
---|---|---|
Best Case | O(n log n) | O(n) |
Average case | O(n log n) | O(n²) |
Worst case | O(n²) | O(n²) |
- Quicksort is much faster than Bubble Sort in all cases.
- Bubble Sort is only efficient for almost sorted arrays.
Space complexity
Algorithm | Space Complexity |
---|---|
Quicksort | O(log n) (best case), O(n) (worst case) |
Bubble Sort | O(1) |
- Bubble Sort is truly in-place and requires no additional space.
Stability
Algorithm | Stable? |
---|---|
Quicksort | No |
Bubble Sort | Yes |
- Bubble Sort is stable, i.e. the relative order of equal elements is preserved.
Use cases
- Use Quicksort when:
- You want to sort large datasets efficiently.
- Performance matters.
- Use Bubble Sort when:
- The array is already almost sorted.
- Sorting must be easy to understand, e.g. for teaching purposes.
Optimize Quicksort for performance
Hybrid approaches for small arrays
While quicksort works well for large data sets, it has some overhead for small arrays. A common optimization is to switch to a different sorting algorithm when the subarray size becomes small.
Use Insertion Sort for small arrays
- Insertion Sort performs better than Quicksort for small arrays (n ≤ 10-20)
- Why? The constant factors in Quicksort’s recursion make it slower for small arrays.
- A hybrid approach is to use Quicksort for larger arrays and Insertion Sort for small subarrays.
Implementation in Java
private static void quickSort(int[] arr, int low, int high) {
    if (high - low <= 10) { // small subarray: switch to Insertion Sort
        insertionSort(arr, low, high);
        return;
    }
    int pivotIndex = partition(arr, low, high);
    quickSort(arr, low, pivotIndex - 1);
    quickSort(arr, pivotIndex + 1, high);
}
private static void insertionSort(int[] arr, int low, int high) {
    for (int i = low + 1; i <= high; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= low && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}
- Before recursing, check whether the subarray is small.
- If it is, switch to Insertion Sort for better efficiency.
Tail Recursion Optimization
What is Tail Recursion?
- Quicksort uses recursive calls, which in the worst case leads to O(n) recursion depth.
- Tail-recursion optimization reduces the recursion depth by eliminating one of the two recursive calls.
- Instead of making two recursive calls, always recurse on the smaller partition first and handle the larger partition iteratively.
Optimized implementation
private static void quickSort(int[] arr, int low, int high) {
    while (low < high) {
        int pivotIndex = partition(arr, low, high);
        if (pivotIndex - low < high - pivotIndex) {
            quickSort(arr, low, pivotIndex - 1); // recurse on the smaller left part
            low = pivotIndex + 1;                // iterate on the larger right part
        } else {
            quickSort(arr, pivotIndex + 1, high); // recurse on the smaller right part
            high = pivotIndex - 1;                // iterate on the larger left part
        }
    }
}
- Reduces the depth of the recursive call stack to O(log n), since each recursive call handles the smaller partition.
- Processes the larger subarray iteratively, making the function more memory efficient.
Avoidance of worst-case performance
Randomized Quicksort
- The worst case O(n²) happens when the pivot always leads to very unbalanced partitions.
- Solution: Choose a random pivot instead of the first or last element.
Java implementation
private static int randomizedPartition(int[] arr, int low, int high) {
    int randomIndex = low + (int) (Math.random() * (high - low + 1));
    swap(arr, randomIndex, high); // move the random pivot to the end
    return partition(arr, low, high);
}
- Random pivot selection avoids poor performance on already sorted arrays.
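Wired into the recursive driver, the whole thing looks like this (a self-contained sketch; the class name RandomizedQuickSort is illustrative):

```java
import java.util.Random;

public class RandomizedQuickSort {
    private static final Random RNG = new Random();

    public static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int pivotIndex = randomizedPartition(arr, low, high);
            quickSort(arr, low, pivotIndex - 1);
            quickSort(arr, pivotIndex + 1, high);
        }
    }

    private static int randomizedPartition(int[] arr, int low, int high) {
        int randomIndex = low + RNG.nextInt(high - low + 1);
        swap(arr, randomIndex, high);     // move the random pivot to the end
        return partition(arr, low, high); // then reuse the Lomuto partition
    }

    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);
        return i + 1;
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5, 6, 7, 8}; // already sorted: the worst case for a fixed pivot
        quickSort(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr)); // prints [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```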
Median-of-Three pivot selection
- Instead of choosing a random pivot, an efficient strategy is median-of-three selection.
- The pivot is selected as the median of the first, middle and last element.
Java implementation
private static int medianOfThree(int[] arr, int low, int high) {
    int mid = low + (high - low) / 2;
    if (arr[low] > arr[mid])
        swap(arr, low, mid);
    if (arr[low] > arr[high])
        swap(arr, low, high);
    if (arr[mid] > arr[high])
        swap(arr, mid, high);
    swap(arr, mid, high - 1); // stash the median just before the end
    return arr[high - 1];     // the median is used as the pivot
}
- Improves the balance of the partitioning, reducing the probability of O(n²) behavior in the worst case.
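One simple way to plug this idea into the earlier Lomuto partition is to move the median to index high (instead of high - 1 as in the code above), so the existing partition can be reused unchanged. A self-contained sketch under that assumption (the class name MedianOfThreeQuickSort is illustrative):

```java
public class MedianOfThreeQuickSort {
    public static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            medianToHigh(arr, low, high); // place the median-of-three at index high
            int pivotIndex = partition(arr, low, high);
            quickSort(arr, low, pivotIndex - 1);
            quickSort(arr, pivotIndex + 1, high);
        }
    }

    // Sort arr[low], arr[mid], arr[high] in place, then move the median to high
    private static void medianToHigh(int[] arr, int low, int high) {
        int mid = low + (high - low) / 2;
        if (arr[low] > arr[mid]) swap(arr, low, mid);
        if (arr[low] > arr[high]) swap(arr, low, high);
        if (arr[mid] > arr[high]) swap(arr, mid, high);
        swap(arr, mid, high); // the median becomes the pivot for the Lomuto partition
    }

    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);
        return i + 1;
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {5, 1, 4, 2, 8, 3, 7, 6};
        quickSort(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr)); // prints [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```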
Three-way partitioning for duplicate elements
Why Three-way partitioning?
- The standard quicksort method works poorly for arrays with many duplicates.
- Three-way partitioning splits the array into three parts:
- Elements less than the pivot.
- Elements that are equal to the pivot.
- Elements that are greater than the pivot.
Java implementation
private static void threeWayPartition(int[] arr, int low, int high) {
    if (low >= high) return;
    int pivot = arr[high];
    int i = low, lt = low, gt = high;
    while (i <= gt) {
        if (arr[i] < pivot) {
            swap(arr, i, lt);
            i++;
            lt++;
        } else if (arr[i] > pivot) {
            swap(arr, i, gt);
            gt--;
        } else {
            i++;
        }
    }
    threeWayPartition(arr, low, lt - 1);  // sort elements < pivot
    threeWayPartition(arr, gt + 1, high); // sort elements > pivot
}
- Reduces redundant comparisons and recursive calls when handling duplicate values.
- Works efficiently on arrays with repeated elements (e.g. [3, 3, 3, 4, 4, 5]).
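Called on a duplicate-heavy array, the method above sorts it fully, since it recurses only on the strictly-smaller and strictly-larger parts. A self-contained sketch (the class name ThreeWayQuickSort is illustrative):

```java
public class ThreeWayQuickSort {
    public static void threeWayPartition(int[] arr, int low, int high) {
        if (low >= high) return;
        int pivot = arr[high];
        int i = low, lt = low, gt = high;
        while (i <= gt) {
            if (arr[i] < pivot) {        // smaller: grow the left region
                swap(arr, i, lt);
                i++;
                lt++;
            } else if (arr[i] > pivot) { // larger: grow the right region
                swap(arr, i, gt);
                gt--;
            } else {                     // equal to pivot: leave in the middle
                i++;
            }
        }
        threeWayPartition(arr, low, lt - 1);  // sort elements < pivot
        threeWayPartition(arr, gt + 1, high); // sort elements > pivot
    }

    private static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {4, 3, 5, 3, 4, 3};
        threeWayPartition(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr)); // prints [3, 3, 3, 4, 4, 5]
    }
}
```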
Summary of key takeaways
Quicksort overview
- Quicksort is a divide-and-conquer sorting algorithm that partitions the array around a pivot point and sorts the subarrays recursively.
- It is an in-place sorting algorithm that is memory efficient compared to Merge Sort.
- The choice of pivot selection strategy significantly affects performance.
Summary of performance
Time complexity
Case | Time Complexity |
---|---|
Best case | O(n log n) |
Average case | O(n log n) |
Worst case | O(n²) |
- Best and average case occur when the partitions are balanced, resulting in O(n log n) complexity.
- The worst case occurs when the partitions are very unbalanced, resulting in a complexity of O(n²).
Space complexity
Scenario | Space Complexity |
---|---|
Best case | O(log n) |
Worst case | O(n) |
Iterative approach | O(log n) (explicit stack) |
- Recursive Quicksort requires O(log n) space in the best case due to the recursion depth.
- In the worst case, O(n) space is required, but this can be reduced by always pushing the larger subarray first so the explicit stack stays O(log n) deep.
When should you use quicksort?
Best use cases
- Sorting large data sets where speed is a priority.
- Sorting primitive data types (Java’s Arrays.sort() uses a variant of Quicksort).
- In-place sorting scenarios where additional memory usage should be minimized.
When you should avoid quicksort
- When stable sorting is required (Quicksort is not stable by default).
- Sorting linked lists where Merge Sort performs better due to its compatibility with linked structures.
- Worst-case scenarios (e.g. already sorted or almost sorted data), unless pivot selection is optimized.
Further learning and practicing
Understanding quicksort variants
- Dual-Pivot Quicksort: an optimized variant used in Java’s Arrays.sort() for primitive types.
- Parallel Quicksort: multiple threads are used for faster sorting in multi-core environments.
- Hybrid sort algorithms: Combine quicksort with insertion sort or heap sort for more efficiency.
Exercises
- Implement Quicksort with different selection strategies.
- Modify Quicksort to handle duplicate elements efficiently by using three-way partitioning.
- Compare Quicksort with Merge Sort and Heap Sort on different data sets.
Additional resources
- Java documentation: Arrays.sort() and its dual-pivot Quicksort implementation.
- Algorithm books: “Introduction to Algorithms” by Cormen et al. for an in-depth analysis of sorting algorithms.
- Online coding platforms: LeetCode, CodeChef and GeeksforGeeks for Quicksort exercises.

