Each partition reduces the problem size by half on average. Unlike Quicksort, which recurses into both partitions, Quickselect recurses into only one. The recurrence T(n) = T(n/2) + O(n) solves to O(n) by the master theorem.
If the pivot is always the minimum or maximum element, each partition reduces the problem size by only 1. The recurrence becomes T(n) = T(n-1) + O(n) → O(n²). This happens with sorted input and a fixed first-element pivot.
Naively sorting takes O(n log n) just to pick one element. Quickselect is asymptotically faster on average: it doesn't need the array fully sorted, just partitioned around the target rank.
With a deterministic first-element pivot, every partition on a sorted array reduces the problem by only 1 — causing O(n²). See how comparisons skyrocket vs the random case.
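The quadratic blow-up is easy to measure. Below is a minimal comparison-counting sketch with a deterministic first-element pivot (the names `select_first_pivot` and `comps` are illustrative, not from the original text). On the already-sorted input [1..n] with k = n, it performs exactly n(n-1)/2 comparisons.

```python
def select_first_pivot(arr, k):
    """k-th smallest (1-indexed) with a fixed first-element pivot.

    Returns (value, total_comparisons). Illustrative sketch only.
    """
    a = list(arr)
    lo, hi = 0, len(a) - 1
    comps = 0
    while True:
        # Deterministic pivot: first element of the current subarray,
        # moved to the end so a standard Lomuto partition applies.
        a[lo], a[hi] = a[hi], a[lo]
        p = lo
        for j in range(lo, hi):
            comps += 1
            if a[j] < a[hi]:
                a[p], a[j] = a[j], a[p]
                p += 1
        a[p], a[hi] = a[hi], a[p]
        if k - 1 == p:
            return a[p], comps
        if k - 1 < p:
            hi = p - 1
        else:
            lo = p + 1

value, comps = select_first_pivot(list(range(1, 101)), 100)
# On sorted [1..100] with k = 100: 99 + 98 + ... + 1 = 4950 comparisons
```

On sorted input every partition strips off a single element, so the comparison count grows as n(n-1)/2, i.e. O(n²), exactly as described above.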
Break into smaller subproblems, solve recursively, combine results. Examples: Mergesort, Quicksort, Binary Search.
Divide: Partition around pivot. Conquer: Recurse on one side only (containing rank k). No combine step needed!
After partitioning, the pivot lands at its final sorted position p. If k = p+1 → done. If k < p+1 → recurse LEFT. If k > p+1 → recurse RIGHT. The other half is discarded permanently.
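That decision loop can be sketched in Python as follows (a minimal sketch, assuming a Lomuto-style last-element partition and a 1-indexed k; the function name `quickselect` is an assumption):

```python
def quickselect(arr, k):
    """Return the k-th smallest (1-indexed) element of arr."""
    a = list(arr)
    lo, hi = 0, len(a) - 1
    while True:
        # Lomuto partition: pivot = last element of the current subarray
        pivot = a[hi]
        p = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[p], a[j] = a[j], a[p]
                p += 1
        a[p], a[hi] = a[hi], a[p]
        # Pivot now sits at index p, i.e. rank p + 1
        if k == p + 1:
            return a[p]        # done
        elif k < p + 1:
            hi = p - 1         # recurse LEFT; right half discarded
        else:
            lo = p + 1         # recurse RIGHT; left half discarded
```

Because the loop keeps absolute indices into the same array, k never needs adjusting; each iteration simply shrinks [lo..hi] to the half that must contain rank k.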
Greedy: No natural greedy choice for finding k-th smallest without sorting. Dynamic Programming: No overlapping subproblems; partitions are independent. D&C is the natural fit.
Recurses on ONE side. Discards the other. O(log n) average depth. Goal: find ONE element.
Recurses on BOTH sides. Must sort everything. O(n log n) average work. Goal: sort entire array.
Halves search space by value comparison. Requires sorted input. O(log n) total. Discards half based on value.
Reduces search space by rank. Works on unsorted input. O(n) per level. Discards half based on rank position.
Without partitioning-as-search, we inspect every element at every step → O(n²). The partition narrows where the k-th element can be without scanning both halves.
Binary search halves by value. Quickselect halves by rank. Both achieve O(log n) recursion levels on average, but Quickselect does O(n) work per level vs O(1) for binary search.
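For contrast, here is why binary search does only O(1) work per level: a single comparison against the middle value discards half of a sorted array (a standard sketch, not from the original text):

```python
def binary_search(sorted_a, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(sorted_a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_a[mid] == target:
            return mid
        if sorted_a[mid] < target:
            lo = mid + 1   # discard left half by VALUE comparison
        else:
            hi = mid - 1   # discard right half by VALUE comparison
    return -1
```

One comparison per level versus a full O(n) partition scan per level: that is the entire gap between O(log n) total for binary search and O(n) total for Quickselect.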
Given an unsorted array of n numbers, find the element that would be at position k if the array were sorted. This is called the k-th order statistic.
Partition rearranges the array so that all elements smaller than pivot go left, and all elements larger than pivot go right. The pivot ends up at its final sorted position.
After every partition, QuickSelect compares k to the pivot's position and decides where to go next. This is what makes it O(n) — it only ever follows one path.
Worked example: [7, 2, 5, 1, 8, 3], k = 3. Pivot = last element = 3. Search space = [0..5].
Elements smaller than the pivot: 2 (at j = 1) and 1 (at j = 3). Swap them to the left, then place pivot 3 at index 2.
Result: [2, 1, 3, 7, 5, 8] → pivot at index 2 = rank 3, which is exactly k.
Quicksort recurses on BOTH sides after the partition → O(n log n). Quickselect recurses on ONE side only → O(n) average. They use the exact same partition function; only the recursion strategy differs.
k=1 means the smallest (minimum). k=n means the largest (maximum). After partition, the pivot at 0-indexed position p has rank p+1. So if k=3, you're looking for the pivot at index 2.
When recursing right (k > pivotPos+1), you must adjust k relative to the new subarray: new k = k − pivotPos − 1. Forgetting this gives the wrong element.
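The adjustment is easiest to see in a slice-based variant, where each right-recursion works on a fresh subarray (a sketch for illustration; the name `quickselect_slices` is an assumption, and this version trades the in-place O(1) extra space for clarity):

```python
def quickselect_slices(a, k):
    """k-th smallest (1-indexed), written on slices to expose the k shift."""
    pivot = a[-1]
    left  = [x for x in a[:-1] if x <= pivot]
    right = [x for x in a[:-1] if x > pivot]
    p = len(left)                  # pivot's 0-indexed position, rank p + 1
    if k == p + 1:
        return pivot
    if k < p + 1:
        return quickselect_slices(left, k)
    # Recursing right: pivot and the p elements before it are gone,
    # so the target's rank inside `right` is k - p - 1.
    return quickselect_slices(right, k - p - 1)
```

Dropping the `- p - 1` shift would look for rank k inside a subarray that no longer contains the first p + 1 ranks, returning the wrong element.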
QuickSelect does NOT sort the array. Only the k-th element ends up at its correct position. All other elements are partially rearranged but not sorted.
Finding the median in O(n) · Top-K results in databases · Percentile calculations (e.g. 90th percentile load time) · Order book algorithms in finance · Graphics: finding median-cut for color quantization.
An advanced variant that guarantees O(n) worst case by choosing a provably good pivot deterministically. It divides the array into groups of 5, finds medians of each group, then recursively finds the median of those medians. Higher constant factor than randomized QuickSelect but provably optimal.
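The group-of-5 scheme can be sketched as follows (an illustrative implementation, assuming 1-indexed k and the name `mom_select`; it uses a three-way split by value, which also handles duplicates):

```python
def mom_select(a, k):
    """k-th smallest (1-indexed) via median-of-medians; O(n) worst case."""
    if len(a) <= 5:
        return sorted(a)[k - 1]          # base case: sort tiny input directly
    # 1. Median of each group of 5 (last group may be smaller)
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    # 2. Recursively pick the median of those medians as the pivot
    pivot = mom_select(medians, (len(medians) + 1) // 2)
    # 3. Three-way partition around the provably good pivot
    less  = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    if k <= len(less):
        return mom_select(less, k)
    if k <= len(less) + len(equal):
        return pivot
    greater = [x for x in a if x > pivot]
    return mom_select(greater, k - len(less) - len(equal))
```

The pivot is guaranteed to be greater than roughly 30% of the elements and less than roughly 30%, so every recursion discards a constant fraction of the input, which is what yields the O(n) worst-case bound despite the extra median-finding work.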