Selection Sort

Selection sort repeatedly selects the minimum element from the unsorted portion and moves it to the front. Unlike bubble sort, it minimises swaps — but it's input-blind: it always does the same number of comparisons.

O(n²) always · O(n) swaps · O(1) space · Not Stable ⚠️ · In-place

💡 The Core Idea

1. Find the minimum

Scan the entire unsorted portion (arr[i..n-1]) to find the index of the smallest element.

2. Swap into position

Swap the found minimum with arr[i] — the first unsorted position. One swap per pass.

3. Grow the sorted prefix

The sorted region grows by one element per pass. After n−1 passes, the entire array is sorted.
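
The three steps above can be sketched directly in Python. `selection_sort_trace` is an illustrative helper (not the reference implementation later in this page) that records the array after each pass, so the growing sorted prefix is visible:

```python
def selection_sort_trace(arr):
    """Run selection sort, recording the array after each pass."""
    arr = list(arr)                      # work on a copy
    snapshots = []
    n = len(arr)
    for i in range(n - 1):
        # Step 1: find the index of the minimum in the unsorted portion arr[i..n-1]
        min_idx = min(range(i, n), key=lambda j: arr[j])
        # Step 2: swap it into position i (at most one swap per pass)
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
        # Step 3: the sorted prefix arr[0..i] has grown by one element
        snapshots.append(list(arr))
    return snapshots

print(selection_sort_trace([5, 3, 1, 4, 2]))
# [[1, 3, 5, 4, 2], [1, 2, 5, 4, 3], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
```

After pass i, the first i+1 elements are in their final positions; by the last snapshot the whole array is sorted.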

All Cases — Step by Step

😰 Worst Case — Reversed Array

Worst case: reversed array [5, 4, 3, 2, 1] (O(n²))

Selection sort's comparison count doesn't actually change between best and worst case: it always performs n(n−1)/2 comparisons. A reversed array does force swaps on the early passes, but each of those swaps places both the minimum and a maximum, so only ⌊n/2⌋ swaps occur; the swap maximum of n−1 is reached by other inputs, such as the single rotation [2, 3, 4, 5, 1].

💡 Unlike bubble sort, selection sort always does exactly n(n−1)/2 comparisons. The worst/best/average distinction is only about swap count.

⚠️ Best Case — Already Sorted (still O(n²)!)

Best case: already sorted [1, 2, 3, 4, 5] (still O(n²): no early exit!)

This is NOT O(n) for selection sort! Unlike bubble sort, it has no early exit. It still does n(n−1)/2 comparisons.

💡 Selection sort's 'best case' for time is still O(n²). The only benefit on sorted data is 0 swaps.
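
The fixed comparison count is easy to verify empirically. This sketch (an instrumented variant; the counter names are illustrative) tallies comparisons and swaps:

```python
def selection_sort_count(arr):
    """Selection sort instrumented with comparison and swap counters."""
    arr = list(arr)
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1                 # every inner step is one comparison
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:                     # swap only when needed
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return arr, comparisons, swaps

# Sorted, reversed, random: comparisons are identical (n(n-1)/2 = 10 for n = 5)
for data in ([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [2, 4, 1, 5, 3]):
    _, comps, _ = selection_sort_count(data)
    assert comps == 10
```

Only the swap counter varies with input: 0 for the sorted array, 2 for the reversed one.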

⚡ Average Case — Nearly Sorted

Average case: nearly sorted [1, 2, 4, 3, 5] (O(n²) regardless)

Nearly sorted arrays give no advantage to selection sort. It still scans exhaustively each pass — only 1 swap needed, but still O(n²) comparisons.

💡 This is selection sort's key weakness vs bubble/insertion sort: it cannot adapt to partially sorted data.
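
The adaptivity gap can be measured directly. Here is a sketch comparing comparison counts on the nearly sorted input against a basic insertion sort (both counters are illustrative helpers):

```python
def selection_comparisons(arr):
    """Comparisons made by selection sort: always n(n-1)/2."""
    arr, count = list(arr), 0
    for i in range(len(arr) - 1):
        min_idx = i
        for j in range(i + 1, len(arr)):
            count += 1                       # exhaustive scan, every pass
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return count

def insertion_comparisons(arr):
    """Comparisons made by insertion sort: adapts to existing order."""
    arr, count = list(arr), 0
    for i in range(1, len(arr)):
        j = i
        while j > 0:
            count += 1                       # compare arr[j-1] with arr[j]
            if arr[j - 1] > arr[j]:
                arr[j - 1], arr[j] = arr[j], arr[j - 1]
                j -= 1
            else:
                break                        # early exit: element is in place
    return count

nearly_sorted = [1, 2, 4, 3, 5]
assert selection_comparisons(nearly_sorted) == 10   # n(n-1)/2, input-blind
assert insertion_comparisons(nearly_sorted) == 5    # roughly n, adaptive
```

Insertion sort's inner loop stops as soon as the element is in place; selection sort's never can, because the minimum might be anywhere in the unsorted portion.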

🟰 Edge Case — All Equal

Edge case: all equal [3, 3, 3, 3, 3] (O(n²) comparisons)

All equal elements mean no swap is ever needed (the minimum is always the current position). But comparisons still happen.

💡 Unlike bubble sort, selection sort is NOT always stable. Swapping can change the relative order of equal elements.

Instability Explained

Selection sort is NOT stable: non-adjacent swaps can flip the relative order of equal elements. Walk through [3ᵃ, 1, 3ᵇ, 2], where 3ᵃ (index 0) and 3ᵇ (index 2) are equal elements a stable sort must keep in that order:

1. Initial array: [3ᵃ, 1, 3ᵇ, 2]
2. Pass 1: the minimum 1 (index 1) swaps with 3ᵃ → [1, 3ᵃ, 3ᵇ, 2]
3. Pass 2: the minimum 2 (index 3) swaps with 3ᵃ → [1, 2, 3ᵇ, 3ᵃ]
4. Pass 3: 3ᵇ is already in place → final array [1, 2, 3ᵇ, 3ᵃ]

The long-distance swap in pass 2 carried 3ᵃ past 3ᵇ, so the equal elements finish in reversed order.
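
The order flip can be reproduced in code by sorting (value, label) pairs on value alone; the labels (illustrative tags standing in for 3ᵃ and 3ᵇ) expose which duplicate is which:

```python
def selection_sort_pairs(arr):
    """Selection sort of (value, label) pairs, comparing values only."""
    arr = list(arr)
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j][0] < arr[min_idx][0]:  # labels never influence ordering
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

result = selection_sort_pairs([(3, 'a'), (1, 'x'), (3, 'b'), (2, 'y')])
assert result == [(1, 'x'), (2, 'y'), (3, 'b'), (3, 'a')]  # 3b now precedes 3a
```

Pass 2's long-distance swap moves (3, 'a') behind (3, 'b'): exactly the instability described above.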

Selection Sort's Key Property — Fewest Swaps

✅ At most n−1 swaps

Selection sort makes at most one swap per pass and zero swaps when the min is already in place. Total swaps ≤ n−1.

Pass 1: find min → 1 swap
Pass 2: find min → 1 swap
Pass 3: already in place → 0 swaps
Max total: n−1 swaps

❌ Always O(n²) comparisons

There's no way to know the minimum without checking every unsorted element. No early exit. Input order doesn't matter.

Pass 1: n−1 comparisons
Pass 2: n−2 comparisons
...
Total: n(n−1)/2 always

💡 When does this matter? When writes are extremely expensive (e.g. flash memory, EEPROM), selection sort's minimal swaps can be a practical advantage over algorithms with more writes — even though comparisons are still O(n²).
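
The n−1 swap bound can be checked by brute force over every permutation of a small array (an exhaustive sketch; fine for n = 6, since there are only 720 permutations):

```python
from itertools import permutations

def count_swaps(arr):
    """Number of swaps selection sort performs on arr."""
    arr, swaps = list(arr), 0
    for i in range(len(arr) - 1):
        min_idx = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:                     # skip the swap when min is in place
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return swaps

n = 6
assert max(count_swaps(p) for p in permutations(range(n))) == n - 1  # bound is tight
assert count_swaps(list(range(n))) == 0                              # sorted: 0 swaps
```

An input like [1, 2, 3, 4, 5, 0] (a single rotation) hits the n−1 maximum: every pass pulls the minimum from the far end.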

Complexity Deep Dive

| Case | Input | Comparisons | Swaps | Time | Why |
|---|---|---|---|---|---|
| 😰 Worst | Reversed | n(n−1)/2 | ⌊n/2⌋ | O(n²) | Every early pass swaps; each swap fixes two elements |
| 📊 Average | Random | n(n−1)/2 | ≈ n − ln n | O(n²) | Same comparisons; nearly every pass needs a swap |
| ⚠️ Best | Already sorted | n(n−1)/2 | 0 | O(n²) | No swaps, but comparisons are still exhaustive |
| 🟰 Equal | All equal | n(n−1)/2 | 0 | O(n²) | Strict < means no swaps; comparisons unavoidable |
| Space (all cases) | | | | O(1) | In-place; no extra array needed |

⚠️ Key insight: Selection sort always does exactly n(n−1)/2 comparisons — no variation based on input. This makes it predictable but also means it can't exploit nearly-sorted data like bubble or insertion sort can.

Implementation

Python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):           # n-1 passes
        min_idx = i                   # assume current position is min
        for j in range(i + 1, n):    # scan the unsorted portion
            if arr[j] < arr[min_idx]:
                min_idx = j           # found a smaller element
        if min_idx != i:             # only swap if needed
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

# Test all cases
print(selection_sort([64, 34, 25, 12]))  # Random    → [12, 25, 34, 64]
print(selection_sort([1, 2, 3, 4]))      # Sorted    → [1, 2, 3, 4]  (still O(n²)!)
print(selection_sort([4, 3, 2, 1]))      # Reversed  → [1, 2, 3, 4]  (O(n²))
print(selection_sort([]))                # Empty     → []
TypeScript
function selectionSort(arr: number[]): number[] {
  const n = arr.length;
  for (let i = 0; i < n - 1; i++) {    // n-1 passes
    let minIdx = i;                      // assume position i holds the min
    for (let j = i + 1; j < n; j++) {  // scan unsorted portion
      if (arr[j] < arr[minIdx]) {
        minIdx = j;                      // update minimum tracker
      }
    }
    if (minIdx !== i) {                 // swap only if min isn't already in place
      [arr[i], arr[minIdx]] = [arr[minIdx], arr[i]];
    }
  }
  return arr;
}

// Edge cases
selectionSort([]);          // [] — 0 iterations
selectionSort([42]);        // [42] — single element, already sorted
selectionSort([3, 3, 3]);   // [3,3,3] — equal, 0 swaps (but still O(n²) comparisons!)
selectionSort([5,4,3,2,1]); // reversed — O(n²) comparisons, n-1 swaps

Edge Cases to Handle

• Empty array [] → O(1). n = 0, so the outer loop never runs; return immediately.
• Single element [x] → O(1). n−1 = 0 passes; nothing to compare, already sorted.
• Two elements [a, b] → O(1). One comparison, at most one swap; the simplest non-trivial case.
• All equal [x, x, x] → O(n²). Zero swaps (the min is always already at position i), but comparisons still happen.
• Already sorted → O(n²). Zero swaps, yet no early exit: comparisons are exhaustive.
• Reversed array → O(n²). ⌊n/2⌋ swaps (each swap fixes two elements); comparisons identical to the sorted case.

When to Use (and Not Use)

✅ Good for

  • Minimising writes — at most n−1 swaps; ideal when write cost is high (flash memory, EEPROM)
  • Teaching — simple to reason about: find min, place it, repeat
  • Tiny arrays (n < 10) — overhead of complex algorithms isn't worth it
  • Predictable performance — always exactly n(n−1)/2 comparisons, no surprises

❌ Avoid when

  • Stability is required — use Merge Sort or Insertion Sort instead
  • Nearly sorted data — no early exit means wasted work vs. bubble/insertion
  • Large datasets — O(n²) comparisons are catastrophic at n=100,000+
  • Production code — Tim Sort (Python/Java) is almost always faster in practice
  • Unknown or variable input — insertion sort adapts better to real-world data

Selection Sort vs Others

| Algorithm | Best | Average | Worst | Space | Stable | Notes |
|---|---|---|---|---|---|---|
| 🎯 Selection Sort | O(n²) | O(n²) | O(n²) | O(1) | No | Fewest swaps; input-blind |
| 🫧 Bubble Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Early exit on sorted data |
| 🃏 Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Best for nearly sorted data |
| 🔀 Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Best stable sort for large n |
| ⚡ Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No | Fastest in practice |
| 🐍 Tim Sort | O(n) | O(n log n) | O(n log n) | O(n) | Yes | Python built-in; best all-rounder |

💡 Insertion Sort beats Selection Sort in almost every practical scenario: it's adaptive (O(n) best case) and stable. But insertion sort shifts elements on nearly every comparison, so it performs far more writes than selection sort's at-most n−1. That remains the one edge case where Selection Sort wins: when the cost of writes vastly exceeds the cost of reads.