Lumænaut_

The 8 Algorithm Paradigms

Algorithm design techniques

Illustration in ASCII art style: people walking through a European city street scene. A streetcar/tram and a cafeteria are visible, with European-like buildings in the background.

Before you write a single line of code, you choose a way of thinking: how will this program actually approach the problem? That choice is what algorithm paradigms are about. A paradigm isn’t a particular algorithm like QuickSort or Dijkstra — it’s the underlying mindset: "break the problem into smaller pieces," "try every option," "remember what you’ve already computed," and so on. Different paradigms lead to different algorithms; the same paradigm shows up in sorting, graph theory, and your next side project.

This post walks through eight of the main paradigms — the design techniques that turn up everywhere in computer science, from interviews to production code. Each section has a short interactive demo so you can see the paradigm in action, not just in theory.

1. Brute force

At its core, brute force is just systematic trial and error. You run through every option until you find the right one. There's no cleverness here — just raw computation. That makes it easy to implement, and sometimes it’s the only approach that guarantees a solution. The trade-off is speed: as the input grows, brute force tends to grind to a halt.

Think of a 3-digit combination lock. You don’t know the code, so you start at 000, then 001, 002... and so on. Eventually you hit the right combo. That’s brute force. The demo below does exactly that: it picks a random target and tries every number until it matches. Hit "Try next" to advance one attempt, or "Run until unlock" to watch it crank through the possibilities.

Brute force: 3-digit lock

Interactive demo: try every combination from 000 until the lock opens.

Try each combination in order until you find the target. No shortcuts — that’s brute force.
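The lock search from the demo can be sketched in a few lines of Python. This is an illustrative version, not the demo's actual code: `crack_lock` is a made-up name, and the target is fixed rather than random so the run is reproducible.

```python
def crack_lock(target: int) -> tuple[int, int]:
    """Try every 3-digit combination in order until one matches.

    Returns the winning combination and how many attempts it took.
    """
    attempts = 0
    for guess in range(1000):          # 000 through 999, in order
        attempts += 1
        if guess == target:
            return guess, attempts     # unlocked
    raise ValueError("target must be between 0 and 999")

combo, tries = crack_lock(417)
print(f"{combo:03d} found after {tries} attempts")  # 417 found after 418 attempts
```

In the worst case (target 999) the loop runs all 1,000 times: that linear blow-up in the number of candidates is exactly why brute force stops scaling.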

2. Divide and conquer

The divide-and-conquer paradigm says: take one big problem, split it into smaller, similar sub-problems, solve those (often recursively), then combine the answers. It’s a design technique, not a single algorithm — the same mindset shows up in merge sort, binary search, and many classic problems.

Think of divide and conquer like cleaning a messy house. Instead of tackling everything at once, you break it down: handle one room at a time. Once each room is spotless, the whole house is done. Merge sort works the same way. It takes a list and keeps splitting it in half until you're left with single items (which are trivially "sorted"). Then it carefully merges those pieces back together in the right order.

The demo below shows that process step by step. Use Next step to see each "divide" (splitting a segment in half — left and right are tinted) and each "conquer" (merging two sorted halves back together — the merged segment is highlighted). Run all plays through the whole sequence; Shuffle resets with a new array.

Divide and conquer: merge sort step-by-step

Step-by-step: each divide shows the split; each conquer shows the merged segment.

Step through each divide (split in half) and conquer (merge sorted halves). Previous / Next step / Run all / Shuffle.
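The divide and conquer steps from the demo map directly onto a textbook merge sort. Here is a minimal Python sketch (recursive, returning a new list rather than sorting in place):

```python
def merge_sort(items):
    """Divide: split the list in half; conquer: merge the sorted halves."""
    if len(items) <= 1:
        return list(items)             # single items are trivially "sorted"
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # divide (left half)
    right = merge_sort(items[mid:])    # divide (right half)
    return merge(left, right)          # conquer (combine)

def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # one of these is already empty
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

The splitting halves the problem at every level, so there are about log n levels, each doing linear merge work: that's where merge sort's O(n log n) comes from.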

3. Dynamic programming

The dynamic-programming paradigm also breaks problems into sub-problems, but with a crucial twist: it remembers the results. When the same sub-problem appears again, it reuses the stored answer instead of recomputing. That "memoization" (or tabulation) is the heart of the paradigm — it turns many exponential-time ideas into something feasible.

The classic example is the Fibonacci sequence. Naively, fib(5) triggers repeated work for fib(4), fib(3)... and so on. With DP you fill a table once: fib(0), fib(1), fib(2)... and each new value only needs a look-up of the previous two. In the demo you can step forward or backward and compute fib(n); the memo table shows what gets stored and reused.

Dynamic programming: Fibonacci with memo

Memo table showing fib(0), fib(1), … so we don’t recompute the same values.

Compute fib(n). The table caches results so each sub-problem is solved only once.
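The memo table from the demo can be sketched as a dictionary that the recursion checks before doing any work. A minimal Python version (using the convention fib(0) = 0, fib(1) = 1):

```python
def fib(n, memo=None):
    """Fibonacci with memoization: each sub-problem is computed once."""
    if memo is None:
        memo = {0: 0, 1: 1}            # base cases seed the table
    if n not in memo:
        # Not cached yet: solve the two smaller sub-problems, store the result.
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                     # cached answers are a dictionary look-up

print(fib(10))  # 55
print(fib(50))  # 12586269025 -- instant, thanks to the memo table
```

Without the memo, the naive recursion recomputes the same values exponentially often; with it, fib(n) costs only n table entries.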

4. Greedy

The greedy paradigm: at each step, make the best local choice and never revisit it. For many problems that mindset yields an optimal solution; for others you only get a "good" one. The art is knowing when the greedy approach is safe.

Making change with coins is a standard example. With coins 25, 10, 5, and 1 cent, taking the largest coin that fits at each step (greedy) gives the fewest coins. So for 37¢ you pick 25, then 10, then two 1s. The demo lets you enter an amount and see the greedy coin selection. (For some coin sets greedy fails; for 25, 10, 5, 1 it works.)

Greedy: coin change

Amount in cents and the coins chosen by the greedy rule.

At each step we choose the largest coin that fits. No backtracking — that’s greedy.
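The greedy coin rule from the demo is a short loop in Python. A minimal sketch, assuming the coins are listed largest first:

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """At each step take the largest coin that fits; never reconsider."""
    chosen = []
    for coin in coins:                 # coins in descending order
        while amount >= coin:          # keep taking this coin while it fits
            chosen.append(coin)
            amount -= coin
    return chosen

print(greedy_change(37))  # [25, 10, 1, 1]
```

For the coin set 25, 10, 5, 1 this is provably optimal. For other sets it isn't: with coins (4, 3, 1), greedy makes 6¢ as 4 + 1 + 1 (three coins) when 3 + 3 (two coins) is better, which is exactly the "greedy can fail" caveat from above.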

5. Backtracking

Backtracking follows a simple rhythm: try, fail, undo, repeat. You make choices one by one, but when a choice leads to a dead end, you rewind and try another. It's smarter than brute force because you cut off bad paths early instead of following them to the finish.

Solving a maze is a good mental model: you walk forward until you hit a dead end, then backtrack to the last junction and try another branch. Sudoku and many puzzle solvers work the same way. The demo is a tiny grid with a start and a goal; "Solve (backtrack)" finds a path by trying moves and backtracking when stuck.

Backtracking: maze path

Small grid with walls; backtracking finds a path from start to goal.

Click Solve to find a path. The algorithm tries a direction, backtracks if blocked, and tries another.
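The maze solver's try / fail / undo rhythm looks like this in Python. This is an illustrative sketch, not the demo's code: the grid uses 0 for open cells and 1 for walls, and the path runs from the top-left corner to the bottom-right.

```python
def solve_maze(grid, pos=(0, 0), path=None):
    """Backtracking search: try a move, undo it if it leads to a dead end."""
    if path is None:
        path = []
    r, c = pos
    rows, cols = len(grid), len(grid[0])
    if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1 or pos in path:
        return None                    # off the grid, a wall, or already visited
    path.append(pos)                   # try this cell
    if pos == (rows - 1, cols - 1):
        return path                    # reached the goal
    for step in ((r + 1, c), (r, c + 1), (r - 1, c), (r, c - 1)):
        if solve_maze(grid, step, path) is not None:
            return path
    path.pop()                         # dead end: undo the choice and backtrack
    return None

maze = [[0, 1, 0],
        [0, 0, 0],
        [1, 0, 0]]
print(solve_maze(maze))  # [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
```

The `path.pop()` line is the backtracking itself: the failed choice is erased before the next branch is tried, which is how bad paths get cut off early instead of explored to the end.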

6. Searching

The searching paradigm is "find something in a structure." A classic design choice is binary search — repeatedly halve the search space in a sorted collection. Linear search inspects each item one by one; binary search is much faster when the data is ordered because it throws out half the remaining candidates on every step.

It's like looking for a name in a phone book: instead of reading every page in order, you open the middle, then the middle of the remaining half, and so on. The demo lays out a sorted matrix of numbers and runs binary search on it: click "Set random target" and then "Run search" to see which cells remain "in play" and which ones get discarded (shown in soft red) as the range shrinks.
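The halving idea is a classic binary search on a sorted list. A minimal Python sketch that returns the target's index, or -1 if it isn't there:

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range until the target is found."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # range is empty: not present

print(binary_search([1, 3, 4, 7, 9, 11], 7))  # 3
```

Each comparison throws away half of the remaining candidates, so even a million sorted items take at most about 20 steps, versus up to a million for linear search.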

7. Sorting

Sorting means arranging data in a defined order, like smallest to largest or A to Z. Many algorithms do it: bubble, insertion, merge, and quicksort are a few, each with different trade-offs. Merge sort is divide and conquer; quicksort uses a pivot and partitioning; bubble sort is the simplest. It just compares adjacent pairs and swaps them when they're out of order.

In the demo you shuffle the bars and run bubble sort. Watch larger values "bubble" toward the end as adjacent pairs are compared and swapped. A full pass with no swaps means the list is sorted. It's not the fastest algorithm, but the idea is easy to see.

Sorting, bubble sort

Bars get reordered by bubble sort until they are ascending.

Shuffle the bars, then run bubble sort. Adjacent pairs are compared and swapped until the list is sorted.
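The demo's bubble sort, including the "full pass with no swaps means sorted" stopping rule, can be sketched in Python like this:

```python
def bubble_sort(items):
    """Compare adjacent pairs and swap; stop after a pass with no swaps."""
    items = list(items)                # sort a copy, leave the input alone
    n = len(items)
    while True:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                # Out of order: swap the adjacent pair.
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                # a clean pass means the list is sorted
            return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Each pass "bubbles" the largest remaining value to the end, which is the movement you see in the bars. In the worst case it makes n passes of n comparisons, which is why it's a teaching algorithm rather than a production one.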

8. Hashing

Imagine you're organizing a massive library, but instead of searching every single shelf for a book, you have a simple rule: take the book's title, run it through a quick calculation, and it tells you exactly which shelf to look on. That's hashing. It's a way to instantly narrow down where data should live or be found, so you never have to check everything — you just go straight to the right spot. It's what makes looking things up in programs feel instant, whether it's finding a file, checking a password, or grabbing data from memory.

The demo shows a hash table with five buckets and 10 words in each. Click "Get random word" to pick one of the words, then "Look up" to see which bucket it lives in. We only check that one bucket, not all the others, so finding it stays fast.

Hashing, find which bucket a key goes in

Hash table with five buckets and sample keys; type a key and find which bucket it maps to.

Type a key and click "Find bucket" to see the hash and the bucket. We only look in that bucket, not the whole table.
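A toy version of the demo's five-bucket table fits in a few lines of Python. The hash function here (sum of character codes mod the bucket count) is a deliberately simple stand-in for illustration, not what real hash tables or the demo use:

```python
class TinyHashTable:
    """Toy hash table: a hash function maps each key to one of a few buckets."""

    def __init__(self, num_buckets=5):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket_index(self, key):
        # Illustrative hash: sum of character codes, mod the bucket count.
        return sum(ord(ch) for ch in key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._bucket_index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:               # key already present: replace its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        # Only the one bucket is searched, never the whole table.
        for k, v in self.buckets[self._bucket_index(key)]:
            if k == key:
                return v
        return None

table = TinyHashTable()
table.put("tram", 1)
table.put("cafe", 2)
print(table.get("tram"))  # 1
```

Two keys can land in the same bucket (a collision), which is why each bucket holds a small list; lookups stay fast as long as the hash spreads keys evenly, which is the library-shelf rule from above in code form.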

Eight paradigms, countless algorithms

No matter how many specific algorithms you run into, they’re usually built on a small set of paradigms: brute force, divide and conquer, dynamic programming, greedy, backtracking, search, sort, and hashing. Each paradigm is a different way of designing a solution — a mindset you choose before you write a loop. Once you recognise them, you stop seeing only code and start seeing the underlying approach. That shift is what turns recipe-following into real algorithm design.

Eight ways to think. Countless ways to code.

References & further reading

Can't get enough from Lumænaut_? Read this...