Grinding LeetCode.
Day 1: Two Sum
From brute force to O(n) — a friendly journey for beginners
Welcome, brave coder. If you are starting your LeetCode journey, Two Sum is almost certainly the first problem you will meet. It is like the “hello world” of algorithmic interviews — simple on the surface, but packed with lessons that will echo through your entire coding career. In this post, we will unwrap it together. No scary jargon, no skipping steps. We will begin with the most naive solution, laugh at its flaws, and then transform it into something beautiful and fast. Grab a cup of coffee (or tea, we do not judge), and let’s go.
Problem Statement
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input would have exactly one solution, and you may not use the same element twice.
You can return the answer in any order.
Rewriting in plain English
Imagine you have a list of numbers (like [2, 7, 11, 15]) and a goal number (like 9). You need to find two numbers inside the list that add up to the goal. Then you give back their positions (indices), not the numbers themselves. Also, each element can only be used once. And the puzzle guarantees that there is exactly one valid pair hiding in there. That means you will never get stuck wondering "which combination is right?" There is only one correct answer.
Examples from the problem
- Example 1: nums = [2,7,11,15], target = 9 → Output: [0,1] because 2 + 7 = 9.
- Example 2: nums = [3,2,4], target = 6 → Output: [1,2] because 2 + 4 = 6 (indices 1 and 2).
- Example 3: nums = [3,3], target = 6 → Output: [0,1]. The two 3's are at different positions.
Simple, right? Yet the real challenge is how fast can we find them when the array gets huge — up to 10,000 numbers. That is where the fun begins.
The naive way: brute force (“checking every couple”)
Let’s be honest: when we first see a problem like this, our brains instinctively go: “I’ll take the first number, and then try adding it to every other number after it. If it matches the target, done!” That is the brute force approach. It is simple, it is honest, and it works. But it is also the slowest.
Brute force flow:
For i = 0 to n-1:
    For j = i+1 to n-1:
        If nums[i] + nums[j] == target → return [i, j]
Code (Python).
def twoSum_bruteforce(nums, target):
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []  # never reached because exactly one solution exists
Complexity — the bad news
If the array has n elements, the outer loop runs n times, and the inner loop runs about n/2 times on average. That gives us roughly n * (n - 1) / 2 operations, which in Big O notation is O(n²). For n = 10,000, that is about 50 million comparisons! Your computer can handle it, but in an interview, they will ask:
Can you do better?
But the brute force approach is still important — it is the foundation. We always start with the simplest working solution, then optimize.
The “aha!” moment: using a memory (hash map)
Here is the key insight: Instead of checking every pair, we can remember what numbers we have seen and ask a simple question: “Do I need a specific number to complete the target?”
Let’s break it down: For each number x, the complement is target - x. If we have already seen that complement somewhere in the array, we found our pair! We just need a way to store the numbers we have passed and quickly look them up. That is where a hash map (also called a hash table or dictionary) comes in: a data structure that lets you store key-value pairs and check if a key exists in O(1) average time. Lightning fast.
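To make that concrete, here is a tiny standalone sketch (not part of the final solution) showing how a Python dict plays the role of that "memory":

```python
# A dict maps keys (numbers we have seen) to values (their indices).
seen = {}
seen[2] = 0   # record: saw the number 2 at index 0
seen[7] = 1   # record: saw the number 7 at index 1

# Membership tests and lookups are O(1) on average:
print(7 in seen)   # True  -- is 7 one of the keys?
print(seen[7])     # 1     -- at which index did we see it?
print(9 in seen)   # False -- 9 was never stored
```

This is exactly the operation the one-pass algorithm leans on thousands of times.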
The smart algorithm (one pass)
We walk through the array once. For each element nums[i]:
- Calculate complement = target - nums[i].
- Check if the complement already exists in our hash map. If yes → return [map[complement], i].
- Otherwise, store the current number and its index in the map: map[nums[i]] = i.
One-pass hash map flow:
map = {}
for i in 0..n-1:
    complement = target - nums[i]
    if complement in map: return [map[complement], i]
    else: map[nums[i]] = i
# done
Code implementation.
def twoSum_hashmap(nums, target):
    seen = {}  # key = number, value = index
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []
Let’s trace example nums = [3,2,4], target = 6:
- i=0, num=3, complement=3. seen is empty → store {3: 0}
- i=1, num=2, complement=4. seen has {3: 0}, 4 not there → store {3: 0, 2: 1}
- i=2, num=4, complement=2. seen has 2? Yes! → return [seen[2], i] → [1, 2]
Beautiful, is it not? We only pass through the array once.
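If you want to watch that trace yourself, here is a sketch with a hypothetical twoSum_traced variant (not from the original solution) that prints each step before deciding:

```python
def twoSum_traced(nums, target):
    # Same one-pass hash map algorithm, with a print per step.
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        print(f"i={i}, num={num}, complement={complement}, seen={seen}")
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

print(twoSum_traced([3, 2, 4], 6))  # [1, 2]
```

Running it reproduces exactly the three steps listed above.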
Complexity analysis: why it is a game-changer
Time complexity: O(n) — we traverse the array
once, and each hash map operation (insertion and lookup) is O(1) on
average.
Space complexity: O(n) — in worst case we store
almost all elements in the hash map before finding the pair.
For n = 10,000, the brute force does up to 50 million operations; the hash map solution does about 10,000. That is the difference between waiting 2 seconds and 0.001 seconds.
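The gap is easy to measure yourself. Below is a rough benchmark sketch; the exact timings will vary by machine, and n is kept at 2,000 here (smaller than the 10,000 in the text) so the quadratic version finishes quickly:

```python
import time

def twoSum_bruteforce(nums, target):
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            if nums[i] + nums[j] == target:
                return [i, j]
    return []

def twoSum_hashmap(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

# Worst case: the answer is the very last pair, forcing a full scan.
n = 2_000
nums = list(range(n))
target = nums[-1] + nums[-2]

start = time.perf_counter()
twoSum_bruteforce(nums, target)
brute = time.perf_counter() - start

start = time.perf_counter()
twoSum_hashmap(nums, target)
fast = time.perf_counter() - start

print(f"brute force: {brute:.4f}s, hash map: {fast:.4f}s")
```

Even at this modest size, the hash map version finishes dramatically faster, and the gap widens quadratically as n grows.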
The math section — understanding the core equation
Let’s formalize the problem with math. We are given:
Problem restatement. Find indices i and j, with i ≠ j, such that

nums[i] + nums[j] = target

Equivalently: nums[i] = target - nums[j]
If we think of it as an equation: for each index j, we
want to find an earlier index i satisfying
nums[i] = target - nums[j]. This is like solving for a
missing addend. The naive method checks all pairs; the hash map
method precomputes the set of earlier numbers, turning the problem
into a constant-time lookup.
Another mathematical viewpoint: Consider the function
f(x) = target - x. For each number in the array, its
complement is defined. Two numbers form a valid pair if one is the
image of the other under f. That is all we are doing —
searching for complement pairs in one pass.
We can also visualize it as: draw a number line. For each
x, the desired partner is target - x. As
we scan, we leave “breadcrumbs” in a hash map. The moment we step
on a breadcrumb left by a previous number, we have our solution. It
is like a treasure hunt with a perfect memory.
Follow-up: can we do better than O(n²)?
The problem’s follow-up asks: “Can you come up with an algorithm that is less than O(n²) time complexity?” And yes — we just did! O(n) with a hash map is dramatically faster. Could we do O(n log n)? Possibly with sorting and two-pointer technique, but that would lose original indices unless we store them. The hash map approach remains the most elegant for this specific problem.
If you are curious about the sorting method: You could sort the array (with indices attached) and then use two pointers (left and right) to find the pair. That is O(n log n) but often used when you need to avoid extra space. However, for Two Sum, the hash map method is the interview favorite.
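For the curious, here is one possible sketch of that sorting method. Sorting index positions by their values (rather than the values themselves) is just one way to keep the original indices attached:

```python
def twoSum_sorted(nums, target):
    # Sort the index positions by their values, so order[k] is the
    # original index of the k-th smallest element.
    order = sorted(range(len(nums)), key=lambda i: nums[i])
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[order[lo]] + nums[order[hi]]
        if s == target:
            return sorted([order[lo], order[hi]])
        elif s < target:
            lo += 1   # sum too small: move the left pointer right
        else:
            hi -= 1   # sum too big: move the right pointer left
    return []

print(twoSum_sorted([3, 2, 4], 6))  # [1, 2]
```

The sort costs O(n log n) and the pointer sweep is O(n), so the whole thing is O(n log n), slower than the hash map but with only O(n) extra space for the index order.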
A bit of history: where does this problem come from?
The Two Sum problem became famous largely because it was one of the first problems on LeetCode (the site launched in 2011). But the idea of finding two numbers that sum to a target goes back decades. It is essentially a special case of the subset sum problem from computer science theory. Companies like Google, Facebook, and Amazon love it because it tests both basic coding ability and the transition from naive to optimal thinking — a core skill in software engineering.
There is also a direct line to hash table usage, which revolutionized how we handle lookups in programming. The idea of using a hash map to remember past values is a common design pattern in algorithm interviews (you will see it again in problems like “Subarray Sum Equals K” or “Contains Duplicate”).
Testing our solution with edge cases
Let’s check some tricky scenarios that might break a naive approach:
- Duplicates: nums = [1, 1, 2], target = 2 → Output: [0,1] because 1 + 1 = 2. Our hash map works: when i=1, complement=1, seen already has 1 from index 0 → returns [0,1].
- Negative numbers: nums = [-1, -2, -3, -4], target = -6 → Output: [1,3] because -2 + (-4) = -6. Complement logic still holds.
- Large numbers: values can be as large as 10⁹ in magnitude, but our integer arithmetic works fine.
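These cases are easy to turn into a quick self-check; a minimal sketch:

```python
def twoSum_hashmap(nums, target):
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []

# Duplicates: the 1 stored at index 0 matches the complement at index 1.
assert twoSum_hashmap([1, 1, 2], 2) == [0, 1]

# Negative numbers: target - (-4) = -2, already stored at index 1.
assert twoSum_hashmap([-1, -2, -3, -4], -6) == [1, 3]

# Large values: Python ints are arbitrary precision, so no overflow.
assert twoSum_hashmap([10**9, -10**9, 3], 10**9 + 3) == [0, 2]

print("all edge cases pass")
```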
One common beginner mistake: using the same element twice. For example, if nums = [3, 2, 4] and target = 6, the complement for 3 is 3, but we can’t use the same 3 again. That is why we store numbers after checking the complement, not before. The algorithm ensures we only pair the current element with previously seen ones.
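Here is a sketch of what goes wrong, using a hypothetical twoSum_buggy that stores before checking:

```python
def twoSum_buggy(nums, target):
    # BUG: stores the current number BEFORE checking the complement,
    # so an element can pair with itself.
    seen = {}
    for i, num in enumerate(nums):
        seen[num] = i                # stored too early
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
    return []

# At i=0 we store 3, then find complement 3 "already seen",
# and pair index 0 with itself.
print(twoSum_buggy([3, 2, 4], 6))  # [0, 0]  (wrong!)
```

Checking first and storing second closes that hole, because an element is never in the map while its own complement is being checked.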
Diagram — seeing the one-pass magic
Visual walkthrough for nums = [2,7,11,15], target = 9
Step 0: seen = {}
Step 1 (i=0, num=2): complement = 7. 7 in seen? No. Add {2:0}
Step 2 (i=1, num=7): complement = 2. 2 in seen? Yes! seen[2] = 0 → return [0,1]. Done.
We never even looked at 11 and 15.
It is like the algorithm has super vision — it spots the answer at the earliest possible moment.
Other languages and slight variations
While we wrote Python-style pseudocode, the same logic applies to JavaScript, Java, C++, etc.
JavaScript example (using Map).
var twoSum = function(nums, target) {
    const map = new Map();
    for (let i = 0; i < nums.length; i++) {
        const complement = target - nums[i];
        if (map.has(complement)) return [map.get(complement), i];
        map.set(nums[i], i);
    }
    return []; // never reached: exactly one solution is guaranteed
};
Java example (using HashMap).
public int[] twoSum(int[] nums, int target) {
    Map<Integer, Integer> map = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        int complement = target - nums[i];
        if (map.containsKey(complement)) {
            return new int[] {map.get(complement), i};
        }
        map.put(nums[i], i);
    }
    return new int[] {};
}
Common pitfalls and how to avoid them
- Forgetting the order: Always check the complement before storing the current number. Otherwise, you might pair an element with itself (when the complement equals the current number).
- Using array search instead of a hash map: Some beginners try index() or a linear search for the complement inside the loop, turning it back into O(n²).
- Returning numbers instead of indices: The problem expects indices. Double-check the output format.
- Assuming the array is sorted: Not given. So two pointers without storing indices would be messy.
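To see the second pitfall in code, here is a sketch of the anti-pattern: it returns correct answers but hides a linear scan inside the loop, so the speedup is lost:

```python
def twoSum_slow(nums, target):
    # Anti-pattern: "in" and .index() on a list are linear scans,
    # so each lookup is O(n) and the whole function is O(n^2) again.
    for j, num in enumerate(nums):
        complement = target - num
        prefix = nums[:j]             # elements before the current one
        if complement in prefix:      # O(j) scan, not O(1)
            return [prefix.index(complement), j]
    return []

print(twoSum_slow([2, 7, 11, 15], 9))  # [0, 1] -- correct, but quadratic
```

Swapping the list for a dict, as in the hash map solution, is what actually buys the O(n) running time.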
Why this problem matters beyond the interview
Two Sum teaches you the art of trade-offs. The brute force is easy to write but slow. The hash map version uses a bit more memory but gives you speed. In real-world engineering, you constantly balance time vs. space. This simple problem encapsulates that fundamental decision. Plus, the pattern of “complement lookup” will appear again in problems like 3Sum, 4Sum, and many others.
References and further reading
If you want to dive deeper, here are some awesome resources:
- LeetCode Two Sum Problem Page — official problem and discussion forums.
- Wikipedia: Hash Table — learn about the data structure we used.
- GeeksforGeeks — Two Sum — additional explanations and code in multiple languages.
- YouTube: Two Sum by NeetCode — visual explanation (highly recommended for visual learners).
- Big-O Cheat Sheet — to understand complexity analysis better.
Final thoughts
We started from the most naive double loop, endured the pain of O(n²), and then leveled up to a one-pass hash map solution that is elegant, fast, and easy to remember. That is the spirit of algorithmic thinking: always iterate, always optimize, and never be afraid to use extra memory if it gives you massive speed gains.
Two Sum might be your first LeetCode problem, but it will not be your last. The patterns you learn here will stick with you. So next time you see a coding challenge, ask yourself: “Can I use a hash map to remember things?” It is often the key.
Happy coding, and may your complexity always be low.
Grinding LeetCode — Day 1: Two Sum. Written with curiosity.