Visual decision trees + complete pattern library + frameworks + cheat sheet. By LeetDaily
Every DSA problem follows the same process. Internalize this order and you'll never stare blankly at a problem again.
Modern judges do roughly 10⁸ simple operations per second. Use this table in reverse: the constraint tells you what complexity is allowed, which narrows the pattern space dramatically.
| Constraint | Target complexity | Ops | Likely pattern |
|---|---|---|---|
| n ≤ 10 | O(n!) / O(2ⁿ·n) | ~3.6M | Backtracking, permutations, brute force |
| n ≤ 20 | O(2ⁿ) | ~1M | Bitmask DP, meet in the middle |
| n ≤ 100 | O(n³) / O(n⁴) | ~1M–100M | Floyd-Warshall, interval DP, matrix DP |
| n ≤ 400 | O(n³) | ~64M | Floyd-Warshall, 3D DP |
| n ≤ 1,000 | O(n²) | ~1M | DP, nested loops, 2D grid problems |
| n ≤ 10,000 | O(n²/log n) | ~10M | Light nested loops, n log n sorting |
| n ≤ 10⁵ | O(n log n) | ~1.7M | Sorting, heap, binary search, segment tree |
| n ≤ 10⁶ | O(n) / O(n log n) | ~1M–20M | Two pointers, sliding window, hashing |
| n ≤ 10⁸ | O(n) tight / O(√n) | ~10⁸ | Linear scan only, no hidden constants |
| n ≤ 10⁹ | O(log n) / O(√n) | ~30 | Binary search on answer, math formula |
| n ≤ 10¹⁸ | O(log n) | ~60 | Matrix exponentiation, number theory |
Start here for every problem. The input type routes you to the right sub-tree.
First question: is it sorted? Sorted unlocks two pointers + binary search.
Size K given upfront. Slide across, maintain running quantity (sum, max, count) in O(1) per step. Example: max sum subarray of size K.
'Longest / shortest subarray with ≤ K distinct' or 'sum ≥ X'. Expand right, contract left when constraint breaks. Example: longest substring without repeating chars.
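A minimal sketch of the expand/contract loop, using longest substring without repeating characters (the function name is illustrative):

```python
def longest_unique_substring(s: str) -> int:
    """Expand right; when a repeat enters the window, jump left past it."""
    last_seen = {}          # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1    # contract: drop the earlier copy
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```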
'Subarray sum equals K' or 'divisible by K' or 'count subarrays with property'. Store prefix sums in map; check for complements in O(1).
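The complement check looks like this in practice; a sketch of the "subarray sum equals K" counting variant (function name is mine):

```python
from collections import defaultdict

def count_subarrays_with_sum(nums, k):
    """prefix[j] - prefix[i] == k  <=>  subarray (i, j] sums to k."""
    seen = defaultdict(int)
    seen[0] = 1             # empty prefix
    prefix = count = 0
    for x in nums:
        prefix += x
        count += seen[prefix - k]   # earlier prefixes completing a sum of k
        seen[prefix] += 1
    return count
```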
Sorted array + pair/triplet with target sum. Start at both ends, move based on comparison. Example: two-sum sorted, 3-sum, container with most water.
In-place array modification: remove duplicates, move zeros, partition. Slow = write position, fast = read position.
Maintain stack of indices with monotonic values. Pop when new element breaks monotonicity. Example: daily temperatures, largest rectangle in histogram.
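A sketch of the stack discipline, using daily temperatures as the example:

```python
def daily_temperatures(temps):
    """Stack holds indices of days still waiting for a warmer day."""
    answer = [0] * len(temps)
    stack = []
    for i, t in enumerate(temps):
        # A warmer day resolves every colder day still on the stack.
        while stack and temps[stack[-1]] < t:
            j = stack.pop()
            answer[j] = i - j
        stack.append(i)
    return answer
```

Each index is pushed and popped at most once, so the whole pass is O(n) despite the nested loop.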
Deque of indices in decreasing (for max) or increasing (for min) order. Front is always the answer. O(n) total.
Count occurrences, detect duplicates, group anagrams. Character counter as key for anagram problems.
When array contains integers in bounded range, place each at its 'correct' index. Used for: find missing, find duplicate, first missing positive. O(n) time, O(1) space.
Min-heap of size K for top K largest, max-heap for top K smallest. O(n log K). Quickselect gives O(n) average.
Classic target find, or variant: first/last occurrence, leftmost/rightmost, rotated array search. Always check loop invariant.
When answer has monotonic property ('can we do X with budget B?'). Example: koko eating bananas, split array largest sum, capacity to ship packages.
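The template is always the same: binary search over candidate answers with a monotonic feasibility check. A sketch using koko eating bananas:

```python
import math

def min_eating_speed(piles, hours):
    """Smallest speed at which all piles fit in the hour budget."""
    def feasible(speed):
        return sum(math.ceil(p / speed) for p in piles) <= hours

    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # mid works; try slower
        else:
            lo = mid + 1    # too slow; speed up
    return lo
```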
Only a few patterns exist. Match by what you need to find or change.
Slow moves 1 step, fast moves 2. If they meet, cycle exists. When fast reaches end, slow is at middle. For nth from end: move fast n steps first, then both together.
After detection, reset slow to head. Move both 1 step at a time. They meet at cycle start. (Floyd's second phase.)
prev = null, curr = head. Loop: save next, point curr to prev, advance prev and curr. Return prev. For k-group: reverse k at a time, connect.
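The pointer dance above, written out as a minimal sketch (ListNode and the helpers are illustrative scaffolding):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def reverse_list(head):
    """Iterative reversal: prev trails curr, each link flips once."""
    prev, curr = None, head
    while curr:
        nxt = curr.next     # save next before overwriting the link
        curr.next = prev
        prev, curr = curr, nxt
    return prev             # prev is the new head

def from_list(vals):
    head = None
    for v in reversed(vals):
        head = ListNode(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out
```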
Dummy head + tail pointer. Pick smaller, advance, repeat. Attach remainder. O(m+n).
Push all heads into min-heap. Pop smallest, push its next. O(N log K) where N = total nodes.
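The same heap pattern, sketched over plain Python lists instead of linked nodes to keep it short; entries carry (value, list index, element index) so ties never compare lists:

```python
import heapq

def merge_k_sorted(lists):
    """Min-heap holds the current head of each list; pop smallest, push its next."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    merged = []
    while heap:
        val, i, j = heapq.heappop(heap)
        merged.append(val)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged
```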
Find middle with fast/slow → reverse second half → merge/compare with first half. Three primitives combined.
Method 1: HashMap old→new, two passes; O(n) extra space. Method 2: interleave clones (A→A'→B→B'→...), fix randoms, separate; O(n) time, O(1) extra space.
First check: BST or not? Then match the answer shape.
Every tree problem fits one of these:
Max depth: return 1 + max(left, right).
Return a tuple: (with_node, without_node) for house robber on tree, (is_balanced, height) for balanced check.
BFS, queue-based: process level by level. Record the size at each level, process that many nodes. Used for: level order, right view, zigzag, minimum depth.
Process node before children. Used for: serialize tree, path from root, copy tree.
For BST, visits in sorted order. Used for: validate BST, Kth smallest in BST, BST to sorted list.
Process children first. Used for: height, diameter, max path sum, delete tree, bottom-up problems.
DFS with running sum parameter. Backtrack on return. Variations: exists, collect all, count with prefix sum map (for 'any node to any descendant').
If node is p or q, return it. Else recurse left and right. If both non-null, current is LCA. Else return whichever is non-null.
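That four-case logic is compact enough to sketch in full (TreeNode is illustrative scaffolding):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lowest_common_ancestor(root, p, q):
    """If both subtrees report a hit, the current node is the LCA."""
    if root is None or root is p or root is q:
        return root
    left = lowest_common_ancestor(root.left, p, q)
    right = lowest_common_ancestor(root.right, p, q)
    if left and right:
        return root         # p and q are split across the two subtrees
    return left or right    # propagate whichever side found something
```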
Post-order: return max depth of subtree. Update global answer with left_depth + right_depth at each node.
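A sketch of the "return depth, update global" shape; here the running max lives in a closure rather than a true global:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def diameter(root):
    """Post-order: each node returns its depth; best path (in edges) through it."""
    best = 0
    def depth(node):
        nonlocal best
        if node is None:
            return 0
        left, right = depth(node.left), depth(node.right)
        best = max(best, left + right)   # path bending through this node
        return 1 + max(left, right)
    depth(root)
    return best
```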
Pre-order with null markers. Split by delimiter for deserialization, use iterator/queue to rebuild.
Build adjacency list first. Then branch on what you're computing.
Queue + visited set. First time you reach target = shortest. O(V + E). Works for grids too (4/8 directions).
Min-heap of (distance, node). Pop smallest, relax neighbors, push updated. O((V + E) log V). Does NOT work with negative edges.
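A sketch using the common lazy-deletion variant, where stale heap entries are skipped instead of decreased in place; `graph` maps node → list of (neighbor, weight):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; assumes non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue        # stale entry: a shorter path was already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```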
Relax all edges V-1 times. O(VE). Also detects negative cycles: run one extra pass, and if any distance still improves, there is a negative cycle.
3 nested loops: intermediate k, source i, dest j. dp[i][j] = min(dp[i][j], dp[i][k] + dp[k][j]). O(V³), only for n ≤ 400.
Deque. Push front for weight-0 edges, push back for weight-1. O(V + E). Faster than Dijkstra for this case.
Two ops: find(x) with path compression, union(x,y) by rank/size. Near-O(1) amortized. Used for: Kruskal's MST, redundant connection, accounts merge.
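The two operations fit in a short class; this sketch uses path halving for compression and union by size:

```python
class DSU:
    """Union-Find with path compression and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False    # already connected
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```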
Count in-degrees. Queue all zero-in-degree nodes. Pop, add to result, decrement neighbors. If result length < V, there's a cycle.
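Kahn's algorithm in full, as a sketch; nodes are 0..n-1 and an edge (u, v) means u must come before v:

```python
from collections import deque

def topo_sort(n, edges):
    """Returns a topological order, or None if the graph has a cycle."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None   # short result = cycle
```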
DFS, push to stack on exit. Reverse stack = topo order. Also detects cycles via 3-color marking.
WHITE = unvisited, GRAY = in current path, BLACK = done. Hitting a GRAY node = cycle. Used for course schedule.
Sort edges by weight. For each edge, if endpoints in different components, add to MST and union. O(E log E).
Grow tree from start vertex. Min-heap of edges crossing the cut. Similar to Dijkstra. O((V + E) log V).
BFS/DFS coloring neighbors with opposite color. Conflict = not bipartite. Used for: graph coloring, possible bipartition.
Push all sources into queue at start. Used for: rotting oranges, walls and gates, nearest 0 in binary matrix.
Count islands, max area, surrounded regions. Mark visited by overwriting or using visited set.
Always starts with a sort. Sort by start for merging, by end for greedy selection.
Sort by start. For each next interval, if it overlaps with current merged, extend end. Else push current, start new. O(n log n).
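The merge loop is only a few lines; a sketch (intervals are [start, end] lists so the current merged one can be extended in place):

```python
def merge_intervals(intervals):
    """Sort by start; extend the current merged interval or start a new one."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)   # overlap: extend
        else:
            merged.append([start, end])
    return merged
```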
Three phases: (1) add all ending before new starts, (2) merge all overlapping with new, (3) add all starting after new ends. O(n).
Sort by start. Min-heap of end times. For each interval, if smallest end ≤ current start, pop (reuse room). Push current end. Heap size = answer.
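The heap-of-end-times idea, sketched for minimum meeting rooms:

```python
import heapq

def min_meeting_rooms(intervals):
    """Heap of end times of occupied rooms; reuse a room when one frees up."""
    ends = []
    for start, end in sorted(intervals):
        if ends and ends[0] <= start:
            heapq.heappop(ends)   # earliest-ending meeting is over: reuse room
        heapq.heappush(ends, end)
    return len(ends)
```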
Create (time, +1/-1) events, sort by time. Running sum = concurrent count. Max running sum = answer. Also: employee free time, car pooling.
Sort by END time. Greedily pick earliest-ending non-conflicting interval. Used for: max activities, min arrows, non-overlapping intervals.
When no pattern fits and problem asks for count/max/min/optimal, it's DP. Branch on state shape.
Signal phrases: "count the number of ways", "maximum / minimum", "is it possible", "longest / shortest", "optimal", "partition into".
What does dp[i] or dp[i][j] actually mean? Write it in English before coding.
How does dp[i] relate to smaller subproblems? What choices exist at step i?
Climb stairs, house robber, decode ways, LIS, word break. dp[i] depends on a few previous cells. Usually space-optimizable to O(1).
LCS, edit distance, regex matching, distinct subsequences, interleaving strings. dp[i][j] = answer for prefixes of length i and j.
Unique paths, min path sum, maximal square, dungeon game. dp[i][j] usually depends on top and left neighbors.
Subset sum, partition equal subset sum, target sum, last stone weight II. dp[i][w] = best using first i items with capacity w. Iterate capacity backwards for 1D.
Coin change, combination sum IV, perfect squares. Iterate capacity forwards. Items-outside = combinations, capacity-outside = permutations.
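A sketch of the unbounded knapsack shape using coin change (minimum coins); note the forward iteration over amounts, which is what allows reusing a coin:

```python
def coin_change(coins, amount):
    """dp[a] = fewest coins summing to a; -1 if unreachable."""
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```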
Burst balloons, matrix chain, palindrome partitioning, min cost to merge stones. dp[i][j] = best for subarray i..j. Iterate by length then start.
House robber III, diameter, binary tree cameras, max independent set. Return tuple of (take, skip) states from each child.
TSP, assignment problems, 'visit all nodes', partition to K equal subsets. dp[mask] or dp[mask][i] where mask = set of visited items.
'How many numbers in [L, R] have property X?' dp[pos][tight][state]. Common in contests, rare in interviews.
Buy/sell with cooldown, transaction limits. dp[i][holding][transactions]. Draw the state diagram first.
These show up less often but are instantly recognizable when they do.
Point update + range query → Fenwick (simpler). Range update + range query → segment tree with lazy propagation. O(log n) per op.
Find pattern in text in O(n+m). KMP uses failure function. Z-algorithm is conceptually simpler.
Rolling hash for O(1) substring comparison. Used for: multiple pattern matching, longest duplicate substring (with binary search).
Autocomplete, word search II, maximum XOR pair, longest common prefix. Each node has ~26 children.
O(n²). Try every center (2n−1 of them, covering odd and even lengths), expand outwards while chars match. Simple and usually fast enough.
O(n) longest palindromic substring. Only needed for tight constraints. Uses symmetry of previously found palindromes.
Max-heap for lower half, min-heap for upper half. Balance sizes after each insert. Median = top of one heap or average.
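A sketch of the two-heap balance; Python's heapq is a min-heap, so the lower half stores negated values:

```python
import heapq

class MedianFinder:
    """Max-heap (negated) for the lower half, min-heap for the upper half."""
    def __init__(self):
        self.lo = []        # max-heap via negation
        self.hi = []        # min-heap

    def add(self, num):
        heapq.heappush(self.lo, -num)
        # Invariant 1: every element of lo <= every element of hi.
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Invariant 2: lo is the same size as hi, or one larger.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```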
HashMap of key → doubly linked list node. O(1) get and put. On access, move to front. On capacity exceeded, evict tail.
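In Python, `collections.OrderedDict` already bundles the hashmap and the doubly linked list into one structure, so a sketch can lean on it; a from-scratch version keeps the two pieces separate, as described above:

```python
from collections import OrderedDict

class LRUCache:
    """get/put in O(1); least recently used entry sits at the front."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
```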
HashMap + frequency buckets (each bucket = DLL). Track min frequency. More complex than LRU but same O(1).
'Can player 1 win with optimal play?' Memoized minimax. State includes whose turn. Used for: stone game, predict the winner.
Euclidean: gcd(a, b) = gcd(b, a % b). LCM = a*b/gcd. Used in: fraction simplification, coprime checks.
Find all primes up to n in O(n log log n). Mark multiples as composite.
pow(x, n) in O(log n) via repeated squaring. Combined with matrix exponentiation for Fibonacci in O(log n).
Pick k random items from stream of unknown length. For item i (1-indexed), keep with probability k/i. O(n) time, O(k) space.
In-place random shuffle. For i from n-1 down to 1: swap a[i] with a[random(0, i)]. Uniform distribution, O(n).
Ford-Fulkerson, Edmonds-Karp, Dinic's. Min-cut = max-flow. Shows up in hard problems disguised as bipartite matching.
If you can't place the problem after going through the trees, these heuristics often break the deadlock. Apply them in order.
When you see the signal phrase on the left, reach for the pattern on the right first.
| Signal / problem phrase | First thing to try |
|---|---|
| "Contiguous subarray / substring" + sum or length | Sliding window or prefix sum |
| Sorted array + pair or triplet with target | Two pointers |
| "Kth largest / smallest" | Heap of size K, or Quickselect |
| "Top K frequent" | HashMap + heap, or bucket sort |
| "Next greater / smaller element" | Monotonic stack |
| "Max / min in sliding window" | Monotonic deque |
| "Shortest path" (unweighted) | BFS |
| "Shortest path" (weighted, non-negative) | Dijkstra |
| "Shortest path" (negative edges) | Bellman-Ford |
| "All pairs shortest path" | Floyd-Warshall (n ≤ 400) |
| "Course schedule" / dependencies / ordering | Topological sort |
| Connected components / union operations | Union-Find (DSU) or DFS |
| "Minimum spanning tree" | Kruskal or Prim |
| "Count the number of ways" | DP |
| "Maximum / minimum" + optimal substructure | DP |
| "Is it possible to…" | DP (decision variant) or BFS |
| "All permutations / combinations / subsets" | Backtracking |
| "N queens" / constraint satisfaction | Backtracking with pruning |
| "Longest palindromic substring" | Expand around center or DP |
| Prefix matching / autocomplete / word search | Trie |
| Range query on mutable array | Segment tree or Fenwick tree |
| Range query on immutable array | Prefix sum |
| "Cycle in linked list" / "find middle" | Fast and slow pointers |
| Reverse linked list (whole or part) | Iterative pointer reversal |
| In-place with values in 1..n | Cyclic sort |
| "Merge k sorted…" | Min-heap of heads |
| "Median of data stream" | Two heaps (max-heap + min-heap) |
| "LRU cache" / O(1) get and put | HashMap + doubly linked list |
| Interval overlap / merge | Sort by start, sweep |
| "Maximum non-overlapping intervals" | Sort by end, greedy |
| "Minimum meeting rooms" | Heap of end times, or sweep line |
| n ≤ 20 with "visit all" or assignment | Bitmask DP |
| Search space has monotonic property | Binary search on answer |
| "Edit distance" / LCS / two strings | 2D DP |
| "Robber" / "paint houses" / "jump game" | 1D DP |
| "Buy / sell stock" with constraints | State machine DP |
| "Burst balloons" / "matrix chain" | Interval DP |
| "Partition array into K equal subsets" | Bitmask DP or backtracking |
| "Coin change" / "ways to make amount" | Unbounded knapsack DP |
| "Subset sum" / "equal partition" | 0/1 knapsack DP |
| Grid with "islands" / "regions" | DFS / BFS flood fill |
| Grid with "rotting" / "fire spreading" | Multi-source BFS |
| Matrix with "transform in place" | Two-pointer / layer-by-layer |
| "Clone graph" / "deep copy" | HashMap + DFS/BFS |
| "Find all anagrams" / "character match" | Sliding window + counter |
| "XOR" tricks / "single number" | Bit manipulation |
| "Power of 2/3/4" / bit checks | Bit manipulation |
| "Serialize" a data structure | Pre-order with null markers |
| Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| Sorted array | O(1) | O(log n) | O(n) | O(n) |
| Linked list | O(n) | O(n) | O(1)* | O(1)* |
| Hash table | — | O(1) avg | O(1) avg | O(1) avg |
| BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n) |
| Heap (binary) | — | O(n) | O(log n) | O(log n) |
| Trie | O(k) | O(k) | O(k) | O(k) |
| Union-Find | — | ~O(1) | ~O(1) | — |
| Segment tree | O(log n) | O(log n) | O(log n) | O(log n) |
| Fenwick tree (BIT) | O(log n) | — | O(log n) | — |
* Given pointer to node. k = key length for trie.
| Algorithm | Best | Average | Worst | Space | Stable |
|---|---|---|---|---|---|
| Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
| Mergesort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1) | No |
| Timsort | O(n) | O(n log n) | O(n log n) | O(n) | Yes |
| Insertion sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Counting sort | O(n+k) | O(n+k) | O(n+k) | O(k) | Yes |
| Radix sort | O(nk) | O(nk) | O(nk) | O(n+k) | Yes |
| Algorithm | Time | Space | Use case |
|---|---|---|---|
| BFS / DFS | O(V + E) | O(V) | Traversal, unweighted shortest path |
| Dijkstra (binary heap) | O((V+E) log V) | O(V) | Non-negative weighted shortest path |
| Bellman-Ford | O(VE) | O(V) | Negative edges, cycle detection |
| Floyd-Warshall | O(V³) | O(V²) | All-pairs shortest path |
| Topological sort | O(V + E) | O(V) | DAG ordering |
| Kruskal's MST | O(E log E) | O(V) | MST with edge list |
| Prim's MST | O((V+E) log V) | O(V) | MST with adjacency list |
| Tarjan's SCC | O(V + E) | O(V) | Strongly connected components |