
DSA Master Reference

Visual decision trees + complete pattern library + frameworks + cheat sheet. By LeetDaily

How to use this page: For pattern recognition, start with the master tree and trace down. For deep dives, read the detail sections below each tree. For quick recall during practice, use the cheat sheet at the bottom. Every section is independent — jump around freely.

The 5-step workflow

Every DSA problem follows the same process. Internalize this order and you'll never stare blankly at a problem again.

  1. Read constraints
  2. Classify input
  3. Match pattern
  4. DP fallback
  5. Code it

Step 1 — Constraints → complexity

Modern judges do roughly 10⁸ simple operations per second. Use this table in reverse: the constraint tells you what complexity is allowed, which narrows the pattern space dramatically.

Constraint | Target complexity | Ops | Likely pattern
n ≤ 10 | O(n!) / O(2ⁿ·n) | ~3.6M | Backtracking, permutations, brute force
n ≤ 20 | O(2ⁿ) | ~1M | Bitmask DP, meet in the middle
n ≤ 100 | O(n³) / O(n⁴) | ~1M–100M | Floyd-Warshall, interval DP, matrix DP
n ≤ 400 | O(n³) | ~64M | Floyd-Warshall, 3D DP
n ≤ 1,000 | O(n²) | ~1M | DP, nested loops, 2D grid problems
n ≤ 10,000 | O(n² / log n) | ~10M | Light nested loops, n log n sorting
n ≤ 10⁵ | O(n log n) | ~1.7M | Sorting, heap, binary search, segment tree
n ≤ 10⁶ | O(n) / O(n log n) | ~1M–20M | Two pointers, sliding window, hashing
n ≤ 10⁸ | O(n) tight / O(√n) | ~10⁸ | Linear scan only, no hidden constants
n ≤ 10⁹ | O(log n) / O(√n) | ~30 | Binary search on answer, math formula
n ≤ 10¹⁸ | O(log n) | ~60 | Matrix exponentiation, number theory

Master decision tree

Start here for every problem. The input type routes you to the right sub-tree.

Legend for the trees below: start here · question to ask · pattern to use · sub-category · DP fallback.

  • Read the problem (note constraints)
    • What is the input?
      • Array / string
        • → Tree 1
      • Linked list
        • → Tree 2
      • Tree
        • → Tree 3
      • Graph / grid
        • → Tree 4
      • Intervals
        • → Tree 5
      • None fits
        • → DP tree

Tree 1 — Array / string

First question: is it sorted? Sorted unlocks two pointers + binary search.

  • Array / string
    • Is it sorted? (or can be sorted)
      • YES
        • What's asked?
          • Binary search: find target / boundary
          • Two pointers: pair with sum / diff
          • Heap / Quickselect: Kth smallest / largest
          • BS on answer: monotonic answer space
      • NO
        • Contiguous range?
          • YES
            • Sliding window: fixed or variable
            • Prefix sum + map: sum equals K
            • Monotonic deque: max / min in window
          • NO
            • Monotonic stack: next greater / smaller
            • HashMap: frequency / anagrams
            • Cyclic sort: values in 1..n
            • DP: LIS / LCS / edit distance

Array / string patterns in depth

Sliding window

Fixed window size

Size K given upfront. Slide across, maintain running quantity (sum, max, count) in O(1) per step. Example: max sum subarray of size K.

Sliding window

Variable window

'Longest / shortest subarray with ≤ K distinct' or 'sum ≥ X'. Expand right, contract left when constraint breaks. Example: longest substring without repeating chars.
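A minimal Python sketch of the expand/contract loop, using the longest-substring-without-repeats example from above:

```python
def longest_unique_substring(s: str) -> int:
    """Longest substring without repeating characters. O(n)."""
    last_seen = {}          # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        # If ch repeats inside the current window, contract left past its last occurrence.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

The same skeleton handles "at most K distinct" by contracting while the constraint is broken instead of jumping left directly.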

Prefix sum

Prefix sum + HashMap

'Subarray sum equals K' or 'divisible by K' or 'count subarrays with property'. Store prefix sums in map; check for complements in O(1).
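The complement check can be sketched like this for the counting variant of "subarray sum equals K":

```python
from collections import defaultdict

def count_subarrays_with_sum(nums, k):
    """Count contiguous subarrays summing to k. O(n) time, O(n) space."""
    seen = defaultdict(int)
    seen[0] = 1                       # empty prefix
    prefix = count = 0
    for x in nums:
        prefix += x
        count += seen[prefix - k]     # subarrays ending here with sum k
        seen[prefix] += 1
    return count
```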

Two pointers

Opposite ends

Sorted array + pair/triplet with target sum. Start at both ends, move based on comparison. Example: two-sum sorted, 3-sum, container with most water.

Two pointers

Same direction (fast/slow)

In-place array modification: remove duplicates, move zeros, partition. Slow = write position, fast = read position.

Monotonic stack

Next greater / smaller

Maintain stack of indices with monotonic values. Pop when new element breaks monotonicity. Example: daily temperatures, largest rectangle in histogram.
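Daily temperatures as a sketch of the pop-on-break idea:

```python
def daily_temperatures(temps):
    """For each day, days until a warmer temperature (0 if none). O(n)."""
    answer = [0] * len(temps)
    stack = []                        # indices whose temps are strictly decreasing
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()           # t is the next greater element for day j
            answer[j] = i - j
        stack.append(i)
    return answer
```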

Monotonic deque

Max / min in sliding window

Deque of indices in decreasing (for max) or increasing (for min) order. Front is always the answer. O(n) total.
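A sketch for max in each window of size k:

```python
from collections import deque

def sliding_window_max(nums, k):
    """Max of every window of size k. O(n) total."""
    dq = deque()                      # indices; values decreasing front-to-back
    out = []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:   # drop smaller tail values
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:                # front index fell out of the window
            dq.popleft()
        if i >= k - 1:
            out.append(nums[dq[0]])       # front is the window max
    return out
```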

HashMap

Frequency / anagrams

Count occurrences, detect duplicates, group anagrams. Character counter as key for anagram problems.

Cyclic sort

Values in 1..n

When array contains integers in bounded range, place each at its 'correct' index. Used for: find missing, find duplicate, first missing positive. O(n) time, O(1) space.
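First missing positive as a sketch of the swap-to-correct-index loop:

```python
def first_missing_positive(nums):
    """Smallest missing positive integer. O(n) time, O(1) extra space."""
    n = len(nums)
    for i in range(n):
        # Keep swapping nums[i] into its correct slot (value v belongs at index v-1).
        while 1 <= nums[i] <= n and nums[nums[i] - 1] != nums[i]:
            j = nums[i] - 1
            nums[i], nums[j] = nums[j], nums[i]
    for i in range(n):
        if nums[i] != i + 1:
            return i + 1
    return n + 1
```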

Top K

Heap of size K

Min-heap of size K for top K largest, max-heap for top K smallest. O(n log K). Quickselect gives O(n) average.

Binary search

On sorted array

Classic target find, or variant: first/last occurrence, leftmost/rightmost, rotated array search. Always check loop invariant.

Binary search

On answer space

When answer has monotonic property ('can we do X with budget B?'). Example: koko eating bananas, split array largest sum, capacity to ship packages.
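Koko eating bananas as a sketch of the feasibility-check binary search:

```python
def min_eating_speed(piles, h):
    """Slowest eating speed k so all piles fit in h hours. O(n log max(piles))."""
    def hours(k):
        return sum((p + k - 1) // k for p in piles)   # ceil division per pile

    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if hours(mid) <= h:           # feasible: try a slower speed
            hi = mid
        else:                         # infeasible: must eat faster
            lo = mid + 1
    return lo
```

The answer space is monotonic: if speed k works, every speed above k works too, which is exactly what makes binary search valid.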

Tree 2 — Linked list

Only five patterns exist. Match by what you need to find or change.

  • Linked list
    • What do you need?
      • Find a position
        • Fast & slow: cycle, middle, nth from end
      • Change structure
        • Iterative reverse: whole / partial / k-group
        • Two-pointer merge: merge sorted
        • Min-heap: merge K sorted
      • Composite
        • Find mid + reverse + merge: reorder / palindrome
        • HashMap / interleave: deep copy with random

Linked list patterns in depth

Fast & slow

Floyd's tortoise and hare

Slow moves 1 step, fast moves 2. If they meet, cycle exists. When fast reaches end, slow is at middle. For nth from end: move fast n steps first, then both together.

Cycle start

Find cycle start node

After detection, reset slow to head. Move both 1 step at a time. They meet at cycle start. (Floyd's second phase.)

Reversal

In-place pointer flip

prev = null, curr = head. Loop: save next, point curr to prev, advance prev and curr. Return prev. For k-group: reverse k at a time, connect.
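The pointer flip as a runnable Python sketch (a minimal Node class is assumed for illustration):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse_list(head):
    """Reverse a singly linked list in place. O(n) time, O(1) space."""
    prev = None
    curr = head
    while curr:
        nxt = curr.next       # save next
        curr.next = prev      # flip the pointer
        prev, curr = curr, nxt
    return prev               # prev is the new head
```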

Merge

Two sorted lists

Dummy head + tail pointer. Pick smaller, advance, repeat. Attach remainder. O(m+n).

Merge K

Min-heap of heads

Push all heads into min-heap. Pop smallest, push its next. O(N log K) where N = total nodes.

Composite

Reorder / palindrome check

Find middle with fast/slow → reverse second half → merge/compare with first half. Three primitives combined.

Deep copy

Random pointer clone

Method 1: HashMap old→new, two passes, O(n) extra space. Method 2: interleave clones (A→A'→B→B'→...), fix randoms, separate; O(n) time, O(1) extra space.

Tree 3 — Tree

First check: BST or not? Then match the answer shape.

  • Tree
    • Is it a BST?
      • YES
        • Inorder: Kth smallest, validate
        • BST recursion: search / insert / delete
      • NO
        • Answer shape?
          • BFS: level order, right view
          • DFS + backtrack: root-to-leaf paths
          • Post-order: height, diameter, max path
          • LCA recursion: common ancestor
          • Pre-order + null: serialize / deserialize
          • Tree DP: house robber III

Tree patterns + recursion template

Tree recursion template — three shapes

Every tree problem fits one of these:

  1. Return a scalar (height, count, sum) — post-order, combine children's returns. Example: return 1 + max(left, right).
  2. Return a tuple — e.g. (with_node, without_node) for house robber on tree, (is_balanced, height) for balanced check.
  3. Pass state down, update global — recursion returns single-branch contribution; a closure variable tracks global best. Used for: max path sum, diameter.
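Shape 3 can be sketched with the diameter example: the recursion returns the single-branch depth while a nonlocal variable tracks the global best (a minimal TreeNode class is assumed):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def diameter(root):
    """Longest path between any two nodes, counted in edges."""
    best = 0
    def depth(node):
        nonlocal best
        if not node:
            return 0
        l, r = depth(node.left), depth(node.right)
        best = max(best, l + r)       # best path passing through this node
        return 1 + max(l, r)          # single-branch contribution to the parent
    depth(root)
    return best
```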
BFS

Level order traversal

Queue-based. Process level-by-level: record size at each level, process that many nodes. Used for: level order, right view, zigzag, minimum depth.

DFS

Pre-order (root, left, right)

Process node before children. Used for: serialize tree, path from root, copy tree.

DFS

In-order (left, root, right)

For BST, visits in sorted order. Used for: validate BST, Kth smallest in BST, BST to sorted list.

DFS

Post-order (left, right, root)

Process children first. Used for: height, diameter, max path sum, delete tree, bottom-up problems.

Path sum

Root-to-leaf paths

DFS with running sum parameter. Backtrack on return. Variations: exists, collect all, count with prefix sum map (for 'any node to any descendant').

LCA

Lowest common ancestor

If node is p or q, return it. Else recurse left and right. If both non-null, current is LCA. Else return whichever is non-null.

Diameter

Longest path through tree

Post-order: return max depth of subtree. Update global answer with left_depth + right_depth at each node.

Serialize

Tree to string and back

Pre-order with null markers. Split by delimiter for deserialization, use iterator/queue to rebuild.

Tree 4 — Graph / grid

Build adjacency list first. Then branch on what you're computing.

  • Graph
    • What do you need?
      • Shortest path
        • Edge weights?
          • BFS: unweighted
          • Dijkstra: non-negative
          • Bellman-Ford: negatives
          • Floyd-Warshall: all pairs, n ≤ 400
          • 0-1 BFS: weights 0 or 1
      • Connectivity
        • DFS / BFS: count components
        • Union-Find: dynamic edges
        • Tarjan / Kosaraju: SCCs
      • Ordering
        • Topo sort: Kahn's / DFS
      • Cycles
        • Union-Find / DFS: undirected
        • DFS 3-color: directed
      • Other
        • Kruskal / Prim: MST
        • BFS 2-color: bipartite
        • Flood fill: grid islands

Graph patterns in depth

BFS

Unweighted shortest path

Queue + visited set. First time you reach target = shortest. O(V + E). Works for grids too (4/8 directions).

Dijkstra

Non-negative weights

Min-heap of (distance, node). Pop smallest, relax neighbors, push updated. O((V + E) log V). Does NOT work with negative edges.
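A sketch with a lazy-deletion heap (stale entries are skipped on pop); the adjacency-dict shape is an assumption for illustration:

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src over non-negative weights.
    adj maps u -> list of (v, w) edges. O((V + E) log V)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # relax edge u -> v
                heapq.heappush(heap, (nd, v))
    return dist
```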

Bellman-Ford

Negative edges allowed

Relax all edges V-1 times. O(VE). Also detects negative cycles: if a V-th pass still relaxes any edge, a negative cycle exists.

Floyd-Warshall

All pairs shortest path

3 nested loops: intermediate k, source i, dest j. dp[i][j] = min(dp[i][j], dp[i][k] + dp[k][j]). O(V³), only for n ≤ 400.

0-1 BFS

Weights 0 or 1 only

Deque. Push front for weight-0 edges, push back for weight-1. O(V + E). Faster than Dijkstra for this case.

Union-Find

Dynamic connectivity

Two ops: find(x) with path compression, union(x,y) by rank/size. Near-O(1) amortized. Used for: Kruskal's MST, redundant connection, accounts merge.
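A minimal DSU sketch with path compression (the path-halving variant here) and union by size:

```python
class DSU:
    """Union-Find with path compression and union by size. Near-O(1) amortized per op."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already in the same component
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra               # attach smaller tree under larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```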

Topo sort

Kahn's algorithm (BFS)

Count in-degrees. Queue all zero-in-degree nodes. Pop, add to result, decrement neighbors. If result length < V, there's a cycle.
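Kahn's algorithm as a sketch; it returns None when a cycle prevents a full ordering:

```python
from collections import deque

def topo_sort(n, edges):
    """Topological order of nodes 0..n-1, or None if the graph has a cycle."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:                    # directed edge u -> v
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None
```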

Topo sort

DFS-based

DFS, push to stack on exit. Reverse stack = topo order. Also detects cycles via 3-color marking.

DFS 3-color

Directed cycle detection

WHITE = unvisited, GRAY = in current path, BLACK = done. Hitting a GRAY node = cycle. Used for course schedule.

MST

Kruskal (sort edges + DSU)

Sort edges by weight. For each edge, if endpoints in different components, add to MST and union. O(E log E).

MST

Prim (heap from vertex)

Grow tree from start vertex. Min-heap of edges crossing the cut. Similar to Dijkstra. O((V + E) log V).

Bipartite

2-coloring

BFS/DFS coloring neighbors with opposite color. Conflict = not bipartite. Used for: graph coloring, possible bipartition.

Grid BFS

Multi-source BFS

Push all sources into queue at start. Used for: rotting oranges, walls and gates, nearest 0 in binary matrix.

Grid DFS

Flood fill

Count islands, max area, surrounded regions. Mark visited by overwriting or using visited set.

Tree 5 — Intervals

Always starts with a sort. Sort by start for merging, by end for greedy selection.

  • Intervals
    • What's the goal?
      • Combine
        • Sort by start: merge overlapping
        • 3-phase scan: insert interval
      • Count resources
        • Min-heap of ends: meeting rooms
        • Sweep line: max concurrent
      • Maximize
        • Sort by END, greedy: max non-overlapping

Merge

Merge overlapping

Sort by start. For each next interval, if it overlaps with current merged, extend end. Else push current, start new. O(n log n).
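A sketch of the sort-then-extend loop:

```python
def merge_intervals(intervals):
    """Merge overlapping intervals. O(n log n) for the sort."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:           # overlaps current run
            merged[-1][1] = max(merged[-1][1], end)     # extend its end
        else:
            merged.append([start, end])                 # start a new run
    return merged
```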

Insert

Insert new interval

Three phases: (1) add all ending before new starts, (2) merge all overlapping with new, (3) add all starting after new ends. O(n).

Heap

Minimum meeting rooms

Sort by start. Min-heap of end times. For each interval, if smallest end ≤ current start, pop (reuse room). Push current end. Heap size = answer.
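A sketch of the end-times heap:

```python
import heapq

def min_meeting_rooms(intervals):
    """Minimum rooms so no two meetings share one. O(n log n)."""
    ends = []                             # min-heap of end times of rooms in use
    for start, end in sorted(intervals):
        if ends and ends[0] <= start:     # earliest-ending room is free: reuse it
            heapq.heappop(ends)
        heapq.heappush(ends, end)
    return len(ends)                      # heap size = rooms needed
```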

Sweep line

Events on timeline

Create (time, +1/-1) events, sort by time. Running sum = concurrent count. Max running sum = answer. Also: employee free time, car pooling.

Greedy

Maximum non-overlapping

Sort by END time. Greedily pick earliest-ending non-conflicting interval. Used for: max activities, min arrows, non-overlapping intervals.

DP decision tree

When no pattern fits and problem asks for count/max/min/optimal, it's DP. Branch on state shape.

Signal phrases: "count the number of ways", "maximum / minimum", "is it possible", "longest / shortest", "optimal", "partition into".

  • Dynamic programming
    • State shape?
      • Single index
        • 1D DP: robber, LIS, stairs
      • Two sequences
        • 2D DP: LCS, edit distance
      • Grid position
        • Grid DP: unique paths
      • Items + capacity
        • Knapsack DP: 0/1, unbounded
      • Range [i..j]
        • Interval DP: burst balloons
      • Set of visited
        • Bitmask DP: TSP, n ≤ 20
      • Tree nodes
        • Tree DP: post-order tuples
      • Has states
        • State machine: stock problems

DP framework + all types

The DP framework — apply to every DP problem:
  1. Define the state. What does dp[i] or dp[i][j] actually mean? Write it in English before coding.
  2. Write the recurrence. How does dp[i] relate to smaller subproblems? What choices exist at step i?
  3. Base cases. Smallest subproblems.
  4. Order of computation. Bottom-up (dependencies first) or top-down memoization.
  5. Space optimization. Often only last 1-2 rows matter. Do this AFTER basic version works.
1D DP

State = single index

Climb stairs, house robber, decode ways, LIS, word break. dp[i] depends on a few previous cells. Usually space-optimizable to O(1).

2D DP

Two strings / sequences

LCS, edit distance, regex matching, distinct subsequences, interleaving strings. dp[i][j] = answer for prefixes of length i and j.

Grid DP

2D grid paths

Unique paths, min path sum, maximal square, dungeon game. dp[i][j] usually depends on top and left neighbors.

0/1 Knapsack

Each item once

Subset sum, partition equal subset sum, target sum, last stone weight II. dp[i][w] = best using first i items with capacity w. Iterate capacity backwards for 1D.
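Partition equal subset sum as a sketch of the 1D table with backwards capacity iteration:

```python
def can_partition(nums):
    """Can the array be split into two equal-sum subsets? 0/1 knapsack, O(n * sum/2)."""
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    dp = [False] * (target + 1)     # dp[w]: some subset sums to w
    dp[0] = True
    for x in nums:
        for w in range(target, x - 1, -1):   # backwards so each item is used once
            dp[w] = dp[w] or dp[w - x]
    return dp[target]
```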

Unbounded knapsack

Items reusable

Coin change, combination sum IV, perfect squares. Iterate capacity forwards. Loop order matters when counting: items in the outer loop counts combinations; capacity in the outer loop counts permutations.
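Coin change (fewest coins) as a sketch of the forwards capacity iteration:

```python
def coin_change(coins, amount):
    """Fewest coins to make amount; -1 if impossible. O(amount * len(coins))."""
    INF = float("inf")
    dp = [0] + [INF] * amount       # dp[w]: fewest coins summing to w
    for c in coins:
        for w in range(c, amount + 1):   # forwards: each coin reusable
            dp[w] = min(dp[w], dp[w - c] + 1)
    return dp[amount] if dp[amount] < INF else -1
```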

Interval DP

Solve on ranges

Burst balloons, matrix chain, palindrome partitioning, min cost to merge stones. dp[i][j] = best for subarray i..j. Iterate by length then start.

Tree DP

Post-order with tuples

House robber III, diameter, binary tree cameras, max independent set. Return tuple of (take, skip) states from each child.

Bitmask DP

n ≤ 20 visited set

TSP, assignment problems, 'visit all nodes', partition to K equal subsets. dp[mask] or dp[mask][i] where mask = set of visited items.

Digit DP

Count numbers with property

'How many numbers in [L, R] have property X?' dp[pos][tight][state]. Common in contests, rare in interviews.

State machine

Stock / cooldown problems

Buy/sell with cooldown, transaction limits. dp[i][holding][transactions]. Draw the state diagram first.

Advanced patterns

These show up less often but are instantly recognizable when they do.

Range queries

Segment tree / Fenwick (BIT)

Point update + range query → Fenwick (simpler). Range update + range query → segment tree with lazy propagation. O(log n) per op.

String matching

KMP / Z-algorithm

Find pattern in text in O(n+m). KMP uses failure function. Z-algorithm is conceptually simpler.

String hashing

Rabin-Karp

Rolling hash for O(1) substring comparison. Used for: multiple pattern matching, longest duplicate substring (with binary search).

Trie

Prefix tree

Autocomplete, word search II, maximum XOR pair, longest common prefix. Each node has ~26 children.

Palindromes

Expand around center

O(n²). Try every index (odd and even length), expand outwards while chars match. Simple and usually fast enough.

Palindromes

Manacher's algorithm

O(n) longest palindromic substring. Only needed for tight constraints. Uses symmetry of previously found palindromes.

Two heaps

Median from data stream

Max-heap for lower half, min-heap for upper half. Balance sizes after each insert. Median = top of one heap or average.

Design

LRU cache

HashMap of key → doubly linked list node. O(1) get and put. On access, move to front. On capacity exceeded, evict tail.

Design

LFU cache

HashMap + frequency buckets (each bucket = DLL). Track min frequency. More complex than LRU but same O(1).

Game theory

Minimax DP

'Can player 1 win with optimal play?' Memoized minimax. State includes whose turn. Used for: stone game, predict the winner.

Math

GCD / LCM

Euclidean: gcd(a, b) = gcd(b, a % b). LCM = a*b/gcd. Used in: fraction simplification, coprime checks.

Math

Sieve of Eratosthenes

Find all primes up to n in O(n log log n). Mark multiples as composite.
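A sketch, starting each prime's marking at p² (smaller multiples were already marked by smaller primes):

```python
def sieve(n):
    """All primes up to and including n. O(n log log n)."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):   # mark multiples composite
                is_prime[m] = False
    return [i for i, ok in enumerate(is_prime) if ok]
```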

Math

Fast exponentiation

pow(x, n) in O(log n) via repeated squaring. Combined with matrix exponentiation for Fibonacci in O(log n).

Random

Reservoir sampling

Pick k random items from stream of unknown length. For item i (1-indexed), keep with probability k/i. O(n) time, O(k) space.

Random

Fisher-Yates shuffle

In-place random shuffle. For i from n-1 down to 1: swap a[i] with a[random(0, i)]. Uniform distribution, O(n).

Flow

Max flow (rare)

Ford-Fulkerson, Edmonds-Karp, Dinic's. Min-cut = max-flow. Shows up in hard problems disguised as bipartite matching.

Meta-strategy when stuck

If you can't place the problem after going through the trees, these heuristics often break the deadlock. Apply them in order.

  1. State the brute force out loud. Even if you won't code it, saying it clarifies the problem. It also gives you a baseline complexity to improve from.
  2. Look for redundant work in the brute force. Repeated computation → memoization. Repeated sort → precompute once. Repeated lookup → hashmap.
  3. Reverse the problem. Iterate from the end. Think about what the answer isn't. Swap source and target. Sometimes the reverse is trivial.
  4. Sort if order doesn't matter. Sorting unlocks two pointers, greedy, binary search. Almost always worth the O(n log n) cost.
  5. Think about invariants. What stays true as you iterate? Key insight for monotonic stack, deque, and most greedy proofs.
  6. Draw it. Trees, graphs, DP tables. Your visual cortex spots patterns your verbal reasoning misses.
  7. Work small examples by hand. n=1, n=2, n=3. Patterns emerge.
  8. Ask what data structure gives O(1) for the bottleneck. "Find min" → heap. "Find by key" → hashmap. "Next greater" → monotonic stack.
  9. Consider the complement. Instead of counting what satisfies, count what doesn't and subtract from total.
  10. Look for a monotonic property. If you can binary search on the answer, the problem reduces to a feasibility check.

Pattern cheat sheet

When you see the signal phrase on the left, reach for the pattern on the right first.

Signal / problem phrase | First thing to try
"Contiguous subarray / substring" + sum or length | Sliding window or prefix sum
Sorted array + pair or triplet with target | Two pointers
"Kth largest / smallest" | Heap of size K, or Quickselect
"Top K frequent" | HashMap + heap, or bucket sort
"Next greater / smaller element" | Monotonic stack
"Max / min in sliding window" | Monotonic deque
"Shortest path" (unweighted) | BFS
"Shortest path" (weighted, non-negative) | Dijkstra
"Shortest path" (negative edges) | Bellman-Ford
"All pairs shortest path" | Floyd-Warshall (n ≤ 400)
"Course schedule" / dependencies / ordering | Topological sort
Connected components / union operations | Union-Find (DSU) or DFS
"Minimum spanning tree" | Kruskal or Prim
"Count the number of ways" | DP
"Maximum / minimum" + optimal substructure | DP
"Is it possible to…" | DP (decision variant) or BFS
"All permutations / combinations / subsets" | Backtracking
"N queens" / constraint satisfaction | Backtracking with pruning
"Longest palindromic substring" | Expand around center or DP
Prefix matching / autocomplete / word search | Trie
Range query on mutable array | Segment tree or Fenwick tree
Range query on immutable array | Prefix sum
"Cycle in linked list" / "find middle" | Fast and slow pointers
Reverse linked list (whole or part) | Iterative pointer reversal
In-place with values in 1..n | Cyclic sort
"Merge k sorted…" | Min-heap of heads
"Median of data stream" | Two heaps (max-heap + min-heap)
"LRU cache" / O(1) get and put | HashMap + doubly linked list
Interval overlap / merge | Sort by start, sweep
"Maximum non-overlapping intervals" | Sort by end, greedy
"Minimum meeting rooms" | Heap of end times, or sweep line
n ≤ 20 with "visit all" or assignment | Bitmask DP
Search space has monotonic property | Binary search on answer
"Edit distance" / LCS / two strings | 2D DP
"Robber" / "paint houses" / "jump game" | 1D DP
"Buy / sell stock" with constraints | State machine DP
"Burst balloons" / "matrix chain" | Interval DP
"Partition array into K equal subsets" | Bitmask DP or backtracking
"Coin change" / "ways to make amount" | Unbounded knapsack DP
"Subset sum" / "equal partition" | 0/1 knapsack DP
Grid with "islands" / "regions" | DFS / BFS flood fill
Grid with "rotting" / "fire spreading" | Multi-source BFS
Matrix with "transform in place" | Two-pointer / layer-by-layer
"Clone graph" / "deep copy" | HashMap + DFS/BFS
"Find all anagrams" / "character match" | Sliding window + counter
"XOR" tricks / "single number" | Bit manipulation
"Power of 2/3/4" / bit checks | Bit manipulation
"Serialize" a data structure | Pre-order with null markers

Complexity reference

Data structures

Structure | Access | Search | Insert | Delete
Array | O(1) | O(n) | O(n) | O(n)
Sorted array | O(1) | O(log n) | O(n) | O(n)
Linked list | O(n) | O(n) | O(1)* | O(1)*
Hash table | — | O(1) avg | O(1) avg | O(1) avg
BST (balanced) | O(log n) | O(log n) | O(log n) | O(log n)
Heap (binary) | — | O(n) | O(log n) | O(log n)
Trie | O(k) | O(k) | O(k) | O(k)
Union-Find | — | ~O(1) find | ~O(1) union | —
Segment tree | O(log n) | O(log n) | O(log n) | O(log n)
Fenwick tree (BIT) | — | O(log n) query | O(log n) update | —

* Given pointer to node. k = key length for trie.

Sorting algorithms

Algorithm | Best | Average | Worst | Space | Stable
Quicksort | O(n log n) | O(n log n) | O(n²) | O(log n) | No
Mergesort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes
Heapsort | O(n log n) | O(n log n) | O(n log n) | O(1) | No
Timsort | O(n) | O(n log n) | O(n log n) | O(n) | Yes
Insertion sort | O(n) | O(n²) | O(n²) | O(1) | Yes
Counting sort | O(n+k) | O(n+k) | O(n+k) | O(k) | Yes
Radix sort | O(nk) | O(nk) | O(nk) | O(n+k) | Yes

Graph algorithms

Algorithm | Time | Space | Use case
BFS / DFS | O(V + E) | O(V) | Traversal, unweighted shortest path
Dijkstra (binary heap) | O((V+E) log V) | O(V) | Non-negative weighted shortest path
Bellman-Ford | O(VE) | O(V) | Negative edges, cycle detection
Floyd-Warshall | O(V³) | O(V²) | All-pairs shortest path
Topological sort | O(V + E) | O(V) | DAG ordering
Kruskal's MST | O(E log E) | O(V) | MST with edge list
Prim's MST | O((V+E) log V) | O(V) | MST with adjacency list
Tarjan's SCC | O(V + E) | O(V) | Strongly connected components

Final advice: Don't try to memorize this entire page. Use the decision trees for pattern recognition, then drill specific sub-trees as you encounter those problems in practice. After ~50 problems, the master tree becomes automatic. After ~150, you see the pattern before finishing the problem statement. After ~300, you start recognizing which DP variant applies just from constraints alone. Speed comes from deliberate repetition against this framework — not from re-reading it.