Binary search — 5 day plan

9 patterns · March 14–19 · All problems link to LeetCode

Difficulty: Easy · Med · Hard
Day 1 · Mar 14 Classic binary search + boundaries Foundation day
Pattern 1 — Vanilla binary search
The base template. lo/hi/mid, exit conditions. Get this muscle-memory clean.

Pattern 2 — Left/right boundary (first/last occurrence)
Two templates: find leftmost index where condition is true, and rightmost. Most medium problems reduce to this.
Key insight: always ask — am I searching for an exact value, or a boundary? The template differs. Nail lo=mid+1 vs hi=mid.
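The three templates as a minimal Python sketch (function names are mine, for illustration; assumes a sorted list):

```python
def exact_search(a, target):
    # Exact match: closed interval [lo, hi], loop while lo <= hi.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def left_boundary(a, target):
    # First index with a[i] >= target (the insertion point).
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2          # rounds down
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

def right_boundary(a, target):
    # Last index with a[i] <= target, or -1 if none.
    lo, hi = -1, len(a) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2      # rounds up to avoid an infinite loop
        if a[mid] <= target:
            lo = mid
        else:
            hi = mid - 1
    return lo
```

Note how the rounding direction pairs with the pointer update: round down when moving lo past mid, round up when moving lo onto mid.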
Day 2 · Mar 15 Rotated arrays + mountain arrays High interview frequency
Pattern 3 — Rotated sorted array
Key idea: at least one half is always sorted. Identify which half, then decide where target lies.

Pattern 4 — Mountain / bitonic array
Find the peak, then binary search on one side. Same "which half" logic.
Both patterns share one idea: in a non-uniform array, binary search still works if you can determine a monotonic sub-region.
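The "which half is sorted" idea for rotated arrays, sketched in Python (assumes distinct elements, as in the classic problem):

```python
def search_rotated(nums, target):
    # At every step at least one half is sorted; test if target lies in it.
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[lo] <= nums[mid]:               # left half is sorted
            if nums[lo] <= target < nums[mid]:
                hi = mid - 1
            else:
                lo = mid + 1
        else:                                   # right half is sorted
            if nums[mid] < target <= nums[hi]:
                lo = mid + 1
            else:
                hi = mid - 1
    return -1
```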
Day 3 · Mar 16 Binary search on answer (the real power) Most underrated pattern
Pattern 5 — BS on answer space
The array isn't sorted — but the answer range is. Binary search on possible answer values, validate with a check function.

Pattern 6 — BS on matrix / 2D
Treat a sorted matrix as a flattened 1D array. mid/rows/cols arithmetic.
Template: lo=min_answer, hi=max_answer, check(mid) returns bool. Always ask: is check() monotone? If yes, BS applies.
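The template in action, using Koko Eating Bananas (LC 875) as one concrete instance (a sketch, not the only way to write it):

```python
def min_eating_speed(piles, h):
    # Binary search on the answer space [1, max(piles)].
    # check(k) is monotone: if speed k finishes in time, every larger speed does.
    def check(k):
        return sum((p + k - 1) // k for p in piles) <= h   # ceil division

    lo, hi = 1, max(piles)
    while lo < hi:
        mid = (lo + hi) // 2
        if check(mid):
            hi = mid          # mid works; try smaller
        else:
            lo = mid + 1      # mid fails; need faster
    return lo
```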
Day 4 · Mar 17 Kth element + median + advanced Hard territory
Pattern 7 — Kth smallest / largest via BS
Binary search on the value, count how many elements ≤ mid. Classic in sorted matrix and multiplication table problems.

Pattern 8 — Median of two sorted arrays
The canonical hard BS problem. Binary search on partition, O(log(min(m,n))). Must know this cold for FAANG.

Pattern 9 — Sqrt / math BS
Binary search over integer answer with a math check function.
On day 4 fatigue is real. Prioritize 378 (Kth Smallest Element in a Sorted Matrix) and 4 (Median of Two Sorted Arrays) — they cover the deepest conceptual ground. The rest are bonuses.
Day 5 · Mar 18–19 Mixed revision + timed mock Lock it in
Priority revisit list
Redo problems you got wrong or felt shaky on. No notes, timed at 25 min each.

The 3 templates to have cold
Write these from memory on paper before your interview day: exact search (l<=r), left boundary (l<r, round down), right boundary (l<r, round up).
By March 19: you should be able to look at any BS problem and instantly name which of the 9 patterns it is. That classification reflex is the win.

Merge sort — 2 day plan

6 patterns · March 17–18 · All problems link to LeetCode

Day 1 · Mar 17 Classic merge sort + count inversions Foundation day
Pattern 1 — Vanilla merge sort
The base template. Divide array in half, recursively sort, merge two sorted halves. Get the merge() helper muscle-memory clean — it's reused in every other pattern.

Pattern 2 — Count inversions
During the merge step, whenever a right-half element is placed before a left-half element, it contributes (mid - left_ptr + 1) inversions. Classic divide-and-conquer counting trick.
Key insight: merge sort's power isn't just sorting — it's that the merge step gives you a natural window to count cross-half relationships in O(n log n) instead of O(n²).
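A minimal sketch of the inversion count folded into merge sort (the slicing version, clearer than the in-place one for study purposes):

```python
def count_inversions(a):
    # Counts pairs (i < j, a[i] > a[j]) during the merge step.
    def sort(arr):
        if len(arr) <= 1:
            return arr, 0
        mid = len(arr) // 2
        left, inv_l = sort(arr[:mid])
        right, inv_r = sort(arr[mid:])
        merged, i, j = [], 0, 0
        inv = inv_l + inv_r
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                # right[j] jumps ahead of everything remaining in left
                inv += len(left) - i
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged, inv
    return sort(a)[1]
```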
Day 2 · Mar 18 K-way merge + divide-and-conquer applications High interview frequency
Pattern 3 — K-way merge (heap-assisted)
Generalise 2-way merge to K sorted lists/arrays. Use a min-heap of size K — always extract the global minimum. O(N log K) total. This pattern recurs constantly in stream and external-sort problems.
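A sketch of the heap-assisted K-way merge on plain Python lists (linked-list versions differ only in how the successor is fetched):

```python
import heapq

def k_way_merge(lists):
    # Heap holds one (value, list_index, element_index) entry per list.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)     # global minimum across all lists
        out.append(val)
        if j + 1 < len(lists[i]):           # push the successor from the same list
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out
```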

Pattern 4 — Divide & conquer on arrays (not sorting)
Split the problem at mid, solve left and right independently, combine results. Maximum subarray is the canonical example — the cross-mid case is a merge-like combination step.

Pattern 5 — External sort / large-data merge
Conceptual pattern tested in system design + coding hybrids. Chunk → sort chunks → k-way merge. Understand why merge sort is cache-friendly and used in databases.
Problems: 56 · Merge Intervals, 57 · Insert Interval. Interview Q: design an external sort for a 10 GB file.

Pattern 6 — Merge sort tree / offline queries
Advanced: build a segment tree where each node holds a sorted array (merge sort tree). Enables O(log² n) range frequency / order-statistic queries. Comes up in competitive programming and senior-level interviews.
By end of Day 2: you should be able to identify when a naive O(n²) counting/merging problem can be dropped to O(n log n) using the merge step. That instinct is the interview win for merge sort.

Dynamic programming — 7 day plan

10 patterns · All problems link to LeetCode

Day 1 1D DP — linear sequences Foundation day
Pattern 1 — Linear DP (dp[i] from dp[i-1] or dp[i-k])
The entry point. dp[i] represents the answer for the first i elements. Transition looks back a fixed number of steps. Covers climbing stairs, house robber, and their variants. Nail the recurrence + base case ritual here — it's the same skeleton for everything that follows.
Key insight: draw the recurrence tree first, identify overlapping subproblems, then flatten to a loop. If you can't write dp[i] = f(dp[i-1], ...) in one line, you haven't found the right state yet.
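House Robber as a minimal instance of the skeleton, with the dp array flattened to two rolling variables (one sketch among many equivalent ones):

```python
def rob(nums):
    # dp[i] = best loot from the first i houses.
    # take = best total if the previous house was robbed,
    # skip = best total if it was skipped.
    take, skip = 0, 0
    for x in nums:
        take, skip = skip + x, max(skip, take)
    return max(take, skip)
```

The one-line recurrence the key insight asks for: dp[i] = max(dp[i-1], dp[i-2] + nums[i]).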
Day 2 Kadane variants — subarray / subseq optimums High interview frequency
Pattern 2 — Maximum subarray (Kadane & variants)
dp[i] = max subarray ending exactly at i. Classic Kadane is O(n). Variants extend to products, circular arrays, and k-partitions. Every variant tweaks what "ending at i" tracks.

Pattern 3 — LIS (Longest Increasing Subsequence)
dp[i] = length of LIS ending at index i. O(n²) DP is the interview bar; O(n log n) patience sorting is the follow-up flex. Subsequence ≠ subarray — elements don't need to be contiguous.
LIS is the gateway to 2D DP — once you see dp[i] depends on all dp[j] where j < i, you're ready for the grid patterns on Day 3.
Day 3 2D DP — strings & grids Most common medium/hard type
Pattern 4 — LCS / Edit Distance (two-string DP)
dp[i][j] is defined on prefixes of two strings. Transition considers match (diagonal) vs mismatch (up/left). LCS, Edit Distance, and Shortest Common Supersequence all live here. Draw the 2D table for every new problem until the transitions feel automatic.

Pattern 5 — Grid DP (paths & obstacles)
dp[r][c] = answer for reaching cell (r,c). Moves are typically right/down. Variants add obstacles, variable costs, or multiple trips. Space can be reduced to O(n) with rolling array.
Two-string DP and grid DP are structurally identical — both fill a 2D table left-to-right, top-to-bottom. Master one, you understand both.
Day 4 Knapsack family Essential for product interviews
Pattern 6 — 0/1 Knapsack + subset sum
Each item can be taken at most once. dp[i][w] = best value using first i items with capacity w. Subset sum, partition equal subset, and target sum are all disguised 0/1 knapsacks. Key optimization: 1D dp array iterated in reverse.

Pattern 7 — Unbounded knapsack & coin change
Items can be reused unlimited times. dp iterated forward (unlike 0/1). Coin change is the canonical example. Rod cutting and integer break are structural twins.
The only difference between 0/1 and unbounded knapsack is loop direction. Forward = reuse allowed. Backward = each item once. Burn this into memory.
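Both knapsack directions side by side, as a sketch (subset sum for 0/1, coin-change counting for unbounded):

```python
def subset_sum(nums, target):
    # 0/1: iterate capacity in REVERSE so each item is used at most once.
    dp = [True] + [False] * target
    for x in nums:
        for w in range(target, x - 1, -1):
            dp[w] = dp[w] or dp[w - x]
    return dp[target]

def coin_change_ways(coins, target):
    # Unbounded: iterate capacity FORWARD so a coin may be reused.
    dp = [1] + [0] * target
    for c in coins:
        for w in range(c, target + 1):
            dp[w] += dp[w - c]
    return dp[target]
```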
Day 5 Interval DP + palindromes Harder pattern, high signal
Pattern 8 — Interval DP (dp[i][j] on subarrays)
dp[i][j] = answer for the subarray/substring from i to j. Fill by increasing length. A pivot k splits [i,j] into [i,k] and [k+1,j]. Classic for matrix chain multiplication, burst balloons, and stone merge. O(n³) typical — flag for interviewers upfront.

Pattern 9 — Palindrome DP
Expand-around-center for detection; interval DP for counting/partitioning. LPS (Longest Palindromic Subsequence) = LCS(s, reverse(s)). Palindrome partitioning combines interval DP with linear DP.
Interval DP is the hardest pattern to spot. Hint: if the problem asks about an optimal way to merge, split, or evaluate a sequence and order of operations matters — it's interval DP.
Day 6 DP on trees + bitmask DP Senior-level territory
Pattern 10a — Tree DP (rerooting / subtree states)
dp defined on subtrees; computed via post-order DFS. Each node aggregates answers from children. House Robber III is the entry point. Diameter and max-path-sum require tracking two return values per node.

Pattern 10b — Bitmask DP (subset enumeration)
State includes a bitmask representing which elements have been used. dp[mask] or dp[mask][i]. Fits when n ≤ 20. TSP and assignment problems are canonical. Enumerate submasks in O(3ⁿ) when needed.
Bitmask DP is the last resort before brute force. If n ≤ 20 and you see "visit all" or "assign each", reach for it. State space is 2ⁿ × n — flag this complexity to your interviewer upfront.
Day 7 State machine DP + revision Lock it in
State machine DP (finite states per step)
Model the problem as transitions between explicit states (e.g., holding/not holding stock, cooldown, at most k transactions). Each dp state is (day, machine_state). Stock problems are the canonical interview set — know all 6 variants cold.

Revision: the pattern identification drill
Pick 2 random DP problems per pattern from above. No notes. Within 2 min of reading, name the pattern and write the state definition. That's the interview reflex you're building.
Resources: LeetCode DP problem list · NeetCode DP roadmap (YouTube).
By Day 7: given any DP problem, you should be able to (1) name the pattern in <60s, (2) write the state definition, (3) write the recurrence. Pattern recognition → state definition → transition. In that order, every time.

Graphs — 6 day plan

8 patterns · All problems link to LeetCode

Day 1 BFS + DFS foundations Everything builds on this
Pattern 1 — BFS / DFS on adjacency list & matrix
The base template for both. BFS uses a queue and gives shortest path in unweighted graphs. DFS uses a stack (or recursion) and is simpler for reachability. Build both from scratch until they're automatic — every other pattern is a modification of one of these two.
Key insight: multi-source BFS (start queue with multiple nodes simultaneously) solves problems like Rotting Oranges and 01 Matrix in one pass. It's one of the most underrated tricks in graph interviews.
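Multi-source BFS sketched on the 01 Matrix shape (distance from every cell to its nearest zero; function name is mine):

```python
from collections import deque

def distance_to_nearest_zero(grid):
    # Seed the queue with EVERY 0-cell at once; each remaining cell is then
    # reached for the first time at its true shortest distance.
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```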
Day 2 Topological sort + cycle detection High interview frequency
Pattern 2 — Topological sort (Kahn's + DFS)
Kahn's BFS-based approach: track in-degrees, repeatedly remove nodes with in-degree 0. DFS-based: post-order traversal, push to stack on exit. If you can't process all nodes → cycle exists. Course Schedule is the canonical interview problem for this.

Pattern 3 — Cycle detection (directed + undirected)
Directed: DFS with 3-color marking (white/grey/black) or Kahn's leftover nodes. Undirected: DFS with parent tracking, or DSU. These are different problems with different approaches — know both.
Topological sort + cycle detection are two sides of the same coin. If topo sort completes (all nodes processed), no cycle. If nodes remain, a cycle exists. You get both for the price of one implementation.
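Kahn's algorithm as one implementation giving both results, sketched in Python (empty return signals a cycle):

```python
from collections import deque

def topo_sort(n, edges):
    # edges: directed pairs (u, v) meaning u -> v.
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:       # all prerequisites of v processed
                q.append(v)
    # If some nodes were never freed, they sit on a cycle.
    return order if len(order) == n else []
```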
Day 3 Union-Find (DSU) Deceptively powerful
Pattern 4 — Union-Find with path compression + rank
Two operations: find(x) with path compression, union(x,y) by rank. Near O(1) amortized per operation. Solves connected components, cycle detection in undirected graphs, and dynamic connectivity. Build the template once — it's copy-paste for every problem in this pattern.
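The copy-paste template, as a sketch (this version uses path halving, a common iterative form of path compression):

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: re-point every other node closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False            # already connected: adding (a,b) forms a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra        # attach lower rank under higher rank
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

The False return from union is exactly the cycle check Kruskal's needs on Day 3's MST pattern.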

Pattern 5 — Minimum Spanning Tree (Kruskal's)
Sort edges by weight, greedily add edge if it doesn't form a cycle (DSU check). Kruskal's is just DSU + sorting — you get it nearly free. Prim's is the BFS/heap alternative; good to know but Kruskal's is cleaner to code under pressure.
DSU is the most reusable data structure in graph problems. If you see "connect", "group", "merge", or "same component" — DSU is almost certainly the clean solution.
Day 4 Shortest paths Hard territory starts here
Pattern 6 — Dijkstra (weighted single-source)
Min-heap + greedy relaxation. O((V+E) log V). Only works with non-negative weights. The heap stores (cost, node) — always process the minimum cost node next. Network Delay Time is the purest Dijkstra problem on LeetCode.

Bellman-Ford + BFS for unweighted
Bellman-Ford: relax all edges V-1 times. O(VE). Handles negative weights; detects negative cycles. For unweighted graphs, plain BFS gives shortest path in O(V+E) — don't reach for Dijkstra when edges are unweighted. Know when each applies.
Interview shortcut: if weights are all 1 → BFS. If non-negative weights → Dijkstra. If negative weights or "at most k steps" constraint → Bellman-Ford. Pick by constraint, not habit.
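Dijkstra with the lazy-deletion heap, sketched for a directed graph with non-negative weights (the standard interview form):

```python
import heapq

def dijkstra(n, edges, src):
    # edges: (u, v, w) directed, w >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [float('inf')] * n
    dist[src] = 0
    heap = [(0, src)]                   # (cost, node): always pop the cheapest
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                    # stale entry; a cheaper path was found
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```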
Day 5 Bipartite + advanced graph problems Mixed bag of high-frequency patterns
Pattern 7 — Bipartite graph check (2-coloring)
BFS or DFS — assign alternating colors to nodes. If a neighbor has the same color as the current node, not bipartite. Equivalent to: does an odd-length cycle exist? Comes up more than you'd expect in disguised forms (e.g. "can we divide into two groups with no internal conflict").

Graph + DP / BFS with state
BFS/DFS where the state is more than just the node — e.g. (node, steps_remaining), (node, keys_collected), (node, visited_bitmask). These are the hardest interview graph problems and require combining two patterns cleanly.
State-augmented BFS is the graph equivalent of DP — you're memoizing (node, extra_state) pairs. If plain BFS isn't working, ask: what extra state do I need to track per node?
Day 6 SCC + bridges/articulation points + revision Senior bar — Microsoft relevant
Pattern 8 — Strongly Connected Components (Tarjan's / Kosaraju's)
Tarjan's: single DFS pass, tracks discovery time and low-link values, uses a stack. Kosaraju's: two DFS passes — one on original graph, one on reversed. Both O(V+E). SCCs collapse a directed graph into a DAG — useful for condensation problems.
Problems: 1192 · Critical Connections (Bridges), 2421 · Number of Good Paths. Reading: GFG — Kosaraju's SCC algorithm; GFG — Tarjan's SCC algorithm.

Revision: pattern classification drill
For each problem on your revisit list, before reading any solution: name the pattern, state whether BFS or DFS applies, and write down the key data structure. This is the graph equivalent of the DP identification drill.
By Day 6: given any graph problem, you should immediately identify — directed or undirected? Weighted or not? Reachability, shortest path, connectivity, or ordering? Those four questions determine which of the 8 patterns applies.

Greedy algorithms — 4 day plan

7 patterns · All problems link to LeetCode

Day 1 Interval scheduling + sorting-based greedy Foundation day — highest interview freq
Pattern 1 — Interval scheduling (sort by end time)
The classic greedy template: sort intervals by end time, greedily pick the earliest-ending non-overlapping interval. Maximises the number of intervals selected. Variations sort by start time or require tracking overlap count — Meeting Rooms II needs a min-heap, not just sorting.

Pattern 2 — Sorting-based greedy (custom comparator)
The greedy choice is made by sorting with a non-obvious comparator. Largest Number sorts by string concatenation order. Task scheduler sorts by frequency. The hard part is proving your sort order is globally optimal — be ready to justify in interviews.
Key insight: greedy works when a locally optimal choice at each step leads to a globally optimal solution. The proof is usually by exchange argument — assume a better solution exists, show swapping to your greedy choice doesn't make it worse.
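The Pattern 1 template as a minimal sketch (max count of non-overlapping intervals; function name is mine):

```python
def max_non_overlapping(intervals):
    # Sort by end time; keep each interval that starts at or after
    # the end of the last one kept. Earliest end leaves the most room.
    intervals.sort(key=lambda iv: iv[1])
    count, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:
            count += 1
            last_end = end
    return count
```

The exchange argument here: any optimal solution can swap its first interval for the earliest-ending one without losing anything.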
Day 2 Greedy on arrays — jumps, candies, gas Classic interview problems
Pattern 3 — Range / reach greedy (jump game family)
Track the maximum reachable index as you scan left to right. At each position, update the furthest you can reach. If current index exceeds max reach, you're stuck. Jump Game II extends this to counting minimum jumps by tracking the current "frontier".

Pattern 4 — Two-pass greedy (left-right scan)
Make one greedy pass left-to-right, then another right-to-left, combining both results. Candy is the textbook example. Trapping Rain Water also fits this shape. The insight: one pass can only satisfy one direction of constraint.
Gas Station is a beautiful greedy: if total gas ≥ total cost, a solution always exists. Start from the first position after a failure — this single-pass observation is what makes the O(n) solution non-obvious.
Day 3 Greedy with heap + string greedy Where greedy meets data structures
Pattern 5 — Greedy + heap (always process the extreme)
Use a max or min heap to always greedily pick the current best option. IPO, Find Median from Data Stream, and meeting room variants all use heaps to maintain the greedy choice efficiently as the input evolves.

Pattern 6 — String greedy (remove / construct greedily)
Build or reduce a string by making locally optimal character choices. Remove K Digits uses a monotonic stack to greedily maintain the smallest result. Greedy string problems often combine with stacks or two-pointer scanning.
Partition Labels is a disguised interval problem — last occurrence of each character defines its interval, then merge overlapping intervals greedily. Recognising the disguise is the whole problem.
Day 4 Greedy vs DP decisions + revision Lock in the instinct
Pattern 7 — Greedy correctness + when DP beats greedy
Some problems look greedy but require DP (coin change with arbitrary denominations, 0/1 knapsack). The test: does a greedy choice ever need to be reconsidered? If yes → DP. If a locally optimal choice is always globally safe → greedy. Train this instinct by solving the same problem both ways where possible.

The greedy checklist (use before every problem)
1. Can I sort the input to make the greedy choice obvious? 2. Is there an "always take the current best" argument? 3. Can I prove by exchange argument that my choice never loses? 4. If yes to all three — code the greedy. If not — switch to DP.
By Day 4: you should be able to look at a new problem and within 2 minutes decide — is this greedy provable, or do I need DP? That decision speed is the interview edge. Greedy that can't be proved in your head is probably wrong.

Two pointers & sliding window — 3 day plan

6 patterns · All problems link to LeetCode

Day 1 Classic two pointers Foundation day
Pattern 1 — Opposite-end pointers (sorted array)
Left pointer at start, right pointer at end. Move them inward based on a condition. Works on sorted arrays for pair/triplet sum problems. Two Sum II is the purest form. 3Sum adds a sort + outer loop wrapper around the same idea.

Pattern 2 — Fast / slow pointers (Floyd's cycle)
One pointer moves 1 step, another moves 2 steps. Detects cycles, finds midpoints, and finds the k-th node from end. The slow pointer is always at the midpoint when fast reaches the end on a linear list.
Key insight: two pointers reduce O(n²) brute force to O(n) by exploiting monotonicity — moving a pointer in one direction either increases or decreases the value, so you never need to revisit.
Day 2 Fixed & variable sliding window Most common medium pattern
Pattern 3 — Fixed-size sliding window
Window size k is given. Slide right by adding nums[r] and removing nums[r-k]. Max sum subarray of size k is the simplest form. Extend to averages, max, character frequency tracking.

Pattern 4 — Variable sliding window (shrink on violation)
Expand right pointer freely, shrink left pointer when a constraint is violated. Template: while(invalid) { remove left; left++ }. Longest substring without repeating characters is the canonical form. The constraint can be character count, sum, distinct elements — same template.
Variable window template is always the same: expand r, update state, while(violated) shrink l, update state, record answer. Memorise this loop — the only thing that changes per problem is what "violated" means.
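That loop, instantiated on the canonical problem (longest substring without repeating characters); "violated" here means a character count exceeds 1:

```python
def longest_unique_substring(s):
    seen = {}                  # char -> count inside the current window
    best = l = 0
    for r, ch in enumerate(s):
        seen[ch] = seen.get(ch, 0) + 1
        while seen[ch] > 1:    # violated: shrink from the left
            seen[s[l]] -= 1
            l += 1
        best = max(best, r - l + 1)
    return best
```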
Day 3 Same-direction pointers + advanced window Lock it in
Pattern 5 — Same-direction two pointers (in-place)
Both pointers move left to right, at different speeds or conditions. Used for in-place removal, deduplication, and Dutch National Flag (3-way partition). Remove Duplicates and Move Zeroes are the entry points.

Pattern 6 — At-most-K trick (count subarrays)
Exact(K) = AtMost(K) - AtMost(K-1). Converts "exactly K" subarray counting problems into two variable window passes. Subarrays with K Different Integers is the canonical hard problem for this.
The AtMost(K) - AtMost(K-1) trick is one of the least-known but most powerful sliding window techniques. Once you see it, you'll spot it everywhere in "count subarrays with exactly K" problems.
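The trick sketched on Subarrays with K Different Integers; note that at_most counts r - l + 1 windows per step, one for each valid window ending at r:

```python
def subarrays_with_exactly_k_distinct(nums, k):
    # Exactly(K) = AtMost(K) - AtMost(K-1); each pass is a plain
    # variable-size window.
    def at_most(k):
        count = {}
        res = l = 0
        for r, x in enumerate(nums):
            count[x] = count.get(x, 0) + 1
            while len(count) > k:
                count[nums[l]] -= 1
                if count[nums[l]] == 0:
                    del count[nums[l]]
                l += 1
            res += r - l + 1       # all windows ending at r are valid
        return res
    return at_most(k) - at_most(k - 1)
```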

Stack & queue — 3 day plan

5 patterns · All problems link to LeetCode

Day 1 Classic stack problems Foundation day
Pattern 1 — Balanced parentheses / stack simulation
Push opening brackets, pop and match on closing. Extended to nested structures, decode strings, and expression evaluation. The stack maintains "pending" state waiting to be resolved.

Pattern 2 — Monotonic stack (next greater / smaller)
Maintain a stack that is always increasing or always decreasing. When a new element violates the property, pop and record. Next Greater Element, Daily Temperatures, and Largest Rectangle in Histogram all use this. One of the highest signal patterns for medium/hard interviews.
Monotonic stack insight: when you pop an element, you've found the answer for that element (its next greater/smaller). The stack never needs to be fully traversed — each element is pushed and popped at most once, giving O(n).
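Daily Temperatures as the cleanest illustration of the pop-equals-answer idea:

```python
def daily_temperatures(temps):
    # Stack of indices whose next warmer day is still unknown;
    # temperatures at stacked indices are strictly decreasing.
    ans = [0] * len(temps)
    stack = []
    for i, t in enumerate(temps):
        while stack and temps[stack[-1]] < t:
            j = stack.pop()        # popping j: its next-warmer day is i
            ans[j] = i - j
        stack.append(i)
    return ans                     # unresolved indices keep the default 0
```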
Day 2 Monotonic deque + queue-based problems Sliding window max, BFS with queues
Pattern 3 — Monotonic deque (sliding window max/min)
A deque that maintains the max (or min) of a sliding window in O(1) per step. Add to back (remove smaller elements first), remove from front if outside window. Sliding Window Maximum is the canonical problem.

Pattern 4 — Queue simulation + design problems
Implement stack using queues, implement queue using stacks — classic design problems that test understanding of both data structures. Also includes circular queue and task scheduling simulations.
Monotonic deque is the deque equivalent of monotonic stack — it just adds the constraint that elements outside the window are also evicted from the front. Once you see the pattern, it's the same O(n) amortized logic.
Day 3 Stack + other patterns combined Where stack meets DP and greedy
Pattern 5 — Stack + greedy / DP hybrid
Remove K Digits uses a monotonic stack greedily. Largest Rectangle uses stack + area calculation. Some problems combine stack-maintained state with a DP recurrence. These are the hardest stack problems — if stuck, ask: what invariant does my stack maintain?
By Day 3: for any problem involving "next greater", "nearest smaller", histogram areas, or expression parsing — your first instinct should be stack. That reflex alone will unlock a large chunk of hard problems.

Backtracking — 3 day plan

5 patterns · All problems link to LeetCode

Day 1 Subsets + combinations Foundation day
Pattern 1 — Subsets (power set)
At each element, choose to include or exclude. Generates 2ⁿ subsets. Template: for loop from `start`, add nums[i], recurse with i+1, remove nums[i]. Subsets II adds deduplication by sorting, then skipping duplicates.

Pattern 2 — Permutations
Unlike subsets, order matters and all elements must be used. Use a visited array or swap-in-place. Permutations II deduplicates by sorting + skipping used duplicates. Time complexity is O(n × n!) — state this upfront.
Key insight: the only structural difference between subsets and permutations is the loop. Subsets loop from `start` and recurse with i+1 to avoid reuse; permutations loop from 0 with a visited array so every unused element can fill every position.
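Both skeletons side by side, as a sketch:

```python
def subsets(nums):
    res, path = [], []
    def dfs(start):
        res.append(path[:])             # every node of the tree is a subset
        for i in range(start, len(nums)):
            path.append(nums[i])        # choose
            dfs(i + 1)                  # i+1: no reuse, no reordering
            path.pop()                  # unchoose
    dfs(0)
    return res

def permutations(nums):
    res, path = [], []
    used = [False] * len(nums)
    def dfs():
        if len(path) == len(nums):
            res.append(path[:])
            return
        for i in range(len(nums)):      # loop from 0: any unused element fits here
            if not used[i]:
                used[i] = True
                path.append(nums[i])
                dfs()
                path.pop()
                used[i] = False
    dfs()
    return res
```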
Day 2 Grid backtracking + constraint satisfaction High interview frequency
Pattern 3 — Grid / matrix backtracking
DFS on a 2D grid with visited marking + undo. Word Search is the canonical example. Mark cell as visited before recursing, unmark after returning. Crucial: the undo step is what makes it backtracking, not just DFS.

Pattern 4 — Constraint satisfaction (N-Queens, Sudoku)
Place elements one by one with hard constraints. Prune aggressively — check constraint validity before recursing, not after. N-Queens uses row/column/diagonal sets for O(1) validity checks. Sudoku Solver uses the same principle per box/row/col.
Pruning is everything in backtracking. A slow backtracking solution prunes after the fact. A fast one checks constraints before recursing. Every constraint check that avoids a branch cuts the search tree exponentially.
Day 3 String / expression backtracking + revision Lock it in
Pattern 5 — String generation / partitioning backtracking
Generate all valid strings or partition a string satisfying some rule. Generate Parentheses tracks open/close counts. Palindrome Partitioning checks each prefix before recursing. Letter Combinations maps digits to characters.
By Day 3: every backtracking problem reduces to the same skeleton — choose, explore, unchoose. The only variation is what "choose" means and what constraint triggers pruning. If you can write that skeleton from memory, you can solve any backtracking problem.

Trees — 4 day plan

7 patterns · All problems link to LeetCode

Day 1 Tree traversals + basic properties Foundation day
Pattern 1 — DFS traversals (pre/in/post-order)
Three recursive orderings and their iterative equivalents with an explicit stack. Inorder of BST gives sorted sequence. Preorder is natural for tree serialization. Postorder is natural for bottom-up aggregation. Know all three iteratively — interviewers ask for it.

Pattern 2 — BFS / level-order traversal
Queue-based level-by-level traversal. Use queue size at the start of each iteration to process exactly one level. Right side view, level averages, and zigzag are all level-order BFS with a small twist.
Every tree problem is either top-down (pass info from parent to child) or bottom-up (aggregate from children to parent). Identify which direction information flows before writing a single line of code.
Day 2 BST patterns + LCA High interview frequency
Pattern 3 — BST operations (search, insert, delete, validate)
BST property: left subtree < root < right subtree. Inorder gives sorted order. Validate BST by passing min/max bounds down — not just checking immediate children. Delete is the trickiest: handle 0/1/2 children cases.

Pattern 4 — Lowest Common Ancestor (LCA)
For BST: navigate left/right based on values. For general binary tree: if both targets are in different subtrees, current node is LCA. Post-order DFS — return non-null from left or right, or current node if both found.
LCA is a post-order pattern in disguise. You need answers from both subtrees before deciding at the current node. Any time a problem asks about relationships between two nodes, think LCA.
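The general-binary-tree LCA in post-order form, sketched with a minimal Node class (assumes both targets exist in the tree, as the LeetCode problem does):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lca(root, p, q):
    # Post-order: each call reports which target(s) live at or below it.
    if root is None or root is p or root is q:
        return root
    left = lca(root.left, p, q)
    right = lca(root.right, p, q)
    if left and right:           # p and q found in different subtrees
        return root
    return left or right         # bubble the non-null side upward
```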
Day 3 Tree DP + path problems Where trees meet hard problems
Pattern 5 — Tree DP (subtree aggregation)
dp defined on subtrees, computed post-order. Each node returns information about its subtree to its parent. House Robber III is the entry point — each node returns (rob_this, skip_this). Diameter and max path sum require returning the best single path upward while tracking a global max.

Pattern 6 — Tree serialization + construction
Encode a tree to a string and reconstruct it. Preorder with null markers is the cleanest approach. Construction from two traversals (pre+in, post+in) uses the root-finding property of each traversal order.
Max path sum is the hardest tree DP pattern — the path can go through any node and bend at any point. The trick is to return the best single-direction path to the parent while updating a global max with the full bent path at each node.
Day 4 Advanced tree structures + revision Lock it in
Pattern 7 — Balanced BST + segment tree basics
Convert a sorted array to a balanced BST (mid as root, recurse on each half). AVL/Red-Black internals are rarely asked, but the concepts are fair game. Segment trees for range queries are increasingly asked at Microsoft — build, query, update.
By Day 4: for any tree problem, identify — is it top-down or bottom-up? Does it need a global variable updated during DFS? Does it involve two nodes (LCA)? Those three questions cover 90% of tree problems.

Heap / priority queue — 2 day plan

4 patterns · All problems link to LeetCode

Day 1 Top-K + K-way merge Foundation day
Pattern 1 — Top-K elements (fixed-size heap)
Maintain a min-heap of size K. For each new element, if it's larger than the heap's min, replace it. Result: heap contains the K largest elements. O(n log K) — better than sorting O(n log n) when K << n. Works for frequencies, distances, sums.

Pattern 2 — K-way merge
Min-heap seeded with the first element from each of K sorted lists. Always extract minimum, push the next element from the same list. O(N log K) total. Covered in merge sort but revisit here with the heap lens — it's a fundamental pattern.
Key insight: whenever you need the minimum (or maximum) of a dynamic set and elements are being added/removed — that's a heap. If the set is static, sorting suffices. The heap earns its place only when the set changes over time.
Day 2 Two-heap pattern + scheduling Hard territory
Pattern 3 — Two-heap pattern (median of stream)
Maintain a max-heap for the lower half and a min-heap for the upper half, kept balanced in size. Median is always the top of one or both heaps. Rebalance after each insert. This pattern is the canonical "hard heap" interview problem.
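A sketch with Python's heapq, which only provides min-heaps, so the lower half is stored negated:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.lo = []   # max-heap via negation: lower half
        self.hi = []   # min-heap: upper half

    def add(self, num):
        # Push through lo so every lo element stays <= every hi element.
        heapq.heappush(self.lo, -num)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Rebalance: lo holds the extra element when the count is odd.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```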

Pattern 4 — Greedy scheduling with heap
Use a heap to always process the highest-priority task. IPO: unlock projects greedily by capital, use max-heap for profit. CPU scheduling: use min-heap for deadlines. These combine greedy logic with heap-maintained ordering.
Two-heap pattern is always the answer to "running median" or "balanced partition" problems. If you ever need the median of a dynamic set — reach for two heaps before anything else.

Linked lists — 2 day plan

5 patterns · All problems link to LeetCode

Day 1 Reversal + fast/slow pointers Core patterns
Pattern 1 — In-place reversal
Three pointers: prev, curr, next. Reverse by redirecting curr.next to prev. Iterative is preferred in interviews. Reverse K Group and Reverse Between are extensions — handle the sublist boundaries carefully with dummy nodes.
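The three-pointer reversal as a minimal sketch (ListNode class included so the snippet is self-contained):

```python
class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse_list(head):
    prev, curr = None, head
    while curr:
        nxt = curr.next        # save the rest before breaking the link
        curr.next = prev       # redirect
        prev, curr = curr, nxt # advance both pointers
    return prev                # prev is the new head
```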

Pattern 2 — Fast / slow pointers
Cycle detection, midpoint finding, kth from end. Floyd's cycle: if fast meets slow, cycle exists; reset one pointer to head, move both one step at a time to find cycle entry. Removing Nth from end: advance fast by N, then move both until fast reaches end.
Dummy node is your best friend for linked list problems. Attach a dummy before the head — it eliminates all the edge cases for inserting/deleting at the head. Always use one unless there's a specific reason not to.
Day 2 Merge + intersection + design Lock it in
Pattern 3 — Merge sorted lists
Merge two sorted lists iteratively with a dummy head. Compare heads, attach smaller, advance. Merge K lists escalates to a heap or divide-and-conquer approach.

Pattern 4 — Intersection + palindrome
Intersection: two pointers switch lists at the end — they meet at the intersection node after traversing the same total length. Palindrome: find middle, reverse second half, compare. Restore the list after if asked.

Pattern 5 — LRU / LFU cache design
LRU: doubly linked list + hashmap. O(1) get and put. The list maintains access order; the map provides O(1) node lookup. LFU adds a frequency layer — map of frequency to doubly linked lists. Classic Microsoft/Google design question.
LRU Cache is the most important linked list problem for Microsoft interviews. It tests both linked list manipulation and system design thinking simultaneously. Know it cold — both the approach and the O(1) complexity justification.

Tries — 2 day plan

3 patterns · All problems link to LeetCode

Day 1 Trie construction + prefix search Foundation day
Pattern 1 — Implement trie + prefix/search operations
Each node has children[26] and an isEnd flag. Insert walks characters creating nodes as needed. Search walks and checks isEnd at the last character. StartsWith checks if the path exists regardless of isEnd. Build this from scratch until it's automatic — it's the base for every other trie problem.
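The base template as a sketch; a dict of children stands in for the children[26] array, same idea with less bookkeeping:

```python
class Trie:
    def __init__(self):
        self.children = {}     # char -> child Trie node
        self.is_end = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())  # create as needed
        node.is_end = True

    def _walk(self, s):
        # Follow the path for s; None if the path breaks.
        node = self
        for ch in s:
            node = node.children.get(ch)
            if node is None:
                return None
        return node

    def search(self, word):
        node = self._walk(word)
        return node is not None and node.is_end

    def starts_with(self, prefix):
        return self._walk(prefix) is not None
```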

Pattern 2 — Trie + DFS / backtracking
Combine trie traversal with DFS on a grid. Word Search II inserts all words into a trie, then runs one DFS over the grid — pruning a branch as soon as no trie prefix matches. Shared prefixes cut the per-cell work from O(words × 4^L) to O(4^L): the grid is searched once for the whole dictionary instead of once per word.
Key insight: a trie is the right data structure whenever you have multiple prefix queries on a shared dictionary. Storing each word separately is O(L) per query; a trie is also O(L) but shares prefix nodes, making it O(total characters) in space rather than O(words × L).
Day 2 Bitwise trie + advanced applications Hard tier — Microsoft relevant
Pattern 3 — Binary trie (XOR maximization)
Store integers bit by bit from MSB to LSB. To maximize XOR with a query number, greedily take the opposite bit at each level. Maximum XOR of Two Numbers is the canonical problem. Extends to range XOR queries and prefix XOR problems.
Binary trie is the bridge between tries and bit manipulation. If you see XOR + "maximum" in the same problem, a binary trie almost certainly gives the optimal solution. It converts an O(n²) XOR comparison to O(n × 32).

Bit manipulation — 1 day plan

4 patterns · All problems link to LeetCode

Day 1 AM XOR tricks + bit counting Morning session
Pattern 1 — XOR properties
a XOR a = 0, a XOR 0 = a. XOR of all elements cancels duplicates — whatever remains is the element that appeared an odd number of times. Single Number and its variants are the canonical problems. Two numbers that differ: n XOR m gives 1-bits where they differ; use lowest set bit to partition into two groups.

Pattern 2 — Bit counting (Hamming weight + DP)
n & (n-1) clears the lowest set bit — count set bits by repeatedly applying this until n=0. Counting Bits for all numbers 0..n uses DP: dp[i] = dp[i >> 1] + (i & 1). Know Brian Kernighan's algorithm cold.
n & (n-1) removes the lowest set bit. n & (-n) isolates the lowest set bit. These two operations solve an enormous number of bit problems. Memorise them — they're the bit equivalent of `(lo + hi) >> 1` in binary search.
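Both tricks in two tiny functions (Brian Kernighan's popcount and the lowest-set-bit isolation, which relies on two's complement):

```python
def popcount(n):
    # Brian Kernighan: each n &= n - 1 clears exactly one set bit,
    # so the loop runs once per set bit.
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

def lowest_set_bit(n):
    # Two's complement: -n flips all bits above the lowest set bit,
    # so the AND keeps only that bit.
    return n & -n
```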
Day 1 PM Bitmask enumeration + arithmetic tricks Afternoon session
Pattern 3 — Bitmask subset enumeration
Enumerate all 2ⁿ subsets with a bitmask from 0 to (1<<n)-1. Check if bit i is set with (mask >> i) & 1. Enumerate all submasks of a mask with: for(sub=mask; sub>0; sub=(sub-1)&mask). Used in bitmask DP (Day 6 of graphs) and combination problems where n ≤ 20.

Pattern 4 — Bit arithmetic (add, multiply without operators)
Add without +: XOR gives sum bits, AND shifted left gives carry; repeat until no carry. Multiply using shift-and-add. These are more about understanding binary arithmetic than production coding — still asked at Microsoft and Google.
Bit manipulation is a 1-day topic for a reason — the patterns are small in number but high in surprise factor. The goal isn't depth, it's fluency with the 6 core operations: AND, OR, XOR, NOT, left shift, right shift. Everything else follows from those.