Core Pattern
Start pointers at both ends, move inward based on comparison. O(n) over O(n²) brute force. Key insight: sorted arrays make decisions deterministic.
Algorithm Logic
- Initialize left := 0, right := len(arr)-1.
- While left < right: calculate state (e.g., sum := arr[left] + arr[right]).
- If state matches target, return/process.
- If state < target, left++ (increase sum).
- If state > target, right-- (decrease sum).
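The steps above, as a minimal runnable sketch (function name is mine, not from any particular source):

```go
package main

import "fmt"

// twoSumSorted returns indices (i, j) with arr[i]+arr[j] == target,
// or (-1, -1) if no pair exists. Requires arr sorted ascending.
func twoSumSorted(arr []int, target int) (int, int) {
	left, right := 0, len(arr)-1
	for left < right {
		sum := arr[left] + arr[right]
		switch {
		case sum == target:
			return left, right
		case sum < target:
			left++ // need a bigger sum
		default:
			right-- // need a smaller sum
		}
	}
	return -1, -1
}

func main() {
	i, j := twoSumSorted([]int{2, 7, 11, 15}, 9)
	fmt.Println(i, j) // 0 1
}
```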
Why This Matters
- Base for most array manipulation
- Appears in 3Sum, container water, trapping rain
- Reduces O(n²) → O(n) in interviews
Problems
Go Tip
Use sort.Slice(nums, func(i,j int) bool { return nums[i] < nums[j] }) to pre-sort. For 3Sum, skip duplicate anchors with if i > 0 && nums[i] == nums[i-1] { continue } inside the outer loop. No overhead from extra allocation — work in-place.
Core Pattern
Fast pointer moves 2x, slow moves 1x. If a cycle exists they meet inside it; a second pass from head then finds the cycle start. Also used for finding middle, removing nth from end without counting length.
Why This Matters
- O(1) space cycle detection
- Floyd's algorithm — classic interview signal
- Variant: duplicate finder in O(1) space
Problems
Go Tip
Define node inline: type ListNode struct { Val int; Next *ListNode }. Go's nil pointer checks are cheap. For cycle detection: slow, fast := head, head then loop until fast == nil || fast.Next == nil.
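A minimal sketch of Floyd's algorithm including the second phase that locates the cycle start:

```go
package main

import "fmt"

type ListNode struct {
	Val  int
	Next *ListNode
}

// cycleStart returns the node where the cycle begins, or nil if acyclic.
func cycleStart(head *ListNode) *ListNode {
	slow, fast := head, head
	for fast != nil && fast.Next != nil {
		slow, fast = slow.Next, fast.Next.Next
		if slow == fast { // pointers met inside the cycle
			// Phase 2: restart one pointer from head;
			// moving both one step at a time, they meet at the cycle start.
			p := head
			for p != slow {
				p, slow = p.Next, slow.Next
			}
			return p
		}
	}
	return nil
}

func main() {
	a, b, c, d := &ListNode{Val: 1}, &ListNode{Val: 2}, &ListNode{Val: 3}, &ListNode{Val: 4}
	a.Next, b.Next, c.Next, d.Next = b, c, d, b // cycle starts at node b
	fmt.Println(cycleStart(a).Val) // 2
}
```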
Core Pattern
Window of size k slides right. Subtract outgoing left element, add incoming right. Critical: maintain a running aggregate (sum, max, frequency map). Never recompute from scratch.
Why This Matters
- Real use: rate limiting (fixed window)
- Foundation of all substring/subarray problems
- Teaches "avoid redundant computation" thinking
Problems
Go Tip
Use map[byte]int for frequency. Compare maps: reflect.DeepEqual is expensive — instead track a matches int counter. Update on add/remove. Array [26]int beats map for lowercase-only char problems.
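A minimal fixed-window sketch of the subtract-outgoing / add-incoming idea (helper name is mine):

```go
package main

import "fmt"

// maxWindowSum returns the maximum sum over all windows of size k.
// Assumes 1 <= k <= len(nums).
func maxWindowSum(nums []int, k int) int {
	sum := 0
	for _, v := range nums[:k] {
		sum += v // seed the first window
	}
	best := sum
	for r := k; r < len(nums); r++ {
		sum += nums[r] - nums[r-k] // add incoming, subtract outgoing
		if sum > best {
			best = sum
		}
	}
	return best
}

func main() {
	fmt.Println(maxWindowSum([]int{1, 4, 2, 10, 2, 3, 1, 0, 20}, 4)) // 24
}
```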
Core Pattern
Expand right when condition satisfied, shrink left when violated. Template: left=0; for right in range; [process]; while violated { shrink left }; update answer
Why This Matters
- Longest substring problems — huge category
- Greedy expansion + lazy contraction
- At-most K distinct: key interview variant
Problems
Go Tip
For string iteration by byte: for i := 0; i < len(s); i++ is faster than range for ASCII. Use s[i] not rune(s[i]). max(a,b) available in Go 1.21+ — use builtin, don't write helpers.
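The expand/shrink template above, sketched for the canonical longest-substring-without-repeats problem (ASCII assumed, per the byte-iteration tip):

```go
package main

import "fmt"

// longestUnique returns the length of the longest substring of s
// (ASCII) containing no repeated byte.
func longestUnique(s string) int {
	var count [128]int
	best, left := 0, 0
	for right := 0; right < len(s); right++ {
		count[s[right]]++
		for count[s[right]] > 1 { // invariant violated: shrink from the left
			count[s[left]]--
			left++
		}
		if right-left+1 > best {
			best = right - left + 1
		}
	}
	return best
}

func main() {
	fmt.Println(longestUnique("abcabcbb")) // 3
}
```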
Core Pattern
prefix[i] = prefix[i-1] + nums[i-1]. Range sum query in O(1). Combine with hash map for subarray sum = k: store prefix → count. Answer += map[prefix-k].
Why This Matters
- Transforms O(n²) subarray problems → O(n)
- Used in distributed stream analytics
- 2D prefix sum for matrix problems
Problems
Go Tip
Pre-initialize map with zero: prefixCount := map[int]int{0: 1}. This handles subarrays starting at index 0. Go maps have O(1) avg but allocate — for small n, consider array of size 2n+1 with offset.
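The prefix-sum + hash map combo from the tip above, as a runnable sketch:

```go
package main

import "fmt"

// subarraySum counts contiguous subarrays of nums summing exactly to k.
func subarraySum(nums []int, k int) int {
	prefixCount := map[int]int{0: 1} // empty prefix handles subarrays starting at index 0
	prefix, count := 0, 0
	for _, v := range nums {
		prefix += v
		count += prefixCount[prefix-k] // earlier prefixes that complete a sum of k
		prefixCount[prefix]++
	}
	return count
}

func main() {
	fmt.Println(subarraySum([]int{1, 1, 1}, 2)) // 2
}
```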
Core Pattern
Sort by start time. Merge condition: curr.start ≤ prev.end. For meeting rooms: sort starts/ends separately, use two pointers. Min-heap for active intervals.
Why This Matters
- Calendar/scheduling systems real-world
- Meeting rooms II appears in every loop
- Sweep line algorithm foundation
Problems
Go Tip
Sort 2D slice: sort.Slice(intervals, func(i,j int) bool { return intervals[i][0] < intervals[j][0] }). For heap-based meeting rooms, implement container/heap interface (Len, Less, Swap, Push, Pop).
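The sort-then-merge condition above, as a short sketch:

```go
package main

import (
	"fmt"
	"sort"
)

// merge collapses overlapping intervals; input need not be pre-sorted.
func merge(intervals [][]int) [][]int {
	sort.Slice(intervals, func(i, j int) bool { return intervals[i][0] < intervals[j][0] })
	out := [][]int{}
	for _, cur := range intervals {
		n := len(out)
		if n > 0 && cur[0] <= out[n-1][1] { // overlap: extend previous end
			if cur[1] > out[n-1][1] {
				out[n-1][1] = cur[1]
			}
		} else {
			out = append(out, []int{cur[0], cur[1]})
		}
	}
	return out
}

func main() {
	fmt.Println(merge([][]int{{1, 3}, {8, 10}, {2, 6}, {15, 18}})) // [[1 6] [8 10] [15 18]]
}
```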
🎯 Week 1 Checkpoint — Mock Interview
- Solve one medium Two Pointer + one medium Sliding Window in 45 min under timed conditions
- Verbalize the pattern name before coding — "this is variable window because..."
- Review: Prefix sum + hash map combo (most missed pattern)
- Go code quality: no unnecessary allocations, proper error handling style
- Target: 4/5 problems solved optimally
Core Pattern
LIFO for matching/nesting. Parentheses, histograms, calculator. Key: push opening, pop on closing, check top. Calculator: two stacks (nums + ops) or one stack with precedence.
Why This Matters
- Parser/lexer foundation in compilers
- Undo/redo in editors
- Call stack simulation for DFS
Problems
Go Tip
Use slice as stack: stack := []int{}, push: stack = append(stack, v), pop: v, stack = stack[len(stack)-1], stack[:len(stack)-1]. No need for a container. This is idiomatic Go.
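The slice-as-stack idiom applied to the canonical matching problem:

```go
package main

import "fmt"

// validParens reports whether every bracket in s is matched and properly nested.
func validParens(s string) bool {
	pairs := map[byte]byte{')': '(', ']': '[', '}': '{'}
	stack := []byte{}
	for i := 0; i < len(s); i++ {
		c := s[i]
		if open, isClose := pairs[c]; isClose {
			if len(stack) == 0 || stack[len(stack)-1] != open {
				return false
			}
			stack = stack[:len(stack)-1] // pop the matched opener
		} else {
			stack = append(stack, c) // push opener
		}
	}
	return len(stack) == 0
}

func main() {
	fmt.Println(validParens("([]{})"), validParens("(]")) // true false
}
```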
Core Pattern
Maintain stack in increasing or decreasing order. When current element breaks order → pop and record answer. Each element pushed/popped once → O(n). Store indices, not values.
The Stack Invariant
Think of the stack as a Line of Sight. If you use an increasing stack, the element below the top is always the "Nearest Smaller Value" to the left.
- Iterate through arr.
- While stack not empty AND curr < arr[stack.top]: pop j. The "Next Smaller Right" for j is the current element.
- The "Next Smaller Left" for j is the new stack.top.
- Push current index.
Why This Matters
- Daily temperatures: canonical interview problem
- Used in histogram, trapping rain water
- Span calculation in stock market systems
Problems
Go Tip
Store indices in monotonic stack. For circular arrays: iterate 2*n using i % n. When popping: result[stack[top]] = i - stack[top] for distance-based answers.
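The index-storing monotonic stack from the tip above, sketched on Daily Temperatures:

```go
package main

import "fmt"

// dailyTemperatures returns, for each day, how many days until a
// strictly warmer temperature (0 if it never gets warmer).
func dailyTemperatures(temps []int) []int {
	res := make([]int, len(temps))
	stack := []int{} // indices with an unresolved "next warmer day"; temps decreasing
	for i, t := range temps {
		for len(stack) > 0 && temps[stack[len(stack)-1]] < t {
			j := stack[len(stack)-1]
			stack = stack[:len(stack)-1]
			res[j] = i - j // current day resolves index j
		}
		stack = append(stack, i)
	}
	return res
}

func main() {
	fmt.Println(dailyTemperatures([]int{73, 74, 75, 71, 69, 72, 76, 73})) // [1 1 4 2 1 1 0 0]
}
```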
Core Pattern
Min-heap of size k for top-k largest. Max-heap for top-k smallest. Key insight: heap size k → pop when size exceeds k, top is the kth largest. O(n log k) beats O(n log n) sort.
Why This Matters
- Real: top-K queries in analytics systems
- Merge k sorted lists (Kafka partition merge)
- Dijkstra's algorithm uses min-heap
Problems
Go Tip
Go's container/heap requires implementing 5 methods. Min-heap: func (h MinHeap) Less(i,j int) bool { return h[i] < h[j] }. Push/Pop must use pointer receiver. Use heap.Init, heap.Push, heap.Pop — not methods directly.
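The size-k min-heap pattern with the five container/heap methods spelled out:

```go
package main

import (
	"container/heap"
	"fmt"
)

// MinHeap implements heap.Interface for ints (smallest element on top).
type MinHeap []int

func (h MinHeap) Len() int           { return len(h) }
func (h MinHeap) Less(i, j int) bool { return h[i] < h[j] }
func (h MinHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *MinHeap) Push(x any)        { *h = append(*h, x.(int)) }
func (h *MinHeap) Pop() any {
	old := *h
	n := len(old)
	v := old[n-1]
	*h = old[:n-1]
	return v
}

// kthLargest keeps a min-heap of size k; its top is the kth largest element.
func kthLargest(nums []int, k int) int {
	h := &MinHeap{}
	heap.Init(h)
	for _, v := range nums {
		heap.Push(h, v)
		if h.Len() > k {
			heap.Pop(h) // discard the smallest; only the top k survive
		}
	}
	return (*h)[0]
}

func main() {
	fmt.Println(kthLargest([]int{3, 2, 1, 5, 6, 4}, 2)) // 5
}
```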
Core Pattern
Monotonic deque: front always holds max/min index. Remove front if out of window. Remove from back if smaller than new element (maintain decreasing order). O(n) window maximum.
Why This Matters
- O(n) vs O(n log n) heap approach
- Jump game variants with deque
- Constrained sliding window problems
Problems
Go Tip
Implement deque with []int: front pop is dq = dq[1:], back pop is dq = dq[:len(dq)-1]. Store indices not values so you can check window bounds. This is O(n) amortized — each element pushed/popped once.
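A sketch of the monotonic deque for window maximum, storing indices as advised:

```go
package main

import "fmt"

// maxSlidingWindow returns the max of each size-k window using a
// decreasing deque of indices; nums[dq[0]] is always the current max.
func maxSlidingWindow(nums []int, k int) []int {
	dq := []int{}
	res := []int{}
	for i, v := range nums {
		if len(dq) > 0 && dq[0] <= i-k {
			dq = dq[1:] // front index fell out of the window
		}
		for len(dq) > 0 && nums[dq[len(dq)-1]] <= v {
			dq = dq[:len(dq)-1] // smaller candidates can never be the max again
		}
		dq = append(dq, i)
		if i >= k-1 {
			res = append(res, nums[dq[0]])
		}
	}
	return res
}

func main() {
	fmt.Println(maxSlidingWindow([]int{1, 3, -1, -3, 5, 3, 6, 7}, 3)) // [3 3 5 5 6 7]
}
```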
Problems
Go Tip for LRU
LRU = map[int]*Node + doubly linked list. Go has no built-in ordered map. For LRU cache: maintain head/tail sentinel nodes to avoid nil checks. Update: remove from current position + insert at head. O(1) all ops.
🎯 Week 2 Checkpoint — Mock Interview
- Implement container/heap from scratch — no looking up the interface
- Solve Sliding Window Maximum and articulate why deque beats heap here
- LRU Cache implementation under 20 minutes
- Explain monotonic stack invariant verbally to imaginary interviewer
Core Pattern
Template: lo, hi := 0, n-1; for lo <= hi { mid := lo + (hi-lo)/2 }. For "find leftmost": keep hi = mid-1 even when arr[mid] matches, recording mid as the best candidate so far. Binary search on answer space: "can we achieve X?" → monotonic predicate.
Why This Matters
- BS on answer space: powerful pattern for optimization
- Rotated array: appears in every loop of interviews
- Real: binary search in LSM tree SSTables
Problems
Go Tip
Avoid overflow: mid := lo + (hi-lo)/2 not (lo+hi)/2. For answer-space BS, define a canAchieve(mid int) bool function — clean separation. sort.Search(n, f) is Go's built-in lower_bound.
Core Pattern
Three DFS variants. Pre-order: process node before children (serialize). In-order: BST sorted traversal. Post-order: process children first (height, diameter). Global vs local return values.
Why This Matters
- Diameter/path problems need global state trick
- Serialization = distributed system state transfer
- BST validation: classic Senior interview
Problems
Go Tip
For global state in DFS, use closure: var maxPath int; var dfs func(*TreeNode) int; dfs = func(node *TreeNode) int {...; maxPath = max(maxPath, ...); return ...}; dfs(root). Avoid package-level vars — closures keep it clean and testable.
Problems
Go Tip — BFS with Channels
For educational purposes, BFS can use a Go channel as a queue: q := make(chan *TreeNode, n). In production-style interviews, slice-based queue is preferred. Use levelSize := len(queue) snapshot to process level-by-level.
Core Pattern
Node = {children [26]*TrieNode, isEnd bool}. Insert: create nodes for each char. Search: traverse, return false if node nil. Prefix search: same but don't need isEnd. Add count int for frequency tracking.
Why This Matters
- Search autocomplete — exact system design topic
- IP routing tables use radix tries
- XOR trie for max XOR pair problems
Problems
Go Tip
Prefer [26]*TrieNode over map[byte]*TrieNode for lowercase-only — it's 5–10x faster. Use c - 'a' as index. For wildcard search with '.', recurse on all non-nil children.
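The [26]-array trie from the pattern above, as a compact sketch (lowercase-only assumed):

```go
package main

import "fmt"

// Trie over lowercase ASCII using fixed-size child arrays.
type Trie struct {
	children [26]*Trie
	isEnd    bool
}

func (t *Trie) Insert(word string) {
	node := t
	for i := 0; i < len(word); i++ {
		c := word[i] - 'a'
		if node.children[c] == nil {
			node.children[c] = &Trie{}
		}
		node = node.children[c]
	}
	node.isEnd = true
}

// walk follows s from the root, returning nil if the path is absent.
func (t *Trie) walk(s string) *Trie {
	node := t
	for i := 0; i < len(s); i++ {
		node = node.children[s[i]-'a']
		if node == nil {
			return nil
		}
	}
	return node
}

func (t *Trie) Search(word string) bool    { n := t.walk(word); return n != nil && n.isEnd }
func (t *Trie) StartsWith(pre string) bool { return t.walk(pre) != nil }

func main() {
	tr := &Trie{}
	tr.Insert("apple")
	fmt.Println(tr.Search("apple"), tr.Search("app"), tr.StartsWith("app")) // true false true
}
```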
🎯 Week 3 Checkpoint — Mock Interview
- Implement Trie from scratch in under 15 minutes
- Solve any Hard tree problem (Path Sum, Serialize/Deserialize) cleanly
- Binary search on answer space — pick a problem you've never seen, identify predicate
- BFS vs DFS decision: articulate when to use each without thinking
Core Pattern
Build adjacency list from edges. BFS for shortest path (unweighted). DFS for connected components. Union-Find: find(x) = parent[x] == x ? x : find(parent[x]) with path compression. Union by rank.
Why This Matters
- Social networks, dependency graphs
- Kafka partition assignment uses graph coloring
- Union-Find: Kruskal's MST, network connectivity
Problems
Go Tip
BFS with goroutines for parallel level exploration (educational): use sync.WaitGroup + channel. In interview, keep it sequential. For Union-Find, iterative path compression: for parent[x] != x { parent[x] = parent[parent[x]]; x = parent[x] }
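The iterative path-compression loop from the tip, wrapped into a full DSU sketch with union by size:

```go
package main

import "fmt"

// DSU with union by size and iterative path halving.
type DSU struct {
	parent, size []int
}

func NewDSU(n int) *DSU {
	d := &DSU{parent: make([]int, n), size: make([]int, n)}
	for i := range d.parent {
		d.parent[i], d.size[i] = i, 1
	}
	return d
}

func (d *DSU) Find(x int) int {
	for d.parent[x] != x {
		d.parent[x] = d.parent[d.parent[x]] // path halving
		x = d.parent[x]
	}
	return x
}

// Union merges the sets of a and b; returns false if already connected.
func (d *DSU) Union(a, b int) bool {
	ra, rb := d.Find(a), d.Find(b)
	if ra == rb {
		return false
	}
	if d.size[ra] < d.size[rb] {
		ra, rb = rb, ra // attach smaller tree under larger root
	}
	d.parent[rb] = ra
	d.size[ra] += d.size[rb]
	return true
}

func main() {
	d := NewDSU(5)
	d.Union(0, 1)
	d.Union(3, 4)
	fmt.Println(d.Find(0) == d.Find(1), d.Find(1) == d.Find(3)) // true false
}
```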
Problems
Go Tip
Kahn's (BFS topo sort): build indegree[] map, push all 0-indegree nodes to queue, BFS reducing indegrees. If output len ≠ n → cycle exists. DFS approach: use visited (0=unvisited, 1=in-stack, 2=done).
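Kahn's algorithm as described above, returning nil when the output length ≠ n (cycle):

```go
package main

import "fmt"

// topoSort returns a topological order of nodes 0..n-1 given directed
// edges, or nil if the graph contains a cycle.
func topoSort(n int, edges [][2]int) []int {
	adj := make([][]int, n)
	indeg := make([]int, n)
	for _, e := range edges {
		adj[e[0]] = append(adj[e[0]], e[1])
		indeg[e[1]]++
	}
	queue := []int{}
	for v := 0; v < n; v++ {
		if indeg[v] == 0 {
			queue = append(queue, v)
		}
	}
	order := []int{}
	for len(queue) > 0 {
		v := queue[0]
		queue = queue[1:]
		order = append(order, v)
		for _, w := range adj[v] {
			indeg[w]--
			if indeg[w] == 0 {
				queue = append(queue, w)
			}
		}
	}
	if len(order) != n {
		return nil // cycle detected
	}
	return order
}

func main() {
	fmt.Println(topoSort(4, [][2]int{{0, 1}, {0, 2}, {1, 3}, {2, 3}}))
}
```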
Problems
Go Tip
Dijkstra with container/heap: heap item = [2]int{dist, node}. Lazy deletion: skip if dist > dist[node]. For Bellman-Ford with K stops: copy prev dist array each round to avoid using same-round updates.
5 DP Patterns (Master These)
- 1D Linear: Climb stairs, House Robber
- Knapsack: 0/1 vs Unbounded
- LCS/LIS: String subsequences
- Interval DP: Matrix chain, burst balloons
- State Machine: Stock prices (cooldown, txn limit)
Pattern Recognition
Question has overlapping subproblems + optimal substructure? → DP. Start with brute force recursion → add memoization → optimize to bottom-up. Space optimization: if only previous row needed, use 1D array.
Problems by Pattern
Go Tip
For 2D DP: dp := make([][]int, m+1) then for i := range dp { dp[i] = make([]int, n+1) }. Go doesn't have 2D array literals for large sizes. For space optimization, two rows: prev, curr := make([]int,n+1), make([]int,n+1).
Problems
Go Tip
Pass path []int by value for immutability at each branch, or append to slice and defer removal: path = append(path, n); backtrack(...); path = path[:len(path)-1]. For result, deep copy path: tmp := make([]int, len(path)); copy(tmp, path); result = append(result, tmp).
🏆 Day 29–30: Full Mock Interviews
- Day 29: 2× timed 45-min sessions. One array/graph problem + one DP. Record yourself explaining. Review pattern gaps.
- Day 30: Review all weak spots from notes. Re-solve 5 problems that took >30 min. Speed coding practice.
- Final target: Any medium in ≤20 min, any hard in ≤40 min with explanation
- Go code standard: clean, no unnecessary allocations, handles edge cases (empty input, single element)
Problems repeatedly reported by engineers at 5-year level across Google, Meta, Stripe, Uber, Coinbase, and top Indian unicorns. Grouped by category. These go beyond patterns — they test judgment and edge-case thinking.
Pattern Recognition Ladder
- Binary search on answer: if "can we do it with X?" is monotonic, search X not the raw solution.
- Monotonic stack/deque: nearest greater/smaller, sliding maximum, contribution counting.
- Union-Find: dynamic connectivity, grouping, offline merges, redundant edges.
- Sweep line: sort events and process state changes over time or position.
- Trie / bitwise trie: prefix lookup, autocomplete, max XOR, string indexing.
Interview Conversion Rule
- If brute force compares many candidates, ask: can I sort, precompute, or maintain state incrementally?
- If the array answer depends on nearest boundaries, ask: can a monotonic stack expose contribution ranges?
- If the graph mutates by adding edges, ask: can DSU replace repeated BFS/DFS?
- If the question asks minimum feasible capacity/time/rate, ask: is the predicate monotonic?
- If the question mixes intervals and overlaps, ask: should I sort events and sweep?
Why These Get Asked
- Tests in-place manipulation + edge case coverage
- Median of two arrays = O(log(min(m,n))) — signals CS depth
- Palindrome pairs shows trie + hash combo thinking
- Wildcard matching separates DP from greedy candidates
Senior Expectations
- Optimal from the start — no O(n²) "I'll optimize later"
- Handle empty input, all-same chars, single element
- Articulate why brute force fails before coding
- Clean Go: no global vars, no unnecessary allocations
Problems
Go Tip — Median of Two Arrays
Binary search on the smaller array. Partition both arrays such that left half ≤ right half. maxLeft := max(nums1[p1-1], nums2[p2-1]). Watch out: when p1 == 0, left1 = math.MinInt. When p1 == m, right1 = math.MaxInt. Always binary search on the shorter array to guarantee O(log(min(m,n))).
When This Pattern Appears
- Question asks for minimum capacity, speed, days, distance, or maximum feasible value.
- There is a yes/no predicate that becomes permanently true or false past some boundary.
- Greedy check runs in O(n) or O(n log n), making total O(n log answer-range).
- Senior signal: state the monotonic predicate before writing code.
Common Mistakes
- Binary searching the array index instead of the solution space.
- Using mid := (l+r)/2 without thinking about overflow or integer rounding.
- Greedy checker is subtly wrong on edge cases, so the binary search converges to garbage.
- Forgetting whether the answer should bias leftmost true or rightmost true.
Problems
Go Tip
Template: for l < r { mid := l + (r-l)/2; if feasible(mid) { r = mid } else { l = mid + 1 } }. Keep feasible pure and side-effect free so you can unit test it separately.
Core Insight
- Each element contributes to some range where it is minimum, maximum, or first blocker.
- Monotonic stack gives previous/next smaller or greater in O(n).
- Deque handles sliding window extrema while keeping candidates ordered.
- This pattern shows up far more often than candidates expect.
What Good Explanations Sound Like
- "I am not computing every subarray. I am counting each index's contribution."
- "The stack stores unresolved candidates whose boundary hasn't been discovered yet."
- "Each index is pushed and popped once, so this is amortized O(n)."
- "I need strict vs non-strict inequality to avoid double counting duplicates."
Problems
Go Tip
Use a plain []int slice as stack for indices. For deque, keep head index instead of repeatedly slicing from the front to avoid hidden reallocations.
Why This Matters
- DSU turns repeated graph connectivity checks into near O(1) unions and finds.
- Offline sorting by weight/time/value often unlocks much simpler solutions.
- Shows maturity: you know when not to rerun BFS/DFS from scratch.
- Very common in infra-style interview questions about connectivity and partitioning.
Implementation Notes
- Always use path compression + union by size/rank.
- Model components count explicitly when the question asks "how many groups remain?"
- For offline queries, sort both edges and queries by threshold, then sweep once.
- Good candidates explain why DSU cannot handle arbitrary deletions without rollback tricks.
Problems
Go Tip
Represent DSU as two slices: parent []int and size []int. Make find iterative if you want to avoid recursion depth concerns in very large inputs.
When To Reach For It
- Prefix search, autocomplete, dictionary segmentation, wildcard expansion.
- Bitwise trie when maximizing XOR or answering bounded XOR queries.
- Senior signal: pick trie only when it simplifies repeated prefix work, not by habit.
- Memory trade-off matters; say it out loud.
Design Judgment
- Trie is fast but memory-hungry; hashmap may win for sparse datasets.
- Compressed trie / radix tree is often the production answer.
- For autocomplete, combine trie prefix lookup with heap/top-k ranking metadata.
- For XOR trie, iterate bits from high to low and greedily choose the opposite branch.
Problems
Go Tip
For lowercase-only trie, [26]*Node is much faster than map[byte]*Node. Use maps only when the alphabet is sparse or dynamic.
Pattern Summary
- Sort starts/ends or build event points and process them in order.
- Difference arrays turn repeated range updates into two boundary writes.
- Heap often appears when you need "currently active" intervals.
- This pattern is common in scheduling, meeting rooms, booking, and traffic analytics.
What Strong Candidates Say
- "I only care when state changes, so I can compress work into events."
- "Tie-breaking start vs end events changes correctness; I need to define it explicitly."
- "Difference array works because each query updates a contiguous range."
- "Heap stores the active intervals by the one attribute I need next."
Problems
Go Tip
For sweep line event sorting, define a small struct and use sort.Slice with explicit tie-break rules. Hidden tie-break bugs are the main reason these solutions fail.
Why These Get Asked
- Critical connections = real infra (which service removes breaks network?)
- Reconstruct itinerary = Eulerian path — shows algorithm breadth
- Swim in water = Dijkstra variant — tests adaptability
- Accounts merge = Union-Find with string keys
Tarjan's Bridge Algorithm
- Track disc[u] (discovery time) and low[u] (lowest reachable)
- Edge (u,v) is bridge if low[v] > disc[u]
- DFS + backedge tracking = O(V+E)
- SCC: Tarjan's or Kosaraju's (two-pass DFS)
Problems
Go Tip — Tarjan's Bridge
Use closure-based DFS: timer := 0; disc := make([]int, n); low := make([]int, n). For each unvisited neighbor: update low[u] = min(low[u], low[v]). After DFS child returns, check if low[v] > disc[u] → bridge. Skip the edge back to parent using if v == parent { continue }.
Stock Problems — State Machine Mastery
- At most k transactions → 3D DP: day × txn × holding
- Cooldown → states: held, sold, rest
- Transaction fee → absorb fee at buy or sell, not both
- Compress: only need prev day values → O(k) space
Interval DP Pattern
- Enumerate split point k in range [i,j]
- dp[i][j] = best over all k in (i,j)
- Fill by increasing length, not index
- Burst Balloons: add sentinel 1s at both ends
Problems
Go Tip — Stock k Transactions
State: buy[j], sell[j] for j = 0..k transactions used. Transition each day: buy[j] = max(buy[j], sell[j-1] - price), sell[j] = max(sell[j], buy[j] + price). Initialize buy arrays to math.MinInt. Single O(k) pass per day. For k ≥ n/2, it becomes unlimited transactions.
LFU Cache — Core Insight
- Two maps: key→(val,freq) and freq→DLL of keys
- Track minFreq — only valid eviction target
- On get/put: move key from freq bucket to freq+1 bucket
- O(1) all ops. LRU order within same frequency
Range Sum 2D Mutable
- Segment tree or Binary Indexed Tree (Fenwick) in 2D
- BIT: update in O(log m · log n), query same
- Real use: database aggregate queries with updates
- Segment tree: more flexible (range updates)
Problems
Go Tip — LFU in Go
Use map[int]*list.List from container/list for freq buckets — Go's doubly linked list. Store list elements as struct {key, val, freq int}. keyMap map[int]*list.Element for O(1) access. On access: l.Remove(elem) from old freq list, push to front of freqMap[freq+1]. If old list empty and was minFreq, increment minFreq.
Most Asked Go Concurrency Problems
- Print FizzBuzz with N goroutines — channel sync
- Bounded parallel fetch — semaphore pattern
- Pipeline: producer → transformer → consumer
- Fan-out / fan-in — merge multiple channels
Key Patterns to Know Cold
- Done channel for cancellation (context.Context)
- Semaphore: buffered channel of size N
- Once: sync.Once for lazy init
- Mutex vs RWMutex: reads concurrent, writes exclusive
LeetCode Concurrency Problems
Go Tip — FooBar with Channels
Two goroutines, two channels: fooTurn := make(chan struct{}, 1), barTurn := make(chan struct{}). Seed with fooTurn <- struct{}{}. Foo: receive from fooTurn, print, send to barTurn. Bar: receive from barTurn, print, send to fooTurn. Clean, no mutex, no busy-wait. This is the idiomatic Go solution interviewers want to see.
This section is intentionally outside the 30-day timeline. Use it when you want machine-coding depth, not calendar discipline. Complete, production-quality Go solutions for the most frequently asked machine coding problems at 5-year experience level. Each includes design decisions, edge cases, and follow-up extensions.
Round Structure
- First 5-10 min: clarify API, concurrency needs, error behavior, and extensibility requirements.
- Next 10 min: design interfaces + structs before full implementation.
- Middle 25-35 min: build the happy path and one strong invariant.
- Final 10 min: edge cases, thread safety, tests you would write, and follow-up extensions.
Prompt Families To Cover
- Caches, rate limiters, key-value stores, and schedulers.
- Pub-sub, worker pools, retry queues, and job executors.
- Elevator, parking lot, splitwise, snake-ladder, and book-my-show style object models.
- Leaderboard, autocomplete, notification service, and in-memory databases.
Interview Signal
The machine-coding differentiator is not writing the most code. It is choosing the right abstractions early, naming invariants, and being explicit about concurrency, cleanup, and testability.
What They Test
- Token Bucket: bursty traffic allowed up to capacity
- Sliding Window Counter: smooth rate, no burst
- Thread-safety with sync.Mutex
- Interface design — Accept(key string) bool
Follow-up Questions
- Per-user vs global rate limiting
- Distributed rate limiter (Redis + Lua script)
- Leaky bucket vs token bucket tradeoffs
- How to test without sleeping in tests
Complete Go Solution
// ── Token Bucket Rate Limiter ────────────────────────────
package main
import (
"fmt"
"sync"
"time"
)
type TokenBucket struct {
mu sync.Mutex
capacity float64
tokens float64
refillRate float64 // tokens per second
lastRefill time.Time
}
func NewTokenBucket(capacity int, refillRate float64) *TokenBucket {
return &TokenBucket{
capacity: float64(capacity),
tokens: float64(capacity),
refillRate: refillRate,
lastRefill: time.Now(),
}
}
func (tb *TokenBucket) Allow() bool {
tb.mu.Lock()
defer tb.mu.Unlock()
now := time.Now()
elapsed := now.Sub(tb.lastRefill).Seconds()
tb.tokens = min(tb.capacity, tb.tokens+elapsed*tb.refillRate)
tb.lastRefill = now
if tb.tokens >= 1 {
tb.tokens--
return true
}
return false
}
// ── Sliding Window Rate Limiter ──────────────────────────
type SlidingWindowLimiter struct {
mu sync.Mutex
limit int
window time.Duration
requests []time.Time
}
func NewSlidingWindow(limit int, window time.Duration) *SlidingWindowLimiter {
return &SlidingWindowLimiter{limit: limit, window: window}
}
func (sl *SlidingWindowLimiter) Allow() bool {
sl.mu.Lock()
defer sl.mu.Unlock()
now := time.Now()
cutoff := now.Add(-sl.window)
// evict expired timestamps
i := 0
for i < len(sl.requests) && sl.requests[i].Before(cutoff) {
i++
}
sl.requests = sl.requests[i:]
if len(sl.requests) < sl.limit {
sl.requests = append(sl.requests, now)
return true
}
return false
}
func main() {
tb := NewTokenBucket(5, 2) // cap 5, refill 2/sec
for i := 0; i < 8; i++ {
fmt.Printf("request %d: allowed=%v\n", i, tb.Allow())
}
}
Interview Signal
Mention the tradeoff: token bucket allows bursts up to capacity — good for APIs. Sliding window is smoother but uses O(requests) memory. For distributed systems, mention Redis + Lua atomicity. Interviewers at Uber/Stripe ask this in the context of their API gateway.
What They Test
- DLL + hashmap for O(1) get/put
- TTL expiry — lazy vs eager eviction
- sync.RWMutex — reads concurrent, writes exclusive
- Generic design (Go 1.18+ generics)
Follow-up Questions
- Background goroutine to clean expired keys
- How to make it distributed (consistent hashing)
- LFU variant — what changes?
- Cache stampede / thundering herd prevention
Complete Go Solution
package main
import (
"fmt"
"sync"
"time"
)
type entry struct {
key int
val int
expiry time.Time
hasExp bool
prev, next *entry
}
type LRUCache struct {
mu sync.RWMutex
cap int
items map[int]*entry
head, tail *entry // sentinel nodes
}
func NewLRU(cap int) *LRUCache {
h, t := &entry{}, &entry{}
h.next, t.prev = t, h
return &LRUCache{cap: cap, items: make(map[int]*entry), head: h, tail: t}
}
func (c *LRUCache) remove(e *entry) {
e.prev.next, e.next.prev = e.next, e.prev
}
func (c *LRUCache) pushFront(e *entry) {
e.next = c.head.next; e.prev = c.head
c.head.next.prev = e; c.head.next = e
}
func (c *LRUCache) Get(key int) (int, bool) {
c.mu.Lock()
defer c.mu.Unlock()
e, ok := c.items[key]
if !ok { return 0, false }
if e.hasExp && time.Now().After(e.expiry) {
c.remove(e); delete(c.items, key); return 0, false
}
c.remove(e); c.pushFront(e)
return e.val, true
}
func (c *LRUCache) Put(key, val int, ttl time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
if e, ok := c.items[key]; ok {
e.val = val
if ttl > 0 { e.expiry = time.Now().Add(ttl); e.hasExp = true }
c.remove(e); c.pushFront(e); return
}
if len(c.items) == c.cap {
lru := c.tail.prev
c.remove(lru); delete(c.items, lru.key)
}
e := &entry{key: key, val: val}
if ttl > 0 { e.expiry = time.Now().Add(ttl); e.hasExp = true }
c.pushFront(e); c.items[key] = e
}
func main() {
cache := NewLRU(3)
cache.Put(1, 100, 0)
cache.Put(2, 200, 2*time.Second)
if v, ok := cache.Get(2); ok { fmt.Println("key 2:", v) }
time.Sleep(3 * time.Second)
if _, ok := cache.Get(2); !ok { fmt.Println("key 2 expired") }
}
Interview Signal
Consider sync.RWMutex — but note Get here mutates recency order (moves the node to the front), so a plain RLock in Get is not safe; keep Lock for Get, or decouple recency updates from the read path. For TTL background cleanup, mention a goroutine with time.Ticker + context.Done() for graceful shutdown.
What They Test
- Goroutine lifecycle management
- Graceful shutdown — drain jobs before exit
- Backpressure — block when queue full
- Context cancellation propagation
Real-World Use
- Kafka consumer group processing
- Batch DB writes with bounded concurrency
- Parallel HTTP fan-out (microservices)
- Image resizing / file processing pipelines
Complete Go Solution
package main
import (
"context"
"fmt"
"sync"
"time"
)
type Job[T, R any] struct {
ID int
Input T
}
type Result[T, R any] struct {
Job Job[T, R]
Output R
Err error
}
type Pool[T, R any] struct {
workers int
jobs chan Job[T, R]
results chan Result[T, R]
wg sync.WaitGroup
}
func NewPool[T, R any](workers, queueSize int) *Pool[T, R] {
return &Pool[T, R]{
workers: workers,
jobs: make(chan Job[T, R], queueSize),
results: make(chan Result[T, R], queueSize),
}
}
func (p *Pool[T, R]) Start(ctx context.Context, fn func(context.Context, T) (R, error)) {
for i := 0; i < p.workers; i++ {
p.wg.Add(1)
go func() {
defer p.wg.Done()
for {
select {
case job, ok := <-p.jobs:
if !ok { return }
out, err := fn(ctx, job.Input)
p.results <- Result[T, R]{Job: job, Output: out, Err: err}
case <-ctx.Done():
return
}
}
}()
}
go func() { p.wg.Wait(); close(p.results) }()
}
func (p *Pool[T, R]) Submit(job Job[T, R]) { p.jobs <- job }
func (p *Pool[T, R]) Close() { close(p.jobs) }
func (p *Pool[T, R]) Results() <-chan Result[T, R] { return p.results }
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
pool := NewPool[int, string](3, 10)
pool.Start(ctx, func(ctx context.Context, n int) (string, error) {
time.Sleep(100 * time.Millisecond)
return fmt.Sprintf("processed-%d", n), nil
})
for i := 0; i < 6; i++ {
pool.Submit(Job[int, string]{ID: i, Input: i * 10})
}
pool.Close()
for r := range pool.Results() {
fmt.Printf("job %d → %s\n", r.Job.ID, r.Output)
}
}
Interview Signal
Generics here show Go 1.18+ awareness — big signal at senior level. Key design choices to mention: buffered channels prevent goroutine blocking on submit, context propagation allows graceful cancellation, closing jobs channel triggers all workers to drain and exit cleanly via range. Always mention: never close a channel from the receiver side.
What They Test
- Fan-out to multiple subscribers per topic
- Non-blocking publish (slow subscriber drops)
- Clean unsubscribe — no goroutine leak
- Wildcard topic matching (bonus)
Follow-up Questions
- Persistent subscribers — replay missed events
- At-least-once vs at-most-once delivery
- How Kafka solves this at scale
- Dead letter queue for failed subscribers
Complete Go Solution
package main
import (
"fmt"
"sync"
"time"
)
type Message struct {
Topic string
Payload interface{}
}
type Subscriber struct {
id string
ch chan Message
}
type EventBus struct {
mu sync.RWMutex
subs map[string][]*Subscriber
}
func NewEventBus() *EventBus {
return &EventBus{subs: make(map[string][]*Subscriber)}
}
func (eb *EventBus) Subscribe(topic, id string, bufSize int) *Subscriber {
eb.mu.Lock()
defer eb.mu.Unlock()
sub := &Subscriber{id: id, ch: make(chan Message, bufSize)}
eb.subs[topic] = append(eb.subs[topic], sub)
return sub
}
func (eb *EventBus) Publish(topic string, payload interface{}) int {
eb.mu.RLock()
subs := append([]*Subscriber(nil), eb.subs[topic]...) // snapshot: Unsubscribe mutates the backing array
eb.mu.RUnlock()
msg := Message{Topic: topic, Payload: payload}
delivered := 0
for _, sub := range subs {
select {
case sub.ch <- msg:
delivered++
default: // slow subscriber, drop message
}
}
return delivered
}
func (eb *EventBus) Unsubscribe(topic string, sub *Subscriber) {
eb.mu.Lock()
defer eb.mu.Unlock()
subs := eb.subs[topic]
for i, s := range subs {
if s.id == sub.id {
eb.subs[topic] = append(subs[:i], subs[i+1:]...)
close(s.ch)
return
}
}
}
func main() {
bus := NewEventBus()
s1 := bus.Subscribe("payments", "service-a", 10)
s2 := bus.Subscribe("payments", "service-b", 10)
go func() {
for msg := range s1.ch {
fmt.Printf("service-a got: %v\n", msg.Payload)
}
}()
go func() {
for msg := range s2.ch {
fmt.Printf("service-b got: %v\n", msg.Payload)
}
}()
bus.Publish("payments", map[string]interface{}{"txn": "T001", "amount": 500})
bus.Publish("payments", map[string]interface{}{"txn": "T002", "amount": 1200})
time.Sleep(100 * time.Millisecond)
bus.Unsubscribe("payments", s1)
bus.Unsubscribe("payments", s2)
}
Interview Signal
Copying the subscriber slice before publishing with RLock and RUnlock is critical — allows concurrent reads while blocking only on write (subscribe/unsubscribe). Mention: in production, this is how Kafka consumer groups work internally. The select { default: } non-blocking send is the idiomatic Go drop-if-full pattern — much cleaner than checking channel length.
Three States
- Closed: requests pass through, count failures
- Open: reject all requests immediately (fast fail)
- Half-Open: let N requests through to probe recovery
What They Test
- State machine design — transitions correct?
- Thread-safe state transitions
- Configurable threshold and timeout
- Callback/hook on state change (observability)
Complete Go Solution
package main
import (
"errors"; "fmt"; "sync"; "time"
)
type State uint8
const (Closed State = iota; Open; HalfOpen)
var ErrOpen = errors.New("circuit breaker is open")
type CB struct {
mu sync.Mutex
state State
failures int
successes int
threshold int // failures to open
halfOpenMax int // successes to close from half-open
openTimeout time.Duration
openedAt time.Time
OnStateChange func(from, to State)
}
func NewCB(threshold, halfOpenMax int, timeout time.Duration) *CB {
return &CB{
threshold: threshold,
halfOpenMax: halfOpenMax,
openTimeout: timeout,
}
}
func (cb *CB) transition(to State) {
from := cb.state
cb.state = to
cb.failures, cb.successes = 0, 0
if to == Open { cb.openedAt = time.Now() }
if cb.OnStateChange != nil { cb.OnStateChange(from, to) }
}
func (cb *CB) Call(fn func() error) error {
cb.mu.Lock()
defer cb.mu.Unlock() // note: fn runs under the lock for simplicity; production breakers release it around fn
switch cb.state {
case Open:
if time.Since(cb.openedAt) > cb.openTimeout {
cb.transition(HalfOpen)
} else {
return ErrOpen
}
}
err := fn()
if err != nil {
cb.failures++
if cb.state == HalfOpen || cb.failures >= cb.threshold {
cb.transition(Open)
}
return err
}
if cb.state == HalfOpen {
cb.successes++
if cb.successes >= cb.halfOpenMax {
cb.transition(Closed)
}
} else {
cb.failures = 0
}
return nil
}
func main() {
cb := NewCB(3, 2, 2*time.Second)
cb.OnStateChange = func(from, to State) {
states := []string{"Closed", "Open", "HalfOpen"}
fmt.Printf("state: %s → %s\n", states[from], states[to])
}
fail := func() error { return errors.New("downstream error") }
for i := 0; i < 5; i++ {
fmt.Printf("call %d: %v\n", i, cb.Call(fail))
}
}
Interview Signal
This is directly from your production work — CBDC retry/rollback flows use the same state machine pattern. Key points: the OnStateChange hook is your observability hook (emit metrics, alerts). In production (Netflix Hystrix, Sony gobreaker), also track success rate, not just absolute failures. Mention: combine with exponential backoff on retry for complete resilience.
What They Test
- GET / SET / DEL / EXPIRE / TTL commands
- Background goroutine for expiry cleanup
- Shard-based locking for throughput
- Persistence hint (AOF / snapshot trade-off)
Follow-up Questions
- How Redis does lazy vs active expiry
- Add INCR / DECR atomically
- Add pub-sub on key expiry events
- Consistent hashing for sharding
Complete Go Solution
package main
import (
"context"; "fmt"
"sync"; "time"
)
type item struct {
val string
expiry time.Time
hasExp bool
}
type KVStore struct {
mu sync.RWMutex
store map[string]item
}
func NewKVStore(ctx context.Context) *KVStore {
kv := &KVStore{store: make(map[string]item)}
go kv.cleanupLoop(ctx)
return kv
}
func (kv *KVStore) Set(key, val string, ttl time.Duration) {
kv.mu.Lock(); defer kv.mu.Unlock()
it := item{val: val}
if ttl > 0 { it.expiry = time.Now().Add(ttl); it.hasExp = true }
kv.store[key] = it
}
func (kv *KVStore) Get(key string) (string, bool) {
kv.mu.RLock(); defer kv.mu.RUnlock()
it, ok := kv.store[key]
if !ok { return "", false }
if it.hasExp && time.Now().After(it.expiry) { return "", false } // lazy expiry
return it.val, true
}
func (kv *KVStore) Del(key string) {
kv.mu.Lock(); defer kv.mu.Unlock()
delete(kv.store, key)
}
func (kv *KVStore) TTL(key string) time.Duration {
kv.mu.RLock(); defer kv.mu.RUnlock()
it, ok := kv.store[key]
if !ok || !it.hasExp { return -1 } // simplified: Redis returns -2 for a missing key, -1 for no expiry
remaining := time.Until(it.expiry)
if remaining < 0 { return 0 }
return remaining
}
func (kv *KVStore) cleanupLoop(ctx context.Context) {
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ticker.C:
kv.mu.Lock()
now := time.Now()
for k, v := range kv.store {
if v.hasExp && now.After(v.expiry) { delete(kv.store, k) }
}
kv.mu.Unlock()
case <-ctx.Done(): return
}
}
}
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
kv := NewKVStore(ctx)
kv.Set("session:abc", "user123", 2*time.Second)
kv.Set("config:timeout", "30s", 0)
if v, ok := kv.Get("session:abc"); ok { fmt.Println("got:", v) }
fmt.Println("ttl:", kv.TTL("session:abc").Round(time.Millisecond))
time.Sleep(3 * time.Second)
if _, ok := kv.Get("session:abc"); !ok { fmt.Println("session expired") }
}
Interview Signal
Mention Redis does both lazy expiry (check on access) AND active expiry (periodic scan of 20 random keys with TTL). Your cleanupLoop mirrors that. For scale, mention sharding: N shards = N separate maps each with own RWMutex — shard by hash(key) % N. This is how BigCache and groupcache work in Go production.
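The sharding idea mentioned above can be sketched in a few lines. This is an illustrative `ShardedStore` (names and shard count are my own, not from a real library), using FNV as the hash and `hash(key) % N` to pick the shard:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numShards = 16

// shard pairs a map with its own lock, so writes to different
// shards never contend with each other.
type shard struct {
	mu   sync.RWMutex
	data map[string]string
}

type ShardedStore struct {
	shards [numShards]*shard
}

func NewShardedStore() *ShardedStore {
	s := &ShardedStore{}
	for i := range s.shards {
		s.shards[i] = &shard{data: make(map[string]string)}
	}
	return s
}

// shardFor hashes the key and picks its shard: hash(key) % N.
func (s *ShardedStore) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return s.shards[h.Sum32()%numShards]
}

func (s *ShardedStore) Set(key, val string) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	defer sh.mu.Unlock()
	sh.data[key] = val
}

func (s *ShardedStore) Get(key string) (string, bool) {
	sh := s.shardFor(key)
	sh.mu.RLock()
	defer sh.mu.RUnlock()
	v, ok := sh.data[key]
	return v, ok
}

func main() {
	s := NewShardedStore()
	s.Set("session:abc", "user123")
	v, ok := s.Get("session:abc")
	fmt.Println(v, ok) // user123 true
}
```

Dropping TTL and the cleanup loop into each shard recovers the full KV store; the point of the sketch is that lock contention scales down by roughly a factor of N.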
More Machine Coding Prompts To Add To Your Rotation
- Scheduler: delayed job queue with retry, backoff, and idempotency key support.
- Leaderboard: top-K, surrounding rank, seasonal reset, tie handling.
- Autocomplete: top suggestions by prefix with incremental updates.
- Splitwise: balance graph, minimize cash flow, expense settlement rules.
- Parking Lot: slot allocation strategy, concurrency-safe entry/exit, pricing policy.
- BookMyShow: seat locking with TTL, payment success/failure flow, race prevention.
- Snake and Ladder / Tic-Tac-Toe: low-level design, move validation, state machine.
- File System: path trie, create/list/delete, metadata cache, permission hooks.
- Notification Service: channel abstraction, retries, dedup, user preferences.
- Mini Redis: strings + lists + expiry + append-only persistence.