PATTERN-FIRST · GO LANG · FAANG+ CALIBRATED

DSA in 30 Days

Not random LeetCode grinding. Every day maps to a pattern. Master the pattern → the problems become obvious. Go-specific implementations throughout.

30 Days Total · ~2h Per Day · 230+ Problems · 6 MC Solutions
WEEK 01 Arrays, Sliding Window, Two Pointers, Prefix Sum LINEAR PATTERNS
DAY 01 Two Pointers — Opposite Ends Two Pointers

Core Pattern

Start pointers at both ends, move inward based on comparison. O(n) over O(n²) brute force. Key insight: sorted arrays make decisions deterministic.

Algorithm Logic

  1. Initialize left := 0, right := len(arr)-1.
  2. While left < right:
  3. Calculate state (e.g., sum := arr[left] + arr[right]).
  4. If state matches target, return/process.
  5. If state < target, left++ (increase sum).
  6. If state > target, right-- (decrease sum).

Why This Matters

  • Base for most array manipulation
  • Appears in 3Sum, container water, trapping rain
  • Reduces O(n²) → O(n) in interviews

Go Tip

Use sort.Slice(nums, func(i, j int) bool { return nums[i] < nums[j] }) to pre-sort. For 3Sum, skip duplicates with if i > 0 && nums[i] == nums[i-1] { continue }. No overhead from extra allocation — work in-place.
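A minimal sketch of the opposite-ends template on a sorted array (the name twoSumSorted and the (-1, -1) not-found convention are illustrative):

```go
package main

import "fmt"

// twoSumSorted returns indices of two values in a sorted slice that sum
// to target, or (-1, -1) if no pair exists. O(n) time, O(1) space.
func twoSumSorted(nums []int, target int) (int, int) {
    left, right := 0, len(nums)-1
    for left < right {
        sum := nums[left] + nums[right]
        switch {
        case sum == target:
            return left, right
        case sum < target:
            left++ // need a larger sum
        default:
            right-- // need a smaller sum
        }
    }
    return -1, -1
}

func main() {
    l, r := twoSumSorted([]int{2, 7, 11, 15}, 18)
    fmt.Println(l, r) // 1 2 (the pair 7 + 11)
}
```

Because the array is sorted, each comparison eliminates one pointer position for certain — that is the determinism the core pattern refers to.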

DAY 02 Two Pointers — Fast/Slow (Floyd's) Two Pointers

Core Pattern

Fast pointer moves 2x, slow moves 1x. If a cycle exists, they meet somewhere inside it; to find the cycle start, reset one pointer to head and advance both one step at a time until they meet again. Also used for finding the middle, and removing the nth node from the end without counting length.

Why This Matters

  • O(1) space cycle detection
  • Floyd's algorithm — classic interview signal
  • Variant: duplicate finder in O(1) space

Go Tip

Define node inline: type ListNode struct { Val int; Next *ListNode }. Go's nil pointer checks are cheap. For cycle detection: slow, fast := head, head, then loop while fast != nil && fast.Next != nil.
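The loop condition above in a runnable sketch (hasCycle is an illustrative name):

```go
package main

import "fmt"

type ListNode struct {
    Val  int
    Next *ListNode
}

// hasCycle reports whether the list contains a cycle using Floyd's
// fast/slow pointers. O(1) space; the pointers meet inside any cycle.
func hasCycle(head *ListNode) bool {
    slow, fast := head, head
    for fast != nil && fast.Next != nil {
        slow = slow.Next
        fast = fast.Next.Next
        if slow == fast {
            return true
        }
    }
    return false
}

func main() {
    a := &ListNode{Val: 1}
    b := &ListNode{Val: 2}
    a.Next, b.Next = b, a // 1 → 2 → 1 → …
    fmt.Println(hasCycle(a))                 // true
    fmt.Println(hasCycle(&ListNode{Val: 3})) // false
}
```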

DAY 03 Sliding Window — Fixed Size Sliding Window

Core Pattern

Window of size k slides right. Subtract outgoing left element, add incoming right. Critical: maintain a running aggregate (sum, max, frequency map). Never recompute from scratch.

Why This Matters

  • Real use: rate limiting (fixed window)
  • Foundation of all substring/subarray problems
  • Teaches "avoid redundant computation" thinking

Go Tip

Use map[byte]int for frequency. Compare maps: reflect.DeepEqual is expensive — instead track a matches int counter. Update on add/remove. Array [26]int beats map for lowercase-only char problems.
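The "subtract outgoing, add incoming" bookkeeping from the core pattern, sketched on max window sum (maxWindowSum is an illustrative name):

```go
package main

import "fmt"

// maxWindowSum returns the maximum sum of any length-k window by
// maintaining a running sum instead of recomputing each window.
func maxWindowSum(nums []int, k int) int {
    if k <= 0 || len(nums) < k {
        return 0
    }
    sum := 0
    for _, v := range nums[:k] {
        sum += v // seed the first window
    }
    best := sum
    for i := k; i < len(nums); i++ {
        sum += nums[i] - nums[i-k] // add incoming right, drop outgoing left
        if sum > best {
            best = sum
        }
    }
    return best
}

func main() {
    fmt.Println(maxWindowSum([]int{2, 1, 5, 1, 3, 2}, 3)) // 9 (5+1+3)
}
```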

DAY 04 Sliding Window — Variable Size Sliding Window

Core Pattern

Expand right when condition satisfied, shrink left when violated. Template: left=0; for right in range; [process]; while violated { shrink left }; update answer

Why This Matters

  • Longest substring problems — huge category
  • Greedy expansion + lazy contraction
  • At-most K distinct: key interview variant

Go Tip

For string iteration by byte: for i := 0; i < len(s); i++ is faster than range for ASCII. Use s[i] not rune(s[i]). max(a,b) available in Go 1.21+ — use builtin, don't write helpers.
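The expand/shrink template applied to the canonical longest-substring problem, using byte iteration and the Go 1.21+ max builtin as the tip suggests:

```go
package main

import "fmt"

// lengthOfLongestSubstring finds the longest substring without repeats.
// Expand right each step; shrink left only while the window is violated.
func lengthOfLongestSubstring(s string) int {
    var count [256]int // byte frequency — assumes single-byte (ASCII) input
    best, left := 0, 0
    for right := 0; right < len(s); right++ {
        count[s[right]]++
        for count[s[right]] > 1 { // violated: s[right] now duplicated
            count[s[left]]--
            left++
        }
        best = max(best, right-left+1) // builtin max, Go 1.21+
    }
    return best
}

func main() {
    fmt.Println(lengthOfLongestSubstring("abcabcbb")) // 3 ("abc")
}
```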

DAY 05 Prefix Sum + Hash Map Prefix Sum

Core Pattern

prefix[i] = prefix[i-1] + nums[i-1]. Range sum query in O(1). Combine with hash map for subarray sum = k: store prefix → count. Answer += map[prefix-k].

Why This Matters

  • Transforms O(n²) subarray problems → O(n)
  • Used in distributed stream analytics
  • 2D prefix sum for matrix problems

Go Tip

Pre-initialize map with zero: prefixCount := map[int]int{0: 1}. This handles subarrays starting at index 0. Go maps have O(1) avg but allocate — for small n, consider array of size 2n+1 with offset.
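The prefix-sum + hash-map combo end to end, including the {0: 1} pre-initialization (subarraySum is an illustrative name):

```go
package main

import "fmt"

// subarraySum counts subarrays summing to k using running prefix sums.
// prefixCount starts with {0: 1} so subarrays beginning at index 0 count.
func subarraySum(nums []int, k int) int {
    prefixCount := map[int]int{0: 1}
    prefix, ans := 0, 0
    for _, v := range nums {
        prefix += v
        ans += prefixCount[prefix-k] // earlier prefixes that complete a k-sum
        prefixCount[prefix]++
    }
    return ans
}

func main() {
    fmt.Println(subarraySum([]int{1, 1, 1}, 2)) // 2
}
```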

DAY 06 Intervals — Merge, Insert, Overlap Intervals

Core Pattern

Sort by start time. Merge condition: curr.start ≤ prev.end. For meeting rooms: sort starts/ends separately, use two pointers. Min-heap for active intervals.

Why This Matters

  • Calendar/scheduling systems real-world
  • Meeting rooms II appears in every loop
  • Sweep line algorithm foundation

Go Tip

Sort 2D slice: sort.Slice(intervals, func(i,j int) bool { return intervals[i][0] < intervals[j][0] }). For heap-based meeting rooms, implement container/heap interface (Len, Less, Swap, Push, Pop).

🎯 Week 1 Checkpoint — Mock Interview

  • Solve one medium Two Pointer + one medium Sliding Window in 45 min under timed conditions
  • Verbalize the pattern name before coding — "this is variable window because..."
  • Review: Prefix sum + hash map combo (most missed pattern)
  • Go code quality: no unnecessary allocations, proper error handling style
  • Target: 4/5 problems solved optimally
WEEK 02 Stack, Queue, Monotonic Stack, Heap / Priority Queue ORDER & PRIORITY
DAY 08 Stack — Classic Applications Stack

Core Pattern

LIFO for matching/nesting. Parentheses, histograms, calculator. Key: push opening, pop on closing, check top. Calculator: two stacks (nums + ops) or one stack with precedence.

Why This Matters

  • Parser/lexer foundation in compilers
  • Undo/redo in editors
  • Call stack simulation for DFS

Go Tip

Use slice as stack: stack := []int{}, push: stack = append(stack, v), pop: v, stack = stack[len(stack)-1], stack[:len(stack)-1]. No need for a container. This is idiomatic Go.
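The slice-as-stack idiom applied to bracket matching — push the expected closer on open, pop and compare on close:

```go
package main

import "fmt"

// isValid checks balanced brackets with a plain slice as the stack.
func isValid(s string) bool {
    stack := []byte{}
    for i := 0; i < len(s); i++ {
        switch s[i] {
        case '(':
            stack = append(stack, ')')
        case '[':
            stack = append(stack, ']')
        case '{':
            stack = append(stack, '}')
        default: // a closer: must match the stack top
            if len(stack) == 0 || stack[len(stack)-1] != s[i] {
                return false
            }
            stack = stack[:len(stack)-1] // pop
        }
    }
    return len(stack) == 0 // leftover openers mean unbalanced
}

func main() {
    fmt.Println(isValid("({[]})")) // true
    fmt.Println(isValid("(]"))     // false
}
```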

DAY 09 Monotonic Stack — Next Greater/Smaller Monotonic Stack

Core Pattern

Maintain stack in increasing or decreasing order. When current element breaks order → pop and record answer. Each element pushed/popped once → O(n). Store indices, not values.

The Stack Invariant

Think of the stack as a line of sight: with an increasing stack, the element below the top is always the nearest smaller value to the top element's left.

  1. Iterate through arr.
  2. While stack not empty AND curr < arr[stack.top]:
  3. Pop j. The "Next Smaller Right" for j is the current element.
  4. The "Next Smaller Left" for j is the new stack.top.
  5. Push current index.

Why This Matters

  • Daily temperatures: canonical interview problem
  • Used in histogram, trapping rain water
  • Span calculation in stock market systems

Go Tip

Store indices in monotonic stack. For circular arrays: iterate 2*n using i % n. When popping: result[stack[top]] = i - stack[top] for distance-based answers.

DAY 10 Heap / Priority Queue — Top K Heap

Core Pattern

Min-heap of size k for top-k largest. Max-heap for top-k smallest. Key insight: heap size k → pop when size exceeds k, top is the kth largest. O(n log k) beats O(n log n) sort.

Why This Matters

  • Real: top-K queries in analytics systems
  • Merge k sorted lists (Kafka partition merge)
  • Dijkstra's algorithm uses min-heap

Go Tip

Go's container/heap requires implementing 5 methods. Min-heap: func (h MinHeap) Less(i,j int) bool { return h[i] < h[j] }. Push/Pop must use pointer receiver. Use heap.Init, heap.Push, heap.Pop — not methods directly.
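The five-method interface plus the size-k eviction trick in one runnable sketch (kthLargest is an illustrative wrapper):

```go
package main

import (
    "container/heap"
    "fmt"
)

// MinHeap implements container/heap's interface over a plain int slice.
type MinHeap []int

func (h MinHeap) Len() int            { return len(h) }
func (h MinHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h MinHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *MinHeap) Push(x interface{}) { *h = append(*h, x.(int)) } // pointer receiver
func (h *MinHeap) Pop() interface{} {
    old := *h
    v := old[len(old)-1]
    *h = old[:len(old)-1]
    return v
}

// kthLargest keeps a min-heap of size k; its root is the kth largest.
// O(n log k) instead of O(n log n) for a full sort.
func kthLargest(nums []int, k int) int {
    h := &MinHeap{}
    for _, v := range nums {
        heap.Push(h, v) // use the package functions, not the methods
        if h.Len() > k {
            heap.Pop(h) // evict the smallest
        }
    }
    return (*h)[0]
}

func main() {
    fmt.Println(kthLargest([]int{3, 2, 1, 5, 6, 4}, 2)) // 5
}
```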

DAY 11 Deque — Sliding Window Maximum Deque / Monotonic Queue

Core Pattern

Monotonic deque: front always holds max/min index. Remove front if out of window. Remove from back if smaller than new element (maintain decreasing order). O(n) window maximum.

Why This Matters

  • O(n) vs O(n log n) heap approach
  • Jump game variants with deque
  • Constrained sliding window problems

Go Tip

Implement deque with []int: front pop is dq = dq[1:], back pop is dq = dq[:len(dq)-1]. Store indices not values so you can check window bounds. This is O(n) amortized — each element pushed/popped once.

DAY 12–13 Hashing — Design & Applications Hash Map / Set

Go Tip for LRU

LRU = map[int]*Node + doubly linked list. Go has no built-in ordered map. For LRU cache: maintain head/tail sentinel nodes to avoid nil checks. Update: remove from current position + insert at head. O(1) all ops.

🎯 Week 2 Checkpoint — Mock Interview

  • Implement container/heap from scratch — no looking up the interface
  • Solve Sliding Window Maximum and articulate why deque beats heap here
  • LRU Cache implementation under 20 minutes
  • Explain monotonic stack invariant verbally to imaginary interviewer
WEEK 03 Binary Search, Trees (DFS/BFS/BST), Tries HIERARCHICAL PATTERNS
DAY 15 Binary Search — Beyond Sorted Arrays Binary Search

Core Pattern

Template: lo, hi := 0, n-1; for lo <= hi { mid := lo + (hi-lo)/2 }. For "find leftmost": keep the lo <= hi loop and set hi = mid-1 even on a match, recording mid as the best candidate so far. Binary search on answer space: "can we achieve X?" → monotonic predicate.

Why This Matters

  • BS on answer space: powerful pattern for optimization
  • Rotated array: appears in nearly every interview loop
  • Real: binary search in LSM tree SSTables

Go Tip

Avoid overflow: mid := lo + (hi-lo)/2 not (lo+hi)/2. For answer-space BS, define a canAchieve(mid int) bool function — clean separation. sort.Search(n, f) is Go's built-in lower_bound.
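sort.Search in action as a lower_bound — its predicate must be monotonic (false…false, true…true), and it returns the first index where the predicate is true:

```go
package main

import (
    "fmt"
    "sort"
)

// lowerBound returns the first index i with nums[i] >= target,
// or len(nums) if no such index exists — Go's built-in lower_bound.
func lowerBound(nums []int, target int) int {
    return sort.Search(len(nums), func(i int) bool { return nums[i] >= target })
}

func main() {
    nums := []int{1, 2, 4, 4, 7}
    fmt.Println(lowerBound(nums, 4)) // 2 — leftmost 4
    fmt.Println(lowerBound(nums, 5)) // 4 — insertion point before 7
    fmt.Println(lowerBound(nums, 9)) // 5 — past the end: not found
}
```

The same shape works for answer-space problems: replace the index predicate with a canAchieve(mid) check over the candidate answers.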

DAY 16–17 Trees — DFS Patterns (Pre/In/Post) Tree DFS

Core Pattern

Three DFS variants. Pre-order: process node before children (serialize). In-order: BST sorted traversal. Post-order: process children first (height, diameter). Global vs local return values.

Why This Matters

  • Diameter/path problems need global state trick
  • Serialization = distributed system state transfer
  • BST validation: classic Senior interview

Go Tip

For global state in DFS, use closure: var maxPath int; var dfs func(*TreeNode) int; dfs = func(node *TreeNode) int {...; maxPath = max(maxPath, ...); return ...}; dfs(root). Avoid package-level vars — closures keep it clean and testable.

DAY 18 Trees — BFS Level Order Tree BFS

Go Tip — BFS with Channels

For educational purposes, BFS can use a Go channel as a queue: q := make(chan *TreeNode, n). In production-style interviews, slice-based queue is preferred. Use levelSize := len(queue) snapshot to process level-by-level.

DAY 19–20 Trie — Prefix Tree Trie

Core Pattern

Node = {children [26]*TrieNode, isEnd bool}. Insert: create nodes for each char. Search: traverse, return false if node nil. Prefix search: same but don't need isEnd. Add count int for frequency tracking.

Why This Matters

  • Search autocomplete — exact system design topic
  • IP routing tables use radix tries
  • XOR trie for max XOR pair problems

Go Tip

Prefer [26]*TrieNode over map[byte]*TrieNode for lowercase-only — it's 5–10x faster. Use c - 'a' as index. For wildcard search with '.', recurse on all non-nil children.
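The [26] fixed-array node with Insert/Search/StartsWith, as described in the core pattern (assumes lowercase a–z input):

```go
package main

import "fmt"

// TrieNode uses a fixed child array for lowercase letters — faster than
// a map because indexing replaces hashing.
type TrieNode struct {
    children [26]*TrieNode
    isEnd    bool
}

func (t *TrieNode) Insert(word string) {
    node := t
    for i := 0; i < len(word); i++ {
        c := word[i] - 'a'
        if node.children[c] == nil {
            node.children[c] = &TrieNode{}
        }
        node = node.children[c]
    }
    node.isEnd = true
}

// walk returns the node reached by prefix, or nil if the path breaks.
func (t *TrieNode) walk(prefix string) *TrieNode {
    node := t
    for i := 0; i < len(prefix); i++ {
        node = node.children[prefix[i]-'a']
        if node == nil {
            return nil
        }
    }
    return node
}

func (t *TrieNode) Search(word string) bool   { n := t.walk(word); return n != nil && n.isEnd }
func (t *TrieNode) StartsWith(p string) bool  { return t.walk(p) != nil }

func main() {
    trie := &TrieNode{}
    trie.Insert("apple")
    fmt.Println(trie.Search("apple"), trie.Search("app"), trie.StartsWith("app"))
    // true false true
}
```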

🎯 Week 3 Checkpoint — Mock Interview

  • Implement Trie from scratch in under 15 minutes
  • Solve any Hard tree problem (Path Sum, Serialize/Deserialize) cleanly
  • Binary search on answer space — pick a problem you've never seen, identify predicate
  • BFS vs DFS decision: articulate when to use each without thinking
WEEK 04 Graphs + Dynamic Programming (Pattern-Based) DEEP PATTERNS
DAY 22–23 Graph — BFS/DFS + Union Find Graph

Core Pattern

Build adjacency list from edges. BFS for shortest path (unweighted). DFS for connected components. Union-Find: find(x) returns x when parent[x] == x; otherwise recurse and set parent[x] = find(parent[x]) — that assignment is the path compression. Union by rank (or size).

Why This Matters

  • Social networks, dependency graphs
  • Kafka partition assignment uses graph coloring
  • Union-Find: Kruskal's MST, network connectivity

Go Tip

BFS with goroutines for parallel level exploration (educational): use sync.WaitGroup + channel. In interview, keep it sequential. For Union-Find, iterative path compression: for parent[x] != x { parent[x] = parent[parent[x]]; x = parent[x] }
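A compact Union-Find using exactly the iterative path-compression loop from the tip, plus union by size:

```go
package main

import "fmt"

// DSU: disjoint set union with path halving and union by size.
type DSU struct {
    parent, size []int
}

func NewDSU(n int) *DSU {
    d := &DSU{parent: make([]int, n), size: make([]int, n)}
    for i := range d.parent {
        d.parent[i], d.size[i] = i, 1
    }
    return d
}

func (d *DSU) Find(x int) int {
    for d.parent[x] != x {
        d.parent[x] = d.parent[d.parent[x]] // path halving
        x = d.parent[x]
    }
    return x
}

// Union merges the sets of a and b; returns false if already connected.
func (d *DSU) Union(a, b int) bool {
    ra, rb := d.Find(a), d.Find(b)
    if ra == rb {
        return false
    }
    if d.size[ra] < d.size[rb] { // attach smaller tree under larger
        ra, rb = rb, ra
    }
    d.parent[rb] = ra
    d.size[ra] += d.size[rb]
    return true
}

func main() {
    d := NewDSU(5)
    d.Union(0, 1)
    d.Union(3, 4)
    fmt.Println(d.Find(0) == d.Find(1)) // true
    fmt.Println(d.Find(1) == d.Find(3)) // false
}
```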

DAY 24 Graph — Topological Sort + Cycle Detection Topo Sort / DAG

Go Tip

Kahn's (BFS topo sort): build indegree[] map, push all 0-indegree nodes to queue, BFS reducing indegrees. If output len ≠ n → cycle exists. DFS approach: use visited (0=unvisited, 1=in-stack, 2=done).
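Kahn's algorithm exactly as described — indegrees, a 0-indegree queue, and a short output signalling a cycle (topoSort is an illustrative name; edges are [from, to] pairs):

```go
package main

import "fmt"

// topoSort returns a topological order of nodes 0..n-1 and whether the
// graph is acyclic; len(order) < n means a cycle prevented completion.
func topoSort(n int, edges [][]int) ([]int, bool) {
    adj := make([][]int, n)
    indegree := make([]int, n)
    for _, e := range edges {
        adj[e[0]] = append(adj[e[0]], e[1])
        indegree[e[1]]++
    }
    queue := []int{}
    for v := 0; v < n; v++ {
        if indegree[v] == 0 {
            queue = append(queue, v)
        }
    }
    order := []int{}
    for len(queue) > 0 {
        v := queue[0]
        queue = queue[1:]
        order = append(order, v)
        for _, w := range adj[v] { // removing v frees its successors
            indegree[w]--
            if indegree[w] == 0 {
                queue = append(queue, w)
            }
        }
    }
    return order, len(order) == n
}

func main() {
    order, ok := topoSort(4, [][]int{{0, 1}, {0, 2}, {1, 3}, {2, 3}})
    fmt.Println(order, ok) // [0 1 2 3] true

    _, ok = topoSort(2, [][]int{{0, 1}, {1, 0}})
    fmt.Println(ok) // false — cycle detected
}
```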

DAY 25 Dijkstra + Bellman-Ford Shortest Path

Go Tip

Dijkstra with container/heap: heap item = [2]int{dist, node}. Lazy deletion: when popping, skip stale entries whose recorded dist is greater than the current best dist[node]. For Bellman-Ford with K stops: copy the previous dist array each round so you never use same-round updates.

DAY 26–27 Dynamic Programming — 5 Core Patterns DP

5 DP Patterns (Master These)

  • 1D Linear: Climb stairs, House Robber
  • Knapsack: 0/1 vs Unbounded
  • LCS/LIS: String subsequences
  • Interval DP: Matrix chain, burst balloons
  • State Machine: Stock prices (cooldown, txn limit)

Pattern Recognition

Question has overlapping subproblems + optimal substructure? → DP. Start with brute force recursion → add memoization → optimize to bottom-up. Space optimization: if only previous row needed, use 1D array.

Go Tip

For 2D DP: dp := make([][]int, m+1) then for i := range dp { dp[i] = make([]int, n+1) } — Go slices have no 2D literal shorthand, so each row must be allocated. For space optimization, two rows: prev, curr := make([]int, n+1), make([]int, n+1).
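The 1D-linear pattern taken all the way to O(1) space — House Robber with two rolling variables instead of a dp array (uses the Go 1.21+ max builtin):

```go
package main

import "fmt"

// rob: classic 1D DP. Only the previous two states matter, so two
// variables replace the whole array: prev2 = best up to i-2,
// prev1 = best up to i-1.
func rob(nums []int) int {
    prev2, prev1 := 0, 0
    for _, v := range nums {
        // either skip house i (prev1) or rob it (prev2 + v)
        prev2, prev1 = prev1, max(prev1, prev2+v)
    }
    return prev1
}

func main() {
    fmt.Println(rob([]int{2, 7, 9, 3, 1})) // 12 (2 + 9 + 1)
}
```

The same progression — recursion, memoization, bottom-up table, rolled-up variables — applies to most of the patterns in the list above.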

DAY 28 Backtracking — Permutations & Subsets Backtracking

Go Tip

Slices share backing arrays, so passing path []int by value does not isolate branches — a later append can overwrite sibling state. Instead, append and truncate around the recursive call: path = append(path, n); backtrack(...); path = path[:len(path)-1]. Before storing a result, deep copy the path: tmp := make([]int, len(path)); copy(tmp, path); result = append(result, tmp).

🏆 Day 29–30: Full Mock Interviews

  • Day 29: 2× timed 45-min sessions. One array/graph problem + one DP. Record yourself explaining. Review pattern gaps.
  • Day 30: Review all weak spots from notes. Re-solve 5 problems that took >30 min. Speed coding practice.
  • Final target: Any medium in ≤20 min, any hard in ≤40 min with explanation
  • Go code standard: clean, no unnecessary allocations, handles edge cases (empty input, single element)
BONUS Machine Coding Round — Go Solutions FULL SOLUTIONS

This section is intentionally outside the 30-day timeline. Use it when you want machine-coding depth, not calendar discipline. Complete, production-quality Go solutions for the most frequently asked machine coding problems at 5-year experience level. Each includes design decisions, edge cases, and follow-up extensions.

MC MAP Machine Coding Round — What To Practice First Strategy

Round Structure

  • First 5-10 min: clarify API, concurrency needs, error behavior, and extensibility requirements.
  • Next 10 min: design interfaces + structs before full implementation.
  • Middle 25-35 min: build the happy path and one strong invariant.
  • Final 10 min: edge cases, thread safety, tests you would write, and follow-up extensions.

Prompt Families To Cover

  • Caches, rate limiters, key-value stores, and schedulers.
  • Pub-sub, worker pools, retry queues, and job executors.
  • Elevator, parking lot, splitwise, snake-ladder, and book-my-show style object models.
  • Leaderboard, autocomplete, notification service, and in-memory databases.

Interview Signal

The machine-coding differentiator is not writing the most code. It is choosing the right abstractions early, naming invariants, and being explicit about concurrency, cleanup, and testability.

MC 01 Rate Limiter — Token Bucket + Sliding Window Concurrency

What They Test

  • Token Bucket: bursty traffic allowed up to capacity
  • Sliding Window Counter: smooth rate, no burst
  • Thread-safety with sync.Mutex
  • Interface design — Accept(key string) bool

Follow-up Questions

  • Per-user vs global rate limiting
  • Distributed rate limiter (Redis + Lua script)
  • Leaky bucket vs token bucket tradeoffs
  • How to test without sleeping in tests

Complete Go Solution

// ── Token Bucket Rate Limiter ────────────────────────────
package main

import (
    "fmt"
    "sync"
    "time"
)

type TokenBucket struct {
    mu         sync.Mutex
    capacity   float64
    tokens     float64
    refillRate float64 // tokens per second
    lastRefill time.Time
}

func NewTokenBucket(capacity int, refillRate float64) *TokenBucket {
    return &TokenBucket{
        capacity:   float64(capacity),
        tokens:     float64(capacity),
        refillRate: refillRate,
        lastRefill: time.Now(),
    }
}

func (tb *TokenBucket) Allow() bool {
    tb.mu.Lock()
    defer tb.mu.Unlock()

    now := time.Now()
    elapsed := now.Sub(tb.lastRefill).Seconds()
    tb.tokens = min(tb.capacity, tb.tokens+elapsed*tb.refillRate)
    tb.lastRefill = now

    if tb.tokens >= 1 {
        tb.tokens--
        return true
    }
    return false
}

// ── Sliding Window Rate Limiter ──────────────────────────
type SlidingWindowLimiter struct {
    mu       sync.Mutex
    limit    int
    window   time.Duration
    requests []time.Time
}

func NewSlidingWindow(limit int, window time.Duration) *SlidingWindowLimiter {
    return &SlidingWindowLimiter{limit: limit, window: window}
}

func (sl *SlidingWindowLimiter) Allow() bool {
    sl.mu.Lock()
    defer sl.mu.Unlock()

    now := time.Now()
    cutoff := now.Add(-sl.window)

    // evict expired timestamps
    i := 0
    for i < len(sl.requests) && sl.requests[i].Before(cutoff) {
        i++
    }
    sl.requests = sl.requests[i:]

    if len(sl.requests) < sl.limit {
        sl.requests = append(sl.requests, now)
        return true
    }
    return false
}

func main() {
    tb := NewTokenBucket(5, 2) // cap 5, refill 2/sec
    for i := 0; i < 8; i++ {
        fmt.Printf("request %d: allowed=%v\n", i, tb.Allow())
    }
}

Interview Signal

Mention the tradeoff: token bucket allows bursts up to capacity — good for APIs. Sliding window is smoother but uses O(requests) memory. For distributed systems, mention Redis + Lua atomicity. Interviewers at Uber/Stripe ask this in the context of their API gateway.

MC 02 Thread-Safe LRU Cache with TTL Design + Concurrency

What They Test

  • DLL + hashmap for O(1) get/put
  • TTL expiry — lazy vs eager eviction
  • sync.RWMutex — reads concurrent, writes exclusive
  • Generic design (Go 1.18+ generics)

Follow-up Questions

  • Background goroutine to clean expired keys
  • How to make it distributed (consistent hashing)
  • LFU variant — what changes?
  • Cache stampede / thundering herd prevention

Complete Go Solution

package main

import (
    "fmt"
    "sync"
    "time"
)

type entry struct {
    key     int
    val     int
    expiry  time.Time
    hasExp  bool
    prev, next *entry
}

type LRUCache struct {
    mu         sync.RWMutex
    cap        int
    items      map[int]*entry
    head, tail *entry // sentinel nodes
}

func NewLRU(cap int) *LRUCache {
    h, t := &entry{}, &entry{}
    h.next, t.prev = t, h
    return &LRUCache{cap: cap, items: make(map[int]*entry), head: h, tail: t}
}

func (c *LRUCache) remove(e *entry) {
    e.prev.next, e.next.prev = e.next, e.prev
}

func (c *LRUCache) pushFront(e *entry) {
    e.next = c.head.next; e.prev = c.head
    c.head.next.prev = e; c.head.next = e
}

func (c *LRUCache) Get(key int) (int, bool) {
    c.mu.Lock()
    defer c.mu.Unlock()
    e, ok := c.items[key]
    if !ok { return 0, false }
    if e.hasExp && time.Now().After(e.expiry) {
        c.remove(e); delete(c.items, key); return 0, false
    }
    c.remove(e); c.pushFront(e)
    return e.val, true
}

func (c *LRUCache) Put(key, val int, ttl time.Duration) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if e, ok := c.items[key]; ok {
        e.val = val
        if ttl > 0 { e.expiry = time.Now().Add(ttl); e.hasExp = true }
        c.remove(e); c.pushFront(e); return
    }
    if len(c.items) == c.cap {
        lru := c.tail.prev
        c.remove(lru); delete(c.items, lru.key)
    }
    e := &entry{key: key, val: val}
    if ttl > 0 { e.expiry = time.Now().Add(ttl); e.hasExp = true }
    c.pushFront(e); c.items[key] = e
}

func main() {
    cache := NewLRU(3)
    cache.Put(1, 100, 0)
    cache.Put(2, 200, 2*time.Second)
    if v, ok := cache.Get(2); ok { fmt.Println("key 2:", v) }
    time.Sleep(3 * time.Second)
    if _, ok := cache.Get(2); !ok { fmt.Println("key 2 expired") }
}

Interview Signal

Upgrade to sync.RWMutex — reads use RLock, only writes use Lock. Mention: for cache-heavy workloads this gives significant read throughput improvement. For TTL background cleanup, mention a goroutine with time.Ticker + context.Done() for graceful shutdown.

MC 03 Concurrent Worker Pool Goroutines + Channels

What They Test

  • Goroutine lifecycle management
  • Graceful shutdown — drain jobs before exit
  • Backpressure — block when queue full
  • Context cancellation propagation

Real-World Use

  • Kafka consumer group processing
  • Batch DB writes with bounded concurrency
  • Parallel HTTP fan-out (microservices)
  • Image resizing / file processing pipelines

Complete Go Solution

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

type Job[T, R any] struct {
    ID    int
    Input T
}

type Result[T, R any] struct {
    Job    Job[T, R]
    Output R
    Err    error
}

type Pool[T, R any] struct {
    workers int
    jobs    chan Job[T, R]
    results chan Result[T, R]
    wg      sync.WaitGroup
}

func NewPool[T, R any](workers, queueSize int) *Pool[T, R] {
    return &Pool[T, R]{
        workers: workers,
        jobs:    make(chan Job[T, R], queueSize),
        results: make(chan Result[T, R], queueSize),
    }
}

func (p *Pool[T, R]) Start(ctx context.Context, fn func(context.Context, T) (R, error)) {
    for i := 0; i < p.workers; i++ {
        p.wg.Add(1)
        go func() {
            defer p.wg.Done()
            for {
                select {
                case job, ok := <-p.jobs:
                    if !ok { return }
                    out, err := fn(ctx, job.Input)
                    p.results <- Result[T, R]{Job: job, Output: out, Err: err}
                case <-ctx.Done():
                    return
                }
            }
        }()
    }
    go func() { p.wg.Wait(); close(p.results) }()
}

func (p *Pool[T, R]) Submit(job Job[T, R]) { p.jobs <- job }
func (p *Pool[T, R]) Close()               { close(p.jobs) }
func (p *Pool[T, R]) Results() <-chan Result[T, R] { return p.results }

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    pool := NewPool[int, string](3, 10)
    pool.Start(ctx, func(ctx context.Context, n int) (string, error) {
        time.Sleep(100 * time.Millisecond)
        return fmt.Sprintf("processed-%d", n), nil
    })

    for i := 0; i < 6; i++ {
        pool.Submit(Job[int, string]{ID: i, Input: i * 10})
    }
    pool.Close()
    for r := range pool.Results() {
        fmt.Printf("job %d → %s\n", r.Job.ID, r.Output)
    }
}

Interview Signal

Generics here show Go 1.18+ awareness — big signal at senior level. Key design choices to mention: buffered channels prevent goroutine blocking on submit, context propagation allows graceful cancellation, closing jobs channel triggers all workers to drain and exit cleanly via range. Always mention: never close a channel from the receiver side.

MC 04 In-Memory Pub-Sub Event Bus Concurrency + Design

What They Test

  • Fan-out to multiple subscribers per topic
  • Non-blocking publish (slow subscriber drops)
  • Clean unsubscribe — no goroutine leak
  • Wildcard topic matching (bonus)

Follow-up Questions

  • Persistent subscribers — replay missed events
  • At-least-once vs at-most-once delivery
  • How Kafka solves this at scale
  • Dead letter queue for failed subscribers

Complete Go Solution

package main

import (
    "fmt"
    "sync"
    "time"
)

type Message struct {
    Topic   string
    Payload interface{}
}

type Subscriber struct {
    id  string
    ch  chan Message
}

type EventBus struct {
    mu   sync.RWMutex
    subs map[string][]*Subscriber
}

func NewEventBus() *EventBus {
    return &EventBus{subs: make(map[string][]*Subscriber)}
}

func (eb *EventBus) Subscribe(topic, id string, bufSize int) *Subscriber {
    eb.mu.Lock()
    defer eb.mu.Unlock()
    sub := &Subscriber{id: id, ch: make(chan Message, bufSize)}
    eb.subs[topic] = append(eb.subs[topic], sub)
    return sub
}

func (eb *EventBus) Publish(topic string, payload interface{}) int {
    eb.mu.RLock()
    subs := make([]*Subscriber, len(eb.subs[topic]))
    copy(subs, eb.subs[topic]) // snapshot: Unsubscribe mutates the slice in place
    eb.mu.RUnlock()
    msg := Message{Topic: topic, Payload: payload}
    delivered := 0
    for _, sub := range subs {
        select {
        case sub.ch <- msg:
            delivered++
        default: // slow subscriber, drop message
        }
    }
    return delivered
}

func (eb *EventBus) Unsubscribe(topic string, sub *Subscriber) {
    eb.mu.Lock()
    defer eb.mu.Unlock()
    subs := eb.subs[topic]
    for i, s := range subs {
        if s.id == sub.id {
            eb.subs[topic] = append(subs[:i], subs[i+1:]...)
            close(s.ch)
            return
        }
    }
}

func main() {
    bus := NewEventBus()
    s1 := bus.Subscribe("payments", "service-a", 10)
    s2 := bus.Subscribe("payments", "service-b", 10)

    go func() {
        for msg := range s1.ch {
            fmt.Printf("service-a got: %v\n", msg.Payload)
        }
    }()
    go func() {
        for msg := range s2.ch {
            fmt.Printf("service-b got: %v\n", msg.Payload)
        }
    }()

    bus.Publish("payments", map[string]interface{}{"txn": "T001", "amount": 500})
    bus.Publish("payments", map[string]interface{}{"txn": "T002", "amount": 1200})
    time.Sleep(100 * time.Millisecond)
    bus.Unsubscribe("payments", s1)
    bus.Unsubscribe("payments", s2)
}

Interview Signal

Copying the subscriber slice before publishing with RLock and RUnlock is critical — allows concurrent reads while blocking only on write (subscribe/unsubscribe). Mention: in production, this is how Kafka consumer groups work internally. The select { default: } non-blocking send is the idiomatic Go drop-if-full pattern — much cleaner than checking channel length.

MC 05 Circuit Breaker Resilience Pattern

Three States

  • Closed: requests pass through, count failures
  • Open: reject all requests immediately (fast fail)
  • Half-Open: let N requests through to probe recovery

What They Test

  • State machine design — transitions correct?
  • Thread-safe state transitions
  • Configurable threshold and timeout
  • Callback/hook on state change (observability)

Complete Go Solution

package main

import (
    "errors"; "fmt"; "sync"; "time"
)

type State uint8
const (Closed State = iota; Open; HalfOpen)

var ErrOpen = errors.New("circuit breaker is open")

type CB struct {
    mu           sync.Mutex
    state        State
    failures     int
    successes    int
    threshold    int   // failures to open
    halfOpenMax  int   // successes to close from half-open
    openTimeout  time.Duration
    openedAt     time.Time
    OnStateChange func(from, to State)
}

func NewCB(threshold, halfOpenMax int, timeout time.Duration) *CB {
    return &CB{
        threshold:   threshold,
        halfOpenMax: halfOpenMax,
        openTimeout: timeout,
    }
}

func (cb *CB) transition(to State) {
    from := cb.state
    cb.state = to
    cb.failures, cb.successes = 0, 0
    if to == Open { cb.openedAt = time.Now() }
    if cb.OnStateChange != nil { cb.OnStateChange(from, to) }
}

func (cb *CB) Call(fn func() error) error {
    cb.mu.Lock()
    defer cb.mu.Unlock()

    switch cb.state {
    case Open:
        if time.Since(cb.openedAt) > cb.openTimeout {
            cb.transition(HalfOpen)
        } else {
            return ErrOpen
        }
    }

    err := fn()
    if err != nil {
        cb.failures++
        if cb.state == HalfOpen || cb.failures >= cb.threshold {
            cb.transition(Open)
        }
        return err
    }

    if cb.state == HalfOpen {
        cb.successes++
        if cb.successes >= cb.halfOpenMax {
            cb.transition(Closed)
        }
    } else {
        cb.failures = 0
    }
    return nil
}

func main() {
    cb := NewCB(3, 2, 2*time.Second)
    cb.OnStateChange = func(from, to State) {
        states := []string{"Closed", "Open", "HalfOpen"}
        fmt.Printf("state: %s → %s\n", states[from], states[to])
    }
    fail := func() error { return errors.New("downstream error") }
    for i := 0; i < 5; i++ {
        fmt.Printf("call %d: %v\n", i, cb.Call(fail))
    }
}

Interview Signal

This state machine shows up directly in production retry/rollback flows — payment systems (including CBDC rails) use the same pattern. Key points: the OnStateChange hook is your observability hook (emit metrics, alerts). In production (Netflix Hystrix, Sony gobreaker), also track success rate, not just absolute failures. Mention: combine with exponential backoff on retry for complete resilience.

MC 06 In-Memory Key-Value Store with TTL (Redis-lite) Storage + Concurrency

What They Test

  • GET / SET / DEL / EXPIRE / TTL commands
  • Background goroutine for expiry cleanup
  • Shard-based locking for throughput
  • Persistence hint (AOF / snapshot trade-off)

Follow-up Questions

  • How Redis does lazy vs active expiry
  • Add INCR / DECR atomically
  • Add pub-sub on key expiry events
  • Consistent hashing for sharding

Complete Go Solution

package main

import (
    "context"; "fmt"
    "sync"; "time"
)

type item struct {
    val    string
    expiry time.Time
    hasExp bool
}

type KVStore struct {
    mu    sync.RWMutex
    store map[string]item
}

func NewKVStore(ctx context.Context) *KVStore {
    kv := &KVStore{store: make(map[string]item)}
    go kv.cleanupLoop(ctx)
    return kv
}

func (kv *KVStore) Set(key, val string, ttl time.Duration) {
    kv.mu.Lock(); defer kv.mu.Unlock()
    it := item{val: val}
    if ttl > 0 { it.expiry = time.Now().Add(ttl); it.hasExp = true }
    kv.store[key] = it
}

func (kv *KVStore) Get(key string) (string, bool) {
    kv.mu.RLock(); defer kv.mu.RUnlock()
    it, ok := kv.store[key]
    if !ok { return "", false }
    if it.hasExp && time.Now().After(it.expiry) { return "", false } // lazy expiry
    return it.val, true
}

func (kv *KVStore) Del(key string) {
    kv.mu.Lock(); defer kv.mu.Unlock()
    delete(kv.store, key)
}

func (kv *KVStore) TTL(key string) time.Duration {
    kv.mu.RLock(); defer kv.mu.RUnlock()
    it, ok := kv.store[key]
    if !ok || !it.hasExp { return -1 }
    remaining := time.Until(it.expiry)
    if remaining < 0 { return 0 }
    return remaining
}

func (kv *KVStore) cleanupLoop(ctx context.Context) {
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            kv.mu.Lock()
            now := time.Now()
            for k, v := range kv.store {
                if v.hasExp && now.After(v.expiry) { delete(kv.store, k) }
            }
            kv.mu.Unlock()
        case <-ctx.Done(): return
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    kv := NewKVStore(ctx)
    kv.Set("session:abc", "user123", 2*time.Second)
    kv.Set("config:timeout", "30s", 0)
    if v, ok := kv.Get("session:abc"); ok { fmt.Println("got:", v) }
    fmt.Println("ttl:", kv.TTL("session:abc").Round(time.Millisecond))
    time.Sleep(3 * time.Second)
    if _, ok := kv.Get("session:abc"); !ok { fmt.Println("session expired") }
}

Interview Signal

Mention Redis does both lazy expiry (check on access) AND active expiry (periodic scan of 20 random keys with TTL). Your cleanupLoop mirrors that. For scale, mention sharding: N shards = N separate maps each with own RWMutex — shard by hash(key) % N. This is how BigCache and groupcache work in Go production.

More Machine Coding Prompts To Add To Your Rotation

  • Scheduler: delayed job queue with retry, backoff, and idempotency key support.
  • Leaderboard: top-K, surrounding rank, seasonal reset, tie handling.
  • Autocomplete: top suggestions by prefix with incremental updates.
  • Splitwise: balance graph, minimize cash flow, expense settlement rules.
  • Parking Lot: slot allocation strategy, concurrency-safe entry/exit, pricing policy.
  • BookMyShow: seat locking with TTL, payment success/failure flow, race prevention.
  • Snake and Ladder / Tic-Tac-Toe: low-level design, move validation, state machine.
  • File System: path trie, create/list/delete, metadata cache, permission hooks.
  • Notification Service: channel abstraction, retries, dedup, user preferences.
  • Mini Redis: strings + lists + expiry + append-only persistence.