Which Mutex to use in Go
Let's understand this with a log structure example.
A commit log is just an append-only slice:

```go
// at its heart, this is all a log is
type Log struct {
	records []string
}

func (l *Log) Append(record string) {
	l.records = append(l.records, record)
}

func (l *Log) Read(index int) string {
	return l.records[index]
}
```

Logs are ordered by time: index 0 is the oldest, the last index is the newest. You never update or delete, only append. That's the core idea.
This is exactly how Kafka, etcd, and distributed databases work at their heart. The log is the source of truth.
Why is this useful in distributed systems?
Service A → appends "user signed up" → index 0
Service A → appends "user paid" → index 1
Service A → appends "order shipped" → index 2
Service B → reads from index 0, replays everything
→ now B is in sync with A

The log becomes the single source of truth. Other services replay it to catch up. This is the big idea behind Kafka, Write-Ahead Logs in Postgres, and etcd in Kubernetes.
Replay... what it means
Think of it like a bank statement.
index 0 → "deposit $100"
index 1 → "deposit $200"
index 2 → "withdraw $50"

Your current balance isn't stored anywhere. You replay: read from index 0 to the end, apply each record in order, and arrive at the current state.
start at $0
+ $100 → $100
+ $200 → $300
- $50 → $250 ← current state, rebuilt from the log

That's replay. The log is the truth. The state is derived from it.
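The bank-statement replay above can be sketched in a few lines of Go. The `"deposit 100"` record format and the `replay` function are my own illustrative assumptions, not anything from a real system:

```go
package main

import (
	"fmt"
	"strings"
)

// replay walks the log from index 0 and folds each record
// into a running balance — state derived from the log.
func replay(log []string) int {
	balance := 0
	for _, record := range log {
		parts := strings.Fields(record) // e.g. ["deposit", "100"]
		amount := 0
		fmt.Sscanf(parts[1], "%d", &amount)
		if parts[0] == "deposit" {
			balance += amount
		} else { // "withdraw"
			balance -= amount
		}
	}
	return balance
}

func main() {
	log := []string{"deposit 100", "deposit 200", "withdraw 50"}
	fmt.Println(replay(log)) // 250
}
```

Notice that `replay` is deterministic: given the same log, every service that replays it arrives at exactly the same state.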
This is exactly what etcd does: when a new Kubernetes node joins, it replays the log to catch up to the current cluster state! Interesting, isn't it? 💁♀️
Now the problem: concurrent access
Your log works fine with one goroutine. But in a real server, multiple requests hit it simultaneously.
```go
// goroutine 1 (request A): reading len(l.records)
// goroutine 2 (request B): append() is resizing the slice
// → data race → crash or corrupted data
```

This is a race condition: two goroutines touching the same memory at the same time.
Mutex: a lock on a door
Mutex = Mutual Exclusion. Only one goroutine can hold it at a time.
Goroutine 1: "I want to write" → grabs lock → writes → releases lock
Goroutine 2: "I want to write" → lock is taken → WAITS → lock free → grabs lock → writes

```go
import "sync"

type Log struct {
	mu      sync.Mutex // the lock
	records []string
}

func (l *Log) Append(record string) {
	l.mu.Lock()         // grab the lock
	defer l.mu.Unlock() // release when the function returns, no matter what
	l.records = append(l.records, record)
}

func (l *Log) Read(index int) string {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.records[index]
}
```

defer l.mu.Unlock() is the Go pattern: you put the unlock right next to the lock so you never forget it. The unlock runs when the function returns.
But wait: RWMutex is smarter
With sync.Mutex, even two readers block each other. But reads don't conflict with each other; only write vs. write and write vs. read do.
```go
type Log struct {
	mu      sync.RWMutex // upgraded lock
	records []string
}

func (l *Log) Append(record string) {
	l.mu.Lock() // exclusive; no one else in
	defer l.mu.Unlock()
	l.records = append(l.records, record)
}

func (l *Log) Read(index int) string {
	l.mu.RLock() // shared; multiple readers allowed simultaneously
	defer l.mu.RUnlock()
	return l.records[index]
}
```

Multiple readers → all get RLock simultaneously
Writer comes in → waits for all readers to finish
Writer holds lock → all readers wait

When to use which
| Situation | Use |
|---|---|
| Only one goroutine | No mutex needed |
| Multiple goroutines, all writing | sync.Mutex |
| Multiple goroutines, mostly reading | sync.RWMutex |
^ you're reasoning from the access pattern to the tool. That's the right way to think about it.
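To round this off, here's a minimal self-contained sketch of the read-heavy case from the table: ten concurrent readers share RLock while a single writer takes the exclusive lock. The workload numbers are arbitrary, just for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

type Log struct {
	mu      sync.RWMutex
	records []string
}

func (l *Log) Append(record string) {
	l.mu.Lock() // exclusive: readers and other writers wait
	defer l.mu.Unlock()
	l.records = append(l.records, record)
}

func (l *Log) Read(index int) string {
	l.mu.RLock() // shared: readers overlap freely
	defer l.mu.RUnlock()
	return l.records[index]
}

func main() {
	var log Log
	log.Append("deposit 100")

	var wg sync.WaitGroup
	// a mostly-read workload: ten readers per writer
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = log.Read(0) // RLocks can be held simultaneously
		}()
	}
	wg.Add(1)
	go func() {
		defer wg.Done()
		log.Append("deposit 200") // waits for readers, then runs alone
	}()
	wg.Wait()
	fmt.Println(log.Read(0)) // "deposit 100"
}
```

Under heavy read traffic this is where RWMutex pays off: readers never queue behind each other, only behind a writer.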