The Secret Life of Go: Concurrency
Source: Dev.to
Bringing order to the chaos of the race condition.
Chapter 15: Sharing by Communicating
The archive was unusually loud that Tuesday. Not from voices, but from the rain hammering against the copper roof, a chaotic, drumming rhythm that filled the high ceilings.
Ethan was pacing. His laptop fan was spinning at maximum speed.
“It works, but it doesn’t,” he said, running a hand through his hair. “I’m trying to process these thousand log files. I used the go keyword to spawn a background job for each one. It’s blazing fast.”
“And the results?” Eleanor asked, calmly stirring her tea.
“Garbage. Sometimes I get 998 results. Sometimes 1005. Sometimes the program crashes with a map assignment error. It’s chaos.”
He showed her the code:
func processLogs(logs []string) map[string]int {
    results := make(map[string]int)
    for _, log := range logs {
        go func(l string) {
            // Simulate processing
            user := parseUser(l)
            results[user]++ // THE BUG IS HERE
        }(log)
    }
    return results
}
“Right,” Eleanor said, leaning in. “You’ve got a classic race condition. You spun up a thousand goroutines, and they’re all fighting over that one map. They’re overwriting each other’s work because nothing is stopping them.”
Ethan: “So I need a lock? A Mutex?”
Eleanor: “You could use a Mutex, but then you’re pausing everything constantly to manage that one variable. In Go we try to avoid that. We have a saying: ‘Do not communicate by sharing memory; share memory by communicating.’”
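For comparison, here is a minimal sketch of the Mutex route Eleanor mentions. It assumes a stand-in `parseUser` (here it just takes the first word of the line) and uses a `sync.WaitGroup` so the function actually waits for its workers; the names `processLogsMutex` and the sample logs are illustrative, not from the story.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// parseUser is a stand-in for the story's parser: it just
// returns the first field of a log line.
func parseUser(line string) string {
	return strings.Fields(line)[0]
}

// processLogsMutex is the lock-based fix: every goroutine must
// acquire the mutex before touching the shared map, and the
// WaitGroup keeps the function from returning early.
func processLogsMutex(logs []string) map[string]int {
	results := make(map[string]int)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for _, log := range logs {
		wg.Add(1)
		go func(l string) {
			defer wg.Done()
			user := parseUser(l)
			mu.Lock()
			results[user]++ // safe: only one goroutine holds the lock
			mu.Unlock()
		}(log)
	}
	wg.Wait() // block until every worker has finished
	return results
}

func main() {
	logs := []string{"alice login", "bob login", "alice logout"}
	fmt.Println(processLogsMutex(logs))
}
```

It works, but every worker stalls at `mu.Lock()` to serialize access to one variable, which is exactly the contention Eleanor wants to avoid.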
The Goroutine
“First,” Eleanor said, “look at what you actually built. A goroutine isn’t just a function call. It’s fire‑and‑forget. The Go scheduler manages them, multiplexing thousands of them onto a few OS threads.”
“That sounds efficient,” Ethan said.
“It is. But because they are independent, your main function—the one returning results—doesn’t wait for any of them. It almost certainly returns before your workers have even finished.”
“That explains the missing data,” Ethan realized. “I’m returning an empty map while the workers are still running in the background.”
“Exactly. You need a way to get the data back safely. Instead of letting everyone touch the map, let’s just have them pass the data back to you.”
The Channel
She opened a new file. “We use a channel. It is a direct pipe for data between running tasks.”
ch := make(chan string) // Create a channel of strings
“This handles the synchronization for you,” Eleanor explained. “If you send data into it, the code pauses until someone is there to receive it. It forces the two sides to line up perfectly.”
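That blocking hand-off is easy to see in a few lines. In this small sketch (the function name `handOff` and the message are illustrative), the worker's send cannot complete until the main goroutine is ready to receive:

```go
package main

import "fmt"

// handOff demonstrates the direct, unbuffered exchange: the
// worker's send blocks until the receive below runs.
func handOff() string {
	ch := make(chan string) // unbuffered: send and receive must meet

	go func() {
		// This send pauses until someone is ready to receive.
		ch <- "hello from the worker"
	}()

	return <-ch // the receive unblocks the worker's send
}

func main() {
	fmt.Println(handOff())
}
```

Neither side races ahead; the channel forces them to meet.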
She refactored Ethan’s code:
func processLogs(logs []string) map[string]int {
    results := make(map[string]int)
    userChan := make(chan string)
    // Spawn workers
    for _, log := range logs {
        go func(l string) {
            user := parseUser(l)
            userChan <- user
        }(log)
    }
    // Collect results
    for i := 0; i < len(logs); i++ {
        user := <-userChan
        results[user]++
    }
    return results
}
“See the difference?” Eleanor asked. “Your workers calculate the user, but they don’t touch the map. They just hand the result off. The main function waits, grabs the result, and updates the map. Only one thing touches the memory.”
“So the channel effectively serializes the writes,” Ethan realized.
“Precisely. It creates a single point of entry.”
Blocking Is a Feature
“But wait,” Ethan asked. “What if the channel gets backed up?”
“By default, there is no backup,” Eleanor said. “It’s a direct hand‑off. When a worker sends on userChan, it blocks until the main goroutine receives the value. So they wait for each other.”
Buffered Channels
“Now,” Eleanor added, “sometimes you don’t want them locking up quite that often. You want a bit of a queue.”
ch := make(chan string, 100) // Buffer of 100 slots
“This gives you a buffer. Your workers can drop off 100 items without waiting. It lets them run a bit faster than the consumer for short bursts. But be careful—once that buffer is full, they block again.”
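A quick sketch of that behavior (the function name `bufferedDemo` is illustrative): with a buffer of three, three sends succeed with no receiver anywhere in sight, and `len` reports how many values are queued.

```go
package main

import "fmt"

// bufferedDemo fills a buffered channel without any receiver and
// reports how many values are queued in the buffer.
func bufferedDemo() int {
	ch := make(chan int, 3) // buffer of 3 slots

	// These sends do not block: the buffer has room.
	ch <- 1
	ch <- 2
	ch <- 3

	// A fourth send here, with no receiver, would block forever.
	return len(ch)
}

func main() {
	fmt.Println(bufferedDemo()) // number of queued values
}
```

Values come back out in FIFO order when a receiver finally drains the buffer.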
The select Statement
“One last thing,” Eleanor said. “What if you’re waiting on two things? Like getting a result or hitting a timeout?”
select {
case msg := <-ch1:
    // handle msg from ch1
case ch2 <- value:
    // send value to ch2
case <-time.After(5 * time.Second):
    // handle timeout
}
Key Takeaways
“Do not communicate by sharing memory; instead, share memory by communicating.”
Avoid having multiple goroutines access the same variable (e.g., a map) directly. Instead, pass the data through a channel to a single owner goroutine.
Buffered Channels
ch := make(chan int, 100) // capacity of 100
- A buffered channel has a fixed capacity.
- Sends block only when the buffer is full; receives block when the buffer is empty.
The select Statement
- Allows a goroutine to wait on multiple channel operations simultaneously.
- Executes the first case that becomes ready.
- Essential for handling time‑outs, cancellations, and multiplexed I/O.
select {
case msg := <-ch1:
    // handle msg from ch1
case ch2 <- value:
    // send value to ch2
case <-time.After(5 * time.Second):
    // handle timeout
}
Next chapter: The Context Package – Ethan learns how to stop a runaway goroutine and manage request deadlines politely.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.