Goroutines know how to run in parallel. But how do they pass data to each other? How does one goroutine tell another "I'm done, here's the result"? That's what channels are for. Without them, you have goroutines doing things in isolation, and you never get anything back — or worse, you share memory with a mutex and spend your life debugging race conditions.
This article is part 2 of a series on concurrency in Go:
- Part 1: goroutines and sync.WaitGroup
- Part 2: channels and the worker pool pattern
- Part 3: errors in concurrent context and clean panic handling
What is a channel?
A channel is a pipe between two goroutines. You send a value in one end,
someone else receives it on the other. The syntax is deliberately simple:
ch <- value to send, value := <-ch to receive.
ch := make(chan int)
go func() {
ch <- 42 // send 42 into the channel
}()
result := <-ch // receive (blocks until the goroutine sends)
fmt.Println(result) // 42
What matters here: receiving blocks. The line result := <-ch
waits until someone sends something. This is not polling, it's not a sleep —
Go suspends the goroutine and wakes it up when a value arrives. Elegant and free.
You can type channels in function signatures to indicate their direction:
chan<- int for write-only, <-chan int for read-only.
The compiler will complain if you get it wrong. Use this whenever you pass
a channel to a function — it documents the intent.
Buffered vs unbuffered channels
By default, make(chan int) creates an unbuffered channel.
Sending blocks until someone is ready to receive — it's a rendezvous,
like a phone call: both parties need to be available at the same time.
A buffered channel is a mailbox. You drop a message and walk away. The recipient will read it when they can. Sending only blocks if the mailbox is full:
ch := make(chan string, 3) // buffer of 3 messages
ch <- "message 1" // does not block
ch <- "message 2" // does not block
ch <- "message 3" // does not block
ch <- "message 4" // BLOCKS — buffer full, nobody is reading
The simple rule: start with an unbuffered channel. Add a buffer if you've measured a bottleneck or if the producer generates bursts that the consumer can't absorb instantly. Don't set a buffer of 1000 "just to be safe" — that hides logic bugs.
Closing a channel and range
close(ch) signals to receivers that no more values will come.
It unblocks everyone waiting on that channel. The loop
for val := range ch reads values until the channel is
closed and empty — that's the idiomatic way to consume a channel:
ch := make(chan int)
go func() {
defer close(ch) // always close with defer
for i := 0; i < 5; i++ {
ch <- i * i
}
}()
for square := range ch { // reads until close(ch)
fmt.Println(square) // 0, 1, 4, 9, 16
}
The absolute rule: the producer closes, never the consumer.
Writing to a closed channel causes a panic. Closing twice does too.
Reading from a closed channel returns the zero value — detect it with
val, ok := <-ch where ok is false if closed and empty.
select — listening to multiple channels at once
select is a switch for channels. It blocks until
one of the cases is ready, then executes it. If multiple cases are ready at the same time,
it picks one at random — intentional behavior to avoid starvation.
select {
case result := <-ch1:
fmt.Println("ch1 responded:", result)
case result := <-ch2:
fmt.Println("ch2 responded:", result)
case <-time.After(2 * time.Second):
fmt.Println("timeout — nobody responded in time")
}
time.After(d) returns a channel that receives a value after the specified duration.
Combined with select, this is the standard timeout pattern in Go.
In production, you'd prefer context.WithTimeout (part 3), but for simple code
this works perfectly.
With a default case, select no longer blocks — it tries the cases
and if none are ready, executes default. Useful for non-blocking polling:
select {
case msg := <-ch:
process(msg)
default:
// nobody is sending anything, we continue without blocking
}
Fan-out and Fan-in
These are the two fundamental patterns for distributing and collecting parallel work.
Fan-out: one producer feeds multiple consumers.
We distribute the work to process it in parallel.
Fan-in: multiple producers write to a single channel.
We collect results from parallel processing into a single stream.
func scrapeURLs(urls []string, nbWorkers int) []string {
jobs := make(chan string, len(urls))
résultats := make(chan string, len(urls))
// Fan-out: nbWorkers goroutines read from jobs
var wg sync.WaitGroup
for range nbWorkers {
wg.Add(1)
go func() {
defer wg.Done()
for url := range jobs {
contenu, err := fetch(url)
if err != nil {
continue // skip failed URLs; error propagation comes in part 3
}
résultats <- contenu // Fan-in: all workers write to the same channel
}
}()
}
// Send the work
for _, url := range urls {
jobs <- url
}
close(jobs) // signals workers that there's no more work
// Close résultats when all workers are done
go func() {
wg.Wait()
close(résultats)
}()
// Collect
var contenus []string
for contenu := range résultats {
contenus = append(contenus, contenu)
}
return contenus
}
The goroutine that closes résultats is necessary: you can't call
wg.Wait() and read résultats in the same goroutine — the workers
would block on writing if the buffer is full, and we'd never read. Deadlock guaranteed.
The intermediate goroutine breaks this cycle.
Worker Pool — the most useful pattern in production
Go can launch a million goroutines. But if you launch 10,000 goroutines each making an HTTP request, you'll saturate your connection pool, exhaust file descriptors, and the remote server will blacklist your IP.
The worker pool solves this: N fixed goroutines pull from a work queue. The number of tasks can be unlimited, the number of concurrent operations stays controlled.
type Job struct {
ID int
Chemin string
}
type Résultat struct {
JobID int
Err error
}
func workerPool(nbWorkers int, jobs <-chan Job, résultats chan<- Résultat) {
var wg sync.WaitGroup
for range nbWorkers {
wg.Add(1)
go func() {
defer wg.Done()
for job := range jobs { // each worker takes one job at a time
err := redimensionnerImage(job.Chemin)
résultats <- Résultat{JobID: job.ID, Err: err}
}
}()
}
wg.Wait()
close(résultats)
}
func main() {
images := chargerListeImages() // 100 images
jobs := make(chan Job, len(images))
résultats := make(chan Résultat, len(images))
go workerPool(5, jobs, résultats) // 5 workers maximum
for i, img := range images {
jobs <- Job{ID: i, Chemin: img}
}
close(jobs)
succès, échecs := 0, 0
for res := range résultats {
if res.Err != nil {
slog.Error("image failed", "job_id", res.JobID, "error", res.Err)
échecs++
} else {
succès++
}
}
slog.Info("done", "success", succès, "failures", échecs)
}
5 workers, 100 images, 0 saturation. If you have a pool of 5 DB connections, use 5 workers — the load is naturally bounded.
Classic channel mistakes
Deadlock. Two goroutines waiting on each other via a channel.
Go's runtime detects this and crashes with fatal error: all goroutines are asleep - deadlock!
Common cause: forgetting to close a channel that someone is waiting on with range.
Nil channel. An uninitialized channel is nil.
Sending to or receiving from a nil channel blocks forever, no error, no panic.
Your goroutine disappears silently. Always initialize with make.
var ch chan int // nil — DANGER
ch <- 42 // blocks forever, silent goroutine leak
ch = make(chan int) // OK
Closing twice. Immediate panic. If you have multiple producers,
coordinate them with a sync.WaitGroup to call close
only once.
Summary
- Unbuffered channel = rendezvous. The default.
- Buffered channel = mailbox. Sending only blocks if the buffer is full.
- The producer closes the channel, never the consumer.
- for val := range ch reads until closed.
- select listens to multiple channels. Timeout with time.After.
- Worker pool = N fixed workers. Controls real concurrency.
- Nil channel = silent goroutine leak. Always use
make.
Part 3 covers what to do
when the program needs to shut down cleanly: context for cancellation,
errgroup for error propagation, graceful shutdown on SIGTERM.
The channels you just learned are the foundation — part 3 builds on top of them.