Concurrency vs parallelism in Go: applied to Event Sourcing and CQRS

Rob Pike said in 2012: "Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once." Everyone nods at the conference. Nobody asks a question. And then you go home and still have no idea what it actually changes in your code.

This article starts from scratch: what the difference is, how Go expresses it with goroutines and channels, and why it matters as soon as you start working with architectures like Event Sourcing or CQRS. No advanced Go knowledge required.

The difference with a cooking analogy

Imagine a single chef preparing ten dishes at once. He starts the risotto, sears the scallops, checks on the sauce, comes back to the risotto. He only does one thing at a time — but he manages ten things by juggling his attention. That's concurrency.

Now put ten chefs in the same kitchen. Ten dishes are cooking physically at the same time, each at their own station. That's parallelism.

The difference in one sentence:

  • Concurrency = managing multiple tasks by alternating between them (even on a single core)
  • Parallelism = executing multiple tasks truly at the same time (multiple cores)

A concurrent program can run perfectly well on a single core — it just gives the illusion of doing several things at once because it switches quickly. Parallelism, on the other hand, requires real multi-core hardware.
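You can see this distinction in code by pinning the Go scheduler to a single core with runtime.GOMAXPROCS. This is a minimal sketch (runWorkers is a hypothetical helper): the goroutines still all complete — they interleave on one core instead of running in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// runWorkers launches n goroutines on a single core: they interleave
// (concurrency) but never run in parallel.
func runWorkers(n int) int {
	prev := runtime.GOMAXPROCS(1) // cap the scheduler to one core
	defer runtime.GOMAXPROCS(prev)

	results := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			results <- id // each goroutine reports in when scheduled
		}(i)
	}
	go func() { wg.Wait(); close(results) }()

	total := 0
	for range results {
		total++
	}
	return total
}

func main() {
	fmt.Println(runWorkers(3)) // prints 3: all goroutines finished on one core
}
```

Concurrency is a property of the program's structure; parallelism is a property of its execution. The same code, with GOMAXPROCS left at its default, would happily use every core available.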

Goroutines: Go's take on concurrency

In Go, the basic primitive for concurrency is the goroutine. Think of it as a lightweight thread, managed by Go rather than the operating system. You launch one with the go keyword in front of a function call:

package main

import (
    "fmt"
    "time"
)

func sayHello(name string) {
    fmt.Println("Hello", name)
}

func main() {
    go sayHello("Alice") // launched in the background
    go sayHello("Bob")   // launched in the background
    go sayHello("Charlie")

    time.Sleep(100 * time.Millisecond) // crude wait — real code uses sync.WaitGroup
    fmt.Println("Everyone said hello")
}

The three calls run concurrently. The order of output is not guaranteed — Go decides who goes first. On a multi-core machine, they may even run truly simultaneously (parallelism).

Why goroutines instead of classic threads? Because a goroutine starts with ~2 KB of memory, versus 1 to 8 MB for an OS thread. You can launch 100,000 goroutines without breaking a sweat. 100,000 threads, and your machine cries.
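You can check that claim yourself. This sketch launches 100,000 goroutines and waits for all of them — on a typical machine it finishes in a fraction of a second (spawn is a hypothetical helper):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn launches n goroutines that each bump an atomic counter,
// then waits for all of them to finish.
func spawn(n int) int64 {
	var count int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1) // safe concurrent increment
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(spawn(100_000)) // prints 100000
}
```

Try the same thing with 100,000 OS threads and you'll run out of memory long before you run out of patience.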

Channels: how goroutines talk to each other

The problem with concurrency: how do you share data between tasks without breaking everything? Go answers with channels — pipes through which goroutines send each other values.

package main

import "fmt"

func calculate(a, b int, result chan<- int) {
    result <- a + b // sends the result into the channel
}

func main() {
    ch := make(chan int) // creates a channel that carries ints

    go calculate(3, 4, ch) // runs the calculation in the background
    go calculate(10, 20, ch)

    r1 := <-ch // waits and receives the first result
    r2 := <-ch // waits and receives the second

    fmt.Println(r1, r2) // 7 and 30, in some order
}

The golden rule of channels: it's the sender (the producer) that closes the channel, never the receiver. If the receiver closes it while the producer is still sending, the next send panics and crashes the program.
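In code, the rule looks like this: the producer closes once it's done sending, which lets the consumer drain the channel with a plain range loop that ends on its own (produce and consume are hypothetical names for this sketch):

```go
package main

import "fmt"

// produce sends n values into the channel, then closes it.
// Closing is the producer's job — it signals "no more values".
func produce(n int, out chan<- int) {
	for i := 1; i <= n; i++ {
		out <- i
	}
	close(out)
}

// consume reads until the channel is closed; the range loop
// exits automatically, no explicit "done" signal needed.
func consume(in <-chan int) int {
	sum := 0
	for v := range in {
		sum += v
	}
	return sum
}

func main() {
	ch := make(chan int)
	go produce(4, ch)
	fmt.Println(consume(ch)) // prints 10 (1+2+3+4)
}
```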

Event Sourcing and CQRS — a quick refresher

Before applying concurrency, a quick recap of these two patterns, because the names sound scarier than they are.

Event Sourcing: instead of storing the current state of a piece of data (balance = $150), you store all the events that led to that state (account created → deposit $200 → withdrawal $50). The current state is recomputed by replaying events in order.
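Replaying events is just a fold over the list, in order. A minimal sketch of the idea, using the same account-created / deposit / withdrawal history as above (the AccountEvent type is a hypothetical stand-in, with withdrawals stored as negative amounts):

```go
package main

import "fmt"

// AccountEvent is a simplified event: positive amounts are deposits,
// negative amounts are withdrawals.
type AccountEvent struct {
	Kind   string
	Amount float64
}

// replay folds the events in order to recover the current balance —
// the core mechanism of Event Sourcing.
func replay(events []AccountEvent) float64 {
	balance := 0.0
	for _, e := range events {
		balance += e.Amount
	}
	return balance
}

func main() {
	history := []AccountEvent{
		{Kind: "created", Amount: 0},
		{Kind: "deposit", Amount: 200},
		{Kind: "withdrawal", Amount: -50},
	}
	fmt.Println(replay(history)) // prints 150
}
```

Note that replay only gives the right answer if the events are applied in order — which is exactly why the write side needs serialization, as we'll see below.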

CQRS (Command Query Responsibility Segregation): you separate write operations (Commands, which modify state) from read operations (Queries, which read state). Two separate paths, two separate models.

These two patterns work very well together — and their structure is naturally asymmetric: writes require ordering, reads can be done in parallel. That's exactly where the concurrency/parallelism distinction becomes concrete.

Writes: one queue per account

Let's take a simple example: a banking system in Event Sourcing. An account has a balance. You can make a deposit or a withdrawal. The business rule: the balance cannot go below zero.

Problem: if two withdrawals arrive at the same time on the same account, both check the balance at the same time (say $100), both see it's fine, both debit $80 — and the balance ends up at -$60. That's a race condition.
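To make the race tangible, here is a sketch that simulates the bad interleaving by hand: both withdrawals check the balance before either one writes it back (withdrawRace is a hypothetical function that replays the faulty schedule deterministically — a real race would only hit it sometimes):

```go
package main

import "fmt"

// withdrawRace simulates the interleaving a race condition allows:
// both withdrawals pass the balance check before either debit lands.
func withdrawRace(balance, a, b float64) float64 {
	okA := balance-a >= 0 // A checks: 100-80 >= 0, fine
	okB := balance-b >= 0 // B checks the SAME stale balance: also fine
	if okA {
		balance -= a
	}
	if okB {
		balance -= b // second debit applies even though the funds are gone
	}
	return balance
}

func main() {
	fmt.Println(withdrawRace(100, 80, 80)) // prints -60
}
```

The bug isn't in either check taken alone — it's in the gap between check and write. Any fix has to make check-and-write a single atomic step.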

Go solution: one channel per account. All commands on the same account go through that channel and are processed one at a time. Meanwhile, other accounts process their commands in parallel — they have their own channel and their own goroutine.

package main

import (
    "fmt"
    "sync"
)

// Account manages its state and its command queue
type Account struct {
    id       string
    balance  float64
    commands chan Command
}

type Command struct {
    amount   float64
    response chan error
}

// NewAccount creates an account and starts its processing goroutine
func NewAccount(id string, initialBalance float64) *Account {
    a := &Account{
        id:       id,
        balance:  initialBalance,
        commands: make(chan Command, 10),
    }
    go a.process() // a single goroutine processes commands for this account
    return a
}

// process reads commands one by one — no race condition possible
func (a *Account) process() {
    for cmd := range a.commands {
        if a.balance+cmd.amount < 0 {
            cmd.response <- fmt.Errorf("insufficient balance (%.2f)", a.balance)
            continue
        }
        a.balance += cmd.amount
        cmd.response <- nil
    }
}

// Debit sends a command and waits for the response
func (a *Account) Debit(amount float64) error {
    resp := make(chan error, 1)
    a.commands <- Command{amount: -amount, response: resp}
    return <-resp
}

func main() {
    account := NewAccount("A001", 100)

    var wg sync.WaitGroup

    // 5 withdrawals of 30 arrive at the same time
    for i := range 5 { // integer range requires Go 1.22+
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            err := account.Debit(30)
            if err != nil {
                fmt.Printf("Withdrawal %d refused: %v\n", n, err)
            } else {
                fmt.Printf("Withdrawal %d OK\n", n)
            }
        }(i)
    }

    wg.Wait()
}

What happens: all 5 goroutines send their command into the channel, but the process() goroutine picks them up one by one. The first three succeed ($100 → $70 → $40 → $10), the last two are refused. No race condition, no complex mutex.

That's the beauty of the approach: concurrency between goroutines is total (each account has its own goroutine, 1000 accounts run in parallel), but serialization within an account is guaranteed by the channel.

Reads: everything in parallel

On the CQRS side, Queries read data. They modify nothing — no risk of corrupting state. So they can be launched in parallel without restriction.

In Event Sourcing, projections are views computed from events. For example: "the balance of all accounts" is a projection. "The list of the last 10 transactions" is another. These projections are built by replaying events — and each one can do so in its own goroutine, independently of the others.

package main

import (
    "fmt"
    "log/slog"
    "sync"
)

type Event struct {
    AccountID string
    Amount    float64
    Type      string
}

type Projection interface {
    Name() string
    Handle(e Event) error
}

// PublishEvent sends an event to all projections in parallel.
// If one projection fails, the others continue.
func PublishEvent(evt Event, projections []Projection) {
    var wg sync.WaitGroup

    for _, p := range projections {
        wg.Add(1)
        go func(proj Projection) {
            defer wg.Done()
            if err := proj.Handle(evt); err != nil {
                slog.Error("projection failed",
                    "projection", proj.Name(),
                    "error", err,
                )
                // Continue — a failing projection doesn't block the others
            }
        }(p)
    }

    wg.Wait() // wait for all projections to have processed the event
}

func main() {
    evt := Event{AccountID: "A001", Amount: 50, Type: "deposit"}
    fmt.Printf("Event published: %+v\n", evt)
    // PublishEvent(evt, []Projection{balancesProj, historyProj, statsProj})
}

In practice, if you have 3 projections (balances, history, stats) and each takes 20ms to process an event, the sequential version takes 60ms. The parallel version takes 20ms. At high event volumes, this matters.

The classic pitfall: two goroutines created for the same account

A subtle issue worth knowing: if you store your accounts in a map (map[string]*Account), two requests arriving simultaneously for an account that doesn't exist yet may create two different instances of the same account. Both goroutines work on different states — and you lose events.

Fix: protect access to the map with a mutex, just for the lookup-and-create step. Once you hold the *Account, commands flow through its channel — no mutex on the hot path.

type Bank struct {
    mu       sync.Mutex
    accounts map[string]*Account
}

func (b *Bank) GetOrCreate(id string) *Account {
    b.mu.Lock()
    defer b.mu.Unlock()

    if a, ok := b.accounts[id]; ok {
        return a // already exists, return the same one
    }

    // Create only once, protected by the mutex
    a := NewAccount(id, 0)
    b.accounts[id] = a
    return a
}
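A quick sanity check that concurrent callers really do get the same instance. This sketch stands alone, so it uses a stripped-down stubAccount in place of the real Account type:

```go
package main

import (
	"fmt"
	"sync"
)

// stubAccount is a hypothetical stand-in for the real Account type.
type stubAccount struct{ id string }

type Bank struct {
	mu       sync.Mutex
	accounts map[string]*stubAccount
}

// GetOrCreate locks only around the map lookup-and-create,
// so at most one instance per id ever exists.
func (b *Bank) GetOrCreate(id string) *stubAccount {
	b.mu.Lock()
	defer b.mu.Unlock()
	if a, ok := b.accounts[id]; ok {
		return a
	}
	a := &stubAccount{id: id}
	b.accounts[id] = a
	return a
}

func main() {
	bank := &Bank{accounts: make(map[string]*stubAccount)}

	var wg sync.WaitGroup
	results := make([]*stubAccount, 100)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results[n] = bank.GetOrCreate("A001") // all 100 race to create
		}(i)
	}
	wg.Wait()

	same := true
	for _, a := range results {
		if a != results[0] {
			same = false
		}
	}
	fmt.Println(same) // prints true: everyone got the same instance
}
```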

Summary

Key takeaways:

  • Concurrency = managing multiple tasks by alternating. Go: goroutines + channels.
  • Parallelism = executing multiple tasks at the same time. Go: multiple goroutines on multiple cores.
  • Event Sourcing on the write side: one channel per aggregate → guaranteed sequential processing, no race condition.
  • CQRS on the read side: multiple projections in parallel goroutines → reads without ordering constraints.
  • Channel rule: the producer closes, never the consumer.

These two patterns are often presented as complex because they come loaded with jargon. The reality: they are solutions to very concrete problems (how to write without corrupting state, how to read fast). Go provides the primitives to implement them cleanly — goroutines for concurrency, channels for coordination.
