Synchronous, asynchronous, concurrent, parallel: the 4 concepts explained in Go

These four words show up everywhere in docs, articles, and interviews. And they get mixed up constantly — including by experienced developers. "Asynchronous" and "concurrent" don't mean the same thing. "Parallel" and "concurrent" don't either. And a program can be all four at once, or only two, and that changes what it actually does.

This article builds on the previous one about concurrency and parallelism, adding the synchronous/asynchronous dimension and showing how all four concepts fit together in Go.

Synchronous vs asynchronous: a question of waiting

These two words are not about "how many tasks are running at the same time". They're about what your code does while it's waiting for a response.

Synchronous: you send a request, you stop and wait for the response before doing anything else. Like calling someone on the phone and staying silent until they answer.

Asynchronous: you send a request, you keep doing other things, and you handle the response when it arrives. Like sending a text message — you put your phone down and get on with your life.

package main

import (
    "fmt"
    "time"
)

func callAPI() string {
    time.Sleep(200 * time.Millisecond) // simulates a network request
    return "result"
}

func synchronous() {
    // We wait. We do nothing else for 200ms.
    result := callAPI()
    fmt.Println("synchronous:", result)
}

func asynchronous() {
    ch := make(chan string, 1)

    // We launch the call, we don't wait
    go func() {
        ch <- callAPI()
    }()

    // We can do other things here while the call is in progress
    fmt.Println("asynchronous: doing other things while the call runs...")

    // We retrieve the result when we need it
    result := <-ch
    fmt.Println("asynchronous: result received:", result)
}

The synchronous version blocks for 200ms. The asynchronous version keeps working during that time. In an application making 1000 network calls per second, the difference is massive.

Concurrent vs parallel: a question of cores

These two words are not about how the code waits. They're about how many tasks are actually running at the same time.

Concurrent: multiple tasks make progress at the same time, but on a single core. The CPU juggles between them — it advances one a bit, switches to another, comes back. Like a single chef watching three pots at once: only doing one thing at a time, but everything moves forward.

Parallel: multiple tasks run truly at the same time, on multiple cores. Like three chefs, each at their own stove. Real simultaneous execution, not juggling.

Concurrency is a matter of structure (how the code is organized). Parallelism is a matter of hardware (how many cores are available). A concurrent program can run in parallel on a multi-core machine, or sequentially on a single-core — without changing a single line of code.

How the four concepts combine

What makes things confusing is that synchronous/asynchronous and concurrent/parallel are two independent axes. A program can be any combination of the two:

              | Synchronous                       | Asynchronous
Sequential    | Classic bash script               | Node.js (1 thread, callbacks)
Concurrent    | Goroutines blocking on channels   | Goroutines with non-blocking I/O
Parallel      | Threads waiting for their results | Go in production: goroutines on multiple cores

Node.js is a perfect example of asynchronous but sequential code: a single thread, but it never blocks — it delegates I/O and resumes when ready via the event loop. Go does it differently: it is concurrent and can be parallel, and lets the developer choose whether an operation is synchronous or asynchronous.

Go: synchronous on the surface, asynchronous under the hood

This is one of Go's strengths that is often misunderstood. When you write an HTTP request in Go, it looks like synchronous code:

// This looks like "we're waiting" — but Go doesn't block the OS thread
resp, err := http.Get("https://api.example.com/data")
if err != nil {
    return err
}
defer resp.Body.Close()

In reality, while this goroutine waits for the network response, the Go runtime parks it and uses the OS thread to advance other goroutines. You write code that reads like synchronous code but behaves asynchronously. It's the best of both worlds: no callback hell, no async/await everywhere, and no blocked thread either.

Compare with the same pattern in JavaScript:

// JavaScript: asynchrony must be written explicitly
const resp = await fetch('https://api.example.com/data')
const data = await resp.json()

In JS, the await is necessary to avoid blocking the single thread. In Go, you don't have to think about it — the goroutine "freezes" on its own during I/O and the OS thread stays available.

Concurrency within parallelism: both at the same time

A Go program in production is often concurrent AND parallel at the same time. Here's how the two coexist:

  • You have 8 cores on the machine → Go runs Go code on 8 OS threads (GOMAXPROCS=8; the runtime can create extra threads for blocking syscalls)
  • You have 10,000 goroutines → Go distributes them across the 8 threads
  • Each thread does concurrency: it alternates between multiple goroutines
  • The 8 threads together do parallelism: they run truly at the same time

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    fmt.Printf("Available cores: %d\n", runtime.NumCPU())
    fmt.Printf("Threads used (GOMAXPROCS): %d\n", runtime.GOMAXPROCS(0))

    var wg sync.WaitGroup
    results := make(chan int, 1000)

    // 1000 goroutines launched — concurrency + parallelism simultaneously
    // Go distributes them across available threads
    for i := range 1000 {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            results <- n * n // computed in parallel across multiple cores
        }(i)
    }

    go func() {
        wg.Wait()
        close(results)
    }()

    total := 0
    for r := range results {
        total += r
    }
    fmt.Println("Sum of squares:", total)
}

On an 8-core machine: the 1000 goroutines are distributed across 8 threads. Each thread alternates between ~125 goroutines (concurrency). The 8 threads run at the same time (parallelism). Result: 1000 calculations processed much faster than sequentially.

When to use what in Go?

In practice, here's how to choose:

Sequential synchronous — the default. Script, simple processing, business logic without I/O. Readable, predictable, nothing to manage.

// Sequential synchronous — simple and sufficient
func processOrder(cmd Command) error {
    if err := validate(cmd); err != nil {
        return err
    }
    if err := save(cmd); err != nil {
        return err
    }
    return notify(cmd)
}

Concurrent asynchronous — when you're making multiple independent I/O calls (multiple APIs, multiple DB queries). Launch them concurrently, collect the results.

// Concurrent asynchronous — 3 calls in flight at once instead of one after another
func fetchData(ctx context.Context, id string) (Result, error) {
    chUser := make(chan User, 1)
    chOrders := make(chan []Order, 1)
    chStats := make(chan Stats, 1)

    go func() { chUser <- getUser(ctx, id) }()
    go func() { chOrders <- getOrders(ctx, id) }()
    go func() { chStats <- getStats(ctx, id) }()

    // All 3 calls run at the same time
    // Total time = max(user time, orders time, stats time)
    // instead of sum(user time + orders time + stats time)
    return Result{
        User:   <-chUser,
        Orders: <-chOrders,
        Stats:  <-chStats,
    }, nil
}

Worker pool — when you have many similar tasks and want to control the load. Not 10,000 simultaneous goroutines, but N workers pulling from a queue.

// Worker pool: N goroutines process M tasks
func processInParallel(tasks []Task, numWorkers int) {
    queue := make(chan Task, len(tasks))
    for _, t := range tasks {
        queue <- t
    }
    close(queue)

    var wg sync.WaitGroup
    for range numWorkers {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for t := range queue {
                process(t) // each worker takes one task at a time
            }
        }()
    }
    wg.Wait()
}

The summary as a mental model

To never mix up the four concepts again:

  • Synchronous/Asynchronous = do I wait for the result before continuing? Yes → synchronous. No, I continue and retrieve it later → asynchronous.
  • Sequential/Concurrent = am I handling multiple tasks at the same time? No, one after the other → sequential. Yes, I alternate → concurrent.
  • Sequential/Parallel = are multiple tasks physically executing at the same time on multiple cores? No → sequential. Yes → parallel.

Go makes all of this relatively transparent: you launch a goroutine, Go decides whether it runs concurrently on one thread or in parallel on several. You write code that looks synchronous, and Go handles the asynchrony under the hood during I/O. That's why Go is pleasant to write for these kinds of problems — you think about the logic, not the thread plumbing.
