PHP async, event loop and Fibers: concurrency on a single thread

A colleague asks: "Can PHP do what Node.js does — jump between tasks on a single thread?" Short answer: yes. Long answer: it's more subtle than that, and the distinction between concurrency and parallelism changes everything.

PHP has a reputation for being sequential and blocking. That's true in the classic Apache + PHP-FPM model. But it's far from inevitable — understanding why it blocks is the first step toward knowing when and how to stop letting it.

The blocking model of PHP-FPM

In PHP-FPM, each HTTP request is handled by an isolated worker. That worker is single-threaded: operations execute one after another, in order. Nothing can happen concurrently within the same process.

Concretely, if your handler makes three HTTP calls to external APIs, they chain sequentially:

t=0ms   → call API users    (200ms waiting on network)
t=200ms → call API orders   (200ms waiting on network)
t=400ms → call API products (200ms waiting on network)
t=600ms → response sent back to client

600 ms. For nearly all of those 600 ms, the thread does nothing: it's waiting on network responses. The CPU is idle, but the thread is blocked on I/O. That's where async becomes interesting: that waiting time can be used to make progress on other tasks.
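A runnable simulation of that sequence, with usleep() standing in for the 200 ms network wait (fetch() and the endpoint names are illustrative placeholders):

```php
<?php
// Classic blocking model: each call parks the thread until it completes,
// so the waits add up instead of overlapping.
function fetch(string $endpoint): string
{
    usleep(200_000); // stand-in for ~200 ms of network latency
    return "response from $endpoint";
}

$start = microtime(true);

fetch('users');
fetch('orders');
fetch('products');

$elapsedMs = (microtime(true) - $start) * 1000;
echo "Elapsed: " . round($elapsedMs) . " ms\n"; // ~600 ms, never ~200
```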

PHP CLI follows the same model: a single thread, sequential execution. The difference from PHP-FPM is that a script can run for a long time, which is exactly what an event loop needs: a persistent process to host it.

The event loop: cooperative concurrency

The event loop principle is straightforward: instead of blocking the thread waiting for an I/O response, you register a callback that will be invoked when the response arrives, then move on to the next task. The thread never idles on a single operation: the loop watches all pending events at once (typically via stream_select() or an equivalent poller) and dispatches whichever becomes ready.
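Before reaching for a library, the mechanics can be sketched by hand. Here is a minimal loop that only handles timers, with short illustrative delays; a real loop would also watch I/O streams and block in stream_select() instead of spinning:

```php
<?php
// Each entry is [deadline, callback]. The loop repeatedly checks which
// deadlines have passed and fires those callbacks, all in a single thread.
$fired  = [];
$timers = [
    [microtime(true) + 0.01, function () use (&$fired) { $fired[] = 'A'; echo "Task A\n"; }],
    [microtime(true) + 0.02, function () use (&$fired) { $fired[] = 'B'; echo "Task B\n"; }],
];

while ($timers !== []) {
    $now = microtime(true);
    foreach ($timers as $i => [$due, $callback]) {
        if ($now >= $due) {
            unset($timers[$i]);
            $callback(); // fire the timer whose deadline has passed
        }
    }
    usleep(1_000); // a real loop would block in stream_select() here
}
```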

ReactPHP is the library that implements this model in PHP. The simplest example to understand the mechanics:

$loop = React\EventLoop\Loop::get(); // Factory::create() is deprecated since react/event-loop v1.2

$loop->addTimer(1, function () { echo "Task A\n"; });
$loop->addTimer(2, function () { echo "Task B\n"; });

$loop->run();

A single thread. Both timers run in the same loop. While waiting for the first second, nothing blocks the thread — the event loop can handle other events in the meantime. This is cooperative concurrency: tasks voluntarily share the thread by yielding control when they wait.

The concrete case that justifies the investment: three HTTP calls launched concurrently instead of sequentially. With React\Http\Browser:

$browser = new React\Http\Browser(); // uses the global event loop (react/http >= 1.5)

$promises = [
    $browser->get('https://api.example.com/users'),
    $browser->get('https://api.example.com/orders'),
    $browser->get('https://api.example.com/products'),
];

React\Promise\all($promises)->then(function (array $responses) {
    foreach ($responses as $response) {
        echo $response->getBody() . "\n";
    }
});

React\EventLoop\Loop::run();

All three requests are fired simultaneously. The event loop waits for responses and processes each callback as a response arrives. Result: ~200 ms instead of 600 ms. The gain comes entirely from overlapping network wait time — no CPU parallelism, no additional threads.

PHP 8.1 Fibers: native cooperative multitasking

Fibers, introduced in PHP 8.1, are a low-level mechanism for cooperative multitasking. A Fiber can suspend its execution and hand control back to the caller, which can decide to resume the Fiber later with a value.

$fiber = new Fiber(function (): void {
    $value = Fiber::suspend('first suspension');
    echo "Resumed with: " . $value . "\n";
});

$value = $fiber->start();          // starts the Fiber, receives the suspended value
echo "Suspended with: " . $value . "\n";
$fiber->resume('hello');            // resumes the Fiber with a value

Which outputs:

Suspended with: first suspension
Resumed with: hello

A Fiber is essentially a pausable execution context. It doesn't run in parallel with the main code — it explicitly yields control via Fiber::suspend(). That's the heart of cooperative multitasking: each Fiber is responsible for knowing when to yield.
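A minimal sketch of what such a scheduler looks like: two Fibers interleaved round-robin, each yielding with Fiber::suspend() after every step. Task names are illustrative, and a real scheduler would wake Fibers on I/O events rather than resuming them blindly:

```php
<?php
// Round-robin scheduler: resume every live Fiber in turn until all finish.
$order = [];
$makeTask = function (string $name) use (&$order): Fiber {
    return new Fiber(function () use ($name, &$order): void {
        foreach ([1, 2] as $step) {
            $order[] = "$name$step";
            echo "$name$step\n";
            Fiber::suspend(); // yield control back to the scheduler
        }
    });
};

$tasks = [$makeTask('A'), $makeTask('B')];

while ($tasks !== []) {
    foreach ($tasks as $i => $fiber) {
        $fiber->isStarted() ? $fiber->resume() : $fiber->start();
        if ($fiber->isTerminated()) {
            unset($tasks[$i]); // drop finished Fibers from the run queue
        }
    }
}
// The two tasks interleave: A1, B1, A2, B2
```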

Fibers alone aren't enough to build a usable async system. You need a scheduler that manages the list of active Fibers, resumes those that can make progress, and integrates an event loop for I/O operations. That's what Amp v3 provides. With Amp, writing async code looks like synchronous code:

use Amp\Http\Client\HttpClientBuilder;
use Amp\Http\Client\Request;
use function Amp\async;

$client = HttpClientBuilder::buildDefault();

// These three calls are launched concurrently,
// but the code reads as if it were sequential
$responses = Amp\Future\await([
    async(fn() => $client->request(new Request('https://api.example.com/users'))),
    async(fn() => $client->request(new Request('https://api.example.com/orders'))),
    async(fn() => $client->request(new Request('https://api.example.com/products'))),
]);

Readability is Amp's main argument over ReactPHP: no callback chains, no manual promise management. The scheduler handles Fiber suspension and resumption behind the scenes.

What async doesn't solve

Async only solves the problem of time wasted waiting on I/O. If the task is CPU-bound (cryptographic computation, image processing, heavy parsing), there's nothing to gain: the thread is already running at 100%, and there is no wait to overlap. Suspending a Fiber mid-computation only reorders the work; it doesn't shrink it.

The concrete distinction: an HTTP call that takes 200 ms is 199 ms of waiting during which the CPU is free. A bcrypt hash that takes 200 ms is 200 ms of CPU at full load. The event loop helps in the first case, not the second.

The database case is more nuanced. PDO, PHP's standard database layer, is entirely blocking: a SQL query holds the thread until the response arrives. With ReactPHP or Amp, using PDO inside a Fiber cancels the benefit: the query blocks the whole thread, exactly like classic synchronous code. You need natively async drivers: amphp/mysql for MySQL, amphp/postgres for PostgreSQL. Adoption of these drivers is still limited, which makes the Amp + PostgreSQL stack less natural than its Node.js equivalent.
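For illustration, a non-blocking query with amphp/mysql v3 might look like the sketch below; the connection parameters, table, and column names are placeholders:

```php
<?php
// Sketch assuming amphp/mysql v3 is installed via Composer.
use Amp\Mysql\MysqlConfig;
use Amp\Mysql\MysqlConnectionPool;

$pool = new MysqlConnectionPool(
    MysqlConfig::fromString('host=localhost user=app password=secret db=shop')
);

// Inside a Fiber, this suspends instead of blocking: other Fibers keep
// running while the server executes the query.
$result = $pool->query('SELECT id, name FROM users LIMIT 10');

foreach ($result as $row) {
    echo $row['name'] . "\n";
}
```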

Another limit not to underestimate: synchronous extensions and libraries. If a third-party library internally calls a blocking network function (curl_exec() without the curl_multi API, some SDKs), the entire loop freezes during that call. The event loop can't do anything about blocking code it doesn't control.

True parallelism: when the event loop isn't enough

When the bottleneck is pure computation rather than I/O waiting, or when you genuinely need multiple CPU threads, PHP has other options — less elegant, but functional.

pcntl_fork — available in PHP CLI, creates a child process that inherits the parent context. Simple for parallelising independent tasks, but watch shared resource management (DB connections, files):

$pid = pcntl_fork();

if ($pid === -1) {
    // Fork failed
    exit(1);
} elseif ($pid === 0) {
    // Child process
    processHeavyTask();
    exit(0);
} else {
    // Parent process continues
    doOtherWork();
    pcntl_waitpid($pid, $status); // wait for the child to finish
}

ext-parallel — a PECL extension that exposes real PHP threads with controlled memory sharing. Cleaner than fork for CPU-bound tasks, but the extension isn't available on all installations and its API surface is limited: closures sent to a thread must be self-contained, with no captured objects or resources.
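A sketch of the model, assuming the parallel extension is installed (the summation is an arbitrary CPU-bound placeholder):

```php
<?php
// The closure runs on a separate thread; it must be self-contained,
// since objects and resources don't cross the thread boundary.
use function parallel\run;

$future = run(function (int $n): int {
    $sum = 0;
    for ($i = 1; $i <= $n; $i++) { // CPU-bound work, in parallel with the caller
        $sum += $i;
    }
    return $sum;
}, [100_000]);

// The main thread stays free to do other work here...

echo $future->value() . "\n"; // blocks only if the result isn't ready yet
```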

External workers — the most pragmatic solution in production: delegate heavy work to a queue (Redis, RabbitMQ, Beanstalkd) consumed by separate PHP-FPM workers. Symfony Messenger handles this cleanly. The web application stays responsive, heavy tasks execute outside the request/response cycle.
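As an illustration with Symfony Messenger (the class names are hypothetical, and $bus stands for an injected Symfony\Component\Messenger\MessageBusInterface):

```php
<?php
use Symfony\Component\Messenger\Attribute\AsMessageHandler;

// A plain DTO describing the work; it gets serialised onto the queue.
final class ResizeImage
{
    public function __construct(public readonly string $path) {}
}

// The handler runs in a separate worker process (bin/console messenger:consume),
// outside the HTTP request/response cycle.
#[AsMessageHandler]
final class ResizeImageHandler
{
    public function __invoke(ResizeImage $message): void
    {
        // heavy work here
    }
}

// In the controller: dispatch() returns as soon as the message is queued.
$bus->dispatch(new ResizeImage('/uploads/photo.jpg'));
```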

Horizontal scaling of PHP-FPM itself is also a valid answer: multiple workers handle multiple requests in parallel — not in the same thread, but in separate processes managed by the FPM pool. That's the default production model, and for most web applications, it's enough.

Conclusion

PHP can do cooperative concurrency. ReactPHP and Amp have been proving it in production for years. Fibers in PHP 8.1 finally gave a clean native primitive for building schedulers, and Amp v3 makes excellent use of them.

But the answer to "should you go async in PHP?" depends entirely on the problem. If the bottleneck is waiting on network responses in a long-running CLI script — yes, ReactPHP or Amp make sense. If it's a classic web API with a few SQL queries — PHP-FPM with multiple workers, maybe a Redis cache, solves the problem without adding the complexity of an event loop.

What's certain: the argument "PHP can't do async" is wrong. The argument "PHP with an event loop fits all Node.js use cases" is equally wrong. As usual, the nuance is in the load model, not the language.
