The 59 Seconds That Annoyed Me
My LinkedIn bot was replying to comments with a cron job that ran every minute. A comment arrives at 10:00, the cron fires at 10:01, the bot replies at 10:01. It works. But a comment that arrives at 10:01:01 has to wait until 10:02. One day, an important comment from a recruiter went unanswered for 59 seconds.
Not dramatic. But annoying. Annoying enough that I asked myself: was cron really the right tool here?
Spoiler: for this specific case, no. But for other tasks on the same machine, it remains unbeatable. Here's the story of how I ended up with both approaches side by side, and why that's exactly what you want.
Chapter 1: When Cron Was More Than Enough
It all started very simply. Two Node.js scripts to run at fixed times: publish an article to dev.to at 9am, post on LinkedIn at 10am. Standard stuff. The crontab fit in two lines:
0 9 * * * bash ~/work/scripts/devto-cron.sh >> logs/devto-cron.log 2>&1
0 10 * * * /usr/bin/node ~/work/scripts/linkedin-cron.js >> logs/linkedin-cron.log 2>&1
The shell script loads environment variables from a .env file
and calls the Node.js script. The latter reads devto-schedule.json,
takes the first article with drafted status, calls the dev.to API,
updates the file, and exits. Execution time: a few seconds.
If it fails, the log contains the error, the article stays in the queue for tomorrow.
And it worked. For months. Never needed to be manually restarted. Cron is brilliant for this: a task that's atomic, stateless, doesn't need to know what happened before it. A process starts, does its work, stops. Exactly what cron was designed for in 1975.
Chapter 2: When Things Got Complicated
The problem came with the automated monitoring system. Four active monitors (crypto, tech, Epstein, retro), each with its own frequency (6 hours to 7 days). The admin interface let you trigger a monitor manually from the browser. A "generate" job could be queued anytime to reconfigure a monitor via Claude.
I first tried cron. Naturally. And I started accumulating problems:
- 1-minute minimum granularity. The user clicks "Refresh" in the interface, waits, sees nothing happen for potentially 59 seconds. The UX was painful.
- No state between runs. To know when each monitor last ran, you had to read a JSON file on every startup. With four monitors sharing the same resources, coordination got fragile.
- No job queue. Two crons triggering simultaneously
and trying to write the same files? You need to handle the overlap with
flock or similar. It works, but it demands attention.
- No structured logging. Standard output redirected to a file quickly became unreadable when multiple monitors were writing in parallel.
None of these problems is insurmountable on its own. But stacked together, they made me realize I was trying to make cron do something it was no longer suited for.
Chapter 3: The Daemon Enters the Scene
The solution: a Node.js daemon running continuously. veille-daemon.js
polls jobs.json every 30 seconds for on-demand jobs,
and cron-config.json every 60 seconds to trigger scheduled monitors
whose interval has elapsed. It keeps the monitor registry and execution state in memory.
When the user clicks "Refresh" in the interface, the PHP writes a job to
jobs.json. The daemon detects it at the next poll (worst case 30 seconds),
executes it, and updates the status. The interface can display progress in real-time
via a /veille/status endpoint. The UX went from "I click and nothing happens"
to "I click and it starts within 30 seconds". Night and day.
systemd supervises the daemon with a minimal .service file:
[Unit]
Description=Veille daemon
After=network.target
[Service]
Type=simple
WorkingDirectory=/home/folken/work/cv
ExecStart=/usr/bin/node scripts/veille-daemon.js
Restart=on-failure
RestartSec=10
[Install]
WantedBy=default.target
What this brings concretely: automatic restart on crash,
logs in journald (journalctl --user -u veille-daemon -f),
automatic startup on boot with systemctl --user enable veille-daemon.
No more manual restart scripts, no separate log file to monitor.
But it required work. A few days to make the daemon robust: crash handling, TTL on blocked states, recovery after restarts. A cron is three lines. A reliable daemon is a real piece of code.
The Middle Ground I Almost Overlooked: systemd Timer
Between minimalist cron and a full daemon, there's a third option I underestimated for a long time:
the systemd timer. It's an improved cron — a periodic task, but supervised by systemd. On this project,
crypto-veille.timer is a legacy example:
# crypto-veille.service
[Unit]
Description=Crypto veille job
After=network.target
[Service]
Type=oneshot
WorkingDirectory=/home/folken/work/cv
ExecStart=/usr/bin/node scripts/crypto-veille.js
# crypto-veille.timer
[Unit]
Description=Crypto veille — every 6 hours
[Timer]
OnBootSec=5min
OnUnitActiveSec=6h
[Install]
WantedBy=timers.target
Compared to cron, the timer adds: native logging to journald,
the ability to declare dependencies (After=network.target),
a delay at boot with OnBootSec (cron can trigger too early
if the machine just rebooted), and more readable calendar expressions
than classic cron syntax.
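On that last point, compare the two notations for the same schedule, "weekdays at 9am":

```ini
# Classic cron: positional fields (minute hour day month weekday)
# 0 9 * * 1-5

# The same schedule in a systemd timer:
[Timer]
OnCalendar=Mon..Fri 09:00
```

You can also validate a calendar expression before deploying it with `systemd-analyze calendar 'Mon..Fri 09:00'`, which prints the next trigger time.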
The downside: two files to manage instead of one line. For a simple task that doesn't need structured logs and whose minimum frequency is 1 minute, cron remains more direct. The systemd timer is the middle ground — when cron isn't enough but a daemon would be over-engineering.
The Comparison Chart
After living with all three approaches on the same project, here's how they compare:
| | cron | systemd timer | daemon |
|---|---|---|---|
| Granularity | 1 min minimum | down to 1 second | arbitrary (polling interval) |
| Logging | manual file | native journald | native journald |
| State between runs | none | none | in memory |
| On-demand reaction | no | no | yes |
| Crash supervision | no | yes | yes |
| Config complexity | very low | medium (2 files) | high (code to write) |
| Ideal use case | atomic periodic task | periodic task + logs/deps | job queue, state, on-demand |
The Rule of Three Questions
After this experience, I've distilled three questions that are enough to decide:
1. Does the task need to react to an event in under one minute?
If yes: daemon. Cron can't do better than one minute, and while a systemd timer
can fire more often, it executes scheduled tasks, not reactive ones.
2. Does it need state between runs?
If yes: daemon. Reading and writing a file on each run works up to a point,
but once you have multiple workers or decisions based on recent history,
a continuous process with in-memory state is cleaner.
3. Do supervision and structured logs matter?
If yes but the first two answers are no: systemd timer.
It gives you the benefits of systemd (journald, restarts, dependencies) without
the complexity of writing an event loop.
If all three answers are no: cron. Three lines in crontab, a redirect to a log file, zero extra infrastructure. Don't over-engineer what doesn't need it.
Epilogue: The Two Coexist, and That's Exactly Right
Today, on the same machine, cron still publishes my articles at fixed times. It has never needed to be manually restarted. The monitoring daemon, meanwhile, took a few days of work to make robust — but it responds in 30 seconds where cron would have taken up to a minute.
It's not cron OR daemon. It's recognizing which one fits the problem at hand. Publishing at a fixed time doesn't need to react in 30 seconds. Interactive monitoring can't afford to wait a minute between checks.
And those 59 seconds of delay on LinkedIn? The bot now runs as a daemon. The recruiter doesn't have to wait anymore.