I had 47 RSS feeds. Yes, 47. Plus three daily newsletters, GitHub Trending bookmarked, and a half-open Twitter/X tab "just to follow a few devs". The result: two hours per week scrolling, a vague sense of being up to date, and discovering in production a breaking change in a Go lib that I should have caught six weeks earlier.
I changed my approach. Not because AI is magic — it isn't — but because the problem wasn't the volume of information. It was that I was reading without synthesizing, consuming without retaining. I restructured my tech watch around that observation, and AI became a tool in the process, not the core of it.
The problem with traditional tech watch
GitHub Trending is useful for discovering interesting projects on a Friday evening. It's a terrible source for tracking an ecosystem's evolution. What surfaces there is whatever got a lot of stars that week — usually a well-marketed side project, a README generator with a slick demo, or an "awesome-something" resource list. Meaningful changes in the language itself, silent deprecations, API migrations: none of that shows up.
Newsletters are the same story. Golang Weekly sends 15 links per week. I'd read three in full, skim the titles of the rest, and feel like I'd done my homework. In reality, I was just collecting open tabs. The signal-to-noise ratio is poor because these aggregators optimize for engagement, not for relevance to your specific stack and current projects.
The real problem: we consume information passively, never asking ourselves "what does this actually change for me, concretely, this week?". Nobody asks that question for you — unless you build a system that does.
The method in practice
I didn't delete everything. I sorted out what deserved to stay as a primary source, and defined what would go through an AI filter.
What I kept, unfiltered:
- The official Go release notes — read in full at every minor release
- The PostgreSQL changelog — especially the sections on indexes, query plans, and JSONB functions
- The Symfony blog — manually filtered, once a month
- Hacker News Weekly — the weekly digest, not the real-time feed
What goes through AI:
- Summarizing long changelogs (Go release notes sometimes run 3,000 words)
- Detecting potential breaking changes against my specific stack
- Targeted questions after reading an intriguing headline I don't have time to dig into
The frequency is weekly, not daily. Daily is noise disguised as discipline. One structured session per week, with a concrete output (a markdown digest file), makes all the difference.
Prompts I actually use:
# After a Go release
"Here are the Go 1.25 release notes. My main work involves goroutines,
channels, sync/atomic, and the net/http package for REST APIs. Which changes
directly affect me? What deprecations should I plan for?
Give me a prioritized list, not a general summary."
# After a PostgreSQL update
"Here is the PostgreSQL 17.x changelog. My use cases: heavy JSONB queries,
partial indexes on tables with > 10M rows, pg_partman for partitioning.
What changes for me? Are there any behaviors that could alter existing
query plans?"
# For a one-off question
"In Go 1.24, what changed in garbage collector behavior compared to 1.22?
I'm specifically looking for the impact on high-allocation-frequency
applications (Kafka message processing)."
The difference from a Google search: I'm asking a question contextualized to my stack, not a generic one. The AI maps the raw changelog to my actual needs. That's where the value is.
The automated workflow
After a few weeks of doing this manually, I wrote a bash script that runs as a cron
job every Sunday morning. It fetches sources, builds a prompt, calls the Claude API,
and generates a digest file in ~/veille/.
#!/bin/bash
# weekly-digest.sh — Automated tech watch
# Requires: ANTHROPIC_API_KEY as an environment variable
# Usage: chmod +x weekly-digest.sh && ./weekly-digest.sh
set -euo pipefail
DIGEST_DIR="$HOME/veille"
DATE=$(date +%Y-%m-%d)
OUTPUT="$DIGEST_DIR/digest-$DATE.md"
mkdir -p "$DIGEST_DIR"
echo "=== Tech watch digest — $DATE ===" > "$OUTPUT"
echo "" >> "$OUTPUT"
# --- Fetch Go release notes (official releases page) ---
echo "[*] Fetching Go release notes..."
GO_RELEASES=$(curl -fs "https://go.dev/doc/devel/release" \
  | grep -oP '(?<=<h2 id=")[^"]+' \
  | head -5 || true)  # || true: under pipefail, an empty grep match must not abort the script
# Fetch the text content of the latest release note
LATEST_GO=$(echo "$GO_RELEASES" | head -1)
# -f makes curl fail on HTTP errors (404, etc.), so the fallback chain actually triggers
GO_CONTENT=$(curl -fs "https://go.dev/doc/go${LATEST_GO#go}" 2>/dev/null \
  || curl -fs "https://tip.golang.org/doc/go${LATEST_GO#go}" 2>/dev/null \
  || echo "Content unavailable via curl — see https://go.dev/doc/devel/release")
# --- Fetch PostgreSQL changelog ---
echo "[*] Fetching PostgreSQL changelog..."
PG_CONTENT=$(curl -fs "https://www.postgresql.org/docs/release/" \
  | grep -A 2 'class="title"' \
  | grep -oP '(?<=>)[^<]+' \
  | head -20 \
  | tr '\n' ' ' || true)  # || true: same pipefail guard as above
# --- Build the prompt ---
PROMPT="You are a tech watch assistant for a senior Go/PostgreSQL/PHP developer.
Here is the information retrieved this week:
## Go — Recent releases
$GO_RELEASES

## Go — Latest release notes (truncated excerpt)
$(echo "$GO_CONTENT" | head -c 6000)
## PostgreSQL — Recent versions
$PG_CONTENT
My main stack:
- Go: REST APIs, goroutines/channels, sync/atomic, net/http, kafka-go
- PostgreSQL: JSONB, partial indexes, partitioning, pg_partman
- PHP/Symfony: secondary APIs, batch processing
Generate a structured weekly digest with:
1. Go changes that directly affect me (if a recent release exists)
2. PostgreSQL changes to watch
3. Watch points / potential breaking changes
4. What I can safely ignore this week
Format: Markdown. Be concise and prioritized. If you don't have enough info on a specific version, say so clearly rather than making things up."
# --- Anthropic API call ---
echo "[*] Calling Claude API..."
RESPONSE=$(curl -s https://api.anthropic.com/v1/messages \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "content-type: application/json" \
-d "{
\"model\": \"claude-opus-4-6\",
\"max_tokens\": 1024,
\"messages\": [
{
\"role\": \"user\",
\"content\": $(echo "$PROMPT" | python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))')
}
]
}")
# Extract the response text
DIGEST_CONTENT=$(echo "$RESPONSE" | python3 -c "
import json, sys
data = json.load(sys.stdin)
if 'content' in data and len(data['content']) > 0:
    print(data['content'][0]['text'])
else:
    print('API error:', data.get('error', {}).get('message', 'Unexpected response'))
    print('Raw:', json.dumps(data, indent=2))
")
# --- Write the digest ---
echo "$DIGEST_CONTENT" >> "$OUTPUT"
echo "" >> "$OUTPUT"
echo "---" >> "$OUTPUT"
echo "*Generated on $DATE — verify primary sources before taking action.*" >> "$OUTPUT"
echo "[OK] Digest written to $OUTPUT"
To automate it every Sunday at 8am:
# crontab -e
0 8 * * 0 ANTHROPIC_API_KEY=your_key_here /path/to/weekly-digest.sh >> /var/log/veille.log 2>&1
Note: I put the API key directly in the crontab line rather than in a .env file, to avoid sourcing a file in a cron context. On a shared server, where the crontab may be readable by others, use a credentials file with restricted permissions (chmod 600) that the script reads itself.
The digest opens automatically in my editor on Sunday morning via another cron line. Ten minutes of focused reading instead of two hours of scrolling.
What it actually changes
Two real examples since I started using this workflow:
sync/atomic in Go: the digest flagged the changes in the sync/atomic package early enough. The typed atomic values introduced in Go 1.19 (atomic.Int64, plus the generic atomic.Pointer[T]) supersede the low-level functions I was using (atomic.AddInt64) in a metrics counter. I was able to plan the migration calmly, before it became invisible technical debt.
PostgreSQL JSONB: the query planner changes around JSONB in PG16/17 have implications for indexes. Without the digest, I'd probably have discovered the changed behavior reading a confusing EXPLAIN ANALYZE on a Tuesday night at 11pm. With the digest, I read the release note and adjusted my indexes ahead of time.
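The kind of check I run after such an upgrade can be sketched as follows: wrap the query in EXPLAIN (FORMAT JSON) and flag when the planner stops using an index. The helper below only parses plan output — the function name and the sample plan are illustrative, not from a real schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// planUsesIndex walks PostgreSQL EXPLAIN (FORMAT JSON) output and reports
// whether any node in the plan tree is an index-based scan.
func planUsesIndex(explainJSON string) (bool, error) {
	var plans []struct {
		Plan json.RawMessage `json:"Plan"`
	}
	if err := json.Unmarshal([]byte(explainJSON), &plans); err != nil {
		return false, err
	}
	var walk func(raw json.RawMessage) bool
	walk = func(raw json.RawMessage) bool {
		var node struct {
			NodeType string            `json:"Node Type"`
			Plans    []json.RawMessage `json:"Plans"`
		}
		if err := json.Unmarshal(raw, &node); err != nil {
			return false
		}
		if strings.Contains(node.NodeType, "Index") {
			return true
		}
		for _, child := range node.Plans {
			if walk(child) {
				return true
			}
		}
		return false
	}
	for _, p := range plans {
		if walk(p.Plan) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Abbreviated sample output of something like:
	//   EXPLAIN (FORMAT JSON) SELECT ... WHERE payload @> '{"status": "active"}';
	sample := `[{"Plan": {"Node Type": "Bitmap Heap Scan",
	    "Plans": [{"Node Type": "Bitmap Index Scan"}]}}]`
	ok, err := planUsesIndex(sample)
	fmt.Println(ok, err) // true <nil>
}
```

Running this against the same critical queries before and after an upgrade turns "the planner might behave differently" into a concrete diff, instead of a surprise in production.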
The main gain isn't detection: it's the mindset. I no longer read changelogs in anxious mode — "did I miss something important?" I have a process. That changes your relationship with information.
The real limitations
It would be dishonest not to address these. AI in this workflow has genuine blind spots.
Hallucinations on version numbers. This is the number one problem. Claude — like all LLMs — can invent a version number, conflate two releases, or assert that a feature has been available since Go 1.21 when it actually landed in 1.23. The rule: the digest is a starting point, never a source of truth. Every point flagged as important must be verified in the official release note before acting on it. I added an explicit instruction in the prompt ("if you don't have enough info on a specific version, say so clearly") — it helps, but doesn't eliminate the risk.
Confirmation bias. When I ask "what affects my Go stack", the AI tends to tell me what it thinks I want to hear. If I work heavily with channels, it will over-index on concurrency changes. It won't spontaneously tell me that the real novelty in a release is an improvement to the profiling tooling — if I haven't mentioned that I care about it. The prompt needs regular calibration to avoid building an echo chamber.
What it doesn't replace. A GopherCon conference is three days of hearing people deal with problems at a scale I haven't hit yet, and understanding how they solved them. No digest replaces that. Nor does it replace a senior colleague saying "watch out, I tried that approach in 2023 and it bites". And reading a post-mortem in full, with the timeline, the decisions made under pressure, the mistakes owned up to: that's a form of knowledge transfer that doesn't survive a summary.
Cost and dependency. The Anthropic API costs money. Not much for a weekly digest (a few cents per call with claude-opus-4-6), but it's an external service with its own availability and pricing uncertainties. And if the API changes, the script breaks.
The risk of no longer really reading. This is the most insidious one. Delegating synthesis means risking delegating understanding as well. If I never read a release note in full again, I'll lose the eye for detail, the ability to spot what matters on my own. The digest must stay an intake filter, not a substitute for reading.
What it replaces, and what it doesn't
This workflow replaces reading 3,000-word raw changelogs when I'm looking for two specific pieces of information. It also replaces manually synthesizing ten articles that all say the same thing in different words. It freed me from the obligation to read everything for fear of missing something.
It doesn't replace curiosity. If I don't ask "what changed in Go's memory management over the last two versions", no script will ask it for me. AI executes a process — it doesn't generate one.
It doesn't replace experimentation. Truly understanding a change in the Go scheduler means writing a benchmark, observing the behavior — not reading a summary.
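To be concrete: checking an allocation or GC claim from a release note takes a handful of lines, runnable without even setting up `go test`. The workload here is a made-up stand-in for my Kafka message processing:

```go
package main

import (
	"fmt"
	"runtime"
	"testing"
)

// processMessage simulates a high-allocation-frequency handler:
// each message decodes into a freshly allocated slice.
func processMessage(payload []byte) []byte {
	out := make([]byte, len(payload))
	copy(out, payload)
	return out
}

func main() {
	payload := make([]byte, 1024)
	// testing.Benchmark runs a benchmark function outside the test framework,
	// which makes it handy for quick "does this release note claim hold?" checks.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			processMessage(payload)
		}
	})
	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	fmt.Printf("%s, %d allocs/op, %d GC cycles so far\n",
		res, res.AllocsPerOp(), stats.NumGC)
}
```

Run it once on each Go version you care about and compare the numbers yourself; that ten-minute experiment teaches more than any summary of the runtime changes.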
It doesn't replace conversations between developers. The most useful tech watch I did this year was a thirty-minute conversation with another dev who had migrated a large PostgreSQL database to native partitioning. No LLM could have given me the same nuance on what went wrong and why.
The real meta-observation: tech watch has always been a question of method. The best developers I know never had 47 RSS feeds — they had three carefully chosen sources they actually read. AI hasn't changed that fundamental principle. It just made it more visible by forcing me to formalize what I wanted to learn, for whom, and at what frequency. That might be the most lasting contribution.