I could have created a generic "bug fix" skill. A template that asks for the symptom, expected behavior, what's already been tried. Useful. Generic. Forgettable.
Instead, I looked at my git log. On this project, 8 out of 30 commits touch the same Node.js subsystem. Always the same patterns: API timeout, slug regex, corrupted JSON file. The skill I created doesn't ask for the symptom — it directly reads the 3 files that explain 80% of failures, in order. That's the difference between a generic template and a custom skill.
It's all already there: which tasks recur, which subsystems break, which sequences never change. You just have to read it.
The audit: what your git log really reveals
Start with this command:
```shell
git log --oneline -50 | awk '{print $2}' | sort | uniq -c | sort -rn | head -20
```
What you see: the real frequency of each task type over the last 50 commits. On this portfolio project:
```
8 fix(veille): Node.js watch system — frequent bugs, complex architecture
5 feat(blog): article creation — always the same file order
3 fix(blog): post-publication fixes — typos, PHP syntax, slugs
2 refactor(veille): refactors of the same subsystem
1 docs(publish): workflow updates
```
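Before pointing it at a real repo, this kind of counting pipeline can be sanity-checked on synthetic one-line log output — a minimal sketch (the hashes and subjects below are made up):

```shell
# Count conventional-commit "type(scope):" tokens from fake `git log --oneline`
# output: awk keeps the token after the hash, uniq -c counts, sort -rn ranks.
printf '%s\n' \
  'a1b2c3d fix(veille): handle API timeout' \
  'e4f5a6b fix(veille): relax slug regex' \
  'c7d8e9f feat(blog): add new article' |
  awk '{print $2}' | sort | uniq -c | sort -rn
# Prints (counts first): 2 fix(veille):  then  1 feat(blog):
```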
Three signals to look for:
- Frequency — what recurs often deserves a skill. A one-off task, no.
- Repeated scope — always the same files modified together → the skill knows where to look.
- Fixes in the same subsystem — multiple fix(X) commits → there are known failure points worth encoding.
Complete this with the most frequently touched files:
```shell
git log -30 --format= --name-only | grep -v '^$' | sort | uniq -c | sort -rn | head -15
```
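The "repeated scope" signal can also be made explicit: count which file pairs land in the same commit. A rough sketch, using a @@@ sentinel line per commit — in a real repo, replace the printf with `git log -30 --format=@@@ --name-only` (the file paths below are made up):

```shell
# Rough co-change sketch: which file pairs are committed together most often.
# "@@@" marks a commit boundary; the git output is simulated with printf here
# so the sketch is self-contained.
printf '%s\n' \
  '@@@' 'blog/posts.json' 'blog/posts/a.php' \
  '@@@' 'blog/posts.json' 'blog/posts/a.php' \
  '@@@' 'scripts/veille/runner.js' |
  awk '
    function emit(  i, j) {
      for (i = 1; i < n; i++)
        for (j = i + 1; j <= n; j++)
          print f[i] " + " f[j]
    }
    /^@@@$/ { emit(); n = 0; next }   # commit boundary: flush collected pairs
    NF      { f[++n] = $0 }           # collect file paths for this commit
    END     { emit() }
  ' | sort | uniq -c | sort -rn
# Prints: 2 blog/posts.json + blog/posts/a.php
```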
Git log isn't enough — conversation history matters too
The git log tells you what was done and how often. It doesn't tell you how it was asked for, or what caused friction in the interaction. That's often where the best skill content hides.
Conversation history reveals:
- Clarifications Claude had to ask every session → missing information to encode in the skill
- Corrections like "no not like that, like this" → constraints to make explicit
- Repeated reformulations → exact keywords for trigger conditions
- Things re-explained across sessions → what should be in the skill, not in your head
In practice, raw conversation text is rarely worth mining directly. But three sources capture the essence:
1. Memory files (feedback)
Every saved correction is a direct signal. On this project, memory/feedback_article_workflow.md contains:
```
Don't go through brainstorming for articles — too much ceremony.
Write directly in order: FR → EN → posts.json → OG → php -l → commit → deploy.
```
That's exactly the main constraint of the blog-article skill. It's not in the code — it's in the correction history. Without memory, this rule would need re-explaining every session.
2. Git log of CLAUDE.md itself
```shell
git log --oneline -- CLAUDE.md .claude/CLAUDE.md
```
Every commit that modifies a CLAUDE.md is a trace of friction that forced a rule update — usually the result of a session where something went wrong. Those additions are direct candidates for skill content.
3. Workflow commits (docs, chore)
On this project: docs(publish): add article creation workflow to CLAUDE.publish.md. That commit exists because a previous session revealed a gap. Whatever was added that day is exactly what a skill should encode.
Combining both sources
| Source | What it reveals | Useful for |
|---|---|---|
| Git log commits | Frequent tasks, files touched together, fragile subsystems | Identifying which skills to create |
| Memory/feedback | Past corrections, learned constraints, friction points | Skill content and constraints |
| Git log of CLAUDE.md | Rules added after friction | Non-obvious constraints to encode |
| Docs/chore commits | Documentation gaps revealed in session | Sequences and edge cases |
A skill generated only from git log knows what to do. A skill generated from git log and correction history knows what to do, and how not to get it wrong.
Turning a pattern into a skill
A Claude Code skill is a markdown file in ~/.claude/plugins/<name>/skills/<name>/SKILL.md. The minimal structure:
```markdown
---
name: skill-name
description: >
  [Trigger conditions — this is where everything happens]
---
[What Claude should do when the skill is triggered]
```
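To get that structure on disk, a minimal scaffold sketch — "my-plugin" and the skill body are placeholders; the directory layout follows the path mentioned above:

```shell
# Scaffold a minimal skill file (sketch: "my-plugin" and the body text are
# hypothetical placeholders; adjust the path to wherever your skills live).
SKILL_DIR="$HOME/.claude/plugins/my-plugin/skills/blog-article"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: blog-article
description: >
  Use when asked to create, write, draft or publish a blog article.
---
Follow the mandatory execution order for new articles.
EOF
```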
The description isn't documentation — it's a detection pattern. Claude reads it every message to decide if the skill applies. It should answer: in exactly which situations is this skill relevant?
Vague description → false positives and false negatives
```yaml
# ❌ Too vague
description: Use when there's a bug.

# ✅ Precise
description: >
  Use for any bug, error or unexpected behavior in the automated watch/veille system.
  Trigger on: "veille doesn't work", "job not running", "article not generated",
  any error in scripts/veille/ or logs/veille-daemon.log.
  Do NOT trigger for blog PHP bugs or deploy issues.
```
The last line — "Do NOT trigger for" — is as important as the positive conditions. It prevents collisions between similar skills.
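Because the exclusion line is easy to forget, it can be checked mechanically. A rough lint sketch (hypothetical — nothing in Claude Code requires this; it only flags an obviously missing "Do NOT trigger" line):

```shell
# Hypothetical lint: warn when a skill description declares positive triggers
# ("Trigger on:") but no exclusions ("Do NOT trigger"). A sample description
# is written to a temp file so the sketch is self-contained; point the greps
# at real SKILL.md files instead.
desc=$(mktemp)
cat > "$desc" <<'EOF'
description: >
  Use for any bug in the automated watch/veille system.
  Trigger on: "veille doesn't work", "job not running".
EOF
if grep -q 'Trigger on:' "$desc" && ! grep -q 'Do NOT trigger' "$desc"; then
  echo "WARN: positive triggers but no exclusions"
fi
rm -f "$desc"
# Prints: WARN: positive triggers but no exclusions
```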
Concrete example: 3 skills generated from this project
The git log identified 3 clearly distinct patterns. Here are the corresponding skills.
Skill 1 — article creation (feat(blog) × 5)
Five commits, always the same file order: PHP FR → PHP EN → posts.json → OG image → php -l → commit → deploy. Miss one step and the deploy breaks.
```markdown
---
name: blog-article
description: >
  Use when asked to create, write, draft or publish a blog article.
  Trigger on: "new article", "write about X", "publish on LinkedIn/dev.to",
  any mention of blog post creation or article workflow.
---
Mandatory execution order:

1. blog/posts/<slug>.php — complete FR version
2. blog/posts/<slug>.en.php — complete EN version
3. blog/posts.json — FR + EN entry, first position
4. npm run og <slug> — OG image
5. php -l on both files — syntax check
6. git commit + push
7. node scripts/publish-article.js <slug> — LinkedIn + dev.to + deploy

Never skip a step. Never commit without php -l.
```
Skill 2 — veille debug (fix(veille) × 8)
Eight fix commits on the same subsystem. Known failure points, worth encoding directly.
```markdown
---
name: veille-debug
description: >
  Use for any bug, error or unexpected behavior in the automated watch/veille system.
  Trigger on: veille errors, jobs not running, articles not generated, daemon issues,
  Claude API timeouts in watch context, slug/registry problems.
  Do NOT trigger for blog PHP bugs or LinkedIn/dev.to publishing issues.
---
Read in this order before any diagnosis:

1. scripts/veille/registry.json — configured jobs and their state
2. logs/veille-daemon.log — last execution (timestamp + errors)
3. scripts/veille/runner.js — general architecture

Known failure points (by frequency):

- Claude API timeout → increase timeout in the job config
- Slug regex too restrictive → test with node scripts/veille/test-slug.js
- Corrupt updates.json → delete the file, system recreates on next run
- Wrong cron working directory → check WorkingDirectory in systemd .service
- renderArticle() not writing → verify article.json exists with correct fields
```
Skill 3 — post-publication fix (fix(blog) × 3)
```markdown
---
name: blog-fix
description: >
  Use for small fixes on already-published blog articles: typos, grammar,
  PHP syntax errors, slug corrections, missing tags.
  Trigger after publication, not during creation.
  Do NOT trigger for new article creation.
---
Constraints for post-publication fixes:

- Never run scripts/deploy.sh (full deploy)
- Use bash scripts/deploy-files.sh <file1> <file2> (targeted deploy)
- php -l mandatory before any deploy
- If posts.json modified → include it in deploy-files.sh

Commit convention: fix(blog): <short description>
```
What makes a skill auto-trigger well
After a few weeks of use, the patterns that work:
Short descriptions with concrete keywords. "Trigger on: veille errors, jobs not running" works better than "Use when there are problems with the automated system." Keywords should match what you naturally type.
One skill per context. If two skills can trigger on the same situation, Claude picks — and not always the right one. Better one skill with broader conditions than an overlap between two similar skills.
Encode non-obvious constraints. "Never run deploy.sh for a fix" is the kind of rule you relearn every time if it's not in the skill. That's exactly what skills should encode: decisions already made that you don't want to reconsider each time.
Test with variants. A skill that triggers on "article" but not on "blog post" or "LinkedIn post" is miscalibrated. List the natural formulations you actually use in the description.
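A crude way to test variants offline is substring matching — with a loud caveat: real triggering is Claude reading the description semantically, not grepping, so this sketch only catches obviously missing keywords (the keyword list and phrasings below are illustrative):

```shell
# Crude calibration aid (hypothetical): check which natural phrasings share at
# least one keyword with the skill description. Only flags obvious misses;
# actual skill triggering is semantic, done by Claude.
keywords='article|blog post|publish|LinkedIn'
for phrase in 'write a new article' 'draft a blog post' 'fix the daemon'; do
  if printf '%s' "$phrase" | grep -Eq "$keywords"; then
    echo "MATCH: $phrase"
  else
    echo "MISS:  $phrase"
  fi
done
# Prints:
# MATCH: write a new article
# MATCH: draft a blog post
# MISS:  fix the daemon
```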
Conclusion
The git log is the best starting point because it's honest. It shows what you actually do, not what you think you do. Frequent tasks, fragile subsystems, invariant sequences — it's all there.
The correction history — memory files, CLAUDE.md evolution, workflow commits — fills in what the git log can't show: how things were asked, what caused friction, what constraints were learned the hard way.
A skill generated from git log alone knows what to do. A skill generated from git log and correction history knows what to do, and how not to get it wrong. The difference is 30 minutes reading two sources instead of one.