Automating blog publishing to dev.to and LinkedIn: the complete code

The friction of manually cross-posting to every platform ends up killing consistency. Write the article, post it, copy-paste to dev.to, reformat, post to LinkedIn, find the image — at some point you stop, because it's a pain. Here's the complete system: 6 Node.js scripts, a single external dependency (turndown), one command to publish everything.

Prerequisites

Node.js 18+ (native ES modules, built-in fetch). A blog whose articles live in .en.php files containing a <div class="article-content"> wrapper. A posts.json with this structure per article:

{
  "slug": "my-article",
  "date": "2026-03-21",
  "fr": {
    "title": "FR title",
    "category": "Golang",
    "tags": ["tag1", "tag2"],
    "excerpt": "Short description."
  },
  "en": {
    "title": "EN title",
    "category": "Golang",
    "tags": ["tag1", "tag2"],
    "excerpt": "Short description."
  }
}

Install the only dependency:

npm install turndown

And in package.json, make sure you have "type": "module" for ES imports.
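For reference, a minimal package.json for these scripts could look like this (the version pin is illustrative):

```json
{
  "type": "module",
  "dependencies": {
    "turndown": "^7.1.0"
  }
}
```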

File structure

scripts/
├── devto-helpers.js        # Markdown extraction + utilities
├── devto-draft-all.js      # Draft all pending articles on dev.to
├── devto-publish-next.js   # Publish the next draft (with cadence)
├── devto-cron.sh           # Shell script for the cron
├── linkedin-auth.js        # LinkedIn OAuth flow (done once)
├── linkedin-publish.js     # Publish to LinkedIn
├── publish-article.js      # Unified script: everything in one command
├── devto-schedule.json     # dev.to state and queue
├── linkedin-schedule.json  # LinkedIn state and queue
├── .devto-env              # DEVTO_API_KEY (gitignored)
└── .linkedin-env           # LinkedIn credentials (gitignored)

Add to .gitignore:

scripts/.devto-env
scripts/.linkedin-env

Part 1 — Dev.to

Getting the API key

Go to dev.to/settings/extensions, section "DEV API Keys", generate a key.

Create scripts/.devto-env:

export DEVTO_API_KEY=your_key_here

devto-helpers.js — HTML → Markdown extraction

Article content is HTML — that's what the PHP blog generates. Dev.to ingests Markdown. The turndown library handles the conversion, but the default behavior produces two problems that need fixing with custom rules.

Rule 1 — fenced-code-blocks: by default, Turndown converts <pre><code> blocks to 4-space-indented blocks (classic Markdown convention). That works on dev.to, but the CSS class language-go on the <code> tag would be silently lost — no more syntax highlighting. The custom rule reads node.className, strips the language- prefix, and generates fenced code blocks tagged with the language: ```go.

Rule 2 — absolute-links: internal blog links are relative: /blog/my-article. On dev.to, these links break — they point to dev.to/blog/my-article. The rule intercepts all <a> tags, detects if the href starts with /, and prefixes it with SITE_URL.

The devtoTags function normalizes tags for dev.to: the platform accepts a maximum of 4 tags, lowercase, alphanumeric only. Tags with hyphens or special characters (php-fpm, vue.js) would be rejected as-is — the offending characters are stripped, yielding phpfpm and vuejs.

import { readFileSync } from 'fs';
import { resolve } from 'path';
import TurndownService from 'turndown';

const SITE_URL = 'https://your-site.com';

function makeTurndown() {
    const td = new TurndownService({ headingStyle: 'atx', codeBlockStyle: 'fenced', fence: '```' });

    td.addRule('fenced-code-blocks', {
        filter: node => node.nodeName === 'CODE' && node.parentNode.nodeName === 'PRE',
        replacement: (content, node) => {
            const lang = (node.className || '').replace('language-', '').trim();
            return `\n\`\`\`${lang}\n${node.textContent}\n\`\`\`\n`;
        },
    });

    td.addRule('absolute-links', {
        filter: 'a',
        replacement: (content, node) => {
            let href = node.getAttribute('href') || '';
            if (href.startsWith('/')) href = SITE_URL + href;
            const title = node.title ? ` "${node.title}"` : '';
            return `[${content}](${href}${title})`;
        },
    });

    return td;
}

export function extractMarkdown(root, slug) {
    const phpFile = resolve(root, `blog/posts/${slug}.en.php`);
    const phpContent = readFileSync(phpFile, 'utf8');
    const match = phpContent.match(/<div class="article-content">([\s\S]*?)<\/div>\s*\n*\s*<\/article>/);
    if (!match) throw new Error(`Could not extract content from ${slug}.en.php`);
    const td = makeTurndown();
    return td.turndown(match[1]).replace(/\n{3,}/g, '\n\n').trim();
}

export function devtoTags(meta) {
    return (meta.tags || [])
        .map(t => t.toLowerCase().replace(/[^a-z0-9]/g, ''))
        .filter(Boolean)
        .slice(0, 4);
}

devto-schedule.json — the queue file

The core idea of the system: separate the moment you draft an article from the moment you publish it. That lets you prepare 10 articles in advance — all as drafts on dev.to, invisible to readers — and release them gradually at a defined cadence.

The full lifecycle of an article is: pending → drafted (created as a draft via devto-draft-all.js) → published (published via devto-publish-next.js, called by the cron). This two-phase split is intentional: drafting is done manually from WSL (offline, with access to local files), publishing happens automatically via cron.

{
  "cadence_days": 4,
  "articles": [
    { "slug": "my-first-article", "status": "pending" }
  ]
}

Possible statuses: pending → drafted → published (or skipped).

devto-draft-all.js — create the drafts

This script publishes nothing. It takes all articles in pending status and creates a draft on dev.to for each — published: false. Publishing comes later, deliberately and separately. The goal: never accidentally publish an unfinished article, and keep full control over the calendar.

The canonical_url field in the payload is critical for SEO. It tells Google that the original source of the article is your blog, not dev.to. Without it, Google may treat dev.to as the primary source and consider your blog as duplicate content — which penalizes your ranking in favor of the platform.

#!/usr/bin/env node
import { readFileSync, writeFileSync } from 'fs';
import { resolve, dirname } from 'path';
import { fileURLToPath } from 'url';
import { extractMarkdown, devtoTags } from './devto-helpers.js';

const __dirname = dirname(fileURLToPath(import.meta.url));
const ROOT = resolve(__dirname, '..');
const SCHEDULE_FILE = resolve(__dirname, 'devto-schedule.json');
const SITE_URL = 'https://your-site.com'; // used for canonical_url; must match devto-helpers.js

const API_KEY = process.env.DEVTO_API_KEY;
if (!API_KEY) { console.error('Missing DEVTO_API_KEY'); process.exit(1); }

const schedule = JSON.parse(readFileSync(SCHEDULE_FILE, 'utf8'));
const posts = JSON.parse(readFileSync(resolve(ROOT, 'blog/posts.json'), 'utf8'));

const pending = schedule.articles.filter(a => a.status === 'pending');
console.log(`Found ${pending.length} pending articles to draft.\n`);

let ok = 0, fail = 0;

for (const item of pending) {
    const post = posts.find(p => p.slug === item.slug);
    if (!post?.en) {
        console.log(`  SKIP  ${item.slug} (no EN version)`);
        item.status = 'skipped'; fail++; continue;
    }

    let markdown;
    try { markdown = extractMarkdown(ROOT, item.slug); }
    catch (e) {
        console.log(`  SKIP  ${item.slug} (${e.message})`);
        item.status = 'skipped'; fail++; continue;
    }

    const payload = {
        article: {
            title: post.en.title,
            published: false,
            body_markdown: markdown,
            tags: devtoTags(post.en),
            canonical_url: `${SITE_URL}/en/blog/${item.slug}`,
            description: post.en.excerpt,
        },
    };

    const res = await fetch('https://dev.to/api/articles', {
        method: 'POST',
        headers: { 'api-key': API_KEY, 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
    });
    const result = await res.json();

    if (!res.ok) {
        console.log(`  FAIL  ${item.slug} — ${result.error || JSON.stringify(result)}`);
        fail++; continue;
    }

    item.status = 'drafted';
    item.drafted_at = new Date().toISOString();
    item.devto_id = result.id;
    item.devto_url = result.url;
    writeFileSync(SCHEDULE_FILE, JSON.stringify(schedule, null, 2));
    console.log(`  OK    ${item.slug}`);
    ok++;

    await new Promise(r => setTimeout(r, 800)); // avoid rate limiting
}

console.log(`\nDone: ${ok} drafted, ${fail} skipped/failed.`);

Usage: . scripts/.devto-env && node scripts/devto-draft-all.js

devto-publish-next.js — publish with cadence

This script publishes one article — at most one per call. The cadence logic is straightforward: find the date of the last publication in the schedule, compute the delta in days, compare against cadence_days. Too soon? Exit cleanly without doing anything. Otherwise, take the first article with status drafted and publish it.

The --force flag bypasses the cadence check. Useful for emergencies: an important article that can't wait, or a critical fix that needs to go out immediately.

Why a PUT on the existing id rather than a new POST? The article is already on dev.to as a draft — it has an id stored in the schedule. Just send published: true on that id. Re-POSTing would create a duplicate.

#!/usr/bin/env node
import { readFileSync, writeFileSync } from 'fs';
import { resolve, dirname } from 'path';
import { fileURLToPath } from 'url';

const __dirname = dirname(fileURLToPath(import.meta.url));
const SCHEDULE_FILE = resolve(__dirname, 'devto-schedule.json');

const force = process.argv.includes('--force');
const API_KEY = process.env.DEVTO_API_KEY;
if (!API_KEY) { console.error('Missing DEVTO_API_KEY'); process.exit(1); }

const schedule = JSON.parse(readFileSync(SCHEDULE_FILE, 'utf8'));

if (!force) {
    const lastPublished = schedule.articles
        .filter(a => a.published_at)
        .map(a => new Date(a.published_at))
        .sort((a, b) => b - a)[0];

    if (lastPublished) {
        const daysSince = (Date.now() - lastPublished) / (1000 * 60 * 60 * 24);
        if (daysSince < schedule.cadence_days) {
            const wait = Math.ceil(schedule.cadence_days - daysSince);
            const next = schedule.articles.find(a => a.status === 'drafted');
            console.log(`Next publish in ${wait} day(s).`);
            if (next) console.log(`   Queued: "${next.slug}"`);
            console.log('   Use --force to publish now.');
            process.exit(0);
        }
    }
}

const next = schedule.articles.find(a => a.status === 'drafted');
if (!next) { console.log('All articles published.'); process.exit(0); }

const res = await fetch(`https://dev.to/api/articles/${next.devto_id}`, {
    method: 'PUT',
    headers: { 'api-key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({ article: { published: true } }),
});

const result = await res.json();
if (!res.ok) { console.error('Dev.to API error:', JSON.stringify(result)); process.exit(1); }

next.status = 'published';
next.published_at = new Date().toISOString();
writeFileSync(SCHEDULE_FILE, JSON.stringify(schedule, null, 2));

const remaining = schedule.articles.filter(a => a.status === 'drafted').length;
console.log(`Published: ${next.devto_url}`);
console.log(`  ${remaining} drafts remaining.`);

The cron

Create scripts/devto-cron.sh:

#!/bin/bash
cd /home/user/work/my-blog
. scripts/.devto-env
/usr/bin/node scripts/devto-publish-next.js >> logs/devto-cron.log 2>&1

Add it to the crontab (crontab -e):

17 3,15 * * * /home/user/work/my-blog/scripts/devto-cron.sh

Why two runs per day (3am and 3pm)? If the machine is off at 3am, the 3pm run takes over. And running twice doesn't publish twice: the script checks the cadence itself. If the morning article was published, the 3pm run sees that the cadence hasn't elapsed and exits without doing anything.

Part 2 — LinkedIn

LinkedIn is an order of magnitude more complex than dev.to. No simple API key — OAuth 2.0 is mandatory.

LinkedIn app setup (done once)

  1. Go to linkedin.com/developers/apps and create an app
  2. Products tab: enable "Share on LinkedIn" (w_member_social) and "Sign In with LinkedIn using OpenID Connect" (openid, profile) — both are required
  3. Auth tab: add http://localhost:8989/callback to "Authorized redirect URLs"
  4. Copy the Client ID and Client Secret

Create scripts/.linkedin-env:

LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=
LINKEDIN_PERSON_ID=

linkedin-auth.js — getting the token

Why a local server? LinkedIn OAuth requires a registered redirect_uri — a URL that LinkedIn will call after authorization to return the code. In a pure command-line context, there's no URL to expose. The solution: spin up a minimal HTTP server on port 8989, which receives the callback, exchanges the code for a token, and shuts down. No external dependencies, no ngrok tunnel, no third-party service.

The two required scopes are w_member_social (create posts) and openid profile (retrieve the user's person ID). These scopes correspond to two distinct products in the LinkedIn Developer portal. If either one isn't enabled, LinkedIn will return unauthorized_scope_error even if the code is perfectly correct.

The LinkedIn token expires after 60 days. Set a calendar reminder or add a date check in the script — otherwise it's the cron that silently breaks one morning.

#!/usr/bin/env node
import { createServer } from 'http';
import { readFileSync, writeFileSync } from 'fs';
import { resolve, dirname } from 'path';
import { fileURLToPath } from 'url';

const __dirname = dirname(fileURLToPath(import.meta.url));
const ENV_FILE = resolve(__dirname, '.linkedin-env');

function readEnv() {
    const env = {};
    for (const line of readFileSync(ENV_FILE, 'utf8').split('\n')) {
        const [k, ...v] = line.split('=');
        if (k && v.length) env[k.trim()] = v.join('=').trim();
    }
    return env;
}

function writeEnv(updates) {
    let content = readFileSync(ENV_FILE, 'utf8');
    for (const [key, value] of Object.entries(updates)) {
        const regex = new RegExp(`^${key}=.*$`, 'm');
        content = regex.test(content)
            ? content.replace(regex, `${key}=${value}`)
            : content + `\n${key}=${value}`;
    }
    writeFileSync(ENV_FILE, content);
}

const env = readEnv();
const CLIENT_ID = env.LINKEDIN_CLIENT_ID;
const CLIENT_SECRET = env.LINKEDIN_CLIENT_SECRET;
const REDIRECT_URI = 'http://localhost:8989/callback';
const SCOPES = 'openid profile w_member_social';

const authUrl = `https://www.linkedin.com/oauth/v2/authorization?response_type=code&client_id=${CLIENT_ID}&redirect_uri=${encodeURIComponent(REDIRECT_URI)}&scope=${encodeURIComponent(SCOPES)}`;

console.log('\n→ Open this URL in your browser:\n');
console.log(authUrl + '\n');
// ⚠️ Do not open from WSL/cmd.exe: & is treated as a command separator,
//    the URL gets truncated and LinkedIn responds "missing client_id"

const server = createServer(async (req, res) => {
    const url = new URL(req.url, 'http://localhost:8989');
    if (url.pathname !== '/callback') { res.end('Not found'); return; }

    const code = url.searchParams.get('code');
    const error = url.searchParams.get('error');

    if (error || !code) {
        res.end(`<h1>Error: ${error || 'no code'}</h1>`);
        console.error('Auth error:', error);
        server.close();
        return;
    }

    const tokenRes = await fetch('https://www.linkedin.com/oauth/v2/accessToken', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
            grant_type: 'authorization_code',
            code, redirect_uri: REDIRECT_URI,
            client_id: CLIENT_ID, client_secret: CLIENT_SECRET,
        }),
    });

    const token = await tokenRes.json();
    if (!tokenRes.ok || !token.access_token) {
        res.end('<h1>Token exchange failed</h1><pre>' + JSON.stringify(token, null, 2) + '</pre>');
        server.close(); return;
    }

    // Retrieve person ID via OpenID Connect
    const profile = await (await fetch('https://api.linkedin.com/v2/userinfo', {
        headers: { Authorization: `Bearer ${token.access_token}` },
    })).json();

    writeEnv({ LINKEDIN_ACCESS_TOKEN: token.access_token, LINKEDIN_PERSON_ID: profile.sub || '' });

    console.log('Token saved to .linkedin-env');
    console.log(`  Person ID: ${profile.sub}`);
    console.log(`  Expires in: ${Math.round(token.expires_in / 86400)} days`);

    res.end('<h1>Done! You can close this tab.</h1>');
    server.close();
    process.exit(0);
});

server.listen(8989, () => console.log('Listening on http://localhost:8989/callback...\n'));

WSL trap: the & in the OAuth URL is interpreted as a command separator by cmd.exe. If you try to open the URL with start from WSL, it will get truncated at the first & and LinkedIn will respond "missing client_id". Always copy-paste the URL manually into the browser.

Usage:

node scripts/linkedin-auth.js
# Copy the URL shown → paste it in the browser → authorize → token saved

linkedin-publish.js — two posting modes

LinkedIn deprecated the old Share API in favor of the UGC Posts API (/v2/ugcPosts). That's what we use here. It accepts two publishing modes for an article, with different trade-offs.

image mode: manually upload the OG JPEG in three steps (registerUpload → PUT the file → create the post with the assetUrn). Result: a large full-width image in the LinkedIn feed. The downside: the image isn't clickable — the article URL lives in the post text.

article mode: just pass the URL and LinkedIn fetches the OG image itself via meta tags. Result: an entirely clickable card, image + title + description. The downside: LinkedIn can take several hours to fetch the image, or ignore it entirely if the site is slow to respond.

The image flow in detail: registerUpload returns a temporary uploadUrl and a permanent assetUrn. You PUT the JPEG to the uploadUrl (with Bearer token). Once uploaded, you create the post referencing the assetUrn — LinkedIn knows which image to display.

%20 trap: in image mode, if the post text ends with the URL followed by a newline (\n), LinkedIn encodes that newline as %20 and the URL becomes https://your-site.com/blog/slug%20. The link is broken. Fix: the URL must always be in the very last position, nothing after it.

#!/usr/bin/env node
// Usage: node scripts/linkedin-publish.js [--force] [--mode image|article]
import { readFileSync, writeFileSync } from 'fs';
import { resolve, dirname } from 'path';
import { fileURLToPath } from 'url';

const __dirname = dirname(fileURLToPath(import.meta.url));
const ROOT = resolve(__dirname, '..');
const SCHEDULE_FILE = resolve(__dirname, 'linkedin-schedule.json');

function readEnv() {
    const env = {};
    for (const line of readFileSync(resolve(__dirname, '.linkedin-env'), 'utf8').split('\n')) {
        const [k, ...v] = line.split('=');
        if (k && v.length) env[k.trim()] = v.join('=').trim();
    }
    return env;
}

const force = process.argv.includes('--force');
const modeIdx = process.argv.indexOf('--mode');
const mode = modeIdx !== -1 ? (process.argv[modeIdx + 1] || 'image') : 'image';

const env = readEnv();
const ACCESS_TOKEN = env.LINKEDIN_ACCESS_TOKEN;
const PERSON_ID = env.LINKEDIN_PERSON_ID;
if (!ACCESS_TOKEN || !PERSON_ID) {
    console.error('Missing token. Run linkedin-auth.js first.');
    process.exit(1);
}

const schedule = JSON.parse(readFileSync(SCHEDULE_FILE, 'utf8'));
const posts = JSON.parse(readFileSync(resolve(ROOT, 'blog/posts.json'), 'utf8'));

if (!force) {
    const lastPublished = schedule.articles
        .filter(a => a.published_at).map(a => new Date(a.published_at))
        .sort((a, b) => b - a)[0];
    if (lastPublished) {
        const daysSince = (Date.now() - lastPublished) / 86400000;
        if (daysSince < schedule.cadence_days) {
            const wait = Math.ceil(schedule.cadence_days - daysSince);
            console.log(`Next publish in ${wait} day(s). Use --force to bypass.`);
            process.exit(0);
        }
    }
}

const next = schedule.articles.find(a => a.status === 'pending');
if (!next) { console.log('No pending articles.'); process.exit(0); }

const post = posts.find(p => p.slug === next.slug);
if (!post?.en) { console.error(`No EN metadata for ${next.slug}`); process.exit(1); }

const meta = post.en;
const articleUrl = `https://your-site.com/en/blog/${next.slug}`;
const tags = (meta.tags || []).slice(0, 5).map(t => '#' + t.replace(/-/g, '')).join(' ');
const headers = { Authorization: `Bearer ${ACCESS_TOKEN}`, 'Content-Type': 'application/json' };

console.log(`→ LinkedIn publish [${mode}]: "${next.slug}"...`);

let shareContent;

if (mode === 'image') {
    // Step 1: register the upload
    const regData = await (await fetch('https://api.linkedin.com/v2/assets?action=registerUpload', {
        method: 'POST', headers,
        body: JSON.stringify({
            registerUploadRequest: {
                recipes: ['urn:li:digitalmediaRecipe:feedshare-image'],
                owner: `urn:li:person:${PERSON_ID}`,
                serviceRelationships: [{ relationshipType: 'OWNER', identifier: 'urn:li:userGeneratedContent' }],
            },
        }),
    })).json();

    const uploadUrl = regData.value.uploadMechanism['com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest'].uploadUrl;
    const assetUrn = regData.value.asset;

    // Step 2: upload the JPEG
    await fetch(uploadUrl, {
        method: 'PUT',
        headers: { Authorization: `Bearer ${ACCESS_TOKEN}`, 'Content-Type': 'image/jpeg' },
        body: readFileSync(resolve(ROOT, `assets/images/og/${next.slug}.jpg`)),
    });
    console.log('  Image uploaded.');

    // URL last, no \n after — avoids the %20 bug
    shareContent = {
        shareCommentary: { text: `${meta.excerpt}\n\n${tags}\n\n${articleUrl}` },
        shareMediaCategory: 'IMAGE',
        media: [{ status: 'READY', media: assetUrn, title: { text: meta.title } }],
    };
} else {
    // Article mode: LinkedIn fetches the OG image, entire card is clickable
    shareContent = {
        shareCommentary: { text: `${meta.excerpt}\n\n${tags}` },
        shareMediaCategory: 'ARTICLE',
        media: [{ status: 'READY', originalUrl: articleUrl }],
    };
}

// Step 3 (or step 1 in article mode): create the post
const postData = await (await fetch('https://api.linkedin.com/v2/ugcPosts', {
    method: 'POST', headers,
    body: JSON.stringify({
        author: `urn:li:person:${PERSON_ID}`,
        lifecycleState: 'PUBLISHED',
        specificContent: { 'com.linkedin.ugc.ShareContent': shareContent },
        visibility: { 'com.linkedin.ugc.MemberNetworkVisibility': 'PUBLIC' },
    }),
})).json();

if (!postData.id) { console.error('LinkedIn API error:', JSON.stringify(postData)); process.exit(1); }

const postUrl = `https://www.linkedin.com/feed/update/${postData.id}/`;
next.status = 'published';
next.published_at = new Date().toISOString();
next.linkedin_url = postUrl;
writeFileSync(SCHEDULE_FILE, JSON.stringify(schedule, null, 2));

console.log(`Published: ${postUrl}`);

linkedin-schedule.json has the same format as devto-schedule.json, except articles go straight from "pending" to "published" with no "drafted" state in between: LinkedIn exposes no draft concept through the API, so publication happens directly.

{
  "cadence_days": 3,
  "articles": [
    { "slug": "my-first-article", "status": "pending" }
  ]
}

The unified script

publish-article.js is the orchestrator. It runs five steps in order — checks (step 0), OG image, dev.to draft, LinkedIn publish, deploy — with error handling between each. One command to do everything.

spawnSync rather than execSync: spawnSync passes arguments directly to the process without going through an intermediate shell. No interpolation, no injection risk with slugs that might contain weird characters. Small detail, but the kind that bites when you ignore it.

The readEnv function supports two env file formats: .devto-env uses export KEY=VALUE (shell-sourceable format), .linkedin-env uses KEY=VALUE. The function strips the leading export prefix to normalize both.

#!/usr/bin/env node
// Usage: node scripts/publish-article.js <slug> [--mode image|article]
import { readFileSync, writeFileSync, existsSync } from 'fs';
import { resolve, dirname } from 'path';
import { fileURLToPath } from 'url';
import { spawnSync } from 'child_process';
import { extractMarkdown, devtoTags } from './devto-helpers.js';

const __dirname = dirname(fileURLToPath(import.meta.url));
const ROOT = resolve(__dirname, '..');

// spawnSync avoids shell injection (no interpolation in sh -c)
function run(cmd, args, opts = {}) {
    const r = spawnSync(cmd, args, { stdio: 'inherit', cwd: ROOT, ...opts });
    if (r.status !== 0) process.exit(r.status ?? 1);
}

function readEnv(file) {
    const env = {};
    for (let line of readFileSync(resolve(__dirname, file), 'utf8').split('\n')) {
        line = line.replace(/^export\s+/, '');
        const [k, ...v] = line.split('=');
        if (k?.trim() && v.length) env[k.trim()] = v.join('=').trim();
    }
    return env;
}

const slug = process.argv[2];
if (!slug) { console.error('Usage: node publish-article.js <slug>'); process.exit(1); }

// 0. Checks
console.log(`\n[0/4] Checking "${slug}"...`);
if (!existsSync(resolve(ROOT, `blog/posts/${slug}.php`))) { console.error('Missing FR file'); process.exit(1); }
if (!existsSync(resolve(ROOT, `blog/posts/${slug}.en.php`))) { console.error('Missing EN file'); process.exit(1); }

const posts = JSON.parse(readFileSync(resolve(ROOT, 'blog/posts.json'), 'utf8'));
const post = posts.find(p => p.slug === slug);
if (!post?.fr || !post?.en) { console.error(`Missing posts.json entry for "${slug}"`); process.exit(1); }
console.log('  OK');

// 1. OG image
console.log('\n[1/4] OG image...');
run('npm', ['run', 'og', slug]);
const ogImagePath = resolve(ROOT, `assets/images/og/${slug}.jpg`);

// 2. Dev.to draft
console.log('\n[2/4] Draft dev.to (EN)...');
const { DEVTO_API_KEY } = readEnv('.devto-env');
const devtoScheduleFile = resolve(__dirname, 'devto-schedule.json');
const devtoSchedule = JSON.parse(readFileSync(devtoScheduleFile, 'utf8'));
const alreadyDrafted = devtoSchedule.articles.find(a => a.slug === slug);
let devtoUrl = alreadyDrafted?.devto_url || null;

if (alreadyDrafted) {
    console.log(`  Already in devto-schedule (${alreadyDrafted.status}), skip.`);
} else {
    const res = await fetch('https://dev.to/api/articles', {
        method: 'POST',
        headers: { 'api-key': DEVTO_API_KEY, 'Content-Type': 'application/json' },
        body: JSON.stringify({
            article: {
                title: post.en.title, published: false,
                body_markdown: extractMarkdown(ROOT, slug),
                tags: devtoTags(post.en),
                canonical_url: `https://your-site.com/en/blog/${slug}`,
                description: post.en.excerpt,
            },
        }),
    });
    const result = await res.json();
    if (res.ok) {
        devtoUrl = result.url;
        devtoSchedule.articles.push({ slug, status: 'drafted', drafted_at: new Date().toISOString(), devto_id: result.id, devto_url: devtoUrl });
        writeFileSync(devtoScheduleFile, JSON.stringify(devtoSchedule, null, 2));
        console.log(`  Drafted: ${devtoUrl}`);
    } else {
        console.error('  Dev.to error:', result.error);
    }
}

// 3. LinkedIn
console.log('\n[3/4] LinkedIn...');
const liEnv = readEnv('.linkedin-env');
const { LINKEDIN_ACCESS_TOKEN: TOKEN, LINKEDIN_PERSON_ID: PERSON_ID } = liEnv;
const liHeaders = { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'application/json' };
const liMode = (() => { const i = process.argv.indexOf('--mode'); return i !== -1 ? process.argv[i + 1] : 'image'; })();
const enMeta = post.en;
const articleUrl = `https://your-site.com/en/blog/${slug}`;
const hashTags = (enMeta.tags || []).slice(0, 4).map(t => '#' + t.replace(/-/g, '')).join(' ');

let liShareContent;
if (liMode === 'image') {
    const regData = await (await fetch('https://api.linkedin.com/v2/assets?action=registerUpload', {
        method: 'POST', headers: liHeaders,
        body: JSON.stringify({ registerUploadRequest: { recipes: ['urn:li:digitalmediaRecipe:feedshare-image'], owner: `urn:li:person:${PERSON_ID}`, serviceRelationships: [{ relationshipType: 'OWNER', identifier: 'urn:li:userGeneratedContent' }] } }),
    })).json();
    const uploadUrl = regData.value.uploadMechanism['com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest'].uploadUrl;
    const assetUrn = regData.value.asset;
    await fetch(uploadUrl, { method: 'PUT', headers: { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'image/jpeg' }, body: readFileSync(ogImagePath) });
    console.log('  Image uploaded.');
    liShareContent = { shareCommentary: { text: `${enMeta.excerpt}\n\n${hashTags}\n\n${articleUrl}` }, shareMediaCategory: 'IMAGE', media: [{ status: 'READY', media: assetUrn, title: { text: enMeta.title } }] };
} else {
    liShareContent = { shareCommentary: { text: `${enMeta.excerpt}\n\n${hashTags}` }, shareMediaCategory: 'ARTICLE', media: [{ status: 'READY', originalUrl: articleUrl }] };
}

const postData = await (await fetch('https://api.linkedin.com/v2/ugcPosts', {
    method: 'POST', headers: liHeaders,
    body: JSON.stringify({ author: `urn:li:person:${PERSON_ID}`, lifecycleState: 'PUBLISHED', specificContent: { 'com.linkedin.ugc.ShareContent': liShareContent }, visibility: { 'com.linkedin.ugc.MemberNetworkVisibility': 'PUBLIC' } }),
})).json();

if (!postData.id) { console.error('  LinkedIn API error:', JSON.stringify(postData)); process.exit(1); }
const liPostUrl = `https://www.linkedin.com/feed/update/${postData.id}/`;
console.log(`  Published: ${liPostUrl}`);

const liScheduleFile = resolve(__dirname, 'linkedin-schedule.json');
const liSchedule = JSON.parse(readFileSync(liScheduleFile, 'utf8'));
const ex = liSchedule.articles.find(a => a.slug === slug);
if (ex) { ex.status = 'published'; ex.published_at = new Date().toISOString(); ex.linkedin_url = liPostUrl; }
else liSchedule.articles.push({ slug, status: 'published', published_at: new Date().toISOString(), linkedin_url: liPostUrl });
writeFileSync(liScheduleFile, JSON.stringify(liSchedule, null, 2));

// 4. Deploy
console.log('\n[4/4] Deploy...');
run('bash', ['scripts/deploy.sh']);

console.log(`\n"${slug}" published everywhere!`);
if (devtoUrl) console.log(`  Dev.to   : ${devtoUrl}`);
console.log(`  LinkedIn : ${liPostUrl}`);

Adapting to your blog

Replace your-site.com everywhere it appears in the scripts (devto-helpers.js, linkedin-publish.js, publish-article.js). The script assumes EN articles live in blog/posts/${slug}.en.php with a <div class="article-content"> wrapper. Adapt the regex in extractMarkdown if your structure differs.

The OG image step assumes an npm run og <slug> script — adapt or remove that step if you don't have one.

Conclusion

Dev.to takes 2 minutes. LinkedIn takes 15, of which 10 are debugging OAuth. The linkedin-auth.js script needs to be re-run every 60 days — set a calendar reminder.

Once in place, node scripts/publish-article.js my-article does everything: OG image, dev.to draft, LinkedIn post, deploy. The cron handles the rest.

The real gain isn't the time saved on publication — it's eliminating the friction that ends up meaning you stop publishing altogether. When the command is one line, you run it. When it's 20 minutes of copy-paste, you put it off until tomorrow.
