Booster Leboncoin: the Manifest V3 Chrome extension that bumps my ads and watches prospects for me

I have seven active ads on Leboncoin (France's Craigslist): IT support, web dev, WordPress hosting, retrogaming, e-waste pickup. All relevant for my area, all invisible after day three. On Leboncoin, without a paid sub, the only way to climb back to the top is to delete and republish each ad. By hand. One by one. Every week. With photo re-upload.

After three Sunday mornings of doing this over coffee, I decided I'd rather code an extension I might never ship than do the chore one more time.

The repo: ohugonnot/leboncoin-bumper — Manifest V3, zero deps, zero servers, MIT.

Why a Chrome extension and not a Node script

First option I tried: a Node script with Puppeteer that logs in with my account. I gave up after two evenings. Three concrete reasons:

  • DataDome. Leboncoin's anti-bot flags a headless browser within a handful of requests. A real browser, with my real session, my real cookies, my real user-agent — passes through.
  • Authentication. I log in via Google. Reproducing that flow in Puppeteer means JWT scraping that breaks on every rotation. Reusing the browser's session via chrome.scripting.executeScript is free.
  • Deployment. An unpacked extension starts when I open Chrome. No server to maintain. No cron. No docker. The day I switch machines, I clone the repo and toggle developer mode.

The verdict was obvious after two hours of prototyping: the extension wins on every axis for strictly personal use.

Manifest V3 — the service worker that dies every 30 seconds

MV3 has a trap nobody warns you about before you hit it: the service worker is killed when it goes idle. No long-running daemon. No setInterval that survives. To schedule anything, you need chrome.alarms, which wakes the worker at the right time.

chrome.runtime.onInstalled.addListener(() => {
  chrome.alarms.create('bump-weekly', {
    when: nextBumpSlot(),       // timestamp ms
    periodInMinutes: 60 * 24 * 7
  });
});

chrome.alarms.onAlarm.addListener(async (alarm) => {
  if (alarm.name === 'bump-weekly') {
    await runBumpCycle();        // opens a tab, scrapes, deletes, reposts
  }
});
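For reference, nextBumpSlot() is nothing but Date arithmetic. A minimal sketch, assuming the weekly slot is Sunday at 9 AM (the actual slot is whatever you configure):

```javascript
// Hypothetical sketch of nextBumpSlot(): next Sunday at 09:00 local time.
// The repo may pick a different slot; only the Date math matters here.
function nextBumpSlot(now = new Date()) {
  const next = new Date(now);
  next.setHours(9, 0, 0, 0);                         // today at 09:00
  const daysUntilSunday = (7 - next.getDay()) % 7;   // getDay(): 0 = Sunday
  next.setDate(next.getDate() + daysUntilSunday);
  if (next <= now) next.setDate(next.getDate() + 7); // this week's slot already passed
  return next.getTime();                             // ms timestamp, as chrome.alarms expects
}
```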

Practical consequence: no in-memory state survives between two wake-ups. Everything goes through chrome.storage.local (watch profiles, ads already seen, cycle history, reply template). It's IndexedDB for lazy people — you write JSON, you read JSON, that's it.

The lesson it took me three days to internalize: never assume the service worker is running. Every piece of synchronous logic must be resumable from storage. It's painful for the first few hours, and trivial after.
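To make that concrete, here's an illustrative shape (not the repo's actual schema) for a resumable cycle: the step list is data, and a freshly woken worker just asks storage what's left to do.

```javascript
// Illustrative only: a bump cycle persisted as data, so a freshly woken
// service worker can resume wherever the previous one was killed.
// Step names and state shape are hypothetical.
const CYCLE_STEPS = ['scrape', 'delete', 'refill', 'upload', 'submit'];

function nextStep(saved) {
  // saved = { done: [...] }, read back from chrome.storage.local on wake-up
  const done = saved?.done ?? [];
  return CYCLE_STEPS.find(step => !done.includes(step)) ?? null; // null = cycle finished
}
```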

The bumper — driving a real tab in the background

The cycle: fetch active ads → pick one → scrape it in detail (title, description, price, location, photos, contact preferences) → delete it → open the new-ad wizard → fill it → re-upload photos → submit. Seven steps per ad, across five different pages of the Leboncoin back-office.

It all rests on two MV3 primitives:

// 1. Open a tab and run arbitrary code in it
const tab = await chrome.tabs.create({ url: AD_LIST_URL, active: false });
const [{ result: ads }] = await chrome.scripting.executeScript({
  target: { tabId: tab.id },
  func: () => {
    return [...document.querySelectorAll('[data-test-id="ad-card"]')]
      .map(card => ({
        id: card.dataset.adId,
        title: card.querySelector('h3')?.textContent.trim() ?? '',
        status: card.dataset.status
      }));
  }
});

// 2. Navigate and re-inject at each step
await chrome.tabs.update(tab.id, { url: EDIT_URL(ad.id) });
await waitForLoad(tab.id);
await chrome.scripting.executeScript({ target: { tabId: tab.id }, func: fillFormStep1, args: [ad] });
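waitForLoad is not a Chrome API, it's a helper. One plausible sketch of it, built on chrome.tabs.onUpdated with a timeout guard (the repo's version may differ):

```javascript
// Hypothetical helper: resolve once the tab has finished loading.
// Waits for the 'complete' status on chrome.tabs.onUpdated, with a timeout
// so a stuck page can't hang the whole cycle.
function waitForLoad(tabId, timeoutMs = 30000) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      chrome.tabs.onUpdated.removeListener(listener);
      reject(new Error(`tab ${tabId} did not finish loading`));
    }, timeoutMs);
    function listener(updatedTabId, changeInfo) {
      if (updatedTabId === tabId && changeInfo.status === 'complete') {
        clearTimeout(timer);
        chrome.tabs.onUpdated.removeListener(listener);
        resolve();
      }
    }
    chrome.tabs.onUpdated.addListener(listener);
  });
}
```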

DOM selectors are the fragile zone. After every Leboncoin redesign, I rewire them. I made it a rule to centralize all selectors in a single selectors.js file, with a comment noting the last audit date. When someone opens an issue because a selector broke, I know exactly what to grep.
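The shape of that file is nothing clever, just a flat object of named selectors. The strings below are illustrative, not guaranteed current:

```javascript
// selectors.js — single source of truth for every DOM hook.
// Selector strings here are illustrative; the real ones track
// Leboncoin's current markup. The last-audit comment lives at the top.
const SEL = {
  adCard:     '[data-test-id="ad-card"]',
  adTitle:    'h3',
  deleteBtn:  '[data-test-id="delete-ad"]',  // hypothetical name
  photoInput: 'input[type="file"]',
  submitBtn:  'button[type="submit"]'
};
```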

The detail that cost me an hour: photo upload. Leboncoin expects a File object in the <input type="file">. Except you can't create a File from an image URL in pure JS — you have to fetch() the CDN URL, grab the Blob, wrap it in a File, then use DataTransfer to plant it in the input:

const blob = await fetch(photoUrl).then(r => r.blob());
const file = new File([blob], 'photo.jpg', { type: 'image/jpeg' });
const dt = new DataTransfer();
dt.items.add(file);
input.files = dt.files;
input.dispatchEvent(new Event('change', { bubbles: true }));

I found the DataTransfer trick in a 2019 Stack Overflow answer with 23 upvotes. Without it, the input stays empty and Leboncoin refuses the publish. The kind of thing no LLM suggests on first ask because the official docs don't cover it.

Prospect Watch — the private API everyone pretends not to see

The watch part is what made me actually write the extension. The bumper is useful but boring. Prospect watch is what turns the tool into an opportunity generator.

Leboncoin has a private JSON API behind its frontend: POST /finder/search. The same one the frontend calls when you type a search. Clean JSON, stable schema, pagination via limit/page. I call it by reusing the browser's session, from an active Leboncoin tab to avoid triggering DataDome:

async function searchAds(keywords, filters) {
  const body = {
    filters: {
      keywords: { text: keywords.join(' ') },
      category: { id: '8' },                    // Services
      location: { departments: filters.depts },
      ranges: { price: { min: filters.priceMin, max: filters.priceMax } }
    },
    limit: 100,
    listing_source: 'direct-search'
  };

  return chrome.scripting.executeScript({
    target: { tabId: leboncoinTab.id },
    func: async (b) => {
      const r = await fetch('/finder/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(b),
        credentials: 'include'
      });
      return r.json();
    },
    args: [body]
  }).then(([{ result }]) => result.ads);
}

Once the ads are pulled in, scoring. Three simple rules that work surprisingly well:

  1. +2 × weight if the keyword matches in the title
  2. +1 × weight if the keyword matches in the description
  3. +1 if we detect a demand intent: looking for, seeking, need, help, advice

Weights come from a readable syntax in the keywords themselves: wordpress:3 prestashop symfony:2. WordPress weighs 3, PrestaShop weighs 1 by default, Symfony weighs 2. Hovering the star in the popup shows the score breakdown — "wordpress (title, ×3): +6 | demand detected: +1". No black box. When a false positive slips through, I see why in a second.
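The word:weight parser is a few lines. A sketch consistent with that syntax, defaulting missing weights to 1 (the repo's version may differ in detail):

```javascript
// Sketch of the word:weight parser ("wordpress:3" -> weight 3, default 1).
function parseKeywords(raw) {
  return raw.map(entry => {
    const [word, weight] = entry.split(':');
    return { word: word.toLowerCase(), weight: Number(weight) || 1 };
  });
}
```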

Demand-vs-offer detection is regex over the first 200 chars. Imperfect but cheap. "Looking for WordPress dev" matches. "Offering WordPress services" doesn't. Across 800 ads scanned over three weeks, precision sits around 90%.
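Putting the three rules and the demand regex together, the scorer can be sketched like this. Keyword objects are assumed to be { word, weight } pairs; this is an illustration of the rules above, not the repo's exact code:

```javascript
// Sketch of the scorer: +2×weight for a title hit, +1×weight for a
// description hit, +1 for demand intent in the first 200 chars.
const DEMAND_RE = /\b(looking for|seeking|need|help|advice)\b/i;

function scoreAd(ad, keywords) {
  const title = ad.title.toLowerCase();
  const desc = (ad.description || '').toLowerCase();
  let total = 0;
  const breakdown = [];   // human-readable trace shown on hover in the popup
  for (const { word, weight } of keywords) {
    if (title.includes(word)) {
      total += 2 * weight;
      breakdown.push(`${word} (title, ×${weight}): +${2 * weight}`);
    } else if (desc.includes(word)) {
      total += weight;
      breakdown.push(`${word} (desc, ×${weight}): +${weight}`);
    }
  }
  if (DEMAND_RE.test((title + ' ' + desc).slice(0, 200))) {
    total += 1;
    breakdown.push('demand detected: +1');
  }
  return { total, breakdown };
}
```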

The anti-scam filter — nine patterns covering 95% of scams

Not planned originally. I added it after getting three messages in the same week with the exact same Western Union phrasing. The Leboncoin inbox is a scammer magnet, and the native filter is rudimentary.

I listed every scam I'd received over two years, pulled out nine regex patterns:

  • Money order / Western Union: "I'll pay by money order, give me your full name"
  • QR-code payment: "scan this QR to release the payment"
  • Fake carrier: "my driver will come tomorrow, plan for €35 shipping"
  • Off-platform contact: "contact me on WhatsApp +33 6..."
  • Foreign number: +44, +234, +1 in the body
  • Short external link: bit.ly, tinyurl, t.co
  • PayPal Friends & Family: "send via friends and family option"
  • SMS code requested: "I'll send a confirmation code, forward it to me"
  • Urgency + travel: "I'm traveling, urgent, my husband/wife will pick up"

Every incoming message gets classified: 🚨 Scam · 💬 Lead · ❓ Question · 🗑 Spam. The inbox shows real leads first and hides scams under a filter. Three weeks after enabling it, I haven't read a single Western Union message — even though two arrive every week.
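The classifier shape can be sketched with a subset of the patterns. The three regexes and the non-scam heuristics below are illustrative; the real list covers all nine patterns and the French phrasings too:

```javascript
// Illustrative subset of the scam patterns and a naive classifier shape.
const SCAM_PATTERNS = [
  /western union|money order/i,
  /scan (this|the) qr/i,
  /\+(44|234|1)\d/          // foreign number in the body
];

function classifyMessage(text) {
  if (SCAM_PATTERNS.some(re => re.test(text))) return 'scam';
  if (/\?\s*$/.test(text.trim())) return 'question';       // naive: ends with "?"
  if (/price|available|interested/i.test(text)) return 'lead';
  return 'spam';
}
```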

Native Node tests — zero deps, 120 ms, 35 tests

An extension that does DOM scraping looks untestable. And it's true for the scraping layer: there's no clean way to mock chrome.scripting.executeScript. But all the useful logic is plain JS, kept separate from chrome.* — and that part I test with the native Node test runner. No Jest, no Vitest, no Mocha.

// tests/scoring.test.js
import { test } from 'node:test';
import { strict as assert } from 'node:assert';
import { scoreAd, parseKeywords } from '../lib/scoring.js';

test('keyword in title beats keyword in description', () => {
  const ad = { title: 'looking for wordpress dev', description: '...' };
  const kw = parseKeywords(['wordpress']);
  assert.equal(scoreAd(ad, kw).total, 3); // 2 (title) + 1 (looking for)
});

test('weighted keyword multiplies score', () => {
  const kw = parseKeywords(['wordpress:3']);
  const ad = { title: 'wordpress pro', description: '' };
  assert.equal(scoreAd(ad, kw).total, 6); // 2 × 3
});

npm test runs in 120 ms. 35 tests covering keyword regexes (accented characters, C++, .NET, parens), v2 scoring, weight parsing, display sorting, post-filters, profile serialization.

No framework. No transpiler. No config. node --test tests/. That's the comfort I wanted from a side-project: if I come back to the repo six months from now, I shouldn't have to reinstall anything to run the tests. git pull && npm test, done.

Why it's not on the Chrome Web Store

Leboncoin's ToS explicitly forbid automation (article 8: "use of any robot, script, or device allowing automated access to the site"). Submitting a public extension that does exactly that means getting rejected in review, then reported by Leboncoin with the risk of permanent developer account closure.

So: manual install, public repo on GitHub, MIT, at your own risk. It's stated upfront in the README and it's a condition I'm sticking to — a dev who clones and installs knows what they're doing. A regular user downloading from the Web Store doesn't. The friction is part of the accountability.

Risk-wise for my account: three months of daily use, zero flags. I stay at human frequencies (one bump per day max, one scan per hour), I use a real tab, I don't cross suspicious thresholds. DataDome seems to watch the pattern more than the fact of automation — if you behave like a hurried human rather than a bot, you pass.
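"Human frequencies" in practice means jittered delays rather than fixed intervals. Something like this hypothetical helper between two actions:

```javascript
// Hypothetical jitter helper: never fire two actions at a fixed interval.
// Base delay plus up to 50% random jitter keeps the timing irregular.
function humanDelay(baseMs) {
  return baseMs + Math.floor(Math.random() * baseMs * 0.5);
}

const sleep = ms => new Promise(res => setTimeout(res, ms));
// e.g. between two form steps: await sleep(humanDelay(1500));
```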

Conclusion

What's surprising looking back at the repo after three months is what I didn't write. No framework. No TypeScript. No build step. No bundler. Vanilla JS, vanilla CSS, MV3, native Node tests. The whole thing fits in ~2,000 lines of code. When I need to add a feature, I re-read the relevant file in under a minute and code straight in.

The real productivity of a side-project is the absence of tooling. Every dependency I'd have added on day one would have become a maintenance chore the week after. Instead, the extension runs, I forget about it, it pings me when a WordPress prospect posts in my area on a Tuesday at 2 PM. ROI for a tool I built over a weekend: roughly one client a month on average. Largely paid off.
