Securing a dedicated Debian 12 server — Complete post-incident guide

An ordinary Tuesday morning. I glance at Apache logs before starting work — a conditioned reflex, rarely useful. Except that morning. In error.log, hundreds of identical lines, all from the same IP: 13.37.248.113. HTTP Digest authentication attempts in a loop, combining common usernames with generic passwords.

The server held. HTTP Digest auth is a solid barrier against brute force, provided the password is strong. But the incident still triggered a full audit I'd been putting off for too long.

This server hosts a private seedbox shared with about twenty users, a PHP website behind Apache with HTTP Digest authentication, SFTP access via ProFTPD, a wiki in Docker behind an Apache reverse proxy, and Jellyfin for streaming. A classic setup for a semi-professional personal server. Here is everything that was reviewed, fixed, and automated.

1. Detecting and responding to a brute force attack

Reading Apache logs to identify attempts

Apache's HTTP Digest authentication logs its failures in error.log, not access.log. The codes to look for are AH01790 and AH01794.

# Count failed attempts by IP over the last 24h
grep "AH0179" /var/log/apache2/error.log | grep -oP '\[client \K[0-9.]+' | sort | uniq -c | sort -rn | head -20

# See full lines for a specific IP
grep "13.37.248.113" /var/log/apache2/error.log | tail -30

Log lines look like this:

[Tue Feb 18 07:23:41.412893 2026] [auth_digest:error] [pid 1234] [client 13.37.248.113:58432] AH01790: user admin: password mismatch: /protected/
[Tue Feb 18 07:23:41.718204 2026] [auth_digest:error] [pid 1234] [client 13.37.248.113:58433] AH01794: user root in realm "Private Area" not found: /protected/
[Tue Feb 18 07:23:42.091337 2026] [auth_digest:error] [pid 1234] [client 13.37.248.113:58434] AH01790: user administrator: password mismatch: /protected/

Three distinct patterns in these logs: password mismatch (known user, wrong password), not found (non-existent user), and sometimes nonce mismatch (replay of expired challenge). The attacker was clearly testing a list of generic login/password pairs.
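The counting pipeline can be sanity-checked against a small captured sample before running it on a large log. The lines below are illustrative (the third IP was changed so there is something to count):

```shell
# Illustrative error.log excerpt — not real traffic
cat > /tmp/sample_error.log <<'EOF'
[Tue Feb 18 07:23:41.412893 2026] [auth_digest:error] [pid 1234] [client 13.37.248.113:58432] AH01790: user admin: password mismatch: /protected/
[Tue Feb 18 07:23:41.718204 2026] [auth_digest:error] [pid 1234] [client 13.37.248.113:58433] AH01794: user root in realm "Private Area" not found: /protected/
[Tue Feb 18 07:23:42.091337 2026] [auth_digest:error] [pid 1234] [client 203.0.113.7:41002] AH01790: user administrator: password mismatch: /protected/
EOF

# Same pipeline as against the real log: failures counted per client IP
grep "AH0179" /tmp/sample_error.log \
  | grep -oP '\[client \K[0-9.]+' \
  | sort | uniq -c | sort -rn
```

The `\K` in the PCRE pattern drops the `[client ` prefix from the match, and `[0-9.]+` stops at the `:port` suffix, so only the IP survives.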

Identifying the attacker

# Quick whois
whois 13.37.248.113

# Geolocation without installing anything
curl -s https://ipinfo.io/13.37.248.113/json

Result: IP belonging to Amazon AWS eu-west-3 (Paris). Classic. Cheap AWS VPS instances are used en masse for this kind of operation because they're easy to create, difficult to trace back to a real person, and often poorly monitored. The IP has since been reported on AbuseIPDB with around sixty reports.
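The JSON answer can be mined with plain grep, no jq required. A sketch against a captured response; the field values below are illustrative, not a real lookup:

```shell
# Hypothetical ipinfo.io response saved for illustration
cat > /tmp/ipinfo.json <<'EOF'
{
  "ip": "13.37.248.113",
  "city": "Paris",
  "country": "FR",
  "org": "AS16509 Amazon.com, Inc."
}
EOF

# Extract the interesting fields with \K lookbehind-style matching
org=$(grep -oP '"org": *"\K[^"]+' /tmp/ipinfo.json)
country=$(grep -oP '"country": *"\K[^"]+' /tmp/ipinfo.json)
echo "$country / $org"
```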

Verifying no intrusion occurred

Before doing anything else, verify that nothing actually got in. The brute force may have found something before it was noticed.

# Successful SSH connections (look for unexpected logins)
grep "Accepted" /var/log/auth.log | tail -50

# Failed SSH attempts from the same IP
grep "13.37.248.113" /var/log/auth.log

# SFTP activity (ProFTPD log)
grep "13.37.248.113" /var/log/proftpd/proftpd.log 2>/dev/null

# Check timestamps on sensitive files
stat /etc/passwd /etc/shadow /etc/sudoers
ls -la /root/.ssh/
ls -la /home/

# Look for files modified in /etc in the last 24h
touch -d "24 hours ago" /tmp/ref_file   # create the reference file first
find /etc -newer /tmp/ref_file -ls 2>/dev/null

In this specific case, nothing. The attack was limited to HTTP Digest, without touching SSH. But that doesn't change the need to close the weak points that could have been exploited.

2. fail2ban — Protection against brute force

fail2ban analyzes logs and bans IPs that exceed an attempt threshold. The default config is a good starting point, but it has significant gaps for this specific setup.

apt install fail2ban
systemctl enable fail2ban

All customizations go in /etc/fail2ban/jail.local (never modify jail.conf directly — it will be overwritten during updates).

SSH jail

# /etc/fail2ban/jail.local
[DEFAULT]
# ban for 24h, with a 10-minute detection window
bantime  = 86400
findtime = 600
maxretry = 5
banaction = iptables-multiport

[sshd]
enabled  = true
port     = ssh
logpath  = %(sshd_log)s
backend  = %(sshd_backend)s
maxretry = 5

apache-auth jail — the default filter trap

fail2ban includes a default apache-auth filter. It does not detect HTTP Digest authentication errors from Apache 2.4. The default filter looks for patterns like Authorization Required or Basic Auth errors — not the AH01790 / AH01794 from the auth_digest module.

A custom filter is required:

# /etc/fail2ban/filter.d/apache-auth.local
[Definition]
failregex = \[client <HOST>:.*\] AH01790: user .+: password mismatch
            \[client <HOST>:.*\] AH01794: user .+ in realm .+ not found
            \[client <HOST>:.*\] AH01788: .+nonce from .+ received on .+ - not found

Then the corresponding jail in jail.local:

[apache-auth]
enabled  = true
filter   = apache-auth
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 8
bantime  = 86400
findtime = 300

To test the filter before enabling it in production:

# Test the filter against a real log excerpt
fail2ban-regex /var/log/apache2/error.log /etc/fail2ban/filter.d/apache-auth.local --print-all-matched

ProFTPD jail

[proftpd]
enabled  = true
port     = ftp,ftp-data,ftps,ftps-data
logpath  = /var/log/proftpd/proftpd.log
maxretry = 6
bantime  = 86400

Checking jail status

# Overview
fail2ban-client status

# Details of a specific jail
fail2ban-client status apache-auth
fail2ban-client status sshd

# Manually unban an IP (if you ban yourself)
fail2ban-client set sshd unbanip 1.2.3.4

# fail2ban logs
tail -f /var/log/fail2ban.log

3. SSH/SFTP — Restricting access

About twenty users have SFTP access to upload and retrieve files. None of them need a full SSH shell. That's unnecessary attack surface.

sftponly group and ChrootDirectory

# Create the group
groupadd sftponly

# Add existing users
usermod -aG sftponly alice
usermod -aG sftponly bob
# etc.

In /etc/ssh/sshd_config, add this block at the end (it must come after any existing Match directive):

Match Group sftponly
    ForceCommand internal-sftp -l INFO -f AUTH
    ChrootDirectory %h
    AllowTcpForwarding no
    X11Forwarding no
    PermitTunnel no
    AllowAgentForwarding no

The chroot trap. The ChrootDirectory directive imposes a severe and counter-intuitive constraint: the chroot root directory (and every path component above it) must be owned by root and not writable by group or others; in practice, root:root with 755 permissions. If it's the user's home directory and they own it, SSH refuses the connection silently: the user just sees a generic connection error with no explanation on the client side.

# Correct structure for an SFTP chroot
# The home must be root:root 755
ls -la /home/alice/
# drwxr-xr-x  3 root  root  4096 ...

# Subdirectories belong to the user
ls -la /home/alice/downloads/
# drwxr-xr-x  2 alice alice 4096 ...

# Fix permissions if needed
chown root:root /home/alice
chmod 755 /home/alice

# The user must still be able to write somewhere
mkdir -p /home/alice/uploads
chown alice:alice /home/alice/uploads
chmod 755 /home/alice/uploads
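A small checker makes this permission audit repeatable across all homes. A sketch, demonstrated here on a scratch directory (owned by the current user, so it deliberately fails the check):

```shell
# Flag any chroot root that does not satisfy the root:root + 755 requirement
check_chroot() {
    local dir="$1"
    local owner group mode
    owner=$(stat -c '%U' "$dir")
    group=$(stat -c '%G' "$dir")
    mode=$(stat -c '%a' "$dir")
    if [ "$owner" = "root" ] && [ "$group" = "root" ] && [ "$mode" = "755" ]; then
        echo "OK   $dir ($owner:$group $mode)"
    else
        echo "BAD  $dir ($owner:$group $mode)"
    fi
}

# Demonstration on a scratch directory, which cannot pass the check
scratch=$(mktemp -d)
chmod 700 "$scratch"
check_chroot "$scratch"

# In production: for d in /home/*; do check_chroot "$d"; done
```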

Detailed SFTP logging

By default, SFTP transfers are not logged in a useful way. Enable logging in the Subsystem directive:

# In /etc/ssh/sshd_config
# Replace the existing Subsystem line with:
Subsystem sftp internal-sftp -l INFO -f AUTH

Transfers then appear in /var/log/auth.log with the format sftp-server[PID]: open "/path/file.txt" flags READ mode 0666.
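A quick way to get a feel for the activity is counting reads vs writes. The log lines below are illustrative samples shaped like the format just described:

```shell
# Illustrative auth.log excerpt in the sftp-server format described above
cat > /tmp/sample_auth.log <<'EOF'
Feb 18 09:12:05 srv sftp-server[4242]: open "/uploads/report.pdf" flags WRITE,CREATE,TRUNCATE mode 0664
Feb 18 09:12:09 srv sftp-server[4242]: open "/downloads/iso.img" flags READ mode 0666
EOF

# Reads vs writes in the sample
reads=$(grep -c 'flags READ' /tmp/sample_auth.log)
writes=$(grep -c 'flags WRITE' /tmp/sample_auth.log)
echo "reads=$reads writes=$writes"
```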

Disable root SSH login

# In /etc/ssh/sshd_config
PermitRootLogin prohibit-password

# Verify no unknown key exists
cat /root/.ssh/authorized_keys
# If the file contains keys you don't recognize: security incident

prohibit-password (formerly without-password) forbids password login but allows SSH keys. It's safer than no if you need emergency access via key.

# Reload SSH after modification
sshd -t && systemctl reload sshd

4. Permissions and sensitive files

After verifying nothing was modified, this is the opportunity to put permissions into a correct state once and for all.

Credential files

# htdigest files — readable by root and www-data only
chown root:www-data /etc/apache2/.htdigest
chmod 640 /etc/apache2/.htdigest

# Configuration files with passwords
chown root:www-data /var/www/html/config.php
chmod 640 /var/www/html/config.php

# Normal user home directories
chmod 700 /home/alice
chown alice:alice /home/alice

# Exception: home dirs of chrooted SFTP users → root:root 755 (see section 3)

Block web access to sensitive directories

The natural temptation is to use <Location> with Require all denied. Problem: in Apache 2.4 with HTTP Digest authentication configured at the parent level, Require directives can interact unexpectedly with inherited auth. In some configurations, access is simply challenged with a Digest prompt instead of being denied.

RewriteRule is more reliable because, in server and virtual host context, mod_rewrite runs before authentication:

# In the vhost config (in a .htaccess, drop the leading / from the patterns)
RewriteEngine On

# Block direct access to internal data
RewriteRule ^/data/internal - [F,L]
RewriteRule ^/uploads/private - [F,L]

# Block config files accidentally exposed
RewriteRule \.(env|log|sql|bak)$ - [F,L]

The [F] flag returns a 403 immediately. [L] stops processing further rules.
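The extension-blocking pattern can be dry-run against candidate paths with grep -E, which is close enough to Apache's regex engine for a pattern this simple:

```shell
# The blocking pattern from the rules above
pattern='\.(env|log|sql|bak)$'

blocked=0; allowed=0
for path in /app/.env /var/backup.sql /notes.txt /img/logo.png /old/site.bak; do
    if echo "$path" | grep -qE "$pattern"; then
        blocked=$((blocked+1)); echo "403  $path"
    else
        allowed=$((allowed+1)); echo "pass $path"
    fi
done
echo "blocked=$blocked allowed=$allowed"
```

Note that logo.png passes: the alternation only matches whole extensions anchored at the end of the path, not "log" as a substring.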

5. Sudoers — Principle of least privilege

By default on Debian, the installation often creates a user in the sudo group. In a context where the server is shared and exposes services, keeping accounts with full sudo is unnecessary risk.

# List who has sudo
getent group sudo

# Remove sudo from a non-root account
deluser adminuser sudo

# Verify
sudo -l -U adminuser
# "User adminuser is not allowed to run sudo"

If the admin needs elevation, su is sufficient — they know the root password. No need for sudo on a personal server.

www-data and scripts with privileges

If PHP scripts need to execute system commands with elevated privileges (reload Apache, run a maintenance script), never put www-data in sudo globally. Create a file in /etc/sudoers.d/ with only what is strictly necessary:

# /etc/sudoers.d/www-data
# Always use absolute paths
www-data ALL=(ALL) NOPASSWD: /usr/sbin/apachectl graceful
www-data ALL=(ALL) NOPASSWD: /usr/local/bin/maintenance-script.sh

# Correct permissions on this file
chmod 440 /etc/sudoers.d/www-data

# Check syntax before saving
visudo -c -f /etc/sudoers.d/www-data

Scripts called by www-data must validate their inputs. If a parameter is passed from PHP, use escapeshellarg() and validate the format before calling shell_exec():

<?php
// Validate the parameter before using it
$user = $_POST['username'] ?? '';
if (!preg_match('/^[a-z][a-z0-9_]{2,31}$/', $user)) {
    die('Invalid username format');
}

$escaped = escapeshellarg($user);
$output = shell_exec("sudo /usr/local/bin/create-user-dir.sh $escaped");
?>

6. Audit and intrusion detection (auditd)

fail2ban reacts. auditd observes and records. The two are complementary.

apt install auditd audispd-plugins
systemctl enable auditd

Create the rules file /etc/audit/rules.d/security.rules:

# /etc/audit/rules.d/security.rules

# sudoers modifications
-w /etc/sudoers -p wa -k sudoers
-w /etc/sudoers.d/ -p wa -k sudoers

# SSH configuration
-w /etc/ssh/sshd_config -p wa -k sshd_config

# Root SSH keys
-w /root/.ssh/authorized_keys -p wa -k root_keys

# System accounts
-w /etc/passwd -p wa -k accounts
-w /etc/shadow -p wa -k accounts
-w /etc/group -p wa -k accounts

# su and sudo execution
-w /usr/bin/su -p x -k su_exec
-w /usr/bin/sudo -p x -k sudo_exec

# Crontab
-w /etc/crontab -p wa -k crontab
-w /etc/cron.d/ -p wa -k crontab
-w /var/spool/cron/ -p wa -k crontab

# fail2ban configuration
-w /etc/fail2ban/ -p wa -k fail2ban

# Sensitive application configuration files
-w /var/www/html/config.php -p rwa -k app_config
-w /etc/apache2/ -p wa -k apache_config

# Load rules without restarting
augenrules --load

# Verify active rules
auditctl -l

# Review events (last 24h) — aureport needs raw records on stdin
ausearch --raw -ts yesterday -te now | aureport -f -i | head -50

Process accounting with acct

acct records every command executed by every user, with timestamp and duration. Less verbose than auditd, but excellent for a post-incident audit: "what did user X do yesterday between 2pm and 3pm?"

apt install acct
accton on  # Enable recording

# See recent commands by user
lastcomm --user alice | head -30

# Recent root commands
lastcomm --user root | head -50

# Filter by specific command
lastcomm --command bash

rkhunter — Rootkit scanner

apt install rkhunter

# Initialize the baseline (do this immediately after a clean installation)
rkhunter --update
rkhunter --propupd

# Full scan
rkhunter --check --skip-keypress

# After a system update, regenerate the baseline
apt upgrade && rkhunter --propupd

rkhunter will generate false positives at first — legitimate files it doesn't recognize. Go through the warnings once to qualify them. Real rootkits don't hide in plain sight (if the system is already compromised, rkhunter will likely be bypassed), but the tool is useful for detecting unexpected modifications to system binaries.

7. Automated security audit with AI analysis

The problem with monitoring logs on an exposed server: the background noise is enormous. Hundreds of SSH attempts per day from Chinese and Russian IPs is the norm. Alerting on all of them by email would mean never reading your email.

The solution put in place: collect the raw report every night, pass this report to Claude via CLI to distinguish background noise from real incidents, and only send an email if something warrants attention.

Collection script

#!/bin/bash
# /root/scripts/security-audit.sh
# Runs every night via cron: 0 3 * * * /root/scripts/security-audit.sh

set -euo pipefail

REPORT_DIR="/var/log/security-audit"
TODAY=$(date +%Y-%m-%d)
REPORT="$REPORT_DIR/$TODAY.txt"

mkdir -p "$REPORT_DIR"
chmod 700 "$REPORT_DIR"

{
    echo "=== SECURITY REPORT - $TODAY ==="
    echo "Generated on: $(date)"
    echo ""

    echo "=== AUDITD EVENTS (24h) ==="
    ausearch --raw -ts yesterday -te now 2>/dev/null | aureport -f -i 2>/dev/null | tail -100 || echo "auditd: no events"
    echo ""

    echo "=== FAIL2BAN - ACTIVE BANS ==="
    for jail in sshd apache-auth proftpd; do
        echo "--- $jail ---"
        fail2ban-client status "$jail" 2>/dev/null || echo "jail $jail not active"
    done
    echo ""

    echo "=== SSH - FAILED ATTEMPTS (24h) ==="
    grep "Failed password" /var/log/auth.log | grep "$(date +%b)" | tail -50 || echo "none"
    echo ""

    echo "=== SSH - SUCCESSFUL CONNECTIONS (24h) ==="
    grep "Accepted" /var/log/auth.log | grep "$(date +%b)" || echo "none"
    echo ""

    echo "=== LISTENING PORTS (check) ==="
    ss -tlnp
    echo ""

    echo "=== ESTABLISHED OUTBOUND CONNECTIONS ==="
    ss -tnp state established | grep -v "127.0.0.1" | grep -v "::1" | head -30 || echo "none"
    echo ""

    # rkhunter scan only on Sundays (day 7)
    if [ "$(date +%u)" -eq 7 ]; then
        echo "=== RKHUNTER SCAN (weekly) ==="
        rkhunter --check --skip-keypress --quiet 2>&1 | tail -30 || echo "rkhunter: error"
        echo ""
    fi

    echo "=== END OF REPORT ==="
} > "$REPORT" 2>&1

chmod 600 "$REPORT"

# Run AI analysis separately
/root/scripts/security-analyze.sh "$REPORT"

AI analysis script

#!/bin/bash
# /root/scripts/security-analyze.sh
# Receives the report path as argument

set -euo pipefail

REPORT="${1:-}"
ADMIN_EMAIL="admin@example.com"
TODAY=$(date +%Y-%m-%d)
ANALYSIS_TIMEOUT=60

if [ -z "$REPORT" ] || [ ! -f "$REPORT" ]; then
    echo "Usage: $0 /path/to/report.txt" >&2
    exit 1
fi

# Prompt for Claude — ask for JSON only to make parsing easy
PROMPT="Analyze this Linux server security report. Distinguish real incidents from normal events (SSH attempts from random IPs are typical background noise). A real incident would be: a successful SSH connection from an unknown IP, a sensitive file change in auditd, an unexpected open port, or an abnormally high volume of attempts on a specific service. Reply ONLY with valid JSON, no markdown, no explanation: {\"alert\": true/false, \"summary\": \"2-3 sentence summary\", \"details\": [\"point1\", \"point2\"]}"

# Claude call with timeout
AI_RESPONSE=""
AI_ERROR=0

if command -v claude >/dev/null 2>&1; then
    AI_RESPONSE=$(timeout "$ANALYSIS_TIMEOUT" bash -c "cat '$REPORT' | claude --print --model claude-haiku-4-5 '$PROMPT'" 2>/dev/null) || AI_ERROR=1
else
    AI_ERROR=1
fi

# Safe fallback: if AI is unavailable, send raw report
# We don't miss an incident because the API was down
if [ "$AI_ERROR" -eq 1 ] || [ -z "$AI_RESPONSE" ]; then
    mail -s "[SECURITY] Report $TODAY - AI analysis unavailable" "$ADMIN_EMAIL" < "$REPORT"
    exit 0
fi

# Parse the response JSON
ALERT=$(echo "$AI_RESPONSE" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('alert', False))" 2>/dev/null || echo "parse_error")

if [ "$ALERT" = "parse_error" ]; then
    # Invalid JSON → fallback raw report
    mail -s "[SECURITY] Report $TODAY - invalid AI response" "$ADMIN_EMAIL" < "$REPORT"
    exit 0
fi

if [ "$ALERT" = "True" ]; then
    SUMMARY=$(echo "$AI_RESPONSE" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('summary', 'N/A'))" 2>/dev/null || echo "N/A")

    {
        echo "AI Analysis: $SUMMARY"
        echo ""
        echo "--- Full report ---"
        cat "$REPORT"
    } | mail -s "[SECURITY ALERT] $TODAY - Incident detected" "$ADMIN_EMAIL"
fi

# If alert=False: nothing. The report is archived in $REPORT_DIR for manual review.

# Make executable and schedule
chmod 700 /root/scripts/security-audit.sh
chmod 700 /root/scripts/security-analyze.sh

# Root crontab
crontab -e
# Add:
# 0 3 * * * /root/scripts/security-audit.sh

The advantage of the systematic fallback: if Claude API is down, times out, or returns invalid JSON, the raw report is sent anyway. You don't risk missing a real incident because the third-party analysis service was unavailable that night.
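The parse-or-fallback behavior is worth testing offline. A minimal sketch reproducing the script's parsing step, fed one valid response and one chatty (invalid) one:

```shell
# Same parse as in the script: anything that isn't valid JSON must
# yield "parse_error" so the raw-report fallback kicks in
parse_alert() {
    echo "$1" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('alert', False))" 2>/dev/null || echo "parse_error"
}

good=$(parse_alert '{"alert": true, "summary": "ok"}')
bad=$(parse_alert 'Sorry, here is the JSON you asked for: {...}')
echo "good=$good bad=$bad"
```

A model that wraps its JSON in prose is the most common failure mode, and this is exactly the case the fallback covers.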

8. PHP security

Secure sessions

By default, PHP sessions are not configured to resist cookie theft. Add to php.ini or at the start of every script that uses sessions:

<?php
// Configure before session_start()
ini_set('session.cookie_httponly', 1);   // Inaccessible from JavaScript
ini_set('session.cookie_secure', 1);     // HTTPS only
ini_set('session.cookie_samesite', 'Lax');  // Partial CSRF protection
ini_set('session.use_strict_mode', 1);   // Reject session IDs not generated by the server

session_start();
?>

Or in /etc/php/8.x/apache2/php.ini to apply globally:

session.cookie_httponly = 1
session.cookie_secure = 1
session.cookie_samesite = Lax
session.use_strict_mode = 1

CSRF protection

Every form that performs an action (modification, deletion, submission) must be protected with a CSRF token. The minimal but correct implementation:

<?php
session_start();

// Token generation (once per session or per form)
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// In the HTML form
// <input type="hidden" name="csrf_token" value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">

// Verification on POST requests
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $submitted_token = $_POST['csrf_token'] ?? '';
    if (!hash_equals($_SESSION['csrf_token'], $submitted_token)) {
        http_response_code(403);
        die('Invalid CSRF token');
    }
    // Regenerate after use for single-use forms
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
?>

Rate limiting on sensitive actions

Without a database or Redis, basic session-based rate limiting is sufficient for small sites:

<?php
function check_rate_limit(string $action, int $max_attempts, int $window_seconds): bool
{
    $key = 'rate_limit_' . $action;
    $now = time();

    if (!isset($_SESSION[$key])) {
        $_SESSION[$key] = ['count' => 0, 'reset_at' => $now + $window_seconds];
    }

    if ($now > $_SESSION[$key]['reset_at']) {
        $_SESSION[$key] = ['count' => 0, 'reset_at' => $now + $window_seconds];
    }

    $_SESSION[$key]['count']++;

    return $_SESSION[$key]['count'] <= $max_attempts;
}

// Usage
if (!check_rate_limit('login', 5, 300)) {  // 5 attempts per 5 minutes
    http_response_code(429);
    die('Too many attempts. Please try again in a few minutes.');
}
?>

9. Apache HTTP headers

These headers strengthen security on the browser side. They don't prevent a server intrusion, but they reduce the attack surface for XSS and clickjacking vulnerabilities.

# In .htaccess or the vhost config
# Requires mod_headers: a2enmod headers

<IfModule mod_headers.c>
    # Prevents the browser from guessing the MIME type
    Header always set X-Content-Type-Options "nosniff"

    # Prevents inclusion in iframes (clickjacking protection)
    Header always set X-Frame-Options "SAMEORIGIN"

    # Controls information sent in the Referer header
    Header always set Referrer-Policy "strict-origin-when-cross-origin"

    # Disables sensitive unused features
    Header always set Permissions-Policy "camera=(), microphone=(), geolocation=()"

    # Remove the PHP version header from responses
    Header always unset X-Powered-By
</IfModule>

# ServerTokens is only valid in the main server config (not in a vhost or
# .htaccess); set these in /etc/apache2/conf-available/security.conf to
# hide the Apache version:
ServerTokens Prod
ServerSignature Off

HSTS (Strict-Transport-Security) can be added for you by Certbot (the --hsts option) when the site runs on HTTPS. Don't configure it manually unless you know exactly what you're doing: a too-long max-age combined with an HTTPS configuration error can make the site inaccessible for months from browsers that cached the header.
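To confirm the configured headers are actually served, compare a response against the expected list. A sketch against a captured response; in production, feed it the output of curl -sI on your own site:

```shell
# Hypothetical captured response headers (Permissions-Policy deliberately absent)
cat > /tmp/resp_headers.txt <<'EOF'
HTTP/1.1 200 OK
Server: Apache
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Referrer-Policy: strict-origin-when-cross-origin
EOF

# Flag every expected header that is missing from the response
missing=0
for h in X-Content-Type-Options X-Frame-Options Referrer-Policy Permissions-Policy; do
    grep -qi "^$h:" /tmp/resp_headers.txt || { echo "MISSING: $h"; missing=$((missing+1)); }
done
echo "missing=$missing"
```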

10. Docker — Isolating containers

When Apache acts as a reverse proxy to Docker containers, a common shortcut is to expose container ports on all network interfaces. The result: the service is directly accessible from the Internet, completely bypassing the reverse proxy and all the authentication that comes with it.

# /srv/wiki/docker-compose.yml

services:
  wiki:
    image: requarks/wiki:2
    # BAD — accessible from any IP on port 3000:
    # ports:
    #   - "3000:3000"
    #
    # GOOD — bound to localhost only: the reverse proxy can reach it,
    # the Internet cannot
    ports:
      - "127.0.0.1:3000:3000"

The Apache reverse proxy config:

<VirtualHost *:443>
    ServerName wiki.example.com

    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/

    # Authentication is handled here, not in the container
    <Location /admin>
        AuthType Digest
        AuthName "Admin Area"
        AuthDigestProvider file
        AuthUserFile /etc/apache2/.htdigest
        Require valid-user
    </Location>
</VirtualHost>

Check existing containers that might have this problem:

# List exposed Docker ports
docker ps --format "table {{.Names}}\t{{.Ports}}"

# Look for bindings on 0.0.0.0 (problematic)
docker ps --format "{{.Ports}}" | grep "0.0.0.0"
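The 0.0.0.0 check can be turned into a small audit. The port column below is an illustrative capture shaped like docker ps output (IPv6 wildcard bindings with ::: are just as exposed):

```shell
# Illustrative docker ps port output — the jellyfin line is the problem case
cat > /tmp/docker_ports.txt <<'EOF'
wiki    127.0.0.1:3000->3000/tcp
jellyfin        0.0.0.0:8096->8096/tcp, :::8096->8096/tcp
EOF

# Any 0.0.0.0 or ::: binding bypasses the reverse proxy entirely
grep -E '(0\.0\.0\.0|:::)' /tmp/docker_ports.txt
exposed=$(grep -cE '(0\.0\.0\.0|:::)' /tmp/docker_ports.txt)
echo "exposed=$exposed"
```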

11. Automatic updates

apt install unattended-upgrades apt-listchanges

# Enable
dpkg-reconfigure -plow unattended-upgrades

Verify the generated configuration in /etc/apt/apt.conf.d/20auto-upgrades:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

On Debian 12, security updates are included by default in the unattended-upgrades configuration. Check /etc/apt/apt.conf.d/50unattended-upgrades to ensure the Debian-Security origins are uncommented.

# Simulate what would be updated
unattended-upgrade --dry-run -d

# Force an update now
unattended-upgrade -d

# View logs of past updates
cat /var/log/unattended-upgrades/unattended-upgrades.log | tail -50

12. Log retention

Debian keeps system logs for 4 weeks by default. If you detect an incident and want to know what happened 6 weeks ago, you have nothing. Extend retention now, before you need it.

# /etc/logrotate.d/rsyslog — change rotate 4 to rotate 13 for ~3 months
# (check the existing content first)
cat /etc/logrotate.d/rsyslog

# Modified version
/var/log/syslog
/var/log/auth.log
/var/log/kern.log
/var/log/mail.log
/var/log/daemon.log
{
    rotate 13
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}

# /etc/logrotate.d/apache2 — extend to 365 days
# Look for "rotate" in the existing file and adapt
# Recommended format for Apache: daily rotation, 365 files
/var/log/apache2/*.log {
    daily
    rotate 365
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        if invoke-rc.d apache2 status > /dev/null 2>&1; then
            invoke-rc.d apache2 reload > /dev/null 2>&1
        fi
    endscript
}

# fail2ban: 54 weeks (~1 year)
# /etc/logrotate.d/fail2ban
/var/log/fail2ban.log {
    weekly
    rotate 54
    compress
    delaycompress
    missingok
    postrotate
        fail2ban-client flushlogs 1>/dev/null || true
    endscript
}

# Test logrotate config
logrotate --debug /etc/logrotate.conf
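The "about 3 months" and "about a year" figures come straight from the rotation arithmetic:

```shell
# Approximate retention in days for the three policies above
syslog_days=$((13 * 7))     # weekly rotation, rotate 13
apache_days=365             # daily rotation, rotate 365
fail2ban_days=$((54 * 7))   # weekly rotation, rotate 54
echo "syslog=${syslog_days}d apache=${apache_days}d fail2ban=${fail2ban_days}d"
```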

13. Password changes post-incident

The least glamorous but most critical part. After any suspicion of intrusion, even if the analysis concludes it was a failed attempt, change all passwords that could have been exposed. When in doubt, change them.

htdigest password

# Change a user's password in an htdigest file
htdigest /etc/apache2/.htdigest "Private Area" alice

# Verify the resulting file (format: user:realm:md5_hash)
cat /etc/apache2/.htdigest
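The stored hash is reproducible by hand, which is handy to verify that a change actually landed: htdigest stores the MD5 of "user:realm:password" (the HA1 value from the Digest scheme). A sketch with a throwaway example password:

```shell
# Recompute the htdigest hash manually: MD5("user:realm:password")
user="alice"; realm="Private Area"; password="hunter2"   # throwaway example
hash=$(printf '%s:%s:%s' "$user" "$realm" "$password" | md5sum | cut -d' ' -f1)
echo "$user:$realm:$hash"
```

The printed line should match the corresponding entry in the .htdigest file after the password change (note the printf: a trailing newline would change the hash).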

System passwords

# Change a user's password
passwd alice

# Change root password
passwd root

# Force change on next login
chage -d 0 alice

Checking consistency

The classic trap: a password is referenced in multiple places. Before validating a change, search for all occurrences:

# Search for references to a username in configs
grep -r "alice" /etc/apache2/ 2>/dev/null
grep -r "alice" /etc/proftpd/ 2>/dev/null
grep -r "alice" /var/www/html/ 2>/dev/null

# Find htpasswd and htdigest files
find /etc/apache2 /var/www -name ".htpasswd" -o -name ".htdigest" 2>/dev/null

A password can live in the htdigest file, in an application's configuration (wiki, seedbox), in maintenance scripts, and in internal documentation. Updating just one of the four and re-explaining everything to all users three weeks later is not a recommended experience.

Conclusion — Security hardening checklist

What this audit produced concretely, summarized for a quick review:

  • fail2ban installed and configured with custom filter for Apache auth_digest
  • Active jails: sshd, apache-auth, proftpd
  • sftponly group with chroot and ForceCommand internal-sftp
  • Chrooted users' home dirs at root:root 755 (unavoidable constraint)
  • Detailed SFTP logging enabled in sshd_config
  • Root SSH login set to prohibit-password, authorized_keys verified
  • Credential file permissions reviewed (640, root:www-data)
  • Sensitive directories blocked by RewriteRule (not Location)
  • Accounts that don't need sudo removed from the sudo group
  • www-data sudoers limited to strictly necessary commands
  • auditd installed with rules on critical files
  • acct installed for per-user command history
  • rkhunter installed, baseline initialized
  • Daily audit script with AI analysis and raw report fallback
  • Secure PHP sessions (httponly, secure, samesite)
  • CSRF protection on forms
  • Security HTTP headers configured in Apache
  • Docker ports bound to 127.0.0.1 only
  • unattended-upgrades active for security updates
  • Extended log retention (3 months for auth.log, 1 year for Apache)
  • All potentially exposed passwords changed

A server exposed on the Internet will always be attacked. That comes with the territory. The question isn't to prevent attempts — that's impossible — but to ensure that attempts fail, that any eventual successes are detected quickly, and that you have the logs needed to understand what happened.

This audit took about two days of work spread over a week. Most points should have been done at initial installation. That's rarely the case. What matters is doing it before something serious happens — and automating monitoring so you don't have to revisit it manually every week.