How to Set Up Automated Incremental Backups with rsync and Cron on Linux

Data loss is one of the most disruptive events a system administrator or Linux user can experience. Hard drives fail. Ransomware encrypts. Accidental rm -rf happens. Power failures corrupt filesystems at the worst possible moment. The only genuine protection against all of these scenarios is a backup system that actually runs, reliably, on a schedule, without requiring you to remember to do it manually.
The combination of rsync and cron is the foundation of a practical Linux backup strategy. rsync is one of the most powerful and efficient file synchronization tools available on any operating system: it copies only what has changed, preserves file metadata, works locally and over SSH, and produces detailed logs of every operation. Cron is the Linux task scheduler that runs your backup script silently every day, every hour, or at any interval you specify, regardless of whether you are at your keyboard.
This guide builds a complete, production-ready incremental backup system from the ground up. We start with the concepts: what incremental backups are, how rsync works under the hood, and the differences between backup strategies.
Then we build progressively: a simple daily backup script, a 7-day rotating snapshot system with hard links, remote SSH backup, email notifications, and log rotation. Every script is annotated, every flag explained, and every decision justified. By the end, you will have an automated backup system that you trust because you understand exactly how it works.
Understanding Backup Strategies: Full, Incremental, and Differential
Before building a backup system, it helps to understand the three fundamental backup strategies and the trade-offs between them. Choosing the right strategy determines your storage requirements, your backup window, and how quickly you can recover from data loss.
| Strategy | Storage Use | Backup Speed | Recovery Speed | Best For |
|---|---|---|---|---|
| Full | High — copies everything each time | Slow — transfers all data | Fast — single set to restore | Infrequent backups where fast recovery is critical |
| Incremental | Low — only changed files since last backup | Fast — minimal data transferred | Slower — requires chain of sets | Daily backups where storage efficiency is important |
| Differential | Medium — changed files since last FULL backup | Medium — grows over time | Medium — two sets to restore | Weekly full + daily differential for balanced approach |
| Snapshot | Very Low — hard links for unchanged files | Fast — only changed files | Fast — each snapshot is complete | Daily automated snapshots with retention periods (this guide) |
rsync with the --link-dest option implements the snapshot strategy — the most storage-efficient approach that also provides the fastest recovery. Each snapshot directory appears to contain a complete copy of your data, but unchanged files are stored as hard links pointing to the same data blocks as the previous snapshot. The result: you get the browse-and-restore convenience of full backups at the storage cost of incremental backups.
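To see why hard-linked snapshots cost almost nothing, a quick coreutils experiment (in a throwaway temp directory, with hypothetical file names) shows two names sharing one inode:

```shell
# Create a file and a second hard link to it in a throwaway directory
tmp=$(mktemp -d)
echo "hello" > "$tmp/original.txt"
ln "$tmp/original.txt" "$tmp/link.txt"

# Both names share one inode: the link count is 2 and no data is duplicated
stat -c '%h %i' "$tmp/original.txt"
stat -c '%h %i' "$tmp/link.txt"

rm -rf "$tmp"
```

Deleting one name leaves the other fully intact; the data blocks are freed only when the last link is removed. --link-dest applies exactly this trick, per unchanged file, across snapshot directories.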
Installing rsync
rsync is pre-installed on virtually all major Linux distributions. Verify it is available and check its version:
rsync --version
If it is not installed (rare on modern systems), install it using your distribution’s package manager:
# Debian / Ubuntu
sudo apt update && sudo apt install rsync -y
# RHEL / CentOS / Rocky Linux / Fedora
sudo dnf install rsync -y
# Arch Linux
sudo pacman -S rsync
Step 1: Create the Basic Daily Backup Script
Building the Foundation Backup Script
Start with a clear, simple backup script that covers the most common use case: syncing a source directory to a local backup destination daily. We will progressively enhance this script in subsequent steps.
Create the script in /usr/local/bin/ where it is accessible system-wide and will be found in the PATH for any user:
sudo nano /usr/local/bin/rsync-backup.sh
Paste the following complete, annotated script:
#!/bin/bash
# ==============================================================
# rsync Daily Backup Script — vmorecloud.com
# Performs incremental daily backup with full logging
# ==============================================================
# ── Configuration ───────────────────────────────────────────
SOURCE="/home/user/documents/" # Source directory (trailing / copies the contents, not the directory itself)
DEST="/backup/documents/" # Backup destination directory
LOGFILE="/var/log/rsync-backup.log" # Log file path
DATE=$(date +"%Y-%m-%d %H:%M:%S") # Human-readable timestamp
# ── Ensure destination directory exists ─────────────────────
mkdir -p "$DEST"
# ── Log backup start ────────────────────────────────────────
echo "" >> "$LOGFILE"
echo "=== Backup started at $DATE ===" >> "$LOGFILE"
# ── Run rsync ───────────────────────────────────────────────
rsync -av \
    --delete \
    --stats \
    --exclude='*.tmp' \
    --exclude='.cache/' \
    --exclude='lost+found/' \
    "$SOURCE" \
    "$DEST" >> "$LOGFILE" 2>&1
# ── Capture exit code and log result ────────────────────────
EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
    echo "=== Backup COMPLETED successfully at $(date) ===" >> "$LOGFILE"
else
    echo "=== Backup FAILED with exit code $EXIT_CODE at $(date) ===" >> "$LOGFILE"
fi
Save the file (Ctrl+O, Enter, Ctrl+X in nano) and make it executable:
sudo chmod +x /usr/local/bin/rsync-backup.sh
Step 2: Test the Script Before Scheduling It
Running and Verifying the Backup Script
Testing manually before automating with cron is essential. Silent failures in cron are hard to diagnose — a 5-minute manual test saves hours of troubleshooting later.
Dry Run First — See What Will Happen Without Making Changes
Always run rsync with --dry-run before the first real execution, especially when --delete is involved:
rsync -av --delete --dry-run /home/user/documents/ /backup/documents/
The --dry-run flag simulates the operation completely — it shows every file that would be transferred and every file that would be deleted — without actually making any changes. Review the output carefully before committing to the real run.
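A dry run is easy to sanity-check in isolation. This sketch, using throwaway directories and hypothetical file names, confirms that -n (short for --dry-run) combined with -i (--itemize-changes) reports a pending transfer and a pending deletion without touching the destination:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dst"
echo "new" > "$tmp/src/new.txt"     # present only in the source
echo "old" > "$tmp/dst/stale.txt"   # present only in the destination

# Report every pending change, but change nothing
rsync -avni --delete "$tmp/src/" "$tmp/dst/"

# The destination is untouched: stale.txt survives, new.txt was never copied
ls "$tmp/dst"

rm -rf "$tmp"
```

In the itemized output, lines beginning with `>f` mark files that would be transferred and `*deleting` marks files that --delete would remove.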
Run the Script and Check the Log
Execute the backup script as root (required for proper permission handling):
sudo /usr/local/bin/rsync-backup.sh
The script runs silently — all output goes to the log file. Check the log:
cat /var/log/rsync-backup.log
The log should show the backup start timestamp, a list of transferred files, rsync statistics, and a COMPLETED success message.
Verify the backup destination contains the expected files:
ls -lh /backup/documents/
Run the script a second time to verify that unchanged files are skipped and only modified files are re-transferred:
sudo /usr/local/bin/rsync-backup.sh
grep -E 'Number of (regular )?files transferred' /var/log/rsync-backup.log | tail -2
The second run should show zero or very few files transferred — confirming that rsync’s incremental sync is working correctly and only detecting actual changes.
Step 3: Automate with Cron — Daily Scheduled Backups
Scheduling the Backup with Cron
With the script tested and confirmed working, automate it using cron. Because backup scripts typically need root privileges to read all files and write to /backup/, use the root crontab:
sudo crontab -e
Add the following line to run the backup daily at 2:00 AM. Running at an off-peak hour reduces the impact on system performance and avoids conflicts with peak-hour file access:
# Daily rsync backup at 2:00 AM
0 2 * * * /usr/local/bin/rsync-backup.sh
For a more robust entry that also captures cron execution errors (distinct from rsync errors already captured by the script):
0 2 * * * /usr/local/bin/rsync-backup.sh >> /var/log/rsync-cron.log 2>&1
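The five cron fields are minute, hour, day-of-month, month, and day-of-week. If daily at 2:00 AM does not fit your workload, a few alternative schedules for the same script:

```
# Every 6 hours, on the hour
0 */6 * * * /usr/local/bin/rsync-backup.sh
# Weekdays only (Mon-Fri) at 1:30 AM
30 1 * * 1-5 /usr/local/bin/rsync-backup.sh
# Weekly, Sunday at 3:00 AM
0 3 * * 0 /usr/local/bin/rsync-backup.sh
```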
Verify the cron job was added:
sudo crontab -l
Step 4: 7-Day Rotating Snapshots with Hard Links (--link-dest)
Understanding Hard Link Snapshots
The basic backup script in Step 1 maintains a single backup destination that is always a mirror of the current source state. If you accidentally delete a file and the backup runs before you notice, the file disappears from the backup too. A rotating snapshot system solves this by maintaining multiple dated backup directories — each one appearing to be a full copy of your data, but using hard links to avoid storing duplicate file data.
How the Initial Snapshot Baseline Works
Before running the rotation script for the first time, create the initial baseline snapshot manually:
# Create the backup base directory
sudo mkdir -p /backup/snapshots
# Create the initial full snapshot (daily.0 is always the most recent)
sudo rsync -av --delete /home/user/documents/ /backup/snapshots/daily.0/
This initial run is the only full data transfer. All subsequent snapshots use --link-dest to reference the previous day's snapshot (daily.1 after rotation), copying only changed files and hard-linking everything else.
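The hard-linking behavior is worth verifying on a small scale before trusting it with real data. This sketch uses throwaway directories and a hypothetical file name; identical inode numbers in both snapshots prove the file data exists on disk only once:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src"
echo "unchanged content" > "$tmp/src/keep.txt"

# First snapshot: a full copy
rsync -a "$tmp/src/" "$tmp/daily.1/"

# Second snapshot: files unchanged relative to daily.1 are hard-linked, not copied
rsync -a --link-dest="$tmp/daily.1" "$tmp/src/" "$tmp/daily.0/"

# Same inode number in both snapshots
stat -c '%i' "$tmp/daily.1/keep.txt" "$tmp/daily.0/keep.txt"

rm -rf "$tmp"
```

Note that a relative --link-dest path is interpreted relative to the destination directory; using an absolute path, as the rotation script below does, avoids any ambiguity.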
The 7-Day Rotation Script
Create the rotating snapshot script:
sudo nano /usr/local/bin/rsync-rotate-backup.sh
Paste the following complete script:
#!/bin/bash
# ==============================================================
# 7-Day Rotating rsync Snapshot Backup — vmorecloud.com
# Uses --link-dest for space-efficient daily snapshots
# ==============================================================
# ── Configuration ───────────────────────────────────────────
SOURCE="/home/user/documents/"
BACKUP_BASE="/backup/snapshots"
LOGFILE="/var/log/rsync-rotate.log"
MAX_DAYS=7
# ── Timestamp for log ────────────────────────────────────────
echo "" >> "$LOGFILE"
echo "====================================================" >> "$LOGFILE"
echo "Snapshot rotation started: $(date)" >> "$LOGFILE"
echo "====================================================" >> "$LOGFILE"
# ── Rotate existing snapshots ────────────────────────────────
# Remove the oldest snapshot (daily.6 when MAX_DAYS=7) to make room
rm -rf "$BACKUP_BASE/daily.$((MAX_DAYS-1))"
# Shift the remaining snapshots: daily.5 -> daily.6, daily.4 -> daily.5, etc.
for (( i=MAX_DAYS-2; i>=0; i-- )); do
    if [ -d "$BACKUP_BASE/daily.$i" ]; then
        mv "$BACKUP_BASE/daily.$i" "$BACKUP_BASE/daily.$((i+1))"
        echo "Rotated: daily.$i -> daily.$((i+1))" >> "$LOGFILE"
    fi
done
# ── Create today's snapshot ──────────────────────────────────
# daily.1 is now yesterday's snapshot (after rotation)
# --link-dest points to yesterday so unchanged files are hard-linked
if [ -d "$BACKUP_BASE/daily.1" ]; then
    rsync -av \
        --delete \
        --stats \
        --exclude='*.tmp' \
        --exclude='.cache/' \
        --link-dest="$BACKUP_BASE/daily.1" \
        "$SOURCE" \
        "$BACKUP_BASE/daily.0/" >> "$LOGFILE" 2>&1
else
    # First run — no previous snapshot to reference, do full copy
    rsync -av --delete --stats "$SOURCE" "$BACKUP_BASE/daily.0/" >> "$LOGFILE" 2>&1
fi
# ── Log completion ────────────────────────────────────────────
EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
    echo "Snapshot completed successfully: $(date)" >> "$LOGFILE"
    echo "Today's snapshot: $BACKUP_BASE/daily.0" >> "$LOGFILE"
else
    echo "Snapshot FAILED (exit: $EXIT_CODE): $(date)" >> "$LOGFILE"
fi
echo "====================================================" >> "$LOGFILE"
Make the script executable and schedule it with cron:
sudo chmod +x /usr/local/bin/rsync-rotate-backup.sh
sudo crontab -e
# Add this line:
0 2 * * * /usr/local/bin/rsync-rotate-backup.sh
Browsing and Restoring from Snapshots
After several days of rotation, the snapshot directory structure looks like this:
/backup/snapshots/
daily.0/ ← Today's backup (most recent)
daily.1/ ← Yesterday's backup
daily.2/ ← 2 days ago
daily.3/ ← 3 days ago
daily.4/ ← 4 days ago
daily.5/ ← 5 days ago
daily.6/ ← 6 days ago
Each directory can be browsed like a normal file tree. To restore a specific file from 3 days ago:
# Browse yesterday's snapshot
ls /backup/snapshots/daily.1/
# Restore a specific file from 3 days ago
cp /backup/snapshots/daily.3/report.docx /home/user/documents/report_recovered.docx
# Check disk usage — notice how little space the snapshots consume
du -sh /backup/snapshots/daily.*
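Restoring a whole snapshot is just another rsync in the opposite direction. A sketch using the same paths as above; always preview first, since --delete on a restore removes any files created after the snapshot was taken:

```
# Preview restoring the 3-day-old snapshot back into place
sudo rsync -av --delete --dry-run /backup/snapshots/daily.3/ /home/user/documents/
# When the preview looks right, run the same command without --dry-run
```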
Step 5: Back Up to a Remote Server Over SSH
Why Remote Backup Is Essential for Disaster Recovery
Local backups — even excellent rotating snapshots — are destroyed by the same events that destroy the original data: house fire, flood, theft, storage controller failure, or ransomware that encrypts all locally-connected drives. The 3-2-1 backup rule exists precisely for this reason: 3 copies of data, on 2 different types of storage, with 1 copy off-site. Remote SSH backup fulfills the off-site requirement.
Set Up SSH Key-Based Authentication (Required for Cron)
Cron jobs run without a TTY and cannot accept interactive password prompts. SSH key authentication is mandatory for automated remote rsync. Generate a dedicated SSH key pair for backup use:
# Generate a dedicated backup SSH key (no passphrase for automation)
sudo ssh-keygen -t ed25519 -f /root/.ssh/backup_key -N ''
# Copy the public key to the remote backup server
sudo ssh-copy-id -i /root/.ssh/backup_key.pub backupuser@backup-server.example.com
# Test the key-based connection (should not prompt for a password)
sudo ssh -i /root/.ssh/backup_key backupuser@backup-server.example.com 'echo Connection successful'
SECURITY BEST PRACTICE: Create a dedicated 'backupuser' account on the remote server with restricted shell access and write permissions only to the backup destination directory. Restrict this account from running arbitrary commands using the command= option in authorized_keys. Never use the root account for automated backup SSH connections.
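One way to implement the command= restriction is rrsync, a wrapper script distributed with rsync that confines a connection to rsync operations inside a single directory. A sketch of the remote authorized_keys entry (key material elided, paths illustrative; some distributions ship rrsync under /usr/share/doc/rsync/scripts/ instead of /usr/bin/):

```
# /home/backupuser/.ssh/authorized_keys on the backup server
command="/usr/bin/rrsync /backup/remote-documents",restrict ssh-ed25519 AAAA... backup_key
```

The restrict option (OpenSSH 7.2+) additionally disables port forwarding, agent forwarding, and PTY allocation for this key.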
Remote Backup Script
Create a remote backup script that uses SSH key authentication:
sudo nano /usr/local/bin/rsync-remote-backup.sh
#!/bin/bash
# ==============================================================
# Remote rsync Backup via SSH — vmorecloud.com
# ==============================================================
SOURCE="/home/user/documents/"
REMOTE_USER="backupuser"
REMOTE_HOST="backup-server.example.com"
REMOTE_DEST="/backup/remote-documents/"
SSH_KEY="/root/.ssh/backup_key"
LOGFILE="/var/log/rsync-remote.log"
echo "Remote backup started: $(date)" >> "$LOGFILE"
# --bwlimit is in KiB/s (50000 is roughly 50 MB/s)
# accept-new trusts a host key on first connect but refuses silently changed keys (OpenSSH 7.6+)
rsync -avz \
    --delete \
    --stats \
    --bwlimit=50000 \
    -e "ssh -i $SSH_KEY -o StrictHostKeyChecking=accept-new" \
    "$SOURCE" \
    "${REMOTE_USER}@${REMOTE_HOST}:${REMOTE_DEST}" >> "$LOGFILE" 2>&1
EXIT_CODE=$?
[ $EXIT_CODE -eq 0 ] && STATUS="SUCCESS" || STATUS="FAILED (code: $EXIT_CODE)"
echo "Remote backup $STATUS: $(date)" >> "$LOGFILE"
Step 6: Add Email Notifications for Backup Status Alerts
Getting Notified When Backups Succeed or Fail
An automated backup that fails silently is nearly as dangerous as having no backup at all. Adding email notifications means you are alerted immediately when a backup fails — not when you need to restore and discover the backup has been broken for three weeks.
The email notification can be added directly to either the basic or rotating backup script. Install the mail utility first:
# Debian / Ubuntu
sudo apt install mailutils -y
# RHEL / CentOS / Rocky / Fedora
sudo dnf install mailx -y
Add the following email notification block at the end of your backup script, after the line that captures EXIT_CODE:
# ── Email notification ───────────────────────────────────────
ADMIN_EMAIL="admin@yourdomain.com"
HOSTNAME=$(hostname)
if [ $EXIT_CODE -eq 0 ]; then
    # Success notification (optional — can be noisy for daily backups)
    echo "Backup completed successfully on $(date)" | \
        mail -s "[OK] Daily Backup: $HOSTNAME" "$ADMIN_EMAIL"
else
    # Failure notification (always send)
    echo "BACKUP FAILED on $HOSTNAME at $(date)." | \
        cat - "$LOGFILE" | \
        mail -s "[FAILED] Backup Alert: $HOSTNAME" "$ADMIN_EMAIL"
fi
The failure notification pipes the log file content into the email body, giving you the full rsync output (including the error details) in the failure notification. This makes diagnosing the failure possible from your inbox without having to SSH to the server.
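The scripts above append to their log files forever. A small logrotate drop-in keeps them bounded and covers the log rotation mentioned in the introduction. The filename and retention values here are suggestions, so adjust them to taste:

```
# /etc/logrotate.d/rsync-backup (suggested drop-in)
/var/log/rsync-backup.log /var/log/rsync-rotate.log /var/log/rsync-remote.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```

Most distributions run logrotate daily via a cron job or systemd timer, so once this file is in place no further scheduling is needed.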
Conclusion
rsync combined with cron and a well-crafted rotation script gives you a backup system that is simultaneously powerful, storage-efficient, and completely transparent — no black-box agents, no proprietary formats, no vendor lock-in. Your backups are ordinary directories of ordinary files that can be browsed with ls, restored with cp, and verified with diff. Every component is a standard Linux tool that has been in production for decades.
The progression in this guide — from a simple daily sync to a 7-day rotating hard-link snapshot system with remote SSH backup and email notifications — represents a complete, production-quality backup architecture that professionals deploy on critical systems. Start with Steps 1 through 3 to get the basic daily backup running, then add the snapshot rotation in Step 4 for point-in-time recovery capability, and complete the 3-2-1 picture with the remote backup in Step 5.







