n8n Backup Automation: Databases, Files, and Configs to Cloud Storage

Backups are one of those things everyone knows they should do but few people do consistently. The reason is simple: manual backups are tedious, easy to forget, and the consequences of skipping them are invisible — until the day everything goes wrong.

I am Javier, a startup consultant in Chile, and I have seen enough data loss incidents to take backups very seriously. After helping one client recover from a database corruption that cost them two weeks of customer data (they had no automated backups), I started building backup automation systems for every project I work on. n8n handles this perfectly because backups are inherently scheduled, repetitive tasks that benefit from monitoring and alerting.

In this guide, I will show you how to build comprehensive backup automation with n8n, covering databases, files, configurations, and the monitoring layer that ensures your backups actually work.

Why Use n8n for Backup Automation?

Dedicated backup tools exist, but they are often expensive, inflexible, or limited to specific platforms. n8n gives you complete control:

Backup anything — databases, files, SaaS data, API exports, and configuration files
Store anywhere — AWS S3, Google Cloud Storage, Backblaze B2, or any cloud storage with an API
Schedule flexibly — daily, hourly, or custom schedules based on your needs
Monitor and alert — know immediately when a backup fails instead of finding out during a crisis
Retention management — automatically clean up old backups based on your retention policy
Cross-platform — one system manages backups for all your services

Setting Up n8n for Backups

You need connections to your data sources and storage destinations.

If you do not have n8n running yet, start with n8n cloud. Backup workflows need to run reliably on schedule, and missing a scheduled backup defeats the purpose.

Common Connections

Data sources:
– PostgreSQL, MySQL, MongoDB (via n8n database nodes or SSH commands)
– Google Workspace (Docs, Sheets, Drive via API)
– Notion, Airtable (via API export)
– Application configs (via SSH or API)

Storage destinations:
– AWS S3 (using the S3 node)
– Google Cloud Storage (using HTTP Request)
– Backblaze B2 (using the S3-compatible API)
– SFTP servers
– Google Drive (for smaller backups)

Monitoring:
– Slack for alerts
– Email for failure notifications
– A health check service like Healthchecks.io

Workflow 1: Database Backup Pipeline

This is the most critical backup workflow. Your database contains your business data, and losing it can be catastrophic.

PostgreSQL Backup

Trigger: Schedule Trigger, daily at 2 AM (choose a low-traffic time)

Step 1: Create the database dump.

Use the Execute Command node (if n8n has SSH access to your database server) or the HTTP Request node to trigger a backup script on your server.

The Execute Command approach:
– Connect to your database server via SSH
– Run the pg_dump command with appropriate flags: compressed format, include schema, timestamp in filename
– The output is a compressed file on the server

If you cannot run commands directly, set up a small backup script on your database server that runs via a webhook. n8n triggers the webhook, the script creates the dump, and the script responds with the file path or uploads it to a temporary location.
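
To make this concrete, here is a minimal sketch of what such a server-side script could look like. The database name, backup directory, and paths are placeholders you would adapt to your own setup.

```bash
#!/usr/bin/env bash
# Hypothetical server-side backup script that n8n triggers via SSH or a webhook.
# Database name and backup directory are placeholders.
set -euo pipefail

DB_NAME="app_production"
BACKUP_DIR="/var/backups/postgres"
TIMESTAMP="$(date +%Y-%m-%d-%H%M%S)"
DUMP_FILE="${BACKUP_DIR}/backup-${TIMESTAMP}.sql.gz"

mkdir -p "$BACKUP_DIR"

# Plain-format dump (schema and data) piped through gzip for compression.
pg_dump "$DB_NAME" | gzip > "$DUMP_FILE"

# Print the path so the calling workflow knows which file to transfer.
echo "$DUMP_FILE"
```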

Step 2: Transfer to cloud storage.

1. Download the dump file from the server (via SSH/SFTP or HTTP)
2. Upload to S3 using the AWS S3 node:
– Bucket: your-company-backups
– Key: databases/production/postgres/backup-2026-04-11-020000.sql.gz
– Storage class: Standard for recent backups; a lifecycle rule can move them to Glacier after 30 days (see the sketch below)
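
If you manage the bucket with the AWS CLI, the Glacier transition can be handled by a lifecycle rule rather than by the workflow itself. A hedged sketch, with the bucket name and prefix as placeholders:

```bash
# Sketch: transition backup objects to Glacier 30 days after upload.
# Bucket name and prefix are placeholders for your own setup.
aws s3api put-bucket-lifecycle-configuration \
  --bucket your-company-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-db-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "databases/" },
      "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }]
    }]
  }'
```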

Step 3: Verify the backup.

This step is crucial. Many backup systems fail silently, creating empty or corrupted files.

1. Check file size — Is the backup file size within the expected range? If your database is 500 MB, a 1 KB backup file means something went wrong
2. Verify integrity — Download a small portion of the backup and check that it contains valid data. For SQL dumps, check that the file starts with the expected header
3. Record the backup metadata — Store the backup timestamp, file size, file path, and verification status in your backup log (a Google Sheet or database table)
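
Here is a minimal sketch of these checks for a PostgreSQL dump, assuming the file is available locally on a Linux host and using a hypothetical 100 MB minimum size:

```bash
# Sketch: sanity-check a PostgreSQL dump before trusting it.
DUMP_FILE="backup-2026-04-11-020000.sql.gz"   # produced by the previous step
MIN_BYTES=$((100 * 1024 * 1024))              # hypothetical threshold; tune to your database size

# 1. File size check.
SIZE=$(stat -c %s "$DUMP_FILE")
if [ "$SIZE" -lt "$MIN_BYTES" ]; then
  echo "Backup suspiciously small: ${SIZE} bytes" >&2
  exit 1
fi

# 2. Header check: a plain-format pg_dump starts with a recognizable comment.
if ! gunzip -c "$DUMP_FILE" | head -n 5 | grep -q "PostgreSQL database dump"; then
  echo "File does not look like pg_dump output" >&2
  exit 1
fi

echo "Backup verified: ${SIZE} bytes"
```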

Step 4: Notify.

1. On success — Post a brief confirmation in a #backups Slack channel: “PostgreSQL production backup completed. Size: 487 MB. Stored in S3.”
2. On failure — Send an urgent alert to the #incidents channel and via email to the engineering team lead. Include the error details and the last successful backup date

MySQL Backup

The process is nearly identical, using mysqldump instead of pg_dump. The key differences:

– MySQL dumps are typically larger, so compression is even more important
– Consider using --single-transaction for InnoDB tables to avoid locking
– For very large databases, use incremental backups with binary logs
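
A hedged sketch of the dump command, assuming a database named app_production and credentials supplied via an option file rather than on the command line:

```bash
# Sketch: MySQL dump with a consistent snapshot for InnoDB tables.
# Database name is a placeholder; credentials should come from ~/.my.cnf or similar.
TIMESTAMP="$(date +%Y-%m-%d-%H%M%S)"
mysqldump --single-transaction --routines --triggers app_production \
  | gzip > "/var/backups/mysql/backup-${TIMESTAMP}.sql.gz"
```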

MongoDB Backup

Use mongodump for MongoDB. The workflow follows the same pattern:

1. Run mongodump with compression
2. Transfer to cloud storage
3. Verify the dump directory structure
4. Clean up local files
5. Notify on success or failure
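
A sketch of the dump step, assuming mongodump can reach the database locally:

```bash
# Sketch: compressed MongoDB dump written to a timestamped directory.
TIMESTAMP="$(date +%Y-%m-%d-%H%M%S)"
mongodump --gzip --out "/var/backups/mongo/backup-${TIMESTAMP}"
```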

Workflow 2: File and Document Backup

For important files that live outside your database: documents, uploads, media files, and configuration files.

Application File Backup

Trigger: Schedule Trigger, daily at 3 AM (after database backup completes)

1. Create a compressed archive of your application’s upload directory or file storage:
– Use an Execute Command node to run tar with gzip compression
– Include only the directories that contain user-uploaded or generated files
2. Upload to S3 in a separate bucket or prefix from your database backups
3. Verify by checking file size and comparing with the expected range
4. Notify on success or failure
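
A sketch of the archive command an Execute Command node might run; the upload directory is a placeholder:

```bash
# Sketch: archive user-uploaded files with gzip compression.
# /srv/app/uploads is a placeholder for your actual upload directory.
TIMESTAMP="$(date +%Y-%m-%d-%H%M%S)"
tar -czf "/var/backups/files/uploads-${TIMESTAMP}.tar.gz" -C /srv/app uploads
```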

Google Workspace Backup

For companies that rely heavily on Google Docs and Drive:

1. List critical folders in Google Drive using the Google Drive node
2. For each important document, download a copy in the original format
3. Upload to your backup storage maintaining the folder structure
4. Track changes — store a manifest of all backed-up files with their last modified dates. On subsequent runs, only back up files that changed since the last backup

Notion Backup

Notion does not have a native export API that is easy to automate, but you can work around this:

1. Use the Notion API to list all pages and databases
2. Export each page’s content as Markdown using the Notion node
3. Store the exports in your backup storage organized by workspace structure
4. Track versions — keep the last 7 daily exports so you can restore to any recent point
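
If you prefer an HTTP Request node over the Notion node, the first step can use Notion’s search endpoint. A hedged sketch; the integration token is a placeholder, and each listed page still needs to be fetched and converted separately:

```bash
# Sketch: list all pages the Notion integration can access (step 1 above).
# NOTION_TOKEN is a placeholder for your integration token.
curl -s -X POST "https://api.notion.com/v1/search" \
  -H "Authorization: Bearer ${NOTION_TOKEN}" \
  -H "Notion-Version: 2022-06-28" \
  -H "Content-Type: application/json" \
  -d '{ "filter": { "property": "object", "value": "page" } }'
```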

Workflow 3: Configuration Backup

Application configurations, environment variables, and infrastructure-as-code files are often overlooked in backup strategies but are critical for disaster recovery.

Server Configuration Backup

Trigger: Schedule Trigger, weekly (configurations change less frequently)

1. Connect to each server via SSH
2. Copy critical config files:
– Nginx or Apache configuration
– Application environment files (sanitize sensitive values or encrypt the backup)
– Docker Compose files
– Systemd service definitions
– Cron job configurations
3. Create a versioned archive and upload to S3
4. Compare with previous version — alert if configs changed unexpectedly (could indicate unauthorized modifications)
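
As a rough sketch, a server-side script along these lines could collect the configs and flag changes since the previous run. The file paths are examples; include whatever matters on your servers:

```bash
# Sketch: collect config files into a versioned archive and diff against the previous snapshot.
set -euo pipefail
TIMESTAMP="$(date +%Y-%m-%d-%H%M%S)"
STAGE="/var/backups/configs/${TIMESTAMP}"
mkdir -p "$STAGE"

cp -r /etc/nginx "$STAGE/nginx"
cp /srv/app/docker-compose.yml "$STAGE/" 2>/dev/null || true
crontab -l > "$STAGE/crontab.txt" 2>/dev/null || true

tar -czf "/var/backups/configs/configs-${TIMESTAMP}.tar.gz" -C "$STAGE" .

# Diff against the previous snapshot so unexpected changes can trigger an alert.
PREV=$(ls -d /var/backups/configs/*/ | sort | tail -n 2 | head -n 1)
if [ -n "$PREV" ] && [ "$PREV" != "${STAGE}/" ]; then
  diff -r "$PREV" "$STAGE" || echo "Configs changed since last backup"
fi
```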

n8n Workflow Backup

Back up your n8n workflows themselves. This is a meta-level backup, but it is an important one.

1. Use the n8n API to export all workflows as JSON
2. Store in a Git repository (using the GitHub API or direct git push) or upload to S3
3. Include credentials configuration (encrypted) and environment variables
4. Run weekly and keep version history
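
The export itself can be a single HTTP Request node. Here is the equivalent call sketched with curl, assuming the n8n public API is enabled; the host and API key are placeholders:

```bash
# Sketch: export all workflows as JSON via the n8n public API.
# N8N_HOST and N8N_API_KEY are placeholders for your instance URL and API key.
curl -s "${N8N_HOST}/api/v1/workflows" \
  -H "X-N8N-API-KEY: ${N8N_API_KEY}" \
  > "workflows-$(date +%Y-%m-%d).json"
```

For self-hosted instances, the n8n CLI (n8n export:workflow --all) is another way to produce the same export.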

DNS and Domain Configuration

1. Query your DNS provider’s API (Cloudflare, Route53, etc.) to export all DNS records
2. Store the zone file in your backup storage
3. Alert on unexpected changes — if a DNS record changes between backup runs, notify the team
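
With Cloudflare, for example, the whole zone can be pulled as a standard zone file in one request. A hedged sketch; the zone ID and token are placeholders:

```bash
# Sketch: export a BIND-style zone file from Cloudflare.
# ZONE_ID and CLOUDFLARE_API_TOKEN are placeholders.
curl -s "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/export" \
  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
  > "dns-zone-$(date +%Y-%m-%d).txt"
```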

Workflow 4: Backup Retention and Cleanup

Storing every backup forever is expensive and unnecessary. Implement a retention policy.

Retention Strategy

A common approach I use:

Daily backups — keep the last 7 days
Weekly backups — keep the last 4 weeks (one backup per week)
Monthly backups — keep the last 12 months (one backup per month)
Annual backups — keep indefinitely

Cleanup Workflow

Trigger: Schedule Trigger, weekly on Sunday at 4 AM

1. List all backups in your S3 bucket using the AWS S3 node
2. Categorize by age using the timestamp in the filename or S3 metadata
3. Apply retention rules:
– Delete daily backups older than 7 days (except those that are the weekly backup)
– Delete weekly backups older than 30 days (except those that are the monthly backup)
– Delete monthly backups older than 365 days (except those that are the annual backup)
4. Log deletions for audit trail
5. Report the cleanup results: how many backups deleted, storage freed, oldest remaining backup
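
The retention rules can live in a Code node, or, as a rough sketch under the assumption that backup keys embed a YYYY-MM-DD date, in a script using the AWS CLI on a Linux host. The bucket and prefix are placeholders, and annual retention is omitted for brevity:

```bash
# Sketch: delete daily backups older than 7 days, keeping weekly (Sunday) copies for
# 30 days and monthly (1st of the month) copies beyond that.
BUCKET="your-company-backups"
PREFIX="databases/production/postgres/"
NOW=$(date +%s)

aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "$PREFIX" \
  --query 'Contents[].Key' --output text | tr '\t' '\n' | while read -r KEY; do
  DATE=$(echo "$KEY" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}' | head -n 1)
  [ -n "$DATE" ] || continue
  AGE_DAYS=$(( (NOW - $(date -d "$DATE" +%s)) / 86400 ))
  DOW=$(date -d "$DATE" +%u)   # 7 = Sunday
  DOM=$(date -d "$DATE" +%d)

  if { [ "$AGE_DAYS" -gt 7 ] && [ "$DOW" != "7" ] && [ "$DOM" != "01" ]; } ||
     { [ "$AGE_DAYS" -gt 30 ] && [ "$DOM" != "01" ]; }; then
    echo "Deleting $KEY (age ${AGE_DAYS} days)"
    aws s3 rm "s3://${BUCKET}/${KEY}"
  fi
done
```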

Storage Cost Monitoring

Add a step to your cleanup workflow that checks your cloud storage costs:

1. Calculate total backup storage size from the S3 listing
2. Estimate monthly cost based on your storage pricing tier
3. Alert if costs are increasing unexpectedly — might indicate a backup configuration issue creating oversized files
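
A sketch of the size and cost estimate using the AWS CLI; the bucket name and the $0.023/GB Standard rate are assumptions to adjust for your own pricing:

```bash
# Sketch: total backup size plus a rough monthly cost estimate at S3 Standard pricing.
aws s3 ls "s3://your-company-backups" --recursive --summarize \
  | awk '/Total Size:/ { gb = $3 / (1024 ^ 3); printf "Total: %.1f GB, approx $%.2f per month at $0.023/GB\n", gb, gb * 0.023 }'
```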

Workflow 5: Backup Monitoring and Testing

Backups you never test are not really backups. They are hopes.

Health Check Integration

Set up a monitoring system that verifies backups are running:

1. Use Healthchecks.io or a similar service — create a check for each backup workflow
2. At the end of each successful backup, ping the health check URL
3. If the check misses its expected ping, Healthchecks.io alerts you that a backup did not run
4. This catches silent failures — situations where n8n is down, the workflow is disabled, or the schedule was changed accidentally

Monthly Restore Test

Trigger: Schedule Trigger, first day of each month

1. Select the most recent backup from your storage
2. Download and decompress the backup file
3. Restore to a test database (never restore test backups to production)
4. Run validation queries — check row counts against known values, verify data integrity
5. Report results — did the restore succeed? How long did it take? Were there any data anomalies?
6. Clean up the test database after verification

This monthly test ensures your backups are not just files on S3 — they are actually restorable.
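
A sketch of the restore and validation steps, assuming a plain-format gzipped PostgreSQL dump and local access to a test server; the table name and expected values are placeholders:

```bash
# Sketch: restore the latest dump into a throwaway database and run a basic validation query.
set -euo pipefail
DUMP_FILE="backup-2026-04-11-020000.sql.gz"   # the most recent backup, downloaded earlier
TEST_DB="restore_test_$(date +%Y%m)"

createdb "$TEST_DB"
gunzip -c "$DUMP_FILE" | psql --quiet "$TEST_DB"

# Validation: check a known table's row count (table name is a placeholder).
ROWS=$(psql -t -A -c "SELECT count(*) FROM customers;" "$TEST_DB")
echo "customers rows after restore: $ROWS"

# Clean up the test database once verification is done.
dropdb "$TEST_DB"
```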

Backup Dashboard

Maintain a living dashboard (Google Sheet or Notion page) that shows:

– Last successful backup time for each system
– Backup file sizes over time (trending up means your data is growing)
– Time since last restore test
– Storage costs by category
– Any active alerts or failures

Tips for Backup Automation

Encrypt Sensitive Backups

Database dumps and config files often contain sensitive data. Encrypt them before uploading to cloud storage. Use GPG or AES encryption in your workflow’s Execute Command step, and store the encryption key separately from the backups (ideally in a secrets manager).
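
For example, a GPG step before the upload might look like this; the passphrase variable is a placeholder pulled from your secrets manager:

```bash
# Sketch: symmetric AES-256 encryption of a dump before upload.
# BACKUP_PASSPHRASE is a placeholder and should come from a secrets manager,
# never be stored next to the backups themselves.
gpg --batch --yes --pinentry-mode loopback \
  --symmetric --cipher-algo AES256 \
  --passphrase "$BACKUP_PASSPHRASE" \
  --output backup-2026-04-11.sql.gz.gpg backup-2026-04-11.sql.gz
```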

Geographic Redundancy

Store backups in at least two geographic regions. If your primary infrastructure is in US-East, send backup copies to US-West or EU. Most cloud storage providers make this easy with cross-region replication or by uploading to a bucket in a different region.

Document Your Restore Process

Automated backups are only half the equation. Document exactly how to restore from each type of backup. Include the commands, the expected timeline, and the verification steps. Practice the restore process with your team at least once per quarter.

Version Your Backup Scripts

Treat your backup workflows like code. Export them and store them in version control. If you accidentally break a backup workflow, you want to be able to restore it quickly.

Protect Your Data with Automated Backups

Data loss is not a question of if, but when. Drives fail, databases get corrupted, employees accidentally delete things, and cyberattacks happen. The only question is whether you will have a recent, verified backup ready to restore.

Start with your database — it is your most critical data asset. Build the backup workflow, add verification, and set up failure alerts. Then expand to cover your files, configurations, and SaaS data.

Get started with n8n and build your first automated backup pipeline today.

Frequently Asked Questions

Can n8n back up large databases without timing out?

For large databases (10 GB and above), the backup process can take a long time. Instead of having n8n perform the dump directly, I recommend triggering the backup on the database server itself. Set up a script on the server that handles the dump and compression, and have n8n trigger it via SSH or a webhook. The script runs locally on the server (which is fast), and n8n handles the monitoring, notification, and storage transfer afterward. You can also increase n8n’s execution timeout in the settings for workflows that need more time.

How much does it cost to store backups in the cloud?

Cloud storage costs vary by provider and tier. AWS S3 Standard costs about $0.023 per GB per month. For a 1 GB daily database backup with 7 daily, 4 weekly, and 12 monthly copies, you are storing about 23 copies or roughly 23 GB — that is about $0.53 per month. Older backups can be moved to cheaper tiers like S3 Glacier ($0.004 per GB) for significant savings. Backblaze B2 is even cheaper at $0.005 per GB per month. For most small to mid-sized businesses, backup storage costs are negligible compared to the cost of data loss.

Should I back up my n8n workflows and credentials?

Absolutely. Your n8n workflows represent significant automation investment. Use the n8n API to export all workflows as JSON and store them in a Git repository or cloud storage. For n8n cloud users, the n8n team handles the infrastructure backup, but having your own export gives you portability and version history. For self-hosted n8n, also back up the n8n database itself (which contains workflows, credentials, and execution history) and your environment variables file.

🚀 Ready to automate?

Start your free n8n trial today.

Try n8n Free →