Before you start: have a database provisioned, an S3 bucket (or equivalent: R2, GCS, Backblaze), and the bucket credentials stored as environment variables in your project.
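A quick way to sanity-check that setup before building the workflow; the variable names below are placeholders for whatever your project actually defines:

```python
import os

# Placeholder names: substitute whatever your project actually defines.
REQUIRED = ["DATABASE_URL", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "BACKUP_BUCKET"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All backup credentials present.")
```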
The Flow at a Glance
1. Schedule: a cron trigger fires every night
2. Dump + upload: back up the database and push it to S3
3. Cleanup + alert: trim old backups and notify Slack
Step 1: Create the Workflow
Step 2: Create the Backup
Step 3: Upload to Cloud Storage
Download the dump
Click + Add Action > HTTP Request:
- Method: GET
- URL: {{step2.output.download_url}}
- Response type: binary
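If you want to verify this step outside the workflow builder, here is a rough Python equivalent of the download-then-upload flow. The signed URL and bucket name are placeholders; the dated key layout mirrors the one used later in this guide:

```python
import os
from datetime import date

import boto3
import requests

# Placeholders: the signed URL comes from the Database Backup step's output;
# the bucket name comes from your environment.
download_url = "https://example.com/signed/db-backup.sql"
bucket = os.environ["BACKUP_BUCKET"]

# Stream the dump to disk so large backups never sit fully in memory.
with requests.get(download_url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("db-backup.sql", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)

# Upload under a dated key, matching the hiveku/<today>/db-backup.sql layout.
key = f"hiveku/{date.today().isoformat()}/db-backup.sql"
boto3.client("s3").upload_file("db-backup.sql", bucket, key)
```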
Step 4: Prune Old Backups
Hiveku enforces storage quotas. Delete backups you’ve safely copied offsite.
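That covers Hiveku’s quota; if you also want to cap what accumulates in S3, here is a minimal boto3 sketch that deletes objects older than 30 days, assuming the dated hiveku/&lt;date&gt;/ key layout from step 3 and a placeholder bucket name:

```python
import boto3
from datetime import date, timedelta

s3 = boto3.client("s3")
bucket = "my-backup-bucket"  # placeholder
cutoff = date.today() - timedelta(days=30)

# Walk the backup prefix and delete anything older than the cutoff.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix="hiveku/"):
    for obj in page.get("Contents", []):
        # Keys look like hiveku/2025-01-31/db-backup.sql; the date is the second segment.
        day = date.fromisoformat(obj["Key"].split("/")[1])
        if day < cutoff:
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
```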
Step 5: Alert on Success or Failure
Silent backups are worse than no backups. You need to know when things break.
Add a Slack notification action
Click + Add Action > Slack Message and compose your message. Configure On failure: send failure alert so broken runs also notify the channel.
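For reference, an equivalent notification sent directly through a Slack incoming webhook (the webhook URL is a placeholder you’d create in your Slack workspace):

```python
import requests

webhook_url = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

payload = {"text": ":white_check_mark: Nightly database backup completed and uploaded to S3."}
requests.post(webhook_url, json=payload, timeout=10).raise_for_status()
```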
Simpler Alternative
If offsite storage isn’t a hard requirement, skip steps 3 and 4. Just schedule the Database Backup action on its own: Hiveku stores the result in managed backup storage, zero config, zero cost beyond quota. You lose offsite redundancy but gain simplicity.
Retention Strategy
A solid backup strategy is layered, not uniform. For most businesses (a sketch of this logic follows the list):
- Daily backups: kept for 30 days
- Weekly backups: kept for 6 months (pick one weekday to retain)
- Monthly backups: kept for 2 years (first of the month)
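One way to express that tiering in code, a sketch assuming Monday as the retained weekday and approximate month lengths:

```python
from datetime import date
from typing import Optional

def should_keep(day: date, today: Optional[date] = None) -> bool:
    """Layered retention: daily for 30 days, weekly (Mondays) for ~6 months,
    monthly (first of the month) for ~2 years."""
    today = today or date.today()
    age_days = (today - day).days
    if age_days <= 30:
        return True                        # daily tier
    if age_days <= 183 and day.weekday() == 0:
        return True                        # weekly tier: keep Mondays
    if age_days <= 730 and day.day == 1:
        return True                        # monthly tier: keep the 1st
    return False
```

Run this over the dated prefixes in your bucket and delete any backup where should_keep returns False.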
Verify It Worked
After saving, click Run Now to trigger the workflow manually. Confirm:
- A new backup appears in your S3 bucket at hiveku/<today>/db-backup.sql
- The file size is reasonable (matches your expected database size)
- The Slack success notification arrives
- The backup can actually be restored — download it, restore to a test database, query a table
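The first two checks are easy to script; here is a boto3 sketch that fails loudly if today’s object is missing or suspiciously small (the bucket name and size threshold are placeholders):

```python
import boto3
from datetime import date

s3 = boto3.client("s3")
bucket = "my-backup-bucket"  # placeholder
key = f"hiveku/{date.today().isoformat()}/db-backup.sql"

head = s3.head_object(Bucket=bucket, Key=key)  # raises ClientError if missing
size_mb = head["ContentLength"] / 1_000_000
assert size_mb > 10, f"Backup suspiciously small: {size_mb:.1f} MB"  # pick your own floor
print(f"OK: {key} is {size_mb:.1f} MB")
```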
Troubleshooting
Backup step fails on a large database
Very large databases may exceed the default timeout. Contact support to raise the limit, or switch to incremental backups via your database provider’s native tooling rather than full dumps.
S3 upload fails with 403
Either the credentials are wrong or your bucket policy denies the PUT. Check that the IAM user or role attached to your env vars has s3:PutObject and s3:PutObjectAcl permissions for the target bucket. Test with the AWS CLI using the same credentials to isolate the issue.
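If you’d rather test from Python than the CLI, the same isolation check with boto3, which reads the same AWS_* environment variables (the bucket name is a placeholder):

```python
import boto3
from botocore.exceptions import ClientError

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment,
# so this exercises exactly the credentials the workflow uses.
s3 = boto3.client("s3")
try:
    s3.put_object(Bucket="my-backup-bucket", Key="hiveku/permission-test", Body=b"ok")
    print("PutObject succeeded: credentials and bucket policy are fine.")
except ClientError as e:
    print("PutObject failed:", e.response["Error"]["Code"])  # e.g. AccessDenied
```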
Signed URL expires before upload completes
The download URL from Database Backup expires in 1 hour. Shorten the workflow: do the upload immediately after the download, with no unrelated steps in between. For very large dumps, use multipart upload or restructure so the backup streams directly to S3.
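One way to get that streaming behavior, a sketch using boto3’s managed transfer over the response stream (the URL, bucket, and threshold are placeholders):

```python
import boto3
import requests
from boto3.s3.transfer import TransferConfig

download_url = "https://example.com/signed/db-backup.sql"  # placeholder
bucket, key = "my-backup-bucket", "hiveku/db-backup.sql"   # placeholders

# upload_fileobj switches to multipart automatically above the threshold, so the
# signed URL only needs to stay valid while bytes are actually flowing.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024)
with requests.get(download_url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    resp.raw.decode_content = True  # make the raw stream safe to read directly
    boto3.client("s3").upload_fileobj(resp.raw, bucket, key, Config=config)
```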
Backups aren't running on schedule
Check the workflow’s Runs tab for scheduled runs. If there are none, the Schedule trigger isn’t configured correctly: verify the cron expression and confirm the workflow is Enabled. Cron expressions with ? and * in the wrong positions are a common trip-up.
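To sanity-check an expression before saving, here is a sketch using the third-party croniter package (assuming Hiveku uses standard five-field cron):

```python
from datetime import datetime
from croniter import croniter  # pip install croniter

expr = "0 2 * * *"  # minute hour day-of-month month day-of-week: nightly at 02:00

if croniter.is_valid(expr):
    it = croniter(expr, datetime.now())
    print("Next runs:", it.get_next(datetime), it.get_next(datetime))
else:
    print(f"Invalid cron expression: {expr!r}")
```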
Alerts aren't firing on failure
Confirm the Slack action is configured with Send on failure enabled, not just on success. Also test by deliberately breaking the workflow (rename the S3 bucket temporarily) — the failure alert should trigger.
What’s Next?
Manual Backups
Create an ad-hoc backup before risky deployments
Cron Jobs
More examples of scheduled workflows