Hiveku handles its own backups, but offsite copies under your control protect against provider outages and accidental deletions, and satisfy compliance requirements for copies you hold yourself. This recipe ships a nightly SQL dump to your own S3 bucket.
Before you start: have a database provisioned and an S3 bucket (or equivalent: R2, GCS, Backblaze) with credentials stored as environment variables in your project.

The Flow at a Glance

1. Schedule

Cron trigger fires every night

2. Dump + upload

Backup database, push to S3

3. Cleanup + alert

Trim old backups, notify Slack

Step 1: Create the Workflow

1. Open Workflows

Go to Workflows > New Workflow. Name it Nightly DB Backup.

2. Add a Schedule trigger

Click Add Trigger > Schedule. Set the cron expression:
cron(0 3 * * ? *)
This fires every night at 3:00 AM UTC. Adjust the hour to match off-peak time in your main user timezone — less load during the dump means faster completion.
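For example, if most of your users are in US Eastern time (UTC-5 in winter), 3:00 AM Eastern is 8:00 AM UTC:
cron(0 8 * * ? *)
During daylight saving this shifts to 4:00 AM local, which is still comfortably off-peak.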

Step 2: Create the Backup

1. Add a Database Backup action

Click + Add Action > Database Backup. Select the target branch (usually main). The action produces a SQL dump and returns a signed download URL in its output.

Step 3: Upload to Cloud Storage

1. Download the dump

Click + Add Action > HTTP Request:
  • Method: GET
  • URL: {{step2.output.download_url}}
  • Response type: binary
The response body is the SQL dump as bytes, ready to upload.

2. Upload to S3

Click + Add Action > S3 Upload:
  • Bucket: your-backups
  • Key: hiveku/{{trigger.date}}/db-backup.sql
  • Body: {{step3.output.body}}
  • Metadata: project_id={{env.PROJECT_ID}}, backup_id={{step2.output.id}}
Signed download URLs from the backup action expire in 1 hour. Keep the download and upload steps adjacent in the workflow — long-running steps between them can cause the URL to expire before upload completes.
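If you ever need the same hop outside the workflow builder (say, for a one-off manual copy), here is a minimal Python sketch of the equivalent download-then-upload, assuming boto3 and requests, with placeholder values standing in for the template variables above:

import os
import requests
import boto3

# Placeholders for the workflow's template variables.
download_url = "https://example.com/signed-backup-url"  # {{step2.output.download_url}}
run_date = "2024-01-01"                                 # {{trigger.date}}

# Fetch the dump while the signed URL is still valid (it expires in 1 hour).
dump = requests.get(download_url, timeout=600)
dump.raise_for_status()

# Upload immediately, mirroring the adjacent download/upload steps above.
boto3.client("s3").put_object(
    Bucket="your-backups",
    Key=f"hiveku/{run_date}/db-backup.sql",
    Body=dump.content,
    Metadata={
        "project_id": os.environ.get("PROJECT_ID", ""),
        "backup_id": "manual-copy",  # placeholder; the workflow uses {{step2.output.id}}
    },
)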

Step 4: Prune Old Backups

Hiveku enforces storage quotas. Delete backups you’ve safely copied offsite.

1. Add a Database Cleanup action

Click + Add Action > Delete Backups:
  • Older than: 30 days
Your offsite copies in S3 are the long-term archive — Hiveku-side copies just need to cover the recent recovery window.

Step 5: Alert on Success or Failure

A backup job that fails silently is worse than no backup at all. You need to know when things break.

1. Add a Slack notification action

Click + Add Action > Slack Message. Compose:
{
  "text": "Nightly DB backup complete — uploaded to s3://your-backups/hiveku/{{trigger.date}}/"
}
Configure On failure: send failure alert so broken runs also notify the channel.

2. Save and enable

Click Save and toggle Enabled.

Simpler Alternative

If offsite storage isn’t a hard requirement, skip steps 3 and 4. Just schedule the Database Backup action on its own — Hiveku stores the result in managed backup storage, zero config, zero cost beyond quota. You lose offsite redundancy but gain simplicity.

Retention Strategy

A solid backup strategy is layered, not uniform. For most businesses:
  • Daily backups — kept for 30 days
  • Weekly backups — kept for 6 months (pick one weekday to retain)
  • Monthly backups — kept for 2 years (first of the month)
Build this with separate scheduled workflows, or a single workflow that decides retention based on the trigger date (Sunday = weekly, 1st of month = monthly).
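If you take the single-workflow route, the retention decision is a simple date check. A minimal Python sketch of that logic, with illustrative tier names and windows:

from datetime import date

def retention_tier(run_date: date) -> tuple[str, int]:
    # Map a run date to (tier, days to keep) per the layered policy above.
    if run_date.day == 1:         # first of the month: monthly, kept ~2 years
        return ("monthly", 730)
    if run_date.weekday() == 6:   # Sunday: weekly, kept ~6 months
        return ("weekly", 180)
    return ("daily", 30)          # everything else: daily, kept 30 days

print(retention_tier(date(2024, 1, 7)))  # a Sunday -> ('weekly', 180)

Prefixing the S3 key with the tier (daily/, weekly/, monthly/) lets you attach a separate lifecycle expiration rule to each prefix.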
Test your restores. Schedule a quarterly “restore to staging” workflow that pulls a random backup and restores it into a sandbox database. A backup you can’t restore from is worse than no backup — it creates false confidence.
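A hedged sketch of what that quarterly job could run, assuming a Postgres dump, psql on the path, and a hypothetical STAGING_DATABASE_URL pointing at the sandbox database:

import os
import random
import subprocess
import boto3

s3 = boto3.client("s3")
bucket = "your-backups"

# Pick a random backup from the offsite archive.
objects = s3.list_objects_v2(Bucket=bucket, Prefix="hiveku/").get("Contents", [])
key = random.choice([o["Key"] for o in objects])
s3.download_file(bucket, key, "/tmp/restore-test.sql")

# Restore into the sandbox and run a sanity query (assumes a users table exists).
staging_url = os.environ["STAGING_DATABASE_URL"]
subprocess.run(["psql", staging_url, "-f", "/tmp/restore-test.sql"], check=True)
subprocess.run(["psql", staging_url, "-c", "SELECT count(*) FROM users;"], check=True)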

Verify It Worked

After saving, click Run Now to trigger the workflow manually. Confirm:
  1. A new backup appears in your S3 bucket at hiveku/<today>/db-backup.sql
  2. The file size is reasonable (matches expected database size)
  3. The Slack success notification arrives
  4. The backup can actually be restored — download it, restore to a test database, query a table

Troubleshooting

Backup times out

Very large databases may exceed the default timeout. Contact support to raise the limit, or switch to incremental backups via your database provider’s native tooling rather than full dumps.

S3 upload fails with a permissions error

Either the credentials are wrong or your bucket policy denies the PUT. Check that the IAM user or role attached to your env vars has s3:PutObject and s3:PutObjectAcl permissions for the target bucket. Test with the AWS CLI using the same credentials to isolate the issue.
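If you would rather test from Python than the AWS CLI, a zero-byte PutObject with the same credentials exercises exactly the permission the workflow needs (bucket and key below are placeholders):

import os
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
# Succeeds only if s3:PutObject is allowed on this bucket/prefix for these credentials.
s3.put_object(Bucket="your-backups", Key="hiveku/permission-check.txt", Body=b"")
print("PutObject allowed")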

Upload fails because the signed URL expired

The download URL from Database Backup expires in 1 hour. Shorten the workflow: do the upload immediately after the download, with no unrelated steps in between. For very large dumps, use multipart upload or restructure so the backup streams directly to S3.
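One way to restructure for very large dumps, sketched in Python: stream the signed URL straight into the upload so the whole file never sits in memory. boto3's upload_fileobj switches to multipart automatically for large objects:

import requests
import boto3

download_url = "https://example.com/signed-backup-url"  # placeholder for the signed URL

# Stream the response body instead of buffering it, then hand the raw stream
# to upload_fileobj, which chunks it into multipart parts as needed.
with requests.get(download_url, stream=True, timeout=600) as resp:
    resp.raise_for_status()
    resp.raw.decode_content = True
    boto3.client("s3").upload_fileobj(resp.raw, "your-backups", "hiveku/db-backup.sql")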

Workflow never runs on schedule

Check the workflow’s Runs tab for any scheduled fires. If none, the Schedule trigger isn’t configured correctly: verify the cron expression and confirm the workflow is Enabled. Cron expressions with ? and * in the wrong positions are a common trip-up.

No alert when a run fails

Confirm the Slack action has its On failure: send failure alert option enabled, not just the success message. Also test by deliberately breaking the workflow (rename the S3 bucket temporarily) and confirming the failure alert arrives.

What’s Next?

Manual Backups

Create an ad-hoc backup before risky deployments

Cron Jobs

More examples of scheduled workflows