Catch SEO regressions (Google core update, content change, broken page) the morning they happen — before they tank a month of traffic.
Before you start: connect Google Search Console and have at least a few keywords being tracked. You’ll also need a Slack webhook or email address for the alert destination.

Two Paths

If you just want “ping me when a keyword drops,” use the built-in alerts — no workflow required. Go to Marketing > SEO > Settings > Rank Alerts, toggle on, pick a threshold (default 5 positions), pick a destination (email, Slack), and save. Done.

This covers 80% of what most teams need. Use the custom workflow below when you want bespoke logic — filtering by keyword tier, different thresholds per category, or routing to different channels.

The Flow at a Glance

1. Daily schedule: runs every morning
2. Fetch rankings: pull today vs. yesterday
3. Alert if dropped: threshold-based Slack ping

Step 1: Build the Workflow

1. Start a new workflow

Workflows > New Workflow. Name it Daily SEO Rank Check.
2. Add a Schedule trigger

Click Add Trigger > Schedule. Use cron syntax:
cron(0 8 * * ? *)
That’s 8:00 AM UTC daily. Adjust to your preferred time zone by offsetting (e.g., cron(0 13 * * ? *) for 8 AM ET in winter, when ET is UTC-5).
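If you would rather think in local time, the offset arithmetic can be sketched as a small helper. This assumes a fixed UTC offset and ignores DST transitions, which shift the effective local run time by an hour twice a year:

```python
# Convert a local alert hour to the UTC-based cron expression the
# schedule trigger expects. Assumes a fixed UTC offset (no DST
# handling -- adjust manually after clock changes, or stick to UTC).

def cron_for_local_hour(local_hour: int, utc_offset: int) -> str:
    """utc_offset is hours relative to UTC, e.g. -5 for EST."""
    utc_hour = (local_hour - utc_offset) % 24
    return f"cron(0 {utc_hour} * * ? *)"

print(cron_for_local_hour(8, -5))  # 8 AM EST -> cron(0 13 * * ? *)
print(cron_for_local_hour(8, 0))   # 8 AM UTC -> cron(0 8 * * ? *)
```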
3. Add an HTTP Request action to fetch rankings

Click + Add Action > HTTP Request. Configure it to query Hiveku’s internal SEO endpoint:
  • Method: GET
  • URL: https://api.hiveku.com/seo/rankings/diff?project_id={{project_id}}&days=1
  • Auth: use the Hiveku API key env var
The response is a JSON list of { keyword, url, yesterday_position, today_position, delta } objects. Name the output rankings.
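For debugging outside the workflow builder, the same call can be sketched in plain Python. The Bearer-token Authorization header is an assumption (check your Hiveku API docs for the exact auth scheme), and the sample values are made up to illustrate the response shape:

```python
import json
import urllib.request

# Sketch of fetching the rankings diff directly, e.g. to inspect the
# payload before wiring up the workflow. Auth header is an assumption.

def fetch_rankings(project_id: str, api_key: str, days: int = 1) -> list:
    url = ("https://api.hiveku.com/seo/rankings/diff"
           f"?project_id={project_id}&days={days}")
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Each item in the response has this shape (sample values):
sample = {"keyword": "crm pricing", "url": "/pricing",
          "yesterday_position": 4, "today_position": 9, "delta": -5}
```

Note that delta is yesterday_position minus today_position, so a drop (position number getting larger) yields a negative delta.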
4. Add a Conditional

Click + Add Action > Conditional. Branch on:
{{rankings}} has any item where delta <= -5
That means: continue only if at least one keyword dropped 5 or more positions since yesterday.
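In plain Python, the branch condition above is equivalent to this check (sample data invented for illustration):

```python
# Proceed only when at least one keyword fell 5+ positions.
# delta is negative for a drop in the diff endpoint's output.

THRESHOLD = -5

def should_alert(rankings: list) -> bool:
    return any(r["delta"] <= THRESHOLD for r in rankings)

print(should_alert([{"keyword": "a", "delta": -2},
                    {"keyword": "b", "delta": -7}]))  # True: "b" dropped 7
print(should_alert([{"keyword": "a", "delta": -2}]))  # False: within noise
```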
5. Add a Slack notification

In the true branch, add an HTTP Request action pointed at your Slack webhook, with this JSON body:
{
  "text": "SEO rank drops detected today:\n{{#each rankings.filter(delta <= -5)}}• {{keyword}} — {{yesterday_position}} → {{today_position}} ({{delta}}) — {{url}}\n{{/each}}"
}
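A plain-Python equivalent of that templated payload can help you preview the message body before wiring up the real webhook. The sample rankings here are made up for illustration:

```python
# Build the same Slack payload the template above produces:
# one bullet line per keyword that dropped past the threshold.

def build_slack_payload(rankings: list, threshold: int = -5) -> dict:
    drops = [r for r in rankings if r["delta"] <= threshold]
    lines = [
        f"• {r['keyword']} — {r['yesterday_position']} → "
        f"{r['today_position']} ({r['delta']}) — {r['url']}"
        for r in drops
    ]
    return {"text": "SEO rank drops detected today:\n" + "\n".join(lines)}

payload = build_slack_payload([
    {"keyword": "crm pricing", "url": "/pricing",
     "yesterday_position": 4, "today_position": 11, "delta": -7},
    {"keyword": "crm demo", "url": "/demo",
     "yesterday_position": 5, "today_position": 6, "delta": -1},
])
print(payload["text"])  # only the -7 drop makes it into the message
```

To send it, POST the payload as JSON (Content-Type: application/json) to your webhook URL.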
6. Save and enable

Save, then flip the workflow to enabled. It runs tomorrow at 8 AM UTC by default.
To test without waiting 24 hours, temporarily change the trigger to rate(5 minutes) and mock a ranking drop by editing the tracked keywords table. Remember to change it back after.

Customizing

Threshold

5 positions is a reasonable default. GSC data naturally fluctuates ±2 positions day-to-day — below 5 you’ll get noisy alerts. Raise to 10 if you only care about catastrophic drops.

Frequency

  • cron(0 8 * * ? *) — daily at 8 AM UTC
  • cron(0 */6 * * ? *) — every 6 hours
  • rate(1 hour) — hourly, but GSC data only updates every 2–3 days so this is usually overkill

Scope

To only alert on high-priority keywords, add a filter to the HTTP request:
?project_id={{project_id}}&days=1&tier=priority
Set the tier on keywords in Marketing > SEO > Keywords.

Group by Category

Add an AI Generation step before the Slack action to summarize drops grouped by topic:
Summarize these SEO rank drops into a digest.
Group by page topic. Highlight the most critical one first.
Data: {{rankings}}

Test It

1. Mock a drop

In Marketing > SEO > Keywords, edit one tracked keyword and manually set its yesterday_position to 5 and today_position to 15. This simulates a 10-position drop.
2. Trigger the workflow manually

On the workflow page, click Run Now. The run appears in the Runs tab.
3. Check Slack

A rank-drop alert should show up in your target channel.
4. Restore the keyword

Reset the keyword values you mocked; otherwise the next real run will still think they dropped.

Troubleshooting

No alert fired

Two common causes: (1) the threshold is too high — lower it and re-test; (2) GSC hasn’t synced today’s data yet. GSC updates on a 2–3 day lag and the API reflects whatever’s available. Check Marketing > SEO > Keywords — if the “last updated” timestamp is more than 2 days old, the issue is on the Search Console side.
Alerts fire on small, meaningless drops

That’s normal GSC fluctuation. Raise the threshold to 5+ positions, or compare a 7-day average instead of day-over-day: in the HTTP request, use ?days=7 and alert only if the 7-day trailing average dropped.
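The 7-day smoothing idea can be sketched like this. The per-day position list is an assumed input shape, not the documented diff format — adapt it to however you pull historical positions:

```python
# Alert on the change in 7-day trailing average rather than a single
# noisy day. positions: the last 14 daily positions, oldest first.
# A higher position number is a worse ranking, so a drop shows up
# as the average increasing.

def _avg(xs: list) -> float:
    return sum(xs) / len(xs)

def smoothed_drop(positions: list, threshold: float = 5) -> bool:
    prev_week, this_week = positions[:7], positions[7:]
    return _avg(this_week) - _avg(prev_week) >= threshold

print(smoothed_drop([4] * 7 + [11] * 7))  # True: average slid 4 -> 11
print(smoothed_drop([4] * 7 + [6] * 7))   # False: within normal wobble
```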
Too many alerts

Filter to a priority tier (see Scope above), or group alerts by category so you get one digest message instead of 20 individual ones. You can also throttle: only alert if the same keyword dropped 2 days in a row.
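The two-days-in-a-row throttle can be sketched as follows, assuming you persist the previous run’s drop list between runs (a workflow variable or small datastore — the persistence mechanism is up to you; sample data invented for illustration):

```python
# Only surface keywords that dropped past the threshold in BOTH the
# current run and the previous run, filtering out one-day blips.

def confirmed_drops(today: list, yesterday_dropped: set,
                    threshold: int = -5) -> list:
    return [r for r in today
            if r["delta"] <= threshold and r["keyword"] in yesterday_dropped]

today = [{"keyword": "crm pricing", "delta": -6},
         {"keyword": "crm demo", "delta": -8}]
print(confirmed_drops(today, {"crm pricing"}))
# only "crm pricing" survives: it dropped two runs in a row
```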
The workflow runs at an unexpected time (or never)

Check the schedule in UTC, not your local time zone — cron expressions always run on UTC in Hiveku. Also verify the workflow is enabled; scheduled workflows that aren’t enabled silently never fire.
The HTTP request fails with an auth error

The Hiveku API call needs an authenticated request. Make sure the HTTP action uses the Hiveku API Key connection (or the HIVEKU_API_KEY env var), not a raw unauthenticated call.

What’s Next?

Connect Search Console

Set up the GSC connection that powers rank tracking

SEO Audit

Find the pages behind dropping keywords and fix them

Weekly Digest

Combine rank changes into a weekly team summary