

Motion graphics in Hiveku work differently from every other AI video tool. Instead of generating a flat MP4 you can’t open back up, the AI agent designs the static frame as real layers, then animates each layer with keyframes. The headline, subhead, CTA, logo, and any image stay editable Fabric layers — drag the headline, retype any word, swap a color via the picker — and motion follows. You only render to MP4 at export time. Iterations are instant.

Make an animated design

  1. Go to /dashboard/marketing/design/.
  2. Click New Design.
  3. Pick Animated / Video.
  4. Choose dimensions: square (1080×1080), Story/Reel (1080×1920), YouTube/LinkedIn (1920×1080), or email banner (600×200).
  5. Optionally start from one of 18 animated templates, or start blank.
  6. The editor opens. Open the AI chat sidebar on the right and ask:
    “Make me a Spring Sale animated banner — headline ‘Spring Sale 50% off’, subhead ‘Free shipping over $50’, big CTA, cascade fade-up.”
  7. The agent designs the static layout (text, shapes, brand-pulled image), then animates each layer.
  8. Press Play in the canvas controls. Motion plays. Drag the headline. Press play again. Motion still works.
  9. When you’re happy, export to MP4.

Why “animation as a layer property” matters

Every other AI motion tool gives you a flat clip:
  • Runway / Pika / Sora — generates a 5-second video; change one word and you re-render (and re-pay).
  • HeyGen Hyperframes — renders HTML to MP4 server-side; you can’t edit the result.
  • Canva animations — locked to Canva’s player and editor.
Hiveku’s animations are stored on each Fabric layer as an animation property — alongside fill, fontFamily, left, top. The runtime (CanvasAnimator) reads those properties and plays them. That means:
  • Edits never re-render — drag, retype, restyle and motion keeps working
  • Per-layer control — the AI can tweak just the CTA’s timing without touching the headline
  • Designer-grade output — coherent motion strategies, not random presets per layer
  • One source of truth — the same animation schema is used by you, the AI, and the MP4 export compositor
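To make "animation as a layer property" concrete, here is a hypothetical sketch. The field and type names are illustrative assumptions, not Hiveku's actual schema; the point is that the animation lives on the layer next to its other properties:

```typescript
// Illustrative sketch only: a Fabric-style layer whose animation is stored
// alongside fill, fontFamily, left, top. Names are assumptions, not Hiveku's schema.
interface LayerAnimation {
  enter?: { preset: string; delay: number; duration: number }; // e.g. "fade-up"
  exit?: { preset: string; delay: number; duration: number };
  loop?: { preset: string };                                   // e.g. "pulse"
}

interface AnimatedLayer {
  id: string;
  type: "text" | "image" | "shape";
  left: number;
  top: number;
  fill?: string;
  fontFamily?: string;
  animation?: LayerAnimation; // read at playback time by the runtime
}

const headline: AnimatedLayer = {
  id: "headline",
  type: "text",
  left: 80,
  top: 120,
  fill: "#1a1a2e",
  fontFamily: "Inter",
  animation: { enter: { preset: "fade-up", delay: 0, duration: 700 } },
};
```

Because the animation is just another property, editing `left` or `fill` never invalidates it: the runtime re-reads the layer on the next play.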

What the AI knows

When you start chatting in an animated design, the agent sees:
  • Your brand kit — colors, fonts, logo URL — pulled from data/brand.json
  • The current layers on the canvas (canvasSnapshot — every layer’s id, name, type, position, current animation)
  • Canvas-level animation config (duration, loop, fps)
  • The intent you set when creating the design (designType: "motion_graphic")
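As a rough sketch of that context bundle (all names and values here are assumptions for illustration, not Hiveku's actual payload), the agent's view might look like:

```typescript
// Hypothetical shape of the context the agent receives. Field names and
// values are illustrative assumptions, not Hiveku's real payload.
const agentContext = {
  brand: {
    colors: ["#0b5fff", "#ffffff"],       // from data/brand.json
    fonts: ["Inter"],
    logoUrl: "https://example.com/logo.svg", // placeholder URL
  },
  canvasSnapshot: [
    {
      id: "headline",
      name: "Headline",
      type: "text",
      left: 80,
      top: 120,
      animation: { enter: { preset: "fade-up" } },
    },
  ],
  canvasAnimation: { duration: 6000, loop: true, fps: 30 },
  designType: "motion_graphic",
};
```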
The agent picks from four coherent motion strategies:

Cascade Fade-Up

Modern stagger — every text/CTA fades up in importance order, headline first. The default.

Kinetic

High-energy. Each word snaps in from a different direction. Use for ads or quote cards.

Hero Scale + Fade

Main visual scales in, copy fades around it. Use for product launches or brand statements.

Slideshow

Multi-slide enter+exit at fixed offsets. Use for Reels and Stories.
You can also click any of these as buttons in the Animation panel on the right side of the editor — same four strategies, applied with one click.

Editing existing motion

Once a design has animation on it, you have three ways to refine it.

Ask the agent. Natural-language tweaks work because the agent reads the current animation per layer:
  • “Slow down the entrance” — agent re-calls animate_canvas with longer enter durations
  • “Make the CTA red” — agent calls set_style on the CTA; motion stays the same
  • “Replace the cascade with a kinetic feel” — agent re-strategizes; preserves any loop motion (e.g. CTA pulse)
  • “Change the headline to Winter Sale” — agent calls set_text; doesn’t re-author the design
Use the per-layer Animation editor. Click any layer; the right Style panel shows an Animation section with enter / exit / loop dropdowns plus delay and duration sliders. Sliders debounce by 250ms so dragging doesn’t thrash playback. See Animation Editor for details.

Click a strategy. The Animation panel’s four strategy buttons re-apply a coherent set of timings across every layer. Existing loop motions are preserved — a CTA that pulses keeps pulsing.
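The 250ms slider debounce can be sketched generically. This is an illustrative helper under the assumption that a standard trailing-edge debounce is in play, not Hiveku's actual implementation:

```typescript
// Trailing-edge debounce sketch: only the last call within `ms` fires.
// Illustrative only; Hiveku's real slider wiring may differ.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage: dragging a duration slider fires many events, but playback only
// sees the final value after 250ms of quiet.
const applyDuration = debounce((ms: number) => {
  console.log(`set enter duration to ${ms}ms`);
}, 250);
```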

Animation vocabulary

Every preset is a member of one shared vocabulary. The list below is what you’ll see in the Style panel dropdowns and what the AI knows how to call.

Enter / Exit presets
  • Fade family — fade-in, fade-up, fade-down, fade-left, fade-right
  • Scale family — scale-in, pop
  • Slide family — slide-up, slide-down, slide-left, slide-right
  • Wipe family — wipe-up, wipe-down
Loop motions (subtle, run continuously while the layer is on screen)
  • pulse — soft scale 1 → 1.05 → 1 (good for CTAs)
  • wiggle — small rotation back-and-forth (good for accent badges)
  • breathe — slow opacity 0.85 → 1 → 0.85
  • rotate-slow — full rotation over 8s
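The vocabulary above can be written down as literal union types, which is a natural way to share one preset list between the editor UI, the AI tools, and the export compositor. The type names here are assumptions for illustration:

```typescript
// The shared preset vocabulary as TypeScript unions (type names are
// illustrative, not Hiveku's code). One list, used everywhere.
type EnterExitPreset =
  | "fade-in" | "fade-up" | "fade-down" | "fade-left" | "fade-right" // fade family
  | "scale-in" | "pop"                                               // scale family
  | "slide-up" | "slide-down" | "slide-left" | "slide-right"         // slide family
  | "wipe-up" | "wipe-down";                                         // wipe family

type LoopPreset = "pulse" | "wiggle" | "breathe" | "rotate-slow";

const ctaEnter: EnterExitPreset = "pop";
const ctaLoop: LoopPreset = "pulse"; // soft scale loop, good for CTAs
```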

Timing that feels designed, not random

A few rules-of-thumb the AI follows (and you can use too):
  • Stagger — 100-250ms between layers reads “cascading”; tighter feels rushed
  • Enter durations — 500-900ms for most things; pop/scale-in 700-900ms
  • Loop length — 4-8s total for short attention spans
  • Don’t go past 15s without an explicit reason
The Animation panel shows total canvas duration; the agent stays inside whatever you’ve set so the last exit doesn’t run past playback length.
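The stagger rule above can be sketched as a tiny helper: given layers in importance order, assign each one a cascading delay. This is an illustrative sketch, not Hiveku's code:

```typescript
// Assign cascading enter delays in importance order (headline first).
// Illustrative only; 100-250ms steps read as "cascading", tighter feels rushed.
function staggerDelays(layerIds: string[], stepMs = 150): Record<string, number> {
  const delays: Record<string, number> = {};
  layerIds.forEach((id, i) => {
    delays[id] = i * stepMs;
  });
  return delays;
}

// e.g. staggerDelays(["headline", "subhead", "cta"])
// gives headline 0ms, subhead 150ms, cta 300ms
```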

When motion isn’t the right answer

Some effects can’t be expressed as Fabric layer keyframes — particle systems, fluid simulations, shader-driven gradients, character-by-character typewriter on hand-drawn fonts. For those, the AI can fall back to the legacy generate_motion_graphic tool which renders a baked MP4. That MP4 lands as a flat video layer; you can resize and reposition it but you can’t edit content inside it. In practice, this is a rare case. For everyday animated banners, lower thirds, kinetic typography, slide shows, CTAs, and logo reveals, the layered path is what you want.
If you ever ask for a video and the result is a flat, uneditable clip, the AI defaulted to the wrong tool. Reply “redo this as a layered animation” and it will switch.