Motion graphics in Hiveku work differently from every other AI video tool. Instead of generating a flat MP4 you can’t open back up, the AI agent designs the static frame as real layers, then animates each layer with keyframes. The headline, subhead, CTA, logo, and any image stay editable Fabric layers — drag the headline, retype any word, swap a color via the picker — and motion follows. You only render to MP4 at export time. Iterations are instant.
Documentation Index
Fetch the complete documentation index at: https://docs.hiveku.com/llms.txt
Use this file to discover all available pages before exploring further.
Make an animated design
- Go to /dashboard/marketing/design/.
- Click New Design.
- Pick Animated / Video.
- Choose dimensions: square (1080×1080), Story/Reel (1080×1920), YouTube/LinkedIn (1920×1080), or email banner (600×200).
- Optionally start from one of 18 animated templates, or start blank.
- The editor opens. Open the AI chat sidebar on the right and ask: “Make me a Spring Sale animated banner — headline ‘Spring Sale 50% off’, subhead ‘Free shipping over $50’, big CTA, cascade fade-up.”
- The agent designs the static layout (text, shapes, brand-pulled image), then animates each layer.
- Press Play in the canvas controls. Motion plays. Drag the headline. Press Play again. Motion still works.
- When you’re happy, export to MP4.
Why “animation as a layer property” matters
Every other AI motion tool gives you a flat clip:
- Runway / Pika / Sora — generates a 5-second video; change one word and you re-render (and re-pay).
- HeyGen Hyperframes — renders HTML to MP4 server-side; you can’t edit the result.
- Canva animations — locked to Canva’s player and editor.
Hiveku instead stores motion as a per-layer animation property — alongside fill, fontFamily, left, top. The runtime (CanvasAnimator) reads those properties and plays them. That means:
- Edits never re-render — drag, retype, restyle and motion keeps working
- Per-layer control — the AI can tweak just the CTA’s timing without touching the headline
- Designer-grade output — coherent motion strategies, not random presets per layer
- One source of truth — the same animation schema is used by you, the AI, and the MP4 export compositor
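The "animation as a layer property" idea can be sketched as data. A minimal illustration in TypeScript, assuming a hypothetical schema (the interface and field names here, such as `enter`, `loop`, and `exit`, are invented for this sketch, not Hiveku's actual API):

```typescript
// Hypothetical sketch: motion stored next to style properties on a layer.
// Because the keyframes are just another property, editing text, color, or
// position never invalidates them.
interface LayerAnimation {
  enter?: { preset: string; delay: number; duration: number }; // times in ms
  loop?: { preset: string; duration: number };                 // ms per cycle
  exit?: { preset: string; delay: number; duration: number };
}

interface AnimatedLayer {
  id: string;
  type: "textbox" | "rect" | "image";
  text?: string;
  fill?: string;
  left: number;
  top: number;
  animation?: LayerAnimation; // motion as just another property
}

const headline: AnimatedLayer = {
  id: "headline",
  type: "textbox",
  text: "Spring Sale 50% off",
  fill: "#1a1a1a",
  left: 120,
  top: 80,
  animation: { enter: { preset: "fade-up", delay: 0, duration: 700 } },
};

// Retyping the copy is a plain property write; the animation is untouched.
headline.text = "Winter Sale 50% off";
```

This is the structural reason edits never trigger a re-render: nothing about the motion lives in rendered pixels.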
What the AI knows
When you start chatting in an animated design, the agent sees:
- Your brand kit — colors, fonts, logo URL — pulled from data/brand.json
- The current layers on the canvas (canvasSnapshot — every layer’s id, name, type, position, current animation)
- Canvas-level animation config (duration, loop, fps)
- The intent you set when creating the design (designType: "motion_graphic")
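Put together, the snapshot the agent reads could look roughly like this. This is a hypothetical shape for illustration; the real canvasSnapshot fields may differ:

```typescript
// Illustrative shape of what the agent sees per design (not the real schema).
interface SnapshotLayer {
  id: string;
  name: string;
  type: string;
  left: number;
  top: number;
  animation: {
    enter?: { preset: string; delay: number; duration: number };
    loop?: { preset: string; duration: number };
  };
}

const canvasSnapshot: {
  canvas: { duration: number; loop: boolean; fps: number };
  designType: string;
  layers: SnapshotLayer[];
} = {
  canvas: { duration: 6000, loop: true, fps: 30 }, // canvas-level config
  designType: "motion_graphic",                    // the creation intent
  layers: [
    {
      id: "headline", name: "Headline", type: "textbox", left: 120, top: 80,
      animation: { enter: { preset: "fade-up", delay: 0, duration: 700 } },
    },
    {
      id: "cta", name: "CTA button", type: "rect", left: 120, top: 640,
      animation: {
        enter: { preset: "pop", delay: 400, duration: 800 },
        loop: { preset: "pulse", duration: 1500 },
      },
    },
  ],
};
```

Because every layer's current animation is in the snapshot, the agent can modify one layer's timing without guessing at the rest.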
Motion strategies
Cascade Fade-Up
Modern stagger — every text/CTA fades up in importance order, headline first. The default.
Kinetic
High-energy. Each word snaps in from a different direction. Use for ads or quote cards.
Hero Scale + Fade
Main visual scales in, copy fades around it. Use for product launches or brand statements.
Slideshow
Multi-slide enter+exit at fixed offsets. Use for Reels and Stories.
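As an illustration of what a strategy produces, Cascade Fade-Up boils down to assigning staggered fade-up enters in importance order, headline first. A sketch with an invented helper (not the agent's actual planner):

```typescript
// Hypothetical sketch: a "cascade fade-up" plan assigns the same preset with
// increasing delays, in the order the ids are given (importance order).
function cascadeFadeUp(layerIds: string[], stagger = 150, duration = 700) {
  return layerIds.map((id, i) => ({
    id,
    enter: { preset: "fade-up", delay: i * stagger, duration },
  }));
}

const plan = cascadeFadeUp(["headline", "subhead", "cta"]);
// headline enters at 0 ms, subhead at 150 ms, cta at 300 ms
```

The other strategies differ in which presets they choose and how they order layers, but each produces the same kind of per-layer plan.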
Editing existing motion
Once a design has animation on it, you have three ways to refine it. The first is to ask the agent: natural-language tweaks work because it reads the current animation per layer.
- “Slow down the entrance” — agent re-calls animate_canvas with longer enter durations
- “Make the CTA red” — agent calls set_style on the CTA; motion stays the same
- “Replace the cascade with a kinetic feel” — agent re-strategizes; preserves any loop motion (e.g. CTA pulse)
- “Change the headline to Winter Sale” — agent calls set_text; doesn’t re-author the design
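A request like "slow down the entrance" only needs to rescale enter durations per layer, leaving presets, delays, and layout alone. A hypothetical sketch of that tweak (the helper name is invented; this is not the animate_canvas implementation):

```typescript
// Illustrative per-layer tweak: stretch every enter duration by a factor
// while keeping the preset and delay intact.
type Enter = { preset: string; delay: number; duration: number };

function slowDownEntrance(
  enters: Record<string, Enter>,
  factor = 1.5,
): Record<string, Enter> {
  const out: Record<string, Enter> = {};
  for (const [id, e] of Object.entries(enters)) {
    out[id] = { ...e, duration: Math.round(e.duration * factor) };
  }
  return out;
}

slowDownEntrance({
  headline: { preset: "fade-up", delay: 0, duration: 600 },
}); // headline keeps fade-up, duration becomes 900 ms
```

This is the per-layer control point: one property changes, everything else is untouched.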
Animation vocabulary
Every preset is a member of one shared vocabulary. The list below is what you’ll see in the Style panel dropdowns and what the AI knows how to call.
Enter / Exit presets
- Fade family — fade-in, fade-up, fade-down, fade-left, fade-right
- Scale family — scale-in, pop
- Slide family — slide-up, slide-down, slide-left, slide-right
- Wipe family — wipe-up, wipe-down
Loop presets
- pulse — soft scale 1 → 1.05 → 1 (good for CTAs)
- wiggle — small rotation back-and-forth (good for accent badges)
- breathe — slow opacity 0.85 → 1 → 0.85
- rotate-slow — full rotation over 8s
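To make a loop preset concrete, here is how a runtime could evaluate pulse at a given time: scale rises from 1 to 1.05 at mid-cycle and back. The function and the 1500 ms default are illustrative, not CanvasAnimator's real code:

```typescript
// Hypothetical evaluator for the "pulse" loop: a triangle wave mapped onto
// scale 1 -> 1.05 -> 1 over one cycle.
function pulseScale(tMs: number, cycleMs = 1500, peak = 1.05): number {
  const phase = (tMs % cycleMs) / cycleMs;               // 0..1 through the cycle
  const tri = phase < 0.5 ? phase * 2 : (1 - phase) * 2; // 0 -> 1 -> 0
  return 1 + (peak - 1) * tri;
}

pulseScale(0);   // 1 at cycle start
pulseScale(750); // 1.05 at mid-cycle
```

A real runtime would ease this curve rather than use a linear triangle, but the shape of the computation is the same: time in, property value out.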
Timing that feels designed, not random
A few rules of thumb the AI follows (and you can use too):
- Stagger — 100-250ms between layers reads “cascading”; tighter feels rushed
- Enter durations — 500-900ms for most things; pop/scale-in 700-900ms
- Loop length — 4-8s total for short attention spans
- Don’t go past 15s without an explicit reason
When motion isn’t the right answer
Some effects can’t be expressed as Fabric layer keyframes — particle systems, fluid simulations, shader-driven gradients, character-by-character typewriter on hand-drawn fonts. For those, the AI can fall back to the legacy generate_motion_graphic tool, which renders a baked MP4. That MP4 lands as a flat video layer; you can resize and reposition it, but you can’t edit content inside it.
In practice, this is a rare case. For everyday animated banners, lower thirds, kinetic typography, slide shows, CTAs, and logo reveals, the layered path is what you want.