When you’re moving onto Hiveku from another platform — or rolling out a content type that already has data somewhere — entering rows one at a time is painful. The CMS panel has a bulk import modal that takes a CSV or JSON file, lets you map columns to fields, validates everything up front, and writes one entry per row.
Bulk import is non-destructive — it only adds entries. To replace or update existing entries in bulk, ask the AI: “For my products collection, update all entries from this CSV — match by sku.”

Opening the Import Modal

1. Open the CMS panel. In /v3, switch the right pane to CMS mode and pick the collection you want to import into.
2. Click Import. The Import button is at the top right of the entry list, next to + New entry.
3. Choose your file. Drag-and-drop a .csv or .json file, or click to browse.

Supported File Formats

CSV: standard CSV with a header row, UTF-8 encoded. JSON: an array of objects whose keys match your field names (see the FAQ example below).
title,publishedAt,status,author,tags,body
Welcome,2026-01-01,published,jane-doe,"news,launch","Welcome to the blog..."
Getting Started,2026-01-15,published,jane-doe,tutorial,"Here's how to..."
  • Header names map to field names (case-sensitive)
  • Quote values that contain commas or newlines
  • Empty cells become null (or the field’s default if set)
  • For array fields with primitive items, use comma-separated values inside quotes
  • For reference fields, use the target entry’s slug
  • Boolean fields accept true/false/1/0/yes/no (case-insensitive)
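The coercion rules above can be sketched in a few lines. This is an illustrative Python model, not Hiveku's actual importer; the field types and sample data are assumptions:

```python
import csv
import io

# Hypothetical sketch of the cell coercion rules listed above;
# not Hiveku's real implementation.
TRUTHY = {"true", "1", "yes"}
FALSY = {"false", "0", "no"}

def coerce(value, field_type):
    if value == "":
        return None                      # empty cells become null
    if field_type == "boolean":
        v = value.strip().lower()
        if v in TRUTHY:
            return True
        if v in FALSY:
            return False
        raise ValueError(f"not a boolean: {value!r}")
    if field_type == "array":
        # comma-separated primitives inside a quoted cell
        return [item.strip() for item in value.split(",")]
    return value

sample = 'title,tags,featured\nWelcome,"news,launch",yes\n'
rows = list(csv.DictReader(io.StringIO(sample)))
entry = {
    "title": coerce(rows[0]["title"], "string"),
    "tags": coerce(rows[0]["tags"], "array"),
    "featured": coerce(rows[0]["featured"], "boolean"),
}
# entry == {"title": "Welcome", "tags": ["news", "launch"], "featured": True}
```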

Mapping Columns to Fields

After upload, the modal shows a mapping table. Each row in the table is one of the fields in your collection’s manifest, paired with the source column or key the import will pull from. By default the modal auto-maps when names match exactly. Where they don’t, click the dropdown next to a field to pick a different source column, or Skip to leave the field empty (use its default). The mapping table also shows:
  • A sample value from the first row of your file
  • A field type badge so you can see if the source column looks compatible
  • A required marker on fields that can’t be skipped

Slug Source

If your collection’s slugFrom is filename, you’ll be asked which column to use as the slug. The importer slugifies the value (lowercases, hyphenates, strips special chars) and uses it as the filename. If slugFrom is field:<name>, the slug derives automatically from that field. If two rows would produce the same slug, the importer appends -2, -3, etc. to disambiguate.
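The slugification and -2/-3 disambiguation described above might look roughly like this; a hedged sketch, since the exact normalization rules Hiveku applies may differ:

```python
import re

def slugify(value):
    # lowercase, replace runs of non-alphanumerics with hyphens,
    # strip leading/trailing hyphens
    return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")

def unique_slug(value, taken):
    """Append -2, -3, ... when a slug is already taken."""
    base = slugify(value)
    slug, n = base, 2
    while slug in taken:
        slug = f"{base}-{n}"
        n += 1
    taken.add(slug)
    return slug

taken = set()
unique_slug("Getting Started", taken)   # "getting-started"
unique_slug("Getting Started!", taken)  # "getting-started-2"
```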

Validation Pass

Before any files are written, the importer runs the same validation as a normal save against every row:
  • Required fields present
  • Type checks (numbers parseable, dates in ISO format, URLs valid)
  • Reference targets exist in the linked collection
  • Constraint checks (maxLength, pattern, min/max)
  • Slug uniqueness — both within the import batch and against existing entries
The validation report shows:
  • A green count for rows that will import cleanly
  • A yellow count for rows with warnings (will import, but with empty/default values for some fields)
  • A red count for rows that will be rejected
You can:
  • Import all valid — skip rejected rows, write the rest
  • Cancel — fix your file and re-upload
  • Edit row — fix a single problematic row inline and re-validate
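As a rough model of the three-bucket report, here is an illustrative sketch; the manifest shape and rules are assumptions, not Hiveku's real schema:

```python
from datetime import date

# Assumed manifest shape for illustration only.
manifest = {
    "title": {"required": True},
    "publishedAt": {"required": False},
}

def classify(row):
    for name, spec in manifest.items():
        value = row.get(name)
        if spec["required"] and not value:
            return "red"              # rejected: required field missing
        if name == "publishedAt" and value:
            try:
                date.fromisoformat(value)
            except ValueError:
                return "red"          # rejected: date not ISO-8601
    if any(not row.get(name) for name in manifest):
        return "yellow"               # imports, but with empty/default values
    return "green"

rows = [
    {"title": "Welcome", "publishedAt": "2026-01-01"},  # green
    {"title": "Draft", "publishedAt": ""},              # yellow
    {"title": "", "publishedAt": "2026-01-15"},         # red
]
report = {c: sum(classify(r) == c for r in rows) for c in ("green", "yellow", "red")}
# report == {"green": 1, "yellow": 1, "red": 1}
```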

Performance

The importer batches writes so a 500-row file imports in roughly 10–20 seconds, depending on file size and reference fields. The modal shows a live progress bar. For very large imports (multi-thousand rows), the AI path is faster — see below.

Importing via the AI

For complex imports (large files, transformations, deduplication, reference matching by name instead of slug), describe what you want in the AI chat:
I have 1200 blog posts in this CSV. The "author_name" column should
match against the authors collection by display name (not slug).
Posts with status="hidden" in the CSV should map to "draft" in the CMS.
Skip any rows where the body is shorter than 100 characters.
The AI reads the file, plans the mapping, surfaces edge cases for you to confirm, and writes the entries via the CMS tools.

Examples

Importing 500 Blog Posts From a WordPress Export

1. Export from WordPress. Use a CSV export plugin or the WordPress XML-to-CSV converter. You should end up with columns like post_title, post_date, post_content, post_status, post_author, post_tags.
2. Pre-clean the file (optional). If your blog collection uses Markdown, run the HTML-to-Markdown conversion on the post_content column first. The AI can do this — “Convert the post_content column from HTML to Markdown, save the result as a new file.”
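If you'd rather script the pre-clean yourself, here is a minimal stdlib sketch that handles only a handful of common tags; a real migration should use a dedicated converter:

```python
from html.parser import HTMLParser

class TinyMarkdown(HTMLParser):
    """Toy HTML-to-Markdown converter: handles only <p>, <strong>/<b>,
    <em>/<i>, and <a>. Illustrative, not production-grade."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.href = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("strong", "b"):
            self.out.append("**")
        elif tag in ("em", "i"):
            self.out.append("*")
        elif tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag in ("strong", "b"):
            self.out.append("**")
        elif tag in ("em", "i"):
            self.out.append("*")
        elif tag == "a":
            self.out.append(f"]({self.href})")
        elif tag == "p":
            self.out.append("\n\n")

    def handle_data(self, data):
        self.out.append(data)

def to_markdown(html):
    p = TinyMarkdown()
    p.feed(html)
    return "".join(p.out).strip()

to_markdown('<p>See the <strong>docs</strong> at <a href="/docs">here</a>.</p>')
# 'See the **docs** at [here](/docs).'
```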
3. Open the CMS panel, pick your blog collection, click Import, and drop the CSV.
4. Map the columns:
  Field        Source column
  title        post_title
  publishedAt  post_date
  status       post_status (skip if values don’t match draft/published)
  body         post_content
  tags         post_tags
  author       (skip — fill in later, or pre-process to match author slugs)
5. Review the validation report. Look for slug collisions and required-field misses. Fix or skip rejected rows.
6. Click Import. Wait for the progress bar; the entries appear in the entry list as they save.
7. Spot-check a few entries. Open three or four to make sure the body rendered correctly and the metadata mapped right. Use Edit row in the importer if you spot a systematic problem before all rows write.

Importing a Product Catalog From a Spreadsheet

sku,name,price,currency,inStock,category,shortDescription
WIDGET-001,Standard Widget,19.99,USD,true,widgets,"Our flagship widget"
WIDGET-002,Premium Widget,29.99,USD,true,widgets,"Bigger and shinier"
GADGET-001,Pocket Gadget,14.50,USD,false,gadgets,"Currently sold out"
Map each column to the matching field in the products collection. The category column should map to a reference field — the importer surfaces a warning if any category slug doesn’t match an entry in the categories collection.
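That reference pre-check can be approximated like this; the collection contents are invented for illustration:

```python
import csv
import io

# Slugs assumed to exist in the categories collection (made up here).
existing_categories = {"widgets", "accessories"}

sample = (
    "sku,name,category\n"
    "WIDGET-001,Standard Widget,widgets\n"
    "GADGET-001,Pocket Gadget,gadgets\n"
)
# Row numbers start at 2 because row 1 is the header.
warnings = [
    f"row {i}: category {row['category']!r} has no matching entry"
    for i, row in enumerate(csv.DictReader(io.StringIO(sample)), start=2)
    if row["category"] not in existing_categories
]
# warnings == ["row 3: category 'gadgets' has no matching entry"]
```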

Importing FAQs From a JSON Knowledge Base

[
  { "question": "How do I reset my password?", "answer": "Click the **Forgot password** link...", "category": "billing" },
  { "question": "Can I export my data?", "answer": "Yes — go to Settings...", "category": "general" }
]
JSON imports are usually mapping-free since the keys already match field names. Upload the file, confirm the mapping, and click Import.
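A quick way to confirm the keys line up before uploading; the field list here is an assumption for illustration:

```python
import json

# Field names assumed to match the faqs collection's manifest.
fields = {"question", "answer", "category"}

raw = '[{"question": "Q?", "answer": "A.", "category": "general"}]'
entries = json.loads(raw)

# Any key not in the manifest would need manual mapping (or skipping).
unmapped = [k for entry in entries for k in entry if k not in fields]
# unmapped == []  → every key maps straight onto a field
```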

Rolling Back an Import

Each imported entry is a normal file with normal version history. If an import goes wrong:
  • Undo a single entry — open it in the CMS panel and use the version history drawer to restore the empty pre-import state.
  • Undo a whole batch — select all newly-imported entries with the bulk-action checkboxes and click Delete. The version history still preserves the deleted files; you can recreate them if needed.
  • Use a snapshot — if you snapshotted the project before importing, restore the project to that snapshot to revert everything at once.
Snapshot your project from the dashboard before any large import. It’s the cleanest rollback path if the mapping turns out to be wrong.

Troubleshooting

My file won’t upload
The default upload limit is 10 MB. Split your file into multiple parts, or use the AI path, which can stream-process larger files.

My dates fail validation
The importer expects ISO-8601 (YYYY-MM-DD or full timestamps). Convert your dates first, or ask the AI: “Reformat the dates in this CSV to ISO-8601.”
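A pre-pass that rewrites, say, US-style dates to ISO-8601 could look like this; the column name and input format are assumptions about your file:

```python
import csv
import io
from datetime import datetime

# Assumed: a publishedAt column in MM/DD/YYYY format.
sample = "title,publishedAt\nWelcome,01/15/2026\n"
reader = csv.DictReader(io.StringIO(sample))

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
writer.writeheader()
for row in reader:
    parsed = datetime.strptime(row["publishedAt"], "%m/%d/%Y")
    row["publishedAt"] = parsed.date().isoformat()   # 01/15/2026 → 2026-01-15
    writer.writerow(row)
# out.getvalue() now holds the ISO-8601 version of the CSV
```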

My reference fields are rejected
Reference values must be the target entry’s slug. If your source data uses display names or IDs, map them to slugs first — the AI can do this in one step: “For the author column, find the matching slug in the authors collection by display name.”

Two rows produce the same slug
The importer auto-disambiguates with -2, -3 suffixes, but if you want specific slugs, pre-fill a slug column in your file and map it in the importer.

My imported bodies render as escaped text
If you imported HTML into a Markdown body, the field stores the raw HTML — which renders as escaped text. Ask the AI to convert HTML bodies to Markdown across all imported entries, or pre-process the source file.

What’s Next?

  • Editing Content: polish individual entries after import
  • Migrate a Site: move hardcoded content into the CMS
  • AI Integration: use the AI for transformations and bulk edits
  • Field Types: make sure your fields match your data shape