Bulk import is non-destructive — it only adds entries. To replace or update existing entries in bulk, ask the AI: “For my products collection, update all entries from this CSV — match by sku.”
Opening the Import Modal
Open the CMS panel
In /v3, switch the right pane to CMS mode and pick the collection you want to import into.

Supported File Formats
- CSV
- JSON
Standard CSV with a header row, UTF-8 encoded.

- Header names map to field names (case-sensitive)
- Quote values that contain commas or newlines
- Empty cells become `null` (or the field’s `default` if set)
- For `array` fields with primitive items, use comma-separated values inside quotes
- For `reference` fields, use the target entry’s slug
- `boolean` fields accept `true`/`false`/`1`/`0`/`yes`/`no` (case-insensitive)
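To make these conventions concrete, here is a small sketch using Python’s `csv` module. The column names (`title`, `tags`, `featured`) are hypothetical, not part of any real collection:

```python
import csv
import io

# Hypothetical CSV for a collection with a text field, an array field, and a boolean field.
raw = (
    "title,tags,featured\n"
    '"Hello, world","news,launch",yes\n'
)

row = next(csv.DictReader(io.StringIO(raw)))

title = row["title"]                       # quoted value keeps its comma
tags = row["tags"].split(",")              # array field: comma-separated inside quotes
featured = row["featured"].lower() in ("true", "1", "yes")  # boolean coercion

print(title)     # Hello, world
print(tags)      # ['news', 'launch']
print(featured)  # True
```

Note that both `title` and `tags` are quoted in the source row: quoting is what lets a comma survive inside a single cell.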
Mapping Columns to Fields
After upload, the modal shows a mapping table. Each row in the table is one of the fields in your collection’s manifest, paired with the source column or key the import will pull from. By default the modal auto-maps columns whose names match a field exactly. Where they don’t, click the dropdown next to a field to pick a different source column, or choose Skip to leave the field empty (it falls back to its default). The mapping table also shows:

- A sample value from the first row of your file
- A field type badge so you can see if the source column looks compatible
- A required marker on fields that can’t be skipped
Slug Source
If your collection’s `slugFrom` is `filename`, you’ll be asked which column to use as the slug. The importer slugifies the value (lowercases, hyphenates, strips special characters) and uses it as the filename. If `slugFrom` is `field:<name>`, the slug derives automatically from that field.

If two rows would produce the same slug, the importer appends `-2`, `-3`, etc. to disambiguate.
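The slugify-and-disambiguate behavior described above can be pictured with a minimal sketch (assumed behavior inferred from this page, not the importer’s actual code):

```python
import re

def slugify(value: str) -> str:
    """Lowercase, strip special characters, hyphenate whitespace runs."""
    value = value.lower()
    value = re.sub(r"[^a-z0-9\s-]", "", value)        # strip special chars
    return re.sub(r"[\s-]+", "-", value).strip("-")   # hyphenate

def disambiguate(taken: set[str], slug: str) -> str:
    """Append -2, -3, ... when a slug is already taken."""
    if slug not in taken:
        return slug
    n = 2
    while f"{slug}-{n}" in taken:
        n += 1
    return f"{slug}-{n}"

print(slugify("Hello, World!"))                       # hello-world
print(disambiguate({"hello-world"}, "hello-world"))   # hello-world-2
```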
Validation Pass
Before any files are written, the importer runs the same validation as a normal save against every row:

- Required fields present
- Type checks (numbers parseable, dates in ISO format, URLs valid)
- Reference targets exist in the linked collection
- Constraint checks (`maxLength`, `pattern`, `min`/`max`)
- Slug uniqueness, both within the import batch and against existing entries

The validation report then shows:
- A green count for rows that will import cleanly
- A yellow count for rows with warnings (will import, but with empty/default values for some fields)
- A red count for rows that will be rejected
You can then choose:

- Import all valid — skip rejected rows, write the rest
- Cancel — fix your file and re-upload
- Edit row — fix a single problematic row inline and re-validate
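The green/yellow/red triage can be sketched as follows. This is a simplified illustration, not the importer’s real logic, and the two manifest fields are hypothetical:

```python
from datetime import date

def _is_iso_date(value: str) -> bool:
    try:
        date.fromisoformat(value)
        return True
    except ValueError:
        return False

# Hypothetical manifest: field name -> (required, format check)
MANIFEST = {
    "title": (True, lambda v: bool(v)),
    "publishedAt": (False, _is_iso_date),
}

def triage(row: dict) -> str:
    """Return 'green', 'yellow' (imports with defaults), or 'red' (rejected)."""
    status = "green"
    for field, (required, check) in MANIFEST.items():
        value = row.get(field)
        if value in (None, ""):
            if required:
                return "red"      # missing required field rejects the row
            status = "yellow"     # optional gap imports as empty/default
        elif not check(value):
            return "red"          # type/format failure rejects the row
    return status
```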
Performance
The importer batches writes, so a 500-row file imports in roughly 10–20 seconds, depending on file size and reference fields. The modal shows a live progress bar. For very large imports (multi-thousand rows), the AI path is faster; see below.

Importing via the AI
For complex imports (large files, transformations, deduplication, reference matching by name instead of slug), describe what you want in the AI chat.

Examples
Importing 500 Blog Posts From a WordPress Export
Export from WordPress
Use a CSV export plugin or a WordPress XML-to-CSV converter. You should end up with columns like `post_title`, `post_date`, `post_content`, `post_status`, `post_author`, `post_tags`.

Pre-clean the file (optional)
If your blog collection uses Markdown, convert the `post_content` column from HTML to Markdown first. The AI can do this: “Convert the post_content column from HTML to Markdown and save the result as a new file.”

Map the columns
| Field | Source column |
|---|---|
| `title` | `post_title` |
| `publishedAt` | `post_date` |
| `status` | `post_status` (skip if values don’t match `draft`/`published`) |
| `body` | `post_content` |
| `tags` | `post_tags` |
| `author` | (skip — fill in later, or pre-process to match author slugs) |
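Done by hand, the mapping above is just a column rename plus dropping the skipped column. A sketch (the importer does this for you; this is only to show the shape of the transformation):

```python
import csv
import io

# WordPress column -> collection field, per the table above; post_author is skipped.
MAPPING = {
    "post_title": "title",
    "post_date": "publishedAt",
    "post_status": "status",
    "post_content": "body",
    "post_tags": "tags",
}

def remap(csv_text: str) -> list[dict]:
    """Rename source columns to field names, dropping unmapped columns."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({new: row[old] for old, new in MAPPING.items() if old in row})
    return rows
```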
Review the validation report
Look for slug collisions and required-field misses. Fix or skip rejected rows.
Importing a Product Catalog From a Spreadsheet
The `category` column should map to a reference field; the importer surfaces a warning if any category slug doesn’t match an entry in the categories collection.
Importing FAQs From a JSON Knowledge Base
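Knowledge bases export in many shapes. Assuming yours produces an array of objects, a hedged sketch of reshaping it to match a hypothetical faqs collection with `question`/`answer`/`category` fields (all names here are assumptions; match your own manifest):

```python
import json

# Hypothetical knowledge-base export.
source = json.loads("""
[
  {"q": "How do I import?", "a": "Open the CMS panel.", "topic": "imports"},
  {"q": "Can I roll back?", "a": "Yes, via version history.", "topic": "imports"}
]
""")

# Reshape each item to the collection's field names.
entries = [
    {"question": item["q"], "answer": item["a"], "category": item["topic"]}
    for item in source
]

payload = json.dumps(entries, indent=2)  # save this file and upload it in the import modal
```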
Rolling Back an Import
Each imported entry is a normal file with normal version history. If an import goes wrong:

- Undo a single entry — open it in the CMS panel and use the version history drawer to restore the empty pre-import state.
- Undo a whole batch — select all newly-imported entries with the bulk-action checkboxes and click Delete. The version history still preserves the deleted files; you can recreate them if needed.
- Use a snapshot — if you snapshotted the project before importing, restore the project to that snapshot to revert everything at once.
Troubleshooting
Import says my file is too large
The default upload limit is 10 MB. Split your file into multiple parts, or use the AI path which can stream-process larger files.
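One way to split a file yourself, keeping the header on every part (a sketch; it assumes no quoted newlines inside values, and the 10 MB figure comes from the limit above):

```python
def split_csv(csv_text: str, max_bytes: int) -> list[str]:
    """Split CSV text into parts under max_bytes, repeating the header row.
    Naive line splitting: assumes no values contain quoted newlines."""
    lines = csv_text.splitlines(keepends=True)
    header, body = lines[0], lines[1:]
    parts = []
    current = header
    for line in body:
        # Start a new part when adding this row would exceed the limit.
        if len(current) + len(line) > max_bytes and current != header:
            parts.append(current)
            current = header
        current += line
    if current != header:
        parts.append(current)
    return parts
```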
All my dates are showing as invalid
The importer expects ISO-8601 (`YYYY-MM-DD` or full timestamps). Convert your dates first, or ask the AI: “Reformat the dates in this CSV to ISO-8601.”
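If you prefer to convert locally, a sketch using the standard library (the US-style source format here is an assumption; swap in whatever format your data uses):

```python
from datetime import datetime

def to_iso(value: str, source_format: str = "%m/%d/%Y") -> str:
    """Parse a date in a known source format and emit ISO-8601 (YYYY-MM-DD)."""
    return datetime.strptime(value, source_format).date().isoformat()

print(to_iso("01/31/2024"))  # 2024-01-31
```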
Reference fields all flagged as missing
Reference values must be the target entry’s slug. If your source data uses display names or IDs, map them to slugs first — the AI can do this in one step: “For the author column, find the matching slug in the authors collection by display name.”
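Pre-processing names to slugs is a simple lookup built from the target collection. A sketch (the authors data is hypothetical; in practice the slugs come from your authors collection):

```python
# Hypothetical authors collection: slug -> display name.
AUTHORS = {
    "jane-doe": "Jane Doe",
    "sam-smith": "Sam Smith",
}

# Invert for case-insensitive lookup by display name.
NAME_TO_SLUG = {name.lower(): slug for slug, name in AUTHORS.items()}

def author_slug(display_name: str):
    """Return the matching slug, or None to flag the row for manual review."""
    return NAME_TO_SLUG.get(display_name.strip().lower())
```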
Slug collisions across the whole batch
The importer auto-disambiguates with `-2`, `-3` suffixes, but if you want specific slugs, pre-fill a slug column in your file and map it in the importer.
Body content lost formatting
If you imported HTML into a Markdown body, the field stores the raw HTML — which renders as escaped text. Ask the AI to convert HTML bodies to Markdown across all imported entries, or pre-process the source file.
What’s Next?
- Editing Content — polish individual entries after import
- Migrate a Site — move hardcoded content into the CMS
- AI Integration — use the AI for transformations and bulk edits
- Field Types — make sure your fields match your data shape