# Migrating to dAvePi
You have data already. These guides cover the move into dAvePi from the platforms most teams are leaving — schema mapping, ETL script template, auth migration, and feature-by-feature notes on how the source’s primitives translate to dAvePi’s.
| From | Schema model | Auth | What gets rewritten |
|---|---|---|---|
| Supabase | Postgres tables + RLS | Email/password, OAuth, magic link | Tables → schema files, RLS → ACL, buckets → file fields |
| Hasura | Tracked Postgres tables + permission rules | JWT (claims-driven) | Tracked entities → schema files, permissions → ACL, event triggers → webhooks |
| PocketBase | Collections (admin-UI defined) | Email/password, OAuth | Collections → schema files, API rules → ACL, WebSockets → webhooks |
| Strapi | Content types (admin-UI defined) | Users & Permissions plugin | Content types → schema files, draft/publish → state machine, plugins → custom routes |
| Directus | Introspected SQL tables | OAuth, SAML, SSO | Collections → schema files, permissions → ACL, Flows → state-machine onEnter + webhooks |
## What’s the same in every migration

Whichever source you’re leaving, the shape of the move is:
- Stand up dAvePi against an empty database. `npx create-davepi-app acme-api` gets you a runnable server.
- Write the schema files. One JS file per source collection / table under `schema/versions/v1/`. The per-source guide has the field-type mapping table.
- Re-create users. Passwords don’t migrate cleanly from anywhere (different hash algorithms / cost factors); plan a one-time password-reset email for every user.
- Backfill data with the ETL template. Each guide ships a Node script — read the source dump, transform per row, `bulkWrite` into the target collection.
- Cut over reads. Run dAvePi alongside the old system; gradually move traffic. Read cutover happens per table, not as a big-bang flip.
- Cut over writes. Once reads are stable and the data delta is small, point writes at dAvePi. Tear down the source.
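The backfill step above reduces to a per-row transform feeding `bulkWrite`. A minimal sketch, assuming the source dump is a JSON array of rows and the target accepts Mongo-style `bulkWrite` ops; the field names (`owner_id`, `slug`, `createdAt`) are illustrative, not part of dAvePi:

```javascript
// ETL sketch: transform source rows, batch them into bulkWrite payloads.
// Assumptions (not from the dAvePi docs): the dump is a JSON array of rows,
// and the target collection accepts Mongo-style { insertOne: { document } } ops.

// Map one source row (snake_case columns) onto the field names you declared
// in your v1 schema file. Adjust per table using the guide's mapping table.
function transformRow(row) {
  return {
    userId: row.owner_id, // tenant column; normally stamped server-side
    slug: row.slug,
    title: row.title,
    createdAt: new Date(row.created_at),
  };
}

// Slice the rows into fixed-size batches so each bulkWrite stays bounded.
function toBulkOps(rows, batchSize = 500) {
  const batches = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    batches.push(
      rows
        .slice(i, i + batchSize)
        .map((row) => ({ insertOne: { document: transformRow(row) } }))
    );
  }
  return batches;
}
```

Reading the dump (`fs.readFileSync` + `JSON.parse`) and issuing the actual `bulkWrite` calls are left to the per-source guide's full script; keeping the transform pure makes it easy to unit-test before touching the database.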
## Things that don’t migrate, ever

These are platform-specific and have no direct equivalent — you’ll need an alternative strategy:
- Password hashes. dAvePi uses bcrypt (rounds=10) for new users. Any source using a different algorithm (Supabase’s scrypt, PocketBase’s variants, Argon2 from custom auth) won’t move. Force-reset everyone on cutover. The `/auth/forgot-password` + `/auth/reset-password` endpoints are built in.
- Realtime WebSocket subscriptions. Supabase / PocketBase push row changes over WebSockets. dAvePi pushes change events via outbound HMAC-signed webhooks instead. Frontend code that uses `supabase.channel(...)` or `pb.collection(...).subscribe(...)` needs reworking — either move to polling, or have your frontend listen to a WebSocket relay that’s fed by the webhooks.
- i18n. No direct map. If you’re using Strapi’s i18n plugin or Directus’s translations, you’ll model translations explicitly (a `translations: { [locale]: ... }` sub-document, or per-locale resources).
- SSO / SAML / OIDC. dAvePi ships JWT + refresh + password reset. SSO is a build-your-own path.
## Things to plan for

- Field uniqueness across tenants. dAvePi’s tenant column is `userId`, stamped server-side from the JWT. Don’t use `unique: true` for tenant-scoped uniqueness — it creates a global index that crosses tenants. Use `compositeIndex: [{ userId: 1, slug: 1 }, { unique: true }]` at the schema level. The per-source guides flag this where it matters.
- `accountId` for orgs. If your source models orgs as a column alongside the owner, declare `accountId` on the schema — dAvePi stamps that server-side too. Don’t name custom FKs `accountId`; pick `orgId` / `parentAccountId` / similar.
- Soft delete vs. hard delete. dAvePi soft-deletes by default (a `deletedAt` flag + `restore_*` MCP tool). If the source hard-deleted rows, the migrated rows will land soft-deletable; the behaviour change is usually welcome, but flag it for the team.
- Audit log retention. The framework writes an audit row on every mutation and does not auto-purge them. Plan a manual `db.audit.deleteMany({ at: { $lt: ... } })` cron if you want bounded growth. The per-source guides reference this where the source did auto-purge.
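The tenant-scoped uniqueness point is easy to get wrong in a schema file, so here is how it can look in context. The `compositeIndex` shape is taken from the note above; the rest of the file layout (`fields`, type names) is illustrative — see the Schema file shape doc for the real contract:

```javascript
// schema/versions/v1/posts.js — tenant-safe uniqueness sketch.
// Assumption: the surrounding file layout (fields, type names) is
// illustrative; only the compositeIndex shape comes from the notes above.
const postsSchema = {
  fields: {
    slug: { type: 'string', required: true }, // unique per tenant, NOT globally
    title: { type: 'string' },
    // No userId field here: dAvePi stamps the tenant column server-side
    // from the JWT, so migrated rows get it during the backfill instead.
  },
  // Per-tenant uniqueness: a compound unique index over (userId, slug).
  // `unique: true` on slug alone would build a global index crossing tenants.
  compositeIndex: [{ userId: 1, slug: 1 }, { unique: true }],
};

module.exports = postsSchema;
```

Two tenants can then each own a post with slug `hello`, while a single tenant inserting `hello` twice hits the unique index.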
## What you get on the other side

Once the schema files are in place, you have — automatically:
- REST endpoints (`GET` / `POST` / `PUT` / `DELETE` per resource).
- A GraphQL surface (queries, mutations, type definitions, `__include`-style relation expansion).
- MCP tools per resource (`list_<path>`, `create_<path>`, `update_<path>`, …) for agent integration.
- A typed TypeScript client (`davepi gen-client`).
- A Refine-based admin SPA.
- A `_describe` manifest enumerating every resource, field, and capability.
No application-layer wiring per surface. The schema is the source.
## Per-source guides

- From Supabase — Postgres + RLS + Auth + Storage. The most detailed guide, used as the reference end-to-end walkthrough.
- From Hasura — tracked Postgres + permission rules + event triggers + Actions.
- From PocketBase — single-binary collections + API rules + WebSocket subscriptions.
- From Strapi — content types + draft/publish + plugins.
- From Directus — introspected SQL + role policies + Flows.
## See also

- Comparisons — when each source is the right choice and when dAvePi is.
- Schema file shape — the target shape your migrated data lands in.
- Quickstart — get dAvePi running locally before you start the ETL.