Backups

Database

PostgreSQL is the primary data store. Regular backups are critical.

pg_dump (Logical Backup)

pg_dump -h localhost -U vectis -d vectis -F c -f vectis_backup_$(date +%Y%m%d).dump

Restore (the -c flag drops existing objects before recreating them):

pg_restore -h localhost -U vectis -d vectis -c vectis_backup_20260401.dump

Tip

Schedule pg_dump via cron or a Temporal scheduled workflow for automated daily backups. Compress and upload to S3 for offsite storage.
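One way to wire this up is a cron entry; this is a sketch, and the /backups path and vectis-backups bucket are assumptions, not part of the project. Note that % is special in crontab and must be escaped as \%.

```
# /etc/cron.d/vectis-backup — example schedule (paths and bucket are assumptions)
# 02:00 daily: logical dump; 02:30: compress and ship offsite.
0 2 * * * postgres pg_dump -h localhost -U vectis -d vectis -F c -f /backups/vectis_backup_$(date +\%Y\%m\%d).dump
30 2 * * * postgres sh -c 'gzip -f /backups/vectis_backup_$(date +\%Y\%m\%d).dump && aws s3 cp /backups/vectis_backup_$(date +\%Y\%m\%d).dump.gz s3://vectis-backups/'
```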

Continuous Archiving (WAL)

For point-in-time recovery, configure PostgreSQL WAL archiving:

  1. Set wal_level = replica and archive_mode = on in postgresql.conf.
  2. Configure archive_command to copy WAL files to S3 or a backup volume.
  3. Use pg_basebackup for the initial base backup.
  4. Restore to any point in time using recovery_target_time.

This is the recommended approach for production deployments where data loss must be minimized.
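The steps above translate into a few postgresql.conf settings; this is a sketch, and the vectis-wal-archive bucket name is an example:

```
# postgresql.conf — WAL archiving (sketch; bucket name is an assumption)
wal_level = replica
archive_mode = on
# %p = path of the WAL file to archive, %f = its file name
archive_command = 'aws s3 cp %p s3://vectis-wal-archive/%f'

# For point-in-time recovery (PostgreSQL 12+: set in postgresql.conf and
# create a recovery.signal file in the data directory):
# restore_command = 'aws s3 cp s3://vectis-wal-archive/%f %p'
# recovery_target_time = '2026-04-01 12:00:00'
```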

Media Files (S3/MinIO)

Uploaded media (product images, documents) is stored in S3-compatible object storage.

Replication

For production, use S3 cross-region replication or MinIO server-side replication to maintain copies in a secondary region.

Manual Backup

aws s3 sync s3://vectis-media ./media-backup/ --endpoint-url $S3_ENDPOINT

Or with MinIO client:

mc mirror myminio/vectis-media ./media-backup/

Redis

Redis stores sessions and cache. While losing Redis is not catastrophic (sessions expire, cache rebuilds), you may want to persist session data:

  • RDB snapshots — periodic point-in-time snapshots (default Redis behavior).
  • AOF — append-only file for crash recovery with minimal data loss.
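The two options above correspond to a few redis.conf directives; a minimal sketch:

```
# redis.conf — persistence options (sketch)
save 900 1            # RDB: snapshot if at least 1 key changed in 15 minutes
appendonly yes        # AOF: log every write for crash recovery
appendfsync everysec  # fsync once per second; at most ~1s of writes lost
```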

Note

If Redis is lost, users will need to re-authenticate (session data gone) and the price cache will rebuild on the next request. No permanent data is lost.

Temporal

Temporal stores workflow execution history in its own database (typically a separate PostgreSQL instance). Back up the Temporal database using the same pg_dump strategy.

Temporal data is required to resume in-progress workflows after a restore. Without it, running workflows are lost and must be manually re-triggered.

Meilisearch

Search indexes can be rebuilt from the primary database at any time:

python -m vectis.modules.search.indexer --reindex

Backing up Meilisearch data is optional since it can always be regenerated. However, for faster recovery, Meilisearch supports snapshot exports:

curl -X POST 'http://localhost:7700/snapshots' -H "Authorization: Bearer $MEILISEARCH_API_KEY"

Backup Schedule Recommendations

| Data        | Frequency                        | Retention  | Method                      |
|-------------|----------------------------------|------------|-----------------------------|
| PostgreSQL  | Daily full + continuous WAL      | 30 days    | pg_dump + WAL archiving     |
| Media (S3)  | Continuous replication           | Indefinite | S3 cross-region replication |
| Redis       | Optional (rebuilds automatically)| N/A        | RDB snapshots               |
| Temporal DB | Daily                            | 14 days    | pg_dump                     |
| Meilisearch | Optional (rebuilds from DB)      | N/A        | Reindex on recovery         |
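The retention columns above have to be enforced somewhere. A minimal sketch in Python (the local backup directory and the *.dump naming, matching the pg_dump examples above, are assumptions):

```python
import time
from pathlib import Path

def prune_old_backups(backup_dir: str, max_age_days: int = 30) -> list[str]:
    """Delete *.dump files older than max_age_days; return deleted file names."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for f in Path(backup_dir).glob("*.dump"):
        if f.stat().st_mtime < cutoff:  # compare file modification time to cutoff
            f.unlink()
            deleted.append(f.name)
    return deleted
```

Run it after each successful backup (e.g. from the same cron job), with max_age_days=30 for the main database dumps and 14 for the Temporal database.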

Disaster Recovery

  1. Provision infrastructure — new database, Redis, S3.
  2. Restore PostgreSQL from latest backup.
  3. Restore media from S3 replica.
  4. Run database migrations: alembic upgrade head (idempotent, ensures the schema is current).
  5. Reindex Meilisearch — rebuild search indexes from DB.
  6. Start services — API, workers, consumers, frontends.
  7. Verify — check { health } query, test a login, browse products.