
Mongo to Postgres Migration

Testkube includes a convert utility that migrates data from MongoDB to PostgreSQL. It reads every collection used by the Testkube control plane, transforms each record into the PostgreSQL schema, and writes it to the target database. Both "record-level" collections (organisations, environments, agents, …) and high-volume "bulk" collections (usage, audit logs, events, workflow executions, …) are handled in a single run.

Prerequisites

| Requirement | Details |
| --- | --- |
| MongoDB | A running MongoDB instance containing the source data. The utility connects using the standard MongoDB connection string. |
| PostgreSQL | An empty (or partially migrated) PostgreSQL database. The utility automatically applies any pending Goose schema migrations before writing data. |
| Credentials master password | The same master password that was used to encrypt secrets in the source database. It is required to re-encrypt credentials for PostgreSQL. Must be at least 32 characters. |

Quick start

```shell
# Using Docker
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Configuration

Every setting can be provided as an environment variable, a CLI flag, or both. When both are provided, the CLI flag takes precedence.

Connection settings

| Environment variable | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `API_MONGO_DSN` | `-mongo_dsn` | (empty) | MongoDB connection string. |
| `API_MONGO_DB` | `-mongo_db` | `testkubecloud` | MongoDB database name. |
| `API_MONGO_MIN_POOL_SIZE` | `-mongo_min_pool_size` | `0` | Minimum connection pool size for MongoDB. |
| `API_MONGO_MAX_POOL_SIZE` | `-mongo_max_pool_size` | `100` | Maximum connection pool size for MongoDB. |
| `API_MONGO_MAX_CONNECTING` | `-mongo_max_connecting` | `2` | Maximum number of connections being established concurrently. |
| `API_MONGO_READ_PREFERENCE` | `-mongo_read_preference` | `secondaryPreferred` | MongoDB read preference. |
| `API_MONGO_CONNECTION_TIMEOUT` | `-mongo_connection_timeout` | `5s` | Timeout for establishing a MongoDB connection. |
| `API_MONGO_CONNECTION_DELAY` | `-mongo_connection_delay` | `1s` | Delay between MongoDB connection retry attempts. |
| `API_MONGO_CONNECTION_ATTEMPTS` | `-mongo_connection_attempts` | `120` | Number of MongoDB connection retry attempts. |
| `API_POSTGRES_URL` | `-postgres_url` | (empty) | PostgreSQL connection string. |
| `CREDENTIALS_MASTER_PASSWORD` | `-credentials_master_password` | (empty) | Master password for credential encryption/signing. |

Migration tuning

| Environment variable | CLI flag | Default | Description |
| --- | --- | --- | --- |
| `CONVERT_BATCH_SIZE` | `-convert_batch_size` | `1000` | Number of records fetched per batch during record-level migrations (users, organisations, environments, etc.). |
| `CONVERT_BULK_READ_SIZE` | `-convert_bulk_read_size` | `10000` | Number of records fetched per batch during bulk migrations (usage, audit logs, events, executions, etc.). |
| `CONVERT_BULK_WRITE_SIZE` | `-convert_bulk_write_size` | `100000` | Number of records written per batch during bulk migrations. |
| `CONVERT_WORKERS` | `-convert_parallel_workers` | `10` | Number of parallel workers used for record-level migration processing. |
| `CONVERT_DRY_RUN` | `-convert_dry_run` | `false` | When true, reads and transforms data but does not write to PostgreSQL. Useful for validating the migration before committing. |
| `CONVERT_SKIP_EXISTING` | `-convert_skip_existing` | `false` | When true, silently skips records that already exist in PostgreSQL instead of failing. Useful for resuming an interrupted migration. |
| `CONVERT_WORK_DIR` | `-convert_work_dir` | (empty – current directory) | Directory used for writing error report files. |
| `CONVERT_SKIP_INTEGRITY_ERRORS` | `-convert_skip_integrity_errors` | `false` | When true, skips records that fail integrity checks (e.g. orphaned references) instead of aborting. |
| `CONVERT_SINCE` | `-convert_since` | (empty – all records) | Only include usage records created on or after this date. Format: `YYYY-MM-DD`. |

What gets migrated

The utility migrates the following data in two phases.

Phase 1 – Record-level migrations

These collections are read from MongoDB, transformed, and inserted into PostgreSQL one batch at a time. Each batch is processed by a pool of parallel workers.

| Data | MongoDB collection |
| --- | --- |
| Users | `users` |
| Organisations | `organizations` |
| Environments | `environments` |
| Agents | `agents` |
| Boards | `boards` |
| Config | `config` |
| Credentials | `credentials` |
| Organisation domains | `organization_domains` |
| Status pages | `status_pages` |
| Incidents | `incidents` |
| Invites | `invites` |
| Metadata | `metadata` |
| Resource groups | `resource_groups` |
| Settings | `settings` |
| Tokens | `tokens` |
| Trackers | `trackers` |
| Views | `views` |
| Payments history | `organizations` (embedded) |
| Cron schedules | `cron_schedules` |
| AI assistants | `ai_assistants` (per org) |
| LLM providers | `ai_llm_providers` (per org) |
| MCP servers | `ai_mcp_servers` (per org) |
| MCP server tokens | `ai_mcp_server_tokens` (per org) |
| MCP server tools | `ai_mcp_server_tools` (per org) |
| AI agent triggers | `ai_agent_triggers` (per org) |
| Assistant sessions | `ai_sessions` (per org) |
| Session tool calls | `ai_session_tool_calls` (per org) |
| Agent configs | per org/env |
| Test workflows | per org/env |
| Test workflow templates | per org/env |
| Test triggers | per org/env |
| Webhooks | per org/env |
| Webhook templates | per org/env |

Phase 2 – Bulk migrations

High-volume collections are migrated using optimised bulk readers and writers. These run sequentially after the record-level phase.

  • Usage records
  • Audit log events
  • Application audit events
  • Events
  • Reporting data
  • Test workflow executions (per org/env)
  • Sequences (per org/env)

Examples

Dry run

Validate that all records can be read and transformed without writing anything:

```shell
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  -e CONVERT_DRY_RUN="true" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Resume after an interrupted migration

If a previous run was interrupted, re-run with CONVERT_SKIP_EXISTING=true (the -convert_skip_existing flag) to continue from where it left off:

```shell
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  -e CONVERT_SKIP_EXISTING="true" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Skip integrity errors

Some records may reference organisations or environments that no longer exist. Set CONVERT_SKIP_INTEGRITY_ERRORS=true (the -convert_skip_integrity_errors flag) to skip those and continue:

```shell
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  -e CONVERT_SKIP_EXISTING="true" \
  -e CONVERT_SKIP_INTEGRITY_ERRORS="true" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Migrate only recent usage data

Only migrate usage records from the last quarter:

```shell
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  -e CONVERT_SINCE="2026-01-01" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Tuning for large datasets

For databases with millions of records, increase batch sizes and parallelism:

```shell
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  -e CONVERT_BATCH_SIZE="5000" \
  -e CONVERT_BULK_READ_SIZE="50000" \
  -e CONVERT_BULK_WRITE_SIZE="200000" \
  -e CONVERT_WORKERS="20" \
  -e CONVERT_WORK_DIR="/tmp/convert-errors" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Save error reports to a specific directory

```shell
docker run --rm \
  -e API_MONGO_DSN="mongodb://user:pass@mongo-host:27017" \
  -e API_MONGO_DB="testkubecloud" \
  -e API_POSTGRES_URL="postgres://user:pass@pg-host:5432/testkube?sslmode=disable" \
  -e CREDENTIALS_MASTER_PASSWORD="your-32-char-master-password" \
  -e CONVERT_WORK_DIR="/var/log/convert" \
  kubeshop/testkube-postgres-migration:<image-tag>
```

Error reports are written as `<migration_name>_migration_errors.log` in the work directory.

Output and logging

The utility logs progress to stdout using structured logging. For each migration task you will see:

  • Start message – the task name and source collection.
  • Statistics summary – total records, successful, failed, skipped counts, duration, and throughput (records/second).
  • Error report – if any records fail, an error log file is written to the work directory with details (record ID, timestamp, error message, and context).

Example statistics output:

```text
============================================================
USERS MIGRATION STATISTICS
============================================================
Total Records: 1250
Successful: 1248
Failed: 2
Skipped: 0
Duration: 3.21s
Records/second: 388.79
============================================================
```

Limitations

  • No resume support for bulk migrations. Record-level migrations can be partially resumed with -convert_skip_existing, but the bulk migration phase (usage, audit logs, events, executions) does not track progress. If interrupted, bulk data may need to be truncated and re-migrated.
  • Target database should be empty. The utility is designed for a one-time migration. Running it against a database that already has data without -convert_skip_existing will result in duplicate-key errors.
  • Sequential execution. Bulk migration tasks run sequentially. For very large datasets the total migration time is bounded by the largest collection.

Running the MongoDB to PostgreSQL migration job with the Helm chart

Testkube Enterprise includes a one-time Helm Job that migrates control plane data from MongoDB to PostgreSQL. The job is disabled by default and is enabled with the testkube-cloud-api.api.mongoToPostgresMigration.enabled Helm value.

This job is separate from the regular MongoDB schema migration job controlled by testkube-cloud-api.api.migrations.enabled.

Before you start

Before enabling the migration job:

  • Back up MongoDB and PostgreSQL.
  • Make sure PostgreSQL is reachable from the cluster.
  • Use an empty PostgreSQL database for the migration target. The converter applies PostgreSQL schema migrations first, then writes migrated records. The migration is not designed to resume after a crash or merge into an already populated database.
  • Make sure the chart is connected to the same MongoDB database currently used by Testkube.
  • Make sure global.credentials.masterPassword is set to the same value used by the existing installation, or points to the same Kubernetes Secret. This is required to migrate encrypted credentials correctly.
  • Plan a maintenance window so the source MongoDB data does not continue changing while the migration runs.

Configure the migration

Create a values file that enables PostgreSQL for the API and turns on the MongoDB to PostgreSQL migration job.

```yaml
global:
  mongo:
    dsnSecretRef: testkube-mongo-dsn
    database: testkubeEnterpriseDB

  postgres:
    enabled: true
    dsnSecretRef: testkube-postgres-dsn
    dsnSecretKey: POSTGRES_DSN

  credentials:
    masterPassword:
      secretKeyRef:
        name: testkube-credentials-master-password
        key: password

testkube-cloud-api:
  api:
    mongoToPostgresMigration:
      enabled: true
```
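A hypothetical way to apply such a values file during the maintenance window (the release name, chart reference, namespace, and file name are examples only, not prescriptions):

```shell
# Apply the migration values, then find the one-time migration Job it creates.
helm upgrade --install testkube-enterprise kubeshop/testkube-enterprise \
  -n testkube -f migration-values.yaml
kubectl get jobs -n testkube
```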

After the migration succeeds

After the Job completes successfully:

  • Disable the migration job so it does not run on future upgrades.
  • Keep PostgreSQL enabled.
```yaml
global:
  postgres:
    enabled: true
    dsnSecretRef: testkube-postgres-dsn
    dsnSecretKey: POSTGRES_DSN

testkube-cloud-api:
  api:
    mongoToPostgresMigration:
      enabled: false
```