# Migration Guide: OSS/Development (H2 to PostgreSQL)
This guide covers migrating from the legacy nexus-repository-manager chart with H2 database to nxrm-ha with PostgreSQL in OSS/development environments.
ℹ️ NOTE: This guide is for environments using embedded H2 database. If you already have an external PostgreSQL database, see the Production/Pro Migration Guide instead.
⚠️ CRITICAL CHANGES:

1. PostgreSQL Required: NXRM-HA does NOT support the embedded H2 database. All deployments require PostgreSQL.
2. Data Migration Required: You must migrate your data from H2 to PostgreSQL before deploying NXRM-HA.
## Overview
What’s Changing:
- Deployment → StatefulSet
- Direct values → Passthrough pattern (values nested under upstream key)
- Single container → Multi-container pod (main app + log sidecars)
- Embedded H2 database → External PostgreSQL database
What Stays the Same:

- All repositories, artifacts, users, and configurations (after migration)
- Blob storage data (copied to new PVC)
## Why PostgreSQL is Required
Sonatype has discontinued support for embedded databases (H2) in Kubernetes deployments because:

- Data Corruption: Embedded databases in containers frequently experience corruption
- Data Loss: Kubernetes pod restarts can cause permanent data loss
- Stability Issues: H2 in containerized environments leads to outages
## Prerequisites
- Access to the Kubernetes cluster
- `kubectl` CLI tool installed
- `helm` CLI tool installed
- `flux` CLI tool installed (if using GitOps)
- Admin access to the existing Nexus instance
- Backup of existing Nexus data (required for migration)
- PostgreSQL database (deployed via subchart or external)
Estimated Downtime: 30 - 60 minutes (depends on data volume)
Migration requires:

1. Database Migration: From H2 to PostgreSQL using the nexus-db-migrator tool
2. Chart Migration: From the legacy chart to NXRM-HA
## Values Migration: Passthrough Pattern
The main change is restructuring your values file. Big Bang additions stay at root level, upstream chart values move under upstream:.
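To make the new structure concrete, here is a minimal sketch of converting one override (the `resources` key from the mapping table below; the file name is arbitrary):

```shell
# Sketch: the legacy chart took `resources:` at the top level;
# the new chart nests the same values under `upstream:`.
cat > nxrm-ha-values-sketch.yaml <<'EOF'
upstream:
  statefulset:
    container:
      resources:        # was top-level `resources:` in the legacy chart
        requests:
          memory: "4Gi"
EOF
```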
### Values Mapping Reference

| Configuration | Old Chart Location | New Chart Location |
|---|---|---|
| Hostname/Domain | `hostname`, `domain` | `hostname`, `domain` (unchanged) |
| Admin Password | `custom_admin_password` | `custom_admin_password` (unchanged - MUST match existing password during migration) |
| Database Config | H2 (embedded) | PostgreSQL (chart deploys PostgreSQL subchart automatically) |
| Istio | `istio.*` | `istio.*` (unchanged) |
| Network Policies | `networkPolicies.*` | `networkPolicies.*` (unchanged) |
| Monitoring | `monitoring.*` | `monitoring.*` (unchanged) |
| SSO/SAML | `sso.*` | `sso.*` (unchanged) |
| Blob Stores | `nexus.blobstores.*` | `nexus.blobstores.*` (unchanged) |
| Image | `image.repository`, `image.tag` | `upstream.statefulset.container.image.repository`, `upstream.statefulset.container.image.nexusTag` |
| Resources | `resources.*` | `upstream.statefulset.container.resources.*` |
| Service Account | `serviceAccount.*` | `upstream.serviceAccount.*` |
| Environment Vars | `env.*` | `upstream.statefulset.container.env.*` |
| Probes | `livenessProbe.*`, `readinessProbe.*` | `upstream.statefulset.livenessProbe.*`, `upstream.statefulset.readinessProbe.*` |
## Step-by-Step Migration Process
This process has been tested and successfully migrates all data including:

- Repository configurations
- Component data
- User accounts and passwords
- System settings
### Step 1: Prepare for Migration
Backup your current configuration and prepare the environment:
```shell
export NEXUS_NAMESPACE="nexus-repository-manager"

# Backup current configuration
kubectl get all,secrets,cm -n $NEXUS_NAMESPACE -o yaml > nexus-backup-config.yaml

# If you have helm values (non-GitOps)
helm get values nexus-repository-manager -n bigbang > old-nexus-values.yaml
```
#### Suspend Old Flux HelmRelease (If Using GitOps)
Important: If using Flux/GitOps, suspend the OLD HelmRelease to prevent Flux from reconciling it back during migration. The new nxrm-ha chart will be managed by a separate HelmRelease.
```shell
# Suspend OLD Flux HelmRelease (prevents auto-reconciliation during migration)
flux suspend hr nexus-repository-manager -n bigbang

# Verify suspension
flux get hr nexus-repository-manager -n bigbang
# Expected: SUSPENDED should be True
```
Note:
- Skip this step if not using Flux/GitOps
- The old HelmRelease will remain suspended - the new nxrm-ha HelmRelease manages the new deployment
- You can delete the old HelmRelease after successful migration
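For reference, a sketch of that later cleanup using the flux CLI (same name and namespace as above):

```shell
# Only after the migration is verified: remove the suspended legacy HelmRelease
flux delete helmrelease nexus-repository-manager -n bigbang
```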
#### Backup H2 Database (Recommended)
Before proceeding with migration, create a backup of your H2 database using the Nexus Admin UI:
1. Access your Nexus Repository Manager UI
2. Navigate to Administration → System → Tasks
3. Click Create task and select Admin - Backup H2 Database Task
4. Configure the task:
- Task enabled: Check this box
- Task name: Admin - Backup H2 Database Task
- Notification email: (Optional) Add email for notifications
- Send notification on: Failure (or as needed)
- Location: /nexus-data (or your preferred backup location)
- Task frequency: Manual
5. Click Create task
6. Run the task immediately by clicking Run
7. The task will create a timestamped zip file containing the H2 database (nexus.mv.db) in the specified location
8. Copy the backup file to your local machine:
```shell
# Get the pod name
NEXUS_POD=$(kubectl get pods -n $NEXUS_NAMESPACE --no-headers | awk '{print $1}')

# Get backup filename
BACKUP_FILENAME=$(kubectl exec -n $NEXUS_NAMESPACE "$NEXUS_POD" -- sh -c 'ls /nexus-data/nexus-*.zip' | cut -d / -f3)

# Copy the backup file
kubectl cp $NEXUS_NAMESPACE/$NEXUS_POD:/nexus-data/${BACKUP_FILENAME} ./${BACKUP_FILENAME}
```
### Step 2: Deploy NXRM-HA with PostgreSQL
First, deploy NXRM-HA with its PostgreSQL subchart:
```yaml
# nxrm-ha-migration.yaml
addons:
  nexusRepositoryManager:
    enabled: true # Keep old one running during migration
  nxrmha:
    enabled: true
    values:
      # Recommended: Set the admin password to match your old Nexus password.
      # The migration will preserve the old password in the database, but setting it here
      # ensures consistency and allows NXRM-HA to use it from the start.
      # Get your old password:
      #   kubectl get secret nexus-repository-manager-secret -n nexus-repository-manager -o jsonpath='{.data.admin\.password}' | base64 -d; echo
      custom_admin_password: "<YOUR_OLD_NEXUS_ADMIN_PASSWORD>"
```
Deploy:
```shell
# If using GitOps: commit the configuration and let Flux deploy
# OR manually with Helm:
helm upgrade -i bigbang chart/ -n bigbang --create-namespace -f nxrm-ha-migration.yaml

# Wait for HelmRelease to be ready
kubectl wait helmrelease/nxrm-ha -n bigbang --for=condition=Ready --timeout=300s
```
#### Verify Deployment and Scale Down for Migration
After deployment, verify these critical checks:
```shell
# 1. Verify HelmRelease was successful
flux get hr nxrm-ha -n bigbang
# Expected: READY should be True, MESSAGE should show "Helm install succeeded" or "Helm upgrade succeeded"
# Example output:
# NAME     REVISION     SUSPENDED  READY  MESSAGE
# nxrm-ha  84.0.0-bb.3  False      True   Helm install succeeded for release nxrm-ha/nxrm-ha.v1 with chart nxrm-ha@84.0.0-bb.3

# 2. Verify PostgreSQL is running and ready
kubectl get pods -n nxrm-ha -l app.kubernetes.io/name=postgresql
# Expected: 2/2 Running (primary pod with both containers)
# Example output:
# NAME                READY  STATUS   RESTARTS  AGE
# nexus-postgresql-0  2/2    Running  0         2m

# 3. Verify NXRM-HA pod is running
kubectl get pods -n nxrm-ha -l app.kubernetes.io/name=nxrm-ha
# Expected: all containers Running (multi-container pod, e.g. 5/5)
# Example output:
# NAME       READY  STATUS   RESTARTS  AGE
# nxrm-ha-0  5/5    Running  0         2m
```
Scale down NXRM-HA for migration:
The NXRM-HA pod must be scaled down before running the migration to prevent database conflicts:
```shell
# Suspend Flux to prevent it from scaling back up
flux suspend hr nxrm-ha -n bigbang

# Scale down NXRM-HA StatefulSet
kubectl scale statefulset nxrm-ha -n nxrm-ha --replicas=0

# Wait for pod termination
kubectl wait --for=delete pod -l app.kubernetes.io/name=nxrm-ha -n nxrm-ha --timeout=300s

# Verify StatefulSet has 0 replicas
kubectl get statefulset nxrm-ha -n nxrm-ha
# Expected: READY should show 0/0
```
Important: Do not proceed to the next step until:

- HelmRelease shows READY=True
- PostgreSQL pod shows 2/2 Running
- NXRM-HA StatefulSet shows 0/0 replicas (after scaling down)
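If you prefer to gate on these checks in a script, a minimal sketch using the resource names from this guide:

```shell
# Exits non-zero if any precondition for the migration is unmet
set -e
kubectl get helmrelease nxrm-ha -n bigbang \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True
kubectl get pods -n nxrm-ha -l app.kubernetes.io/name=postgresql --no-headers | grep -q '2/2'
test "$(kubectl get statefulset nxrm-ha -n nxrm-ha -o jsonpath='{.spec.replicas}')" = "0"
echo "Preconditions met - safe to run the migration"
```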
### Step 3: Scale Down Old Nexus Deployment and Clean Up Resources
Important: Scale down the old deployment BEFORE running the migration to prevent database access conflicts.
```shell
# Scale down old deployment
kubectl scale deployment nexus-repository-manager -n $NEXUS_NAMESPACE --replicas=0

# Wait for termination
kubectl wait --for=delete pod -l app=nexus-repository-manager \
  -n $NEXUS_NAMESPACE --timeout=300s

# Delete old Istio VirtualServices to prevent conflicts with new chart
kubectl delete virtualservice -n $NEXUS_NAMESPACE --all

# Verify pods are terminated
kubectl get pods -n $NEXUS_NAMESPACE
```
### Step 4: Download Migration Tool
Download the nexus-db-migrator tool:
```shell
# Download the migration tool locally
curl -OLJ https://download.sonatype.com/nexus/nxrm3-migrator/nexus-db-migrator-3.86.2-01.jar
```
Note: The above downloads version 3.86.2-01. For the latest version of the migration tool, check the Sonatype Downloads page. Always use a migration tool version that matches or is close to your Nexus Repository version for best compatibility.
Note: We’ll copy this to the migration pod in the next steps.
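To confirm which Nexus version you are currently running (and therefore which migrator version to match), one option is to read the image tag off the legacy deployment:

```shell
# Prints the image reference of the running legacy Nexus, including its tag
kubectl get deployment nexus-repository-manager -n $NEXUS_NAMESPACE \
  -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
```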
### Step 5: Create Migration Pod and Network Policies

#### Create Network Policies for Cross-Namespace PostgreSQL Access
In BigBang environments with Istio and strict network policies, you need two network policies to allow the migration pod to reach PostgreSQL across namespaces:
1. Egress policy (source namespace) - Allows migration pod to send traffic to PostgreSQL:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-migration-egress-postgres
  namespace: nexus-repository-manager
spec:
  podSelector:
    matchLabels:
      app: nexus-migration
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: nxrm-ha
          podSelector:
            matchLabels:
              app.kubernetes.io/name: postgresql
      ports:
        - protocol: TCP
          port: 5432
EOF
```
2. Ingress policy (destination namespace) - Allows PostgreSQL to receive traffic from migration pod:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-migration-to-postgres
  namespace: nxrm-ha
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: nxrm-ha
      app.kubernetes.io/name: postgresql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: nexus-repository-manager
          podSelector:
            matchLabels:
              app: nexus-migration
      ports:
        - protocol: TCP
          port: 5432
EOF
```
Note: Both network policies are required in BigBang environments with strict network policies. The NXRM-HA chart’s default PostgreSQL ingress policy only allows traffic from within the same namespace.
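To confirm both policies landed in the expected namespaces:

```shell
kubectl get networkpolicy allow-migration-egress-postgres -n nexus-repository-manager
kubectl get networkpolicy allow-migration-to-postgres -n nxrm-ha
```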
#### Create Migration Pod

Create a pod that can access the H2 database files and run the migration. The pod must:

- Disable Istio sidecar injection to avoid mTLS issues with PostgreSQL
- Use the correct security context to access the PVC
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nexus-migration
  namespace: nexus-repository-manager
  labels:
    app: nexus-migration
    app.kubernetes.io/name: nexus-migration
    app.kubernetes.io/version: "1.0.0"
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  securityContext:
    runAsUser: 200
    runAsGroup: 2000
    fsGroup: 2000
  containers:
    - name: migrator
      image: registry1.dso.mil/ironbank/sonatype/nexus/nexus:3.84.0-03
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      resources:
        requests:
          memory: "1Gi"
          cpu: "250m"
        limits:
          memory: "2Gi"
          cpu: "1"
  volumes:
    - name: nexus-data
      persistentVolumeClaim:
        claimName: nexus-repository-manager-data
EOF

# Wait for pod to be ready
kubectl wait --for=condition=ready pod nexus-migration -n nexus-repository-manager --timeout=60s

# Copy the migration tool to the migration pod
kubectl cp nexus-db-migrator-3.86.2-01.jar nexus-repository-manager/nexus-migration:/nexus-data/db/nexus-db-migrator-3.86.2-01.jar -c migrator
```
Important Notes:
- The sidecar.istio.io/inject: "false" annotation is critical - it prevents Istio from interfering with the PostgreSQL connection
- The securityContext settings (runAsUser: 200, runAsGroup: 2000, fsGroup: 2000) are required to access the PVC with correct permissions
- The -c migrator flag in the kubectl cp command specifies the container name
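Before running the migrator, two optional sanity checks can save a failed run. The TCP probe below assumes bash is available in the Iron Bank image and uses the PostgreSQL service name from Step 6:

```shell
# Verify the H2 database files are visible from the migration pod
kubectl exec nexus-migration -n nexus-repository-manager -c migrator -- ls -l /nexus-data/db

# Verify PostgreSQL is reachable across namespaces on port 5432 (bash /dev/tcp probe)
kubectl exec nexus-migration -n nexus-repository-manager -c migrator -- \
  bash -c 'timeout 5 bash -c "cat < /dev/null > /dev/tcp/nexus-postgresql.nxrm-ha.svc.cluster.local/5432" && echo reachable'
```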
### Step 6: Run the Migration
The nexus-db-migrator tool has a bug with Spring Batch authentication. Use this workaround:
First, get the PostgreSQL password from the secret:
```shell
# Get the PostgreSQL password and copy it to your clipboard
kubectl get secret nexus-postgresql -n nxrm-ha -o jsonpath='{.data.postgres-password}' | base64 -d; echo
# Copy the output password
```
Now run the migration:
```shell
# Shell into the migration pod
kubectl exec -it nexus-migration -n nexus-repository-manager -- /bin/sh

# Inside the pod, set environment variables for PostgreSQL credentials
# Paste the PostgreSQL password you copied above in place of <YOUR_POSTGRES_PASSWORD>
export POSTGRES_USER="postgres"
export POSTGRES_PASSWORD="<YOUR_POSTGRES_PASSWORD>" # Paste the password here
export POSTGRES_HOST="nexus-postgresql.nxrm-ha.svc.cluster.local"
export POSTGRES_DB="nexus"

cd /nexus-data/db

# Create properties file with embedded credentials
cat > app.properties << EOF
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}?user=${POSTGRES_USER}&password=${POSTGRES_PASSWORD}
spring.datasource.hikari.jdbc-url=jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}?user=${POSTGRES_USER}&password=${POSTGRES_PASSWORD}
spring.batch.jdbc.initialize-schema=never
EOF

# Run the migration (answer 'y' when prompted)
echo 'y' | java \
  -Dspring.config.location=file:./app.properties \
  -jar nexus-db-migrator-*.jar \
  --migration_type=h2_to_postgres \
  --db_url="jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}?user=${POSTGRES_USER}&password=${POSTGRES_PASSWORD}" \
  --db_user="${POSTGRES_USER}" \
  --db_password="${POSTGRES_PASSWORD}" \
  --h2_path=/nexus-data/db/nexus \
  --force=true

# Exit the pod
exit
```
Expected output:
```
Migration job finished...
Migration job took 4 seconds to execute
61 records were processed
61 records were migrated
```
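Optionally, spot-check that the schema landed in PostgreSQL. This sketch assumes the pod, secret, and container names shown earlier, and that psql is available in the postgresql container:

```shell
PGPASS=$(kubectl get secret nexus-postgresql -n nxrm-ha -o jsonpath='{.data.postgres-password}' | base64 -d)

# List the migrated tables in the nexus database
kubectl exec nexus-postgresql-0 -n nxrm-ha -c postgresql -- \
  env PGPASSWORD="$PGPASS" psql -U postgres -d nexus -c '\dt'
```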
#### Scale Up NXRM-HA After Migration
After the migration completes successfully, scale NXRM-HA back up:
```shell
# Scale up NXRM-HA StatefulSet
kubectl scale statefulset nxrm-ha -n nxrm-ha --replicas=1

# Wait for pod to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=nxrm-ha -n nxrm-ha --timeout=300s

# Verify pod is running
kubectl get pods -n nxrm-ha -l app.kubernetes.io/name=nxrm-ha

# Resume Flux HelmRelease
flux resume hr nxrm-ha -n bigbang

# Verify Flux resumed
flux get hr nxrm-ha -n bigbang
# Expected: SUSPENDED should be False
```
### Step 7: Copy Blob Data

The Big Bang NXRM-HA chart defaults `pvc.volumeClaimTemplate.enabled: true`, so the PVC was created automatically. Now copy the blob data from the old deployment.
Note: We reuse the nexus-migration pod from Step 5 as the source, and create a temporary destination pod.
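You can confirm the destination PVC exists before creating the pod:

```shell
kubectl get pvc nexus-data-nxrm-ha-0 -n nxrm-ha
# Expected: STATUS Bound
```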
#### Create destination pod to receive blob data
```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blob-dest
  namespace: nxrm-ha
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  securityContext:
    runAsUser: 200
    runAsGroup: 2000
    fsGroup: 2000
  containers:
    - name: receiver
      image: registry1.dso.mil/ironbank/sonatype/nexus/nexus:3.84.0-03
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
      volumeMounts:
        - name: nxrm-data
          mountPath: /nexus-data
  volumes:
    - name: nxrm-data
      persistentVolumeClaim:
        claimName: nexus-data-nxrm-ha-0 # StatefulSet volumeClaimTemplate naming pattern
EOF

# Wait for pod to be ready
kubectl wait --for=condition=ready pod blob-dest -n nxrm-ha --timeout=60s
```
Important: The securityContext must match NXRM-HA’s settings (runAsUser: 200, runAsGroup: 2000, fsGroup: 2000) to ensure extracted files have correct ownership for Nexus to read/write.
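A quick way to verify the volume is writable with those IDs:

```shell
# Should report uid=200 and gid=2000, and the write test should succeed
kubectl exec blob-dest -n nxrm-ha -- id
kubectl exec blob-dest -n nxrm-ha -- touch /nexus-data/.write-test
kubectl exec blob-dest -n nxrm-ha -- rm /nexus-data/.write-test
```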
#### Copy blob data using tarball and kubectl cp
```shell
# Create tarball in source pod (using migration pod from Step 5)
kubectl exec nexus-migration -n nexus-repository-manager -c migrator -- \
  tar czf /tmp/blobs.tar.gz -C /nexus-data blobs

# Copy tarball to local machine, then to destination pod
kubectl cp nexus-repository-manager/nexus-migration:/tmp/blobs.tar.gz /tmp/blobs.tar.gz -c migrator
kubectl cp /tmp/blobs.tar.gz nxrm-ha/blob-dest:/tmp/blobs.tar.gz

# Extract tarball in destination pod
kubectl exec blob-dest -n nxrm-ha -- tar xzf /tmp/blobs.tar.gz -C /nexus-data

# Verify the copy
kubectl exec blob-dest -n nxrm-ha -- du -sh /nexus-data/blobs/
kubectl exec blob-dest -n nxrm-ha -- find /nexus-data/blobs -type f | head -10
```
#### Cleanup temporary pods
```shell
# Delete migration pod (from Step 5) and blob destination pod
kubectl delete pod nexus-migration -n nexus-repository-manager
kubectl delete pod blob-dest -n nxrm-ha

# Cleanup local tarball (or keep it if needed)
rm -f /tmp/blobs.tar.gz
```
### Step 8: Update Blob Store Path
Important: The default blob store path is relative (default) which resolves to the ephemeral container path /opt/sonatype/sonatype-work/nexus3/blobs/default. We need to update it to use the absolute PVC path /nexus-data/blobs/default so blobs persist across pod restarts.
This is a one-time configuration change stored in PostgreSQL - it persists across pod restarts.
```shell
# Get the admin password (use the migrated password from your old Nexus)
ADMIN_PASSWORD=$(kubectl get secret nxrm-ha-adminsecret -n nxrm-ha -o jsonpath='{.data.nexus-admin-password}' | base64 -d)

# Update the default blob store path to use the PVC
kubectl exec nxrm-ha-0 -n nxrm-ha -c nxrm-app -- curl -s -X PUT \
  -u admin:"${ADMIN_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{"path": "/nexus-data/blobs/default"}' \
  http://localhost:8081/service/rest/v1/blobstores/file/default

# Verify the path was updated
kubectl exec nxrm-ha-0 -n nxrm-ha -c nxrm-app -- curl -s \
  -u admin:"${ADMIN_PASSWORD}" \
  http://localhost:8081/service/rest/v1/blobstores/file/default
# Expected: {"softQuota":null,"path":"/nexus-data/blobs/default"}
```
### Step 9: Verify Migration Success
```shell
# Check HelmRelease status
flux get hr nxrm-ha -n bigbang
# Expected output:
# NAME     REVISION     SUSPENDED  READY  MESSAGE
# nxrm-ha  84.0.0-bb.3  False      True   Helm install succeeded for release nxrm-ha/nxrm-ha.v1 with chart nxrm-ha@84.0.0-bb.3

# Check pod status
kubectl get pods -n nxrm-ha

# Get admin password
kubectl get secret nxrm-ha-adminsecret \
  -n nxrm-ha -o jsonpath='{.data.nexus-admin-password}' | base64 -d; echo

# Login to Nexus UI
# 1. Navigate to https://nexus.example.com (URL remains unchanged)
# 2. Login with admin and the password from above
# 3. Verify repositories and blob stores are accessible
# 4. Check that existing artifacts are available
```
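As an additional smoke test, you can list repositories through the public REST API (GET /service/rest/v1/repositories); substitute your configured hostname:

```shell
ADMIN_PASSWORD=$(kubectl get secret nxrm-ha-adminsecret -n nxrm-ha -o jsonpath='{.data.nexus-admin-password}' | base64 -d)

# Should return a JSON array containing your migrated repositories
curl -sk -u admin:"${ADMIN_PASSWORD}" https://nexus.example.com/service/rest/v1/repositories
```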
### Step 10: Post-Migration Tasks
These tasks are critical to the proper functioning of the repository after the migration process. Some tasks may take a notable amount of time to complete.
⚠️ WARNING: Do not restart your instance while the post-migration tasks are running to avoid damaging your browse and search index.
Run these repair tasks manually via the Nexus UI:
1. Navigate to Administration → System → Tasks

2. Create and run Repair - Rebuild repository browse
   - Task name: Rebuild repository browse
   - Repository: (All Repositories)
   - Task frequency: Manual
   - Click Create task, then click Run

3. Create and run Repair - Rebuild repository search
   - Task name: Rebuild repository search
   - Repository: (All Repositories)
   - Task frequency: Manual
   - Click Create task, then click Run

4. Create and run Repair - Reconcile component database from blob store
   - Task name: Reconcile blob store
   - Blob store: default (or your blob store name)
   - Task frequency: Manual
   - Click Create task, then click Run
   - Wait for this task to complete before proceeding (check task status)
   - This scans the blob store directory and updates the database with blob metadata

5. Create and run Repair - Recalculate blob store storage
   - Task name: Recalculate blob store metrics
   - Blob store: default
   - Task frequency: Manual
   - Click Create task, then click Run
   - This updates the Blob Count and Total Size metrics displayed in the UI

6. (If using Helm repositories): Create and run Repair - Rebuild Helm metadata

7. Verify Scheduled Tasks: Navigate to Administration → System → Tasks and verify your previously configured scheduled tasks are present. Recreate any missing tasks (e.g., cleanup policies, repository health checks, compact blob store).

8. Cleanup Migration Network Policies: Remove the temporary network policies created in Step 5:

   ```shell
   kubectl delete networkpolicy allow-migration-egress-postgres -n nexus-repository-manager
   kubectl delete networkpolicy allow-migration-to-postgres -n nxrm-ha
   ```
Important Notes:

- The Reconcile task is essential - it syncs the blob store files with the database
- The Recalculate task updates UI metrics - without it, the Blob Store page shows 0 blobs
- All tasks must complete successfully before the migration is considered complete
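If you want to watch task progress without the UI, the public tasks endpoint (GET /service/rest/v1/tasks) reports each task's currentState; a sketch reusing the in-pod curl pattern from Step 8:

```shell
ADMIN_PASSWORD=$(kubectl get secret nxrm-ha-adminsecret -n nxrm-ha -o jsonpath='{.data.nexus-admin-password}' | base64 -d)

# Lists all tasks with their currentState (RUNNING vs WAITING) and lastRunResult
kubectl exec nxrm-ha-0 -n nxrm-ha -c nxrm-app -- curl -s \
  -u admin:"${ADMIN_PASSWORD}" http://localhost:8081/service/rest/v1/tasks
```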
### Step 11: Disable Old Addon (GitOps)
After confirming the migration is successful and stable, disable the old nexusRepositoryManager addon in your Big Bang values:
```yaml
addons:
  nexusRepositoryManager:
    enabled: false
```
Note: Only do this after you’re confident the migration is successful and you won’t need to rollback.
## Rollback Procedure
If issues occur during or after migration, you can rollback to the old deployment:
### Option 1: Using GitOps/Flux
```yaml
# 1. Disable nxrmha addon in Big Bang values
addons:
  nxrmha:
    enabled: false

# 2. Re-enable old nexusRepositoryManager addon
addons:
  nexusRepositoryManager:
    enabled: true
    # ... your previous values
```
Commit and let Flux reconcile.
### Option 2: Manual Rollback
```shell
# Scale down nxrm-ha StatefulSet
kubectl scale statefulset nxrm-ha -n nxrm-ha --replicas=0

# Wait for pods to terminate
kubectl wait --for=delete pod -l app.kubernetes.io/name=nxrm-ha -n nxrm-ha --timeout=300s

# Resume old Flux HelmRelease (if it was suspended in Step 1)
flux resume hr nexus-repository-manager -n bigbang 2>/dev/null || echo "Not using Flux"

# Scale up old deployment
kubectl scale deployment nexus-repository-manager -n nexus-repository-manager --replicas=1

# Verify old deployment is running
kubectl get pods -n nexus-repository-manager
```
Important:

- If you migrated the H2 database to PostgreSQL, rolling back means returning to the H2 database state
- Any changes made in NXRM-HA after migration will be lost on rollback
- Ensure you have backups before attempting rollback
## What Gets Migrated

The migration successfully preserves:

- ✅ All repository configurations (Maven, Docker, NPM, PyPI, etc.)
- ✅ Component data and artifact metadata
- ✅ User accounts and passwords
- ✅ Roles and permissions
- ✅ System configuration
- ⚠️ Scheduled tasks (may need to be recreated - verify after migration)
- ✅ Security settings
- ✅ Docker images (layers and manifests by digest)
## Key Differences After Migration

| Feature | Legacy Chart | NXRM-HA |
|---|---|---|
| Database | Embedded H2 | PostgreSQL (required) |
| Namespace | `nexus-repository-manager` | `nxrm-ha` |
| Pod Name | `nexus-repository-manager-*` | `nxrm-ha-*` |
| Admin Secret | `nexus-repository-manager-secret` | `nxrm-ha-adminsecret` (uses migrated password) |
| Service Name | `nexus-repository-manager` | `nxrm-ha` |
| Service Account | `nexus-repository-manager` | `nexus-repository-deployment-sa` |
| Log Access | `kubectl logs deploy/nexus-repository-manager` | `kubectl logs statefulset/nxrm-ha -c nxrm-app` |
| Values Structure | Direct | Nested under `upstream:` |
| High Availability | Not supported | Supported (Pro only) |