
Migrating a Nexus Repository Using Velero📜

This guide demonstrates how to perform a migration of Nexus repositories and artifacts between Kubernetes clusters.

Prerequisites📜


  • K8s running in AWS
  • Nexus PersistentVolume is using AWS EBS
  • Migration is between clusters in the same AWS account and availability zone (due to known Velero limitations)
  • Migration occurs between K8s clusters with the same version
  • Velero CLI tool
  • Crane CLI tool


  1. Ensure the Velero addon in the Big Bang values file is properly configured; a sample configuration is shown below:

        addons:
          velero:
            enabled: true
            plugins:
              - aws
            values:
              configuration:
                backupStorageLocation:
                  name: velero
                  provider: aws
                  bucket: nexus-velero-backup
                volumeSnapshotLocation:
                  provider: aws
                  config:
                    region: us-east-1
              credentials:
                useSecret: true
                secretContents:
                  cloud: |
                    aws_access_key_id = <CHANGE ME>
                    aws_secret_access_key = <CHANGE ME>
  2. Manually create an S3 bucket in which the backup will be stored (in this case it is named nexus-velero-backup); the bucket name must match the configuration.backupStorageLocation.bucket key above

  3. The credentials used by Velero must have the necessary permissions to read/write to the S3 bucket and to create and delete volumes and volume snapshots
  4. As a sanity check, take a look at the Velero logs to make sure the backup location (S3 bucket) is valid; you should see something like:

    level=info msg="Backup storage location valid, marking as available" backup-storage-location=default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
  5. Ensure there are images/artifacts in Nexus. As an example, we will use the Doom DOS image and a simple nginx image. Running crane catalog will show all of the artifacts and images in Nexus:
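The permissions mentioned in step 3 can be granted with an IAM policy. The following is a minimal sketch based on the velero-plugin-for-aws documentation, assuming the nexus-velero-backup bucket from the sample values above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::nexus-velero-backup/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::nexus-velero-backup"
    }
  ]
}
```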


Backing Up Nexus📜

In the cluster containing the Nexus repositories to migrate, running the following command will create a backup called nexus-ns-backup and back up all resources in the nexus-repository-manager namespace, including the associated PersistentVolume:

velero backup create nexus-ns-backup --include-namespaces nexus-repository-manager --include-cluster-resources=true
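The backup runs asynchronously; its progress and any errors can be checked with the Velero CLI:

```shell
# Watch the backup status and per-resource details
velero backup describe nexus-ns-backup --details

# Inspect the backup logs once it has completed
velero backup logs nexus-ns-backup
```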

Specifically, this will back up all Nexus resources to the S3 bucket specified in configuration.backupStorageLocation.bucket above and create a volume snapshot of the Nexus EBS volume.

Double-check AWS to make sure this is the case by reviewing the contents of the S3 bucket:

aws s3 ls s3://nexus-velero-backup --recursive --human-readable --summarize

Expected output: a listing of the backup objects stored under the backups/nexus-ns-backup/ prefix in the bucket, followed by a summary of the total object count and size.


Also ensure an EBS volume snapshot has been created and the Snapshot status is Completed.
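The snapshot can also be checked from the CLI; a sketch assuming the us-east-1 region from the sample values:

```shell
# List your EBS snapshots and their state; the snapshot taken by
# Velero should show State "completed"
aws ec2 describe-snapshots \
  --owner-ids self \
  --region us-east-1 \
  --query 'Snapshots[].{Id:SnapshotId,State:State,Started:StartTime}' \
  --output table
```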

Restoring From Backup📜

  1. In the new cluster, ensure that Nexus and Velero are running and healthy
    • It is critical to ensure that Nexus has been included in the new cluster’s Big Bang deployment, otherwise the restored Nexus configuration will not be managed by the Big Bang Helm chart.
  2. If you are using the same Velero values from above, Velero should automatically be configured to use the same backup location as before. Verify this with velero backup get; you should see output that looks like:

    NAME              STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
    nexus-ns-backup   Completed   0        0          2022-02-08 12:34:46 +0100 CET   29d       default            <none>
  3. To perform the migration, Nexus must be shut down. In the Nexus Deployment, bring the spec.replicas down to 0.

  4. Ensure that the Nexus PVC and PV are also removed (you may have to delete these manually!), and that the corresponding Nexus EBS volume has been deleted.

    • If you have to remove the Nexus PV and PVC manually, delete the PVC first, which should cascade to the PV; then, manually delete the underlying EBS volume (if it still exists)
  5. Now that Nexus is down and the new cluster is configured to use the same backup location as the old one, perform the migration by running:
    velero restore create --from-backup nexus-ns-backup

  6. The Nexus PV and PVC should be recreated (verify before continuing!), but the pod will fail to start due to the previous change in the Nexus deployment spec. Change the Nexus deployment spec.replicas back to 1. This will bring up the Nexus pod which should connect to the PVC and PV created during the Velero restore.

  7. Once the Nexus pod is running and healthy, log in to Nexus and verify that the repositories have been restored

    • The credentials to log in will have been restored from the Nexus backup, so they should match the credentials of the Nexus that was migrated (not the new installation!)
    • It is recommended to log in to Nexus and download a sampling of images/artifacts to ensure they are working as expected.

    For example, log in to Nexus using the migrated credentials:
    docker login -u admin -p admin

    Running crane catalog should show the same output as before:


    To ensure the integrity of the migrated image, we will pull and run the doom-dos image and defeat evil! Replace <nexus-host> with your Nexus registry host:

    docker pull <nexus-host>/doom-dos && \
    docker run -p 8000:8000 <nexus-host>/doom-dos
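The shutdown/restore/scale-up sequence in steps 3 through 6 can be sketched end to end. The Deployment and PVC names below are assumptions; verify yours with kubectl get before running:

```shell
NS=nexus-repository-manager

# Step 3: scale Nexus down before restoring
kubectl -n "$NS" scale deployment nexus-repository-manager --replicas=0

# Step 4: delete the existing PVC (this should cascade to the PV)
kubectl -n "$NS" delete pvc nexus-repository-manager-data

# Step 5: restore from the Velero backup
velero restore create --from-backup nexus-ns-backup

# Verify the PV and PVC were recreated before continuing
kubectl -n "$NS" get pvc,pv

# Step 6: scale Nexus back up once the restore completes
kubectl -n "$NS" scale deployment nexus-repository-manager --replicas=1
```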



Sample Nexus values📜

    addons:
      nexusRepositoryManager:
        enabled: true
        values:
          nexus:
            docker:
              enabled: true
              registries:
                - host:
                  port: 5000

Last update: 2023-01-20 by Micah Nagel