
Files that require Big Bang integration testing

See bb MR testing for details on testing changes against the Big Bang umbrella chart

There are certain integrations between the Big Bang ecosystem and this package that require additional testing beyond the package-specific tests run during CI. This testing is required whenever files within those integrations are changed, to avoid causing breakage up through the Big Bang umbrella chart. Currently, these include changes to the Istio implementation within Nexus (see: istio templates, network policy templates, service entry templates).

Be aware that any changes to files listed in the Modifications made to upstream chart section will also require a codeowner to validate the changes using the above method, to ensure that they do not adversely affect the package or its integrations.

Be sure to also test against monitoring locally, as it is integrated by default with these high-impact service control packages. It must be validated with the necessary chart values: beneath the istio.hardened block, set monitoring.enabled to true as part of your dev-overrides.yaml.
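For example, a dev-overrides.yaml for that local validation might look like the following (a minimal sketch; the exact key layout beneath istio.hardened is assumed from the description above, so verify it against the package's values.yaml):

```yaml
# dev-overrides.yaml -- sketch for validating the monitoring integration locally
istio:
  hardened:
    enabled: true
    monitoring:
      enabled: true
```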

Upgrading to a new version

The steps below detail how to update to a new version of the Nexus package.

Note: Nexus does not track an upstream Helm chart repository; we maintain this chart ourselves, as the upstream chart was archived and is no longer supported

  1. Create a development branch and merge request from the GitLab issue.

  2. In chart/Chart.yaml update gluon to the latest version and run helm dependency update chart from the top level of the repo to package it up.

  3. Modify the image.tag value in chart/values.yaml to point to the newest version of Nexus.

  4. Update chart/Chart.yaml to the appropriate versions. The annotation version should match the appVersion.

    version: X.X.X-bb.X
    appVersion: X.X.X
    annotations:
      bigbang.dev/applicationVersions: |
        - Nexus: X.X.X
    
  5. Update CHANGELOG.md adding an entry for the new version and noting all changes (at minimum should include Updated Nexus to x.x.x).

  6. Generate the README.md updates by following the guide in gluon.
  7. Open an MR in “Draft” status and validate that CI passes. CI performs a number of smoke tests against the package, but it is also good to deploy manually to test things that CI does not cover.
  8. Once all manual testing is complete take your MR out of “Draft” status and add the review label.

How to test Nexus

Big Bang has added several CaC (configuration as code) jobs to automate certain configurations that the upstream Nexus Helm chart does not support. Nexus upgrades could break the CaC jobs (which are not currently tested in CI). Note that you will need a license to test the SSO job. The CaC job for repo creation does not require a license. Big Bang has a license for development/testing purposes, which is located in S3 under bb-licenses.

Available CaC Jobs

The following Configuration as Code jobs are available:

  • SAML SSO Job (saml.yaml): Configures SAML identity provider metadata and creates user roles for SSO authentication. Requires Pro license.
  • Repository Job (repository.yaml): Creates and configures Nexus repositories based on values configuration. Works with OSS license.
  • Accept EULA Job (accept-eula.yaml): Automatically accepts the Nexus End User License Agreement. Works with OSS license.
  • Create Metrics User Job (create-metrics-user.yaml): Creates a dedicated user for Prometheus metrics collection. Works with OSS license.
  • Proxy Configuration Job (proxy.yaml): Configures HTTP/HTTPS proxy settings for outbound connections. Works with OSS license.

Proxy Configuration

The proxy configuration feature allows Nexus to route outbound HTTP/HTTPS traffic through a corporate proxy server. This is useful for environments where direct internet access is restricted.

Enabling Proxy Configuration

To enable proxy configuration, add the following to your values:

proxy:
  enabled: true
  request:
    tid: 1
    action: coreui_HttpSettings
    method: update
    type: rpc
    data:
    - httpEnabled: true
      httpHost: "your-proxy-host"
      httpPort: 8080
      httpsEnabled: true
      httpsHost: "your-proxy-host" 
      httpsPort: 8080
      nonProxyHosts:
        - "localhost"
        - "127.0.0.1"
        - "*.internal.company.com"
        - "*.svc.cluster.local"

The proxy job runs as a post-install/post-upgrade hook with weight “20” and includes proper cleanup policies.
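The values-to-API flow can be illustrated in Python: the job takes the proxy.request values block and submits it to Nexus as an RPC call. This is a sketch only — the payload shape mirrors the values block above, but the build_proxy_payload helper and the submission details are illustrative assumptions, not the job's actual implementation:

```python
import json

def build_proxy_payload(values):
    """Build the RPC payload from the chart's proxy.request values block.

    Mirrors the structure shown in the values example above; the helper
    name and this construction step are assumptions for illustration.
    """
    request = values["proxy"]["request"]
    return {
        "tid": request["tid"],
        "action": request["action"],
        "method": request["method"],
        "type": request["type"],
        "data": request["data"],
    }

# Values as they would appear after Helm renders the proxy block
values = {
    "proxy": {
        "enabled": True,
        "request": {
            "tid": 1,
            "action": "coreui_HttpSettings",
            "method": "update",
            "type": "rpc",
            "data": [{
                "httpEnabled": True,
                "httpHost": "your-proxy-host",
                "httpPort": 8080,
            }],
        },
    },
}

# The job would POST this JSON body to the Nexus API
payload = json.dumps(build_proxy_payload(values))
```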

Test Basic Functionality, Repo Job, and Monitoring

Deploy with the following Big Bang override values, in addition to ./docs/assets/configs/examples/policy-overrides-k3d.yaml from the Big Bang repo, to test the repo job and monitoring interaction:

addons:
  nxrm-ha:
    enabled: true
    git:
      tag: null
      branch: "name-of-your-development-branch"
    values:
      realms:
        - NexusAuthenticatingRealm
        - LdapRealm
        - NpmToken
      nexus:
        docker:
          enabled: true
          registries:
            - host: containers.dev.bigbang.mil
              port: 5000
        repository:
          enabled: true
          repo:
            - name: "containers"
              format: "docker"
              type: "hosted"
              repo_data:
                name: "containers"
                online: true
                storage:
                  blobStoreName: "default"
                  strictContentTypeValidation: true
                  writePolicy: "allow_once"
                cleanup:
                  policyNames:
                    - "string"
                component:
                  proprietaryComponents: true
                docker:
                  v1Enabled: false
                  forceBasicAuth: true
                  httpPort: 5000
  1. Log in as admin and run through the setup wizard to set an admin password and disable anonymous access. If you change the admin password from what is in the secret, it will break the metrics job on subsequent reconciliation attempts.
  2. Locally run docker login containers.dev.bigbang.mil using the username admin and the password that you set up. Make sure that you have added containers.dev.bigbang.mil to your /etc/hosts file along with the other hostnames.
  3. Locally run docker tag alpine containers.dev.bigbang.mil/alpine (or tag a similar small image), then push that image with docker push containers.dev.bigbang.mil/alpine. Validate that the image pushes successfully, which confirms that the repo job set up the Docker repo.
  4. Navigate to the Prometheus target page (https://prometheus.dev.bigbang.mil/targets) and validate that the Nexus target shows as up.

NOTE: The realms can be configured by passing the realm names into the realms array.

  • NexusAuthenticatingRealm
  • SamlRealm *+
  • ConanToken
  • Crowd *
  • DefaultRole
  • DockerToken
  • LdapRealm
  • NpmToken
  • NuGetApiKey
  • rutauth-realm
  • User-Token-Realm *

Realms designated with a * require the Pro version license; those with a + will be set automatically when sso.enabled: true.

Test SSO Job

SSO Job testing requires your own deployment of Keycloak because you must change the client settings. This cannot be done with the P1 login.dso.mil instance because we do not have admin privileges to change the configuration there.

Follow the instructions from the corresponding DEVELOPMENT_MAINTENANCE.md testing instructions in the Keycloak Package to deploy Keycloak. Then deploy Nexus with the following values (note that sso.saml.metadata must be filled in with your Keycloak’s information and the license secret populated from the license file):

sso:
  saml:
    # Fill this in with the result from `curl https://keycloak.dev.bigbang.mil/auth/realms/baby-yoda/protocol/saml/descriptor ; echo`
    metadata: 'xxxxxxxxxxxxxxx'

addons:
  nxrm-ha:
    enabled: true
    git:
      tag: null
      branch: "name-of-your-development-branch"
    values:
      upstream:
        statefulset:
          replicaCount: 3
          clustered: true
          container:
            env:
              nexusDBName: nexus
              nexusDBPort: 5432
              install4jAddVmParams: "-Xms2703m -Xmx2703m -Dnexus.datastore.nexus.maximumPoolSize=80"
              jdbcUrlParams: null # Must start with a '?' e.g. "?foo=bar&baz=foo"
              zeroDowntimeEnabled: false
        postgresql:
          primary:
            extendedConfiguration: |
              max_connections = 350
        # -- Base64 encoded license file.
        # cat ./sonatype-license-XXXX-XX-XXXXXXXXXX.lic | base64 -w 0 ; echo
        secret:
          license:
            licenseSecret:
              enabled: true
              fileContentsBase64: "<base64-encoded-license>"
      sso:
        enabled: true
        idp_data:
          entityId: "https://nexus.dev.bigbang.mil/service/rest/v1/security/saml/metadata"
          username: "username"
          firstName: "firstName"
          lastName: "lastName"
          email: "email"
          groups: "groups"
        role:
          - id: "Nexus"
            name: "Keycloak Nexus Group"
            description: "unprivileged users"
            privileges: []
            roles: []
          - id: "Nexus-Admin"
            name: "Keycloak Nexus Admin Group"
            description: "keycloak users as admins"
            privileges:
              - "nx-all"
            roles:
              - "nx-admin"

Once Nexus is up and running complete the following steps to properly configure the Keycloak client:

  1. Get the Nexus x509 cert from Nexus Admin UI (after logging in as admin you can get this from https://nexus.dev.bigbang.mil/service/rest/v1/security/saml/metadata inside of the X509Certificate XML section).
  2. Copy and paste the Nexus single-line cert into a text file and save it:

    vi nexus-x509.txt
    

    Add the following content:

    -----BEGIN CERTIFICATE-----
    put-single-line-nexus-x509-certificate-here
    -----END CERTIFICATE-----
    
  3. Make a valid PEM file with proper wrapping at 64 characters per line:

    fold -w 64 nexus-x509.txt > nexus.pem
    
  4. In Keycloak go to the Nexus client and on the Keys tab (https://keycloak.dev.bigbang.mil/auth/admin/master/console/#/realms/baby-yoda/clients/f975a475-89c7-43bc-bddb-c9d974ff5ac3/saml/keys) import the nexus.pem file in both places, setting the archive format as Certificate PEM.

Return to Nexus and validate that you are able to log in via SSO.
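Steps 2 and 3 above can be sketched as a short script (the certificate value is a placeholder you must replace with the real single-line cert from the Nexus SAML metadata):

```shell
#!/bin/sh
# Wrap a single-line x509 certificate into a valid PEM file.
# CERT is a placeholder -- paste the real value from the Nexus SAML metadata.
CERT="put-single-line-nexus-x509-certificate-here"

{
  echo "-----BEGIN CERTIFICATE-----"
  echo "$CERT"
  echo "-----END CERTIFICATE-----"
} > nexus-x509.txt

# fold wraps long lines at 64 characters, as required by the PEM format
fold -w 64 nexus-x509.txt > nexus.pem
```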

BigBang Integrations

This chart uses a passthrough subchart pattern where the upstream Nexus Repository Manager chart is included as a dependency, and BigBang-specific integrations are added through additional templates and configuration. This approach allows us to maintain upstream compatibility while providing enterprise-grade integrations.

Important: These are BigBang integrations and extensions, not modifications to the upstream chart. The upstream chart remains untouched.

Chart Structure

Dependencies (Chart.yaml)

  • Nexus Repository Manager: Upstream Sonatype chart as dependency
  • Gluon: BigBang library chart for testing and common patterns
  • PostgreSQL: Embedded database for High Availability mode
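As a rough sketch, these dependencies follow the standard Helm layout in chart/Chart.yaml (the names, versions, and condition key below are illustrative assumptions — consult chart/Chart.yaml for the real entries):

```yaml
# Illustrative only -- see chart/Chart.yaml for the actual entries
dependencies:
  - name: nexus-repository-manager   # upstream Sonatype chart (name assumed)
    version: X.X.X
  - name: gluon                      # Big Bang library chart
    version: X.X.X
  - name: postgresql                 # embedded database for HA mode
    version: X.X.X
    condition: postgresql.enabled    # condition key assumed
```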

BigBang-Specific Templates (templates/bigbang/)

Service Mesh Integration (templates/bigbang/istio/)

  • VirtualService: External routing and ingress configuration
  • AuthorizationPolicies: Granular access control (intra-namespace, monitoring, ingress)
  • PeerAuthentication: mTLS configuration and exceptions for metrics/postgres
  • Sidecar: Resource optimization for sidecar proxy

Observability (templates/bigbang/)

  • ServiceMonitor: Prometheus metrics collection configuration
  • Realm: Custom authentication realm setup

Network Security (templates/bigbang/networkpolicies/)

  • Pod-to-pod communication controls
  • Ingress/egress traffic restrictions
  • Monitoring and database access policies

Configuration as Code Jobs (templates/bigbang/)

  • SAML SSO (saml.yaml): Identity provider integration
  • Repository Creation (repository.yaml): Automated repository setup
  • EULA Acceptance (accept-eula.yaml): License agreement automation
  • Metrics User (create-metrics-user.yaml): Dedicated monitoring user
  • Proxy Configuration (proxy.yaml): Corporate proxy settings

PostgreSQL Templates (templates/postgresql/)

  • High Availability database deployment
  • Service, ConfigMap, StatefulSet, and Secret resources
  • Optimized for Nexus HA requirements

Testing Framework (templates/tests/)

  • Cypress UI Tests: End-to-end user interface testing
  • Script Tests: API and functionality validation
  • RBAC Templates: Test service accounts and permissions
  • Uses gluon 0.9.2+ .base template functions for compatibility

Helper Templates (templates/_helpers.tpl)

  • License key management functions
  • Admin password generation utilities
  • Naming and labeling conventions

Values Configuration (values.yaml)

BigBang Extensions (Top Section)

  • Istio Integration: Service mesh settings, hardening, monitoring
  • Network Policies: Ingress/egress controls, exceptions
  • Monitoring: ServiceMonitor configuration, metrics user creation
  • SSO Configuration: SAML identity provider settings
  • Proxy Settings: HTTP/HTTPS proxy for outbound traffic

Upstream Values (upstream: key)

All upstream chart values are nested under the upstream: key to maintain clean separation and avoid conflicts during upgrades.
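For example, an upstream chart value such as the StatefulSet replica count (as used in the SSO test values earlier) is set through the upstream: key rather than at the top level:

```yaml
upstream:
  statefulset:
    replicaCount: 3   # passed through unchanged to the upstream chart
```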

PostgreSQL Configuration (postgresql: key)

Dedicated configuration for the embedded PostgreSQL chart when HA mode is enabled.

Key Advantages of This Pattern

  1. Upstream Compatibility: No direct modifications to upstream templates
  2. Clean Upgrades: Upstream chart can be updated without merge conflicts
  3. Separation of Concerns: BigBang features isolated from core functionality
  4. Maintainability: Clear distinction between upstream and custom functionality
  5. Flexibility: Can easily add/remove BigBang integrations without affecting core chart

automountServiceAccountToken

The mutating Kyverno policy named update-automountserviceaccounttokens is leveraged to harden all ServiceAccounts in this package with automountServiceAccountToken: false. This policy is configured by namespace in the Big Bang umbrella chart repository at chart/templates/kyverno-policies/values.yaml.

This policy revokes access to the K8s API for Pods utilizing said ServiceAccounts. If a Pod truly requires access to the K8s API (for app functionality), the Pod is added to the pods: array of the same mutating policy. This grants the Pod access to the API, and creates a Kyverno PolicyException to prevent an alert.
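As a rough sketch, granting a Pod such an exception in chart/templates/kyverno-policies/values.yaml looks something like the following (the key names and pod pattern here are illustrative assumptions — check the Big Bang umbrella chart for the real schema):

```yaml
# Illustrative sketch only -- the real schema lives in the Big Bang umbrella chart
update-automountserviceaccounttokens:
  parameters:
    pods:
      - nexus-repository-manager*   # pod name pattern (assumed) needing K8s API access
```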