Installation (OpenShift)

1. Requirements

xltrail can be installed on an OpenShift cluster via a Helm chart. You'll need the following prerequisites:

  • OpenShift cluster
  • oc, the OpenShift CLI
  • Helm, the Kubernetes package manager

This guide covers installing xltrail into your OpenShift cluster. It does not cover setting up routes or ingress options.

2. Install xltrail

2.1 Namespace

Create a namespace for xltrail. We'll use xltrail throughout these instructions, but you may prefer a different name, e.g. xltrail-prod.

oc create namespace xltrail

Since the remaining commands must be run in this namespace, the easiest approach is to set your current context to it:

oc config set-context --current --namespace=xltrail

Confirm that the namespace has been correctly set:

oc config view --minify | grep namespace:

Instead of changing the current context, you can also pass the --namespace=xltrail (or -n xltrail) argument to each command.

2.2 Secrets

Run the following commands to create the required secrets.

  • Docker Registry (make sure to replace both <USERNAME> and <PASSWORD> with the actual values provided by email)

    oc create secret docker-registry xltrail-registry-credentials --docker-server=registry.gitlab.com --docker-username=<USERNAME> --docker-password=<PASSWORD>
    

You also need to link the image pull secret (xltrail-registry-credentials) to the default service account in the xltrail namespace, so that OpenShift can authenticate against the registry when pulling images during pod creation.

    oc secrets link default xltrail-registry-credentials --for=pull
    
  • Secret Key

    oc create secret generic xltrail-secret-key --from-literal=SECRET_KEY=$(head -c 512 /dev/urandom | LC_ALL=C tr -cd 'a-zA-Z0-9' | head -c 64)
    
  • Postgres

    oc create secret generic xltrail-postgresql-password --from-literal=POSTGRES_PASSWORD=$(head -c 512 /dev/urandom | LC_ALL=C tr -cd 'a-zA-Z0-9' | head -c 64)
    

If you're using an externally hosted PostgreSQL instance, provide its password here instead of generating a random one.

  • Minio

    oc create secret generic xltrail-minio-secret-key --from-literal=MINIO_SECRET_KEY=$(head -c 512 /dev/urandom | LC_ALL=C tr -cd 'a-zA-Z0-9' | head -c 64)
    
  • LDAP (optional)
    If you intend to use LDAP for authentication, set the Bind Password here.

    oc create secret generic xltrail-ldap-bind-password --from-literal=LDAP_BIND_PASSWORD=<PASSWORD>
    
  • SMTP Password (optional)
    If you intend to use an SMTP server to send password reset emails, set the SMTP Password here.

    oc create secret generic xltrail-smtp-password --from-literal=SMTP_PASSWORD=<PASSWORD>
    
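Several of the commands above generate a random 64-character password inline. The following sketch shows what that pipeline does; it is runnable locally and needs no cluster:

```shell
# head -c 512 /dev/urandom : read 512 random bytes
# tr -cd 'a-zA-Z0-9'       : keep only alphanumeric bytes (LC_ALL=C avoids
#                            locale-dependent character classes)
# head -c 64               : truncate to the first 64 characters
PASSWORD=$(head -c 512 /dev/urandom | LC_ALL=C tr -cd 'a-zA-Z0-9' | head -c 64)
echo "${#PASSWORD}"
```

Each invocation yields a different value. On average, 512 random bytes leave roughly 120 alphanumeric characters, so truncating to 64 practically always succeeds.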

You should back up the generated secrets so that you can move them to a new cluster if necessary. For example, to retrieve the password of the Postgres user:

oc get secret xltrail-postgresql-password -ojsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode ; echo
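The base64 --decode step is needed because Kubernetes stores secret values base64-encoded. A minimal local illustration of the round-trip (the example value is made up):

```shell
# Encode a value the way Kubernetes stores it in a Secret, then decode it back.
encoded=$(printf '%s' 'hunter2' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"  # hunter2
```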

2.3 Configuration via values.yaml

Installation to an OpenShift cluster requires some custom configuration via the Helm chart's values.yaml file.

The values.yaml below is a good starting point.

Pay special attention to:

  • nginx.resolver
  • git.repositories.fsGroup

The exact values depend on your setup; please check with your cluster administrator.

global:
  imagePullPolicy: Always
  registryCredentialsSecretName: xltrail-registry-credentials
  # If no storageClassName is provided, it uses the default storageClassName of your
  # cluster, as shown by: oc get storageclass
  storageClassName:

xltrail:
  secretKeySecretName: xltrail-secret-key
  # baseUrl: /xltrail
  # authTokenExpiry: 30d
  # maxInvalidPasswordAttempts: 5
  # passwordResetTokenExpiry: 600

  # If you rely on CA Certificates to connect with your Git or LDAP provider, upload
  # them like this:
  # oc create secret generic xltrail-ca-certificates --from-file=<cert1>.crt --from-file=<cert2>.crt
  # Note that the extension must be ".crt". Then uncomment the next line:
  # caCertificatesSecretName: xltrail-ca-certificates

  # During startup, containers check every 5 seconds if the database is up and
  # running. Increase number of retries if the db-migration Job fails
  # dbCheckRetries: 60

  # This is only relevant if you use the Git integration. If your Git repos
  # still sync correctly with this set to 0 (disabled), use 0 for increased
  # security; otherwise leave it at 1. A value of 1 can be required, e.g. when
  # you serve Git repositories over HTTPS with a self-signed certificate
  # gitSslNoVerify: 1

  # LDAP settings, see:
  # https://www.xltrail.com/docs/en/stable/configuration#active-directory--ldap
  # authProvider: ldap
  # ldapUrl:
  # ldapBindDn:
  # ldapBindPasswordSecretName: xltrail-ldap-bind-password
  # ldapBaseDn:
  # ldapUserDn:
  # ldapUserEmailAttribute: mail
  # ldapUserDisplaynameAttribute: cn
  # ldapUserFilter:
  # ldapAdminFilter:
  # ldapRequireCert must be one of: OPT_X_TLS_DEMAND, OPT_X_TLS_NEVER
  # ldapRequireCert: OPT_X_TLS_NEVER

  # SMTP settings
  # smtpHost:
  # smtpPort:
  # smtpSenderEmail:
  # smtpSenderName:
  # smtpUsername:
  # smtpPassword: xltrail-smtp-password

git:
  repositories:
    # This is where the Git repos are stored on disk
    fsGroup: 1000050000 # Common values may range from 1000000000 to 1000059999; check with your admin
    storageSize: 50Gi
    storageClassName:
  crontab:
    # This runs "git gc" to keep the repos from growing unnecessarily
    - "15 0 * * * python /server/scripts/git_gc.py"

nginx:
  imageRegistry: registry.gitlab.com
  imageRepository: xltrail/xltrail/nginx
  resolver: dns-default.openshift-dns.svc

server:
  imageRegistry: registry.gitlab.com
  imageRepository: xltrail/xltrail/server

redis:
  imageRegistry: registry.gitlab.com
  imageRepository: xltrail/xltrail/redis

minio:
  # MinIO is an object store used for diff caching and inter-container communication
  imageRegistry: docker.io
  imageRepository: minio/minio
  imageTag: RELEASE.2023-10-16T04-13-43Z
  storageSize: 10Gi
  storageClassName:
  accessKey: xltrail
  secretKeySecretName: xltrail-minio-secret-key
  # If minioBrowser is "on" (quotes required), MinIO serves a web dashboard that you can expose via port-forwarding:
  # oc port-forward service/xt-minio 9001:9001
  # Then you can access the UI at: http://localhost:9001
  # Note: MinIO doesn't restart automatically when you change the minioBrowser
  # setting; run "oc delete pod/xt-minio-0" to restart it
  minioBrowser: "off"

postgresql:
  external: false
  passwordSecretName: xltrail-postgresql-password
  imageRegistry: docker.io
  imageRepository: postgres
  imageTag: 14.1-alpine
  # The following 2 lines are only used with an internal database via "external: false"
  storageSize: 10Gi
  storageClassName:
  # The following lines are only required when using an external database via "external: true"
  # Uncomment them and MAKE SURE THEY HAVE THE SAME INDENTATION AS EVERYTHING ELSE UNDER postgresql
  # Supported sslMode: disable or require
  # host:
  # port:
  # sslMode:
  # database:
  # username:

pgbouncer:
  # Optional. When enabled, a PgBouncer client-side connection pooler is used.
  enabled: false
  imageRegistry: docker.io
  imageRepository: bitnami/pgbouncer
  imageTag: 1.21.0-debian-11-r10
  minPoolSize: 0
  poolSize: 20

pgweb:
  # Optional. When enabled, provides a web UI for managing the PostgreSQL database
  # This is currently not exposed outside of the cluster but can be accessed
  # via port-forwarding (make sure to run this in the same namespace as you run xltrail):
  # oc port-forward service/xt-pgweb 8081:8081
  # Then you can access the UI at: http://localhost:8081
  enabled: false
  imageRegistry: docker.io
  imageRepository: sosedoff/pgweb
  imageTag: 0.14.1

ingress:
  # Ingress rules; requires an existing installation of ingress-nginx
  enabled: true
  host:
  annotations:
    cert-manager.io/cluster-issuer: xltrail-cert-issuer
  tls:
    # tls requires an existing installation of cert-manager with a certificate issuer that
    # matches the above annotation
    enabled: false
    secret: xltrail-tls

differs:
  # To run the differs on a specific node, provide a label ("key": "value")
  nodeSelector:

web:
  # To run the web server on a specific node, provide a label ("key": "value")
  nodeSelector:
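As an illustration, if you point xltrail at an externally hosted PostgreSQL instance, the postgresql section might look like the following; host, database, and username are placeholders for your environment, and the password comes from the xltrail-postgresql-password secret created earlier:

```yaml
postgresql:
  external: true
  passwordSecretName: xltrail-postgresql-password
  imageRegistry: docker.io
  imageRepository: postgres
  imageTag: 14.1-alpine
  host: postgres.example.com
  port: 5432
  sslMode: require
  database: xltrail
  username: xltrail
```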

2.4 xltrail application

You can now install xltrail with the following Helm commands:

helm repo add xltrail https://xltrail.com/charts
helm repo update
helm upgrade --install xltrail xltrail/xltrail -f values.yaml --version=<VERSION>

Note that the installation will take a couple of minutes. Confirm that all pods show their status as Running by running the following command.

oc get pods
