
Upgrade from 8.1.0.3 to 8.1.0.4

1. Upgrade From 8.1.0.3 to 8.1.0.4 (Selected Services)

AIOps (OIA) Application: From 8.1.0.3 to 8.1.0.4 (Selected Services)

1.1. Prerequisites

Before proceeding with this upgrade, please verify that the below prerequisites are met.

Currently deployed CLI and RDAF services are running the below versions.

  • RDAF Deployment CLI version: 1.4.1

  • Infra Services tag: 1.0.4

  • Platform Services and RDA Worker tag: 8.1.0.x

  • OIA Application Services tag: 8.1.0.x / 8.1.0.3

  • CloudFabrix recommends taking VMware VM snapshots of the VMs where RDA Fabric infra/platform/application services are deployed

Note

  • Check the disk space on all of the Platform and Service VMs using the below command. The used space of each filesystem should be less than 80%
    df -kh
    
rdauser@oia-125-216:~/collab-3.7-upgrade$ df -kh
Filesystem                         Size  Used Avail Use% Mounted on
udev                                32G     0   32G   0% /dev
tmpfs                              6.3G  357M  6.0G   6% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   48G   12G   34G  26% /
tmpfs                               32G     0   32G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                               32G     0   32G   0% /sys/fs/cgroup
/dev/loop0                          64M   64M     0 100% /snap/core20/2318
/dev/loop2                          92M   92M     0 100% /snap/lxd/24061
/dev/sda2                          1.5G  309M  1.1G  23% /boot
/dev/sdf                            50G  3.8G   47G   8% /var/mysql
/dev/loop3                          39M   39M     0 100% /snap/snapd/21759
/dev/sdg                            50G  541M   50G   2% /192.168-data
/dev/loop4                          92M   92M     0 100% /snap/lxd/29619
/dev/loop5                          39M   39M     0 100% /snap/snapd/21465
/dev/sde                            15G  140M   15G   1% /zookeeper
/dev/sdd                            30G  884M   30G   3% /kafka-logs
/dev/sdc                            50G  3.3G   47G   7% /opt
/dev/sdb                            50G   29G   22G  57% /var/lib/docker
/dev/sdi                            25G  294M   25G   2% /graphdb
/dev/sdh                            50G   34G   17G  68% /opensearch
/dev/loop6                          64M   64M     0 100% /snap/core20/2379
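The 80% check above can be scripted rather than eyeballed. Below is a minimal sketch that filters `df -P` output for filesystems above the threshold; a sample listing is inlined for illustration, and in practice you would pipe the real `df -P` output through the same awk filter.

```shell
# Sample 'df -P' output (illustrative); in practice use:  df -P | awk ...
sample='Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sdh 52428800 35651584 16777216 68% /opensearch
/dev/sdb 52428800 47185920 5242880 90% /var/lib/docker'

# Print mount points whose usage percentage exceeds the 80% threshold.
over=$(printf '%s\n' "$sample" | awk 'NR>1 {gsub("%","",$5); if ($5+0 > 80) print $6}')
echo "$over"
```

An empty result means every filesystem is under the threshold and the upgrade can proceed.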
  • On an HA setup, check that all MariaDB nodes are in sync using the below commands before starting the upgrade

Tip

Please run the below commands on the VM host where the RDAF deployment CLI was installed and the rdafk8s setup command was run. The MariaDB configuration is read from the /opt/rdaf/rdaf.cfg file.

MARIADB_HOST=`cat /opt/rdaf/rdaf.cfg | grep -A3 haproxy| grep advertised_external_host | awk '{print $3}'`
MARIADB_USER=`cat /opt/rdaf/rdaf.cfg | grep -A3 mariadb | grep user | awk '{print $3}' | base64 -d`
MARIADB_PASSWORD=`cat /opt/rdaf/rdaf.cfg | grep -A3 mariadb | grep password | awk '{print $3}' | base64 -d`

mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -e "show status like 'wsrep_local_state_comment';"

Please verify that the mariadb cluster state is in Synced state.

+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+

Please run the below command and verify that the mariadb cluster size is 3.

mysql -u$MARIADB_USER -p$MARIADB_PASSWORD -h $MARIADB_HOST -P3307 -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
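The credential-extraction pipelines above can be exercised against a sample config to confirm the parsing logic before touching a live system. This is only an illustrative sketch: the section layout below is assumed to match what the `grep -A3` pipelines expect, and the user/password values are made-up base64 samples, not real credentials.

```shell
# Made-up config fragment in the assumed rdaf.cfg layout; the real commands
# read /opt/rdaf/rdaf.cfg instead of this sample.
cfg='[mariadb]
user = cmRhZg==
password = c2VjcmV0
[haproxy]
advertised_external_host = 192.168.125.63'

# Same parsing pipelines as in the Tip above, applied to the sample text.
MARIADB_HOST=$(printf '%s\n' "$cfg" | grep -A3 haproxy | grep advertised_external_host | awk '{print $3}')
MARIADB_USER=$(printf '%s\n' "$cfg" | grep -A3 mariadb | grep user | awk '{print $3}' | base64 -d)
MARIADB_PASSWORD=$(printf '%s\n' "$cfg" | grep -A3 mariadb | grep password | awk '{print $3}' | base64 -d)
echo "$MARIADB_USER@$MARIADB_HOST"
```

If the echoed user and host look wrong, fix the rdaf.cfg parsing before running the mysql checks.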

Warning

Make sure all of the above pre-requisites are met before proceeding with the upgrade process.

Warning

Kubernetes: Though Kubernetes based RDA Fabric deployment supports zero downtime upgrade, it is recommended to schedule a maintenance window for upgrading RDAF Platform and AIOps services to newer version.

Important

Please make sure full backup of the RDAF platform system is completed before performing the upgrade.

Kubernetes: Please run the below backup command to take the backup of application data.

rdafk8s backup --dest-dir <backup-dir>

Run the below commands on the RDAF Management system and make sure the Kubernetes pods are NOT in a restarting state (applicable only to Kubernetes environments)

kubectl get pods -n rda-fabric -l app_category=rdaf-infra
kubectl get pods -n rda-fabric -l app_category=rdaf-platform
kubectl get pods -n rda-fabric -l app_component=rda-worker 
kubectl get pods -n rda-fabric -l app_name=oia 
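To turn the visual restart check above into a pass/fail test, the RESTARTS column (4th field of `kubectl get pods --no-headers`) can be filtered with awk. This is a hedged sketch with sample output inlined; in practice pipe the real `kubectl get pods -n rda-fabric --no-headers` output (with the same label selectors) through the same filter.

```shell
# Sample '--no-headers' pod listing: NAME READY STATUS RESTARTS AGE
pods='rda-nats-0 2/2 Running 0 122m
rda-nginx-57cc4cfc47-59kzg 1/1 Running 10 3h45m'

# List pods whose restart count is non-zero.
restarted=$(printf '%s\n' "$pods" | awk '$4+0 > 0 {print $1}')
echo "$restarted"
```

An empty result means no pod has restarted; any printed name warrants investigation before the upgrade.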

  • Verify that the RDAF deployment CLI version is 1.4.1 on the VM where the CLI was installed for the docker on-premise registry managing Kubernetes or Non-Kubernetes deployments.
rdafk8s --version
RDAF CLI version: 1.4.1
  • On-premise docker registry service version is 1.0.4
docker ps | grep docker-registry
0889e08f0871   docker1.cloudfabrix.io:443/external/docker-registry:1.0.4   "/entrypoint.sh /bin…"   7 days ago   Up 7 days             deployment-scripts-docker-registry-1
  • RDAF Infrastructure services version is 1.0.4, except for the below services.

  • rda-minio: version is RELEASE.2024-12-18T13-15-44Z

Run the below command to get rdafk8s Infra service details

rdafk8s infra status
+--------------------------+-----------------+-------------------+--------------+--------------------------------+
| Name                     | Host            | Status            | Container Id | Tag                            |
+--------------------------+-----------------+-------------------+--------------+--------------------------------+
| rda-nats                 | 192.168.108.114 | Up 19 Minutes ago | bbb50d2dacc5 | 1.0.4                          |
| rda-minio                | 192.168.108.114 | Up 19 Minutes ago | d26148d4bf44 | RELEASE.2024-12-18T13-15-44Z   |
| rda-mariadb              | 192.168.108.114 | Up 19 Minutes ago | 02975e0eec89 | 1.0.4                          |
| rda-opensearch           | 192.168.108.114 | Up 18 Minutes ago | 1494be76f694 | 1.0.4                          |
+--------------------------+-----------------+-------------------+--------------+--------------------------------+
  • RDAF OIA Application services version is 8.1.0.x / 8.1.0.3

Run the below command to get RDAF App services details

rdafk8s app status
+-------------------------------+-----------------+-----------------+--------------+---------+
| Name                          | Host            | Status          | Container Id | Tag     |
+-------------------------------+-----------------+-----------------+--------------+---------+
| rda-alert-correlator          | 192.168.108.118 | Up 14 Hours ago | afdbbe6453e4 | 8.1.0.1 |
| rda-alert-correlator          | 192.168.108.117 | Up 14 Hours ago | 631b7978dcb0 | 8.1.0.1 |
| rda-alert-ingester            | 192.168.108.117 | Up 14 Hours ago | 33322e0b9cb9 | 8.1.0.1 |
| rda-alert-ingester            | 192.168.108.118 | Up 14 Hours ago | 8178c043bd04 | 8.1.0.1 |
| rda-alert-processor           | 192.168.108.117 | Up 14 Hours ago | b342b582ea1d | 8.1.0.1 |
| rda-alert-processor           | 192.168.108.118 | Up 14 Hours ago | b6f85413c2df | 8.1.0.1 |
+-------------------------------+-----------------+-----------------+--------------+---------+


Warning

Non-Kubernetes: Upgrading RDAF Platform and AIOps application services is a disruptive operation. Schedule a maintenance window before upgrading RDAF Platform and AIOps services to newer version.

Important

Please make sure full backup of the RDAF platform system is completed before performing the upgrade.

Non-Kubernetes: Please run the below backup command to take the backup of application data.

rdaf backup --dest-dir <backup-dir>
Note: Please make sure this backup-dir is mounted across all infra and CLI VMs.

  • Verify that the RDAF deployment CLI version is 1.4.1 on the VM where the CLI was installed for the docker on-premise registry managing Kubernetes or Non-Kubernetes deployments.
rdaf --version
RDAF CLI version: 1.4.1
  • On-premise docker registry service version is 1.0.4
docker ps | grep docker-registry
173d38eebeab   docker1.cloudfabrix.io:443/external/docker-registry:1.0.4   "/entrypoint.sh /bin…"   45 hours ago   Up 45 hours             deployment-scripts-docker-registry-1
  • RDAF Infrastructure services version is 1.0.4, except for the below services.

  • rda-minio: version is RELEASE.2024-12-18T13-15-44Z

Run the below command to get RDAF Infra service details

rdaf infra status
+-------------------+----------------+-------------+--------------+------------------------------+
| Name              | Host           | Status      | Container Id | Tag                          |
+-------------------+----------------+-------------+--------------+------------------------------+
| nats              | 192.168.125.63 | Up 2 months | aff2eb1f37c9 | 1.0.4                        |
| minio             | 192.168.125.63 | Up 2 months | ed6bb3ea036a | RELEASE.2024-12-18T13-15-44Z |
| mariadb           | 192.168.125.63 | Up 2 months | 616a98d6471c | 1.0.4                        |
| opensearch        | 192.168.125.63 | Up 2 months | 7edeede52a9b | 1.0.4                        |
| kafka             | 192.168.125.63 | Up 2 months | d1426429da4c | 1.0.4                        |
| graphdb[operator] | 192.168.125.63 | Up 2 months | 8a53795f6ee4 | 1.0.4                        |
| graphdb[server]   | 192.168.125.63 | Up 2 months | 06c187c7dfa2 | 1.0.4                        |
| haproxy           | 192.168.125.63 | Up 2 months | fde40536be0c | 1.0.4                        |
+-------------------+----------------+-------------+--------------+------------------------------+
  • RDAF OIA Application services version is 8.1.0.x / 8.1.0.3

Run the below command to get RDAF App services details

rdaf app status
+-----------------------------------+----------------+------------+--------------+---------+
| Name                              | Host           | Status     | Container Id | Tag     |
+-----------------------------------+----------------+------------+--------------+---------+
| cfx-rda-app-controller            | 192.168.125.63 | Up 7 weeks | 1bae5abb4e9c | 8.1.0.1 |
| cfx-rda-reports-registry          | 192.168.125.63 | Up 7 weeks | 925a97ecb0a3 | 8.1.0.1 |
| cfx-rda-notification-service      | 192.168.125.63 | Up 7 weeks | 1628da0a7a30 | 8.1.0.1 |
| cfx-rda-file-browser              | 192.168.125.63 | Up 7 weeks | 237c85c6cb9f | 8.1.0.1 |
| cfx-rda-configuration-service     | 192.168.125.63 | Up 7 weeks | 0fe8f3ee7596 | 8.1.0.1 |
| cfx-rda-alert-ingester            | 192.168.125.63 | Up 7 weeks | d58452342e72 | 8.1.0.1 |
| cfx-rda-webhook-server            | 192.168.125.63 | Up 7 weeks | f3578f725d9c | 8.1.0.1 |
+-----------------------------------+----------------+------------+--------------+---------+
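As a quick pre-upgrade audit, the status table above can be parsed to flag any service that is not yet on the expected tag. A sketch, assuming the pipe-delimited table layout shown above (two sample rows are inlined; pipe the real `rdaf app status` output through the same filter, and adjust the expected-tag list to your environment):

```shell
# Two sample rows in the 'rdaf app status' table layout (tag is field 6).
status='| cfx-rda-app-controller | 192.168.125.63 | Up 7 weeks | 1bae5abb4e9c | 8.1.0.3 |
| cfx-rda-alert-ingester | 192.168.125.63 | Up 7 weeks | d58452342e72 | 8.0.0.9 |'

# Print name=tag for any service not on the expected pre-upgrade tag.
stale=$(printf '%s\n' "$status" | awk -F'|' '
  {gsub(/ /,"",$2); gsub(/ /,"",$6)}
  $6 != "" && $6 != "8.1.0.3" {print $2 "=" $6}')
echo "$stale"
```

Any printed service should be brought to a supported 8.1.0.x tag before continuing.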

1.2. Upgrade Steps

1.2.1 RDAF Deployment CLI Upgrade

Please follow the below given steps.

Note

Upgrade RDAF Deployment CLI on both on-premise docker registry VM and RDAF Platform's management VM if provisioned separately.

Log in to the VM where the rdaf deployment CLI was installed for the docker on-premise registry and for managing Kubernetes or Non-Kubernetes deployments.

  • Download the RDAF Deployment CLI's newer version 1.4.1.1 bundle.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/rdafcli-1.4.1.1.tar.gz
  • Upgrade the rdafk8s CLI to version 1.4.1.1
pip install --user rdafcli-1.4.1.1.tar.gz
  • Verify the installed rdafk8s CLI version is upgraded to 1.4.1.1
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.4.1.1 bundle and copy it to RDAF CLI management VM on which rdaf deployment CLI was installed.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.4.1.1/offline-ubuntu-1.4.1.1.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-ubuntu-1.4.1.1.tar.gz
  • Change the directory to the extracted directory
cd offline-ubuntu-1.4.1.1
  • Upgrade the rdafk8s CLI to version 1.4.1.1
pip install --user rdafcli-1.4.1.1.tar.gz -f ./ --no-index
  • Verify the installed rdafk8s CLI version is upgraded to 1.4.1.1
rdafk8s --version
  • Download the RDAF Deployment CLI's newer version 1.4.1.1 bundle.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/rdafcli-1.4.1.1.tar.gz
  • Upgrade the rdaf CLI to version 1.4.1.1
pip install --user rdafcli-1.4.1.1.tar.gz
  • Verify the installed rdaf CLI version is upgraded to 1.4.1.1
rdaf --version
  • Download the RDAF Deployment CLI's newer version 1.4.1.1 bundle and copy it to RDAF CLI management VM on which rdaf deployment CLI was installed.
wget https://macaw-amer.s3.us-east-1.amazonaws.com/releases/rdaf-platform/1.4.1.1/offline-ubuntu-1.4.1.1.tar.gz
  • Extract the rdaf CLI software bundle contents
tar -xvzf offline-ubuntu-1.4.1.1.tar.gz
  • Change the directory to the extracted directory
cd offline-ubuntu-1.4.1.1
  • Upgrade the rdaf CLI to version 1.4.1.1
pip install --user rdafcli-1.4.1.1.tar.gz -f ./ --no-index
  • Verify the installed rdaf CLI version is upgraded to 1.4.1.1
rdaf --version
RDAF CLI version: 1.4.1.1
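After upgrading, the reported version string can be checked programmatically instead of by eye. A minimal sketch using the output format shown above (the sample string stands in for `$(rdaf --version)` or `$(rdafk8s --version)`):

```shell
# Sample version string; in practice use: reported=$(rdaf --version)
reported="RDAF CLI version: 1.4.1.1"

# Take the last whitespace-separated token as the version number.
version=${reported##* }
if [ "$version" = "1.4.1.1" ]; then
  echo "CLI upgrade OK"
else
  echo "unexpected CLI version: $version" >&2
fi
```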

1.2.2 Download the new Docker Images

Log in to the VM where the rdaf deployment CLI was installed for the docker on-premise registry and for managing Kubernetes or Non-Kubernetes deployments.

Download the new docker image tags for RDAF Platform and OIA (AIOps) Application services and wait until all of the images are downloaded.

To fetch the new image tags into the registry, please use the below command

rdaf registry fetch --tag 8.1.0.4


Note

If the download of the images fails, please re-execute the above command.

Run the below command to verify that the above-mentioned tag is downloaded for all of the RDAF Platform and OIA (AIOps) Application services.

rdaf registry list-tags 

Please make sure 8.1.0.4 image tag is downloaded for the below RDAF OIA (AIOps) Application services.

  • rda-configuration-service
  • rda-alert-ingester
  • rda-event-consumer
  • rda-alert-processor
  • rda-collaboration
  • rda-irm-service
  • rda-alert-processor-companion
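The per-service tag check can be looped. The line layout of the `rdaf registry list-tags` output below is an assumption for illustration only; adapt the grep pattern to the real output format.

```shell
# Made-up list-tags style output: service name followed by its tags.
tags='rda-alert-ingester 8.1.0.3,8.1.0.4
rda-event-consumer 8.1.0.4
rda-collaboration 8.1.0.3'

# Collect every required service that does not show the 8.1.0.4 tag.
missing=""
for svc in rda-alert-ingester rda-event-consumer rda-collaboration; do
  printf '%s\n' "$tags" | grep -q "^$svc .*8\.1\.0\.4" || missing="$missing $svc"
done
echo "missing:$missing"
```

Re-run `rdaf registry fetch --tag 8.1.0.4` for any service reported missing.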

Downloaded Docker images are stored under the below path.

/opt/rdaf-registry/data/docker/registry/v2/ or /opt/rdaf/data/docker/registry/v2/

Run the below command to check the filesystem's disk usage on offline registry VM where docker images are pulled.

df -h /opt

If necessary, older image tags that are no longer in use can be deleted to free up disk space using the command below.

Note

Run the command below if /opt occupies more than 80% of the disk space or if the free capacity of /opt is less than 25GB.

rdaf registry delete-images --tag <tag1,tag2>
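The 80%-used / 25GB-free rule from the note above can be checked with a short guard. A sketch assuming a POSIX `df` and an existing /opt mount; the suggested cleanup command is only echoed, never run.

```shell
# Usage percent and available KB for the filesystem holding /opt.
usage=$(df -P /opt | awk 'NR==2 {gsub("%","",$5); print $5}')
avail_kb=$(df -P /opt | awk 'NR==2 {print $4}')

# 25 GB expressed in 1K blocks (df -P reports 1024-byte blocks).
floor_kb=$((25 * 1024 * 1024))

if [ "$usage" -gt 80 ] || [ "$avail_kb" -lt "$floor_kb" ]; then
  echo "cleanup recommended: rdaf registry delete-images --tag <tag1,tag2>"
fi
```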

Note

The MetalLB section applies exclusively to Kubernetes (K8s) deployments.

  • For documentation on offline MetalLB update and installation, please click here

1.2.3 Upgrade RDAF Infra Services

1.2.3.1 Upgrade Nginx for External URL Access

Note

The below steps are applicable only when an external URL is configured.

  • Please upgrade Nginx using the following command.
rdafk8s infra upgrade --tag 1.0.4 --service nginx
2026-01-22 06:18:55,584 [rdaf.cmd.infra] INFO     - Upgrading nginx
Release "rda-nginx" has been upgraded. Happy Helming!
NAME: rda-nginx
LAST DEPLOYED: Thu Jan 22 06:18:55 2026
NAMESPACE: rda-fabric
STATUS: deployed
REVISION: 3
TEST SUITE: None
  • Use the following command to verify that Nginx is running.
kubectl get pods -n rda-fabric  | grep nginx
rda-nginx-57cc4cfc47-59kzg                       1/1     Running            10 (3h24m ago)   3h45m
rda-nginx-57cc4cfc47-g8hg9                       1/1     Running            10 (3h23m ago)   3h45m
  • Run the below command on the RDAF Management system and make sure the Kubernetes pods are NOT restarting and all are in Running state (applicable only to Kubernetes environments)
kubectl get pods -n rda-fabric -l app_category=rdaf-infra
NAME                                            READY   STATUS    RESTARTS   AGE
arango-rda-arangodb-operator-77cdc8d659-8mzrh   1/1     Running   0          122m
arango-rda-arangodb-operator-77cdc8d659-dhlpk   1/1     Running   0          122m
opensearch-cluster-master-0                     1/1     Running   0          122m
opensearch-cluster-master-1                     1/1     Running   0          122m
opensearch-cluster-master-2                     1/1     Running   0          122m
rda-arangodb-agnt-0-473f99                      1/1     Running   0          121m
rda-arangodb-agnt-1-473f99                      1/1     Running   0          121m
rda-arangodb-agnt-2-473f99                      1/1     Running   0          121m
rda-arangodb-crdn-8fol23rk-473f99               1/1     Running   0          121m
rda-arangodb-crdn-honcbycs-473f99               1/1     Running   0          121m
rda-arangodb-crdn-z2jylamt-473f99               1/1     Running   0          121m
rda-arangodb-prmr-0-473f99                      1/1     Running   0          121m
rda-arangodb-prmr-1-473f99                      1/1     Running   0          121m
rda-arangodb-prmr-2-473f99                      1/1     Running   0          121m
rda-haproxy-6kxvv                               1/1     Running   0          122m
rda-haproxy-6wzt2                               1/1     Running   0          122m
rda-kafka-controller-0                          1/1     Running   0          122m
rda-kafka-controller-1                          1/1     Running   0          122m
rda-kafka-controller-2                          1/1     Running   0          122m
rda-mariadb-mariadb-galera-0                    1/1     Running   0          122m
rda-mariadb-mariadb-galera-1                    1/1     Running   0          121m
rda-mariadb-mariadb-galera-2                    1/1     Running   0          121m
rda-minio-0                                     1/1     Running   0          122m
rda-minio-1                                     1/1     Running   0          122m
rda-minio-2                                     1/1     Running   0          122m
rda-minio-3                                     1/1     Running   0          122m
rda-nats-0                                      2/2     Running   0          122m
rda-nats-1                                      2/2     Running   0          122m
rda-nats-box-7ddcc96856-9c4c9                   1/1     Running   0          122m
rda-nats-box-7ddcc96856-r4g6d                   1/1     Running   0          122m
rda-nginx-57cc4cfc47-d6w2j                      1/1     Running   0          92m
rda-nginx-57cc4cfc47-w9fm4                      1/1     Running   0          92m
  • Run the below command to get rdafk8s Infra service details.
rdafk8s infra status
+--------------------------+----------------+----------------+--------------+------------------------------+
| Name                     | Host           | Status         | Container Id | Tag                          |
+--------------------------+----------------+----------------+--------------+------------------------------+
| rda-nats                 | 192.168.108.13 | Up 2 Hours ago | d30f5878a938 | 1.0.4                        |
| rda-nats                 | 192.168.108.14 | Up 2 Hours ago | 848ba88d69fc | 1.0.4                        |
| rda-minio                | 192.168.108.13 | Up 2 Hours ago | 258c2cac4921 | RELEASE.2024-12-18T13-15-44Z |
| rda-minio                | 192.168.108.14 | Up 2 Hours ago | cf8a409c987a | RELEASE.2024-12-18T13-15-44Z |
| rda-minio                | 192.168.108.16 | Up 2 Hours ago | af69876c22bf | RELEASE.2024-12-18T13-15-44Z |
| rda-minio                | 192.168.108.17 | Up 2 Hours ago | 0487452f0e26 | RELEASE.2024-12-18T13-15-44Z |
| rda-mariadb              | 192.168.108.13 | Up 2 Hours ago | 9f705e4281e4 | 1.0.4                        |
| rda-mariadb              | 192.168.108.14 | Up 2 Hours ago | f5e79577db5d | 1.0.4                        |
| rda-mariadb              | 192.168.108.16 | Up 2 Hours ago | 84ee20d01c08 | 1.0.4                        |
| rda-nginx                | 192.168.108.13 | Up 1 Hours ago | 6d2a4e570728 | 1.0.4                        |
| rda-nginx                | 192.168.108.14 | Up 1 Hours ago | 4410b31b8a84 | 1.0.4                        |
+--------------------------+----------------+----------------+--------------+------------------------------+
  • Please upgrade Nginx using the following command.
rdaf infra upgrade --tag 1.0.4 --service nginx
2026-01-30 14:36:28,418 [rdaf.cmd.infra] INFO     - Upgrading nginx
2026-01-30 14:36:28,766 [rdaf.component.nginx] INFO     - Upgrading nginx on host 192.168.133.60
[+] Pulling 1/0
  nginx Pulled                                                                                                                                                 0.1s
WARN[0000] Found orphan containers ([infra-qdrant-1 infra-haproxy-1 infra-graphdb-1 infra-kafka-1 infra-opensearch-1 infra-mariadb-1 infra-minio-1 infra-nats-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 2/2
  Container infra-nginx-1                                                                                               Started                                0.8s
 ! nginx Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.                                        0.0s
2026-01-30 14:36:30,192 [rdaf.component.nginx] INFO     - Upgrading nginx on host 192.168.133.61
[+] Pulling 1/0
  nginx Pulled                                                            0.0s
WARN[0000] Found orphan containers ([infra-qdrant-1 infra-haproxy-1 infra-graphdb-1 infra-kafka-1 infra-opensearch-1 infra-mariadb-1 infra-minio-1 infra-nats-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 2/2
  Container infra-nginx-1                                                                                               Started0.9s
 ! nginx Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap. 0.0s

+----------------------+----------------+--------------+--------------+------------------------------+
| Name                 | Host           | Status       | Container Id | Tag                          |
+----------------------+----------------+--------------+--------------+------------------------------+
| nats                 | 192.168.133.60 | Up 8 hours   | 76b8a46b9818 | 1.0.4                        |
| nats                 | 192.168.133.61 | Up 8 hours   | e761b35fb87f | 1.0.4                        |
| minio                | 192.168.133.60 | Up 8 hours   | c9aa49362f4b | RELEASE.2024-12-18T13-15-44Z |
| minio                | 192.168.133.61 | Up 8 hours   | 49b15a2fb621 | RELEASE.2024-12-18T13-15-44Z |
| minio                | 192.168.133.62 | Up 8 hours   | 45efec32e23a | RELEASE.2024-12-18T13-15-44Z |
| minio                | 192.168.133.63 | Up 8 hours   | 91dce952192b | RELEASE.2024-12-18T13-15-44Z |
| mariadb              | 192.168.133.60 | Up 8 hours   | e8e5892efe97 | 1.0.4                        |
| mariadb              | 192.168.133.61 | Up 8 hours   | 2541a1f63c7e | 1.0.4                        |
| mariadb              | 192.168.133.62 | Up 8 hours   | 24e950ea0142 | 1.0.4                        |
| opensearch           | 192.168.133.60 | Up 8 hours   | 46b884345420 | 1.0.4                        |
| opensearch           | 192.168.133.61 | Up 8 hours   | f7389aa8c72d | 1.0.4                        |
| opensearch           | 192.168.133.62 | Up 8 hours   | eccc419286bc | 1.0.4                        |
| kafka                | 192.168.133.60 | Up 8 hours   | d62cd36efb46 | 1.0.4                        |
| kafka                | 192.168.133.61 | Up 8 hours   | ceb6b7d0dd0b | 1.0.4                        |
| kafka                | 192.168.133.62 | Up 8 hours   | cc4c9ece81d1 | 1.0.4                        |
| graphdb[operator]    | 192.168.133.60 | Up 8 hours   | 22292d4612f4 | 1.0.4                        |
| graphdb[agent]       | 192.168.133.60 | Up 8 hours   | 9df5fcf1c1f2 | 1.0.4                        |
| graphdb[server]      | 192.168.133.60 | Up 8 hours   | b9772e56da46 | 1.0.4                        |
| graphdb[coordinator] | 192.168.133.60 | Up 8 hours   | 0bb60dff4baa | 1.0.4                        |
| graphdb[operator]    | 192.168.133.61 | Up 8 hours   | 4a235208e072 | 1.0.4                        |
| graphdb[agent]       | 192.168.133.61 | Up 8 hours   | f0da0b0f2246 | 1.0.4                        |
| graphdb[server]      | 192.168.133.61 | Up 8 hours   | 82ec7f612a79 | 1.0.4                        |
| graphdb[coordinator] | 192.168.133.61 | Up 8 hours   | 77b5aff2df45 | 1.0.4                        |
| graphdb[operator]    | 192.168.133.62 | Up 8 hours   | 4db2e8eec565 | 1.0.4                        |
| graphdb[agent]       | 192.168.133.62 | Up 8 hours   | 1d254432aa4f | 1.0.4                        |
| graphdb[server]      | 192.168.133.62 | Up 8 hours   | 7024a2881846 | 1.0.4                        |
| graphdb[coordinator] | 192.168.133.62 | Up 8 hours   | 73522095a870 | 1.0.4                        |
| haproxy              | 192.168.133.60 | Up 2 hours   | 2f0d834ff0ad | 1.0.4                        |
| haproxy              | 192.168.133.61 | Up 2 hours   | 46e35cfa883c | 1.0.4                        |
| keepalived           | 192.168.133.60 | active       | N/A          | N/A                          |
| keepalived           | 192.168.133.61 | active       | N/A          | N/A                          |
| nginx                | 192.168.133.60 | Up 2 seconds | b27b49ebb233 | 1.0.4                        |
| nginx                | 192.168.133.61 | Up 1 second  | 26d70da0a3e9 | 1.0.4                        |
| qdrant               | 192.168.133.60 | Up 2 hours   | 25d063055ceb | 1.0.4                        |
| qdrant               | 192.168.133.61 | Up 2 hours   | 407272cba363 | 1.0.4                        |
| qdrant               | 192.168.133.62 | Up 2 hours   | 0e4d8fde048f | 1.0.4                        |
+----------------------+----------------+--------------+--------------+------------------------------+
  • Run the below command to get rdaf Infra service details.
rdaf infra status
+---------------+-----------------+-------------+--------------+---------------+
| Name          | Host            | Status      | Container Id | Tag           |
+---------------+-----------------+-------------+--------------+---------------+
| nats          | 192.168.108.122 | Up 28 hours | 7900b808b5b0 | 1.0.4         |
| nats          | 192.168.108.123 | Up 28 hours | 2715919a35d6 | 1.0.4         |
| minio         | 192.168.108.122 | Up 28 hours | 708780e0775f | RELEASE.2024- |
|               |                 |             |              | 12-18T13-15-  |
|               |                 |             |              | 44Z           |
| minio         | 192.168.108.122 | Up 28 hours | 4ebce8c17ffb | RELEASE.2024- |
|               |                 |             |              | 12-18T13-15-  |
|               |                 |             |              | 44Z           |
| minio         | 192.168.108.122 | Up 28 hours | 0e1ec8ba2c86 | RELEASE.2024- |
|               |                 |             |              | 12-18T13-15-  |
|               |                 |             |              | 44Z           |
| minio         | 192.168.108.122 | Up 28 hours | 1371d073692f | RELEASE.2024- |
|               |                 |             |              | 12-18T13-15-  |
|               |                 |             |              | 44Z           |
| mariadb       | 192.168.108.122 | Up 28 hours | 489949c760a2 | 1.0.4         |
| mariadb       | 192.168.108.123 | Up 28 hours | 51a6a923a66e | 1.0.4         |
| mariadb       | 192.168.108.124 | Up 28 hours | d7fef2b28088 | 1.0.4         |
| opensearch    | 192.168.108.122 | Up 28 hours | 1c06e031aea3 | 1.0.4         |
| opensearch    | 192.168.108.123 | Up 28 hours | 19c74ee17727 | 1.0.4         |
| opensearch    | 192.168.108.124 | Up 28 hours | bcd75bb75eec | 1.0.4         |
| nginx         | 192.168.133.60  | Up 28 hours | f32b6280d57e | 1.0.4         |
| nginx         | 192.168.133.61  | Up 28 hours | f37cd20d1920 | 1.0.4         |
+---------------+-----------------+-------------+--------------+---------------+
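Rather than eyeballing every row of these status tables, the output can be piped through a small filter that surfaces any container whose Status column is not `Up`. This is only a sketch against the pipe-delimited table layout shown above (the column positions are an assumption; adjust the field numbers if your CLI version prints differently). Sample rows are fed in here instead of live `rdaf infra status` output:

```shell
# Flag any infra container whose Status column does not start with "Up".
# Assumes the pipe-delimited table layout shown above (sample rows used here).
check_not_up() {
  awk -F'|' '
    # Cells: $2=Name $3=Host $4=Status; border/header rows are skipped
    $2 !~ /Name/ && NF > 4 {
      gsub(/^ +| +$/, "", $2); gsub(/^ +| +$/, "", $3); gsub(/^ +| +$/, "", $4)
      if ($4 !~ /^Up/) print $2, "on", $3, "->", $4
    }'
}

printf '%s\n' \
  '| nats    | 192.168.108.122 | Up 28 hours | 7900b808b5b0 | 1.0.4 |' \
  '| mariadb | 192.168.108.123 | Exited (1)  | 51a6a923a66e | 1.0.4 |' \
  | check_not_up
# -> mariadb on 192.168.108.123 -> Exited (1)
```

On a live system, `rdaf infra status | check_not_up` producing no output is a quick indication that every infra container is up.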

1.2.4 Update Environment Variables in values.yaml

Alert Ingester - Add Environment Variables

  • Before upgrading the Alert Ingester service, ensure the following environment variable is added under the cfx-rda-alert-ingester section in the values.yaml file (file path: /opt/rdaf/deployment-scripts/values.yaml).

  • Environment Variable to Add

INBOUND_PARTITION_WORKERS_MAX
cfx-rda-alert-ingester:
  mem_limit: 6G
  memswap_limit: 6G
  privileged: true
  environment:
    DISABLE_REMOTE_LOGGING_CONTROL: 'no'
    RDA_ENABLE_TRACES: 'yes'
    RDA_SELF_HEALTH_RESTART_AFTER_FAILURES: 3
    INBOUND_PARTITION_WORKERS_MAX: 3
  hosts:
  - 192.168.109.53
  - 192.168.109.54
  cap_add:
  - SYS_PTRACE
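After editing, a quick grep can confirm the variable landed inside the right section rather than at the top level of the file. A minimal sketch, assuming the section layout shown above; it writes a scratch copy rather than touching the real /opt/rdaf/deployment-scripts/values.yaml:

```shell
# Verify INBOUND_PARTITION_WORKERS_MAX is present within the
# cfx-rda-alert-ingester block of values.yaml. A scratch file is used
# here; point VALUES at the real path on the platform VM.
VALUES=$(mktemp)
cat > "$VALUES" <<'EOF'
cfx-rda-alert-ingester:
  mem_limit: 6G
  environment:
    INBOUND_PARTITION_WORKERS_MAX: 3
EOF

# Print from the section header up to the next top-level key, then search.
if sed -n '/^cfx-rda-alert-ingester:/,/^[^ ]/p' "$VALUES" \
     | grep -q 'INBOUND_PARTITION_WORKERS_MAX'; then
  echo "variable present"
else
  echo "variable missing" >&2
fi
```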

1.2.5 Upgrade OIA Application Services

Note

If the following environment variables exist in values.yaml under the alert-ingester and event-consumer services (on the VM where the CLI is installed, the file is at /opt/rdaf/deployment-scripts/values.yaml), remove them before upgrading the OIA services.

  • OUTBOUND_TOPIC_WORKERS_MAX
  • OUTBOUND_WORKERS_MAX

Step-1: Run the below command to initiate the upgrade of the following RDAF OIA application services.

rdafk8s app upgrade OIA --tag 8.1.0.4 --service rda-configuration-service --service rda-alert-ingester --service rda-event-consumer --service rda-alert-processor --service rda-alert-correlator --service rda-collaboration --service rda-irm-service --service rda-alert-processor-companion

Step-2: Run the below command to check the status of the newly upgraded PODs.

kubectl get pods -n rda-fabric -l app_name=oia

Step-3: Run the below command to identify the Terminating OIA application service PODs that need to be put into maintenance mode. It lists the POD IDs of those services along with the rdac maintenance command to run.

python maint_command.py

Step-4: Copy and paste the rdac maintenance command, as shown below.

rdac maintenance start --ids <comma-separated-list-of-oia-app-pod-ids>
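The comma-separated pod-id list can also be assembled directly from `rdac pods` output instead of copying IDs by hand. A sketch assuming the pipe-delimited table layout shown elsewhere in this guide (ID in the fifth cell; the field position is an assumption), fed sample rows here:

```shell
# Build the comma-separated pod-id argument for
# `rdac maintenance start --ids ...` from rdac-pods-style table rows.
# Column positions assume the table layout shown in this guide.
ids() {
  awk -F'|' '$2 ~ /App/ { gsub(/ /, "", $6); print $6 }' | paste -sd, -
}

printf '%s\n' \
  '| App   | alert-ingester  | True | rda-alert-inge | 6a6e464d |   |' \
  '| App   | alert-processor | True | rda-alert-proc | a880e491 |   |' \
  | ids
# -> 6a6e464d,a880e491
```

On a live system this would be something like `rdac maintenance start --ids "$(rdac pods | ids)"`, after checking the generated list matches what `maint_command.py` reported.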

Step-5: Run the below command to verify the maintenance mode status of the OIA application services.

rdac pods --show_maintenance | grep False

Step-6: Run the below command to delete the Terminating OIA application service PODs.

for i in `kubectl get pods -n rda-fabric -l app_name=oia | grep 'Terminating' | awk '{print $1}'`; do kubectl delete pod $i -n rda-fabric --force; done
kubectl get pods -n rda-fabric -l app_name=oia

Note

Wait for 120 seconds and repeat Step-2 through Step-6 for the rest of the OIA application service PODs.

Please wait until all of the new OIA application service PODs are in the Running state, then run the below command to verify their status and confirm they are running version 8.1.0.4.

rdafk8s app status
+-------------------------------+-----------------+-----------------+--------------+-----------+
| Name                          | Host            | Status          | Container Id | Tag       |
+-------------------------------+-----------------+-----------------+--------------+-----------+
| rda-alert-correlator          | 192.168.108.120 | Up 5 Hours ago  | de58c823d265 | 8.1.0.4   |
| rda-alert-correlator          | 192.168.108.119 | Up 5 Hours ago  | 7ccfb9832d63 | 8.1.0.4   |
| rda-alert-ingester            | 192.168.108.120 | Up 5 Hours ago  | d9722596015a | 8.1.0.4   |
| rda-alert-ingester            | 192.168.108.119 | Up 5 Hours ago  | 2d73cfed8226 | 8.1.0.4   |
| rda-alert-processor           | 192.168.108.120 | Up 5 Hours ago  | 3349c4455841 | 8.1.0.4   |
| rda-alert-processor           | 192.168.108.119 | Up 5 Hours ago  | 3f17dde3eed2 | 8.1.0.4   |
| rda-alert-processor-companion | 192.168.108.119 | Up 5 Hours ago  | ec87f1383f2a | 8.1.0.4   |
| rda-alert-processor-companion | 192.168.108.120 | Up 5 Hours ago  | eda5b39c3da1 | 8.1.0.4   |
| rda-app-controller            | 192.168.108.119 | Up 23 Hours ago | cb51cf3875ad | 8.1.0.1   |
| rda-app-controller            | 192.168.108.120 | Up 23 Hours ago | 83b2d405f6ee | 8.1.0.1   |
| rda-collaboration             | 192.168.108.119 | Up 5 Hours ago  | a16102be5b3f | 8.1.0.4   |
| rda-collaboration             | 192.168.108.120 | Up 5 Hours ago  | b9779202b517 | 8.1.0.4   |
| rda-configuration-service     | 192.168.108.119 | Up 23 Hours ago | 2666a70fd84b | 8.1.0.4   |
| rda-configuration-service     | 192.168.108.120 | Up 23 Hours ago | fa90a76ec426 | 8.1.0.4   |
| rda-event-consumer            | 192.168.108.120 | Up 5 Hours ago  | 339cb5f787a7 | 8.1.0.4   |
| rda-event-consumer            | 192.168.108.119 | Up 5 Hours ago  | 85a539443123 | 8.1.0.4   |
+-------------------------------+-----------------+-----------------+--------------+-----------+

Step-7: Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host           | ID       | Site        | Age      |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------|
| App   | alert-ingester                         | True        | rda-alert-inge | 6a6e464d |             | 19:19:06 |      8 |        31.33 |               |              |
| App   | alert-ingester                         | True        | rda-alert-inge | 7f6b42a0 |             | 19:19:23 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | a880e491 |             | 19:19:51 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | b684609e |             | 19:19:48 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 874f3b33 |             | 19:18:54 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 70cadaa7 |             | 19:18:35 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | bde06c15 |             | 19:44:20 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | 47b9eb02 |             | 19:44:08 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-d | faa33e1b |             | 19:44:22 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-d | 36083c36 |             | 19:44:16 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | 5fd3c3f4 |             | 19:19:39 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | d66e5ce8 |             | 19:19:26 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | ecbb535c |             | 19:44:16 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | 9a05db5a |             | 19:44:06 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 61b3c53b |             | 19:18:48 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 09b9474e |             | 19:18:27 |      8 |        31.33 |               |              |
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+

Run the below command to check that all services have an ok status and do not report any failure messages.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                                                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------|
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | service-status                                      | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | 192.168-connectivity                                  | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                    |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | service-initialization-status                       | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | kafka-connectivity                                  | ok       | Cluster=dKnnkaYSPELK8DBUk0rPig, Broker=0, Brokers=[0, 1, 2]                                                                 |
| rda_app   | alert-ingester                         | rda-alert-in | 6a6e464d |             | kafka-consumer                                      | ok       | Health: [{'387c0cb507b84878b9d0b15222cb4226.inbound-events': 0, '387c0cb507b84878b9d0b15222cb4226.mapped-events': 0}, {}]   |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | service-status                                      | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | 192.168-connectivity                                  | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                    |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | service-initialization-status                       | ok       |                                                                                                                             |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | kafka-consumer                                      | ok       | Health: [{'387c0cb507b84878b9d0b15222cb4226.inbound-events': 0, '387c0cb507b84878b9d0b15222cb4226.mapped-events': 0}, {}]   |
| rda_app   | alert-ingester                         | rda-alert-in | 7f6b42a0 |             | kafka-connectivity                                  | ok       | Cluster=dKnnkaYSPELK8DBUk0rPig, Broker=1, Brokers=[0, 1, 2]                                                                 |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-status                                      | ok       |                                                                                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | 192.168-connectivity                                  | ok       |                                                                                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-dependency:cfx-app-controller               | ok       | 2 pod(s) found for cfx-app-controller                                                                                       |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                                                                                    |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | service-initialization-status                       | ok       |                                                                                                                             |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | kafka-connectivity                                  | ok       | Cluster=dKnnkaYSPELK8DBUk0rPig, Broker=1, Brokers=[0, 1, 2]                                                                 |
| rda_app   | alert-processor                        | rda-alert-pr | a880e491 |             | DB-connectivity                                     | ok       |                                                                                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------------------------------+
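Rather than scanning the full healthcheck table, it is usually enough to filter for rows whose Status cell is anything other than ok. A sketch over the table format above (column positions are an assumption; sample rows are fed in here, but live `rdac healthcheck` output can be piped through the same filter):

```shell
# Print only healthcheck rows whose Status column is not "ok".
# Assumes the pipe-delimited table layout shown above (sample rows here).
not_ok() {
  awk -F'|' 'NF > 8 && $2 ~ /rda_app/ {
    gsub(/ /, "", $8)            # trim the Status cell ($8)
    if ($8 != "ok") print $0
  }'
}

printf '%s\n' \
  '| rda_app | alert-ingester | h1 | 6a6e464d |  | service-status | ok |  |' \
  '| rda_app | alert-ingester | h1 | 6a6e464d |  | DB-connectivity | failed |  |' \
  | not_ok
```

An empty result from `rdac healthcheck | not_ok` is a quick sign that no service is reporting a failure.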

Run the below command to initiate a zero-downtime (rolling) upgrade of the following RDA Fabric OIA application services.

rdaf app upgrade --tag 8.1.0.4 --rolling-upgrade --timeout 10 --service cfx-rda-configuration-service --service cfx-rda-alert-ingester --service cfx-rda-event-consumer --service cfx-rda-alert-processor --service cfx-rda-alert-correlator --service cfx-rda-collaboration --service cfx-rda-irm-service --service cfx-rda-alert-processor-companion

Note

The --timeout value (10) in the above command is specified in seconds.

Note

The rolling-upgrade option upgrades the OIA application services running in high-availability mode on one VM at a time in sequence. It completes the upgrade of OIA application services running on VM-1 before upgrading them on VM-2, followed by VM-3, and so on.

After completing the OIA application services upgrade on all VMs, it will ask for user confirmation to delete the older version OIA application service PODs.

rdauser@infra108122:~$ rdaf app upgrade OIA --service cfx-rda-alert-ingester --tag 8.1.0.4 --rolling-upgrade --timeout 10
2026-01-23 10:57:42,729 [rdaf.component.oia] INFO     - Pulling oia images on host 192.168.108.127
2026-01-23 10:57:42,730 [rdaf.component.oia] INFO     - Pulling cfx-rda-alert-ingester image on host 192.168.108.127
2026-01-23 10:57:47,479 [rdaf.component] INFO     - 8.1.0.4: Pulling from cfx-rda-alert-ingester
Digest: sha256:a190c16c1f9feee1cb859ed3de3bdd10dcf049eb2d896978f6753165a89cb9ee
Status: Image is up to date for 192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4
192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4

2026-01-23 10:57:47,488 [rdaf.component.oia] INFO     - Pulling oia images on host 192.168.108.128
2026-01-23 10:57:47,490 [rdaf.component.oia] INFO     - Pulling cfx-rda-alert-ingester image on host 192.168.108.128
2026-01-23 10:57:52,183 [rdaf.component] INFO     - 8.1.0.4: Pulling from cfx-rda-alert-ingester
Digest: sha256:a190c16c1f9feee1cb859ed3de3bdd10dcf049eb2d896978f6753165a89cb9ee
Status: Image is up to date for 192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4
192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4

2026-01-23 10:57:52,416 [rdaf.component.oia] INFO     - Gathering OIA app container details.
2026-01-23 10:57:52,603 [rdaf.component.oia] INFO     - Gathering rdac pod details.
+----------+----------------+----------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type       | Version  | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------+----------+---------+--------------+-------------+------------+
| 857e54a8 | alert-ingester | 8.1.0.4  | 0:20:47 | 7278a1f0e81e | None        | True       |
+----------+----------------+----------+---------+--------------+-------------+------------+
Continue moving above pods to maintenance mode? [yes/no]: yes
2026-01-23 10:58:13,977 [rdaf.component.oia] INFO     - Initiating Maintenance Mode...
2026-01-23 10:58:33,578 [rdaf.component.oia] INFO     - Following container are in maintenance mode
+----------+----------------+----------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type       | Version  | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------+----------+---------+--------------+-------------+------------+
| 857e54a8 | alert-ingester | 8.1.0.4  | 0:21:17 | 7278a1f0e81e | maintenance | False      |
+----------+----------------+----------+---------+--------------+-------------+------------+
2026-01-23 10:58:33,579 [rdaf.component.oia] INFO     - Waiting for timeout of 10 seconds...
2026-01-23 10:58:43,579 [rdaf.component.oia] INFO     - Upgrading cfx-rda-alert-ingester on host 192.168.108.127
[+] Running 1/08:57,069 [rdaf.component] INFO     -
✔ Container oia-cfx-rda-alert-ingester-1  Running                         0.0s

2026-01-23 10:58:57,078 [rdaf.component.oia] INFO     - Waiting for upgraded containers to join rdac pods
2026-01-23 10:58:57,080 [rdaf.component.oia] INFO     - Checking if the upgraded components '['cfx-rda-alert-ingester']' has joined the rdac pods...

+----------+----------------+----------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type       | Version  | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------+----------+---------+--------------+-------------+------------+
| d3fbba9a | alert-ingester | 8.1.0.4  | 0:20:22 | 2532fd4e6ad0 | None        | True       |
+----------+----------------+----------+---------+--------------+-------------+------------+
Continue moving above pods to maintenance mode? [yes/no]: yes
2026-01-23 11:04:42,977 [rdaf.component.oia] INFO     - Initiating Maintenance Mode...
2026-01-23 11:05:02,466 [rdaf.component.oia] INFO     - Following container are in maintenance mode
+----------+----------------+----------+---------+--------------+-------------+------------+
| Pod ID   | Pod Type       | Version  | Age     | Hostname     | Maintenance | Pod Status |
+----------+----------------+----------+---------+--------------+-------------+------------+
| 857e54a8 | alert-ingester | 8.1.0.4  | 0:27:47 | 7278a1f0e81e | maintenance | False      |
| d3fbba9a | alert-ingester | 8.1.0.4  | 0:27:22 | 2532fd4e6ad0 | maintenance | False      |
+----------+----------------+----------+---------+--------------+-------------+------------+
2026-01-23 11:05:02,467 [rdaf.component.oia] INFO     - Waiting for timeout of 10 seconds...
2026-01-23 11:05:12,467 [rdaf.component.oia] INFO     - Upgrading cfx-rda-alert-ingester on host 192.168.108.128
[+] Running 1/05:26,279 [rdaf.component] INFO     -
✔ Container oia-cfx-rda-alert-ingester-1  Running                         0.0s

2026-01-23 11:05:26,287 [rdaf.component.oia] INFO     - Upgrading cfx-rda-alert-ingester on host 192.168.108.128
[+] Running 1/05:40,157 [rdaf.component] INFO     -
✔ Container oia-cfx-rda-alert-ingester-1  Running                         0.0s

2026-01-23 11:05:40,167 [rdaf.component.oia] INFO     - Waiting for upgraded containers to join rdac pods
2026-01-23 11:05:40,169 [rdaf.component.oia] INFO     - Checking if the upgraded components '['cfx-rda-alert-ingester']' has joined the rdac pods...

Run the below command to upgrade the RDA Fabric OIA application services without the rolling-upgrade option (this path does not provide zero downtime).

rdaf app upgrade --tag 8.1.0.4 --service cfx-rda-configuration-service --service cfx-rda-alert-ingester --service cfx-rda-event-consumer --service cfx-rda-alert-processor --service cfx-rda-alert-correlator --service cfx-rda-collaboration --service cfx-rda-irm-service --service cfx-rda-alert-processor-companion
rdauser@infra108122:~$ rdaf app upgrade OIA --service cfx-rda-alert-ingester --tag 8.1.0.4
2026-01-23 11:16:42,653 [rdaf.component.oia] INFO     - Pulling oia images on host 192.168.108.127
2026-01-23 11:16:42,653 [rdaf.component.oia] INFO     - Pulling cfx-rda-alert-ingester image on host 192.168.108.127
2026-01-23 11:16:47,140 [rdaf.component] INFO     - 8.1.0.4: Pulling from cfx-rda-alert-ingester
Digest: sha256:a190c16c1f9feee1cb859ed3de3bdd10dcf049eb2d896978f6753165a89cb9ee
Status: Image is up to date for 192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4
192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4

2026-01-23 11:16:47,148 [rdaf.component.oia] INFO     - Pulling oia images on host 192.168.108.128
2026-01-23 11:16:47,150 [rdaf.component.oia] INFO     - Pulling cfx-rda-alert-ingester image on host 192.168.108.128
2026-01-23 11:16:51,662 [rdaf.component] INFO     - 8.1.0.4: Pulling from cfx-rda-alert-ingester
Digest: sha256:a190c16c1f9feee1cb859ed3de3bdd10dcf049eb2d896978f6753165a89cb9ee
Status: Image is up to date for 192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4
192.168.108.122:5000/cfx-rda-alert-ingester:8.1.0.4

2026-01-23 11:16:51,775 [rdaf.component.oia] INFO     - Upgrading cfx-rda-alert-ingester on host 192.168.108.127
[+] Running 1/07:05,533 [rdaf.component] INFO     -
✔ Container oia-cfx-rda-alert-ingester-1  Running                         0.0s

2026-01-23 11:17:05,541 [rdaf.component.oia] INFO     - Upgrading cfx-rda-alert-ingester on host 192.168.108.128
[+] Running 1/07:19,404 [rdaf.component] INFO     -
✔ Container oia-cfx-rda-alert-ingester-1  Running                         0.0s

Please wait until all of the new OIA application service containers are in the Up state, then run the below command to verify their status and confirm they are running version 8.1.0.4.

rdaf app status
+-----------------------------------+-----------------+-------------+--------------+-------------+
| Name                              | Host            | Status      | Container Id | Tag         |
+-----------------------------------+-----------------+-------------+--------------+-------------+
| cfx-rda-app-controller            | 192.168.105.131 | Up 3 days   | 13bbf3d74c4e | 8.1.0.1     |
| cfx-rda-reports-registry          | 192.168.105.131 | Up 3 days   | 606bd2414e74 | 8.1.0.1     |
| cfx-rda-notification-service      | 192.168.105.131 | Up 3 days   | 70254879deaf | 8.1.0.1     |
| cfx-rda-file-browser              | 192.168.105.131 | Up 3 days   | ee3ac9c2604d | 8.1.0.1     |
| cfx-rda-configuration-service     | 192.168.105.131 | Up 3 days   | 648580eb4291 | 8.1.0.4     |
| cfx-rda-alert-ingester            | 192.168.105.131 | Up 20 hours | 433ba299b339 | 8.1.0.4     |
| cfx-rda-webhook-server            | 192.168.105.131 | Up 3 days   | 55ce34134b88 | 8.1.0.1     |
| cfx-rda-smtp-server               | 192.168.105.131 | Up 3 days   | 7a2160d536f7 | 8.1.0.1     |
| cfx-rda-event-consumer            | 192.168.105.131 | Up 20 hours | 8c17a6409151 | 8.1.0.4     |
| cfx-rda-alert-processor           | 192.168.105.131 | Up 20 hours | 209deec4f209 | 8.1.0.4     |
| cfx-rda-alert-correlator          | 192.168.105.131 | Up 20 hours | 0c9c99a2c64b | 8.1.0.4     |
| cfx-rda-irm-service               | 192.168.105.131 | Up 3 days   | 0833745ca4d9 | 8.1.0.4     |
| cfx-rda-ml-config                 | 192.168.105.131 | Up 3 days   | b04b6b341a73 | 8.1.0.1     |
| cfx-rda-collaboration             | 192.168.105.131 | Up 20 hours | 89dbb9a7fd2a | 8.1.0.4     |
| cfx-rda-ingestion-tracker         | 192.168.105.131 | Up 3 days   | 6df3071b5101 | 8.1.0.1     |
| cfx-rda-alert-processor-companion | 192.168.105.131 | Up 20 hours | c0b6577c9ca6 | 8.1.0.4     |
+-----------------------------------+-----------------+-------------+--------------+-------------+
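To confirm the selected services actually picked up the new tag, the status table can be filtered for any upgraded service still reporting an older version. A sketch against the table layout above (the service-name patterns and column positions are assumptions; sample rows are used in place of live `rdaf app status` output):

```shell
# List upgraded OIA services whose Tag column ($6) is not 8.1.0.4.
# Assumes the pipe-delimited table layout shown above (sample rows here).
stale() {
  awk -F'|' -v want='8.1.0.4' '
    $2 ~ /alert-ingester|event-consumer|alert-processor|alert-correlator|collaboration|irm-service|configuration-service/ {
      gsub(/ /, "", $6)
      if ($6 != want) { gsub(/ /, "", $2); print $2, $6 }
    }'
}

printf '%s\n' \
  '| cfx-rda-alert-ingester | 192.168.105.131 | Up 20 hours | 433ba299b339 | 8.1.0.4 |' \
  '| cfx-rda-event-consumer | 192.168.105.131 | Up 20 hours | 8c17a6409151 | 8.1.0.1 |' \
  | stale
# -> cfx-rda-event-consumer 8.1.0.1
```

An empty result from `rdaf app status | stale` means every service in the upgrade list is reporting the 8.1.0.4 tag.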

Run the below command to verify all OIA application services are up and running.

rdac pods
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+
| Cat   | Pod-Type                               | Pod-Ready   | Host           | ID       | Site        | Age      |   CPUs |   Memory(GB) | Active Jobs   | Total Jobs   |
|-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------|
| App   | alert-ingester                         | True        | rda-alert-inge | 6a6e464d |             | 19:22:36 |      8 |        31.33 |               |              |
| App   | alert-ingester                         | True        | rda-alert-inge | 7f6b42a0 |             | 19:22:53 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | a880e491 |             | 19:23:21 |      8 |        31.33 |               |              |
| App   | alert-processor                        | True        | rda-alert-proc | b684609e |             | 19:23:18 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 874f3b33 |             | 19:22:24 |      8 |        31.33 |               |              |
| App   | alert-processor-companion              | True        | rda-alert-proc | 70cadaa7 |             | 19:22:05 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | bde06c15 |             | 19:47:50 |      8 |        31.33 |               |              |
| App   | asset-dependency                       | True        | rda-asset-depe | 47b9eb02 |             | 19:47:38 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-d | faa33e1b |             | 19:47:52 |      8 |        31.33 |               |              |
| App   | authenticator                          | True        | rda-identity-d | 36083c36 |             | 19:47:46 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | 5fd3c3f4 |             | 19:23:09 |      8 |        31.33 |               |              |
| App   | cfx-app-controller                     | True        | rda-app-contro | d66e5ce8 |             | 19:22:56 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | ecbb535c |             | 19:47:46 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-access-manager       | True        | rda-access-man | 9a05db5a |             | 19:47:36 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 61b3c53b |             | 19:22:18 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-collaboration        | True        | rda-collaborat | 09b9474e |             | 19:21:57 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-file-browser         | True        | rda-file-brows | 00495640 |             | 19:22:45 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-file-browser         | True        | rda-file-brows | 640f0653 |             | 19:22:29 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-irm_service          | True        | rda-irm-servic | 27e345c5 |             | 19:21:43 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-irm_service          | True        | rda-irm-servic | 23c7e082 |             | 19:21:56 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-notification-service | True        | rda-notificati | bbb5b08b |             | 19:23:20 |      8 |        31.33 |               |              |
| App   | cfxdimensions-app-notification-service | True        | rda-notificati | 9841bcb5 |             | 19:23:02 |      8 |        31.33 |               |              |
+-------+----------------------------------------+-------------+----------------+----------+-------------+----------+--------+--------------+---------------+--------------+

Run the below command to verify that all services report an ok status and that no failure messages are thrown.

rdac healthcheck
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
| Cat       | Pod-Type                               | Host         | ID       | Site        | Health Parameter                                    | Status   | Message                                                     |
|-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------|
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | nats-connectivity                                     | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | 7f75047e9e44 | daa8c414 |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=1, Brokers=[1, 2, 3] |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | nats-connectivity                                     | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-dependency:configuration-service            | ok       | 2 pod(s) found for configuration-service                    |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | service-initialization-status                       | ok       |                                                             |
| rda_app   | alert-ingester                         | f9ec55862be0 | f9b9231c |             | kafka-connectivity                                  | ok       | Cluster=NTc1NWU1MTQxYmY3MTFlZg, Broker=2, Brokers=[1, 2, 3] |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | service-status                                      | ok       |                                                             |
| rda_app   | alert-processor                        | c6cc7b04ab33 | b4ebfb06 |             | nats-connectivity                                     | ok       |                                                             |
+-----------+----------------------------------------+--------------+----------+-------------+-----------------------------------------------------+----------+-------------------------------------------------------------+
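Scanning a long healthcheck table by eye is error-prone. As a convenience, the output can be filtered for any row whose Status column is not `ok`. This is only a sketch that assumes the pipe-separated layout shown above; the sample input below stands in for live `rdac healthcheck` output, which can be piped into the same awk filter.

```shell
# Print only the rows of an `rdac healthcheck`-style table whose Status
# column (the 8th pipe-separated field) is not "ok". The sample input
# stands in for live output: pipe `rdac healthcheck` into the same filter.
healthcheck_output='| rda_app | alert-ingester | 7f75047e9e44 | daa8c414 |  | service-status | ok |  |
| rda_app | alert-processor | c6cc7b04ab33 | b4ebfb06 |  | kafka-connectivity | failed | broker down |'

# -F uses a regex separator so surrounding padding is stripped from fields.
result=$(echo "$healthcheck_output" | awk -F' *\| *' 'NF > 1 && $8 != "ok" { print $3 ": " $7 " -> " $8 }')
echo "$result"
```

An empty result means every listed health parameter reported `ok`.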

1.2.6 Prune Images

After upgrading the services, run the below commands to remove unused Docker images and free up disk space.

For Kubernetes-based deployments, run the below command on the rdafcli VM to clean up old Docker images

rdafk8s prune_images

For Docker-based (non-Kubernetes) deployments, run the below command on the rdafcli VM to clean up old Docker images

rdaf prune_images
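Once images are pruned, disk usage can be re-checked against the 80% threshold from the prerequisites. A minimal sketch, assuming POSIX `df -P` output (the `check_disk` helper and the threshold value are illustrative, not part of the `prune_images` command):

```shell
# Flag any filesystem whose Use% exceeds a threshold (80% here, matching
# the prerequisite disk-space check). Expects POSIX `df -P`-style output
# on stdin, where field 5 is Capacity and field 6 is the mount point.
check_disk() {
  awk -v limit="$1" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > limit) print $6 " is at " $5 "%" }'
}

# Check the live system; no output means all filesystems are under 80%.
df -P | check_disk 80
```

Any mount point this prints should be investigated before proceeding, as upgrades on nearly full disks can fail partway through.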