# Automation to deploy the Trustification project on the RH OS family
The automation within this repository establishes the components of Trustification, the downstream redistribution of the Trustification project, on a single Red Hat Enterprise Linux (RHEL) or Fedora machine using a standalone containerized deployment. Containers are spawned from Kubernetes-based manifests using `podman kube play`.
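As an illustration of the mechanism (the manifest name below is hypothetical; the playbook performs the equivalent steps on the target host), `podman kube play` runs a Kubernetes-style manifest as local containers:

```shell
# Run a Kubernetes-style manifest as local containers with podman
podman kube play ./trustification-pod.yaml

# Inspect the resulting pod and its containers
podman pod ps
podman ps --pod

# Tear the deployment down again from the same manifest
podman kube play --down ./trustification-pod.yaml
```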
The Trustification components are deployed as part of this architecture.
The following components are used if provided by the customer:
- RH Single Sign-On
- RH Kafka Streams
- PostgreSQL
- S3 or a compatible service such as MinIO
Follow the steps below to set up and execute the provisioning.
A RHEL 9.3+ server should be used to run the Trustification components.
Ansible must be installed and configured on a control node that will be used to perform the automation.
Perform the following steps to prepare the control node for execution.
Install the required Ansible collections by executing the following:

```shell
ansible-galaxy collection install -r requirements.yml
```
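For reference, a `requirements.yml` for this kind of podman-based deployment typically pulls in collections like the ones below. This is only a sketch; the authoritative list is the `requirements.yml` shipped in this repository:

```yaml
# Sketch only -- consult the repository's requirements.yml for the real list
collections:
  - name: containers.podman   # podman_* modules used for container deployments
  - name: ansible.posix       # common system-level modules (firewalld, sysctl, ...)
```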
An installation of RH SSO/Keycloak/AWS Cognito must be provided to allow for integration with containerized Trustification.
In order to deploy Trustification on a RHEL 9.3+ VM:
- Create an `inventory.ini` file in the project with a single VM in the `trustification` group:

  ```ini
  [trustification]
  192.168.121.60 become=true

  [trustification:vars]
  ansible_user=vagrant
  ansible_ssh_pass=vagrant
  ansible_private_key_file=./vm-testing/images/rhel9-vm/.vagrant/machines/trustification/libvirt/private_key
  ```
- Create an `ansible.cfg` file in the project:

  ```ini
  [defaults]
  inventory = ./inventory.ini
  host_key_checking = False
  ```
- Add the subscription, registry, and certificate information:
  - For the Red Hat subscription, define:

    ```shell
    export TPA_SINGLE_NODE_REGISTRATION_USERNAME=<Your Red Hat subscription username>
    export TPA_SINGLE_NODE_REGISTRATION_PASSWORD=<Your Red Hat subscription password>
    ```

  - For the Red Hat image registry, define:

    ```shell
    export TPA_SINGLE_NODE_REGISTRY_USERNAME=<Your Red Hat image registry username>
    export TPA_SINGLE_NODE_REGISTRY_PASSWORD=<Your Red Hat image registry password>
    ```

  Alternatively, Vagrant will prompt you to provide the registration username and password.
- Path for the TLS certificate files: copy your certificate files into the `./certs` directory using the following names:
  - `guac-collectsub-tls-certificate.pem`
  - `guac-collectsub-tls-certificate.key`
  - `guac-graphql-tls-certificate.pem`
  - `guac-graphql-tls-certificate.key`
  - `collector-osv-tls-certificate.pem`
  - `collector-osv-tls-certificate.key`

  Optionally, you can also copy a `service-ca.crt` certificate to the same directory if you have an OSV client that needs secure access to the collector.
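Before running the playbook, it can help to confirm that the expected files are in place. The helper below is a small sketch (not part of the automation) that reports any missing certificate file:

```shell
# Illustrative helper: report any expected certificate file missing from ./certs
check_certs() {
  for f in guac-collectsub-tls-certificate.pem guac-collectsub-tls-certificate.key \
           guac-graphql-tls-certificate.pem guac-graphql-tls-certificate.key \
           collector-osv-tls-certificate.pem collector-osv-tls-certificate.key; do
    [ -f "certs/$f" ] || echo "missing: certs/$f"
  done
}
check_certs
```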
- Create environment variables for storage, events, and OIDC:

  ```shell
  export TPA_PG_HOST=<POSTGRES_HOST_IP>
  export TPA_STORAGE_ACCESS_KEY=<Storage Access Key>
  export TPA_STORAGE_SECRET_KEY=<Storage Secret Key>
  export TPA_OIDC_ISSUER_URL=<AWS Cognito or Keycloak issuer URL; for Keycloak the auth/realms/chicken endpoint is needed>
  export TPA_OIDC_FRONTEND_ID=<OIDC Frontend Id>
  export TPA_OIDC_PROVIDER_CLIENT_ID=<OIDC Walker Id>
  export TPA_OIDC_PROVIDER_CLIENT_SECRET=<OIDC Walker Secret>
  export TPA_EVENT_ACCESS_KEY_ID=<Kafka Username or AWS SQS Access Key>
  export TPA_EVENT_SECRET_ACCESS_KEY=<Kafka User Password or AWS SQS Secret Key>
  ```
- In case of Kafka events, create an environment variable for the bootstrap server:

  ```shell
  export TPA_EVENT_BOOTSTRAP_SERVER=<Kafka Bootstrap Server>
  ```
- In case of AWS Cognito as the OIDC provider, create an environment variable for the Cognito domain:

  ```shell
  export TPA_OIDC_COGNITO_DOMAIN=<AWS Cognito Domain>
  ```
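As a small sketch (not part of the automation), the snippet below reports any of the unconditional variables above that are still unset or empty before you run the playbook; the conditional Kafka/Cognito variables are not checked:

```shell
# Illustrative pre-flight check for the required TPA_* environment variables
check_tpa_env() {
  for v in TPA_PG_HOST TPA_STORAGE_ACCESS_KEY TPA_STORAGE_SECRET_KEY \
           TPA_OIDC_ISSUER_URL TPA_OIDC_FRONTEND_ID \
           TPA_OIDC_PROVIDER_CLIENT_ID TPA_OIDC_PROVIDER_CLIENT_SECRET \
           TPA_EVENT_ACCESS_KEY_ID TPA_EVENT_SECRET_ACCESS_KEY; do
    eval "val=\${$v:-}"          # read the variable named by $v, empty if unset
    [ -n "$val" ] || echo "unset: $v"
  done
}
check_tpa_env
```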
- Update the `roles/tpa_single_node/vars/main.yml` file with the below values:
  - Storage service:
    - Update the storage type, either `s3` or `minio`
    - Update the S3/MinIO bucket names
    - Update the AWS region for AWS S3, or keep `us-west-1` for MinIO
    - In case of MinIO, update the MinIO storage endpoint `tpa_single_node_storage_endpoint`
  - SQS service:
    - Update the event bus type, either `kafka` or `sqs`
    - Update the topics for each event
    - In case of Kafka, update the fields `tpa_single_node_kafka_security_protocol` and `tpa_single_node_kafka_auth_mechanism`
    - In case of AWS SQS, update the AWS SQS region `tpa_single_node_sqs_region`

  Refer to `roles/tpa_single_node/vars/main_example_aws.yml` and `roles/tpa_single_node/vars/main_example_nonaws.yml` for examples.
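As an illustration, a MinIO-plus-Kafka configuration might set fields like the following. Only the four field names mentioned above are taken from the role; the values are placeholders, and the example files in `roles/tpa_single_node/vars/` remain the authoritative reference:

```yaml
# Illustrative values only -- see main_example_nonaws.yml for the real layout
tpa_single_node_storage_endpoint: "http://192.168.121.1:9000"   # MinIO endpoint
tpa_single_node_kafka_security_protocol: "SASL_PLAINTEXT"
tpa_single_node_kafka_auth_mechanism: "PLAIN"
tpa_single_node_sqs_region: "us-west-1"   # only relevant when the event bus is sqs
```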
- Execute the following command (NOTE: you will have to provide credentials to authenticate to registry.redhat.io, see https://access.redhat.com/RegistryAuthentication):

  ```shell
  ANSIBLE_ROLES_PATH="roles/" ansible-playbook -i inventory.ini play.yml -vvvv -e registry_username='REGISTRY.REDHAT.IO_USERNAME' -e registry_password='REGISTRY.REDHAT.IO_PASSWORD'
  ```
The vm-testing/README.md file contains instructions on testing the deployment on a VM. Right now, only Vagrant with libvirt is supported as the testing VM provisioner.
Any and all feedback is welcome. Submit an Issue or Pull Request as desired.