
# Ansible Playbooks for Admiralty Worker Setup with Argo Workflows

These Ansible playbooks help configure an existing Kubernetes (K8s) cluster as an Admiralty worker for Argo Workflows. The process consists of two main steps:

  1. Setting up a worker node: This playbook prepares the worker cluster and generates the necessary kubeconfig.
  2. Adding the worker to the source cluster: This playbook registers the worker cluster with the source Kubernetes cluster.

## Prerequisites

  - Ansible installed on the control machine, along with the `kubernetes.core` collection.
  - Kubernetes cluster(s) with `kubectl` installed.
  - Permissions to create ServiceAccounts, Roles, RoleBindings, Secrets, and Custom Resources.
  - `jq` installed on the worker nodes.

## Playbook 1: Setting Up a Worker Node

This playbook configures a Kubernetes cluster to become an Admiralty worker for Argo Workflows.

### Variables (passed via `--extra-vars`)

| Variable | Description |
| --- | --- |
| `user_prompt` | The user running the Ansible playbook |
| `namespace_prompt` | Kubernetes namespace where resources are created |
| `source_prompt` | The name of the source cluster |

### Actions Performed

  1. Installs required dependencies (`python3`, `python3-yaml`, `python3-kubernetes`, `jq`).
  2. Creates a ServiceAccount for the source cluster.
  3. Grants `patch` permissions on pods to the `argo-role` Role.
  4. Adds the ServiceAccount to the `argo-rolebinding` RoleBinding.
  5. Creates a token for the ServiceAccount.
  6. Creates an Admiralty `Source` resource.
  7. Retrieves the worker cluster's kubeconfig and modifies it.
  8. Stores the kubeconfig locally on the control machine.
  9. Displays the command needed to register this worker in the source cluster.
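For reference, the `Source` resource created in step 6 looks roughly like the following (the names below are illustrative placeholders; check the playbook for the exact values it uses):

```yaml
# Illustrative Admiralty Source: authorizes the named ServiceAccount,
# used by the source cluster, to schedule pods on this worker.
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
  name: <SOURCE_NAME>
  namespace: <NAMESPACE>
spec:
  serviceAccountName: <SOURCE_NAME>
```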

### Running the Playbook

```shell
ansible-playbook setup_worker.yml -i <WORKER_HOST_IP>, \
  --extra-vars "user_prompt=<YOUR_USER> namespace_prompt=<NAMESPACE> source_prompt=<SOURCE_NAME>"
```

The trailing comma after the host IP is intentional: it makes Ansible treat the argument as an inline inventory list rather than an inventory file path.
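The kubeconfig modification in step 7 can be sketched as follows. This is a minimal illustration, assuming the playbook points the `server` field at the worker's reachable address and swaps client certificates for the ServiceAccount token (the playbook itself does this with `jq`; the function name is hypothetical):

```python
import copy

def rewrite_kubeconfig(kubeconfig: dict, worker_ip: str, token: str) -> dict:
    """Sketch of step 7: point the kubeconfig at the worker's reachable
    address and authenticate with the ServiceAccount token instead of
    client certificates."""
    cfg = copy.deepcopy(kubeconfig)
    # Replace the (possibly internal) API server address with the worker IP.
    cfg["clusters"][0]["cluster"]["server"] = f"https://{worker_ip}:6443"
    # Drop certificate-based credentials and use the token created in step 5.
    user = cfg["users"][0]["user"]
    user.clear()
    user["token"] = token
    return cfg
```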

## Playbook 2: Adding the Worker to the Source Cluster

This playbook registers the configured worker cluster as an Admiralty target in the source Kubernetes cluster.

### Variables (passed via `--extra-vars`)

| Variable | Description |
| --- | --- |
| `user_prompt` | The user running the Ansible playbook |
| `target_name` | The name of the worker cluster in the source setup |
| `target_ip` | IP of the worker cluster |
| `namespace_source` | Namespace where the target is registered |
| `serviceaccount_prompt` | The service account used in the worker |

### Actions Performed

  1. Retrieves the stored kubeconfig from the worker setup.
  2. Creates a ServiceAccount in the target namespace.
  3. Stores the kubeconfig in a Kubernetes Secret.
  4. Creates an Admiralty Target resource in the source cluster.
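The Secret and `Target` from steps 3 and 4 fit together roughly like this (the names are illustrative placeholders; the `Target` references the Secret holding the worker's kubeconfig):

```yaml
# Illustrative: the kubeconfig Secret referenced by the Admiralty Target.
apiVersion: v1
kind: Secret
metadata:
  name: <TARGET_NAME_IN_KUBE>-kubeconfig
  namespace: <NAMESPACE>
stringData:
  config: <WORKER_KUBECONFIG_CONTENTS>
---
# The Target tells Admiralty how to reach the worker cluster.
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: <TARGET_NAME_IN_KUBE>
  namespace: <NAMESPACE>
spec:
  kubeconfigSecret:
    name: <TARGET_NAME_IN_KUBE>-kubeconfig
```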

### Running the Playbook

```shell
ansible-playbook add_admiralty_target.yml -i <SOURCE_HOST_IP>, \
  --extra-vars "user_prompt=<YOUR_USER> target_name=<TARGET_NAME_IN_KUBE> target_ip=<WORKER_IP> namespace_source=<NAMESPACE> serviceaccount_prompt=<SERVICE_ACCOUNT_NAME>"
```

## Post-Playbook Steps

Don't forget to grant `patch` rights on pods to the ServiceAccount on the control node:

```shell
kubectl patch role argo-role -n argo --type='json' -p '[{"op": "add", "path": "/rules/-", "value": {"apiGroups":[""],"resources":["pods"],"verbs":["patch"]}}]'
```

Then add the name of the ServiceAccount in the following command:

```shell
kubectl patch rolebinding argo-binding -n argo --type='json' -p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "<SERVICE_ACCOUNT_NAME>", "namespace": "argo"}}]'
```
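The JSON patch in the command above uses the `add` op with path `/subjects/-`, which (per JSON Patch semantics) appends an element to the `subjects` list. Its effect on the RoleBinding can be illustrated in plain Python (the helper name is hypothetical):

```python
def add_subject(rolebinding: dict, service_account: str, namespace: str = "argo") -> dict:
    """Mimics the effect of the JSON patch above: append a ServiceAccount
    subject to the RoleBinding's subjects list ("/subjects/-" means
    "append to the array at /subjects")."""
    rolebinding.setdefault("subjects", []).append({
        "kind": "ServiceAccount",
        "name": service_account,
        "namespace": namespace,
    })
    return rolebinding
```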

A possible future improvement would be a play/playbook that syncs the Roles and RoleBindings across all nodes.