All the Ansible playbooks used to deploy k3s, argo server, admiralty and minio
95
ansible/Admiralty/README.md
Normal file
@@ -0,0 +1,95 @@
# README

## Ansible Playbooks for Admiralty Worker Setup with Argo Workflows

These Ansible playbooks configure an existing Kubernetes (K8s) cluster as an Admiralty worker for Argo Workflows. The process consists of two main steps:

1. **Setting up a worker node**: This playbook prepares the worker cluster and generates the necessary kubeconfig.
2. **Adding the worker to the source cluster**: This playbook registers the worker cluster with the source Kubernetes cluster.

---

## Prerequisites

- Ansible installed on the control machine.
- Kubernetes cluster(s) with `kubectl` and the `kubernetes.core` collection installed.
- The permissions needed to create ServiceAccounts, Roles, RoleBindings, Secrets, and Custom Resources.
- `jq` installed on the worker nodes.

---

## Playbook 1: Setting Up a Worker Node

This playbook configures a Kubernetes cluster to become an Admiralty worker for Argo Workflows.

### Variables (Pass through `--extra-vars`)

| Variable | Description |
|----------|-------------|
| `user_prompt` | The user running the Ansible playbook |
| `namespace_prompt` | Kubernetes namespace where resources are created |
| `source_prompt` | The name of the source cluster |

### Actions Performed

1. Installs the required dependencies (`python3`, `python3-yaml`, `python3-kubernetes`, `jq`).
2. Creates a service account for the source cluster.
3. Grants patch permission on pods to the `argo-role`.
4. Adds the service account to `argo-rolebinding`.
5. Creates a token for the service account.
6. Creates a `Source` resource for Admiralty.
7. Retrieves the worker cluster's kubeconfig and modifies it.
8. Stores the kubeconfig locally.
9. Displays the command needed to register this worker in the source cluster.

### Running the Playbook

```sh
ansible-playbook setup_admiralty_target.yml -i <WORKER_HOST_IP>, --extra-vars "user_prompt=<YOUR_USER> namespace_prompt=<NAMESPACE> source_prompt=<SOURCE_NAME>"
```
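Step 7 rewrites the retrieved kubeconfig so the source cluster can reach the worker: the user entry is replaced with the service-account token and the server address is set to the worker's IP. A minimal Python sketch of that transformation (the token and config values below are placeholders, not real credentials):

```python
def modify_kubeconfig(kubeconfig: dict, token: str, target_ip: str) -> dict:
    """Replace the user credentials with a token and point the server at
    target_ip, mirroring the jq filter used in the playbook."""
    kubeconfig["users"][0]["user"] = {"token": token}
    kubeconfig["clusters"][0]["cluster"]["server"] = f"https://{target_ip}:6443"
    return kubeconfig

# Hypothetical minimal kubeconfig for illustration
config = {
    "clusters": [{"cluster": {"server": "https://127.0.0.1:6443"}, "name": "default"}],
    "users": [{"name": "default", "user": {"client-certificate-data": "..."}}],
}
modified = modify_kubeconfig(config, token="<SA_TOKEN>", target_ip="172.16.0.187")
print(modified["clusters"][0]["cluster"]["server"])
# → https://172.16.0.187:6443
```

The playbook performs the same rewrite with `jq` on the worker before the result is saved under `worker_kubeconfig/` on the control machine.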

---

## Playbook 2: Adding Worker to Source Cluster

This playbook registers the configured worker cluster as an Admiralty target in the source Kubernetes cluster.

### Variables (Pass through `--extra-vars`)

| Variable | Description |
|----------|-------------|
| `user_prompt` | The user running the Ansible playbook |
| `target_name` | The name of the worker cluster in the source setup |
| `target_ip` | IP of the worker cluster |
| `namespace_source` | Namespace where the target is registered |
| `serviceaccount_prompt` | The service account used in the worker |

### Actions Performed

1. Retrieves the stored kubeconfig from the worker setup.
2. Creates a ServiceAccount in the target namespace.
3. Stores the kubeconfig in a Kubernetes Secret.
4. Creates an Admiralty `Target` resource in the source cluster.

### Running the Playbook

```sh
ansible-playbook add_admiralty_target.yml -i <SOURCE_HOST_IP>, --extra-vars "user_prompt=<YOUR_USER> target_name=<TARGET_NAME_IN_KUBE> target_ip=<WORKER_IP> namespace_source=<NAMESPACE> serviceaccount_prompt=<SERVICE_ACCOUNT_NAME>"
```

## Post Playbook

Don't forget to grant patch rights on pods to the `serviceAccount` on the control node:

```bash
kubectl patch role argo-role -n argo --type='json' -p '[{"op": "add", "path": "/rules/-", "value": {"apiGroups":[""],"resources":["pods"],"verbs":["patch"]}}]'
```

Then add the name of the `serviceAccount` in the following command:

```bash
kubectl patch rolebinding argo-binding -n argo --type='json' -p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "<NAME OF THE SERVICE ACCOUNT>", "namespace": "argo"}}]'
```
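Both commands are JSON patches (RFC 6902) that append one entry to a list: `/rules/-` on the Role and `/subjects/-` on the RoleBinding, where the trailing `-` means "end of the array". A small Python sketch of that semantics, applied to a hypothetical minimal Role document:

```python
def apply_add_patch(doc: dict, path: str, value):
    """Apply a JSON-patch "add" operation; a trailing "-" appends to a list."""
    parts = path.strip("/").split("/")
    node = doc
    for key in parts[:-1]:        # walk down to the parent container
        node = node[key]
    if parts[-1] == "-":
        node.append(value)        # "add" at "-" appends to the array
    else:
        node[parts[-1]] = value
    return doc

# Hypothetical minimal Role document, before the patch
role = {"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]}]}

apply_add_patch(role, "/rules/-",
                {"apiGroups": [""], "resources": ["pods"], "verbs": ["patch"]})
print(role["rules"][-1]["verbs"])
# → ['patch']
```

The RoleBinding patch works the same way, appending a ServiceAccount subject to `/subjects/-`.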

Maybe we could add a play/playbook to sync the roles and rolebindings between all nodes.
49
ansible/Admiralty/add_admiralty_target.yml
Normal file
@@ -0,0 +1,49 @@
- name: Add an existing admiralty worker as a target in the source cluster for Argo Workflows
  hosts: all:!localhost
  user: "{{ user_prompt }}"
  vars:
    - service_account_name: "{{ serviceaccount_prompt }}"
    - namespace: "{{ namespace_source }}"

  tasks:

    - name: Store kubeconfig value
      ansible.builtin.set_fact:
        kubeconfig: "{{ lookup('file', 'worker_kubeconfig/' ~ target_ip ~ '_kubeconfig.json') | trim }}"

    - name: Create the serviceAccount that will execute in the target
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: '{{ service_account_name }}'
            namespace: '{{ namespace }}'

    - name: Create the token to authenticate the source
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          type: Opaque
          metadata:
            name: admiralty-secret-{{ target_name }}
            namespace: "{{ namespace_source }}"
          data:
            config: "{{ kubeconfig | tojson | b64encode }}"

    - name: Create the target resource
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: multicluster.admiralty.io/v1alpha1
          kind: Target
          metadata:
            name: target-{{ target_name }}
            namespace: '{{ namespace_source }}'
          spec:
            kubeconfigSecret:
              name: admiralty-secret-{{ target_name }}
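The Secret's `config` field above is produced by the `tojson | b64encode` filter chain, since Kubernetes Secret `data` values must be base64-encoded. The equivalent transformation in plain Python, with a stand-in kubeconfig (not a real config):

```python
import base64
import json

# Stand-in for the kubeconfig content the playbook loads (hypothetical values)
kubeconfig = {"apiVersion": "v1", "kind": "Config", "clusters": []}

# Serialize to JSON, then base64-encode, as Secret `data` fields require
encoded = base64.b64encode(json.dumps(kubeconfig).encode()).decode()

# The consumer decodes it back to the original content
assert json.loads(base64.b64decode(encoded)) == kubeconfig
```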
2
ansible/Admiralty/ansible.cfg
Normal file
@@ -0,0 +1,2 @@
[defaults]
result_format=default
75
ansible/Admiralty/deploy_admiralty.yml
Normal file
@@ -0,0 +1,75 @@
- name: Install Helm
  hosts: all:!localhost
  user: "{{ user_prompt }}"
  become: true
  # become_method: su
  vars:
    arch_mapping: # Map Ansible architecture names ({{ ansible_architecture }}) to Docker's architecture names
      x86_64: amd64
      aarch64: arm64

  tasks:
    - name: Check if Helm exists
      ansible.builtin.command:
        cmd: which helm
      register: result_which
      failed_when: result_which.rc not in [0, 1]

    - name: Install Helm
      when: result_which.rc == 1
      block:
        - name: Download Helm from source
          ansible.builtin.get_url:
            url: https://get.helm.sh/helm-v3.15.0-linux-amd64.tar.gz
            dest: ./

        - name: Unpack Helm
          ansible.builtin.unarchive:
            remote_src: true
            src: helm-v3.15.0-linux-amd64.tar.gz
            dest: ./

        - name: Copy Helm to the PATH
          ansible.builtin.command:
            cmd: mv linux-amd64/helm /usr/local/bin/helm

- name: Install admiralty
  hosts: all:!localhost
  user: "{{ user_prompt }}"

  tasks:
    - name: Install required python libraries
      become: true
      # become_method: su
      package:
        name:
          - python3
          - python3-yaml
        state: present

    - name: Add jetstack repo
      ansible.builtin.shell:
        cmd: |
          helm repo add jetstack https://charts.jetstack.io && \
          helm repo update

    - name: Install cert-manager
      kubernetes.core.helm:
        chart_ref: jetstack/cert-manager
        release_name: cert-manager
        context: default
        namespace: cert-manager
        create_namespace: true
        wait: true
        set_values:
          - value: installCRDs=true

    - name: Install admiralty
      kubernetes.core.helm:
        name: admiralty
        chart_ref: oci://public.ecr.aws/admiralty/admiralty
        namespace: admiralty
        create_namespace: true
        chart_version: 0.16.0
        wait: true
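The `arch_mapping` variable maps `ansible_architecture` values to the architecture names used in release tarballs, although the download task above hardcodes `linux-amd64`. A small Python sketch of how the mapping would select the Helm tarball (URL pattern taken from the task above; using it in the playbook would be a change, not current behavior):

```python
# Map machine architecture names to the names used in Helm release tarballs
ARCH_MAPPING = {"x86_64": "amd64", "aarch64": "arm64"}

def helm_tarball_url(ansible_architecture: str, version: str = "v3.15.0") -> str:
    arch = ARCH_MAPPING[ansible_architecture]
    return f"https://get.helm.sh/helm-{version}-linux-{arch}.tar.gz"

print(helm_tarball_url("x86_64"))
# → https://get.helm.sh/helm-v3.15.0-linux-amd64.tar.gz
```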
21
ansible/Admiralty/notes_admiralty.md
Normal file
@@ -0,0 +1,21 @@
Target
---
- Create a service account
- Create a token for the service account (the SA on the control node == the SA on the target; shows the name of the SA + a token to access the target)
- Create a kubeconfig file with the token and the IP address (visible to the controller / public) and retrieve it to pass it to the controller
- Create a Source resource on the target: it states who will contact us
- Give the controller's SA the same roles/rights as the "argo" SA
- In the authorization, add the Patch verb on the pods resource
- Add the controller SA to the rolebinding

Controller
---
- Create the serviceAccount with the same name as on the target
- Retrieve the target's kubeconfig
- Create a secret from the target kubeconfig
- Create the Target resource to which the secret is associated

Schema
---
When a resource tagged with admiralty is executed on the controller, it goes to the targets, authenticating with the secret, to create pods with the shared service account.
8
ansible/Admiralty/old/admiralty_inventory.yml
Normal file
@@ -0,0 +1,8 @@
myhosts:
  hosts:
    control:
      ansible_host: 172.16.0.184
    dc01: # oc-dev
      ansible_host: 172.16.0.187
    dc02:
      ansible_host:
115
ansible/Admiralty/old/create_secrets.yml
Normal file
@@ -0,0 +1,115 @@
- name: Create secret from Workload
  hosts: "{{ host_prompt }}"
  user: "{{ user_prompt }}"
  vars:
    secret_exists: false
    control_ip: 192.168.122.70
    user_prompt: admrescue

  tasks:
    - name: Check that the management cluster can be reached
      ansible.builtin.command:
        cmd: ping -c 5 "{{ control_ip }}"

    - name: Install needed packages
      become: true
      ansible.builtin.package:
        name:
          - jq
          - python3-yaml
          - python3-kubernetes
        state: present

    - name: Get the list of existing secrets
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Secret
        name: "{{ inventory_hostname | lower }}"
        namespace: default
      register: list_secrets
      failed_when: false

    - name: Create token
      ansible.builtin.command:
        cmd: kubectl create token admiralty-control
      register: cd_token

    - name: Retrieve config
      ansible.builtin.command:
        cmd: kubectl config view --minify --raw --output json
      register: config_info

    - name: Display config
      ansible.builtin.shell:
        cmd: |
          echo > config_info.json

    - name: Edit the config json with jq
      ansible.builtin.shell:
        cmd: |
          CD_TOKEN="{{ cd_token.stdout }}" && \
          CD_IP="{{ control_ip }}" && \
          kubectl config view --minify --raw --output json | jq '.users[0].user={token:"'$CD_TOKEN'"} | .clusters[0].cluster.server="https://'$CD_IP':6443"'
      register: edited_config
      # failed_when: edited_config.skipped == true

    - name: Set fact for secret
      set_fact:
        secret: "{{ edited_config.stdout }}"
        cacheable: true

    - name: Create the source for controller
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: multicluster.admiralty.io/v1alpha1
          kind: Source
          metadata:
            name: admiralty-control
            namespace: default
          spec:
            serviceAccountName: admiralty-control


- name: Create secret from Workload
  hosts: "{{ control_host }}"
  user: "{{ user_prompt }}"
  gather_facts: true
  vars:
    secret: "{{ hostvars[host_prompt]['secret'] }}"
    user_prompt: admrescue

  tasks:

    - name: Get the list of existing secrets
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Secret
        name: "{{ host_prompt | lower }}-secret"
        namespace: default
      register: list_secrets
      failed_when: false

    - name: Test whether the secret exists
      failed_when: secret == ''
      debug:
        msg: "Secret '{{ secret }}' "

    - name: Create secret with new config
      ansible.builtin.command:
        cmd: kubectl create secret generic "{{ host_prompt | lower }}"-secret --from-literal=config='{{ secret }}'
      when: list_secrets.resources | length == 0

    - name: Create target for the workload cluster
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: multicluster.admiralty.io/v1alpha1
          kind: Target
          metadata:
            name: '{{ host_prompt | lower }}'
            namespace: default
          spec:
            kubeconfigSecret:
              name: '{{ host_prompt | lower }}-secret'
33
ansible/Admiralty/sequence_diagram.puml
Normal file
@@ -0,0 +1,33 @@
@startuml

actor User
participant "Ansible Playbook" as Playbook
participant "Target Node" as K8s
participant "Control Node" as ControlNode

User -> Playbook: Start Playbook Execution
Playbook -> Playbook: Save Target IP
Playbook -> K8s: Install Required Packages
Playbook -> K8s: Create Service Account
Playbook -> K8s: Patch Role argo-role (Add pod patch permission)
Playbook -> K8s: Patch RoleBinding argo-binding (Add service account)
Playbook -> K8s: Create Token for Service Account
Playbook -> K8s: Create Source Resource
Playbook -> K8s: Retrieve Current Kubeconfig
Playbook -> K8s: Convert Kubeconfig to JSON
Playbook -> User: Display Worker Kubeconfig
Playbook -> Playbook: Save Temporary Kubeconfig File
Playbook -> Playbook: Modify Kubeconfig JSON (Replace user token, set server IP)
Playbook -> User: Save Updated Kubeconfig File
Playbook -> User: Display Instructions for Adding Target

User -> Playbook: Start Additional Playbook Execution
Playbook -> Playbook: Store Kubeconfig Value
Playbook -> User: Display Kubeconfig
Playbook -> ControlNode: Copy Kubeconfig
Playbook -> ControlNode: Create Service Account on Target
Playbook -> ControlNode: Create Authentication Token for Source
Playbook -> ControlNode: Create Target Resource

@enduml
110
ansible/Admiralty/setup_admiralty_target.yml
Normal file
@@ -0,0 +1,110 @@
- name: Setup an existing k8s cluster to become an admiralty worker for Argo Workflows
  hosts: all:!localhost
  user: "{{ user_prompt }}"
  # Pass these through --extra-vars
  vars:
    - namespace: "{{ namespace_prompt }}"
    - source_name: "{{ source_prompt }}"
    - service_account_name: "admiralty-{{ source_prompt }}"
  environment:
    KUBECONFIG: /home/{{ user_prompt }}/.kube/config

  tasks:
    - name: Save target IP
      set_fact:
        target_ip: "{{ ansible_host }}"

    - name: Install the appropriate packages
      become: true
      become_method: sudo
      package:
        name:
          - python3
          - python3-yaml
          - python3-kubernetes
          - jq
        state: present

    # We need to provide the source name on the command line through --extra-vars
    - name: Create a service account for the source
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: '{{ service_account_name }}'
            namespace: '{{ namespace }}'

    - name: Add patch permission for pods to argo-role
      command: >
        kubectl patch role argo-role -n {{ namespace }} --type='json'
        -p '[{"op": "add", "path": "/rules/-", "value": {"apiGroups":[""],"resources":["pods"],"verbs":["patch"]}}]'
      register: patch_result
      changed_when: "'patched' in patch_result.stdout"

    - name: Add service account to argo-rolebinding
      ansible.builtin.command: >
        kubectl patch rolebinding argo-role-binding -n {{ namespace }} --type='json'
        -p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "{{ service_account_name }}", "namespace": "{{ namespace }}"}}]'
      register: patch_result
      changed_when: "'patched' in patch_result.stdout"

    - name: Create a token for the created service account
      ansible.builtin.command:
        cmd: |
          kubectl create token '{{ service_account_name }}' -n {{ namespace }}
      register: token_source

    - name: Create the source resource
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: multicluster.admiralty.io/v1alpha1
          kind: Source
          metadata:
            name: source-{{ source_name }}
            namespace: '{{ namespace }}'
          spec:
            serviceAccountName: "{{ service_account_name }}"

    - name: Retrieve the current kubeconfig as json
      ansible.builtin.shell:
        cmd: |
          kubectl config view --minify --raw --output json
      register: worker_kubeconfig

    - name: Convert kubeconfig to JSON
      set_fact:
        kubeconfig_json: "{{ worker_kubeconfig.stdout | trim | from_json }}"

    - name: View worker kubeconfig
      ansible.builtin.debug:
        msg: '{{ kubeconfig_json }}'

    - name: Temporary kubeconfig file
      ansible.builtin.copy:
        content: "{{ kubeconfig_json }}"
        dest: "{{ target_ip }}_kubeconfig.json"

    - name: Modify kubeconfig JSON
      ansible.builtin.shell:
        cmd: |
          jq '.users[0].user={token:"'{{ token_source.stdout }}'"} | .clusters[0].cluster.server="https://'{{ target_ip }}':6443"' {{ target_ip }}_kubeconfig.json
      register: kubeconfig_json

    - name: Save updated kubeconfig
      ansible.builtin.copy:
        content: "{{ kubeconfig_json.stdout | trim | from_json | to_nice_json }}"
        dest: ./worker_kubeconfig/{{ target_ip }}_kubeconfig.json
      delegate_to: localhost

    - name: Display information for the creation of the target on the source host
      ansible.builtin.debug:
        msg: >
          - To add this host as a target in an Admiralty network use the following command line:
          - ansible-playbook add_admiralty_target.yml -i <SOURCE HOST IP>, --extra-vars "user_prompt=<YOUR USER> target_name=<TARGET NAME IN KUBE> target_ip={{ ansible_host }} namespace_source={{ namespace }} serviceaccount_prompt={{ service_account_name }}"
          - Don't forget to give {{ service_account_name }} the appropriate role in namespace {{ namespace }}
121
ansible/Admiralty/setup_minio_argo_admiralty.yml
Normal file
@@ -0,0 +1,121 @@
- name: Setup MinIO resources for argo workflows/admiralty
  hosts: all:!localhost
  user: "{{ user_prompt }}"
  gather_facts: true
  become_method: sudo
  vars:
    - argo_namespace: "argo"
    - uuid: "{{ uuid_prompt }}"
  tasks:

    - name: Install necessary packages
      become: true
      package:
        name:
          - python3-kubernetes
        state: present

    - name: Create destination directory
      file:
        path: $HOME/minio-binaries
        state: directory
        mode: '0755'

    - name: Install mc
      ansible.builtin.get_url:
        url: "https://dl.min.io/client/mc/release/linux-amd64/mc"
        dest: $HOME/minio-binaries/
        mode: +x
        headers:
          Content-Type: "application/json"

    - name: Add mc to path
      ansible.builtin.shell:
        cmd: |
          grep -qxF 'export PATH=$PATH:$HOME/minio-binaries' $HOME/.bashrc || echo 'export PATH=$PATH:$HOME/minio-binaries' >> $HOME/.bashrc

    - name: Test bashrc
      ansible.builtin.shell:
        cmd: |
          tail -n 5 $HOME/.bashrc

    - name: Retrieve root user
      ansible.builtin.shell:
        cmd: |
          kubectl get secrets argo-artifacts -o jsonpath="{.data.rootUser}" | base64 -d -
      register: user

    - name: Retrieve root password
      ansible.builtin.shell:
        cmd: |
          kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootPassword}" | base64 -d -
      register: password

    - name: Set up MinIO host in mc
      ansible.builtin.shell:
        cmd: |
          $HOME/minio-binaries/mc alias set my-minio http://127.0.0.1:9000 '{{ user.stdout }}' '{{ password.stdout }}'

    - name: Create oc-bucket
      ansible.builtin.shell:
        cmd: |
          $HOME/minio-binaries/mc mb my-minio/oc-bucket

    - name: Run mc admin accesskey create command
      command: $HOME/minio-binaries/mc admin accesskey create --json my-minio
      register: minio_output
      changed_when: false # Avoid marking the task as changed every time

    - name: Parse JSON output
      set_fact:
        access_key: "{{ minio_output.stdout | from_json | json_query('accessKey') }}"
        secret_key: "{{ minio_output.stdout | from_json | json_query('secretKey') }}"

    - name: Retrieve cluster IP for minio API
      ansible.builtin.shell:
        cmd: |
          kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}"
      register: minio_cluster_ip

    - name: Create the minio secret in argo namespace
      kubernetes.core.k8s:
        state: present
        namespace: '{{ argo_namespace }}'
        name: "{{ uuid }}-argo-artifact-secret"
        definition:
          apiVersion: v1
          kind: Secret
          type: Opaque
          stringData:
            access-key: '{{ access_key }}'
            secret-key: '{{ secret_key }}'


    - name: Create the artifact repository ConfigMap in argo namespace
      kubernetes.core.k8s:
        state: present
        namespace: '{{ argo_namespace }}'
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: artifact-repositories
          data:
            oc-s3-artifact-repository: |
              s3:
                bucket: oc-bucket
                endpoint: {{ minio_cluster_ip.stdout }}:9000
                insecure: true
                accessKeySecret:
                  name: "{{ uuid }}-argo-artifact-secret"
                  key: access-key
                secretKeySecret:
                  name: "{{ uuid }}-argo-artifact-secret"
                  key: secret-key


    # ansible.builtin.shell:
    #   cmd: |
    #     kubectl create secret -n '{{ argo_namespace }}' generic argo-artifact-secret \
    #       --from-literal=access-key='{{ access_key }}' \
    #       --from-literal=secret-key='{{ secret_key }}'
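The `json_query('accessKey')` / `json_query('secretKey')` filters extract two fields from the JSON printed by `mc admin accesskey create --json`. The same extraction in plain Python (the sample output below is a hypothetical shape, not captured from a real run):

```python
import json

# Hypothetical output shape from `mc admin accesskey create --json`
minio_output = '{"status": "success", "accessKey": "AKIAEXAMPLE", "secretKey": "s3cr3t"}'

parsed = json.loads(minio_output)
access_key = parsed["accessKey"]   # mirrors json_query('accessKey')
secret_key = parsed["secretKey"]   # mirrors json_query('secretKey')
print(access_key, secret_key)
```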
149
ansible/Admiralty/weather_test_admiralty.yml
Normal file
@@ -0,0 +1,149 @@
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: harvesting-
  labels:
    example: 'true'
    workflows.argoproj.io/creator: 0d47b046-a09e-4bed-b10a-ec26783d4fe7
    workflows.argoproj.io/creator-email: pierre.bayle.at.irt-stexupery.com
    workflows.argoproj.io/creator-preferred-username: pbayle
spec:
  templates:
    - name: busybox
      inputs:
        parameters:
          - name: model
          - name: output-dir
          - name: output-file
          - name: clustername
      outputs:
        parameters:
          - name: outfile
            value: '{{inputs.parameters.output-file}}.tgz'
        artifacts:
          - name: outputs
            path: '{{inputs.parameters.output-dir}}/{{inputs.parameters.output-file}}.tgz'
            s3:
              key: '{{workflow.name}}/{{inputs.parameters.output-file}}.tgz'
      container:
        image: busybox
        command: ["/bin/sh", "-c"]
        args:
          - |
            echo "Creating tarball for model: {{inputs.parameters.model}}";
            mkdir -p {{inputs.parameters.output-dir}};
            echo $(ping 8.8.8.8 -c 4) > $(date +%Y-%m-%d__%H-%M-%S)_{{inputs.parameters.output-file}}.txt
            tar -czf {{inputs.parameters.output-dir}}/{{inputs.parameters.output-file}}.tgz *_{{inputs.parameters.output-file}}.txt;
      metadata:
        annotations:
          multicluster.admiralty.io/elect: ""
          multicluster.admiralty.io/clustername: "{{inputs.parameters.clustername}}"

    - name: weather-container
      inputs:
        parameters:
          - name: output-dir
          - name: output-file
          - name: clustername
      outputs:
        parameters:
          - name: outfile
            value: '{{inputs.parameters.output-file}}.tgz'
        artifacts:
          - name: outputs
            path: '{{inputs.parameters.output-dir}}/{{inputs.parameters.output-file}}.tgz'
            s3:
              insecure: true
              key: '{{workflow.name}}/{{inputs.parameters.output-file}}'
      container:
        name: weather-container
        image: pierrebirt/weather_container:latest
        # imagePullPolicy: IfNotPresent
        env:
          - name: API_KEY
            valueFrom:
              secretKeyRef:
                name: cnes-secrets
                key: weather-api
        args:
          - '--key'
          - "$(API_KEY)"
          - '--dir'
          - '{{inputs.parameters.output-dir}}'
          - '--file'
          - '{{inputs.parameters.output-file}}'
      metadata:
        annotations:
          multicluster.admiralty.io/elect: ""
          multicluster.admiralty.io/clustername: "{{inputs.parameters.clustername}}"

    - name: bucket-reader
      inputs:
        parameters:
          - name: bucket-path
          - name: logs-path
        artifacts:
          - name: retrieved-logs
            path: '{{inputs.parameters.logs-path}}'
            s3:
              key: '{{inputs.parameters.bucket-path}}'
      outputs:
        artifacts:
          - name: logs_for_test
            path: /tmp/empty_log_for_test.log
            s3:
              key: '{{workflow.name}}/log_test.log'
      container:
        image: busybox
        command: ["/bin/sh", "-c"]
        args:
          - |
            tar -xvf '{{inputs.parameters.logs-path}}'
            ls -la
            cat *.txt
            touch /tmp/empty_log_for_test.log

    - name: harvesting-test
      inputs: {}
      outputs: {}
      metadata: {}
      dag:
        tasks:
          - name: busybox-dc02
            template: busybox
            arguments:
              parameters:
                - name: model
                  value: era-pressure-levels
                - name: output-dir
                  value: /app/data/output
                - name: output-file
                  value: fake_logs
                - name: clustername
                  value: target-dc02
          - name: weather-container-dc03
            template: weather-container
            arguments:
              parameters:
                - name: output-dir
                  value: /app/results
                - name: output-file
                  value: weather_results_23_01
                - name: clustername
                  value: target-dc03
          - name: bucket-reader
            template: bucket-reader
            dependencies: [busybox-dc02, weather-container-dc03]
            arguments:
              parameters:
                - name: bucket-path
                  value: '{{workflow.name}}/fake_logs.tgz'
                - name: logs-path
                  value: /tmp/logs.tgz

  entrypoint: harvesting-test
  serviceAccountName: argo-agregateur-workflow-controller
  artifactRepositoryRef: # https://argo-workflows.readthedocs.io/en/latest/fields/#s3artifactrepository
    key: admiralty-s3-artifact-repository # Choose the artifact repository with the public IP/url
@@ -0,0 +1,32 @@
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpneE5EVTNNekl3SGhjTk1qVXdNVEk1TVRBeE5UTXlXaGNOTXpVd01USTNNVEF4TlRNeQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpneE5EVTNNekl3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSWHFiRHBmcUtwWVAzaTFObVpCdEZ3RzNCZCtOY0RwenJKS01qOWFETlUKTUVYZmpRM3VrbzVISDVHdTFzNDRZY0p6Y29rVEFmb090QVhWS1pNMUs3YWVvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWM5MW5TYi9kaU1pbHVqR3RENjFRClc0djVKVmN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnV05uSzlsU1lDY044VEFFODcwUnNOMEgwWFR6UndMNlAKOEF4Q0xwa3pDYkFDSVFDRW1LSkhveXFZRW5iZWZFU3VOYkthTHdtRkMrTE5lUHloOWxQUmhCVHdsQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
        "server": "https://172.16.0.181:6443"
      },
      "name": "default"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "default",
        "user": "default"
      },
      "name": "default"
    }
  ],
  "current-context": "default",
  "kind": "Config",
  "preferences": {},
  "users": [
    {
      "name": "default",
      "user": {
        "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5nT1p0NVVMUVllYko1MVhLdVIyMW01MzJjY25NdTluZ3VNQ1RmMnNTUHcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzM4Njg1NzM2LCJpYXQiOjE3Mzg2ODIxMzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNTNkNzU4YmMtMGUwMC00YTU5LTgzZTUtYjkyYjZmODg2NWE2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiMWQ1NmEzMzktMTM0MC00NDY0LTg3OGYtMmIxY2ZiZDU1ZGJhIn19LCJuYmYiOjE3Mzg2ODIxMzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.WMqmDvp8WZHEiupJewo2BplD0xu6yWhlgZkG4q_PpVCbHKd7cKYWnpTi_Ojmabvvw-VC5sZFZAaxZUnqdZNGf_RMrJ5pJ9B5cYtD_gsa7AGhrSz03nd5zPKvujT7-gzWmfHTpZOvWky00A2ykKLflibhJgft4FmFMxQ6rR3MWmtqeAo82wevF47ggdOiJz3kksFJPfEpk1bflumbUCk-fv76k6EljPEcFijsRur-CI4uuXdmTKb7G2TDmTMcFs9X4eGbBO2ZYOAVEw_Xafru6D-V8hWBTm-NWQiyyhdxlVdQg7BNnXJ_26GsJg4ql4Rg-Q-tXB5nGvd68g2MnGTWwg"
      }
    }
  ]
}
@@ -0,0 +1,32 @@
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3SGhjTk1qVXdNVEk0TVRZME9ESTBXaGNOTXpVd01USTJNVFkwT0RJMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSdUV0Y2lRS3VaZUpEV214TlJBUzM3TlFib3czSkpxMWJQSjdsdTN2eEgKR2czS1hGdFVHZWNGUjQzL1Rjd0pmanQ3WFpsVm9PUldtOFozYWp3OEJPS0ZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTB3NG1uSlUrbkU3SnpxOHExRWdWCmFUNU1mMmd3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU9JTUtsZHk0Y044a3JmVnQyUFpLQi80eXhpOGRzM0wKaHR0b2ZrSEZtRnlsQWlCMWUraE5BamVUdVNCQjBDLzZvQnA2c21xUDBOaytrdGFtOW9EM3pvSSs0Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
        "server": "https://172.16.0.184:6443"
      },
      "name": "default"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "default",
        "user": "default"
      },
      "name": "default"
    }
  ],
  "current-context": "default",
  "kind": "Config",
  "preferences": {},
  "users": [
    {
      "name": "default",
      "user": {
        "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6InUzaGF0T1RuSkdHck1sbURrQm0waDdDeDFSS3pxZ3FVQ25aX1VrOEkzdFkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzM4Njg1NzM2LCJpYXQiOjE3Mzg2ODIxMzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZDFmNzQ2NmQtN2MyOS00MGNkLTg1ZTgtMjZmMzFkYWU5Nzg4Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiNTc0Y2E1OTQtY2IxZi00N2FiLTkxZGEtMDI0NDEwNjhjZjQwIn19LCJuYmYiOjE3Mzg2ODIxMzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.ZJvTJawg73k5SEOG6357iYq_-w-7V4BqciURYJao_dtP_zDpcXyZ1Xw-sxNKITgLjByTkGaCJRjDtR2QdZumKtb8cl6ayv0UZMHHnFft4gtQi-ttjj69rQ5RTNA3dviPaQOQgWNAwPkUPryAM0Sjsd5pRWzXXe-NVpWQZ6ooNZeRBHyjT1Km1JoprB7i55vRJEbBnoK0laJUtHCNmLoxK5kJYQqeAtA-_ugdSJbnyTFQAG14vonZSyLWAQR-Hzw9QiqIkSEW1-fcvrrZbrVUZsl_i7tkrXSSY9EYwjrZlqIu79uToEa1oWvulGFEN6u6YGUydj9nXQJX_eDpaWvuOA"
      }
    }
  ]
}
@@ -0,0 +1,32 @@
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3SGhjTk1qVXdNVEk0TVRZME9ESTBXaGNOTXpVd01USTJNVFkwT0RJMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSdUV0Y2lRS3VaZUpEV214TlJBUzM3TlFib3czSkpxMWJQSjdsdTN2eEgKR2czS1hGdFVHZWNGUjQzL1Rjd0pmanQ3WFpsVm9PUldtOFozYWp3OEJPS0ZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTB3NG1uSlUrbkU3SnpxOHExRWdWCmFUNU1mMmd3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU9JTUtsZHk0Y044a3JmVnQyUFpLQi80eXhpOGRzM0wKaHR0b2ZrSEZtRnlsQWlCMWUraE5BamVUdVNCQjBDLzZvQnA2c21xUDBOaytrdGFtOW9EM3pvSSs0Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
        "server": "https://172.16.0.185:6443"
      },
      "name": "default"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "default",
        "user": "default"
      },
      "name": "default"
    }
  ],
  "current-context": "default",
  "kind": "Config",
  "preferences": {},
  "users": [
    {
      "name": "default",
      "user": {
        "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6InUzaGF0T1RuSkdHck1sbURrQm0waDdDeDFSS3pxZ3FVQ25aX1VrOEkzdFkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzM4Njg1NzM2LCJpYXQiOjE3Mzg2ODIxMzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiODdlYjVkYTYtYWNlMi00YzFhLTg1YjctYWY1NDI2MjA1ZWY1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiZjFjNjViNDQtYmZmMC00Y2NlLTk4ZGQtMTU0YTFiYTk0YTU2In19LCJuYmYiOjE3Mzg2ODIxMzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.SkpDamOWdyvTUk8MIMDMhuKD8qvJPpX-tjXPWX9XsfpMyjcB02kI-Cn9b8w1TnYpGJ_u3qyLzO7RlXOgSHtm7TKHOCoYudj4jNwRWqIcThxzAeTm53nlZirUU0E0eJU8cnWHGO3McAGOgkStpfVwHaTQHq2oMZ6jayQU_HuButGEvpFt2FMFEwY9pOjabYHPPOkY9ruswzNhGBRShxWxfOgCWIt8UmbrryrNeNd_kZlB0_vahuQkAskeJZd3f_hp7qnSyLd-YZa5hUrruLJBPQZRw2sPrZe0ukvdpuz7MCfE-CQzUDn6i3G6FCKzYfd-gHFIYNUowS0APHLcC-yWSQ"
      }
    }
  ]
}
@@ -0,0 +1,32 @@
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTlRJM05EVTROVE13SGhjTk1qVXdOekUzTURrMU1EVXpXaGNOTXpVd056RTFNRGsxTURVegpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTlRJM05EVTROVE13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUTzJzVWE4MTVDTmVxWUNPdCthREoreG5hWHRZNng3R096a0c1U1U0TEEKRE1talExRVQwZi96OG9oVU55L1JneUt0bmtqb2JnZVJhOExTdDAwc3NrMDNvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVW1zeUVyWkQvbmxtNVJReUUwR0NICk1FWlU0ZWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnVXJsR3ZGZy9FVzhXdU1Nc3JmZkZTTHdmYm1saFI5MDYKYjdHaWhUNHdFRzBDSUVsb2FvWGdwNnM5c055eE1iSUwxKzNlVUtFc0k2Y2dDdldFVEZmRWtQTUIKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
        "server": "https://172.16.0.191:6443"
      },
      "name": "default"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "default",
        "user": "default"
      },
      "name": "default"
    }
  ],
  "current-context": "default",
  "kind": "Config",
  "preferences": {},
  "users": [
    {
      "name": "default",
      "user": {
        "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IlZCTkEyUVJKeE9XblNpeUI1QUlMdWtLZmVpbGQ1LUpRTExvNWhkVjlEV2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzUzMTk0MjMxLCJpYXQiOjE3NTMxOTA2MzEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYTMyOTY0OTktNzhiZS00MzE0LTkyYjctMDQ1NTBkY2JjMGUyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJ0ZXN0LWFkbWlyYWx0eS1hbnNpYmxlIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1zb3VyY2UiLCJ1aWQiOiI4YmJhMTA3Mi0wYjZiLTQwYjUtYWI4Mi04OWQ1MTkyOGIwOTIifX0sIm5iZiI6MTc1MzE5MDYzMSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3QtYWRtaXJhbHR5LWFuc2libGU6YWRtaXJhbHR5LXNvdXJjZSJ9.A0UJLoui_SX4dCgUZIo4kprZ3kb2WBkigvyy1e55qQMFZxRoAed6ZvR95XbHYNUoiHR-HZE04QO0QcOnFaaQDTA6fS9HHtjfPKAoqbXrpShyoHNciiQnhkwYvtEpG4bvDf0JMB9qbWGMrBoouHwx-JoQG0JeoQq-idMGiDeHhqVc86-Uy_angvRoAZGF5xmYgMPcw5-vZPGfgk1mHYx5vXNofCcmF4OqMvQaWyYmH82L5SYAYLTV39Z1aCKkDGGHt5y9dVJ0udA4E5Cx3gO2cLLLWxf8n7uFSUx8sHgFtZOGgXwN8DIrTe3Y95p09f3H7nTxjnmQ-Nce2hofLC2_ng"
      }
    }
  ]
}
32
ansible/Admiralty/worker_kubeconfig/target01_kubeconfig.json
Normal file
@@ -0,0 +1,32 @@
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpnNU1Ua3pPRFF3SGhjTk1qVXdNakEzTURrd09UUTBXaGNOTXpVd01qQTFNRGt3T1RRMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpnNU1Ua3pPRFF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTYWsySHRMQVFUclVBSUF3ckUraDBJZ0QyS2dUcWxkNmorQlczcXRUSmcKOW9GR2FRb1lnUERvaGJtT29ueHRTeDlCSlc3elkrZEM2T3J5ekhkYzUzOGRvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXd5UE1iOFAwaC9IR2szZ0dianozClFvOVVoQ293Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUlBVE1ETGFpeWlwaUNuQjF1QWtYMkxiRXdrYk93QlcKb1U2eDluZnRMTThQQWlFQTUza0hZYU05ZVZVdThld3REa0M3TEs3RTlkSGczQ3pSNlBxSHJjUHJTeDA9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
        "server": "https://target01:6443"
      },
      "name": "default"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "default",
        "user": "default"
      },
      "name": "default"
    }
  ],
  "current-context": "default",
  "kind": "Config",
  "preferences": {},
  "users": [
    {
      "name": "default",
      "user": {
        "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IlJwbHhUQ2ppREt3SmtHTWs4Z2cwdXBuWGtjTUluMVB0dFdGbUhEUVY2Y2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzQwMDUzODAyLCJpYXQiOjE3NDAwNTAyMDIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMThkNzdjMzctZjgyNC00MGVmLWExMDUtMzcxMzJkNjUxNzgzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiNmExM2M4YTgtZmE0NC00NmJlLWI3ZWItYTQ0OWY3ZTMwZGM1In19LCJuYmYiOjE3NDAwNTAyMDIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.DtKkkCEWLPp-9bmSbrqxvxO2kXfOW2cHlmxs5xPzTtn3DcNZ-yfUxJHxEv9Hz6-h732iljRKiWx3SrEN2ZjGq555xoOHV202NkyUqU3EWmBwmVQgvUKOZSn1tesAfI7fQp7sERa7oKz7ZZNHJ7x-nw0YBoxYa4ECRPkJKDR3uEyRsyFMaZJELi-wIUSZkeGxNR7PdQWoYPoJipnwXoyAFbT42r-pSR7nqzy0-Lx1il82klkZshPEj_CqycqJg1djoNoe4ekS7En1iljz03YqOqm1sFSOdvDRS8VGM_6Zm6e3PVwXQZVBgFy_ET1RqtxPsLyYmaPoIfPMq2xeRLoGIg"
      }
    }
  ]
}
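Every generated kubeconfig above follows the same shape: one `default` cluster (CA data plus `server` URL), one `default` context, and one `default` user carrying the service-account token. Before registering such a file as a secret on the source cluster, it can be useful to sanity-check that these fields are present. The sketch below is illustrative only (the `summarize_kubeconfig` helper and the inline sample are not part of the playbooks); it uses the stdlib `json` module and mirrors the structure of `target01_kubeconfig.json`:

```python
import json

def summarize_kubeconfig(text: str) -> dict:
    """Pull out the fields a worker kubeconfig must provide:
    the API server endpoint, CA data, token, and current context."""
    cfg = json.loads(text)
    cluster = cfg["clusters"][0]["cluster"]
    user = cfg["users"][0]["user"]
    return {
        "server": cluster["server"],
        "has_ca": "certificate-authority-data" in cluster,
        "has_token": bool(user.get("token")),
        "current_context": cfg["current-context"],
    }

# Inline sample mirroring the files above (values truncated for brevity).
sample = json.dumps({
    "apiVersion": "v1",
    "clusters": [{"cluster": {"certificate-authority-data": "LS0t...",
                              "server": "https://target01:6443"},
                  "name": "default"}],
    "contexts": [{"context": {"cluster": "default", "user": "default"},
                  "name": "default"}],
    "current-context": "default",
    "kind": "Config",
    "preferences": {},
    "users": [{"name": "default", "user": {"token": "eyJhbGci..."}}],
})

print(summarize_kubeconfig(sample))
# {'server': 'https://target01:6443', 'has_ca': True, 'has_token': True, 'current_context': 'default'}
```

Note that the embedded tokens carry an `exp` claim, so a kubeconfig that passes this structural check can still be rejected once the token expires.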