17 Commits

| Author | SHA1 | Message | Date |
| ------ | ---------- | ------- | ---- |
| pb | 2ede262abe | All the Ansible playbooks used to deploy k3s, argo server, admiralty and minio | 2025-09-26 14:12:01 +02:00 |
| mr | 140bd63559 | deploy adjustment | 2025-06-16 09:14:36 +02:00 |
| mr | 90cc774341 | clone debugged | 2025-04-29 10:30:28 +02:00 |
| mr | db10baf460 | update | 2025-04-28 14:11:18 +02:00 |
| mr | 53fca60178 | Merge branch 'main' of https://cloud.o-forge.io/core/oc-deploy into main | 2025-04-28 09:46:44 +02:00 |
| mr | 8b53c2e70e | update oc-deploy | 2025-04-28 09:45:54 +02:00 |
| pb | 3892692a07 | corrected grafana data source file | 2025-04-08 11:46:26 +02:00 |
| mr | 7ec310f161 | full 80 | 2025-04-03 16:30:32 +02:00 |
| mr | ced5e55698 | git ignore deployed | 2025-04-01 10:14:51 +02:00 |
| mr | 7cdb02b677 | deployed | 2025-04-01 10:13:55 +02:00 |
| mr | 82aed0fdb6 | add datas | 2025-03-27 13:32:27 +01:00 |
| mr | 626a1b1f22 | oc-deploy vanilla k8s docker | 2025-03-27 13:21:52 +01:00 |
| mr | 3b7c3a9526 | deploy auto traefik | 2025-03-06 09:39:07 +01:00 |
| mr | 0a96827200 | dev launch mode | 2025-03-06 09:34:04 +01:00 |
| na | fcb45ec331 | chore: Update file path for oc-deploy.puml | 2024-09-02 11:59:42 +02:00 |
| na | 47d0d993d8 | chore: Moved helm Charts and kube Components | 2024-09-02 11:43:11 +02:00 |
| na | 333dfce355 | chore: First Update README.md and add oc-deploy component documentation | 2024-09-02 11:34:27 +02:00 |
162 changed files with 3250 additions and 5176 deletions

.gitignore vendored

@@ -1 +1 @@
bin
k8s/deployed_config


@@ -1,6 +0,0 @@
publish:
curl -X 'POST' \
'https://cloud.o-forge.io/api/v1/repos/core/oc-deploy/releases/2/assets?name=oc.json&token=92ad0a4b3d75ec7c5964913b7085d7ddf379247c' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'attachment=@oc.json;type=application/json'


@@ -4,17 +4,20 @@ The purpose of oc-deploy, is to deploy all the OC components over a Kubernetes c
An OpenCloud deployment is composed of the following layers:
| Layer | Tool |
| ------------------------ | --------------------- |
| OpenCloud components | oc-deploy binary |
| KubernetesCluster | TODO or pre-requisite |
| IaaS (VMs, LAN) | pre-requisite |
| HW (network and servers) | pre-requisite |
It thus contains a first, optional installation layer that deploys the Kubernetes nodes (control plane(s) and workers) on top of an existing infrastructure (IaaS).
Then the second installation layer uses Helm charts to deploy and configure all the OC components.
This documentation will be updated with the commands and/or requirements needed to properly execute the installation.
# Deploy cluster
@@ -32,5 +35,69 @@ Install Talos
chmod 700 get_helm.sh
./get_helm.sh
# Create OpenCloud Chart
helm create occhart
# `oc-deploy` Component
The `oc-deploy` component aims to simplify and automate the deployment of OpenCloud components on a Kubernetes cluster through the creation of Helm Charts.
## Prerequisites:
- Access to the OpenCloud forge and the associated Harbor registry: [https://registry.o-forge.io/](https://registry.o-forge.io/), which will allow pulling OpenCloud release images from the "stable" project.
- To test the connection to this registry from the Docker client:
```bash
docker login registry.o-forge.io
```
- A Kubernetes cluster: Minikube, K3s, RKE2, etc. See `KubernetesCluster`.
- Helm installed locally
## **To Be Defined:**
### Configuring a Docker Secret for Kubernetes
Kubernetes needs to know your credentials to pull images from the "registry.o-forge.io" registry. Create a Docker secret in Kubernetes:
```bash
kubectl create secret docker-registry regcred \
--docker-server=registry.o-forge.io \
--docker-username=<your_username> \
--docker-password=<your_password> \
--docker-email=<your_email>
```
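A pod can then reference this secret when pulling from the registry. A minimal sketch, assuming a hypothetical image path under the "stable" project:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oc-pull-test
spec:
  containers:
    - name: test
      # hypothetical image path; replace with a real OpenCloud release image
      image: registry.o-forge.io/stable/oc-example:latest
  imagePullSecrets:
    - name: regcred  # the secret created above
```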
## Checking if Helm Recognizes Your Local Kubernetes Cluster:
### 1. Verify Connection to Kubernetes:
Before checking Helm, ensure that your `kubectl` is properly configured to connect to your local Kubernetes cluster.
Run the following command to see if you can communicate with the cluster:
```bash
kubectl get nodes
```
If this command returns the list of nodes in your cluster, it means `kubectl` is properly connected.
### 2. Verify Helm Configuration:
Now, you can check if Helm can access the cluster by using the following command:
```bash
helm version
```
This command displays the Helm version and the Kubernetes version it is connected to.
## Deploying with Helm:
You can deploy the `oc-deploy` Chart with Helm:
```bash
helm install oc-deploy path/to/your/Helm/oc-deploy
```
## Checking Helm Releases:
You can also list the existing releases to see if Helm is properly connected to the cluster:
```bash
helm list
```
If all these commands execute without errors and return the expected results, your Helm installation is correctly configured to recognize and interact with your local Kubernetes cluster.
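For instance, to double-check the release installed above (assuming the release name `oc-deploy`):
```bash
helm status oc-deploy
kubectl get pods
```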

ansible/.gitignore vendored Normal file

@@ -0,0 +1,6 @@
create_kvm/
alpr_with_argo.yml
*.qcow*
OpenPGP*
my_hosts.yaml
Admiraltyworker_kubeconfig/*


@@ -0,0 +1,95 @@
# README
## Ansible Playbooks for Admiralty Worker Setup with Argo Workflows
These Ansible playbooks help configure an existing Kubernetes (K8s) cluster as an Admiralty worker for Argo Workflows. The process consists of two main steps:
1. **Setting up a worker node**: This playbook prepares the worker cluster and generates the necessary kubeconfig.
2. **Adding the worker to the source cluster**: This playbook registers the worker cluster with the source Kubernetes cluster.
---
## Prerequisites
- Ansible installed on the control machine.
- Kubernetes cluster(s) with `kubectl` and `kubernetes.core` collection installed.
- Necessary permissions to create ServiceAccounts, Roles, RoleBindings, Secrets, and Custom Resources.
- `jq` installed on worker nodes.
---
## Playbook 1: Setting Up a Worker Node
This playbook configures a Kubernetes cluster to become an Admiralty worker for Argo Workflows.
### Variables (Pass through `--extra-vars`)
| Variable | Description |
|----------|-------------|
| `user_prompt` | The user running the Ansible playbook |
| `namespace_prompt` | Kubernetes namespace where resources are created |
| `source_prompt` | The name of the source cluster |
### Actions Performed
1. Installs required dependencies (`python3`, `python3-yaml`, `python3-kubernetes`, `jq`).
2. Creates a service account for the source cluster.
3. Grants patch permissions for pods to the `argo-role`.
4. Adds the service account to `argo-rolebinding`.
5. Creates a token for the service account.
6. Creates a `Source` resource for Admiralty.
7. Retrieves the worker cluster's kubeconfig and modifies it.
8. Stores the kubeconfig locally.
9. Displays the command needed to register this worker in the source cluster.
### Running the Playbook
```sh
ansible-playbook setup_worker.yml -i <WORKER_HOST_IP>, --extra-vars "user_prompt=<YOUR_USER> namespace_prompt=<NAMESPACE> source_prompt=<SOURCE_NAME>"
```
---
## Playbook 2: Adding Worker to Source Cluster
This playbook registers the configured worker cluster as an Admiralty target in the source Kubernetes cluster.
### Variables (Pass through `--extra-vars`)
| Variable | Description |
|----------|-------------|
| `user_prompt` | The user running the Ansible playbook |
| `target_name` | The name of the worker cluster in the source setup |
| `target_ip` | IP of the worker cluster |
| `namespace_source` | Namespace where the target is registered |
| `serviceaccount_prompt` | The service account used in the worker |
### Actions Performed
1. Retrieves the stored kubeconfig from the worker setup.
2. Creates a ServiceAccount in the target namespace.
3. Stores the kubeconfig in a Kubernetes Secret.
4. Creates an Admiralty `Target` resource in the source cluster.
### Running the Playbook
```sh
ansible-playbook add_admiralty_target.yml -i <SOURCE_HOST_IP>, --extra-vars "user_prompt=<YOUR_USER> target_name=<TARGET_NAME_IN_KUBE> target_ip=<WORKER_IP> namespace_source=<NAMESPACE> serviceaccount_prompt=<SERVICE_ACCOUNT_NAME>"
```
# Post Playbook
Don't forget to grant patch rights to the `serviceAccount` on the control node:
```bash
kubectl patch role argo-role -n argo --type='json' -p '[{"op": "add", "path": "/rules/-", "value": {"apiGroups":[""],"resources":["pods"],"verbs":["patch"]}}]'
```
Add the name of the `serviceAccount` in the following command:
```bash
kubectl patch rolebinding argo-binding -n argo --type='json' -p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "<NAME OF THE USER ACCOUNT>", "namespace": "argo"}}]'
```
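To verify that the subject was added (standard `kubectl`, using the names above):
```bash
kubectl get rolebinding argo-binding -n argo -o jsonpath='{.subjects}'
```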
It might be worth adding a play/playbook to synchronize the roles and rolebindings across all nodes; a rough sketch follows.
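A minimal sketch of such a play, assuming an inventory host named `control` and the `kubernetes.core` collection available on every node (hypothetical, not part of this repository):
```yaml
- name: Sync the argo role from the control node to the workers
  hosts: workers
  vars:
    namespace: argo
  tasks:
    - name: Read argo-role from the control node
      kubernetes.core.k8s_info:
        kind: Role
        name: argo-role
        namespace: "{{ namespace }}"
      delegate_to: control
      register: control_role

    - name: Apply the same role on this worker
      kubernetes.core.k8s:
        state: present
        # replace the server-side metadata before re-applying
        definition: "{{ control_role.resources[0] | combine({'metadata': {'name': 'argo-role', 'namespace': namespace}}) }}"
```
The `argo-binding` rolebinding could be synchronized the same way.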


@@ -0,0 +1,49 @@
- name: Set up an existing k8s cluster to become an admiralty worker for Argo Workflows
hosts: all:!localhost
user: "{{ user_prompt }}"
vars:
- service_account_name: "{{ serviceaccount_prompt }}"
- namespace: "{{ namespace_source }}"
tasks:
- name: Store kubeconfig value
ansible.builtin.set_fact:
kubeconfig: "{{ lookup('file','worker_kubeconfig/{{ target_ip }}_kubeconfig.json') | trim }}"
- name: Create the serviceAccount that will execute in the target
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: ServiceAccount
metadata:
name: '{{ service_account_name }}'
namespace: '{{ namespace }}'
- name: Create the token to authenticate the source
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: admiralty-secret-{{ target_name }}
namespace: "{{ namespace_source }}"
data:
config: "{{ kubeconfig | tojson | b64encode }}"
- name: Create the target resource
kubernetes.core.k8s:
state: present
definition:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
name: target-{{ target_name }}
namespace: '{{ namespace_source }}'
spec:
kubeconfigSecret:
name: admiralty-secret-{{ target_name }}


@@ -0,0 +1,2 @@
[defaults]
result_format=default


@@ -0,0 +1,75 @@
- name: Install Helm
hosts: all:!localhost
user: "{{ user_prompt }}"
become: true
# become_method: su
vars:
arch_mapping: # Map ansible architecture {{ ansible_architecture }} names to Docker's architecture names
x86_64: amd64
aarch64: arm64
tasks:
- name: Check if Helm is already installed
ansible.builtin.command:
cmd: which helm
register: result_which
failed_when: result_which.rc not in [ 0, 1 ]
- name: Install helm
when: result_which.rc == 1
block:
- name: download helm from source
ansible.builtin.get_url:
url: https://get.helm.sh/helm-v3.15.0-linux-amd64.tar.gz
dest: ./
- name: unpack helm
ansible.builtin.unarchive:
remote_src: true
src: helm-v3.15.0-linux-amd64.tar.gz
dest: ./
- name: copy helm to path
ansible.builtin.command:
cmd: mv linux-amd64/helm /usr/local/bin/helm
- name: Install admiralty
hosts: all:!localhost
user: "{{ user_prompt }}"
tasks:
- name: Install required python libraries
become: true
# become_method: su
package:
name:
- python3
- python3-yaml
state: present
- name: Add jetstack repo
ansible.builtin.shell:
cmd: |
helm repo add jetstack https://charts.jetstack.io && \
helm repo update
- name: Install cert-manager
kubernetes.core.helm:
chart_ref: jetstack/cert-manager
release_name: cert-manager
context: default
namespace: cert-manager
create_namespace: true
wait: true
set_values:
- value: installCRDs=true
- name: Install admiralty
kubernetes.core.helm:
name: admiralty
chart_ref: oci://public.ecr.aws/admiralty/admiralty
namespace: admiralty
create_namespace: true
chart_version: 0.16.0
wait: true


@@ -0,0 +1,21 @@
Target
---
- Create a service account
- Create a token for the service account (the SA on the control node == the SA on the target; this shows the SA name + token used to access the target)
- Create a kubeconfig file with the token and the IP address (reachable by the controller/public) and retrieve it to pass it to the controller
- Create a Source resource on the target: it declares who will contact us
- Grant the controller's SA the same roles/rights as the "argo" SA
- In the authorization, add the Patch verb on the pods resource
- Add the controller's SA to the rolebinding
Controller
---
- Create the serviceAccount with the same name as on the target
- Retrieve the target's kubeconfig
- Create a secret from the target kubeconfig
- Create the Target resource and associate the secret with it
Schema
---
When a resource tagged with admiralty is executed on the controller, it reaches out to the targets, authenticating with the secret, to create pods using the shared service account.
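In practice, this tagging is done with pod annotations, as in the example workflow further down in this diff:
```yaml
metadata:
  annotations:
    multicluster.admiralty.io/elect: ""
    multicluster.admiralty.io/clustername: target-dc02
```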


@@ -0,0 +1,8 @@
myhosts:
hosts:
control:
ansible_host: 172.16.0.184
dc01: #oc-dev
ansible_host: 172.16.0.187
dc02:
ansible_host:


@@ -0,0 +1,115 @@
- name: Create secret from Workload
hosts: "{{ host_prompt }}"
user: "{{ user_prompt }}"
vars:
secret_exists: false
control_ip: 192.168.122.70
user_prompt: admrescue
tasks:
- name: Check that the management cluster can be reached
ansible.builtin.command:
cmd: ping -c 5 "{{ control_ip }}"
- name: Install needed packages
become: true
ansible.builtin.package:
name:
- jq
- python3-yaml
- python3-kubernetes
state: present
- name: Get the list of existing secrets
kubernetes.core.k8s_info:
api_version: v1
kind: Secret
name: "{{ inventory_hostname | lower }}"
namespace: default
register: list_secrets
failed_when: false
- name: Create token
ansible.builtin.command:
cmd: kubectl create token admiralty-control
register: cd_token
- name: Retrieve config
ansible.builtin.command:
cmd: kubectl config view --minify --raw --output json
register: config_info
- name: Create an empty config_info.json file
ansible.builtin.shell:
cmd: |
echo > config_info.json
- name: Edit the config json with jq
ansible.builtin.shell:
cmd: |
CD_TOKEN="{{ cd_token.stdout }}" && \
CD_IP="{{ control_ip }}" && \
kubectl config view --minify --raw --output json | jq '.users[0].user={token:"'$CD_TOKEN'"} | .clusters[0].cluster.server="https://'$CD_IP':6443"'
register: edited_config
# failed_when: edited_config.skipped == true
- name: Set fact for secret
set_fact:
secret: "{{ edited_config.stdout }}"
cacheable: true
- name: Create the source for controller
kubernetes.core.k8s:
state: present
definition:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
name: admiralty-control
namespace: default
spec:
serviceAccountName: admiralty-control
- name: Create secret from Workload
hosts: "{{ control_host }}"
user: "{{ user_prompt }}"
gather_facts: true
vars:
secret: "{{ hostvars[host_prompt]['secret'] }}"
user_prompt: admrescue
tasks:
- name: Get the list of existing secrets
kubernetes.core.k8s_info:
api_version: v1
kind: Secret
name: "{{ host_prompt | lower }}-secret"
namespace: default
register: list_secrets
failed_when: false
- name: Test whether the secret exists
failed_when: secret == ''
debug:
msg: "Secret '{{ secret }}' "
- name: Create secret with new config
ansible.builtin.command:
cmd: kubectl create secret generic "{{ host_prompt | lower }}"-secret --from-literal=config='{{ secret }}'
when: list_secrets.resources | length == 0
- name: Create target for the workload cluster
kubernetes.core.k8s:
state: present
definition:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
name: '{{ host_prompt | lower }}'
namespace: default
spec:
kubeconfigSecret:
name: '{{ host_prompt | lower }}-secret'


@@ -0,0 +1,33 @@
@startuml
actor User
participant "Ansible Playbook" as Playbook
participant "Target Node" as K8s
participant "Control Node" as ControlNode
User -> Playbook: Start Playbook Execution
Playbook -> Playbook: Save Target IP
Playbook -> K8s: Install Required Packages
Playbook -> K8s: Create Service Account
Playbook -> K8s: Patch Role argo-role (Add pod patch permission)
Playbook -> K8s: Patch RoleBinding argo-binding (Add service account)
Playbook -> K8s: Create Token for Service Account
Playbook -> K8s: Create Source Resource
Playbook -> K8s: Retrieve Current Kubeconfig
Playbook -> K8s: Convert Kubeconfig to JSON
Playbook -> User: Display Worker Kubeconfig
Playbook -> Playbook: Save Temporary Kubeconfig File
Playbook -> Playbook: Modify Kubeconfig JSON (Replace user token, set server IP)
Playbook -> User: Save Updated Kubeconfig File
Playbook -> User: Display Instructions for Adding Target
User -> Playbook: Start Additional Playbook Execution
Playbook -> Playbook: Store Kubeconfig Value
Playbook -> User: Display Kubeconfig
Playbook -> ControlNode : Copy Kubeconfig
Playbook -> ControlNode: Create Service Account on Target
Playbook -> ControlNode: Create Authentication Token for Source
Playbook -> ControlNode: Create Target Resource
@enduml


@@ -0,0 +1,110 @@
- name: Set up an existing k8s cluster to become an admiralty worker for Argo Workflows
hosts: all:!localhost
user: "{{ user_prompt }}"
# Pass these through --extra-vars
vars:
- namespace: "{{ namespace_prompt }}"
- source_name: "{{ source_prompt }}"
- service_account_name : "admiralty-{{ source_prompt }}"
environment:
KUBECONFIG: /home/{{ user_prompt }}/.kube/config
tasks:
- name: Save target IP
set_fact:
target_ip : "{{ ansible_host }}"
- name: Install the appropriate packages
become: true
become_method: sudo
package:
name:
- python3
- python3-yaml
- python3-kubernetes
- jq
state: present
# We need to provide the source name on the command line through --extra-vars
- name: Create a service account for the source
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: ServiceAccount
metadata:
name: '{{ service_account_name }}'
namespace: '{{ namespace }}'
- name: Add patch permission for pods to argo-role
command: >
kubectl patch role argo-role -n {{ namespace }} --type='json'
-p '[{"op": "add", "path": "/rules/-", "value": {"apiGroups":[""],"resources":["pods"],"verbs":["patch"]}}]'
register: patch_result
changed_when: "'patched' in patch_result.stdout"
- name: Add service account to argo-rolebinding
ansible.builtin.command: >
kubectl patch rolebinding argo-role-binding -n {{ namespace }} --type='json'
-p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "{{ service_account_name }}", "namespace": "{{ namespace }}"}}]'
register: patch_result
changed_when: "'patched' in patch_result.stdout"
- name: Create a token for the created service account
ansible.builtin.command:
cmd: |
kubectl create token '{{ service_account_name }}' -n {{ namespace }}
register: token_source
- name: Create the source resource
kubernetes.core.k8s:
state: present
definition:
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
name: source-{{ source_name }}
namespace: '{{ namespace }}'
spec:
serviceAccountName: "{{ service_account_name }}"
- name: Retrieve the current kubeconfig as json
ansible.builtin.shell:
cmd: |
kubectl config view --minify --raw --output json
register: worker_kubeconfig
- name: Convert kubeconfig to JSON
set_fact:
kubeconfig_json: "{{ worker_kubeconfig.stdout | trim | from_json }}"
- name: View worker kubeconfig
ansible.builtin.debug:
msg: '{{ kubeconfig_json }}'
- name: Temporary kubeconfig file
ansible.builtin.copy:
content: "{{ kubeconfig_json }}"
dest: "{{ target_ip }}_kubeconfig.json"
- name: Modify kubeconfig JSON
ansible.builtin.shell:
cmd: |
jq '.users[0].user={token:"'{{ token_source.stdout }}'"} | .clusters[0].cluster.server="https://'{{ target_ip }}':6443"' {{ target_ip }}_kubeconfig.json
register: kubeconfig_json
- name: Save updated kubeconfig
ansible.builtin.copy:
content: "{{ kubeconfig_json.stdout | trim | from_json | to_nice_json }}"
dest: ./worker_kubeconfig/{{ target_ip }}_kubeconfig.json
delegate_to: localhost
- name: Display information for creating the target on the source host
ansible.builtin.debug:
msg: >
- To add this host as a target in an Admiralty network use the following command line :
- ansible-playbook add_admiralty_target.yml -i <SOURCE HOST IP>, --extra-vars "user_prompt=<YOUR USER> target_name=<TARGET NAME IN KUBE> target_ip={{ ansible_host }} namespace_source={{ namespace }} serviceaccount_prompt={{ service_account_name }}"
- Don't forget to give {{ service_account_name }} the appropriate role in namespace {{ namespace }}


@@ -0,0 +1,121 @@
- name: Set up MinIO resources for Argo Workflows/Admiralty
hosts: all:!localhost
user: "{{ user_prompt }}"
gather_facts: true
become_method: sudo
vars:
- argo_namespace: "argo"
- uuid: "{{ uuid_prompt }}"
tasks:
- name: Install necessary packages
become: true
package:
name:
- python3-kubernetes
state: present
- name: Create destination directory
file:
path: $HOME/minio-binaries
state: directory
mode: '0755'
- name: Install mc
ansible.builtin.get_url:
url: "https://dl.min.io/client/mc/release/linux-amd64/mc"
dest: $HOME/minio-binaries/
mode: +x
headers:
Content-Type: "application/json"
- name: Add mc to path
ansible.builtin.shell:
cmd: |
grep -qxF 'export PATH=$PATH:$HOME/minio-binaries' $HOME/.bashrc || echo 'export PATH=$PATH:$HOME/minio-binaries' >> $HOME/.bashrc
- name: Test bashrc
ansible.builtin.shell:
cmd : |
tail -n 5 $HOME/.bashrc
- name: Retrieve root user
ansible.builtin.shell:
cmd: |
kubectl get secrets argo-artifacts -o jsonpath="{.data.rootUser}" | base64 -d -
register: user
- name: Retrieve root password
ansible.builtin.shell:
cmd: |
kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootPassword}" | base64 -d -
register : password
- name: Set up MinIO host in mc
ansible.builtin.shell:
cmd: |
$HOME/minio-binaries/mc alias set my-minio http://127.0.0.1:9000 '{{ user.stdout }}' '{{ password.stdout }}'
- name: Create oc-bucket
ansible.builtin.shell:
cmd: |
$HOME/minio-binaries/mc mb my-minio/oc-bucket
- name: Run mc admin accesskey create command
command: $HOME/minio-binaries/mc admin accesskey create --json my-minio
register: minio_output
changed_when: false # Avoid marking the task as changed every time
- name: Parse JSON output
set_fact:
access_key: "{{ minio_output.stdout | from_json | json_query('accessKey') }}"
secret_key: "{{ minio_output.stdout | from_json | json_query('secretKey') }}"
- name: Retrieve cluster IP for minio API
ansible.builtin.shell:
cmd: |
kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}"
register: minio_cluster_ip
- name: Create the minio secret in argo namespace
kubernetes.core.k8s:
state: present
namespace: '{{ argo_namespace }}'
name: "{{ uuuid }}-argo-artifact-secret"
definition:
apiVersion: v1
kind: Secret
type: Opaque
stringData:
access-key: '{{ access_key }}'
secret-key: '{{ secret_key }}'
- name: Create the minio secret in argo namespace
kubernetes.core.k8s:
state: present
namespace: '{{ argo_namespace }}'
definition:
apiVersion: v1
kind: ConfigMap
metadata:
name: artifact-repositories
data:
oc-s3-artifact-repository: |
s3:
bucket: oc-bucket
endpoint: {{ minio_cluster_ip.stdout }}:9000
insecure: true
accessKeySecret:
name: "{{ uuuid }}-argo-artifact-secret"
key: access-key
secretKeySecret:
name: "{{ uuuid }}-argo-artifact-secret"
key: secret-key
# ansible.builtin.shell:
# cmd: |
# kubectl create secret -n '{{ argo_namespace }}' generic argo-artifact-secret \
# --from-literal=access-key='{{ access_key }}' \
# --from-literal=secret-key='{{ secret_key }}'


@@ -0,0 +1,149 @@
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: harvesting-
labels:
example: 'true'
workflows.argoproj.io/creator: 0d47b046-a09e-4bed-b10a-ec26783d4fe7
workflows.argoproj.io/creator-email: pierre.bayle.at.irt-stexupery.com
workflows.argoproj.io/creator-preferred-username: pbayle
spec:
templates:
- name: busybox
inputs:
parameters:
- name: model
- name: output-dir
- name: output-file
- name: clustername
outputs:
parameters:
- name: outfile
value: '{{inputs.parameters.output-file}}.tgz'
artifacts:
- name: outputs
path: '{{inputs.parameters.output-dir}}/{{inputs.parameters.output-file}}.tgz'
s3:
key: '{{workflow.name}}/{{inputs.parameters.output-file}}.tgz'
container:
image: busybox
command: ["/bin/sh", "-c"]
args:
- |
echo "Creating tarball for model: {{inputs.parameters.model}}";
mkdir -p {{inputs.parameters.output-dir}};
echo $(ping 8.8.8.8 -c 4) > $(date +%Y-%m-%d__%H-%M-%S)_{{inputs.parameters.output-file}}.txt
tar -czf {{inputs.parameters.output-dir}}/{{inputs.parameters.output-file}}.tgz *_{{inputs.parameters.output-file}}.txt;
metadata:
annotations:
multicluster.admiralty.io/elect: ""
multicluster.admiralty.io/clustername: "{{inputs.parameters.clustername}}"
- name: weather-container
inputs:
parameters:
- name: output-dir
- name: output-file
- name: clustername
outputs:
parameters:
- name: outfile
value: '{{inputs.parameters.output-file}}.tgz'
artifacts:
- name: outputs
path: '{{inputs.parameters.output-dir}}/{{inputs.parameters.output-file}}.tgz'
s3:
insecure: true
key: '{{workflow.name}}/{{inputs.parameters.output-file}}'
container:
name: weather-container
image: pierrebirt/weather_container:latest
#imagePullPolicy: IfNotPresent
env:
- name: API_KEY
valueFrom:
secretKeyRef:
name: cnes-secrets
key: weather-api
args:
- '--key'
- "$(API_KEY)"
- '--dir'
- '{{inputs.parameters.output-dir}}'
- '--file'
- '{{inputs.parameters.output-file}}'
metadata:
annotations:
multicluster.admiralty.io/elect: ""
multicluster.admiralty.io/clustername: "{{inputs.parameters.clustername}}"
- name: bucket-reader
inputs:
parameters:
- name: bucket-path
- name: logs-path
artifacts:
- name: retrieved-logs
path: '{{inputs.parameters.logs-path}}'
s3:
key: '{{inputs.parameters.bucket-path}}'
outputs:
artifacts:
- name: logs_for_test
path: /tmp/empty_log_for_test.log
s3:
key: '{{workflow.name}}/log_test.log'
container:
image: busybox
command: ["/bin/sh", "-c"]
args:
- |
tar -xvf '{{inputs.parameters.logs-path}}'
ls -la
cat *.txt
touch /tmp/empty_log_for_test.log
- name: harvesting-test
inputs: {}
outputs: {}
metadata: {}
dag:
tasks:
- name: busybox-dc02
template: busybox
arguments:
parameters:
- name: model
value: era-pressure-levels
- name: output-dir
value: /app/data/output
- name: output-file
value: fake_logs
- name: clustername
value: target-dc02
- name: weather-container-dc03
template: weather-container
arguments:
parameters:
- name: output-dir
value: /app/results
- name: output-file
value: weather_results_23_01
- name: clustername
value: target-dc03
- name: bucket-reader
template: bucket-reader
dependencies: [busybox-dc02,weather-container-dc03]
arguments:
parameters:
- name: bucket-path
value: '{{workflow.name}}/fake_logs.tgz'
- name: logs-path
value: /tmp/logs.tgz
entrypoint: harvesting-test
serviceAccountName: argo-agregateur-workflow-controller
artifactRepositoryRef: # https://argo-workflows.readthedocs.io/en/latest/fields/#s3artifactrepository
key: admiralty-s3-artifact-repository # Choose the artifact repository with the public IP/url


@@ -0,0 +1,32 @@
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpneE5EVTNNekl3SGhjTk1qVXdNVEk1TVRBeE5UTXlXaGNOTXpVd01USTNNVEF4TlRNeQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpneE5EVTNNekl3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSWHFiRHBmcUtwWVAzaTFObVpCdEZ3RzNCZCtOY0RwenJKS01qOWFETlUKTUVYZmpRM3VrbzVISDVHdTFzNDRZY0p6Y29rVEFmb090QVhWS1pNMUs3YWVvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWM5MW5TYi9kaU1pbHVqR3RENjFRClc0djVKVmN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnV05uSzlsU1lDY044VEFFODcwUnNOMEgwWFR6UndMNlAKOEF4Q0xwa3pDYkFDSVFDRW1LSkhveXFZRW5iZWZFU3VOYkthTHdtRkMrTE5lUHloOWxQUmhCVHdsQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"server": "https://172.16.0.181:6443"
},
"name": "default"
}
],
"contexts": [
{
"context": {
"cluster": "default",
"user": "default"
},
"name": "default"
}
],
"current-context": "default",
"kind": "Config",
"preferences": {},
"users": [
{
"name": "default",
"user": {
"token": "eyJhbGciOiJSUzI1NiIsImtpZCI6Ik5nT1p0NVVMUVllYko1MVhLdVIyMW01MzJjY25NdTluZ3VNQ1RmMnNTUHcifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzM4Njg1NzM2LCJpYXQiOjE3Mzg2ODIxMzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNTNkNzU4YmMtMGUwMC00YTU5LTgzZTUtYjkyYjZmODg2NWE2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiMWQ1NmEzMzktMTM0MC00NDY0LTg3OGYtMmIxY2ZiZDU1ZGJhIn19LCJuYmYiOjE3Mzg2ODIxMzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.WMqmDvp8WZHEiupJewo2BplD0xu6yWhlgZkG4q_PpVCbHKd7cKYWnpTi_Ojmabvvw-VC5sZFZAaxZUnqdZNGf_RMrJ5pJ9B5cYtD_gsa7AGhrSz03nd5zPKvujT7-gzWmfHTpZOvWky00A2ykKLflibhJgft4FmFMxQ6rR3MWmtqeAo82wevF47ggdOiJz3kksFJPfEpk1bflumbUCk-fv76k6EljPEcFijsRur-CI4uuXdmTKb7G2TDmTMcFs9X4eGbBO2ZYOAVEw_Xafru6D-V8hWBTm-NWQiyyhdxlVdQg7BNnXJ_26GsJg4ql4Rg-Q-tXB5nGvd68g2MnGTWwg"
}
}
]
}


@@ -0,0 +1,32 @@
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3SGhjTk1qVXdNVEk0TVRZME9ESTBXaGNOTXpVd01USTJNVFkwT0RJMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSdUV0Y2lRS3VaZUpEV214TlJBUzM3TlFib3czSkpxMWJQSjdsdTN2eEgKR2czS1hGdFVHZWNGUjQzL1Rjd0pmanQ3WFpsVm9PUldtOFozYWp3OEJPS0ZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTB3NG1uSlUrbkU3SnpxOHExRWdWCmFUNU1mMmd3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU9JTUtsZHk0Y044a3JmVnQyUFpLQi80eXhpOGRzM0wKaHR0b2ZrSEZtRnlsQWlCMWUraE5BamVUdVNCQjBDLzZvQnA2c21xUDBOaytrdGFtOW9EM3pvSSs0Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"server": "https://172.16.0.184:6443"
},
"name": "default"
}
],
"contexts": [
{
"context": {
"cluster": "default",
"user": "default"
},
"name": "default"
}
],
"current-context": "default",
"kind": "Config",
"preferences": {},
"users": [
{
"name": "default",
"user": {
"token": "eyJhbGciOiJSUzI1NiIsImtpZCI6InUzaGF0T1RuSkdHck1sbURrQm0waDdDeDFSS3pxZ3FVQ25aX1VrOEkzdFkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzM4Njg1NzM2LCJpYXQiOjE3Mzg2ODIxMzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZDFmNzQ2NmQtN2MyOS00MGNkLTg1ZTgtMjZmMzFkYWU5Nzg4Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiNTc0Y2E1OTQtY2IxZi00N2FiLTkxZGEtMDI0NDEwNjhjZjQwIn19LCJuYmYiOjE3Mzg2ODIxMzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.ZJvTJawg73k5SEOG6357iYq_-w-7V4BqciURYJao_dtP_zDpcXyZ1Xw-sxNKITgLjByTkGaCJRjDtR2QdZumKtb8cl6ayv0UZMHHnFft4gtQi-ttjj69rQ5RTNA3dviPaQOQgWNAwPkUPryAM0Sjsd5pRWzXXe-NVpWQZ6ooNZeRBHyjT1Km1JoprB7i55vRJEbBnoK0laJUtHCNmLoxK5kJYQqeAtA-_ugdSJbnyTFQAG14vonZSyLWAQR-Hzw9QiqIkSEW1-fcvrrZbrVUZsl_i7tkrXSSY9EYwjrZlqIu79uToEa1oWvulGFEN6u6YGUydj9nXQJX_eDpaWvuOA"
}
}
]
}


@@ -0,0 +1,32 @@
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3SGhjTk1qVXdNVEk0TVRZME9ESTBXaGNOTXpVd01USTJNVFkwT0RJMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpnd09ESTVNRFF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSdUV0Y2lRS3VaZUpEV214TlJBUzM3TlFib3czSkpxMWJQSjdsdTN2eEgKR2czS1hGdFVHZWNGUjQzL1Rjd0pmanQ3WFpsVm9PUldtOFozYWp3OEJPS0ZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTB3NG1uSlUrbkU3SnpxOHExRWdWCmFUNU1mMmd3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU9JTUtsZHk0Y044a3JmVnQyUFpLQi80eXhpOGRzM0wKaHR0b2ZrSEZtRnlsQWlCMWUraE5BamVUdVNCQjBDLzZvQnA2c21xUDBOaytrdGFtOW9EM3pvSSs0Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"server": "https://172.16.0.185:6443"
},
"name": "default"
}
],
"contexts": [
{
"context": {
"cluster": "default",
"user": "default"
},
"name": "default"
}
],
"current-context": "default",
"kind": "Config",
"preferences": {},
"users": [
{
"name": "default",
"user": {
"token": "eyJhbGciOiJSUzI1NiIsImtpZCI6InUzaGF0T1RuSkdHck1sbURrQm0waDdDeDFSS3pxZ3FVQ25aX1VrOEkzdFkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzM4Njg1NzM2LCJpYXQiOjE3Mzg2ODIxMzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiODdlYjVkYTYtYWNlMi00YzFhLTg1YjctYWY1NDI2MjA1ZWY1Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiZjFjNjViNDQtYmZmMC00Y2NlLTk4ZGQtMTU0YTFiYTk0YTU2In19LCJuYmYiOjE3Mzg2ODIxMzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.SkpDamOWdyvTUk8MIMDMhuKD8qvJPpX-tjXPWX9XsfpMyjcB02kI-Cn9b8w1TnYpGJ_u3qyLzO7RlXOgSHtm7TKHOCoYudj4jNwRWqIcThxzAeTm53nlZirUU0E0eJU8cnWHGO3McAGOgkStpfVwHaTQHq2oMZ6jayQU_HuButGEvpFt2FMFEwY9pOjabYHPPOkY9ruswzNhGBRShxWxfOgCWIt8UmbrryrNeNd_kZlB0_vahuQkAskeJZd3f_hp7qnSyLd-YZa5hUrruLJBPQZRw2sPrZe0ukvdpuz7MCfE-CQzUDn6i3G6FCKzYfd-gHFIYNUowS0APHLcC-yWSQ"
}
}
]
}


@@ -0,0 +1,32 @@
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTlRJM05EVTROVE13SGhjTk1qVXdOekUzTURrMU1EVXpXaGNOTXpVd056RTFNRGsxTURVegpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTlRJM05EVTROVE13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUTzJzVWE4MTVDTmVxWUNPdCthREoreG5hWHRZNng3R096a0c1U1U0TEEKRE1talExRVQwZi96OG9oVU55L1JneUt0bmtqb2JnZVJhOExTdDAwc3NrMDNvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVW1zeUVyWkQvbmxtNVJReUUwR0NICk1FWlU0ZWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnVXJsR3ZGZy9FVzhXdU1Nc3JmZkZTTHdmYm1saFI5MDYKYjdHaWhUNHdFRzBDSUVsb2FvWGdwNnM5c055eE1iSUwxKzNlVUtFc0k2Y2dDdldFVEZmRWtQTUIKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"server": "https://172.16.0.191:6443"
},
"name": "default"
}
],
"contexts": [
{
"context": {
"cluster": "default",
"user": "default"
},
"name": "default"
}
],
"current-context": "default",
"kind": "Config",
"preferences": {},
"users": [
{
"name": "default",
"user": {
"token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IlZCTkEyUVJKeE9XblNpeUI1QUlMdWtLZmVpbGQ1LUpRTExvNWhkVjlEV2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzUzMTk0MjMxLCJpYXQiOjE3NTMxOTA2MzEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYTMyOTY0OTktNzhiZS00MzE0LTkyYjctMDQ1NTBkY2JjMGUyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJ0ZXN0LWFkbWlyYWx0eS1hbnNpYmxlIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1zb3VyY2UiLCJ1aWQiOiI4YmJhMTA3Mi0wYjZiLTQwYjUtYWI4Mi04OWQ1MTkyOGIwOTIifX0sIm5iZiI6MTc1MzE5MDYzMSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnRlc3QtYWRtaXJhbHR5LWFuc2libGU6YWRtaXJhbHR5LXNvdXJjZSJ9.A0UJLoui_SX4dCgUZIo4kprZ3kb2WBkigvyy1e55qQMFZxRoAed6ZvR95XbHYNUoiHR-HZE04QO0QcOnFaaQDTA6fS9HHtjfPKAoqbXrpShyoHNciiQnhkwYvtEpG4bvDf0JMB9qbWGMrBoouHwx-JoQG0JeoQq-idMGiDeHhqVc86-Uy_angvRoAZGF5xmYgMPcw5-vZPGfgk1mHYx5vXNofCcmF4OqMvQaWyYmH82L5SYAYLTV39Z1aCKkDGGHt5y9dVJ0udA4E5Cx3gO2cLLLWxf8n7uFSUx8sHgFtZOGgXwN8DIrTe3Y95p09f3H7nTxjnmQ-Nce2hofLC2_ng"
}
}
]
}


@@ -0,0 +1,32 @@
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"certificate-authority-data": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTXpnNU1Ua3pPRFF3SGhjTk1qVXdNakEzTURrd09UUTBXaGNOTXpVd01qQTFNRGt3T1RRMApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTXpnNU1Ua3pPRFF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTYWsySHRMQVFUclVBSUF3ckUraDBJZ0QyS2dUcWxkNmorQlczcXRUSmcKOW9GR2FRb1lnUERvaGJtT29ueHRTeDlCSlc3elkrZEM2T3J5ekhkYzUzOGRvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXd5UE1iOFAwaC9IR2szZ0dianozClFvOVVoQ293Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUlBVE1ETGFpeWlwaUNuQjF1QWtYMkxiRXdrYk93QlcKb1U2eDluZnRMTThQQWlFQTUza0hZYU05ZVZVdThld3REa0M3TEs3RTlkSGczQ3pSNlBxSHJjUHJTeDA9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"server": "https://target01:6443"
},
"name": "default"
}
],
"contexts": [
{
"context": {
"cluster": "default",
"user": "default"
},
"name": "default"
}
],
"current-context": "default",
"kind": "Config",
"preferences": {},
"users": [
{
"name": "default",
"user": {
"token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IlJwbHhUQ2ppREt3SmtHTWs4Z2cwdXBuWGtjTUluMVB0dFdGbUhEUVY2Y2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzQwMDUzODAyLCJpYXQiOjE3NDAwNTAyMDIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMThkNzdjMzctZjgyNC00MGVmLWExMDUtMzcxMzJkNjUxNzgzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcmdvIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImFkbWlyYWx0eS1jb250cm9sIiwidWlkIjoiNmExM2M4YTgtZmE0NC00NmJlLWI3ZWItYTQ0OWY3ZTMwZGM1In19LCJuYmYiOjE3NDAwNTAyMDIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcmdvOmFkbWlyYWx0eS1jb250cm9sIn0.DtKkkCEWLPp-9bmSbrqxvxO2kXfOW2cHlmxs5xPzTtn3DcNZ-yfUxJHxEv9Hz6-h732iljRKiWx3SrEN2ZjGq555xoOHV202NkyUqU3EWmBwmVQgvUKOZSn1tesAfI7fQp7sERa7oKz7ZZNHJ7x-nw0YBoxYa4ECRPkJKDR3uEyRsyFMaZJELi-wIUSZkeGxNR7PdQWoYPoJipnwXoyAFbT42r-pSR7nqzy0-Lx1il82klkZshPEj_CqycqJg1djoNoe4ekS7En1iljz03YqOqm1sFSOdvDRS8VGM_6Zm6e3PVwXQZVBgFy_ET1RqtxPsLyYmaPoIfPMq2xeRLoGIg"
}
}
]
}

ansible/Argo/README.md Normal file

@@ -0,0 +1,70 @@
# Prerequisites
Ensure that you have the following installed on your local machine:
- Ansible
- SSH access to the target host
- Required dependencies for Kubernetes
Two passwords are requested via prompts:
1. The password of the user used to connect to the host via SSH.
2. The root password for privilege escalation.
- You can use a user with `NOPASSWD` sudo permissions and skip `--ask-become-pass`.
- You can use `ssh-copy-id` to install your key for the provided user on the remote host and skip `--ask-pass`.
# Deployment Instructions
## Deploying K3s
Replace `HOST_NAME` with the IP address or hostname of the target machine in `my_hosts.yaml`, then run:
```sh
ansible-playbook -i <YOUR_HOST_IP>, deploy_k3s.yml --extra-vars "user_prompt=YOUR_USER" --ask-pass --ask-become-pass
```
This playbook:
- Updates package repositories.
- Installs necessary dependencies.
- Ensures the user has `sudo` privileges.
- Downloads and installs K3s.
- Configures permissions for Kubernetes operations.
- Enables auto-completion for `kubectl`.
- Reboots the machine to apply changes.
## Deploying Argo Workflows
Replace `HOST_NAME` with the IP address or hostname of the target machine in `my_hosts.yaml`, then run:
```sh
ansible-playbook -i <YOUR_HOST_IP>, deploy_argo.yml --extra-vars "user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass
```
This playbook:
- Ensures the `argo` namespace exists in Kubernetes.
- Deploys Argo Workflows using the official manifest.
- Waits for the `argo-server` pod to be running.
- Patches the deployment for first-time connection issues.
- Applies a service configuration to expose Argo Workflows via NodePort.
- Installs the Argo CLI.
- Enables CLI autocompletion.
- Configures `kubectl` for Argo access.
# Additional Notes
- The service account used by default is `argo:default`, which may not have sufficient permissions. Use `argo:argo` instead:
```sh
argo submit -f workflow.yaml --serviceaccount=argo
```
- The Argo CLI is installed in `/usr/local/bin/argo`.
- The Kubernetes configuration file is copied to `~/.kube/config`.
# Troubleshooting
- If the deployment fails due to permissions, ensure the user has `sudo` privileges.
- Check the status of Argo pods using:
```sh
kubectl get pods -n argo
```
- If Argo Workflows is not accessible, verify that the NodePort service is correctly configured.
# References
- [K3s Official Documentation](https://k3s.io/)
- [Argo Workflows Documentation](https://argoproj.github.io/argo-workflows/)


@@ -0,0 +1,14 @@
# Needed by deploy-argo.yml to change argo to a NodePort service
apiVersion: v1
kind: Service
metadata:
name: argo-server
namespace: argo
spec:
type: NodePort
selector:
app: argo-server
ports:
- port: 2746
targetPort: 2746
nodePort: 32746


@@ -0,0 +1,95 @@
# ansible-playbook -i my_hosts.yaml deploy_argo.yml --ask-pass --ask-become-pass
# Need to think about which serviceaccount will be used to launch the workflow, by default
# uses argo:default but it doesn't have enough rights, need to use argo:argo
# like '$ argo submit -f .... --serviceaccount=argo'
- name: Install Argo
hosts: all
user: "{{ user_prompt }}"
vars:
ARGO_VERSION: "3.5.2"
environment:
KUBECONFIG: /home/{{ user_prompt }}/.kube/config
tasks:
- name: Create argo namespace
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
labels:
kubernetes.io/metadata.name: argo
name: argo
- name: Check whether argo-server is already running
ansible.builtin.shell:
cmd: |
kubectl get -n argo pods | grep -q argo-server
register: argo_server_pod
failed_when: argo_server_pod.rc not in [ 0, 1 ]
- name: Installing argo services
ansible.builtin.shell:
cmd: |
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v{{ ARGO_VERSION }}/install.yaml
when: argo_server_pod.rc == 1
- name: Check the argo-server pod status
ansible.builtin.shell:
cmd: |
argo_server_name=$(kubectl get -n argo pods | grep argo-server | cut -d ' ' -f 1)
kubectl get -n argo pods $argo_server_name --output=jsonpath='{.status.phase}'
register: pod_status
retries: 30
delay: 10
until: pod_status.stdout == "Running"
- name: Patch first connection bug
ansible.builtin.shell: |
kubectl patch deployment \
argo-server \
--namespace argo \
--type='json' \
-p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
"server",
"--auth-mode=server"
]}]'
- name: Copying the configuration file to new host
copy: src=argo-service.yml dest=$HOME mode=0755
- name: Applying the conf file to make the service a NodePort type
ansible.builtin.shell:
cmd: |
kubectl apply -f argo-service.yml
- name: download argo CLI
become: true
ansible.builtin.uri:
url: " https://github.com/argoproj/argo-workflows/releases/download/v{{ ARGO_VERSION }}/argo-linux-amd64.gz"
method: GET
dest: /var
status_code: 200
headers:
Content-Type: "application/json"
- name: Install argo CLI
become: true
ansible.builtin.shell:
cmd: |
gunzip argo-linux-amd64.gz
chmod +x argo-linux-amd64
mv ./argo-linux-amd64 /usr/local/bin/argo
args:
chdir: /var
- name: Enable argo CLI autocomplete
ansible.builtin.shell:
cmd: |
grep 'argo completion bash' $HOME/.bashrc || echo 'source <(argo completion bash)' >> $HOME/.bashrc

ansible/Argo/deploy_k3s.yml Normal file

@@ -0,0 +1,116 @@
- name: Install k3s
hosts: all:!localhost
user: "{{ user_prompt }}"
gather_facts: true
tasks:
- name: Update apt
become: true
# become_method: su
ansible.builtin.shell:
cmd:
apt update -y
- name: Install necessary packages
become: true
# become_method: su
package:
name:
- sudo
- curl
- grep
- expect
- adduser
state: present
- name: Test if the current user is a sudoer
ansible.builtin.shell:
cmd:
groups {{ ansible_user_id }} | grep -q 'sudo'
register: sudoer
failed_when: sudoer.rc not in [ 0, 1 ]
- name: Adding user to sudoers
become: true
# become_method: su
user:
name: "{{ ansible_user_id }}"
append: true
groups: sudo
when: sudoer.rc == 1
- name: Reset ssh connection to allow user changes to affect ansible user
ansible.builtin.meta:
reset_connection
when: sudoer.rc == 1
- name: Wait for SSH to become available again
wait_for:
port: 22
delay: 10
timeout: 120
when: sudoer.rc == 1
- name: Download k3s
ansible.builtin.uri:
url: "https://get.k3s.io"
method: GET
dest: ./install_k3s.sh
status_code: 200
headers:
Content-Type: "application/json"
- name: Install k3s
become: true
# become_method: su
ansible.builtin.shell:
cmd : sh install_k3s.sh
- name: Add k3s group
become: true
# become_method: su
group:
name: k3s
state: present
- name: Add user to k3s group
become: true
# become_method: su
user:
name: "{{ ansible_user_id }}"
append: true
groups: k3s
- name: Ensure .kube directory exists
ansible.builtin.file:
path: ~/.kube
state: directory
mode: '0700'
- name: Copy kubeconfig file
become: true
ansible.builtin.copy:
src: /etc/rancher/k3s/k3s.yaml
dest: /home/{{ user_prompt }}/.kube/config
remote_src: true
mode: '0600'
owner: "{{ ansible_user_id }}"
group: "{{ ansible_user_gid }}"
- name: Set KUBECONFIG environment variable in .bashrc
ansible.builtin.lineinfile:
path: ~/.bashrc
line: 'export KUBECONFIG=$HOME/.kube/config'
- name: Ensure kubectl autocompletion is enabled
ansible.builtin.lineinfile:
path: ~/.bashrc
line: 'source <(kubectl completion bash)'
- name: Unconditionally reboot the machine with all defaults
become: true
# become_method: su
ansible.builtin.reboot:


@@ -0,0 +1,59 @@
- name: Deploy VMs based on a local Debian image
hosts: localhost
gather_facts: true
become: true
vars:
# debian_image: "/var/lib/libvirt/images"
# vm: "{{ item }}"
ssh_pub_key: "/home/pierre/.ssh/id_rsa.pub"
root: root
os: https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2
checksum: ""
xml_template: debian_template
machines:
- name: control_test
ip: 192.168.122.80
# - name: DC01_test
# ip: 192.168.122.81
# - name: DC02_test
# ip: 192.168.122.82
tasks:
- name: Is os image present
ansible.builtin.fail:
msg: You did not provide an image to build from
when:
os == ""
- name: Is XML template present
ansible.builtin.stat:
path: "create_kvm/templates/{{ xml_template }}.xml.j2"
register: xml_present
- name: XML not present
ansible.builtin.fail:
msg: You did not provide a valid xml template
when: not (xml_present.stat.exists)
- name: KVM Provision role
ansible.builtin.include_role:
name: create_kvm
vars:
# libvirt_pool_dir: "{{ pool_dir }}"
os_image: "{{ os }}"
template_file: "{{ xml_template }}.xml.j2"
vm_name: "{{ item.name }}"
ssh_key: "{{ ssh_pub_key }}"
root_pwd: "{{ root }}"
loop:
"{{ machines }}"
- name: Set up the wanted IP
ansible.builtin.include_tasks:
file: setup_vm_ip.yml
loop:
"{{ machines }}"
# for control,dc01,dc02
# 192.168.122.70 + 1
# /var/lib/libvirt/images/debian11-2-1-clone.qcow2


@@ -0,0 +1,32 @@
- name: Install Mosquitto
hosts: "{{ host_prompt }}"
user: "{{ user_prompt }}"
tasks:
- name: install package
become: true
ansible.builtin.package:
name:
- mosquitto
- mosquitto-clients
state: present
- name: configure mosquitto conf
become: true
ansible.builtin.lineinfile:
path: /etc/mosquitto/conf.d/mosquitto.conf
line: allow_anonymous true
create: true
- name: configure mosquitto conf
become: true
ansible.builtin.lineinfile:
path: /etc/mosquitto/conf.d/mosquitto.conf
line: listener 1883 0.0.0.0
- name: restart mosquitto
become: true
ansible.builtin.service:
name: mosquitto
state: restarted


@@ -0,0 +1,15 @@
- name: Retrieve network info
ansible.builtin.command:
cmd: virsh domifaddr "{{ item.name }}"
register: output_domifaddr
- name: Extract vm's current ip
vars:
pattern: '(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
ansible.builtin.set_fact:
current_ip: "{{ output_domifaddr.stdout | regex_search(pattern, '\\1') }}"
- name: Show ip
ansible.builtin.debug:
msg: "{{ current_ip.0 }}"

ansible/Minio/README.md Normal file

@@ -0,0 +1,111 @@
# MinIO
## Deploy Minio
This playbook installs MinIO on a Kubernetes cluster using Helm and retrieves necessary credentials and access information.
### Variables
| Variable | Description |
|----------|-------------|
| `user_prompt` | SSH user to execute commands |
| `host_name_prompt` | Hostname of the target machine |
| `memory_req` | Memory allocation for MinIO (`2Gi` by default) |
| `storage_req` | Storage allocation for MinIO (`20Gi` by default) |
### Steps Executed
1. Install necessary Python libraries.
2. Check if Helm is installed and install it if not present.
3. Add and update the MinIO Helm repository.
4. Deploy MinIO using Helm if it is not already running.
5. Retrieve the MinIO credentials (root user and password).
6. Retrieve the MinIO UI console external IP and API internal IP.
7. Display login credentials and connection details.
### Running the Playbook
```sh
ansible-playbook -i inventory deploy_minio.yml --extra-vars "user_prompt=your-user host_name_prompt=your-host"
```
## Setting up MinIO access
/!\ This part can be automated with this **[ansible playbook](https://github.com/pi-B/ansible-oc/blob/main/setup_minio_admiralty.yml)**, which is designed to create resources in an Argo Workflows/Admiralty combo.
/!\ If you still want to set up the host manually **and** aim to use Admiralty, give the resources a **unique name** and be sure to make this uniqueness accessible (in an environment variable, in a conf file...).
- With the output of the last tasks, create a secret in the argo namespace to give access to the MinIO API. We need to use the `create` verb because `apply` creates a non-functioning secret.
```bash
kubectl create secret -n <name of your argo namespace> generic argo-artifact-secret \
--from-literal=access-key=<your access key> \
--from-literal=secret-key=<your secret key>
```
- Create a ConfigMap, which will be used by Argo to create the S3 artifact; the content must match the previously created secret.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# If you want to use this config map by default, name it "artifact-repositories".
name: artifact-repositories
# annotations:
# # v3.0 and after - if you want to use a specific key, put that key into this annotation.
# workflows.argoproj.io/default-artifact-repository: oc-s3-artifact-repository
data:
oc-s3-artifact-repository: |
s3:
bucket: oc-bucket
endpoint: [ cluster IP retrieved with kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}" ]:9000
insecure: true
accessKeySecret:
name: argo-artifact-secret
key: access-key
secretKeySecret:
name: argo-artifact-secret
key: secret-key
```
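A workflow can then opt into this repository through `artifactRepositoryRef`. A minimal sketch (the field names follow the Argo Workflows API; the workflow itself is hypothetical):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-test-
spec:
  entrypoint: main
  artifactRepositoryRef:
    configMap: artifact-repositories  # optional when using the default name
    key: oc-s3-artifact-repository
  templates:
    - name: main
      container:
        image: busybox
        command: [sh, -c]
        args: ["echo hello > /tmp/hello.txt"]
      outputs:
        artifacts:
          - name: hello
            path: /tmp/hello.txt
```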
## Ansible Playbook setup MinIO
### Purpose
This playbook sets up MinIO to work with Argo Workflows, including creating the required buckets and secrets.
### Variables
| Variable | Description |
|----------|-------------|
| `user_prompt` | SSH user to execute commands |
| `uuid_prompt` | Unique identifier for the Argo secret |
| `argo_namespace` | Kubernetes namespace for Argo (`argo` by default) |
### Steps Executed
1. Install necessary dependencies.
2. Download and configure MinIO Client (`mc`).
3. Retrieve MinIO credentials (root user and password).
4. Configure `mc` to connect to MinIO.
5. Create a new S3 bucket (`oc-bucket`).
6. Generate a new access key and secret key for MinIO.
7. Retrieve the MinIO API cluster IP.
8. Create a Kubernetes Secret to store MinIO credentials.
9. Create a Kubernetes ConfigMap for MinIO artifact repository configuration.
### Running the Playbook
```sh
ansible-playbook -i inventory setup_minio_resources.yml --extra-vars "user_prompt=your-user uuid_prompt=unique-id"
```
---
## Expected Output
Upon successful execution, you should see:
- MinIO deployed and accessible.
- MinIO UI console credentials displayed.
- MinIO bucket (`oc-bucket`) created.
- Secrets and ConfigMaps properly configured in Kubernetes.
For any issues, check Ansible logs and validate configurations manually using:
```sh
kubectl get pods -n default
kubectl get secrets -n argo
kubectl get configmaps -n argo
```


@@ -0,0 +1,134 @@
- name: Deploy MinIO
hosts: all:!localhost
user: "{{ user_prompt }}"
vars:
host_name: "{{ host_name_prompt }}"
memory_req: "2Gi"
storage_req: "20Gi"
environment:
KUBECONFIG: /home/{{ user_prompt }}/.kube/config
tasks:
- name: Install yaml library for python
become: true
ansible.builtin.package:
name: ansible
state: present
- name: Check if Helm is already installed
ansible.builtin.command:
cmd: which helm
register: result_which
failed_when: result_which.rc not in [ 0, 1 ]
- name: Install helm
when: result_which.rc == 1
block:
- name: Download helm from source
ansible.builtin.get_url:
url: https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
dest: ./get_helm.sh
mode: 0700
- name: Launch helm install script
become: true
ansible.builtin.shell:
cmd: |
./get_helm.sh
- name: Test if MinIO is already installed
ansible.builtin.shell:
cmd : helm repo list | grep 'https://charts.min.io/'
register: minio_charts
failed_when: minio_charts.rc not in [0,1]
- name: Add helm repo MinIO
kubernetes.core.helm_repository:
repo_url: https://charts.min.io/
repo_state: present
repo_name: minio
when: minio_charts.rc == 1
- name: Update helm repo
ansible.builtin.command:
cmd : |
helm repo update
when: minio_charts.rc == 1
- name: Test if argo-artifacts is already running
ansible.builtin.shell:
helm list | grep -w "argo-artifacts" | wc -l
register: argo_artifact_deployed
failed_when: argo_artifact_deployed.rc not in [ 0, 1 ]
- name: Initialize MinIO
when: argo_artifact_deployed.stdout == "0"
kubernetes.core.helm:
name: argo-artifacts
chart_ref: minio/minio
release_namespace: default
values:
service:
type: LoadBalancer
fullnameOverride: argo-artifacts
resources:
requests:
memory: "{{ memory_req }}"
replicas: 2
volumeClaimTemplates:
spec:
resources:
requests: "{{ storage_req }}"
consoleService:
type: LoadBalancer
# port: 9001
state: present
- name: Retrieve root user
ansible.builtin.shell:
cmd: |
kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootUser}"
register : user_encoded
- name: Decode root user
ansible.builtin.shell:
cmd: |
echo {{ user_encoded.stdout }} | base64 -d
register: user
- name: Retrieve root password
ansible.builtin.shell:
cmd: |
kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootPassword}"
register : password_encoded
- name: Decode root password
ansible.builtin.shell:
cmd: |
echo {{ password_encoded.stdout }} | base64 -d
register: password
- name: Retrieve console ip
ansible.builtin.shell:
cmd: |
kubectl get service argo-artifacts-console -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
register : ip_console
- name: Retrieve API internal ip
ansible.builtin.shell:
cmd: |
kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}"
register : ip_api
- name: Display info
debug:
msg :
"
MinIO UI console info
external IP GUI : {{ ip_console.stdout }}
user : {{ user.stdout }}
password : {{ password.stdout }}
IP API : {{ ip_api.stdout }}
"


@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: cnes-secrets
type: Opaque
stringData:
weather-api: 1d2b4ad68a4375388e64f5353d33186c
era-5: 3e8457b6-f5eb-4405-a09c-78403a14c4d1


@@ -0,0 +1,142 @@
- name: Set up MinIO resources for Argo Workflows/Admiralty
hosts: all:!localhost
user: "{{ user_prompt }}"
gather_facts: true
become_method: sudo
vars:
- argo_namespace: argo
- MC_PATH: $HOME/minio-binaries
- MINIO_NAME: my-minio
- UUID: "{{ uuid_prompt }}"
environment:
- KUBECONFIG: /home/{{ user_prompt }}/.kube/config
tasks:
- name: Install necessary packages
become: true
package:
name:
- python3-kubernetes
- python3-jmespath
state: present
- name: Create destination directory
file:
path: $HOME/minio-binaries
state: directory
mode: '0755'
- name: Install mc
ansible.builtin.get_url:
url: "https://dl.min.io/client/mc/release/linux-amd64/mc"
dest: $HOME/minio-binaries/mc
mode: +x
headers:
Content-Type: "application/json"
- name: Add mc to path
ansible.builtin.lineinfile:
path: $HOME/.bashrc
line: export PATH=$PATH:$HOME/minio-binaries
- name: Is mc already set up for the local minio
ansible.builtin.shell:
cmd: |
"{{ MC_PATH }}"/mc admin info {{ MINIO_NAME }}
register: minio_info
failed_when: minio_info.rc not in [0,1]
- name: Retrieve root user
ansible.builtin.shell:
cmd: |
kubectl get secrets argo-artifacts -o jsonpath="{.data.rootUser}" | base64 -d -
register: user
when: minio_info.rc == 1
- name: Retrieve root password
ansible.builtin.shell:
cmd: |
kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootPassword}" | base64 -d -
register : password
when: minio_info.rc == 1
- name: Set up MinIO host in mc
ansible.builtin.shell:
cmd: |
"{{ MC_PATH }}"/mc alias set {{ MINIO_NAME }} http://127.0.0.1:9000 '{{ user.stdout }}' '{{ password.stdout }}'
failed_when: user.stdout == "" or password.stdout == ""
when: minio_info.rc == 1
- name: Does oc-bucket already exist
ansible.builtin.shell:
cmd: |
"{{ MC_PATH }}"/mc ls my-minio | grep -q oc-bucket
register: bucket_exists
failed_when: bucket_exists.rc not in [0,1]
- name: Create oc-bucket
ansible.builtin.shell:
cmd: |
"{{ MC_PATH }}"/mc mb {{ MINIO_NAME }}/oc-bucket
when: bucket_exists.rc == 1
- name: Run mc admin accesskey create command
ansible.builtin.shell:
cmd: |
{{ MC_PATH }}/mc admin accesskey create --json {{ MINIO_NAME }}
register: minio_output
changed_when: false # Avoid marking the task as changed every time
- name: Parse JSON output
set_fact:
access_key: "{{ minio_output.stdout | from_json | json_query('accessKey') }}"
secret_key: "{{ minio_output.stdout | from_json | json_query('secretKey') }}"
- name: Retrieve cluster IP for minio API
ansible.builtin.shell:
cmd: |
kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}"
register: minio_cluster_ip
- name: Create the minio secret in argo namespace
kubernetes.core.k8s:
state: present
namespace: '{{ argo_namespace }}'
name: "{{ UUID }}-argo-artifact-secret"
definition:
apiVersion: v1
kind: Secret
type: Opaque
stringData:
access-key: '{{ access_key }}'
secret-key: '{{ secret_key }}'
- name: Create the artifact-repositories ConfigMap in argo namespace
kubernetes.core.k8s:
state: present
namespace: '{{ argo_namespace }}'
definition:
apiVersion: v1
kind: ConfigMap
metadata:
name: artifact-repositories
data:
oc-s3-artifact-repository: |
s3:
bucket: oc-bucket
endpoint: {{ minio_cluster_ip.stdout }}:9000
insecure: true
accessKeySecret:
name: "{{ UUID }}-argo-artifact-secret"
key: access-key
secretKeySecret:
name: "{{ UUID }}-argo-artifact-secret"
key: secret-key
# ansible.builtin.shell:
# cmd: |
# kubectl create secret -n '{{ argo_namespace }}' generic argo-artifact-secret \
# --from-literal=access-key='{{ access_key }}' \
# --from-literal=secret-key='{{ secret_key }}'

86
ansible/README.md Normal file
View File

@@ -0,0 +1,86 @@
Login : admrescue/admrescue
# Requirement
**Ansible** (+ pip):
If you don't have `pip` yet:
```
curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
python3 /tmp/get-pip.py --user
```
```
python3 -m pip install --user ansible
pip install -r requirement.txt
```
**Ansible collections**:
```
ansible-galaxy collection install kubernetes.core
```
# Mosquitto
`sudo apt update && apt install -y mosquitto mosquitto-clients`
You need to add a conf file at `/etc/mosquitto/conf.d/mosquitto.conf` containing:
```
allow_anonymous true
listener 1883 0.0.0.0
```
`sudo systemctl restart mosquitto`
Launch the mosquitto client to receive messages on the machine that hosts the mosquitto server: `sudo mosquitto_sub -h 127.0.0.1 -t argo/alpr`
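To verify the broker end to end, you can publish a test message from any machine that can reach it (the broker IP below is a placeholder): `mosquitto_pub -h <BROKER_IP> -t argo/alpr -m "test"`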
# Argo
## Execute/submit a workflow
```
argo submit PATH_TO_YAML --watch --serviceaccount=argo -n argo
```
# Troubleshoot
## k3s bind to local port
On certain distros you might already have another minimal Kubernetes distribution installed. A sign of this is k3s installing and starting, but never stabilizing and restarting non-stop.
You should check whether the ports used by k3s are already bound:
> sudo netstat -tuln | grep -E '6443|10250'
If those ports are already in use, identify which services run behind them, stop them, and preferably uninstall them.
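If you also want to see which process owns each port, `ss` can print it directly:
> sudo ss -tlnp | grep -E '6443|10250'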
We have already encountered an instance of `Ubuntu Server` with microk8s already installed.
### Remove microk8s
```bash
sudo systemctl stop snap.microk8s.daemon-kubelite
sudo systemctl disable snap.microk8s.daemon-kubelite
sudo systemctl restart k3s
```
## Use local container images
We have encountered difficulties declaring container images that correspond to local images (stored in `docker.io/library/`).
We used a Docker Hub repository to pull our customized image. For this we need to create a secret holding the login information for a Docker account that has access to this repository, which we then link to the serviceAccount running the workflow:
Create the secret in the argo namespace
```
kubectl create secret docker-registry regcred --docker-username=[DOCKER HUB USERNAME] --docker-password=[DOCKER HUB PASSWORD] -n argo
```
Patch the `argo` serviceAccount to use the secret when pulling image
```
kubectl patch serviceaccount argo -n argo -p '{"imagePullSecrets": [{"name": "regcred"}]}'
```
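A workflow template can then reference the Docker Hub image directly, since the patched serviceAccount provides the pull secret. A minimal sketch, with a placeholder repository and tag:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: custom-image-
  namespace: argo
spec:
  serviceAccountName: argo # carries the regcred imagePullSecret
  entrypoint: main
  templates:
    - name: main
      container:
        image: <DOCKERHUB_USERNAME>/<IMAGE>:latest # placeholder image reference
        command: ["echo", "hello"]
```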

3
ansible/ansible.cfg Normal file
View File

@@ -0,0 +1,3 @@
[defaults]
stdout_callback = yaml
stderr_callback = yaml

154
ansible/notes.md Normal file
View File

@@ -0,0 +1,154 @@
Login : admrescue/admrescue
# Deploy VM with ansible
TODO : check with yves or benjamin how to create a qcow2 image with azerty layout and ssh ready
# Deploy k3s
Two passwords are asked for via the prompt:
- First, the password of the user you are connecting to on the host via SSH
- Second, the root password
`ansible-playbook -i my_hosts.yaml deploy_k3s.yml --extra-vars " user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass`
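The inventory passed with `-i` is not shown here; a minimal `my_hosts.yaml` sketch, with placeholder names and addresses, could look like:
```
all:
  hosts:
    control-node:
      ansible_host: 192.168.1.10
    worker-node:
      ansible_host: 192.168.1.11
```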
# Deploy Argo
The password to provide is that of the user you are connecting to on the host via SSH.
`ansible-playbook -i my_hosts.yaml deploy_argo.yml --extra-vars " user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass`
# Deploy Admiralty
Install the kubernetes.core collection (`ansible-galaxy collection install kubernetes.core`) so that Ansible can use the kubectl tooling.
## Install and prepare Admiralty
This play prepares your machine to use Admiralty in Kubernetes. It installs helm, cert-manager and Admiralty, then configures your clusters to be an Admiralty source or target.
/!\ TODO : declare the list of targets and sources in a play's vars
`ansible-playbook -i my_hosts.yaml deploy_admiralty.yml --extra-vars "host_prompt=HOSTNAME user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass`
## Share kubeconfig for the control cluster
`ansible-playbook -i ../my_hosts.yaml create_secrets.yml --extra-vars "host_prompt=WORKLOAD_HOST user_prompt=<YOUR_USER> control_host=CONTROL_HOST" --ask-pass --ask-become-pass`
# MinIO
- Limit the Memory
- Limit the replica
- Limit volumeClaimTemplates.spec.resources.requests
- Add LoadBalancer for WebUI
- Corrected commands:
> kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootUser}" | base64 --decode
> kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootPassword}" | base64 --decode
- With the output of the last tasks, create a secret in the argo namespace to give access to the MinIO API:
```
apiVersion: v1
kind: Secret
metadata:
name: argo-minio-secret
type: Opaque
data:
accessKeySecret: [base64 ENCODED VALUE]
secretKeySecret: [base64 ENCODED VALUE]
```
- Create a ConfigMap, which will be used by Argo to create the S3 artifact repository; its content can reference the previously created secret:
```
apiVersion: v1
kind: ConfigMap
metadata:
# If you want to use this config map by default, name it "artifact-repositories". Otherwise, you can provide a reference to a
# different config map in `artifactRepositoryRef.configMap`.
name: artifact-repositories
# annotations:
# # v3.0 and after - if you want to use a specific key, put that key into this annotation.
# workflows.argoproj.io/default-artifact-repository: oc-s3-artifact-repository
data:
oc-s3-artifact-repository: |
s3:
bucket: oc-bucket
endpoint: [ retrieve cluster with kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}" ]:9000
insecure: true
accessKeySecret:
name: argo-minio-secret
key: accessKeySecret
secretKeySecret:
name: argo-minio-secret
key: secretKeySecret
```
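A workflow can then select this repository explicitly instead of relying on the default annotation; a minimal sketch of the relevant spec fields, assuming the ConfigMap above:
```
spec:
  artifactRepositoryRef:
    configMap: artifact-repositories
    key: oc-s3-artifact-repository
```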
# Use custom container image : local registry
# Mosquitto
`sudo apt update && apt install -y mosquitto mosquitto-clients`
You need to add a conf file at `/etc/mosquitto/conf.d/mosquitto.conf` containing:
```
allow_anonymous true
listener 1883 0.0.0.0
```
`sudo systemctl restart mosquitto`
Launch the mosquitto client to receive messages on the machine that hosts the mosquitto server: `sudo mosquitto_sub -h 127.0.0.1 -t argo/alpr`
# Argo
## Execute/submit a workflow
```
argo submit PATH_TO_YAML --watch --serviceaccount=argo -n argo
```
# Troubleshoot
## k3s bind to local port
On certain distros you might already have another minimal Kubernetes distribution installed. A sign of this is k3s installing and starting, but never stabilizing and restarting non-stop.
You should check whether the ports used by k3s are already bound:
> sudo netstat -tuln | grep -E '6443|10250'
If those ports are already in use, identify which services run behind them, stop them, and preferably uninstall them.
We have already encountered an instance of `Ubuntu Server` with microk8s already installed.
### Remove microk8s
```bash
sudo systemctl stop snap.microk8s.daemon-kubelite
sudo systemctl disable snap.microk8s.daemon-kubelite
sudo systemctl restart k3s
```
## Use local container images
We have encountered difficulties declaring container images that correspond to local images (stored in `docker.io/library/`).
We used a Docker Hub repository to pull our customized image. For this we need to create a secret holding the login information for a Docker account that has access to this repository, which we then link to the serviceAccount running the workflow:
Create the secret in the argo namespace
```
kubectl create secret docker-registry regcred --docker-username=[DOCKER HUB USERNAME] --docker-password=[DOCKER HUB PASSWORD] -n argo
```
Patch the `argo` serviceAccount to use the secret when pulling image
```
kubectl patch serviceaccount argo -n argo -p '{"imagePullSecrets": [{"name": "regcred"}]}'
```

36
ansible/requirements.txt Normal file
View File

@@ -0,0 +1,36 @@
ansible-compat==24.6.0
ansible-core==2.17.0
ansible-creator==24.5.0
ansible-lint==24.5.0
attrs==23.2.0
black==24.4.2
bracex==2.4
cffi==1.16.0
click==8.1.7
cryptography==42.0.7
filelock==3.14.0
importlib_metadata==7.1.0
Jinja2==3.1.4
jmespath==1.0.1
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
mypy-extensions==1.0.0
packaging==24.0
pathspec==0.12.1
platformdirs==4.2.2
pycparser==2.22
Pygments==2.18.0
PyYAML==6.0.1
referencing==0.35.1
resolvelib==1.0.1
rich==13.7.1
rpds-py==0.18.1
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
subprocess-tee==0.4.1
wcmatch==8.5.2
yamllint==1.35.1
zipp==3.19.0

View File

@@ -0,0 +1,48 @@
#!/bin/bash
REPOS=(
"oc-auth"
"oc-catalog"
"oc-datacenter"
"oc-front"
"oc-monitord"
"oc-peer"
"oc-shared"
"oc-scheduler"
"oc-schedulerd"
"oc-workflow"
"oc-workspace"
)
# Function to clone repositories
clone_repo() {
local repo_url="https://cloud.o-forge.io/core/$1.git"
local repo_name=$(basename "$repo_url" .git)
local branche=$2
echo "Processing repository: $repo_name"
if [ ! -d "$repo_name" ]; then
echo "Cloning repository: $repo_name"
git clone "$repo_url"
if [ $? -ne 0 ]; then
echo "Error cloning $repo_url"
exit 1
fi
fi
echo "Check in $branche & pull"
ls
echo "Repository '$repo_name' already exists. Pulling latest changes..."
cd "$repo_name" && git checkout $branche && git pull
cd ..
}
cd ..
# Iterate through each repository in the list
for repo in "${REPOS[@]}"; do
clone_repo $repo ${1:-main}
done
echo "All repositories processed successfully."

9
docker/db/add.sh Executable file
View File

@@ -0,0 +1,9 @@
#!/bin/bash
docker cp ./datas mongo:.
for i in $(ls ./datas); do
echo "ADD file $i in collection ${i/.json/}"
docker exec -it mongo sh -c "mongoimport --jsonArray --db DC_myDC --collection ${i/.json/} --file ./datas/$i"
done

View File

@@ -0,0 +1 @@
[{"_id":"0b6a375f-be3e-49a9-9827-3c2d5eddb057","abstractobject":{"id":"0b6a375f-be3e-49a9-9827-3c2d5eddb057","name":"test","is_draft":false,"creator_id":"c0cece97-7730-4c2a-8c20-a30944564106","creation_date":{"$date":"2025-01-27T10:41:47.741Z"},"update_date":{"$date":"2025-01-27T10:41:47.741Z"},"updater_id":"c0cece97-7730-4c2a-8c20-a30944564106","access_mode":0},"description":"Proto Collaborative area example","collaborative_area":{},"workflows":["58314c99-c595-4ca2-8b5e-822a6774efed"],"allowed_peers_group":{"c0cece97-7730-4c2a-8c20-a30944564106":["*"]},"workspaces":[]}]

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1 @@
[{"_id":"c0cece97-7730-4c2a-8c20-a30944564106","failed_execution":null,"abstractobject":{"update_date":{"$date":"2025-03-27T09:13:13.230Z"},"access_mode":0,"id":"c0cece97-7730-4c2a-8c20-a30944564106","name":"local","is_draft":false,"creation_date":{"$date":"2025-03-27T09:13:13.230Z"}},"url":"http://localhost:8000","wallet_address":"my-wallet","public_key":"-----BEGIN RSA PUBLIC KEY-----\nMIICCgKCAgEAw2pdG6wMtuLcP0+k1LFvIb0DQo/oHW2uNJaEJK74plXqp4ztz2dR\nb+RQHFLeLuqk4i/zc3b4K3fKPXSlwnVPJCwzPrnyT8jYGOZVlWlETiV9xeJhu6s/\nBh6g1PWz75XjjwV50iv/CEiLNBT23f/3J44wrQzygqNQCiQSALdxWLAEl4l5kHSa\n9oMyV70/Uql94/ayMARZsHgp9ZvqQKbkZPw6yzVMfCBxQozlNlo315OHevudhnhp\nDRjN5I7zWmqYt6rbXJJC7Y3Izdvzn7QI88RqjSRST5I/7Kz3ndCqrOnI+OQUE5NT\nREyQebphvQfTDTKlRPXkdyktdK2DH28Zj6ZF3yjQvN35Q4zhOzlq77dO5IhhopI7\nct8dZH1T1nYkvdyCA/EVMtQsASmBOitH0Y0ACoXQK5Kb6nm/TcM/9ZSJUNiEMuy5\ngBZ3YKE9oa4cpTpPXwcA+S/cU7HPNnQAsvD3iJi8GTW9uJs84pn4/WhpQqmXd4rv\nhKWECCN3fHy01fUs/U0PaSj2jDY/kQVeXoikNMzPUjdZd9m816TIBh3v3aVXCH/0\niTHHAxctvDgMRb2fpvRJ/wwnYjFG9RpamVFDMvC9NffuYzWAA9IRIY4cqgerfHrV\nZ2HHiPTDDvDAIsvImXZc/h7mXN6m3RCQ4Qywy993wd9gUdgg/qnynHcCAwEAAQ==\n-----END RSA PUBLIC KEY-----\n","state":1}]

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1 @@
[{"_id":"04bc70b5-8d7b-44e6-9015-fadfa0fb102d","abstractinstanciatedresource":{"abstractresource":{"type":"storage","abstractobject":{"id":"04bc70b5-8d7b-44e6-9015-fadfa0fb102d","name":"IRT risk database","is_draft":false,"creator_id":"c0cece97-7730-4c2a-8c20-a30944564106","creation_date":"2021-09-30T14:00:00.000Z","update_date":"2021-09-30T14:00:00.000Z","updater_id":"c0cece97-7730-4c2a-8c20-a30944564106","access_mode":1},"logo":"https://cloud.o-forge.io/core/deperecated-oc-catalog/raw/branch/main/scripts/local_imgs/IRT risk database.png","description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.","short_description":"S3 compliant IRT file storage","owners":[{"name":"IRT"}]},"instances":[{"env":[{"attr":"source","readonly":true}],"resourceinstance":{"abstractobject":{"id":"7fdccb9c-7090-40a5-bacd-7435bc56c90d","name":"IRT local file storage Marseille"},"location":{"latitude":50.62925,"longitude":3.057256},"country":250,"partnerships":[{"resourcepartnership":{"namespace":"default","peer_groups":{"c0cece97-7730-4c2a-8c20-a30944564106":["*"]},"pricing_profiles":[{"pricing":{"price":50,"currency":"EUR","buying_strategy":0,"time_pricing_strategy":0}}]}}]},"source":"/mnt/vol","local":false,"security_level":"public","size":50,"size_type":3,"redundancy":"RAID5","throughput":"r:200,w:150"}]},"storage_type":5,"acronym":"DC_myDC"},{"_id":"e726020a-b68e-4abc-ab36-c3640ea3f557","abstractinstanciatedresource":{"abstractresource":{"type":"storage","abstractobject":{"id":"e726020a-b68e-4abc-ab36-c3640ea3f557","name":"IRT local file storage","is_draft":false,"creator_id":"c0cece97-7730-4c2a-8c20-a30944564106","creation_date":"2021-09-30T14:00:00.000Z","update_date":"2021-09-30T14:00:00.000Z","updater_id":"c0cece97-7730-4c2a-8c20-a30944564106","access_mode":1},"logo":"https://cloud.o-forge.io/core/deperecated-oc-catalog/raw/branch/main/scripts/local_imgs/IRT local file storage.png","description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.","short_description":"S3 compliant IRT file storage","owners":[{"name":"IRT"}]},"instances":[{"resourceinstance":{"env":[{"attr":"source","readonly":true}],"abstractobject":{"id":"7fdccb9c-7090-40a5-bacd-7435bc56c90d","name":"IRT local file storage Marseille"},"location":{"latitude":50.62925,"longitude":3.057256},"country":250,"partnerships":[{"resourcepartnership":{"namespace":"default","peer_groups":{"c0cece97-7730-4c2a-8c20-a30944564106":["*"]},"pricing_profiles":[{"pricing":{"price":50,"currency":"EUR","buying_strategy":0,"time_pricing_strategy":0}}]}}]},"source":"/mnt/vol","local":true,"security_level":"public","size":500,"size_type":0,"encryption":true,"redundancy":"RAID5S","throughput":"r:300,w:350"}]},"storage_type":5,"acronym":"DC_myDC"}]

File diff suppressed because one or more lines are too long

4
docker/kube.exemple.env Normal file
View File

@@ -0,0 +1,4 @@
KUBERNETES_SERVICE_HOST=192.168.47.20
KUBE_CA="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTVlk3ZHZhNEdYTVdkMy9jMlhLN3JLYjlnWXgyNSthaEE0NmkyNVBkSFAKRktQL2UxSVMyWVF0dzNYZW1TTUQxaStZdzJSaVppNUQrSVZUamNtNHdhcnFvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWtlUVJpNFJiODduME5yRnZaWjZHClc2SU55NnN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnRXA5ck04WmdNclRZSHYxZjNzOW5DZXZZeWVVa3lZUk4KWjUzazdoaytJS1FDSVFDbk05TnVGKzlTakIzNDFacGZ5ays2NEpWdkpSM3BhcmVaejdMd2lhNm9kdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
KUBE_CERT="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJWUxWNkFPQkdrU1F3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOekl6TVRFeU1ETTJNQjRYRFRJME1EZ3dPREV3TVRNMU5sb1hEVEkxTURndwpPREV3TVRNMU5sb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJGQ2Q1MFdPeWdlQ2syQzcKV2FrOWY4MVAvSkJieVRIajRWOXBsTEo0ck5HeHFtSjJOb2xROFYxdUx5RjBtOTQ2Nkc0RmRDQ2dqaXFVSk92Swp3NVRPNnd5alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVFJkOFI5cXVWK2pjeUVmL0ovT1hQSzMyS09XekFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQTArbThqTDBJVldvUTZ0dnB4cFo4NVlMalF1SmpwdXM0aDdnSXRxS3NmUVVDSUI2M2ZNdzFBMm5OVWU1TgpIUGZOcEQwSEtwcVN0Wnk4djIyVzliYlJUNklZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRc3hXWk9pbnIrcVp4TmFEQjVGMGsvTDF5cE01VHAxOFRaeU92ektJazQKRTFsZWVqUm9STW0zNmhPeVljbnN3d3JoNnhSUnBpMW5RdGhyMzg0S0Z6MlBvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTBYZkVmYXJsZm8zTWhIL3lmemx6Cnl0OWlqbHN3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUxJL2dNYnNMT3MvUUpJa3U2WHVpRVMwTEE2cEJHMXgKcnBlTnpGdlZOekZsQWlFQW1wdjBubjZqN3M0MVI0QzFNMEpSL0djNE53MHdldlFmZWdEVGF1R2p3cFk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
KUBE_DATA="LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU5ZS1BFb1dhd1NKUzJlRW5oWmlYMk5VZlY1ZlhKV2krSVNnV09TNFE5VTlvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFVUozblJZN0tCNEtUWUx0WnFUMS96VS84a0Z2Sk1lUGhYMm1Vc25pczBiR3FZblkyaVZEeApYVzR2SVhTYjNqcm9iZ1YwSUtDT0twUWs2OHJEbE03ckRBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo="

36
docker/start-demo.sh Executable file
View File

@@ -0,0 +1,36 @@
#!/bin/bash
KUBERNETES_ENV_FILE=$(realpath ${1:-"./kube.exemple.env"})
HOST=${2:-"http://localhost:8000"}
docker network create oc || true
docker compose down
cd ./tools && docker compose -f ./docker-compose.dev.yml up --force-recreate -d
docker compose -f ./docker-compose.traefik.yml up --force-recreate -d && cd ..
cd ./db && ./add.sh && cd ..
cd ../..
REPOS=(
"oc-auth"
"oc-catalog"
"oc-datacenter"
"oc-monitord"
"oc-peer"
"oc-shared"
"oc-scheduler"
"oc-schedulerd"
"oc-workflow"
"oc-workspace"
"oc-front"
)
for i in "${REPOS[@]}"
do
echo "Building $i"
docker kill $i || true
docker rm $i || true
cd ./$i
cp $KUBERNETES_ENV_FILE ./env.env
docker build . -t $i --build-arg=HOST=$HOST && docker compose up -d
cd ..
done

34
docker/start.sh Executable file
View File

@@ -0,0 +1,34 @@
#!/bin/bash
export KUBERNETES_ENV_FILE=$(realpath ${KUBERNETES_ENV_FILE:-"./kube.exemple.env"})
export HOST=${HOST:-"http://localhost:8000"}
docker network create oc || true
docker compose down
cd ./tools && docker compose -f ./docker-compose.dev.yml up --force-recreate -d
docker compose -f ./docker-compose.traefik.yml up --force-recreate -d && cd ..
cd ../..
REPOS=(
"oc-auth"
"oc-catalog"
"oc-datacenter"
"oc-monitord"
"oc-peer"
"oc-shared"
"oc-scheduler"
"oc-schedulerd"
"oc-workflow"
"oc-workspace"
"oc-front"
)
for i in "${REPOS[@]}"
do
echo "Building $i"
docker kill $i || true
docker rm $i || true
cd ./$i
cp $KUBERNETES_ENV_FILE ./env.env
make run-docker
cd ..
done

50
docker/stop.sh Executable file
View File

@@ -0,0 +1,50 @@
#!/bin/bash
docker network rm oc || true
docker compose -f ./tools/docker-compose.traefik.yml down
TOOLS=(
"mongo"
"mongo-express"
"nats"
"loki"
"grafana"
"hydra-client"
"hydra"
"keto"
"ldap"
)
for i in "${TOOLS[@]}"
do
echo "kill $i"
docker kill $i || true
docker rm $i || true
done
docker volume rm tools_oc-data
cd ../..
REPOS=(
"oc-auth"
"oc-catalog"
"oc-datacenter"
"oc-monitord"
"oc-peer"
"oc-shared"
"oc-scheduler"
"oc-schedulerd"
"oc-workflow"
"oc-workspace"
"oc-front"
)
for i in "${REPOS[@]}"
do
echo "Kill $i"
cd ./$i
docker kill $i || true
docker rm $i || true
make purge || true
cd ..
done

View File

@@ -0,0 +1,8 @@
datasources:
- name: Loki
type: loki
access: proxy
url: http://loki:3100
isDefault: true
jsonData:
httpMethod: POST

View File

@@ -0,0 +1,162 @@
version: '3.4'
services:
mongo:
image: 'mongo:latest'
networks:
- oc
ports:
- 27017:27017
container_name: mongo
volumes:
- oc-data:/data/db
- oc-data:/data/configdb
mongo-express:
image: "mongo-express:latest"
restart: always
depends_on:
- mongo
networks:
- oc
ports:
- 8081:8081
environment:
- ME_CONFIG_BASICAUTH_USERNAME=test
- ME_CONFIG_BASICAUTH_PASSWORD=test
nats:
image: 'nats:latest'
container_name: nats
ports:
- 4222:4222
command:
- "--debug"
networks:
- oc
loki:
image: 'grafana/loki'
container_name: loki
labels:
- "traefik.enable=true"
- "traefik.http.routers.loki.entrypoints=web"
- "traefik.http.routers.loki.rule=PathPrefix(`/tools/loki`)"
- "traefik.http.services.loki.loadbalancer.server.port=3100"
- "traefik.http.middlewares.loki-stripprefix.stripprefix.prefixes=/tools/loki"
- "traefik.http.routers.loki.middlewares=loki-stripprefix"
- "traefik.http.middlewares.loki.forwardauth.address=http://oc-auth:8080/oc/forward"
ports:
- "3100:3100"
networks:
- oc
grafana:
image: 'grafana/grafana'
container_name: grafana
ports:
- '3000:3000'
labels:
- "traefik.enable=true"
- "traefik.http.routers.grafana.entrypoints=web"
- "traefik.http.routers.grafana.rule=PathPrefix(`/tools/grafana`)"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"
- "traefik.http.middlewares.grafana-stripprefix.stripprefix.prefixes=/tools/grafana"
- "traefik.http.routers.grafana.middlewares=grafana-stripprefix"
- "traefik.http.middlewares.grafana.forwardauth.address=http://oc-auth:8080/oc/forward"
networks:
- oc
volumes:
- ./conf/grafana_data_source.yml:/etc/grafana/provisioning/datasources/datasource.yml
environment:
- GF_SECURITY_ADMIN_PASSWORD=pfnirt # Change this to anything but admin to not have a password change page at startup
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_DISABLE_INITIAL_ADMIN_PASSWORD_CHANGE=true
hydra:
container_name: hydra
image: oryd/hydra:v2.2.0
environment:
SECRETS_SYSTEM: oc-auth-got-secret
LOG_LEAK_SENSITIVE_VALUES: true
# OAUTH2_TOKEN_HOOK_URL: http://oc-auth:8080/oc/claims
URLS_SELF_ISSUER: http://hydra:4444
URLS_SELF_PUBLIC: http://hydra:4444
WEBFINGER_OIDC_DISCOVERY_SUPPORTED_SCOPES: profile,email,phone,roles
WEBFINGER_OIDC_DISCOVERY_SUPPORTED_CLAIMS: name,family_name,given_name,nickname,email,phone_number
DSN: memory
command: serve all --dev
networks:
- oc
ports:
- "4444:4444"
- "4445:4445"
deploy:
restart_policy:
condition: on-failure
hydra-client:
image: oryd/hydra:v2.2.0
container_name: hydra-client
environment:
HYDRA_ADMIN_URL: http://hydra:4445
ORY_SDK_URL: http://hydra:4445
command:
- create
- oauth2-client
- --skip-tls-verify
- --name
- test-client
- --secret
- oc-auth-got-secret
- --response-type
- id_token,token,code
- --grant-type
- implicit,refresh_token,authorization_code,client_credentials
- --scope
- openid,profile,email,roles
- --token-endpoint-auth-method
- client_secret_post
- --redirect-uri
- http://localhost:3000
networks:
- oc
deploy:
restart_policy:
condition: none
depends_on:
- hydra
healthcheck:
test: ["CMD", "curl", "-f", "http://hydra:4445"]
interval: 10s
timeout: 10s
retries: 10
ldap:
image: pgarrett/ldap-alpine
container_name: ldap
volumes:
- "./ldap.ldif:/ldif/ldap.ldif"
networks:
- oc
ports:
- "390:389"
deploy:
restart_policy:
condition: on-failure
keto:
image: oryd/keto:v0.7.0-alpha.1-sqlite
ports:
- "4466:4466"
- "4467:4467"
command: serve -c /home/ory/keto.yml
restart: on-failure
volumes:
- type: bind
source: .
target: /home/ory
container_name: keto
networks:
- oc
volumes:
oc-data:
networks:
oc:
external: true

View File

@@ -0,0 +1,24 @@
version: '3.4'
services:
traefik:
image: traefik:v2.10.4
container_name: traefik
restart: unless-stopped
networks:
- oc
command:
- "--api.insecure=true"
- "--providers.docker=true"
- "--entrypoints.web.address=:8000"
ports:
- "8000:8000" # Expose Traefik on port 8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
oc-data:
networks:
oc:
external: true

18
docker/tools/keto.yml Normal file
View File

@@ -0,0 +1,18 @@
version: v0.6.0-alpha.1
log:
level: debug
namespaces:
- id: 0
name: open-cloud
dsn: memory
serve:
read:
host: 0.0.0.0
port: 4466
write:
host: 0.0.0.0
port: 4467

24
docker/tools/ldap.ldif Normal file
View File

@@ -0,0 +1,24 @@
dn: uid=admin,ou=Users,dc=example,dc=com
objectClass: inetOrgPerson
cn: Admin
sn: Istrator
uid: admin
userPassword: admin
mail: admin@example.com
ou: Users

dn: ou=AppRoles,dc=example,dc=com
objectClass: organizationalunit
ou: AppRoles
description: AppRoles

dn: ou=App1,ou=AppRoles,dc=example,dc=com
objectClass: organizationalunit
ou: App1
description: App1

dn: cn=traveler,ou=App1,ou=AppRoles,dc=example,dc=com
objectClass: groupofnames
cn: traveler
description: traveler
member: uid=admin,ou=Users,dc=example,dc=com

View File

@@ -0,0 +1,53 @@
@startuml Arch Diagram
top to bottom direction
component front as "oc-front" #MistyRose
component api as "oc-api" #BlueViolet
component auth as "oc-auth" #BlueViolet
component catalog as "oc-catalog" #MistyRose
component workspace as "oc-workspace" #MistyRose
component workflow as "oc-workflow" #MistyRose
component calendarIn as "oc-calendar-in" #MistyRose
component calendarOut as "oc-calendar-out" #MistyRose
component stat as "oc-status" #MistyRose
component disco as "oc-discovery" #MistyRose
component agg as "oc-aggregator" #MistyRose
component scheduler as "oc-scheduler" #LightYellow
component monitor as "oc-monitor" #LightYellow
database rd as "Nats" #Green
database zn as "Zinc" #Green
database loki as "Loki" #Green
database mongo as "MongoDB" #Green
database nats as "Nats" #Green
front -- api
api -- auth : auth user
api -- catalog : local search
api -- workspace : store user data
api -- workflow
api -- calendarIn
api -- calendarOut
api -- stat
catalog -- disco
catalog -- agg
scheduler -- monitor
scheduler -- catalog
rd -- scheduler
loki -- monitor
catalog -- mongo : store resources available for users
workspace -- mongo : store resources allocated to a workspace
workflow -- mongo : store workflow
calendarOut -- mongo : store booking information for this dc
@enduml

4
env.env Normal file
View File

@@ -0,0 +1,4 @@
KUBERNETES_SERVICE_HOST=192.168.1.169
KUBE_CA="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTVlk3ZHZhNEdYTVdkMy9jMlhLN3JLYjlnWXgyNSthaEE0NmkyNVBkSFAKRktQL2UxSVMyWVF0dzNYZW1TTUQxaStZdzJSaVppNUQrSVZUamNtNHdhcnFvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWtlUVJpNFJiODduME5yRnZaWjZHClc2SU55NnN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnRXA5ck04WmdNclRZSHYxZjNzOW5DZXZZeWVVa3lZUk4KWjUzazdoaytJS1FDSVFDbk05TnVGKzlTakIzNDFacGZ5ays2NEpWdkpSM3BhcmVaejdMd2lhNm9kdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
KUBE_CERT="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJWUxWNkFPQkdrU1F3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOekl6TVRFeU1ETTJNQjRYRFRJME1EZ3dPREV3TVRNMU5sb1hEVEkxTURndwpPREV3TVRNMU5sb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJGQ2Q1MFdPeWdlQ2syQzcKV2FrOWY4MVAvSkJieVRIajRWOXBsTEo0ck5HeHFtSjJOb2xROFYxdUx5RjBtOTQ2Nkc0RmRDQ2dqaXFVSk92Swp3NVRPNnd5alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVFJkOFI5cXVWK2pjeUVmL0ovT1hQSzMyS09XekFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQTArbThqTDBJVldvUTZ0dnB4cFo4NVlMalF1SmpwdXM0aDdnSXRxS3NmUVVDSUI2M2ZNdzFBMm5OVWU1TgpIUGZOcEQwSEtwcVN0Wnk4djIyVzliYlJUNklZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRc3hXWk9pbnIrcVp4TmFEQjVGMGsvTDF5cE01VHAxOFRaeU92ektJazQKRTFsZWVqUm9STW0zNmhPeVljbnN3d3JoNnhSUnBpMW5RdGhyMzg0S0Z6MlBvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTBYZkVmYXJsZm8zTWhIL3lmemx6Cnl0OWlqbHN3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUxJL2dNYnNMT3MvUUpJa3U2WHVpRVMwTEE2cEJHMXgKcnBlTnpGdlZOekZsQWlFQW1wdjBubjZqN3M0MVI0QzFNMEpSL0djNE53MHdldlFmZWdEVGF1R2p3cFk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
KUBE_DATA="LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU5ZS1BFb1dhd1NKUzJlRW5oWmlYMk5VZlY1ZlhKV2krSVNnV09TNFE5VTlvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFVUozblJZN0tCNEtUWUx0WnFUMS96VS84a0Z2Sk1lUGhYMm1Vc25pczBiR3FZblkyaVZEeApYVzR2SVhTYjNqcm9iZ1YwSUtDT0twUWs2OHJEbE03ckRBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo="

46
k8s/README.md Normal file
View File

@@ -0,0 +1,46 @@
## Deploy the opencloud chart
```
./start.sh <mode: dev|prod default:dev> <branch | default:main>
```
Feel free to modify opencloud/dev-values.yaml or create a new one. The provided setup should work out of the box, but is not suitable for production use.
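For example, a development deployment built from the `main` branch (these are also the defaults):
```
./start.sh dev main
```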
## Hostname settings
Edit your /etc/hosts file, and add following line:
```
127.0.0.1 beta.opencloud.com
```
## Done
Everything should be operational now; go to http://beta.opencloud.com and enjoy the ride.
# Prebuilt microservices deployment procedure
TODO
# First steps
Go to http://beta.opencloud.com/users
Log in using default user/password combo ldapadmin/ldapadmin
Create a new user, or change the default one
Go to http://beta.opencloud.com
Log in using your fresh credentials
Do stuff
You can go to http://beta.opencloud.com/mongoexpress
... for mongo express web client access (default login/password is test/testme)
You can go to http://localhost/dashboard/
... for access to Traefik reverse proxy front-end

21
k8s/start.sh Executable file
View File

@@ -0,0 +1,21 @@
#!/bin/bash
mode=${1:-dev}
branch=${2:-main}
cd ../..
if [ ! -d "oc-k8s" ]; then
echo "Cloning repository: $repo_name"
git clone "https://cloud.o-forge.io/core/oc-k8s.git"
if [ $? -ne 0 ]; then
echo "Error cloning oc-k8s"
exit 1
fi
fi
echo "Repository 'oc-k8s' already exists. Pulling latest changes..."
cd "oc-k8s" && git checkout $branch && git pull
./create_kind_cluster.sh
./clone_opencloud_microservices.sh $branch
./build_opencloud_microservices.sh
./install.sh $mode

20
k8s/stop.sh Executable file
View File

@@ -0,0 +1,20 @@
#!/bin/bash
mode=${1:-dev}
branch=${2:-main}
cd ../..
if [ ! -d "oc-k8s" ];
echo "Cloning repository: $repo_name"
git clone "https://cloud.o-forge.io/core/oc-k8s.git"
if [ $? -ne 0 ]; then
echo "Error cloning oc-k8s"
exit 1
fi
fi
echo "Repository 'oc-k8s' already exists. Pulling latest changes..."
cd "oc-k8s" && git checkout $branch && git pull
./uninstall.sh $mode
./delete_kind_cluster.sh

View File

@@ -1,3 +0,0 @@
---
version: 0.1.0

View File

@@ -1,41 +0,0 @@
---
# Version definition
version: 1.0
tools:
- name: kubectl
url: https://dl.k8s.io/release/%s/bin/linux/amd64/kubectl
version: v1.31.0
- name: helm
url: https://get.helm.sh/helm-%s-linux-amd64.tar.gz
version: v3.16.0
# helm install my-release <repo>/<chart>
opencloud:
- repository:
name: bitnami
url: https://charts.bitnami.com/bitnami # Chart repository
charts:
- name: wordpress
chart: bitnami/wordpress
version: 23.1.0
values: {}
helm_opts: --wait-for-jobs
helm_filevalues: values-init.yml
- name: phpmyadmin
chart: bitnami/phpmyadmin
version: 17.0.4
values: {}
- charts:
- name: mongo
chart: ../oc-mongo/mongo
- charts:
- name: myfirstrelease
chart: myfirstchart-0.1.0.tgz
url: https://zzzz/myfirstchart-0.1.0.tgz
# helm install myfirstrelease myfirstchart-0.1.0.tgz

3
publish/.gitignore vendored
View File

@@ -1,3 +0,0 @@
go.sum
*_
.coverage.*

View File

@@ -1,3 +0,0 @@
module oc-publish
go 1.22.2

View File

@@ -1,27 +0,0 @@
package main
import (
"fmt"
"os"
"oc-publish/releases"
)
func main() {
fmt.Println(" >> oc-publish :")
version := os.Args[1]
fmt.Println(fmt.Sprintf(" << version : %s", version))
existe, _ := releases.CheckRelease(version)
fmt.Println(fmt.Sprintf(" << existe : %t ", existe))
idRelease, _ := releases.GetReleaseId(version)
fmt.Println(fmt.Sprintf(" << id : %d ", idRelease))
idAsset, _ := releases.GetAssetId(idRelease, "oc.json")
fmt.Println(fmt.Sprintf(" << idAsset : %d ", idAsset))
fmt.Println(releases.CreateAsset(idRelease, "../bin/oc-deploy"))
}

View File

@@ -1,6 +0,0 @@
package occonst
var PUBLISH_URL = "https://cloud.o-forge.io"
var PUBLISH_VERSION = "core/oc-deploy"
var PUBLISH_TOKEN = ""

View File

@@ -1,166 +0,0 @@
package releases
import (
"fmt"
"os"
"path/filepath"
"mime/multipart"
"io"
"encoding/json"
"net/http"
"bytes"
"oc-publish/occonst"
)
type assetStruct struct {
Name string `json:"name"`
Id int `json:"id"`
}
func GetAssetId(idRelease int, name string) (int, error) {
url := fmt.Sprintf("%s/api/v1/repos/%s/releases/%d/assets",
occonst.PUBLISH_URL,
occonst.PUBLISH_VERSION,
idRelease)
res, err := http.Get(url)
if err != nil {
return -1, err
}
body, err := io.ReadAll(res.Body)
if err != nil {
return -2, err
}
var data []assetStruct
err = json.Unmarshal(body, &data)
fmt.Println(err)
if err != nil {
return -3, err
}
for _, ele := range data {
if ele.Name == name {
return ele.Id, nil
}
}
return 0, nil
}
// curl -X 'POST' \
// 'https://cloud.o-forge.io/api/v1/repos/core/oc-deploy/releases/2/assets?name=zzzz' \
// -H 'accept: application/json' \
// -H 'Content-Type: multipart/form-data' \
// -F 'attachment=oc-deploy'
func CreateAsset(idRelease int, filename string) (int, error) {
url := fmt.Sprintf("%s/api/v1/repos/%s/releases/%d/assets?name=%s&token=%s",
occonst.PUBLISH_URL,
occonst.PUBLISH_VERSION,
idRelease,
filepath.Base(filename), // use the file's base name as the asset name
occonst.PUBLISH_TOKEN)
// request, err := newfileUploadRequest(url, extraParams, "file", "/tmp/doc.pdf")
err := uploadFile(url, "attachment", filename)
fmt.Println(url, err)
// fmt.Println(url)
// body := []byte("CONTENU")
// req, err := http.NewRequest("POST", url, bytes.NewBuffer(body))
// if err != nil {
// return -1, err
// }
// req.Header.Add("accept", "application/json")
// req.Header.Add("Content-Type", "multipart/form-data")
// client := &http.Client{}
// res, err := client.Do(req)
// fmt.Println(res, err)
// cnt, err := io.ReadAll(res.Body)
// fmt.Println(string(cnt), err)
// if err != nil {
// return -1, err
// }
// if err != nil {
// return -2, err
// }
// defer res.Body.Close()
return 0, nil
}
func uploadFile(url string, paramName string, filePath string) error {
file, err := os.Open(filePath)
if err != nil {
return err
}
defer file.Close()
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
part, err := writer.CreateFormFile(paramName, filepath.Base(filePath))
if err != nil {
return err
}
_, err = io.Copy(part, file)
err = writer.Close()
if err != nil {
return err
}
request, err := http.NewRequest("POST", url, body)
request.Header.Add("Content-Type", writer.FormDataContentType())
request.Header.Add("accept", "application/json")
client := &http.Client{}
response, err := client.Do(request)
if err != nil {
return err
}
defer response.Body.Close()
cnt, err := io.ReadAll(response.Body)
fmt.Println(string(cnt), err, writer.FormDataContentType())
// Handle the server response...
return nil
}
// func newfileUploadRequest(uri string, params map[string]string, paramName, path string) (*http.Request, error) {
// file, err := os.Open(path)
// if err != nil {
// return nil, err
// }
// defer file.Close()
// body := &bytes.Buffer{}
// writer := multipart.NewWriter(body)
// part, err := writer.CreateFormFile(paramName, filepath.Base(path))
// if err != nil {
// return nil, err
// }
// _, err = io.Copy(part, file)
// // for key, val := range params {
// // _ = writer.WriteField(key, val)
// // }
// // err = writer.Close()
// // if err != nil {
// // return nil, err
// // }
// req, err := http.NewRequest("POST", uri, body)
// req.Header.Set("Content-Type", writer.FormDataContentType())
// return req, err
// }

View File

@@ -1,63 +0,0 @@
package releases
import (
"fmt"
"io"
"encoding/json"
"net/http"
"oc-publish/occonst"
)
type checkStruct struct {
Name string `json:"name"`
Id int `json:"id"`
}
func CheckRelease(version string) (bool, error) {
url := fmt.Sprintf("%s/api/v1/repos/%s/releases/tags/%s",
occonst.PUBLISH_URL,
occonst.PUBLISH_VERSION,
version)
res, err := http.Get(url)
if err != nil {
return false, err
}
body, err := io.ReadAll(res.Body)
if err != nil {
return false, err
}
var data checkStruct
err = json.Unmarshal(body, &data)
if err != nil {
return false, err
}
return data.Name != "", nil
}
func GetReleaseId(version string) (int, error) {
url := fmt.Sprintf("%s/api/v1/repos/%s/releases/tags/%s",
occonst.PUBLISH_URL,
occonst.PUBLISH_VERSION,
version)
res, err := http.Get(url)
if err != nil {
return 0, err
}
body, err := io.ReadAll(res.Body)
if err != nil {
return 0, err
}
var data checkStruct
err = json.Unmarshal(body, &data)
if err != nil {
return 0, err
}
return data.Id, nil
}

5
src/.gitignore vendored
View File

@@ -1,5 +0,0 @@
go.sum
*_
.coverage.*
.*.log
workspace_*

View File

@@ -1,79 +0,0 @@
BIN_NAME := oc-deploy
BIN_OPTS :=
##################
SOURCES := $(wildcard *.go) $(wildcard */*.go)
BIN_DIR = ../bin/
PLUGINS := $(wildcard ../plugins/*/*.go)
OBJS := ${PLUGINS:.go=.so}
%.so: %.go
go build -buildmode=plugin -o $@ $<
help:
@echo
@echo 'Usage:'
@echo ' make build Build the executables.'
@echo ' make get-deps Dependency download'
@echo ' make run BIN_OPTS=... Go run'
@echo ' make run_install BIN_OPTS=... Go run'
@echo ' make run_uninstall BIN_OPTS=... Go run'
@echo ' make exec BIN_OPTS=... Run the executable'
@echo ' make exec_install BIN_OPTS=... Run the executable'
@echo ' make exec_uninstall BIN_OPTS=... Run the executable'
@echo ' make test Run the tests'
@echo ' make clean Clean the directory tree.'
@echo
${BIN_DIR}/${BIN_NAME}: ${SOURCES} $(OBJS)
go build -o ${BIN_DIR}/${BIN_NAME}
get-deps:
@go mod tidy
build: ${BIN_DIR}/${BIN_NAME}
run: $(OBJS)
@go run main.go ${BIN_OPTS}
run_generate: $(OBJS)
@go run main.go generate ${BIN_OPTS}
run_install: $(OBJS)
@go run main.go install ${BIN_OPTS}
run_uninstall: $(OBJS)
@go run main.go uninstall ${BIN_OPTS}
exec: ${BIN_DIR}/${BIN_NAME} $(OBJS)
@${BIN_DIR}/${BIN_NAME} ${BIN_OPTS}
exec_install: ${BIN_DIR}/${BIN_NAME} $(OBJS)
@${BIN_DIR}/${BIN_NAME} install ${BIN_OPTS}
exec_uninstall: ${BIN_DIR}/${BIN_NAME} $(OBJS)
@${BIN_DIR}/${BIN_NAME} uninstall ${BIN_OPTS}
clean:
@test ! -e ${BIN_DIR}/${BIN_NAME} || rm ${BIN_DIR}/${BIN_NAME}
@test ! -e .coverage.out || rm .coverage.out
@test ! -e .coverage.html || rm .coverage.html
@test ! -e go.sum || rm go.sum
@test ! -e .oc-deploy.log || rm .oc-deploy.log
@rm -rf workspace_*
.PHONY: test
test_%:
go test oc-deploy/$(subst test_,,$@) -coverprofile=.coverage.out -v
@go tool cover -html=.coverage.out -o .coverage.html
test:
@go test ./... -coverprofile=.coverage.out -v
go tool cover -html=.coverage.out -o .coverage.html

View File

@@ -1,85 +0,0 @@
# Purpose
**oc-deploy** is a tool to deploy (with **helm**) all components of **OpenCloud**.
# Usage
| Command | Description |
| ----------------------------------------------------------------------------- | --------------------------- |
| ```oc-deploy``` | Display help |
| ```oc-deploy version``` | Display the version of the tool |
| ```oc-deploy install [-c\|--context <context>] [-v\|--version <OcVersion>]``` | Deploy an OpenCloud instance |
| ```oc-deploy uninstall [-c\|--context <context>]``` | Undeploy an OpenCloud instance |
| Arguments | Description | Default |
| ---------------- | --------------------------- | ------------ |
| ```context``` | Kubernetes context | _opencloud_ |
| ```OcVersion``` | Specific version or latest | _latest_ |
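For example, a typical install/uninstall session against the default context (illustrative):
```
oc-deploy install --context opencloud --version latest
oc-deploy uninstall --context opencloud
```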
# Principle
# Prerequisites
**oc-deploy** needs access to a Kubernetes cluster, i.e. a kubeconfig.
**oc-deploy** needs access to the Internet:
* to download the _oc.json_ file (which contains _oc.yml_):
* Url : https://cloud.o-forge.io/core/oc-deploy/releases
* to download the _kubectl_ and _helm_ tools if needed:
* Url : URLs are specified in _oc.yml_
# Development
To init:
```
make get-deps
```
## To build
```
make build
```
## To run
```
make run_install [BIN_OPTS="<args>"]
make run_uninstall [BIN_OPTS="<args>"]
make run_generate [BIN_OPTS="<args>"]
```
or
```
make exec_install [BIN_OPTS="<args>"]
make exec_uninstall [BIN_OPTS="<args>"]
make exec_generate [BIN_OPTS="<args>"]
```
# To Test
All packages:
```
make test
```
or to test a specific package:
```
make test_<package>
```
Tests generate a _.coverage.html_ file to view the test coverage.
## To Publish
Cf : ../publish
## Divers
* Latest version for _kubectl_: https://dl.k8s.io/release/stable.txt
* Release for _helm_: https://github.com/helm/helm/releases
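For example, the current stable _kubectl_ version can be fetched from a shell with:
```
curl -L https://dl.k8s.io/release/stable.txt
```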

View File

@@ -1,46 +0,0 @@
package chart
import (
"os"
"gopkg.in/yaml.v2"
)
type ChartData struct {
Name string `yaml:"name"`
Chart string `yaml:"chart"`
Url string `yaml:"url"`
Version string `yaml:"version"`
Opts string `yaml:"helm_opts"`
Values map[string]string `yaml:"helm_values"`
FileValues []string `yaml:"helm_filevalues"`
Overwrite string `yaml:"helm_overwrite"`
}
type repoData struct {
Name string `yaml:"name"`
Url string `yaml:"url"`
ForceUpdate bool `yaml:"forceupdate"`
Opts string `yaml:"opts"`
}
type ChartRepoData struct {
Repository repoData `yaml:"repository"`
Charts []ChartData `yaml:"charts"`
}
type chartsRepoParse struct {
Charts []ChartRepoData `yaml:"opencloud"`
}
func FromConfigFile(filename string) ([]ChartRepoData, error) {
var data chartsRepoParse
yamlFile, err := os.ReadFile(filename)
if err != nil {
// Propagate read errors instead of silently parsing empty input
return data.Charts, err
}
err = yaml.Unmarshal(yamlFile, &data)
return data.Charts, err
}

View File

@@ -1,50 +0,0 @@
package chart
import (
"testing"
"path/filepath"
"github.com/stretchr/testify/assert"
)
func _TestReadConfChart(t *testing.T) {
src := filepath.Join(TEST_SRC_DIR, "oc.yml")
assert.FileExists(t, src, "FromConfigFile error")
data, _ := FromConfigFile(src)
assert.Equal(t, "bitnami", data[0].Repository.Name, "FromConfigFile error")
assert.Equal(t, "https://charts.bitnami.com/bitnami", data[0].Repository.Url, "FromConfigFile error")
wordpress := data[0].Charts[0]
assert.Equal(t, "wordpress", wordpress.Name, "FromConfigFile error")
assert.Equal(t, "bitnami/wordpress", wordpress.Chart, "FromConfigFile error")
assert.Equal(t, "23.1.0", wordpress.Version, "FromConfigFile error")
assert.Equal(t, 0, len(wordpress.FileValues), "FromConfigFile error")
assert.Equal(t, 0, len(wordpress.Values), "FromConfigFile error")
phpmyadmin := data[0].Charts[1]
assert.Equal(t, "phpmyadmin", phpmyadmin.Name, "FromConfigFile error")
assert.Equal(t, "bitnami/phpmyadmin", phpmyadmin.Chart,"FromConfigFile error")
assert.Equal(t, "17.0.4", phpmyadmin.Version, "FromConfigFile error")
assert.Equal(t, 2, len(phpmyadmin.FileValues), "FromConfigFile error")
assert.Equal(t, 1, len(phpmyadmin.Values), "FromConfigFile error")
data1 := data[1]
assert.Equal(t, "", data1.Repository.Name, "FromConfigFile error")
assert.Equal(t, "", data1.Repository.Url, "FromConfigFile error")
myfirstrelease := data1.Charts[0]
assert.Equal(t, "myfirstrelease", myfirstrelease.Name, "FromConfigFile error")
assert.Equal(t, "https://zzzz/myfirstchart-0.1.0.tgz", myfirstrelease.Url, "FromConfigFile error")
}
func TestReadConfChartOverwrite(t *testing.T){
src := filepath.Join(TEST_SRC_DIR, "oc_overwrite.yml")
assert.FileExists(t, src, "FromConfigFile error")
data, _ := FromConfigFile(src)
// Number of characters
assert.Equal(t, 70, len(data[0].Charts[0].Overwrite), "TestReadConfChartOverwrite error")
}

View File

@@ -1,23 +0,0 @@
package chart
import (
"os"
"testing"
"path/filepath"
)
var TEST_DEST_DIR = "../wrk_chart"
var TEST_SRC_DIR = filepath.Join("../../test", "chart")
func TestMain(m *testing.M) {
folderPath := TEST_DEST_DIR
os.RemoveAll(folderPath)
os.MkdirAll(folderPath, os.ModePerm)
// call flag.Parse() here if TestMain uses flags
exitCode := m.Run()
os.RemoveAll(folderPath)
os.Exit(exitCode)
}

View File

@@ -1,85 +0,0 @@
// Package cmd parses the command-line arguments.
// Argument : version ==> OpenCloud version
// Argument : projet ==> project name
package cmd
import (
"github.com/spf13/cobra"
log "oc-deploy/log_wrapper"
)
var (
context string
version string
modules []string
)
func cobraInstallCmd() *cobra.Command {
return &cobra.Command{
Use: "install",
Short: "install",
Long: `deploy Charts`,
Args: cobra.MaximumNArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return InstallCmd(context, version, modules)
},
Example: "oc-deploy install --version 0.1.0 --context ex1",
}
}
func cobraUninstallCmd() *cobra.Command{
return &cobra.Command{
Use: "uninstall",
Short: "undeploy",
Long: `Undeploy`,
Args: cobra.MaximumNArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return UninstallCmd(context)
},
Example: "oc-deploy uninstall --context ex1",
}
}
func cobraGenerateCmd() *cobra.Command{
return &cobra.Command{
Use: "generate",
Short: "generate",
Long: "Value",
Args: cobra.MaximumNArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return GenerateCmd(context, version)
},
Example: "oc-deploy generate --version 0.1.0 --context ex1",
}
}
func Execute() {
log.Log().Debug().Msg("Execute")
var rootCmd = &cobra.Command{Use: "oc-deploy"}
var cmdInstall = cobraInstallCmd()
var cmdUninstall = cobraUninstallCmd()
var cmdGenerate = cobraGenerateCmd()
cmdInstall.Flags().StringVarP(&context, "context", "c", "opencloud", "Context name")
cmdInstall.Flags().StringVarP(&version, "version", "v", "latest", "Version")
cmdInstall.Flags().StringArrayVarP(&modules, "modules", "m", []string{}, "modules, ...")
cmdUninstall.Flags().StringVarP(&context, "context", "c", "opencloud", "Context name")
cmdGenerate.Flags().StringVarP(&context, "context", "c", "opencloud", "Context name")
cmdGenerate.Flags().StringVarP(&version, "version", "v", "latest", "Version")
rootCmd.AddCommand(cmdInstall)
rootCmd.AddCommand(cmdUninstall)
rootCmd.AddCommand(cmdGenerate)
cobra.CheckErr(rootCmd.Execute())
}

View File

@@ -1,9 +0,0 @@
package cmd
import (
"testing"
)
func TestExecute(t *testing.T) {
Execute()
}

View File

@@ -1,23 +0,0 @@
package cmd
import (
"fmt"
log "oc-deploy/log_wrapper"
// "oc-deploy/versionOc"
"oc-deploy/install"
)
func GenerateCmd(context string, version string) error {
log.Log().Info().Msg("Generate >> ")
workspace := fmt.Sprintf("workspace_%s", context)
obj := install.InstallClass{Workspace: workspace, Version: version}
_, err := obj.NewGenerate()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
return err
}

View File

@@ -1,51 +0,0 @@
package cmd
import (
"fmt"
log "oc-deploy/log_wrapper"
"oc-deploy/install"
)
func InstallCmd(context string, version string, modules []string) error {
log.Log().Info().Msg("Install >> ")
log.Log().Info().Msg(" << Contexte : " + context)
if len(modules) > 0 {
log.Log().Info().Msg(fmt.Sprintf(" << Modules : %s", modules))
}
workspace := fmt.Sprintf("workspace_%s", context)
obj := install.InstallClass{Workspace: workspace, Version: version}
file, err := obj.NewInstall()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
log.Log().Info().Msg(fmt.Sprintf(" << Version : %s", obj.Version))
log.Log().Info().Msg(fmt.Sprintf(" >> Config : %s", file))
err = obj.Tools()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
obj.SetCommands()
err = obj.ChartRepo()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
err = obj.K8s(context)
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
err = obj.InstallCharts(modules)
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
return err
}

View File

@@ -1,41 +0,0 @@
package cmd
import (
"bytes"
"github.com/spf13/cobra"
"testing"
"github.com/stretchr/testify/assert"
)
func TestInstallCommand(t *testing.T) {
cmd := cobraInstallCmd()
inMock := false
cmd.RunE = func(cmd *cobra.Command, args []string) error {
inMock = true
return nil
}
cmd.Execute()
assert.Truef(t, inMock, "TestInstallCommand")
}
func TestInstallCommandErr(t *testing.T) {
cmd := cobraUninstallCmd()
inMock := false
cmd.RunE = func(cmd *cobra.Command, args []string) error {
inMock = true
return nil
}
cmd.SetArgs([]string{"bad"})
b := bytes.NewBufferString("")
cmd.SetOut(b)
err := cmd.Execute()
assert.Falsef(t, inMock, "TestInstallCommand args")
assert.NotNilf(t, err, "TestInstallCommand args")
}

View File

@@ -1,45 +0,0 @@
package cmd
import (
"fmt"
// "strings"
// "github.com/spf13/cobra"
log "oc-deploy/log_wrapper"
// "oc-deploy/versionOc"
"oc-deploy/install"
)
func UninstallCmd(context string) error {
log.Log().Info().Msg("Uninstall >> ")
log.Log().Info().Msg(" << Contexte : " + context)
workspace := fmt.Sprintf("workspace_%s", context)
obj := install.InstallClass{Workspace: workspace}
file, err := obj.NewUninstall()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
log.Log().Info().Msg(fmt.Sprintf(" << Version : %s", obj.Version))
log.Log().Info().Msg(fmt.Sprintf(" >> Config : %s", file))
err = obj.Tools()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
err = obj.K8s(context)
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
err = obj.UninstallCharts()
if err != nil {
log.Log().Fatal().Msg(" >> " + err.Error())
}
return err
}

View File

@@ -1,41 +0,0 @@
package cmd
import (
"bytes"
"github.com/spf13/cobra"
"testing"
"github.com/stretchr/testify/assert"
)
func TestUninstallCommand(t *testing.T) {
cmd := cobraUninstallCmd()
inMock := false
cmd.RunE = func(cmd *cobra.Command, args []string) error {
inMock = true
return nil
}
cmd.Execute()
assert.Truef(t, inMock, "TestUninstallCommand")
}
func TestUninstallCommandErr(t *testing.T) {
cmd := cobraUninstallCmd()
inMock := false
cmd.RunE = func(cmd *cobra.Command, args []string) error {
inMock = true
return nil
}
cmd.SetArgs([]string{"bad"})
b := bytes.NewBufferString("")
cmd.SetOut(b)
err := cmd.Execute()
assert.Falsef(t, inMock, "TestUninstallCommand args")
assert.NotNilf(t, err, "TestUninstallCommand args")
}

View File

@@ -1,25 +0,0 @@
module oc-deploy
go 1.22.0
require (
github.com/jarcoal/httpmock v1.3.1
github.com/rs/zerolog v1.33.0
github.com/spf13/cobra v1.8.1
github.com/stretchr/testify v1.9.0
gopkg.in/yaml.v2 v2.4.0
)
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/kr/pretty v0.3.1 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rogpeppe/go-internal v1.11.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
golang.org/x/sys v0.22.0 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

View File

@@ -1,173 +0,0 @@
package helm
import (
"fmt"
"strconv"
"os"
"strings"
"errors"
"path/filepath"
"encoding/json"
log "oc-deploy/log_wrapper"
)
type HelmChart struct {
Bin string
Name string
Chart string
Version string
Url string
Workspace string
Opts string
Values map[string]string
FileValues []string
}
type installInfoOutput struct {
Description string `json:"description"`
Notes string `json:"notes"`
Status string `json:"status"`
}
type installOutput struct {
Info installInfoOutput `json:"info"`
}
func (this HelmCommand) ChartInstall(data HelmChart) (string, error) {
bin := this.Bin
existe, err := this.chartExists(data)
if err != nil {
return "", err
}
if existe {
return "Existe déjà", nil
}
ficChart := data.Chart
// Look for the chart locally first
if _, err := os.Stat(ficChart); err != nil {
// Not found locally: look it up via the workspace (reuse the outer variable so the path is actually used)
wsChart := filepath.Join(data.Workspace, data.Chart)
if _, err := os.Stat(wsChart); err == nil {
ficChart = wsChart
} else if data.Url != "" {
// Not found in the workspace either: it would have to be downloaded
fmt.Println("Downloading chart from", data.Url)
}
}
msg := fmt.Sprintf("%s install %s %s %s --output json", bin, data.Name, ficChart, data.Opts)
if data.Version != "" {
msg = fmt.Sprintf("%s --version %s", msg, data.Version)
}
for key, value := range data.Values {
msg = fmt.Sprintf("%s --set %s=%s", msg, key, value)
}
ficoverwrite := filepath.Join(data.Workspace, fmt.Sprintf("value-%s.yml", data.Name))
if _, err := os.Stat(ficoverwrite); err != nil {
log.Log().Warn().Msg(ficoverwrite)
} else {
msg = fmt.Sprintf("%s --values %s", msg, ficoverwrite)
}
for _, valuefilename := range data.FileValues {
fic := filepath.Join(data.Workspace, valuefilename)
if _, err := os.Stat(fic); err != nil {
log.Log().Warn().Msg(fic)
} else {
msg = fmt.Sprintf("%s --values %s", msg, fic)
}
}
msg = strings.Replace(msg, "  ", " ", -1) // collapse doubled spaces left by empty options
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
return "", errors.New(res)
}
var objmap installOutput
err = json.Unmarshal(stdout, &objmap)
if err != nil {
return "", err
}
res := objmap.Info.Status
return res, nil
}
func (this HelmCommand) ChartUninstall(data HelmChart) (string, error) {
bin := this.Bin
log.Log().Info().Msg(" >> Chart : " + data.Name)
existe, err := this.chartExists(data)
if err != nil {
return "", err
}
if ! existe {
return "Non présent", nil
}
msg := fmt.Sprintf("%s uninstall %s", bin, data.Name)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
log.Log().Debug().Msg(res)
return res, err
}
// ../bin/helm list --filter phpmyadminm --short
func (this HelmCommand) chartExists(data HelmChart) (bool, error) {
bin := this.Bin
msg := fmt.Sprintf("%s list --filter %s --no-headers", bin, data.Name)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
log.Log().Debug().Msg(string(stdout))
return false, errors.New(string(stdout))
}
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
log.Log().Debug().Msg(string(stdout))
log.Log().Debug().Msg(strconv.FormatBool(res != ""))
return res != "", nil
}
// func (this HelmChart) GetRessources() (map[string]string, error) {
// hs := HelmStatus{Name: this.Name}
// hs.New(this.Bin)
// data, _ := hs.getRessources()
// return data, nil
// }

View File

@@ -1,30 +0,0 @@
package helm
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestHelmChartExists(t *testing.T){
cmd := getCmdHelm(true, `oc-catalog default 1 2024-09-06 16:01:49.17368605 +0200 CEST deployed oc-catalog-0.1.0 1.0`)
data := HelmChart{Name: "oc-catalog"}
res, err := cmd.chartExists(data)
assert.Nilf(t, err, "error message %s", err)
assert.Truef(t, res, "TestHelmVersion error")
}
func TestHelmChartNotExists(t *testing.T){
cmd := getCmdHelm(true, "\n")
data := HelmChart{Name: "phpmyadmin"}
res, err := cmd.chartExists(data)
assert.Nilf(t, err, "error message %s", err)
assert.Falsef(t, res, "TestHelmVersion error")
}

View File

@@ -1,22 +0,0 @@
package helm
import (
"os/exec"
)
type HelmCommand struct {
Bin string
Exec func(string,...string) commandExecutor
}
////
type commandExecutor interface {
Output() ([]byte, error)
CombinedOutput() ([]byte, error)
}
func (this *HelmCommand) New() {
this.Exec = func(name string, arg ...string) commandExecutor {
return exec.Command(name, arg...)
}
}

View File

@@ -1,17 +0,0 @@
package helm
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestHelm(t *testing.T) {
cmd := HelmCommand{}
cmd.New()
assert.NotNilf(t, cmd.Exec, "TestHelm %s", "New")
cmd.Exec("pwd")
}

View File

@@ -1,84 +0,0 @@
package helm
import (
"os"
"strings"
"testing"
"path/filepath"
)
var TEST_DEST_DIR = "../wrk_helm"
var TEST_SRC_DIR = filepath.Join("../../test", "helm")
var TEST_BIN_DIR = filepath.Join("../../test", "bin")
func TestMain(m *testing.M) {
folderPath := TEST_DEST_DIR
os.RemoveAll(folderPath)
os.MkdirAll(folderPath, os.ModePerm)
// call flag.Parse() here if TestMain uses flags
exitCode := m.Run()
os.RemoveAll(folderPath)
os.Exit(exitCode)
}
// Mock
type MockCommandExecutor struct {
// Used to stub the return of the Output method
// Could add other properties depending on testing needs
output string
}
// Implements the commandExecutor interface
func (m *MockCommandExecutor) Output() ([]byte, error) {
return []byte(m.output), nil
}
func (m *MockCommandExecutor) CombinedOutput() ([]byte, error) {
return []byte(m.output), nil
}
//
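// getCmdHelm returns a HelmCommand backed either by a mock that replays
// the given canned output, or by the real helm test binary from TEST_BIN_DIR.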
func getCmdHelm(mock bool, output string) (HelmCommand) {
if mock {
mock := func(name string, args ...string) commandExecutor {
return &MockCommandExecutor{output: output}
}
cmd := HelmCommand{Bin: "mock", Exec: mock}
return cmd
} else {
bin := filepath.Join(TEST_BIN_DIR, "helm")
os.Chmod(bin, 0700)
cmd := HelmCommand{Bin: bin}
cmd.New()
return cmd
}
}
func getCmdsHelm(mock bool, outputs map[string]string) (HelmCommand) {
if mock {
mock := func(name string, args ...string) commandExecutor {
cmd := strings.TrimSuffix(strings.Join(args," "), " ")
output := outputs[cmd]
return &MockCommandExecutor{output: output}
}
cmd := HelmCommand{Bin: "mock", Exec: mock}
return cmd
} else {
bin := filepath.Join(TEST_BIN_DIR, "helm")
os.Chmod(bin, 0700)
cmd := HelmCommand{Bin: bin}
cmd.New()
return cmd
}
}

@@ -1,102 +0,0 @@
package helm
import (
"fmt"
"strings"
"encoding/json"
log "oc-deploy/log_wrapper"
"oc-deploy/utils"
)
type HelmRepo struct {
Name string
Repository string // repository URL
ForceUpdate bool
Opts string
}
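// AddRepository runs "helm repo add". Unless ForceUpdate is set, it first
// consults ListRepository and returns "Already exists" for a name that is
// already configured, e.g.:
//   AddRepository(HelmRepo{Name: "bitnami", Repository: "https://charts.bitnami.com/bitnami"})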
func (this HelmCommand) AddRepository(repo HelmRepo) (string, error) {
helm_bin := this.Bin
force_update := "--force-update=false"
if repo.ForceUpdate {
force_update = "--force-update=true"
} else {
list, _ := this.ListRepository()
if utils.StringInSlice(repo.Name, list) {
return "Existe déjà", nil
}
}
msg := fmt.Sprintf("%s repo add %s %s %s %s", helm_bin, repo.Name, repo.Repository, force_update, repo.Opts)
msg = strings.TrimSuffix(msg, " ")
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
return "", fmt.Errorf(string(stdout))
}
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
log.Log().Debug().Msg(res)
return res, nil
}
type parseList struct {
Name string `json:"name"`
}
func (this HelmCommand) ListRepository() ([]string, error) {
helm_bin := this.Bin
res := make([]string, 0, 0)
msg := fmt.Sprintf("%s repo list -o json", helm_bin)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
return res, err
}
var objmap []parseList
err = json.Unmarshal(stdout, &objmap)
if err != nil {
return res, err
}
for _, ele := range objmap {
res = append(res, ele.Name)
}
return res, err
}
// helm repo remove [NAME]
func (this HelmCommand) RemoveRepository(repo HelmRepo) (string, error) {
helm_bin := this.Bin
msg := fmt.Sprintf("%s repo remove %s", helm_bin, repo.Name)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
return res, err
}

@@ -1,72 +0,0 @@
package helm
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestHelmListRepository(t *testing.T){
cmd := getCmdHelm(true, `[{"name":"bitnami","url":"https://charts.bitnami.com/bitnami"}]`)
res, err := cmd.ListRepository()
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, "bitnami", res[0], "TestHelmVersion error")
}
func TestHelmRemoveRepository(t *testing.T){
cmd := getCmdHelm(true, `"bitnami" has been removed from your repositories`)
repo := HelmRepo{Name: "bitnami"}
res, err := cmd.RemoveRepository(repo)
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, `"bitnami" has been removed from your repositories`, res, "TestHelmRemoveRepository error")
}
func TestHelmRemoveRepository2(t *testing.T){
cmd := getCmdHelm(true, `Error: no repositories configured`)
repo := HelmRepo{Name: "bitnami"}
res, err := cmd.RemoveRepository(repo)
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, `Error: no repositories configured`, res, "TestHelmRemoveRepository error")
}
func TestHelmAddRepositoryNew(t *testing.T){
cmd_output := map[string]string{
"repo list -o json": `[{"name":"repo1","url":"https://repo.com"}]`,
"repo add repo2 https://repo2.com --force-update=false": `"repo2" has been added to your repositories"`,
}
cmd := getCmdsHelm(true, cmd_output)
repo := HelmRepo{Name: "repo2", Repository: "https://repo2.com", ForceUpdate: false}
res, err := cmd.AddRepository(repo)
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, `"repo2" has been added to your repositories"`, res, "TestHelmAddRepositoryNew error")
}
func TestHelmAddRepositoryExists(t *testing.T){
cmd_output := map[string]string{
"repo list -o json": `[{"name":"repo1","url":"https://repo.com"}]`,
"version --short": "v3.15.4+gfa9efb0",
}
cmd := getCmdsHelm(true, cmd_output)
repo := HelmRepo{Name: "repo1", Repository: "https://repo.com", ForceUpdate: false}
res, err := cmd.AddRepository(repo)
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, `Already exists`, res, "TestHelmAddRepositoryExists error")
}

@@ -1,70 +0,0 @@
package helm
import (
// "fmt"
"encoding/json"
)
////
type parseStatusInfoResourcesMetadata struct {
Name string `json:"name"`
}
// type parseStatusInfoResourcesPod struct {
// Api string `json:"apiVersion"`
// }
type parseStatusInfoResourcesStatefulSet struct {
Api string `json:"apiVersion"`
Kind string `json:"kind"`
Metadata parseStatusInfoResourcesMetadata `json:"metadata"`
}
type parseStatusInfoResourcesDeployment struct {
Api string `json:"apiVersion"`
Kind string `json:"kind"`
Metadata parseStatusInfoResourcesMetadata `json:"metadata"`
}
type parseStatusInfoResources struct {
// Pod []parseStatusInfoResourcesPod `json:"v1/Pod(related)"`
StatefulSet []parseStatusInfoResourcesStatefulSet `json:"v1/StatefulSet"`
Deployment []parseStatusInfoResourcesDeployment `json:"v1/Deployment"`
}
type parseStatusInfo struct {
Status string `json:"status"`
Resources parseStatusInfoResources `json:"Resources"`
}
type parseStatus struct {
Name string `json:"name"`
Info parseStatusInfo `json:"info"`
}
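// GetRessources parses the JSON status of a release and returns the
// StatefulSets and Deployments it created, as a map of name to kind.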
func (this HelmCommand) GetRessources(data HelmChart) (map[string]string, error) {
res := make(map[string]string)
status, err := this.Status(data)
if err != nil {
return res, err
}
var objmap parseStatus
err = json.Unmarshal([]byte(status), &objmap)
if err != nil {
return res, err
}
for _, ele := range objmap.Info.Resources.StatefulSet {
res[ele.Metadata.Name] = ele.Kind
}
for _, ele := range objmap.Info.Resources.Deployment {
res[ele.Metadata.Name] = ele.Kind
}
return res, nil
}

@@ -1,24 +0,0 @@
package helm
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
func TestHelmRessources(t *testing.T){
fileName := filepath.Join(TEST_SRC_DIR, "helm_status.json")
res_json, _ := os.ReadFile(fileName)
cmd := getCmdHelm(true, string(res_json))
data := HelmChart{Name: "test1"}
res, err := cmd.GetRessources(data)
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, "StatefulSet", res["oc-catalog-oc-catalog"], "TestHelmStatus error")
}

@@ -1,32 +0,0 @@
package helm
import (
"fmt"
"strings"
"errors"
log "oc-deploy/log_wrapper"
)
// type HelmData struct {
// Name string
// }
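// Status runs "helm status <name> --show-resources -o json" and returns
// the raw JSON output.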
func (this HelmCommand) Status(data HelmChart) (string, error) {
helm_bin := this.Bin
msg := fmt.Sprintf("%s status %s --show-resources -o json", helm_bin, data.Name)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
log.Log().Debug().Msg(string(stdout))
return "", errors.New(string(stdout))
}
return string(stdout), nil
}

@@ -1,20 +0,0 @@
package helm
import (
"strings"
)
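// GetVersion returns the output of "helm version --short",
// e.g. "v3.15.4+gfa9efb0".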
func (this HelmCommand) GetVersion() (string, error) {
cmd := this.Exec(this.Bin, "version", "--short")
stdout, err := cmd.CombinedOutput()
if err != nil {
return "", err
}
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
return res, nil
}

@@ -1,18 +0,0 @@
package helm
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestHelmVersion(t *testing.T){
cmd := getCmdHelm(true, "v3.15.4+gfa9efb0\n")
version, err := cmd.GetVersion()
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, "v3.15.4+gfa9efb0", version, "TestHelmVersion error")
}

@@ -1,145 +0,0 @@
package install
import (
"fmt"
"os"
"errors"
log "oc-deploy/log_wrapper"
"oc-deploy/tool"
"oc-deploy/chart"
"oc-deploy/kubectl"
"oc-deploy/helm"
"oc-deploy/versionOc"
"oc-deploy/utils"
)
type InstallClass struct {
Version string
Workspace string
tools []tool.ToolData
toolsBin map[string]string
charts []chart.ChartRepoData
commandHelm helm.HelmCommand
commandKubectl kubectl.KubectlCommand
}
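// Tools resolves every external tool declared in the configuration
// (helm, kubectl, ...), checks that each binary answers a version query,
// and records its path in toolsBin.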
func (this *InstallClass) Tools() (error) {
var mem []tool.ToolClass
for _, v := range this.tools {
tool2 := tool.ToolClass{}
v.Bin = this.Workspace
err := tool2.New(v)
if err != nil {
return err
}
mem = append(mem,tool2)
}
this.toolsBin = make(map[string]string)
for _, p := range mem {
data := p.Obj.Get()
log.Log().Info().Msg(fmt.Sprintf(" >> Outils : %s", data.Name))
err := p.Locate()
if err != nil {
log.Log().Info().Msg(fmt.Sprintf(" << %s ", err))
return err
}
log.Log().Info().Msg(fmt.Sprintf(" << %s ", p.Path))
version, err1 := p.Version()
if err1 != nil {
log.Log().Info().Msg(fmt.Sprintf(" << %s ", err1))
return err1
}
log.Log().Info().Msg(fmt.Sprintf(" << %s ", version))
this.toolsBin[data.Name] = p.Path
}
return nil
}
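// SetCommands wires the helm and kubectl command wrappers to the
// binaries located by Tools.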
func (this *InstallClass) SetCommands() {
helm_bin, _ := this.getToolBin("helm")
this.commandHelm = helm.HelmCommand{Bin: helm_bin}
this.commandHelm.New()
kubectl_bin, _ := this.getToolBin("kubectl")
this.commandKubectl = kubectl.KubectlCommand{Bin: kubectl_bin}
this.commandKubectl.New()
}
func (this *InstallClass) getToolBin(name string) (string, error) {
if path, ok := this.toolsBin[name]; ok {
return path, nil
}
return "", errors.New("tool not found: " + name)
}
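// K8s switches kubectl to the requested context, logs the resulting
// context, namespace and server, and checks that the cluster responds.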
func (this *InstallClass) K8s(context string) (error) {
kube := this.commandKubectl
err := kube.UseContext(context)
if err != nil {
log.Log().Info().Msg(fmt.Sprintf(" << Kube : %s ", err))
return err
}
currentcontext, namespace, server, err := kube.GetContext()
if err != nil {
log.Log().Info().Msg(fmt.Sprintf(" << Kube : %s ", err))
return err
}
log.Log().Info().Msg(fmt.Sprintf(" << Kube : %s ", currentcontext))
log.Log().Info().Msg(fmt.Sprintf(" << : %s ", namespace))
log.Log().Info().Msg(fmt.Sprintf(" << : %s ", server))
err = kube.Check()
if err != nil {
log.Log().Info().Msg(fmt.Sprintf(" << : %s ", err))
return err
}
log.Log().Info().Msg(fmt.Sprintf(" << : %s ", "OK"))
return nil
}
func (this *InstallClass) extractVersion() (string, error) {
// Extract the version file
dst := fmt.Sprintf("%s/oc.yml", this.Workspace)
log.Log().Debug().Msg(fmt.Sprintf("Checking version file: %s", dst))
if _, err := os.Stat(dst); err == nil {
log.Log().Debug().Msg("Already exists")
version, err := versionOc.GetFromFile(dst)
if err != nil {
return "", err
}
this.Version = version
} else {
log.Log().Debug().Msg("Téléchargement du fichier de version "+ this.Version)
version, fileversion, err := versionOc.GetFromOnline(this.Version)
if err != nil {
return "", err
}
this.Version = version
err = utils.CopyContentFile(fileversion, dst)
if err != nil {
return "", err
}
}
return dst, nil
}

@@ -1,36 +0,0 @@
package install
import (
"fmt"
"path/filepath"
log "oc-deploy/log_wrapper"
"oc-deploy/utils"
"oc-deploy/chart"
)
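// NewGenerate extracts the version file and writes one values-<chart>.yml
// per configured chart into the workspace, seeded from the chart's
// Overwrite content.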
func (this *InstallClass) NewGenerate() (string, error) {
// Extract the per-version configuration file
dst, err := this.extractVersion()
if err != nil {
return "", err
}
this.charts, err = chart.FromConfigFile(dst)
if err != nil {
return dst, err
}
for _, ele1 := range this.charts {
for _, ele2 := range ele1.Charts {
filename := filepath.Join(this.Workspace, fmt.Sprintf("values-%s.yml", ele2.Name) )
utils.CopyContentFile(ele2.Overwrite, filename)
log.Log().Info().Msg(fmt.Sprintf(">> %s : %s", ele2.Name, filename))
}
}
return dst, nil
}

@@ -1,111 +0,0 @@
package install
import (
"fmt"
"sync"
log "oc-deploy/log_wrapper"
"oc-deploy/utils"
"oc-deploy/tool"
"oc-deploy/chart"
"oc-deploy/helm"
"oc-deploy/kubectl"
)
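// NewInstall extracts the version file and loads the tool and chart
// configuration from it.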
func (this *InstallClass) NewInstall() (string, error) {
// Extract the per-version configuration file
dst, err := this.extractVersion()
if err != nil {
return "", err
}
// Read the configuration file
this.tools, err = tool.FromConfigFile(dst)
if err != nil {
return dst, err
}
this.charts, err = chart.FromConfigFile(dst)
if err != nil {
return dst, err
}
return dst, nil
}
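// ChartRepo registers every Helm repository referenced by the chart
// configuration.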
func (this *InstallClass) ChartRepo() (error) {
for _, v := range this.charts {
if v.Repository.Name != "" {
log.Log().Info().Msg(fmt.Sprintf(" >> Helm Repo : %s", v.Repository.Name))
repo := helm.HelmRepo{Name: v.Repository.Name,
Repository: v.Repository.Url,
ForceUpdate: v.Repository.ForceUpdate,
Opts: v.Repository.Opts}
res, err := this.commandHelm.AddRepository(repo)
if err != nil {
log.Log().Info().Msg(fmt.Sprintf(" << %s ", err))
return err
}
log.Log().Info().Msg(fmt.Sprintf(" << %s ", res))
}
}
return nil
}
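// InstallCharts installs the configured charts concurrently; when modules
// is non-empty, only the charts named in it are installed.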
func (this *InstallClass) InstallCharts(modules []string) (error) {
var wg sync.WaitGroup
for _, v := range this.charts {
for _, v1 := range v.Charts {
if len(modules) == 0 || utils.StringInSlice(v1.Name, modules) {
wg.Add(1)
go func(c chart.ChartData) {
// pass the chart as an argument so each goroutine installs its own copy
defer wg.Done()
this.installChart(c)
}(v1)
}
}
}
wg.Wait()
return nil
}
func (this *InstallClass) installChart(chart chart.ChartData) {
log.Log().Info().Msg(fmt.Sprintf(" << Chart : %s ", chart.Name))
data := helm.HelmChart{Name: chart.Name,
Chart: chart.Chart,
Url: chart.Url,
Version: chart.Version,
Workspace: this.Workspace,
Opts: chart.Opts,
Values: chart.Values,
FileValues: chart.FileValues}
res, err := this.commandHelm.ChartInstall(data)
if err != nil {
log.Log().Error().Msg(fmt.Sprintf(" >> %s %s (%s)", data.Name, "KO", err))
return
}
log.Log().Info().Msg(fmt.Sprintf(" >> %s (%s)", data.Name, res))
ressources, _ := this.commandHelm.GetRessources(data)
for key, value := range ressources {
obj := kubectl.KubectlObject{Name: key, Kind: value}
err := this.commandKubectl.Wait(obj)
if err != nil {
log.Log().Error().Msg(fmt.Sprintf(" >> %s/%s KO (%s)", chart.Name, key, err))
} else {
log.Log().Info().Msg(fmt.Sprintf(" >> %s/%s OK", chart.Name, key))
}
}
}

@@ -1,81 +0,0 @@
package install
import (
"fmt"
"os"
"sync"
log "oc-deploy/log_wrapper"
"oc-deploy/versionOc"
"oc-deploy/tool"
"oc-deploy/chart"
"oc-deploy/helm"
)
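// NewUninstall reads the version file already present in the workspace
// and loads the tool and chart configuration from it.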
func (this *InstallClass) NewUninstall() (string, error) {
dst := fmt.Sprintf("%s/oc.yml", this.Workspace)
if _, err := os.Stat(dst); err != nil {
return dst, err
}
version, err := versionOc.GetFromFile(dst)
if err != nil {
return "", err
}
this.Version = version
// Read the configuration file
this.tools, err = tool.FromConfigFile(dst)
if err != nil {
return dst, err
}
this.charts, err = chart.FromConfigFile(dst)
if err != nil {
return dst, err
}
return dst, nil
}
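// UninstallCharts removes every configured chart concurrently.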
func (this *InstallClass) UninstallCharts() (error) {
helm_bin, _ := this.getToolBin("helm")
kubectl_bin, _ := this.getToolBin("kubectl")
var wg sync.WaitGroup
for _, v := range this.charts {
for _, v1 := range v.Charts {
wg.Add(1)
go func(c chart.ChartData) {
// pass the chart as an argument to avoid sharing the loop variable
defer wg.Done()
this.uninstallChart(helm_bin, kubectl_bin, c)
}(v1)
}
}
wg.Wait()
return nil
}
func (this *InstallClass) uninstallChart(helm_bin string, kubectl_bin string, chart chart.ChartData) {
log.Log().Info().Msg(fmt.Sprintf(" << Chart : %s ", chart.Name))
helm_cmd := helm.HelmCommand{Bin: helm_bin}
helm_cmd.New()
data := helm.HelmChart{Name: chart.Name}
// helmchart := helm.HelmChart{Bin: helm_bin,
// Name: chart.Name}
res, err := helm_cmd.ChartUninstall(data)
if err != nil {
log.Log().Error().Msg(fmt.Sprintf(" >> %s %s (%s)", data.Name, "KO", err))
return
}
log.Log().Info().Msg(fmt.Sprintf(" >> %s (%s)", data.Name, res))
}

@@ -1,130 +0,0 @@
package kubectl
import (
"fmt"
"strings"
"errors"
"encoding/json"
log "oc-deploy/log_wrapper"
)
type kubeConfig struct {
CurrentContext string `json:"current-context"`
Contexts [] kubeConfigContexts `json:"contexts"`
Clusters [] kubeConfigClusters `json:"clusters"`
}
type kubeConfigContexts struct {
Name string `json:"name"`
Context kubeConfigContext `json:"context"`
}
type kubeConfigContext struct {
Cluster string `json:"cluster"`
User string `json:"user"`
Namespace string `json:"namespace"`
}
type kubeConfigCluster struct {
Server string `json:"server"`
}
type kubeConfigClusters struct {
Name string `json:"name"`
Cluster kubeConfigCluster `json:"cluster"`
}
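// GetCurrentContext returns the name of the active kubectl context.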
func (this KubectlCommand) GetCurrentContext() (string, error) {
bin := this.Bin
msg := fmt.Sprintf("%s config current-context", bin)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
res := string(stdout)
res = strings.TrimSuffix(res, "\n")
return res, err
}
// GetContext returns the current context name, its namespace, and the
// server URL of its cluster, parsed from "kubectl config view -o json".
func (this KubectlCommand) GetContext() (string, string, string, error) {
bin := this.Bin
msg := fmt.Sprintf("%s config view -o json", bin)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
return "", "", "", errors.New(string(stdout))
}
var objmap kubeConfig
err = json.Unmarshal(stdout, &objmap)
if err != nil {
return "", "", "", err
}
currentContext := objmap.CurrentContext
currentCluster := ""
currentNamespace := ""
for _, v := range objmap.Contexts {
if v.Name == currentContext {
currentNamespace = v.Context.Namespace
currentCluster = v.Context.Cluster
}
}
currentServer := ""
for _, v := range objmap.Clusters {
if v.Name == currentCluster {
currentServer = v.Cluster.Server
}
}
return currentContext, currentNamespace, currentServer, nil
}
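// UseContext runs "kubectl config use-context <name>".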
func (this KubectlCommand) UseContext(newContext string) (error) {
bin := this.Bin
msg := fmt.Sprintf("%s config use-context %s", bin, newContext)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
log.Log().Debug().Msg(string(stdout))
return errors.New(string(stdout))
}
return nil
}
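// Check verifies that the cluster answers "kubectl cluster-info".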
func (this KubectlCommand) Check() (error) {
bin := this.Bin
msg := fmt.Sprintf("%s cluster-info", bin)
log.Log().Debug().Msg(msg)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
log.Log().Debug().Msg(string(stdout))
return errors.New("Kube non disponible")
}
return nil
}

@@ -1,86 +0,0 @@
package kubectl
import (
"os"
"path/filepath"
"errors"
"testing"
"github.com/stretchr/testify/assert"
)
var MOCK_ENABLE = true
func TestKubectCurrentContext(t *testing.T) {
cmd := getCmdKubectl(MOCK_ENABLE, "minikube")
res, err := cmd.GetCurrentContext()
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, "minikube", res, "TestKubectCurrentContext error")
}
func TestKubectContext(t *testing.T) {
fileName := filepath.Join(TEST_SRC_DIR, "context.json")
cmd_json, _ := os.ReadFile(fileName)
cmd := getCmdKubectl(MOCK_ENABLE, string(cmd_json))
currentContext, currentNamespace, currentServer, err := cmd.GetContext()
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, "minikube", currentContext, "TestKubectContext error")
assert.Equal(t, "default", currentNamespace, "TestKubectContext error")
assert.Equal(t, "https://127.0.0.1:38039", currentServer, "TestKubectContext error")
}
func TestKubectUseContext(t *testing.T) {
cmd := getCmdKubectl(MOCK_ENABLE, `Switched to context "minikube".`)
err := cmd.UseContext("minikube")
assert.Nilf(t, err, "error message %s", err)
}
func TestKubectUseContextErr(t *testing.T) {
mockErr := errors.New("exit 1")
cmd := getCmdKubectlError(MOCK_ENABLE, `error: no context exists with the name: "minikube2"`, mockErr)
err := cmd.UseContext("minikube2")
assert.NotNilf(t, err, "error message %s", err)
}
func TestKubectCheck(t *testing.T) {
cmd_txt := `
Kubernetes control plane is running at https://127.0.0.1:38039
CoreDNS is running at https://127.0.0.1:38039/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
`
cmd := getCmdKubectl(MOCK_ENABLE, cmd_txt)
err := cmd.Check()
assert.Nilf(t, err, "error message %s", err)
}
func TestKubectCheckErr(t *testing.T) {
cmd_txt := ""
mockErr := errors.New("exit 1")
cmd := getCmdKubectlError(MOCK_ENABLE, cmd_txt, mockErr)
err := cmd.Check()
assert.NotNilf(t, err, "error message %s", "TestKubectCheckErr")
}

@@ -1,40 +0,0 @@
package kubectl
import (
"fmt"
"strings"
"errors"
"encoding/json"
log "oc-deploy/log_wrapper"
)
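// getDeployment fetches a Deployment as JSON and returns its name, kind,
// and replica counts as a map.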
func (this KubectlCommand) getDeployment(data KubectlObject) (map[string]any, error) {
bin := this.Bin
msg := fmt.Sprintf("%s get deployment %s -o json", bin, data.Name)
log.Log().Debug().Msg(msg)
m := make(map[string]any)
cmd_args := strings.Split(msg, " ")
cmd := this.Exec(cmd_args[0], cmd_args[1:]...)
stdout, err := cmd.CombinedOutput()
if err != nil {
return m, errors.New(string(stdout))
}
var objmap getOutput
if err := json.Unmarshal(stdout, &objmap); err != nil {
return m, err
}
kind := objmap.Kind
status := objmap.Status
m["name"] = data.Name
m["kind"] = kind
m["replicas"] = status.Replicas
m["UnavailableReplicas"] = status.UnavailableReplicas
return m, nil
}

@@ -1,29 +0,0 @@
package kubectl
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
func TestKubectDeployment(t *testing.T) {
fileName := filepath.Join(TEST_SRC_DIR, "deployment.json")
cmd_json, _ := os.ReadFile(fileName)
cmd := getCmdKubectl(true, string(cmd_json))
data := KubectlObject{Name: "dep1", Kind: "Deployment"}
res, err := cmd.getDeployment(data)
// map[string]interface {}(map[string]interface {}{"UnavailableReplicas":0, "kind":"Deployment", "name":"dep1", "replicas":1})
assert.Nilf(t, err, "error message %s", err)
assert.Equal(t, "Deployment", res["kind"], "TestKubectDeployment error")
assert.Equal(t, 1, res["replicas"], "TestKubectDeployment error")
}

Some files were not shown because too many files have changed in this diff.