Login : admrescue/admrescue
# Deploy VM with ansible
TODO : check with Yves or Benjamin how to create a qcow2 image with an AZERTY keyboard layout and ssh ready
Deploy k3s
Two passwords are asked for via the prompt :
- First, the password of the user you are connecting to on the host via ssh
- Second, the root password
ansible-playbook -i my_hosts.yaml deploy_k3s.yml --extra-vars " user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass
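For reference, a minimal my_hosts.yaml inventory could look like the sketch below; the host names and IPs are placeholders and the group layout must match what the playbooks expect :
all:
  hosts:
    vm-k3s-1:
      ansible_host: 192.168.122.10
    vm-k3s-2:
      ansible_host: 192.168.122.11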
Deploy Argo
The password to provide is the one of the user you are connecting to on the host via ssh
ansible-playbook -i my_hosts.yaml deploy_argo.yml --extra-vars " user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass
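Once the play has finished, a quick sanity check (assuming the chart is deployed in the argo namespace) :
kubectl get pods -n argo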
Deploy Admiralty
Install the kubernetes.core collection, so that Ansible can use kubectl and the Kubernetes modules : ansible-galaxy collection install kubernetes.core
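For illustration, the plays can then create resources directly through this collection; a minimal task sketch (the manifest path is hypothetical) :
- name: Apply a manifest on the target cluster
  kubernetes.core.k8s:
    state: present
    src: /path/to/manifest.yaml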
Install and prepare Admiralty
This play prepares your machines to use Admiralty on Kubernetes. It installs Helm, cert-manager and Admiralty, then configures your clusters as Admiralty sources or targets.
/!\ TODO : declare the list of targets and sources in the play's vars (see the sketch after the command below)
ansible-playbook -i my_hosts.yaml deploy_admiralty.yml --extra-vars "host_prompt=HOSTNAME user_prompt=<YOUR_USER>" --ask-pass --ask-become-pass
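A possible shape for those vars, as a sketch only (the variable names below are assumptions, not what the playbook currently reads) :
vars:
  admiralty_sources:
    - control-cluster
  admiralty_targets:
    - workload-cluster-1
    - workload-cluster-2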
Share kubeconfig for the control cluster
ansible-playbook -i ../my_hosts.yaml create_secrets.yml --extra-vars "host_prompt=WORKLOAD_HOST user_prompt=<YOUR_USER> control_host=CONTROL_HOST" --ask-pass --ask-become-pass
MinIO
- Limit the memory
- Limit the replicas
- Limit volumeClaimTemplates.spec.resources.requests
- Add a LoadBalancer for the WebUI
- Corrected commands to retrieve the MinIO credentials :
kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootUser}" | base64 --decode
kubectl get secret argo-artifacts --namespace default -o jsonpath="{.data.rootPassword}" | base64 --decode
- With the output of the two commands above, create a secret in the argo namespace to give access to the MinIO API
apiVersion: v1
kind: Secret
metadata:
  name: argo-minio-secret
type: Opaque
data:
  accessKeySecret: [base64 ENCODED VALUE]
  secretKeySecret: [base64 ENCODED VALUE]
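Equivalently, the same secret can be created in one step from the decoded values; a sketch assuming the argo-artifacts secret lives in the default namespace :
kubectl create secret generic argo-minio-secret -n argo \
  --from-literal=accessKeySecret="$(kubectl get secret argo-artifacts --namespace default -o jsonpath='{.data.rootUser}' | base64 --decode)" \
  --from-literal=secretKeySecret="$(kubectl get secret argo-artifacts --namespace default -o jsonpath='{.data.rootPassword}' | base64 --decode)"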
- Create a ConfigMap, which will be used by Argo to configure the S3 artifact repository; its content references the previously created secret
apiVersion: v1
kind: ConfigMap
metadata:
  # If you want to use this config map by default, name it "artifact-repositories". Otherwise, you can provide a reference to a
  # different config map in `artifactRepositoryRef.configMap`.
  name: artifact-repositories
  # annotations:
  #   # v3.0 and after - if you want to use a specific key, put that key into this annotation.
  #   workflows.argoproj.io/default-artifact-repository: oc-s3-artifact-repository
data:
  oc-s3-artifact-repository: |
    s3:
      bucket: oc-bucket
      endpoint: [ retrieve the cluster IP with kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}" ]:9000
      insecure: true
      accessKeySecret:
        name: argo-minio-secret
        key: accessKeySecret
      secretKeySecret:
        name: argo-minio-secret
        key: secretKeySecret
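For reference, a workflow can also point at this config map and key explicitly through artifactRepositoryRef; a minimal hypothetical example :
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: oc-artifact-test-
  namespace: argo
spec:
  serviceAccountName: argo
  entrypoint: main
  artifactRepositoryRef:
    configMap: artifact-repositories
    key: oc-s3-artifact-repository
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [sh, -c, "echo hello > /tmp/out.txt"]
      outputs:
        artifacts:
          - name: out
            path: /tmp/out.txt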
Use custom container images : local registry (see the "Use local container images" section below)
Mosquitto
sudo apt update && sudo apt install -y mosquitto mosquitto-clients
You need to add a conf file in /etc/mosquitto/conf.d/mosquitto.conf containing :
allow_anonymous true
listener 1883 0.0.0.0
sudo systemctl restart mosquitto
Launch the mosquitto client to receive messages on the machine that hosts the mosquitto server : sudo mosquitto_sub -h 127.0.0.1 -t argo/alpr
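To check the broker end to end, publish a test message from any machine that can reach the server (replace the IP with the server's address) :
mosquitto_pub -h 192.168.1.10 -t argo/alpr -m "test message"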
Argo
Execute/submit a workflow
argo submit PATH_TO_YAML --watch --serviceaccount=argo -n argo
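Some useful follow-up commands with the argo CLI to monitor the submitted workflow :
argo list -n argo
argo get @latest -n argo
argo logs @latest -n argo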
Troubleshoot
k3s cannot bind to its local ports
On certain distros you might already have another minimal Kubernetes distribution installed. A sign of this is k3s installing and starting, but never being stable and restarting non-stop.
You should check whether the ports used by k3s are already bound :
sudo netstat -tuln | grep -E '6443|10250'
If those ports are already in use, identify which service runs behind them, then stop it and preferably uninstall it.
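To identify which process owns those ports, ss or lsof can help (available on most distros) :
sudo ss -tulpn | grep -E '6443|10250'
sudo lsof -i :6443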
We have already encountered an instance of Ubuntu Server with MicroK8s already installed.
Remove MicroK8s
sudo systemctl stop snap.microk8s.daemon-kubelite
sudo systemctl disable snap.microk8s.daemon-kubelite
sudo systemctl restart k3s
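To uninstall it completely afterwards (assuming it was installed as a snap) :
sudo snap remove microk8s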
Use local container images
We have encountered difficulties declaring container images that correspond to local images (stored in docker.io/library/).
Instead, we used a Docker Hub repository to pull our customized image. For this we need to create a secret holding the login information of a Docker account that has access to this repository, which we then link to the serviceAccount running the workflow :
Create the secret in the argo namespace
kubectl create secret docker-registry regcred --docker-username=[DOCKER HUB USERNAME] --docker-password=[DOCKER HUB PASSWORD] -n argo
Patch the argo serviceAccount to use the secret when pulling images
kubectl patch serviceaccount argo -n argo -p '{"imagePullSecrets": [{"name": "regcred"}]}'
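You can verify that the secret is now attached to the serviceAccount :
kubectl get serviceaccount argo -n argo -o jsonpath='{.imagePullSecrets}'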