# Setting up
MinIO can be deployed by following the Argo Workflows documentation, or by using the Ansible playbook written by pierre.bayle[at]irt-saintexupery.com,
available [here](https://raw.githubusercontent.com/pi-B/ansible-oc/refs/heads/main/deploy_minio.yml?token=GHSAT0AAAAAAC5OBWUCGHWPA4OUAKHBKB4GZ4YTPGQ).
Launch the playbook with `ansible-playbook -i [your host IP or URL], deploy_minio.yml --extra-vars "user_prompt=[your user]" [--ask-become-pass]`
- If your user doesn't have `NOPASSWD` rights on the host, add `--ask-become-pass` so Ansible can use `sudo`.
- Fill in the values for `memory_req`, `storage_req` and `replicas` in the playbook's vars (see the sketch after this list). The pods won't necessarily use them fully, but if the total memory or storage requested by your pod pool exceeds your host's capacity, the deployment might fail.
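A minimal sketch of what these vars could look like in `deploy_minio.yml` (the variable names come from this page; the values are only examples and must be sized to your host):

```yaml
# Hypothetical vars section of deploy_minio.yml -- adjust to your host's capacity
vars:
  memory_req: 4Gi      # memory request per MinIO pod
  storage_req: 50Gi    # storage request per MinIO pod
  replicas: 4          # number of MinIO replicas
```

A concrete invocation against a single host could then look like:

```bash
# The trailing comma after the IP tells Ansible this is an inline inventory, not a file
ansible-playbook -i 192.168.1.50, deploy_minio.yml \
  --extra-vars "user_prompt=myuser" \
  --ask-become-pass
```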
# Flaws of the default install
- Requests 16Gi of memory per pod
- Requests 500Gi of storage
- Creates 16 replicas
- Doesn't expose the MinIO GUI outside the cluster (see the port-forward sketch below)
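Since the GUI is not exposed by default, a quick way to reach it from your workstation is `kubectl port-forward`. The Service name below is an assumption; check `kubectl get svc -n [minio namespace]` for the one your install actually created:

```bash
# Forward the MinIO console (GUI) port 9001 to localhost:9001.
# "argo-artifacts-console" is a guess at the Service name -- adjust to your install.
kubectl -n [minio namespace] port-forward svc/argo-artifacts-console 9001:9001
```

The console is then reachable at http://localhost:9001.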
# Allow API access
Visit the MinIO GUI (on port 9001), create the bucket(s) you will use (here `oc-bucket`) and a pair of access keys, then create a secret holding them in the Argo namespace (`kubectl` base64-encodes the literals for you):
kubectl create secret -n [name of your argo namespace] generic argo-artifact-secret \
--from-literal=access-key=[your access key] \
--from-literal=secret-key=[your secret key]
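To double-check that the secret landed in the right namespace with the expected keys (names as created above):

```bash
# Should list the data keys "access-key" and "secret-key" (values are not displayed)
kubectl -n [name of your argo namespace] describe secret argo-artifact-secret
```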
- Create a ConfigMap that Argo will use to locate the S3 artifact repository; the secret name and keys it references must match the secret created above:
apiVersion: v1
kind: ConfigMap
metadata:
  # If you want to use this config map by default, name it "artifact-repositories". Otherwise, you can provide a reference to a
  # different config map in `artifactRepositoryRef.configMap`.
  name: artifact-repositories
  # annotations:
  #   # v3.0 and after - if you want to use a specific key, put that key into this annotation.
  #   workflows.argoproj.io/default-artifact-repository: oc-s3-artifact-repository
data:
  oc-s3-artifact-repository: |
    s3:
      bucket: oc-bucket
      endpoint: [retrieve the cluster IP with kubectl get service argo-artifacts -o jsonpath="{.spec.clusterIP}"]:9000
      insecure: true
      accessKeySecret:
        name: argo-artifact-secret
        key: access-key
      secretKeySecret:
        name: argo-artifact-secret
        key: secret-key
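Apply this ConfigMap in the Argo namespace (e.g. `kubectl apply -n [name of your argo namespace] -f artifact-repositories.yaml`). If you keep the name `artifact-repositories` and set the default-artifact-repository annotation shown in the comments, workflows use this repository by default; otherwise a workflow can reference it explicitly through `artifactRepositoryRef`. A minimal sketch of that reference (ConfigMap name and key taken from above):

```yaml
# Fragment of a Workflow spec selecting the repository defined above
spec:
  artifactRepositoryRef:
    configMap: artifact-repositories    # name of the ConfigMap
    key: oc-s3-artifact-repository      # key inside the ConfigMap
```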
# Store Argo Workflow objects in MinIO S3 bucket
Here is an example of how to store a file or directory from an Argo pod in an existing S3 bucket:
outputs:
  parameters:
    - name: outfile [or outdir]
      value: [NAME OF THE FILE OR DIR TO STORE]
  artifacts:
    - name: outputs
      path: [PATH OF THE FILE OR DIR IN THE CONTAINER]
      s3:
        key: [PATH OF THE FILE IN THE BUCKET].tgz
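As a counterpart (a sketch following the same conventions, not taken from the original page), a later template can pull the stored object back out of the bucket by declaring it as an input artifact with the same key. This assumes the configured artifact repository supplies the bucket, endpoint and credentials; depending on your Argo version you may need to spell those out here as well.

```yaml
inputs:
  artifacts:
    - name: inputs
      path: [PATH WHERE THE FILE OR DIR SHOULD BE RESTORED IN THE CONTAINER]
      s3:
        key: [PATH OF THE FILE IN THE BUCKET].tgz
```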