
# oc-monitord

Run and monitor a workflow.

## Deploy in k8s (dev)

Until a registry with all of the OC Docker images has been set up, we can export this image directly into the k3s containerd image store:

```bash
docker save oc-monitord:latest | sudo k3s ctr images import -
```
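To confirm the image landed in the containerd store, we can list it (a quick sanity check, not a required step):

```bash
sudo k3s ctr images ls | grep oc-monitord
```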

Then, in the pod manifest for oc-monitord, use:

```yaml
image: docker.io/library/oc-monitord
imagePullPolicy: Never
```

Not doing so will leave the pod failing with an `ErrImagePull` status.
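For reference, a minimal pod manifest with these settings might look like the following sketch (the metadata names and namespace are placeholders, not taken from the repo):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oc-monitord
spec:
  containers:
    - name: oc-monitord
      # Image imported into the k3s containerd store above
      image: docker.io/library/oc-monitord
      # Never pull: the image only exists locally, not in a registry
      imagePullPolicy: Never
```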

## Allow Argo to create services

In order for monitord to expose Open Cloud services on the node, we need to give it permission to create k8s services.

To do that, we can update the RBAC configuration of a role already created by Argo:

### Manually edit the RBAC authorization

```bash
kubectl edit roles.rbac.authorization.k8s.io -n argo argo-role
```

In `rules`, add a new entry:

```yaml
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - create
```

### Patch the RBAC authorization with a one-liner

```bash
kubectl patch role argo-role -n argo --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": [""], "resources": ["services"], "verbs": ["get","create"]}}]'
```

Check whether the modification is effective:

```bash
kubectl auth can-i create services --as=system:serviceaccount:argo:argo -n argo
```

This command must return `yes`.

## Notes: features/admiralty-docker

- When executing monitord as a container, we need to change any URL containing "localhost" to the container's host IP.

  We can either:

  - declare a new parameter `HOST_IP` (see the sketch after this list), or
  - decide that no peer can have `http://localhost` as its URL, and use an attribute from the peer object or `isMyself()` from oc-lib to determine whether a peer is the current host.
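As a sketch of the first option, the host IP could be injected as an environment variable when starting the container. The `HOST_IP` name follows the proposal above; the run command itself is an assumption, not the project's actual invocation:

```bash
# Pass the host's primary IP into the container as HOST_IP
# (hostname -I is Linux-specific; adjust for other platforms)
docker run --rm \
  -e HOST_IP="$(hostname -I | awk '{print $1}')" \
  oc-monitord:latest
```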

## TODO

- [ ] Allow the front end to know on which IP the services are reachable
  - currently done with `kubectl get nodes -o wide` (a scriptable alternative is sketched below)
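For scripting, the internal IPs can also be extracted with a plain kubectl `jsonpath` query (a possible alternative, not how the project currently does it):

```bash
# Print each node's name and its InternalIP address
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```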

### Adding ingress handling to support reverse proxying

- Test whether ingress-nginx is running or not (see the check below)
  - If it is not found, either stop running and send an error log, or start the installation.
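A minimal version of that check, assuming a standard ingress-nginx deployment in the `ingress-nginx` namespace with its usual labels:

```bash
# Non-empty output means the ingress-nginx controller is deployed and running
kubectl get pods -n ingress-nginx \
  -l app.kubernetes.io/component=controller \
  --field-selector=status.phase=Running
```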