Run and monitor a workflow
oc-monitord

To build: `make build`
Summary

oc-monitord is a daemon which can be run:
- as a binary
- as a container

It is used to perform several actions related to the execution of an Open Cloud workflow:
- generating a YAML file that Argo Workflows can interpret to create and execute pods in a Kubernetes environment
- setting up the resources needed to execute a workflow over several peers/Kubernetes nodes with Admiralty: tokens, secrets, targets and sources
- creating the workflow and logging the output from:
  - Argo watch, which gives information about the workflow as a whole (phase, number of steps executed, status...)
  - the pods, i.e. the logs generated by each pod
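As a rough illustration only (the manifest oc-monitord actually generates is richer and workflow-specific), a minimal Argo Workflows YAML of the kind described above looks like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: oc-workflow-   # Argo appends a random suffix
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.19     # placeholder image for the example
        command: [sh, -c]
        args: ["echo step executed"]
```

Submitting such a file (e.g. `argo submit`) creates the pods whose phase and logs the daemon then watches.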
To run, the daemon needs several options:
- -u :
- -m :
- -d :
- -e :
Notes on features/admiralty-docker

When executing oc-monitord as a container, we need to rewrite any URL containing "localhost" to the container's host IP.

We can either:
- declare a new parameter 'HOST_IP'
- decide that no peer can have "http://localhost" as its URL, and use an attribute from the peer object or isMyself() from oc-lib to determine whether a peer is the current host
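A minimal sketch of the first option, assuming a hypothetical `HOST_IP` environment variable; the helper name is ours and not part of oc-monitord:

```go
package main

import (
	"fmt"
	"net/url"
	"os"
	"strings"
)

// rewriteLocalhost replaces a "localhost" host in rawURL with hostIP,
// keeping scheme, port and path intact. Hypothetical helper: oc-monitord
// may solve this differently (e.g. via isMyself() from oc-lib).
func rewriteLocalhost(rawURL, hostIP string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	if strings.EqualFold(u.Hostname(), "localhost") {
		if port := u.Port(); port != "" {
			u.Host = hostIP + ":" + port
		} else {
			u.Host = hostIP
		}
	}
	return u.String(), nil
}

func main() {
	// HOST_IP would be injected into the container environment.
	hostIP := os.Getenv("HOST_IP")
	if hostIP == "" {
		hostIP = "10.0.0.1" // placeholder for the example
	}
	out, _ := rewriteLocalhost("http://localhost:8080/peer", hostIP)
	fmt.Println(out)
}
```

Non-localhost URLs pass through unchanged, so the rewrite is safe to apply to every peer URL.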
TODO

- Allow the front end to know on which IP the services are reachable
  - currently done with `kubectl get nodes -o wide`
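The kubectl call above could also be done programmatically by parsing `kubectl get nodes -o json`; a sketch of extracting each node's InternalIP from that output (the struct names are ours, but the JSON field names follow the Kubernetes NodeList schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeList models the minimal subset of `kubectl get nodes -o json`
// output we need; field tags match the Kubernetes NodeList schema.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Addresses []struct {
				Type    string `json:"type"`
				Address string `json:"address"`
			} `json:"addresses"`
		} `json:"status"`
	} `json:"items"`
}

// internalIPs maps node name -> InternalIP from raw kubectl JSON output.
func internalIPs(raw []byte) (map[string]string, error) {
	var nl nodeList
	if err := json.Unmarshal(raw, &nl); err != nil {
		return nil, err
	}
	ips := make(map[string]string)
	for _, n := range nl.Items {
		for _, a := range n.Status.Addresses {
			if a.Type == "InternalIP" {
				ips[n.Metadata.Name] = a.Address
			}
		}
	}
	return ips, nil
}

func main() {
	// Sample output trimmed to the fields we parse.
	sample := []byte(`{"items":[{"metadata":{"name":"node-a"},"status":{"addresses":[{"type":"InternalIP","address":"192.168.1.10"},{"type":"Hostname","address":"node-a"}]}}]}`)
	ips, _ := internalIPs(sample)
	fmt.Println(ips["node-a"])
}
```

The same map could be built with client-go instead of shelling out to kubectl; this sketch only shows the parsing side.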
- Implement writing to and reading from an S3 bucket/MinIO when a data resource is linked to a compute resource
- Add ingress handling to support reverse proxying
  - test whether ingress-nginx is running or not
  - do something if not found: stop running and send an error log, OR start the installation