
# oc-monitord

Run and monitor a workflow.

## Deploy in k8s (dev)

While a registry with all of the OC Docker images has not been set up, we can import the image directly into the k3s containerd image store:

```bash
docker save oc-monitord:latest | sudo k3s ctr images import -
```

Then, in the pod manifest for oc-monitord, use:

```yaml
image: docker.io/library/oc-monitord
imagePullPolicy: Never
```

Not doing so will leave the pod in an `ErrImagePull` state.
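
For reference, a minimal pod manifest using these settings could look like the sketch below. The pod name and namespace are illustrative, not the project's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oc-monitord    # hypothetical pod name
  namespace: argo      # assumes deployment alongside argo
spec:
  containers:
  - name: oc-monitord
    image: docker.io/library/oc-monitord
    imagePullPolicy: Never   # required: the image only exists in the local ctr store
```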

## Allow argo to create services

In order for monitord to expose Open Cloud services on the node, we need to give it permission to create k8s services.

For that we can update the RBAC configuration of a role already created by argo:

### Manually edit the RBAC authorization

```bash
kubectl edit roles.rbac.authorization.k8s.io -n argo argo-role
```

In `rules`, add a new entry:

```yaml
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - create
```

### Patch the RBAC authorization with a one-liner

```bash
kubectl patch role argo-role -n argo --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": [""], "resources": ["services"], "verbs": ["get","create"]}}]'
```

Check whether the modification is effective:

```bash
kubectl auth can-i create services --as=system:serviceaccount:argo:argo -n argo
```

This command must return `yes`.

## Allow services to be reached through a reverse proxy

Since the development has been done in a K3s environment, we will use the lightweight solution provided by Traefik.

We need to install MetalLB to expose our cluster to the outside and allow packets to reach Traefik.

### Deploy Traefik and MetalLB

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
```

- Create the namespaces for each:

```bash
kubectl create ns traefik-ingress
kubectl create ns metallb-system
```

- Configure the deployment:

```bash
cat > traefik-values.yaml <<EOF
globalArguments:
deployment:
  kind: DaemonSet
providers:
  kubernetesCRD:
    enabled: true
service:
  type: LoadBalancer
ingressRoute:
  dashboard:
    enabled: false
EOF
```
- Launch the installs:

```bash
helm upgrade --install metallb metallb/metallb --namespace metallb-system
helm install --namespace=traefik-ingress traefik traefik/traefik --values=./traefik-values.yaml
```

### Configure MetalLB

```bash
cat << 'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.200-192.168.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
EOF
```
- Check that the services created in `traefik-ingress` have an external IP:

```bash
kubectl get service -n traefik-ingress -o wide
```
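
Once Traefik has an external IP, a service can be exposed through it with an `IngressRoute`. A minimal sketch, assuming a hypothetical `demo-service` listening on port 80 (on Traefik versions before v2.10 the API group is `traefik.containo.us/v1alpha1` instead of `traefik.io/v1alpha1`):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-route            # hypothetical route name
  namespace: default
spec:
  entryPoints:
    - web                     # default HTTP entrypoint of the Traefik Helm chart
  routes:
    - match: PathPrefix(`/demo`)
      kind: Rule
      services:
        - name: demo-service  # hypothetical service exposed by monitord
          port: 80
```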

## TODO

- [ ] Log the output of each pod (see the sketch after this list):
  - a `logsPods()` function already exists
  - need to implement the logic to create each pod's logger and start the monitoring routine
- [ ] Allow the front end to know on which IP the services are reachable
  - currently done with `kubectl get nodes -o wide`
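
A minimal sketch of the per-pod log-following logic, using client-go. The function name and wiring are illustrative assumptions, not the repository's actual `logsPods()` implementation:

```go
package main

import (
	"bufio"
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// followPodLogs starts one goroutine per pod in the namespace and
// streams its logs. Illustrative sketch, not the repo's logsPods().
func followPodLogs(ctx context.Context, clientset kubernetes.Interface, namespace string) error {
	pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		go func(p corev1.Pod) {
			req := clientset.CoreV1().Pods(namespace).GetLogs(p.Name, &corev1.PodLogOptions{Follow: true})
			stream, err := req.Stream(ctx)
			if err != nil {
				log.Printf("[%s] cannot open log stream: %v", p.Name, err)
				return
			}
			defer stream.Close()
			// Forward each log line, prefixed with the pod name.
			scanner := bufio.NewScanner(stream)
			for scanner.Scan() {
				log.Printf("[%s] %s", p.Name, scanner.Text())
			}
		}(pod)
	}
	return nil
}
```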

### Adding ingress handling to support reverse proxying

- Test whether ingress-nginx is running or not (see the sketch below)
  - Do something if it is not found: stop running and send an error log, OR start the installation
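
A minimal sketch of the detection step, assuming ingress-nginx was installed in the standard `ingress-nginx` namespace under the default `ingress-nginx-controller` deployment name (both are assumptions about the target cluster):

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ingressNginxRunning reports whether the ingress-nginx controller
// deployment exists and has at least one ready replica.
func ingressNginxRunning(ctx context.Context, clientset kubernetes.Interface) (bool, error) {
	dep, err := clientset.AppsV1().Deployments("ingress-nginx").
		Get(ctx, "ingress-nginx-controller", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // not installed: the caller decides to abort or install
	}
	if err != nil {
		return false, fmt.Errorf("checking ingress-nginx: %w", err)
	}
	return dep.Status.ReadyReplicas > 0, nil
}
```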