# oc-monitor
## Deploy in k8s (dev)

Until a registry with all of the OC docker images has been set up, we can export the image from docker and import it directly into the k3s containerd image store:

> docker save oc-monitord:latest | sudo k3s ctr images import -
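
To check that the import succeeded, the image should now appear in the k3s image store:

> sudo k3s ctr images ls | grep oc-monitord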

Then, in the pod manifest for oc-monitord, use:

```
image: docker.io/library/oc-monitord
imagePullPolicy: Never
```

Not doing so will leave the pod failing with an `ErrImagePull` error.
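
If that happens, the failed pull can be confirmed from the pod events (the pod name and namespace below are placeholders):

> kubectl describe pod <oc-monitord-pod> -n <namespace>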
## Allow argo to create services
In order for monitord to expose **open cloud services** on the node, we need to give it permission to create **k8s services**.

For that we can update the RBAC configuration of a role already created by argo:
### Manually edit the rbac authorization
> kubectl edit roles.rbac.authorization.k8s.io -n argo argo-role

In `rules`, add a new entry:

```
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - create
```
### Patch the rbac authorization with a one-liner
> kubectl patch role argo-role -n argo --type='json' -p='[{"op": "add", "path": "/rules/-", "value": {"apiGroups": [""], "resources": ["services"], "verbs": ["get","create"]}}]'
### Check whether the modification is effective
> kubectl auth can-i create services --as=system:serviceaccount:argo:argo -n argo

This command **must return "yes"**.
## Allow services to be reached through a reverse proxy

Since development has been done in a K3s environment, we will use the lightweight solution provided by **traefik**.

We also need to install **metallb** to expose the cluster to the outside and allow packets to reach traefik.
### Deploy traefik and metallb
- Make sure that helm is installed; if not, see https://helm.sh/docs/intro/install/

- Add the repositories for traefik and metallb

> helm repo add metallb https://metallb.github.io/metallb

> helm repo add traefik https://helm.traefik.io/traefik

> helm repo update

- Create the namespaces for each

> kubectl create ns traefik-ingress

> kubectl create ns metallb-system

- Configure the deployment

```
cat > traefik-values.yaml <<EOF
globalArguments:
deployment:
  kind: DaemonSet
providers:
  kubernetesCRD:
    enabled: true
service:
  type: LoadBalancer
ingressRoute:
  dashboard:
    enabled: false
EOF
```

- Launch the installs

> helm upgrade --install metallb metallb/metallb -n metallb-system

> helm upgrade --install traefik traefik/traefik -n traefik-ingress -f traefik-values.yaml
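
Once both releases are installed, the deployments can be checked with (namespaces as created above):

> kubectl get pods -n metallb-system

> kubectl get pods -n traefik-ingress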
### Configure metallb
```
cat << 'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.200-192.168.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
EOF
```
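
With the `kubernetesCRD` provider enabled, a service created by monitord can then be exposed through a traefik `IngressRoute`. The manifest below is only a sketch: the route name, path, service name and port are placeholders, not the actual values used by oc-monitord.

```
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: oc-example
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/oc-example`)
      kind: Rule
      services:
        - name: oc-example-svc
          port: 8080
```

Depending on the traefik version installed by the chart, the apiVersion may be `traefik.containo.us/v1alpha1` instead of `traefik.io/v1alpha1`.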
## TODO
- [ ] Log the output of each pod:
  - the `logsPods()` function already exists
  - need to implement the logic to create each pod's logger and start the monitoring routine
- [ ] Allow the front to know on which IP the services are reachable
  - currently done by using `kubectl get nodes -o wide` (see the note below)
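
Since metallb assigns the external IP to the traefik LoadBalancer service, the front could also read it directly from that service (the `traefik` release name and `traefik-ingress` namespace are the ones assumed from the install above):

> kubectl get svc traefik -n traefik-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'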
### Adding ingress handling to support reverse proxying
- Test whether ingress-nginx is running or not (a possible check is sketched below)

- Do something if not found: stop running and send an error log OR start the installation
-
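
A minimal way to test whether ingress-nginx is running (the `ingress-nginx` namespace and label are the defaults of the official ingress-nginx deployment; adjust if it was installed differently):

> kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx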