Finalised report of the performance test
We create 4 different queries using Prometheus as the data source. For each query, we can use the `code` mode to define it from a PromQL expression.
### OC stack consumption

```
sum(container_memory_usage_bytes{name=~"oc-auth|oc-datacenter|oc-scheduler|oc-front|oc-schedulerd|oc-workflow|oc-catalog|oc-peer|oc-workspace|loki|mongo|traefik|nats"})
```
### Monitord consumption

```
sum(container_memory_usage_bytes{image="oc-monitord"})
```
### Total RAM consumption

```
sum(
)
```
### Number of monitord containers

```
count(container_memory_usage_bytes{image="oc-monitord"} > 0)
```
# Launch executions

We will use a script to insert into the DB the executions that will create the monitord containers.

We need to retrieve two pieces of information to run the scripted insertion:

- The **workflow id** of the workflow we want to instantiate, which can be found in the DB.
- A **token** to authenticate against the API: connect to oc-front and retrieve the token from your browser's network analyzer tool.

Add these to the `insert_exex.sh` script.

The script takes two arguments:

- **$1**: the number of executions, which are created in chunks of 10, using a CRON expression that creates 10 executions for each execution/namespace.
- **$2**: the number of minutes between now and the execution time of the executions.
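The chunk-of-10 behaviour can be sketched as follows. This is an illustration of the arithmetic, not the script itself, and the invocation values (150 executions, 5 minutes) are hypothetical:

```shell
# Hypothetical invocation: 150 executions, scheduled 5 minutes from now:
#   ./insert_exex.sh 150 5
# Executions are inserted in chunks of 10, so $1 executions
# need ceil($1 / 10) CRON-scheduled chunks.
N=150
CHUNKS=$(( (N + 9) / 10 ))
echo "$CHUNKS chunks of 10 executions"
```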
We used a very simple single-node workflow, which executes a sleep command within an Alpine container:

![Workflow](docs/performance_test/wf_test_ram_1node.png)
# 10 monitors

![Monitord RAM 10 monitors](docs/performance_test/10_monitors.png)

# 100 monitors

![Monitord RAM 100 monitors](docs/performance_test/100_monitors.png)

# 150 monitors

![Monitord RAM 150 monitors](docs/performance_test/150_monitors.png)
# Observations

We see an increase in the memory usage of the OC stack, which initially sits around 600-700 MiB:
```
CONTAINER ID   NAME            CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
7ce889dd97cc   oc-auth         0.00%   21.82MiB / 11.41GiB   0.19%   125MB / 61.9MB    23.3MB / 5.18MB   9
93be30148a12   oc-catalog      0.14%   17.52MiB / 11.41GiB   0.15%   300MB / 110MB     35.1MB / 242kB    9
611de96ee37e   oc-datacenter   0.32%   21.85MiB / 11.41GiB   0.19%   38.7MB / 18.8MB   14.8MB / 0B       9
dafb3027cfc6   oc-front        0.00%   5.887MiB / 11.41GiB   0.05%   162kB / 3.48MB    1.65MB / 12.3kB   7
d7601fd64205   oc-peer         0.23%   16.46MiB / 11.41GiB   0.14%   201MB / 74.2MB    27.6MB / 606kB    9
a78eb053f0c8   oc-scheduler    0.00%   17.24MiB / 11.41GiB   0.15%   125MB / 61.1MB    17.3MB / 1.13MB   10
bfbc3c7c2c14   oc-schedulerd   0.07%   15.05MiB / 11.41GiB   0.13%   303MB / 293MB     7.58MB / 176kB    9
304bb6a65897   oc-workflow     0.44%   107.6MiB / 11.41GiB   0.92%   2.54GB / 2.65GB   50.9MB / 11.2MB   10
62e243c1c28f   oc-workspace    0.13%   17.1MiB / 11.41GiB    0.15%   193MB / 95.6MB    34.4MB / 2.14MB   10
3c9311c8b963   loki            1.57%   147.4MiB / 11.41GiB   1.26%   37.4MB / 16.4MB   148MB / 459MB     13
01284abc3c8e   mongo           1.48%   86.78MiB / 11.41GiB   0.74%   564MB / 1.48GB    35.6MB / 5.35GB   94
14fc9ac33688   traefik         2.61%   49.53MiB / 11.41GiB   0.42%   72.1MB / 72.1MB   127MB / 2.2MB     13
4f1b7890c622   nats            0.70%   78.14MiB / 11.41GiB   0.67%   2.64GB / 2.36GB   17.3MB / 2.2MB    14

Total: 631.2 MB
```
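The baseline can be cross-checked by summing the MEM USAGE column of the snapshot above; a small sketch (values copied from the table), consistent with the 600-700 MiB figure:

```shell
# Sum the per-container MEM USAGE values (MiB) from the docker stats snapshot.
printf '%s\n' 21.82 17.52 21.85 5.887 16.46 17.24 15.05 107.6 17.1 147.4 86.78 49.53 78.14 |
awk '{ s += $1 } END { printf "%.1f MiB\n", s }'
```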
However, over time, as large numbers of schedulings are repeated, the stack uses a larger amount of RAM.

In particular, **loki**, **nats**, **mongo**, **oc-datacenter** and **oc-workflow** grow over 150 MiB. This can be explained by the cache growing in these containers, which seems to be reduced every time the containers are restarted.