update the authentication doc to specify how target and source work

This commit is contained in:
pb 2025-02-12 17:49:42 +01:00
parent 91c272d58f
commit 0cff31b32f
3 changed files with 34 additions and 4 deletions


@ -4,13 +4,13 @@ We are currently able to authenticate against a remote `Admiralty Target` to execu
In this document we present the steps needed for this authentication process, its flaws, and the improvements we could make.
![Representation of the current authentication mechanism](auth_schema.jpg)
## Requirements
### Namespace
In each cluster we need the same `namespace` to exist. Hence, both namespaces need to have the same resources available, meaning here that Argo must be deployed in the same way.
> We haven't tested it yet, but the `version` of Argo Workflows should probably be the same in order to prevent mismatches between functionalities.
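
As a minimal sketch, the shared namespace can be declared with one identical manifest applied on every cluster (`your-namespace` is a placeholder, matching the examples below):

```yaml
# Apply this same manifest on both the Source and the Target cluster,
# so Admiralty and Argo operate in a namespace with the same name everywhere.
apiVersion: v1
kind: Namespace
metadata:
  name: your-namespace  # placeholder; must be identical on every cluster
```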
@ -32,7 +32,37 @@ Once the patch is done the role the `serviceAccount` that will be used must be a
In order to authenticate against the Kubernetes API we need to provide the Admiralty `Source` with a token stored in a secret. This token is created on the `Target` for the `serviceAccount` that we will use in the Admiralty communication. After copying it, we replace the IP in the `kubeconfig` with the IP that the source will target to reach the Kubernetes API. The token generated for the `serviceAccount` is added in the "user" part of the kubeconfig.
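
The edited kubeconfig ends up looking roughly like this (a sketch; every value here is a placeholder, not taken from a real cluster):

```yaml
# Illustrative kubeconfig fragment: the "server" IP has been replaced with the
# address the source will use, and the serviceAccount token sits under "users".
apiVersion: v1
kind: Config
clusters:
- name: target-cluster
  cluster:
    certificate-authority-data: <base64-ca-of-the-target>
    server: https://203.0.113.10:6443  # IP reachable from the source cluster
users:
- name: target-user
  user:
    token: <token-created-for-the-serviceAccount-on-the-target>
contexts:
- name: target-context
  context:
    cluster: target-cluster
    user: target-user
current-context: target-context
```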
This **edited kubeconfig** is then passed to the source cluster and converted into a secret, bound to the Admiralty `Target` resource. It is presented to the Kubernetes API on the target cluster, first as part of the TLS handshake and then to authenticate the `serviceAccount` that performs the pod delegation.
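
The secret on the source cluster can be sketched as follows (the secret name matches the placeholder used in the `Target` example below; the `config` key is an assumption about how the kubeconfig is stored):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-holding-kubeconfig-info  # referenced by the Target's kubeconfigSecret
  namespace: your-namespace
stringData:
  config: |
    # paste the edited kubeconfig here
```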
### Source/Target
Each cluster in the Admiralty Federation needs to declare **all of the other clusters**:
- Those it will delegate pods to, with the `Target` resource
```yaml
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: some-name
  namespace: your-namespace
spec:
  kubeconfigSecret:
    name: secret-holding-kubeconfig-info
```
- Those it will accept pods from, with the `Source` resource
```yaml
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
  name: some-name
  namespace: your-namespace
spec:
  serviceAccountName: service-account-used-by-source
```
## Caveats
@ -46,7 +76,7 @@ By default, a token created by the kubernetes API is only valid for **1 hour**,
It is possible to create a token with an **infinite duration** (in reality, 10 years), but the Admiralty documentation **advises against** this for security reasons.
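
One way to obtain such a long-lived token is the standard Kubernetes service-account token Secret (a sketch; the secret name is a placeholder, and the serviceAccount name matches the `Source` example above):

```yaml
# The control plane populates this Secret with a token for the referenced
# serviceAccount; such tokens do not expire, which is why Admiralty advises
# against using them.
apiVersion: v1
kind: Secret
metadata:
  name: long-lived-sa-token
  namespace: your-namespace
  annotations:
    kubernetes.io/service-account.name: service-account-used-by-source
type: kubernetes.io/service-account-token
```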
### Resource names
When coupling Argo Workflows with a MinIO server to store the artifacts produced by a pod, we need to access (for example, but not only) a secret containing the authentication data. If we launch a workflow on clusters A and B, the secret resource containing the authentication data can't hold the same content in cluster A and in cluster B.
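
As an illustration (all names and values here are hypothetical), the same-named secret would carry different data on each cluster:

```yaml
# On cluster A (placeholder values)
apiVersion: v1
kind: Secret
metadata:
  name: minio-auth
  namespace: your-namespace
stringData:
  accesskey: cluster-a-access-key
  secretkey: cluster-a-secret-key
---
# On cluster B: a secret with the SAME name must exist in the same namespace,
# but its content is specific to cluster B.
apiVersion: v1
kind: Secret
metadata:
  name: minio-auth
  namespace: your-namespace
stringData:
  accesskey: cluster-b-access-key
  secretkey: cluster-b-secret-key
```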