diff --git a/docs/admiralty/auth_schema.jpg b/docs/admiralty/auth_schema.jpg
new file mode 100644
index 0000000..4727575
Binary files /dev/null and b/docs/admiralty/auth_schema.jpg differ
diff --git a/docs/admiralty/auth_schema.png b/docs/admiralty/auth_schema.png
deleted file mode 100644
index 827b298..0000000
Binary files a/docs/admiralty/auth_schema.png and /dev/null differ
diff --git a/docs/admiralty/authentication.md b/docs/admiralty/authentication.md
index da8bcbc..509659f 100644
--- a/docs/admiralty/authentication.md
+++ b/docs/admiralty/authentication.md
@@ -4,13 +4,13 @@ We are currently able to authentify against a remote `Admiralty Target` to execu
 
 In this document we present the steps needed for this authentication process, its flaws and the improvments we could make.
 
-![Representation of the current authentication mechanism](auth_schema.png)
+![Representation of the current authentication mechanism](auth_schema.jpg)
 
 ## Requirements
 
 ### Namespace
 
-In each cluster we need the same `namespace` to exist. Hence, both namespace need to have the same ressources available, mmeaning here that Argo must be deployed in the same way.
+In each cluster we need the same `namespace` to exist. Hence, both namespaces need to have the same resources available, meaning here that Argo must be deployed in the same way.
 
 > We haven't tested it yet, but maybe the `version` of the Argo Workflow shoud be the same in order to prevent mismatch between functionnalities.
 
@@ -32,7 +32,37 @@ Once the patch is done the role the `serviceAccount` that will be used must be a
 
 In order to authentify against the Kubernetes API we need to provide the Admiralty `Source` with a token stored in a secret. This token is created on the `Target` for the `serviceAccount` that we will use in the Admiralty communication. After copying it, we replace the IP in the `kubeconfig` with the IP that will be targeted by the source to reach the k8s API. The token generated for the serviceAccount is added in the "user" part of the kubeconfig.
 
-This **edited kubeconfig** is then passed to the source cluster and converted into a secret, bound to the Admiralty `Source` ressource. It is presented to the the k8s API on the target cluster, first as part of the TLS handshake and then to authenticate the serviceAccount that performs the pods delegation.
+This **edited kubeconfig** is then passed to the source cluster and converted into a secret, bound to the Admiralty `Target` resource. It is presented to the k8s API on the target cluster, first as part of the TLS handshake and then to authenticate the serviceAccount that performs the pod delegation.
+
+### Source/Target
+
+Each cluster in the Admiralty Federation needs to declare **all of the other clusters**:
+
+- That it will delegate pods to, with the `Target` resource
+
+```yaml
+apiVersion: multicluster.admiralty.io/v1alpha1
+kind: Target
+metadata:
+  name: some-name
+  namespace: your-namespace
+spec:
+  kubeconfigSecret:
+    name: secret-holding-kubeconfig-info
+```
+
+- That it will accept pods from, with the `Source` resource
+
+```yaml
+apiVersion: multicluster.admiralty.io/v1alpha1
+kind: Source
+metadata:
+  name: some-name
+  namespace: your-namespace
+spec:
+  serviceAccountName: service-account-used-by-source
+```
+
 
 ## Caveats
 
@@ -46,7 +76,49 @@ By default, a token created by the kubernetes API is only valid for **1 hour**,
 
 It is possible to create token with **infinite duration** (in reality 10 years) but the Admiralty documentation **advices against** this for security issues.
 
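+As an illustration, the edited kubeconfig handed to the source cluster could look like the sketch below (every name, IP and token is a placeholder). Since the token is the element that expires, it is also the part of the secret that has to be regenerated once the validity period runs out:
+
+```yaml
+# Hypothetical edited kubeconfig stored in the secret on the source cluster.
+# The server field holds the IP the source uses to reach the target's k8s API,
+# and the token is the one generated for the delegated serviceAccount.
+apiVersion: v1
+kind: Config
+clusters:
+  - name: target-cluster
+    cluster:
+      certificate-authority-data: <base64-encoded-ca-of-the-target>
+      server: https://<target-api-ip>:6443
+users:
+  - name: admiralty-sa
+    user:
+      token: <token-created-for-the-serviceAccount>
+contexts:
+  - name: admiralty
+    context:
+      cluster: target-cluster
+      user: admiralty-sa
+current-context: admiralty
+```
+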
-### Ressources' name
+### Resources' name
 
 When coupling Argo Workflows with a MinIO server to store the artifacts produced by a pod we need to access, for example but not only, a secret containing the authentication data. If we launch a workflow on cluster A and B, the secret resource containing the auth. data can't have the same thing in cluster A and B.
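+
+For example (the names below are hypothetical), the Argo artifact repository configuration references the MinIO secret **by name** only; that name is resolved in the cluster where the delegated pod actually runs, so the data stored under it on cluster A and on cluster B may not be the same:
+
+```yaml
+# Hypothetical artifact repository configuration used by the workflow.
+# Only the secret name travels with the delegated pod; the credentials are
+# whatever the executing cluster stores under that name.
+s3:
+  endpoint: minio.your-namespace.svc:9000
+  bucket: workflow-artifacts
+  accessKeySecret:
+    name: minio-credentials
+    key: accesskey
+  secretKeySecret:
+    name: minio-credentials
+    key: secretkey
+```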