diff --git a/docs/admiralty/auth_schema.jpg b/docs/admiralty/auth_schema.jpg
new file mode 100644
index 0000000..4727575
Binary files /dev/null and b/docs/admiralty/auth_schema.jpg differ
diff --git a/docs/admiralty/authentication.md b/docs/admiralty/authentication.md
new file mode 100644
index 0000000..509659f
--- /dev/null
+++ b/docs/admiralty/authentication.md
@@ -0,0 +1,90 @@
+# Current authentication process
+
+We are currently able to authenticate against a remote `Admiralty Target` to execute pods from the `Source` cluster on a remote cluster, in the context of an `Argo Workflow`. The resulting artifacts or data can then be retrieved in the source cluster.
+
+In this document we present the steps needed for this authentication process, its flaws and the improvements we could make.
+
+![Representation of the current authentication mechanism](auth_schema.jpg)
+
+## Requirements
+
+### Namespace
+
+The same `namespace` must exist in each cluster. Both namespaces also need to have the same resources available, meaning here that Argo must be deployed in the same way.
+
+> We haven't tested it yet, but the `version` of Argo Workflows may need to be the same in order to prevent mismatches between functionalities.
+
+### ServiceAccount
+
+A `serviceAccount` with the same name must be created on each side of the cluster federation.
+
+In the case of Argo Workflows, it will be used to submit the workflow with the `Argo CLI`, or it should be specified in the `spec.serviceAccountName` field of the Workflow.
+
+#### Roles
+
+Given that the `serviceAccount` will be the same in both clusters, it must be bound to the appropriate `role` in order to execute both the Argo Workflow and the Admiralty actions.
+
+So far we have only seen the need to add the `patch` verb on `pods` for the `apiGroup` "" in `argo-role`.
+
+Once the role is patched, the `serviceAccount` that will be used must be added to the rolebinding `argo-binding`.
+
+### Token
+
+In order to authenticate against the Kubernetes API we need to provide the Admiralty `Source` with a token stored in a secret. This token is created on the `Target` for the `serviceAccount` that we will use in the Admiralty communication. After copying the target's `kubeconfig`, we replace the IP in it with the IP that the source will use to reach the k8s API. The token generated for the serviceAccount is added in the "user" part of the kubeconfig.
+
+This **edited kubeconfig** is then passed to the source cluster and converted into a secret, bound to the Admiralty `Target` resource. It is presented to the k8s API on the target cluster, first as part of the TLS handshake and then to authenticate the serviceAccount that performs the pod delegation.
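+
+For illustration, here is a minimal sketch of what the **edited kubeconfig** could look like. The cluster, context and user names are placeholders of ours, not values imposed by Admiralty:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: target-cluster
+  cluster:
+    # CA data kept from the target's original kubeconfig
+    certificate-authority-data: <base64-encoded-ca>
+    # server IP replaced with the address the source uses to reach the target k8s API
+    server: https://<reachable-target-ip>:6443
+contexts:
+- name: admiralty
+  context:
+    cluster: target-cluster
+    user: target-serviceaccount
+current-context: admiralty
+users:
+- name: target-serviceaccount
+  user:
+    # token generated on the target for the shared serviceAccount
+    token: <serviceaccount-token>
+```
+
+On the source side, this file can then be stored in the secret referenced by the `Target` resource, for instance with `kubectl create secret generic secret-holding-kubeconfig-info --from-file=config=<edited-kubeconfig>`, assuming the default `config` key is used.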
+
+### Source/Target
+
+Each cluster in the Admiralty Federation needs to declare **all of the other clusters**:
+
+- That it will delegate pods to, with the `Target` resource
+
+```yaml
+apiVersion: multicluster.admiralty.io/v1alpha1
+kind: Target
+metadata:
+  name: some-name
+  namespace: your-namespace
+spec:
+  kubeconfigSecret:
+    name: secret-holding-kubeconfig-info
+```
+
+- That it will accept pods from, with the `Source` resource
+
+```yaml
+apiVersion: multicluster.admiralty.io/v1alpha1
+kind: Source
+metadata:
+  name: some-name
+  namespace: your-namespace
+spec:
+  serviceAccountName: service-account-used-by-source
+```
+
+## Caveats
+
+### Token
+
+By default, a token created by the Kubernetes API is only valid for **1 hour**, which can pose a problem for:
+
+- Workflows taking more than 1 hour to execute, whose pods request creation on a remote cluster after the token has expired
+
+- Retransferring the modified `kubeconfig`: we need a way to communicate the data securely between two clusters running Open Cloud.
+
+It is possible to create tokens with an **infinite duration** (in reality 10 years), but the Admiralty documentation **advises against** this for security reasons.
+
+### Resource names
+
+When coupling Argo Workflows with a MinIO server to store the artifacts produced by a pod, we need to access, among other resources, a secret containing the authentication data. If we launch a workflow on clusters A and B, the secret resource containing the authentication data can't have the same content in cluster A and in cluster B.
+
+At the moment the only place we have faced this issue is with the MinIO S3 storage access. Since it is a service that we could deploy ourselves, we would have the possibility to use names containing a UUID linked to the OC instance.
+
+## Possible improvements
+
+- Pod-bound tokens: can they be issued to the remote cluster via an HTTP API call? [doc](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
+
+- Using a service that contacts its counterpart in the target cluster to ask for a token with a validity period set by the user in the workflow workspace. Communication would happen over HTTPS, but how do we generate secure certificates on both ends?
+
diff --git a/docs/admiralty/deployment.md b/docs/admiralty/deployment.md
new file mode 100644
index 0000000..d3e235a
--- /dev/null
+++ b/docs/admiralty/deployment.md
@@ -0,0 +1,8 @@
+# Deploying Admiralty on an Open Cloud cluster
+
+We have written the following playbooks, available in a private [GitHub repo](https://github.com/pi-B/ansible-oc/tree/384a5acc0713a0fa013a82f71fbe2338bf6c80c1/Admiralty):
+
+- `deploy_admiralty.yml` installs Helm and the charts necessary to run Admiralty on the cluster
+- `setup_admiralty_target.yml` creates the environment necessary to use a cluster as a target in an Admiralty federation running Argo Workflows: the necessary serviceAccount, the target resource and the token used to authenticate the source
+- `add_admiralty_target.yml` creates the environment to use a cluster as a source, providing the data necessary to use a given cluster as a target.
+
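+As a rough illustration of the kind of task these playbooks contain, here is a minimal sketch (not taken from the private repo; the module choice is an assumption and the resource names reuse the placeholders from `authentication.md`):
+
+```yaml
+# Hypothetical sketch only: ensure the shared serviceAccount exists on the target cluster.
+- name: Prepare a cluster as an Admiralty target
+  hosts: target_cluster
+  tasks:
+    - name: Ensure the serviceAccount used by Argo and Admiralty exists
+      kubernetes.core.k8s:
+        state: present
+        definition:
+          apiVersion: v1
+          kind: ServiceAccount
+          metadata:
+            name: service-account-used-by-source
+            namespace: your-namespace
+```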