Capi-2-Argo Cluster Operator (CACO) converts ClusterAPI Cluster credentials into ArgoCD Cluster definitions and keeps them synchronized. It aims to act as an integration bridge and close an automation gap for users who combine these tools to provision Kubernetes clusters.
If you have landed here, you are probably already aware of ClusterAPI and ArgoCD. If not, let's say a few words about these projects and what they offer:
ClusterAPI provides declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. In simple words, users can define all aspects of their Kubernetes setup as CRDs, and the CAPI controller, which follows the operator pattern, is responsible for reconciling them and maintaining their desired state.
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It automates the deployment of the desired application states in the specified target environments. In simple words, give it a Git repository and a target Kubernetes cluster, and Argo will keep your packages up, running, and always in sync with your source.
So, we have CAPI that enables us to define clusters as native k8s objects, and ArgoCD that can take these objects and deploy them. Let's demonstrate how a pipeline like this could look:
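For instance, a minimal sketch of such a pipeline could be an ArgoCD Application that syncs CAPI manifests from a Git repository into the management cluster. The repository URL, path, and namespace below are placeholders, not part of this project:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: workload-clusters                    # hypothetical Application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/infra.git   # placeholder repo holding CAPI Cluster manifests
    targetRevision: main
    path: clusters/                          # placeholder path with Cluster, MachineDeployment, etc.
  destination:
    server: https://kubernetes.default.svc   # the management cluster where the CAPI controllers run
    namespace: capi-clusters                 # placeholder namespace for the CAPI objects
  syncPolicy:
    automated: {}

ArgoCD syncs the Cluster objects, CAPI reconciles them into real clusters, and for each one it writes a <cluster-name>-kubeconfig Secret like the one shown further below.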
Ok, all good so far. But bare k8s clusters are not very useful on their own; dozens of utilities and addons are usually needed for a cluster to become handy (e.g. CSI drivers, Ingress controllers, monitoring, etc.).
Argo can also take care of deploying these utilities, but eventually credentials will be needed to authenticate against the target clusters. Of course, we can solve that with a few manual steps: fetch the kubeconfig Secret that CAPI generates for the workload cluster, decode its credentials, and register the cluster in ArgoCD by hand.
But how can we automate this? Capi2Argo Cluster Operator was created to take care of exactly these actions.
CACO implements them in an automated loop that watches for change events on Secret resources; whenever a Secret meets the conditions of being CAPI-compliant, it converts and deploys it as an Argo-compatible one. What it actually does under the hood is a very simple KRM transformation like this:
Before, we have only the CAPI cluster Secret:
kind: Secret
apiVersion: v1
type: cluster.x-k8s.io/secret
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: CAPICluster
  name: CAPICluster-kubeconfig
data:
  value: << CAPICluster KUBECONFIG base64-encoded >>
After, we also have the Argo cluster Secret:
kind: Secret
apiVersion: v1
type: Opaque
metadata:
  labels:
    argocd.argoproj.io/secret-type: cluster
    capi-to-argocd/owned: "true" # Capi2Argo Controller Ownership Label
  name: ArgoCluster
  namespace: argocd
stringData:
  name: CAPICluster
  server: CAPICluster Host
  config: |
    {
      "tlsClientConfig": {
        "caData": "b64-ca-cert",
        "keyData": "b64-token",
        "certData": "b64-cert"
      }
    }
This use-case can be demonstrated by extending the workflow mentioned above, with CACO automating the steps that were previously manual, as shown in the sketch below.
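For example, once CACO has created the ArgoCluster Secret, other Applications can target the freshly provisioned cluster simply by name; the addon repository below is only a placeholder used to illustrate the idea:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capicluster-addons                   # hypothetical addon bundle for the new cluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/addons.git  # placeholder repo with CSI drivers, ingress, monitoring, etc.
    targetRevision: main
    path: addons/
  destination:
    name: CAPICluster                        # matches the "name" field of the converted Argo cluster Secret
  syncPolicy:
    automated: {}

No manual kubeconfig handling is involved; the destination resolves through the Secret that CACO generated.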
Helm
$ helm repo add capi2argo https://dntosas.github.com/capi-2-argo-cluster-operator
$ helm upgrade -i capi2argo capi2argo/capi2argo-cluster-operator
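If you prefer to manage the operator itself through ArgoCD as well, a hedged sketch of an equivalent Application could look like the following; the target revision and namespace are assumptions, so adjust them to your setup:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capi2argo
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://dntosas.github.com/capi-2-argo-cluster-operator  # same Helm repository as above
    chart: capi2argo-cluster-operator
    targetRevision: "*"                      # placeholder; pin to a released chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: capi2argo                     # assumed namespace for the operator
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true                 # create the assumed namespace if missing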
Capi2Argo is built upon the powerful Operator SDK.
Gradually, and as free time allows, we will try to adopt all best practices suggested by the community; find more here. Common development targets:
make all
make ci
make run
make docker-build
TODO
In the meantime, feel free to grab any of the unimplemented bullets in the Roadmap section :).