You can provide the cluster configuration and the authentication for the cloud vendor using environment variables. You can also use the `.env.template` file as a reference to create a `.env` file.
```bash
docker run --rm -it \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -e CLUSTER_TYPE=GKE \
  -e GCP_PROJECT=<GCP project of the GKE cluster> \
  ghcr.io/sparkfabrik/cloud-tools:latest
```
```bash
docker run --rm -it \
  -e CLUSTER_TYPE=EKS \
  -e AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> \
  -e AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> \
  -e AWS_DEFAULT_REGION=<AWS_DEFAULT_REGION> \
  ghcr.io/sparkfabrik/cloud-tools:latest
```
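If you prefer the `.env` approach mentioned above, you can also pass the whole file to Docker instead of single `-e` flags; a minimal sketch, assuming your `.env` is based on `.env.template`:

```bash
# Pass all variables from a local .env file at once
docker run --rm -it \
  -v ~/.config/gcloud:/root/.config/gcloud \
  --env-file .env \
  ghcr.io/sparkfabrik/cloud-tools:latest
```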
- `CLUSTER_NAME`: the name of the cluster that you want to configure (optional; if the variable is not provided, the first cluster returned by the `list` command will be configured; e.g. `prod-cluster`).
- `CLUSTER_LOCATION` (only for GCP): the location of the cluster (optional; if the variable is not provided, the location will be looked up using the cluster name; e.g. `europe-west4-a`).
- `AVAILABLE_NAMESPACES`: the list of the available namespaces as space-separated values (e.g. `default stage production`); see the example after this list.
- `STARTUP_NAMESPACE`: the namespace configured at CLI startup (e.g. `stage`).
- `ORIGINAL_KUBENS`: set this variable to `1` if you want to use the original `kubens` command. The `kubens` command shipped by default is a custom script that uses the `AVAILABLE_NAMESPACES` environment variable to limit the namespaces offered for selection. It is useful to improve the developer experience when your teams only have access to a few namespaces.
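For example, a run that targets a specific GKE cluster and restricts the namespaces offered by `kubens` might look like this (the cluster, project, and namespace values are placeholders):

```bash
docker run --rm -it \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -e CLUSTER_TYPE=GKE \
  -e GCP_PROJECT=<GCP project of the GKE cluster> \
  -e CLUSTER_NAME=prod-cluster \
  -e CLUSTER_LOCATION=europe-west4-a \
  -e AVAILABLE_NAMESPACES="stage production" \
  -e STARTUP_NAMESPACE=stage \
  ghcr.io/sparkfabrik/cloud-tools:latest
```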
You can use a GCP secret to store AWS credentials and the additional configuration. The secret payload must follow this structure:
```json
{
  "AWS_ACCESS_KEY_ID": <AWS_ACCESS_KEY_ID>,
  "AWS_SECRET_ACCESS_KEY": <AWS_SECRET_ACCESS_KEY>,
  "AWS_DEFAULT_REGION": <AWS_DEFAULT_REGION>,
  "AVAILABLE_NAMESPACES": [ <The available namespaces as a list of strings> ],
  "STARTUP_NAMESPACE": <Namespace configured at startup>
}
```
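As a sketch of how such a secret could be created with the gcloud CLI (the file and secret names below are only examples):

```bash
# Store the JSON payload above in a local file, e.g. aws-credentials.json,
# then create the secret in Secret Manager (names are examples).
gcloud secrets create cloud-tools-aws-credentials \
  --project "<GCP Project id which hosts the secret>" \
  --data-file aws-credentials.json

# Add a new version when the credentials are rotated.
gcloud secrets versions add cloud-tools-aws-credentials \
  --project "<GCP Project id which hosts the secret>" \
  --data-file aws-credentials.json
```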
To use the secret, you have to run the Docker container with the following environment variables:

- `SECRET_PROJECT`: the GCP project which hosts the secret.
- `SECRET_NAME`: the secret name.
- `SECRET_VER`: the secret version (optional; if the variable is not provided, the latest version will be used).
```bash
docker run --rm -it \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -e CLUSTER_TYPE=EKS \
  -e SECRET_PROJECT=<GCP Project id which hosts the secret> \
  -e SECRET_NAME=<GCP secret name> \
  -e SECRET_VER=<GCP secret version> \
  ghcr.io/sparkfabrik/cloud-tools:latest
```
If you want to keep the bash history from one run to the next, you can mount a local folder at `/root/dotfiles`. The Docker image is configured to save the `HISTFILE` in `/root/dotfiles/.bash_history`.
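For example (the local folder name is arbitrary):

```bash
# Reuse the same local folder on every run to keep the bash history
mkdir -p ~/.cloud-tools-dotfiles
docker run --rm -it \
  -v ~/.config/gcloud:/root/.config/gcloud \
  -v ~/.cloud-tools-dotfiles:/root/dotfiles \
  -e CLUSTER_TYPE=GKE \
  -e GCP_PROJECT=<GCP project of the GKE cluster> \
  ghcr.io/sparkfabrik/cloud-tools:latest
```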
This image is intended to be a cloud toolkit with some helpers to work with Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). The image is based on the `google/cloud-sdk` Docker image. You can use the gcloud CLI and the AWS CLI to work with your cloud vendor. If your user has access to a GKE or EKS cluster, the Docker image tries to configure the proper `KUBECONFIG` at startup.
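Once inside the container, you can quickly check that the kubeconfig was configured at startup; a minimal sketch using standard kubectl commands:

```bash
# Show the cluster context configured by the startup scripts
kubectl config current-context

# List the namespaces visible to your credentials
kubectl get namespaces
```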
In the final Docker image, you will also find the following tools (a quick example follows the list):

- gcloud (GCP CLI)
- gsutil (Google Cloud Storage utility)
- aws (AWS CLI)
- kubectl
- kubens (custom script which uses the `AVAILABLE_NAMESPACES` environment variable as the list of namespaces)
- helm
- stern
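As a quick, hypothetical illustration of the bundled tools (the pod query below is only an example):

```bash
# Interactively pick one of the allowed namespaces with the shipped kubens script
kubens

# List the Helm releases in the current namespace
helm list

# Tail logs from all pods whose name matches "web"
stern web
```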
If you have configured your gcloud authentication and your user can access a cluster, the first GKE cluster listed by the `gcloud container clusters list` command will be automatically configured as the default in the `kubeconfig` file.
If you need to configure another cluster, you can use the `gcloud container clusters list` command to see all the available clusters. Use `gcloud container clusters get-credentials "<put here the cluster name>" --project "${GCP_PROJECT}" --zone "<put here the cluster zone>"` to update the configuration, as in the example below.
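For example (the cluster name and zone are placeholders):

```bash
# See all GKE clusters your user can access
gcloud container clusters list --project "${GCP_PROJECT}"

# Fetch credentials for a specific cluster and update the kubeconfig
gcloud container clusters get-credentials "<cluster name>" \
  --project "${GCP_PROJECT}" \
  --zone "<cluster zone>"
```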
You can also specify the CLUSTER_NAME
environment variable to force the cluster configuration.
If the IAM user configured to run inside the Docker image has access to an EKS cluster, the first EKS cluster listed by the `aws eks list-clusters` command will be automatically configured as the default in the `kubeconfig` file.
If you need to configure another cluster, you can use the `aws eks list-clusters` command to see all the available clusters. Use `aws eks update-kubeconfig --name "<put here the cluster name>" --kubeconfig "${KUBECONFIG}"` to update the configuration, as in the example below.
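For example (the cluster name is a placeholder):

```bash
# See all EKS clusters available to the configured IAM user
aws eks list-clusters

# Update the kubeconfig for a specific cluster
aws eks update-kubeconfig \
  --name "<cluster name>" \
  --kubeconfig "${KUBECONFIG}"
```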