Open-source Edition: Google Cloud Platform GKE

Modified on Thu, 18 May, 2023 at 5:49 PM

For the latest open-source installation instructions see the Direktiv documentation pages:
https://docs.direktiv.io/installation/

TABLE OF CONTENTS

Component Overview
Connectivity / Firewall Requirements
Installation Instructions

Direktiv can be deployed in the following three configurations:

  1. Open source: a single virtual machine running a k8s environment, or a public cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.
  2. Enterprise Edition Production VM(s) only: a single virtual machine (or 3 virtual machines for HA purposes) running a k8s environment (or clustered across multiple VMs), hosting the Direktiv pods and the containers used for integration.
  3. Enterprise Edition Production Kubernetes: a cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.

The diagram below illustrates option 1, deployed on a public cloud provider Kubernetes service:


Proposed PoC deployment diagram (Google Cloud GKE)

The Kubernetes deployment runs on Google Kubernetes Engine (GKE). The installation contains the following components:

  1. A Kubernetes environment running in Google Cloud GKE.
  2. A single instance of the Direktiv open-source edition, which includes a PostgreSQL database.

The following components are assumed to be available:

  1. Optional: a single GitLab instance, or access to a GitHub instance (if not available in the customer environment).
  2. Optional: a single Docker repository for storing local images (if not available in the customer environment). Alternatively, Docker Hub can be used.

NOTE: the PostgreSQL database instance runs on the Kubernetes cluster. In this configuration, it is the owner's responsibility to ensure that database backups are maintained separately.

The architecture above is not a production deployment. It is intended solely for PoC use, as the database and Direktiv components all reside on the same cluster.

Connectivity / Firewall Requirements

The following requirements exist for the successful deployment of Direktiv:

Initial Installation Requirements

The following network configurations are critical; all components need to have the following services configured and working correctly:

  • Firewall: firewalls on the VMs need to be disabled, or the ports described in the tables below need to be opened.

The table below captures all of the external connectivity requirements for the initial deployment.

Source | Destination | Protocol / Port | Notes
JumpHost | download.docker.com, registry-1.docker.io, cloudfront.net, production.cloudflare.docker.com | HTTPS | Initial Docker install on the VM. If no access is allowed, download the Docker package from https://download.docker.com/linux/ubuntu/dists/ and install using https://docs.docker.com/engine/install/ubuntu/#install-from-a-package
JumpHost | raw.githubusercontent.com/helm | HTTPS | Initial helm installer
JumpHost | get.helm.sh | HTTPS | Helm package download. If no access is allowed, download the helm package from https://github.com/helm/helm/releases and install it manually using https://helm.sh/docs/intro/install/
JumpHost | docker.io/smallstep/step-cli | HTTPS | Used to create the certificates used by Linkerd
JumpHost | chart.direktiv.io | HTTPS | Helm chart used for the database install and Direktiv application install (OPTIONAL)
JumpHost | registry.developers.crunchydata.com, prod.developers.crunchydata.com | HTTPS | Database install for PostgreSQL (OPTIONAL)
JumpHost | githubusercontent.com/direktiv/direktiv | HTTPS | Configure the Direktiv database from pre-configured settings (OPTIONAL)
JumpHost / K8S Cluster IP | ghcr.io/projectcontour | HTTPS | Knative
JumpHost / K8S Cluster IP | docker.io/envoyproxy | HTTPS | Knative
JumpHost / K8S Cluster IP | gcr.io/knative-releases/ | HTTPS | Knative
JumpHost / K8S Cluster IP | k8s.gcr.io/ingress-nginx/ | HTTPS | NGINX install (ingress load balancer, part of the Direktiv helm chart)
JumpHost / K8S Cluster IP | docker.io/jimmidyson/configmap-reload | HTTPS | Configmap reload for the Kubernetes install (part of the Direktiv helm chart)
JumpHost / K8S Cluster IP | quay.io/prometheus | HTTPS | Prometheus install (part of the Direktiv helm chart)
JumpHost / K8S Cluster IP | docker.io/direktiv | HTTPS | Direktiv installation
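Before starting the installation, the JumpHost's access to these endpoints can be confirmed with a quick reachability check such as the sketch below (illustrative only; the host list mirrors the table above, and any HTTP response code, including 403 or 404, proves the firewall allows the connection):

# Reachability check from the JumpHost
for host in download.docker.com registry-1.docker.io raw.githubusercontent.com \
            get.helm.sh chart.direktiv.io registry.developers.crunchydata.com \
            ghcr.io gcr.io quay.io docker.io; do
  echo -n "$host: "
  curl -s -o /dev/null -m 10 -w "%{http_code}\n" "https://$host/" || echo "unreachable"
done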

Running Requirements

There are several ongoing firewall/connectivity requirements to ensure the continued operation of the environment. The table below captures all of the external connectivity requirements for an operational state:

Source | Destination | Protocol / Port | Notes
Direktiv IP | direktiv.azurecr.io | HTTPS | Pulling Direktiv pre-built containers
Direktiv IP | hub.docker.com/direktiv | HTTPS | Pulling Direktiv pre-built containers
JumpHost | Direktiv cluster | TCP: 443 / 6443 / 10250 / 2379-2380 | Kubernetes API Server, Kubelet metrics, HA
Client | Direktiv IP | TCP: 443 | Direktiv UI & API access
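Once the environment is running, the open ports can be confirmed from the JumpHost with a simple TCP check (a sketch assuming netcat is installed; replace <direktiv-ip> with the cluster or ingress IP):

# Verify the required TCP ports are reachable
for port in 443 6443 10250; do
  nc -zv -w 5 <direktiv-ip> $port
done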

Installation Instructions

JumpHost Proxy Settings

If the environment requires a proxy to be configured, the following settings can be applied on the JumpHost. You can set up a permanent proxy for a single user by editing the ~/.bashrc file:

  • First, log in to your Ubuntu system as the user for whom you want to set the proxy.
  • Next, open the terminal interface and edit the ~/.bashrc file as shown below:
vi ~/.bashrc
  • Add the following lines at the end of the file, adjusted to match your proxy server:
export http_proxy=username:password@proxy-server-ip:8080
export https_proxy=username:password@proxy-server-ip:8082
export ftp_proxy=username:password@proxy-server-ip:8080
export no_proxy="localhost,127.0.0.1"
  • Save and close the file when you are finished.
  • Then to activate your new proxy settings for the current session, use the following command:
source ~/.bashrc
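To confirm the proxy settings took effect for the current session, a check like the following can be used (download.docker.com is just one of the endpoints from the tables above):

env | grep -i proxy
curl -sI https://download.docker.com | head -n 1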

Install Docker (Jumphost)

Install Docker on the JumpHost using the following steps.

For the latest installation instructions per distro please review: https://docs.docker.com/engine/install/

Uninstall old versions

Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:

sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

Set up the repository

  • Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  • Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  • Use the following command to set up the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

  • Update the apt package index, and install the latest version of Docker Engine, containerd, and Docker Compose, or go to the next step to install a specific version:
sudo apt-get update
  • Install the packages:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
  • Verify that Docker Engine is installed correctly by running the hello-world image. This might not work, depending on whether connectivity can be established to https://hub.docker.com
sudo docker run hello-world 
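If https://hub.docker.com cannot be reached, the Docker daemon can still be verified locally (a suggested alternative check, not part of the upstream instructions):

sudo systemctl status docker --no-pager
sudo docker info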

Install Google Cloud CLI (Jumphost)

The Google Cloud command-line interface (gcloud CLI) is NOT installed by default. It is used to collect the Kubernetes cluster credentials needed to access the cluster.

NOTE: Install the Google Cloud CLI using the link below, or follow the steps below:
https://cloud.google.com/sdk/docs/install#deb
  • Install the required packages needed for the Google Cloud CLI:
sudo apt-get install apt-transport-https ca-certificates gnupg
  • Add the Google Cloud CLI distribution URI as a package source. If your distribution supports the signed-by option, run the following command:
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
  • Import the Google Cloud public key. If your distribution's apt-key command supports the --keyring argument, run the following command:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
  • Update and install the Google Cloud CLI:
sudo apt-get update && sudo apt-get install google-cloud-cli && sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin
  • Run gcloud init to get started:
gcloud init
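  • Optionally, confirm the active account and project after initialization (a suggested check, not part of the upstream steps):
gcloud auth list
gcloud config list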

Install helm (Jumphost)

helm is used to install Direktiv and Knative components.

NOTE: For the latest installation instructions per distro, please refer to: https://helm.sh/docs/intro/install/

To install helm, the following steps need to be followed:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
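Verify the installation by printing the client version:

helm version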

Install kubectl (Jumphost)

kubectl is used to manage the Kubernetes cluster.

NOTE: For the latest installation instructions per distro, please refer to: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

To install kubectl, the following steps need to be followed:

  • Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
  • Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  • Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  • Update apt package index with the new repository and install kubectl:
sudo apt-get update
sudo apt-get install -y kubectl
  • Verify the installation:
# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2022-08-23T17:44:59Z"
  compiler: gc
  gitCommit: a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2
  gitTreeState: clean
  gitVersion: v1.25.0
  goVersion: go1.19
  major: "1"
  minor: "25"
  platform: linux/amd64
kustomizeVersion: v4.5.7

Deploy GCP GKE (Jumphost)

CAUTION: GKE Autopilot clusters are NOT supported

The following steps show how to deploy a default Google Cloud GKE service. 

  • In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
    Note: If you don't plan to keep the resources that you create in this procedure, create a new project instead of selecting an existing one. After you finish these steps, you can delete the project, removing all resources associated with it.
  • Make sure that billing is enabled for your Cloud project.
  • Enable the Artifact Registry and Google Kubernetes Engine APIs.

GKE Standard Cluster

  • Create a Standard cluster named direktiv-cluster:
gcloud container clusters create direktiv-cluster \
    --num-nodes=1 \
    --zone=COMPUTE_ZONE

            Replace COMPUTE_ZONE with the Compute Engine zone for the cluster. An example is shown below:

# gcloud container clusters create direktiv-cluster --num-nodes=1 --zone=australia-southeast1

Default change: VPC-native is the default mode 
during cluster creation for versions greater than 
1.21.0-gke.1500. To create advanced routes based 
clusters, please pass the `--no-enable-ip-alias` flag

Default change: During creation of nodepools or 
autoscaling configuration changes for cluster 
versions greater than 1.24.1-gke.800 a default 
location policy is applied. For Spot and PVM it 
defaults to ANY, and for all other VM kinds a BALANCED 
policy is used. To change the default values use the 
`--location-policy` flag.

Note: Your Pod address range (`--cluster-ipv4-cidr`)
 can accommodate at most 1008 node(s).

Creating cluster direktiv-cluster in australia-southeast1... 
Cluster is being health-checked (master is healthy)...done.                                                                                                                                                                                                                            
Created [https://container.googleapis.com/v1/projects/direktiv/zones/australia-southeast1/clusters/direktiv-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/australia-southeast1/direktiv-cluster?project=direktiv
kubeconfig entry generated for direktiv-cluster.

NAME              LOCATION              MASTER_VERSION   MASTER_IP      MACHINE_TYPE  NODE_VERSION     NUM_NODES  STATUS
direktiv-cluster  australia-southeast1  1.22.11-gke.400  35.189.29.225  e2-medium     1.22.11-gke.400  3          RUNNING

NOTE: because australia-southeast1 is a region, GKE creates the requested number of nodes in each of the region's zones, which is why 3 nodes are listed despite --num-nodes=1.
  • After creating your cluster, you need to get authentication credentials to interact with the cluster. This command configures kubectl to use the cluster you created:
gcloud container clusters get-credentials direktiv-cluster --region=COMPUTE_REGION
  • Replace COMPUTE_REGION with the compute region previously configured for the cluster. An example is shown below:
# gcloud container clusters get-credentials direktiv-cluster --region=australia-southeast1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for direktiv-cluster.

  • Verify the installation:
# kubectl get nodes
W0913 02:30:16.023890  114974 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME                                              STATUS   ROLES    AGE   VERSION
gke-direktiv-cluster-default-pool-3cb4331c-v4xt   Ready    <none>   44s   v1.22.11-gke.400
gke-direktiv-cluster-default-pool-7ad0efef-b25d   Ready    <none>   46s   v1.22.11-gke.400
gke-direktiv-cluster-default-pool-eee39bed-nfpj   Ready    <none>   45s   v1.22.11-gke.400

Install Linkerd

For the installation of Linkerd and the rest of the open-source components, the KUBECONFIG environment variable needs to be set:

  • Create or update the kubeconfig file for your cluster using the gcloud CLI previously installed. Replace COMPUTE_REGION with the compute region configured for the cluster:
gcloud container clusters get-credentials direktiv-cluster --region=COMPUTE_REGION

            Example output is shown below:

# gcloud container clusters get-credentials direktiv-cluster --region=australia-southeast1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for direktiv-cluster.
  • Set the KUBECONFIG environment variable:
echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc
  • Then, to activate the new environment variable for the current session, use the following command:
source ~/.bashrc

            Linkerd is used to secure the data between pods with mTLS. Generate the certificates with the following command (the temporary directory created by mktemp is mounted into the container, so the certificates are written to the directory stored in $certDir):

$ certDir=$(tmpDir=$(mktemp -d); \
exe='cd /certs && step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure \
&& step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
--profile intermediate-ca --not-after 8760h --no-password --insecure \
--ca ca.crt --ca-key ca.key'; \
sudo docker run --user 1000:1000 -v $tmpDir:/certs -i smallstep/step-cli /bin/bash -c "$exe"; \
echo $tmpDir);
  • This stores four files in the temporary directory ($certDir); if they are not readable by the current user after creation, change their permissions:
    • $certDir/ca.crt
    • $certDir/ca.key
    • $certDir/issuer.crt
    • $certDir/issuer.key
  • Install Linkerd with the generated certificates using the following script:
helm repo add linkerd https://helm.linkerd.io/stable;
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace 
helm install linkerd-control-plane \
  -n linkerd \
  --set-file identityTrustAnchorsPEM=$certDir/ca.crt \
  --set-file identity.issuer.tls.crtPEM=$certDir/issuer.crt \
  --set-file identity.issuer.tls.keyPEM=$certDir/issuer.key \
  linkerd/linkerd-control-plane --wait
  • Create namespaces and annotate. All annotated namespaces will use Linkerd. The following script creates the namespaces and annotates them for linkerd:
for ns in "direktiv" 
do
 kubectl create namespace $ns || true
 kubectl annotate ns --overwrite=true $ns linkerd.io/inject=enabled 
done;
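Before installing the database, it is worth confirming that the Linkerd control-plane pods are running and that the direktiv namespace carries the annotation (a suggested check, not part of the original steps):

kubectl get pods -n linkerd
kubectl describe namespace direktiv | grep -i linkerd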

Install PostgreSQL Database

We are using the CrunchyData Postgres operator for PoCs. 

  • To install the operator, add the helm chart repository:
# helm repo add direktiv https://chart.direktiv.io
"direktiv" has been added to your repositories
  • Install the operator:
$ helm install -n postgres --create-namespace --set singleNamespace=true postgres direktiv/pgo
NAME: postgres
LAST DEPLOYED: Tue Aug 30 01:42:48 2022
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for deploying PGO v5.0.4!

  • After that, install the actual database for Direktiv:
kubectl apply -f https://raw.githubusercontent.com/direktiv/direktiv/main/kubernetes/install/db/basic.yaml
  • This creates a 1 GB database, which should be sufficient for a PoC. Wait until all pods in the postgres namespace are up and running:
# kubectl get pods -n postgres
NAME                         READY   STATUS    RESTARTS   AGE
direktiv-backup-6qjs-fd9g2   1/1     Running   0          15s
direktiv-instance1-wp2g-0    3/3     Running   0          105s
direktiv-repo-host-0         1/1     Running   0          104s
pgo-8659687c97-bsqcq         1/1     Running   0          13m
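The operator also creates a Kubernetes secret containing the database credentials; the Install Direktiv section below reads from it. Confirm it exists:

kubectl get secret -n postgres direktiv-pguser-direktiv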

Install Knative

The following steps install Knative:

  • Add the Knative operator:
kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.8.1/operator.yaml
  • Install the Knative serving components:
# helm install -n knative-serving --create-namespace knative-serving direktiv/knative-instance
NAME: knative
LAST DEPLOYED: Tue Aug 30 02:01:45 2022
NAMESPACE: knative-serving
STATUS: deployed
REVISION: 1
TEST SUITE: None
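  • Before continuing, confirm the Knative serving pods are up (a suggested check, not part of the original steps):
kubectl get pods -n knative-serving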

Install Direktiv

  • Create a file with the database information if no external database will be used:
echo "database:
  host: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "host"}}' | base64 --decode)\"
  port: $(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "port"}}' | base64 --decode)
  user: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "user"}}' | base64 --decode)\"
  password: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "password"}}' | base64 --decode)\"
  name: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "dbname"}}' | base64 --decode)\"
  sslmode: require" > direktiv.yaml
  • Add an API key to the direktiv.yaml file:
echo 'apikey: "12345"' >> direktiv.yaml

            The following output is an example of the direktiv.yaml file:

# cat direktiv.yaml 
database:
  host: "direktiv-primary.postgres.svc"
  port: 5432
  user: "direktiv"
  password: ";Pr<}[^SegXWXM/1qO07cktl"
  name: "direktiv"
  sslmode: require
apikey: "12345"
  • Install Direktiv:
# helm install -f direktiv.yaml -n direktiv direktiv direktiv/direktiv
NAME: direktiv
LAST DEPLOYED: Tue Aug 30 02:12:59 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
  • The instance is available at the IP shown under "EXTERNAL-IP" by the following command:
kubectl get services direktiv-ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
  • The login is the configured API key.
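  • As a quick end-to-end check (illustrative only; replace <EXTERNAL-IP> with the address returned above), request the UI over HTTPS, ignoring the self-signed certificate:
curl -k -s -o /dev/null -w "%{http_code}\n" https://<EXTERNAL-IP>/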
