Enterprise Edition: Multi Virtual Machine

Modified on Tue, 30 May 2023 at 08:14 PM

TABLE OF CONTENTS

Overview

Component Overview

Direktiv can be deployed in one of the following three configurations:

  1. Open-source: a single virtual machine running a k8s environment, or a public cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.
  2. Enterprise Edition Production VM(s) only: a single virtual machine (or 3 virtual machines for HA purposes) running a k8s environment (or clustered across multiple machines), hosting the Direktiv pods and the containers used for integration.
  3. Enterprise Edition Production Kubernetes: a cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.

The diagram below illustrates the proposed option 2 for the Production deployment on multiple VMs:

 

Proposed EE deployment diagram (multi-VM)

The Enterprise Edition deployment runs on multiple virtual machines. The installation contains the following components:

  • K3S, a lightweight Kubernetes platform to run Direktiv
  • A single instance of the Direktiv Enterprise Edition
  • A single-instance PostgreSQL database

The following components are assumed to be available:

  • Optional: a single GitLab instance, or access to a GitHub instance (if not already available in the customer environment)
  • Optional: a single Docker repository for storing local images (if not already available in the customer environment). Alternatively, Docker Hub can also be used.

Connectivity / Firewall Requirements

The following requirements exist for the successful deployment of Direktiv:

Initial Installation Requirements

The following network configurations are critical; all VMs need to have the following services configured and working correctly:

  • DNS: all internal DNS domains and external DNS domains need to be resolvable
  • NTP: to ensure the tracing, reporting and dashboards function correctly, NTP needs to be configured on all VMs
  • Firewall: firewalls on the VMs need to be disabled or the ports described in the table below need to be opened.

The table below captures all of the external connectivity requirements for an initial deployment.


| Source | Destination | Protocol / Port | Notes |
| --- | --- | --- | --- |
| Direktiv VM | download.docker.com, registry-1.docker.io, cloudfront.net, production.cloudflare.docker.com | HTTPS | Initial Docker install on the VM. If no access is allowed, download the Docker package from https://download.docker.com/linux/ubuntu/dists/ and install using https://docs.docker.com/engine/install/ubuntu/#install-from-a-package |
| Direktiv VM | get.k3s.io, update.k3s.io | HTTPS | Install script for K3s (Rancher) |
| Direktiv VM | github.com/k3s-io/k3s/releases | HTTPS | Install binary for K3s |
| Direktiv VM | raw.githubusercontent.com/helm | HTTPS | Initial Helm installer |
| Direktiv VM | get.helm.sh/ | HTTPS | Helm package download. If no access is allowed, download the Helm package from https://github.com/helm/helm/releases and install it manually using https://helm.sh/docs/intro/install/ |
| Direktiv VM | docker.io/smallstep/step-cli | HTTPS | Used to create the certificates used by Linkerd |
| Direktiv VM | chart.direktiv.io | HTTPS | Helm chart used for the database install and the Direktiv application install |
| Direktiv VM | registry.developers.crunchydata.com, prod.developers.crunchydata.com | HTTPS | Database install for PostgreSQL |
| Direktiv VM | githubusercontent.com/direktiv/direktiv | HTTPS | Configure the Direktiv database from pre-configured settings |
| Direktiv VM | ghcr.io/projectcontour | HTTPS | Knative |
| Direktiv VM | docker.io/envoyproxy | HTTPS | Knative |
| Direktiv VM | gcr.io/knative-releases/ | HTTPS | Knative |
| Direktiv VM | k8s.gcr.io/ingress-nginx/ | HTTPS | NGINX install (ingress load balancer, part of the Direktiv Helm chart) |
| Direktiv VM | docker.io/jimmidyson/configmap-reload | HTTPS | Configmap reload for the Kubernetes install (part of the Direktiv Helm chart) |
| Direktiv VM | quay.io/prometheus | HTTPS | Prometheus install (part of the Direktiv Helm chart) |
| Direktiv VM | docker.io/direktiv | HTTPS | Direktiv installation |

The Docker registries whitelist therefore includes: docker.io, ghcr.io, gcr.io, k8s.gcr.io, quay.io and registry.developers.crunchydata.com.
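Connectivity to these endpoints can be pre-checked from each VM before starting the installation. A minimal sketch (the hostnames are a subset taken from the table above; curl only fetches the response headers):

```shell
# Pre-flight check: can this VM reach the download hosts listed above?
# (-sI fetches headers only; --max-time avoids long hangs on blocked ports)
for host in download.docker.com get.k3s.io get.helm.sh chart.direktiv.io; do
  if curl -sI --max-time 10 "https://$host" >/dev/null; then
    echo "$host reachable"
  else
    echo "$host NOT reachable"
  fi
done
```

Any host reported as NOT reachable indicates a firewall or proxy rule that needs attention before the install.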

Running Requirements

There are several ongoing firewall / connectivity requirements to ensure the continued operation of the environment. The table below captures all of the external connectivity requirements for an operational state:


| Source | Destination | Protocol / Port | Notes |
| --- | --- | --- | --- |
| Direktiv VM | direktiv.azurecr.io | HTTPS | Pulling Direktiv pre-built containers |
| Direktiv VM | hub.docker.com/direktiv | HTTPS | Pulling Direktiv pre-built containers |
| Client | Direktiv VM | TCP: 443 / 6443 / 10250 / 2379-2380 | Kubernetes API server, Kubelet metrics, HA |
| Client | Direktiv IP | TCP: 443 | Direktiv UI & API access |

Installation Instructions

Proxy Settings

You can set up a permanent proxy for a single user by editing the ~/.bashrc file:

  • First, log in to your Ubuntu system as the user you want to set the proxy for.
  • Next, open the terminal interface and edit the ~/.bashrc file as shown below:
vi ~/.bashrc
  • Add the following lines at the end of the file, adjusted to match your proxy server:
export HTTP_PROXY="http://<username>:<password>@<proxy-server-ip>:<port>"
export HTTPS_PROXY="http://<username>:<password>@<proxy-server-ip>:<port>"
export FTP_PROXY="ftp://<username>:<password>@<proxy-server-ip>:<port>"
export NO_PROXY="localhost,127.0.0.1,svc,.cluster.local,192.168.1.100,192.168.1.101,192.168.1.102,10.0.0.0/8"
  • Save and close the file when you are finished.
  • Then to activate your new proxy settings for the current session, use the following command:
source ~/.bashrc
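To confirm the variables are active in the current session, they can be listed afterwards. A quick sketch (the values shown reuse the placeholder proxy address used elsewhere in this article):

```shell
# Set the proxy variables (placeholder values) and confirm all three
# are present in the environment for the current session
export HTTP_PROXY="http://192.168.1.10:3128"
export HTTPS_PROXY="http://192.168.1.10:3128"
export NO_PROXY="localhost,127.0.0.1,svc,.cluster.local,10.0.0.0/8"
env | grep -E '^(HTTP|HTTPS|NO)_PROXY='
```

If fewer than three lines are printed, the corresponding export is missing from ~/.bashrc.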

Install Docker

Install Docker on each virtual machine using the following steps.

For the latest installation instructions per distro please review: https://docs.docker.com/engine/install/

Uninstall old versions

Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:

sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

Set up the repository

  • Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  • Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  • Use the following command to set up the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

  • Update the apt package index:
sudo apt-get update
  • Install the latest version of Docker Engine, containerd, and Docker Compose:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
  • Verify that Docker Engine is installed correctly by running the hello-world image. This step may fail if connectivity to https://hub.docker.com cannot be established:
sudo docker run hello-world 

Install K3s

Direktiv works with Kubernetes offerings from all major cloud providers; the requirement for on-premise or local installations is Kubernetes 1.19+. The following documentation describes a small installation with K3s.

Hardware requirements

The following table shows the hardware requirements per virtual machine (PoC deployment):


| Environment | Node size | Scale method | Node count |
| --- | --- | --- | --- |
| PoC / Development | 4 vCPUs, 16 GB memory, 50 GB storage, Ubuntu 18.04.6 LTS or higher | None | 2+ |


Configure local firewall rules

The nodes communicate with each other on different ports and protocols. The table in “Running Requirements” earlier in this document shows the ports that need to be accessible (incoming) on the nodes to enable this. On some Linux distributions, firewall changes have to be applied. Please see the K3s installation guide for detailed instructions.

  • Check if the firewall is enabled:
sudo ufw status
  • If the firewall is enabled, open the required ports:
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp
sudo ufw allow 2379-2380/tcp
sudo ufw allow from 10.42.0.0/16 to any #pods
sudo ufw allow from 10.43.0.0/16 to any #services

Disable swap

One of Kubernetes' requirements is to disable swap on the nodes. This change needs to be applied permanently to survive reboots.

sudo swapoff -a
sudo sed -e '/swap/s/^/#/g' -i /etc/fstab
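The sed expression above comments out every /etc/fstab line that mentions swap, so the swap device is no longer mounted at boot. Its effect can be seen on a sample line (run here against stdin rather than the real file):

```shell
# The same sed expression as above, applied to a sample fstab line:
# any line containing "swap" gets a "#" prepended, disabling that mount
echo '/swapfile none swap sw 0 0' | sed -e '/swap/s/^/#/g'
# → #/swapfile none swap sw 0 0
```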

K3s installation

  • For the Proof-of-Concept deployment, a single node is installed first. If a proxy is used, add the proxy configuration to the command:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik --write-kubeconfig-mode=644 --cluster-init" INSTALL_K3S_VERSION="v1.25.9+k3s1" HTTP_PROXY="http://192.168.1.10:3128" HTTPS_PROXY="http://192.168.1.10:3128" NO_PROXY="localhost,127.0.0.1,svc,.cluster.local,192.168.1.100,192.168.1.101,192.168.1.102,10.0.0.0/8" sh -
  • This produces output similar to the following:

[INFO]  Finding release for channel stable
[INFO]  Using v1.24.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
  • Configure the KUBECONFIG environment variable for the shell:
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
  • Source the profile changes:
source ~/.bashrc
  • To add nodes to the cluster, the node token is required; it is saved under /var/lib/rancher/k3s/server/node-token. With this token additional nodes can be added. The cluster IP address is the external IP address of the first virtual machine (or an internal IP address on the network).
On the other VMs:
  • Start the install:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik --write-kubeconfig-mode=644" K3S_TOKEN="<TOKEN FROM NODE-TOKEN FILE>" K3S_URL="https://<cluster ip>:6443" INSTALL_K3S_VERSION="v1.25.9+k3s1" HTTP_PROXY="http://192.168.1.10:3128" HTTPS_PROXY="http://192.168.1.10:3128" NO_PROXY="localhost,127.0.0.1,svc,.cluster.local,192.168.1.100,192.168.1.101,192.168.1.102,10.0.0.0/8" sh -
  • Example output is shown below:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik --write-kubeconfig-mode=644" K3S_TOKEN="K1076a401fcbbc34e6bfa35cbada8f48d915f910c88bd6aa9f0c124aee7038402c0::server:faf5b641f35cf6de4cb1486e84bc70a6" K3S_URL=https://10.152.0.22:6443 sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.24.4+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
$ kubectl get nodes
NAME                     STATUS   ROLES                       AGE     VERSION
direktiv-singlevm-ee     Ready    control-plane,etcd,master   5m12s   v1.24.4+k3s1
direktiv-singlevm-ee-2   Ready    control-plane,etcd,master   26s     v1.24.4+k3s1

Install Helm

Helm is used to install additional components for K3s. 

NOTE: For the latest installation instructions per distro, please refer to: https://helm.sh/docs/intro/install/

To install Helm, run the following commands:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Install Direktiv Enterprise Edition

The Direktiv Enterprise Edition is a commercial platform for running Direktiv. Access to the installation files is restricted and requires a license and access to the repository download section (via GitHub or the Direktiv Service Portal).

The process below installs all of the Enterprise Edition components on the cluster.

  • Download the Direktiv installer. It provides a basic configuration for all components needed for Direktiv. It is assumed that the person installing this has access to the direktiv-ee GitHub repository:
tar -xvf direktiv-ee.tar.gz
cd direktiv-ee/install
  • Set the following environment variables:
    • DIREKTIV_HOST: sets the FQDN for the Direktiv instance.
    • DIREKTIV_DEV: if this environment variable has a value, the installation uses the localhost:5000 variant of the Direktiv images.
    • DIREKTIV_TOKEN: if this environment variable has a value, the installation uses it as the admin API key (this allows for elevated API admin access).
export DIREKTIV_HOST="dev.direktiv.io"
export DIREKTIV_DEV="localhost:5000"
export DIREKTIV_TOKEN="KxTp8d_gj5yj$%mPkvFY=f2rPN2K6N?F"
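The DIREKTIV_TOKEN value above is only an example. One way to generate a sufficiently random token (assuming openssl is installed on the VM) is:

```shell
# Generate a random 32-character token for DIREKTIV_TOKEN
# (24 random bytes base64-encode to exactly 32 characters)
DIREKTIV_TOKEN="$(openssl rand -base64 24)"
export DIREKTIV_TOKEN
echo "${#DIREKTIV_TOKEN}"
# → 32
```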
  • Run the script to prepare the installation:
./prepare.sh
  • The above command generates the install YAML files. If a proxy is used, configure the following files with the appropriate settings:
~/direktiv-ee/install/05_knative# cat knative.yaml
# -- Knative version
version: 1.8.0
# -- Knative replicas
replicas: 1
# -- Proxy settings for controller
http_proxy: ""
https_proxy: ""
no_proxy: ""
# -- Custom certificate for controller. This needs to be a secret created before installation in the knative-serving namespace
certificate: ""
# -- Repositories which skip digest resolution
skip-digest: kind.local,ko.local,dev.local,localhost:5000,localhost:31212
  • Direktiv also needs to have the proxy settings configured:
:~/direktiv-ee/install/06_direktiv# cat direktiv.yaml
pullPolicy: Always
debug: "false"

proxy:
  no-proxy: ""
  http-proxy: ""
  https-proxy: ""

database:
  replicas: 1
  image: "direktiv/ui-ee"

ui:
  replicas: 1
  image: "direktiv/ui-ee"

api:
  replicas: 1
  image: "direktiv/api-ee"
  additionalEnvs:
  - name: DIREKTIV_ROLE_ADMIN
    value: admin
  - name: DIREKTIV_TOKEN_SECRET
    valueFrom:
      secretKeyRef:
        name: tokensecret
        key: tokensecret
  - name: DIREKTIV_ADMIN_SECRET
    valueFrom:
      secretKeyRef:
        name: adminsecret
        key: adminsecret
  • Certificates are required for the DIREKTIV_HOST URL selected in the previous step. Direktiv will use self-signed certificates if none are provided, but this is not recommended. To run the installation with certificates (for example, for *.direktiv.io), replace the following 2 files with the server.key and server.crt files for the respective domain:
~/direktiv-ee/install/04_keycloak/certs# ls -l
total 28
-rw-r--r-- 1 root root 7124 Jan 30 04:44 direktiv.io.crt
-rwx------ 1 root root 1708 Jan 30 04:44 direktiv.io.key
drwxr-xr-x 2 root root 4096 Jan 30 05:16 keycloak
-r-------- 1 root root 7124 Jan 30 06:01 server.crt
-r-------- 1 root root 1708 Jan 30 06:01 server.key
~/direktiv-ee/install/04_keycloak/certs#
~/direktiv-ee/install/04_keycloak/certs# cp direktiv.io.crt server.crt
~/direktiv-ee/install/04_keycloak/certs# cp direktiv.io.key server.key
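Before running the installer, it is worth checking that the server.crt and server.key pair actually belong together. A minimal sketch, demonstrated on a throwaway self-signed pair in a temp directory (run the two comparison commands against your real files instead):

```shell
# Generate a throwaway self-signed pair for demonstration only
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=direktiv.io" \
  -keyout "$tmp/server.key" -out "$tmp/server.crt" 2>/dev/null
# A certificate and its key match when both report the same public key
crt_pub=$(openssl x509 -in "$tmp/server.crt" -noout -pubkey)
key_pub=$(openssl pkey -in "$tmp/server.key" -pubout 2>/dev/null)
[ "$crt_pub" = "$key_pub" ] && echo "certificate and key match"
```

A mismatched pair is a common cause of TLS errors after the install completes.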
  • Run the installer:
./install-all.sh
  • Wait until all pods are up and running; this can take a long time if the network is slow. Verify using the following command:
# kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
direktiv-api-6f8567555b-pq7q8                 2/2     Running   0          2d16h
direktiv-flow-564c8fc4cc-jh5dh                3/3     Running   0          2d16h
direktiv-functions-6f6698d7fb-s7n9z           2/2     Running   0          2d16h
direktiv-prometheus-server-667b8c6d65-6nzxm   3/3     Running   0          2d16h
direktiv-ui-d947dccc-zlzxc                    2/2     Running   0          2d16h
knative-operator-58647bbfd5-w9kvc             1/1     Running   0          2d16h
operator-webhook-b866dc4c-6klqx               1/1     Running   0          2d16h
  • Add the hostname configured in the DIREKTIV_HOST environment variable to your DNS. The external IP address (or hostname for the Load Balancer) can be found using the following command:
kubectl get svc -n apisix apisix-gateway
  • Output shown below:
# kubectl get svc -n apisix apisix-gateway
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP                   PORT(S)                      AGE
apisix-gateway   LoadBalancer   10.43.159.35   145.40.102.235,145.40.99.47   80:30429/TCP,443:31423/TCP   2d16h
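The EXTERNAL-IP column can be extracted directly when configuring DNS, for example with awk (shown here against the sample output above via a heredoc; on a live cluster, pipe the kubectl command into the same awk invocation):

```shell
# Print field 4 (EXTERNAL-IP) of the data row; the heredoc reproduces
# the sample output above, standing in for live `kubectl get svc` output
awk 'NR==2 {print $4}' <<'EOF'
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP                   PORT(S)                      AGE
apisix-gateway   LoadBalancer   10.43.159.35   145.40.102.235,145.40.99.47   80:30429/TCP,443:31423/TCP   2d16h
EOF
# → 145.40.102.235,145.40.99.47
```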

Access Direktiv

To log in to Direktiv, the following default users are configured:

  • admin/password: user in admin group
  • admin/password: user in direktiv group

To access the Keycloak setup:

Next steps

The next steps in setting up the enterprise edition:
