For the latest open-source installation instructions see the Direktiv documentation pages:
https://docs.direktiv.io/installation/
Overview
Component Overview
Direktiv can be deployed in the following 3 configurations.
- Open-source: a single virtual machine running a k8s environment, or a public cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.
- Enterprise Edition Production VM(s) only: a single virtual machine (or 3 virtual machines for HA purposes) running a k8s environment (or clustered across multiple VMs), hosting the Direktiv pods and the containers used for integration.
- Enterprise Edition Production Kubernetes: a cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.
The diagram below illustrates the first (open-source) option, deployed on a single VM:
Proposed OS deployment diagram (Single VM)
The PoC deployment runs on a single virtual machine. The installation will contain the following components:
- K3S, a lightweight Kubernetes platform to run Direktiv
- A single instance of the Direktiv open-source edition, which includes a PostgreSQL database
- Optional: a single GitLab instance (if one is not available in the customer environment and there is no access to a GitHub instance)
- Optional: a single Docker repository for storing local images (if not available in customer environment). Alternatively, Docker Hub can also be used.
The architecture above is not a production deployment. It is solely to be used for PoC as the database and Direktiv components all reside on a single instance virtual machine. Direktiv cannot guarantee database consistency on the K3S instance.
Connectivity / Firewall Requirements
The following requirements exist for the successful deployment of Direktiv:
Initial Installation Requirements
The following network configurations are critical; all VMs need to have the following services configured and working correctly:
- DNS: all internal DNS domains and external DNS domains need to be resolvable
- NTP: to ensure the tracing, reporting and dashboards function correctly, NTP needs to be configured on all VMs
- Firewall: firewalls on the VMs need to be disabled or the ports described in the table below need to be opened.
The table below captures all of the external connectivity requirements for an initial deployment.
Source | Destination | Protocol / Port | Notes |
---|---|---|---|
Direktiv VM | download.docker.com, registry-1.docker.io, cloudfront.net, production.cloudflare.docker.com | HTTPS | Initial Docker install on the VM. If no access is allowed, download the Docker package from https://download.docker.com/linux/ubuntu/dists/ and install using https://docs.docker.com/engine/install/ubuntu/#install-from-a-package |
Direktiv VM | get.k3s.io, update.k3s.io | HTTPS | Install script for K3S (Rancher) |
Direktiv VM | github.com/k3s-io/k3s/releases | HTTPS | Install binary for K3S |
Direktiv VM | raw.githubusercontent.com/helm | HTTPS | Initial helm installer |
Direktiv VM | get.helm.sh/ | HTTPS | Helm package download. If no access is allowed, download the helm package from https://github.com/helm/helm/releases and install it manually using https://helm.sh/docs/intro/install/ |
Direktiv VM | docker.io/smallstep/step-cli | HTTPS | Used to create the certificates used by Linkerd |
Direktiv VM | chart.direktiv.io | HTTPS | Helm chart used for database install and Direktiv application install |
Direktiv VM | registry.developers.crunchydata.com, prod.developers.crunchydata.com | HTTPS | Database install for Postgresql |
Direktiv VM | githubusercontent.com/direktiv/direktiv | HTTPS | Configure Direktiv database from pre-configured settings |
Direktiv VM | ghcr.io/projectcontour | HTTPS | Knative |
Direktiv VM | docker.io/envoyproxy | HTTPS | Knative |
Direktiv VM | gcr.io/knative-releases/ | HTTPS | Knative |
Direktiv VM | k8s.gcr.io/ingress-nginx/ | HTTPS | NGINX install (ingress load balancer, part of Direktiv helm chart) |
Direktiv VM | docker.io/jimmidyson/configmap-reload | HTTPS | Configmap reload for Kubernetes install (part of Direktiv helm chart) |
Direktiv VM | quay.io/prometheus | HTTPS | Prometheus install (part of Direktiv helm chart) |
Direktiv VM | docker.io/direktiv | HTTPS | Direktiv installation |
The Docker registry whitelist therefore includes docker.io (registry-1.docker.io, production.cloudflare.docker.com), registry.developers.crunchydata.com, ghcr.io, gcr.io, k8s.gcr.io and quay.io, as listed in the destination column above.
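As a quick sanity check before starting the installation, outbound HTTPS access from the Direktiv VM can be spot-checked against a few of the endpoints listed above. This is only a suggested check; any endpoint that returns an HTTP status code (even a redirect or 403) is reachable:

# Spot-check outbound HTTPS access to a few of the required endpoints
for url in https://download.docker.com https://get.k3s.io https://chart.direktiv.io; do
  echo -n "$url -> "
  curl -s -o /dev/null -w "%{http_code}\n" --connect-timeout 5 "$url"
done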
Running Requirements
There are several ongoing firewall / connectivity requirements to ensure continued operation of the environment. The table below captures all of the external connectivity requirements for an operational state:
Source | Destination | Protocol / Port | Notes |
---|---|---|---|
Direktiv VM | direktiv.azurecr.io | HTTPS | Pulling Direktiv pre-built containers |
Direktiv VM | hub.docker.com/direktiv | HTTPS | Pulling Direktiv pre-built containers |
Client | Direktiv VM | TCP: 443 / 6443 / 10250 / 2379-2380 | Kubernetes API Server, Kubelet metrics, HA |
Client | Direktiv IP | TCP: 443 | Direktiv UI & API access |
Installation Instructions
JumpHost Proxy Settings
If the environment requires a proxy to be configured, the following settings can be applied on the JumpHost. You can set up a permanent proxy for a single user by editing the ~/.bashrc file:
- First, log in to your Ubuntu system as the user you want to set the proxy for.
- Next, open the terminal and edit the ~/.bashrc file as shown below:
vi ~/.bashrc
- Add the following lines at the end of the file, adjusted to match your proxy server:
export http_proxy=username:password@proxy-server-ip:8080
export https_proxy=username:password@proxy-server-ip:8082
export ftp_proxy=username:password@proxy-server-ip:8080
export no_proxy=localhost,127.0.0.1
- Save and close the file when you are finished.
- Then to activate your new proxy settings for the current session, use the following command:
source ~/.bashrc
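To confirm the proxy settings are active in the current shell, a quick check such as the following can be used (download.docker.com is simply one of the endpoints from the connectivity tables above):

# Show the proxy variables and test an outbound HTTPS request through the proxy
env | grep -i _proxy
curl -sI https://download.docker.com | head -n 1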
Install Docker (Jumphost)
Install Docker on the single VM deployment using the following steps.
For the latest installation instructions per distro please review: https://docs.docker.com/engine/install/
Uninstall old versions
Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:
sudo apt-get remove docker docker-engine docker.io containerd runc
It’s OK if apt-get reports that none of these packages are installed.
Set up the repository
- Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
  ca-certificates \
  curl \
  gnupg \
  lsb-release
- Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- Use the following command to set up the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine
- Update the apt package index:
sudo apt-get update
- Install the latest version of Docker Engine, containerd, and Docker Compose:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
- Verify that Docker Engine is installed correctly by running the hello-world image. This step may fail if connectivity to https://hub.docker.com cannot be established.
sudo docker run hello-world
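If hub.docker.com is not reachable, the Docker installation itself can still be verified locally, for example:

# Check that the Docker daemon is running and report the installed version
sudo systemctl status docker --no-pager
sudo docker version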
Install K3s
Direktiv works with Kubernetes offerings from all major cloud providers; the requirement for on-premise or local installations is Kubernetes 1.19+. The following documentation describes a small installation with k3s.
Hardware requirements
The following table shows the hardware requirements for the virtual machine (PoC single VM deployment):
Environment | Node size | Scale method | Node count |
---|---|---|---|
PoC / Development | | None | 1 |
Configure local firewall rules
The nodes communicate with each other on different ports and protocols. The table in “Running Requirements” (earlier in this document) shows the ports that need to be accessible (incoming) on the nodes. On some Linux distributions firewall changes have to be applied. Please see the k3s installation guide for detailed installation instructions.
- Check if the firewall is enabled:
sudo ufw status
- If the firewall is enabled, open the required ports:
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 8472/udp
sudo ufw allow 2379:2380/tcp
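After adding the rules, the firewall state can be confirmed with:

# List the active ufw rules and confirm the Kubernetes ports are allowed
sudo ufw status verbose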
Disable swap
One of Kubernetes' requirements is to disable swap on the nodes. This change needs to be applied permanently to survive reboots.
sudo swapoff -a
sudo sed -e '/swap/s/^/#/g' -i /etc/fstab
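A quick check that swap is disabled and will remain disabled after a reboot:

# Swap should report 0B, and the swap entry in /etc/fstab should now be commented out
free -h
grep swap /etc/fstab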
K3s installation
- For the Proof-of-Concept deployment, a single node is installed. If a proxy is used, the proxy configuration needs to be added to the command:
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="server --disable traefik --write-kubeconfig-mode=644 --cluster-init" \
  INSTALL_K3S_VERSION="v1.24.10+k3s1" \
  HTTP_PROXY="http://192.168.1.10:3128" \
  HTTPS_PROXY="http://192.168.1.10:3128" \
  NO_PROXY="localhost,127.0.0.1,svc,.cluster.local,192.168.1.100,192.168.1.101,192.168.1.102,10.0.0.0/8" \
  sh -
Produces output:
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
- Configure the KUBECONFIG environment variable for the shell:
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc
- Source the profile changes:
source ~/.bashrc
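With KUBECONFIG set, the K3s installation can be verified; the single node should report a Ready status and the kube-system pods should be Running:

# Confirm the node is Ready and the core pods have started
kubectl get nodes
kubectl get pods -A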
Install Helm
Helm is used to install additional components on K3s. To install Helm, run the following commands:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
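The Helm installation can then be confirmed with:

# Print the installed Helm client version
helm version --short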
Install Linkerd
Linkerd is used to secure the data between pods with mTLS. Generate certificates with the following command:
certDir=$(exe='cd /certs && step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure \
  && step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key'; \
  sudo docker run --user 1000:1000 -v /tmp:/certs -i smallstep/step-cli /bin/bash -c "$exe"; \
  echo /tmp)
- The command stores four files in /tmp. If they are not readable by the current user, adjust their permissions before continuing:
- /tmp/ca.crt
- /tmp/ca.key
- /tmp/issuer.crt
- /tmp/issuer.key
- Install Linkerd certificates with the following script:
helm repo add linkerd https://helm.linkerd.io/stable
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
helm install linkerd-control-plane \
  -n linkerd \
  --set-file identityTrustAnchorsPEM=$certDir/ca.crt \
  --set-file identity.issuer.tls.crtPEM=$certDir/issuer.crt \
  --set-file identity.issuer.tls.keyPEM=$certDir/issuer.key \
  linkerd/linkerd-control-plane --wait
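Before continuing, it is worth waiting until the Linkerd control plane pods are up:

# All pods in the linkerd namespace should reach the Running state
kubectl get pods -n linkerd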
- Create namespaces and annotate. All annotated namespaces will use Linkerd. The following script creates the namespaces and annotates them for linkerd:
for ns in "direktiv" do kubectl create namespace $ns || true kubectl annotate ns --overwrite=true $ns linkerd.io/inject=enabled done;
Install PostgreSQL Database
We are using the CrunchyData Postgres operator for PoCs.
- To install the operator, add the Helm chart repository:
# helm repo add direktiv https://chart.direktiv.io
"direktiv" has been added to your repositories
- Install the operator:
$ helm install -n postgres --create-namespace --set singleNamespace=true postgres direktiv/pgo
NAME: postgres
LAST DEPLOYED: Tue Aug 30 01:42:48 2022
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for deploying PGO v5.0.4!
- After that, install the actual database for Direktiv:
kubectl apply -f https://raw.githubusercontent.com/direktiv/direktiv/main/kubernetes/install/db/basic.yaml
- This creates a 1 GB database, which should be sufficient for a PoC. Wait until all pods in the postgres namespace are up and running:
# kubectl get pods -n postgres
NAME                         READY   STATUS    RESTARTS   AGE
direktiv-backup-6qjs-fd9g2   1/1     Running   0          15s
direktiv-instance1-wp2g-0    3/3     Running   0          105s
direktiv-repo-host-0         1/1     Running   0          104s
pgo-8659687c97-bsqcq         1/1     Running   0          13m
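The operator also creates the database user secret that is referenced when building the Direktiv configuration below; a quick check that it exists:

# The direktiv-pguser-direktiv secret is used later to generate direktiv.yaml
kubectl get secrets -n postgres | grep direktiv-pguser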
Install Knative
The following steps install Knative:
- Add the Knative operator:
kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.8.1/operator.yaml
- Install the knative serving components:
# helm install -n knative-serving --create-namespace knative-serving direktiv/knative-instance
NAME: knative
LAST DEPLOYED: Tue Aug 30 02:01:45 2022
NAMESPACE: knative-serving
STATUS: deployed
REVISION: 1
TEST SUITE: None
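The Knative serving components can be verified in the same way as the other namespaces:

# All pods in the knative-serving namespace should reach the Running state
kubectl get pods -n knative-serving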
Install Direktiv
- Create a file with the database information if no external database will be used:
echo "database: host: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "host"}}' | base64 --decode)\" port: $(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "port"}}' | base64 --decode) user: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "user"}}' | base64 --decode)\" password: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "password"}}' | base64 --decode)\" name: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "dbname"}}' | base64 --decode)\" sslmode: require" > direktiv.yaml
- Add an API key to the direktiv.yaml file:
echo 'apikey: "12345"' >> direktiv.yaml
The following output is an example of the direktiv.yaml file:
# cat direktiv.yaml
database:
  host: "direktiv-primary.postgres.svc"
  port: 5432
  user: "direktiv"
  password: ";Pr<}[^SegXWXM/1qO07cktl"
  name: "direktiv"
  sslmode: require
ingress-nginx:
  install: false
apikey: "12345"
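Optionally, the generated values file can be validated before the actual installation with a Helm dry run (no resources are created on the cluster):

# Render the chart with the generated values without installing anything
helm install --dry-run -f direktiv.yaml -n direktiv direktiv direktiv/direktiv > /dev/null && echo "direktiv.yaml OK"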
- Install Direktiv:
# helm install -f direktiv.yaml -n direktiv direktiv direktiv/direktiv
NAME: direktiv
LAST DEPLOYED: Tue Aug 30 02:12:59 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
- The instance is available at the IP address shown as “EXTERNAL-IP”, which can be retrieved with this command:
kubectl get services direktiv-ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
- The login is the configured API key.
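A simple reachability check against that address confirms the UI is being served; replace <EXTERNAL-IP> with the value returned by the command above (-k skips certificate verification in case a self-signed certificate is in use):

# Check that the Direktiv UI responds on HTTPS
curl -k -I https://<EXTERNAL-IP>/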