CAUTION: For the latest open-source installation instructions, see the Direktiv documentation pages:
https://docs.direktiv.io/installation/
TABLE OF CONTENTS
- Overview
- Installation Instructions
Overview
Component Overview
Direktiv can be deployed in the following three configurations:
- Open source: A single virtual machine running a k8s environment or a public cloud provider Kubernetes service, hosting the direktiv pods and the containers used for integration.
- Enterprise Edition Production VM(s) only: a single virtual machine (or three virtual machines for HA purposes) running a k8s environment (clustered across the VMs in the HA case), hosting the Direktiv pods and the containers used for integration.
- Enterprise Edition Production Kubernetes: a cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.
The diagram below illustrates option 1 (open source) deployed on a public cloud provider Kubernetes service:
Proposed deployment diagram (AWS EKS)
The Kubernetes deployment runs on Amazon Elastic Kubernetes Service (EKS). The installation contains the following components:
- A Kubernetes environment running in AWS EKS.
- A single instance of the Direktiv open-source edition, which includes a PostgreSQL database
The following components are assumed to be available:
- Optional: a single GitLab instance (if one is not available in the customer environment, access to a GitHub instance can be used instead)
- Optional: a single Docker repository for storing local images (if one is not available in the customer environment, Docker Hub can also be used)
NOTE: the PostgreSQL database instance runs on the Kubernetes cluster. In this configuration it is the responsibility of the owner to ensure that database backups are maintained separately.
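Since no managed backup service is involved, a periodic logical dump is one possible approach. The sketch below is illustrative only: the pod name is an example taken from the install output later in this guide (list the actual pods with kubectl get pods -n postgres), and authentication details depend on how the operator configured the database.

# Illustrative sketch: dump the Direktiv database to a file outside the cluster.
# "direktiv-instance1-wp2g-0" is an example pod name; replace it with the pod
# actually running in your cluster.
kubectl exec -n postgres direktiv-instance1-wp2g-0 -- pg_dump -U postgres direktiv > direktiv-backup.sql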
The architecture above is not a production deployment. It is intended solely for PoC use, as the database and Direktiv components all reside on a single virtual machine instance.
Connectivity / Firewall Requirements
The following requirements exist for the successful deployment of Direktiv:
Initial Installation Requirements
The following network configurations are critical; all components need to have the following services configured and working correctly:
- Firewall: firewalls on the VMs need to be disabled, or the ports described in the tables below need to be opened (an example of opening the ports follows).
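For example, on an Ubuntu VM where the firewall stays enabled, the ports from the Running Requirements table could be opened with ufw. This is a sketch only; adapt it to the firewall product actually in use:

sudo ufw allow 443/tcp        # Direktiv UI & API access
sudo ufw allow 6443/tcp       # Kubernetes API server
sudo ufw allow 10250/tcp      # Kubelet metrics
sudo ufw allow 2379:2380/tcp  # etcd traffic (HA)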
The tables below capture all of the external connectivity requirements for initial deployment.
Source | Destination | Protocol / Port | Notes |
---|---|---|---|
JumpHost | download.docker.com, registry-1.docker.io, cloudfront.net, production.cloudflare.docker.com | HTTPS | Initial Docker install on the VM. If no access is allowed, download the Docker package from https://download.docker.com/linux/ubuntu/dists/ and install using https://docs.docker.com/engine/install/ubuntu/#install-from-a-package |
JumpHost | raw.githubusercontent.com/helm | HTTPS | Initial helm installer |
JumpHost | get.helm.sh/ | HTTPS | Helm package download. If no access is allowed, download the helm package from https://github.com/helm/helm/releases and install it manually using https://helm.sh/docs/intro/install/ |
JumpHost | docker.io/smallstep/step-cli | HTTPS | Used to create the certificates used by Linkerd |
JumpHost | chart.direktiv.io | HTTPS | Helm chart used for database install and Direktiv application install (OPTIONAL) |
JumpHost | registry.developers.crunchydata.com, prod.developers.crunchydata.com | HTTPS | Database install for Postgresql (OPTIONAL) |
JumpHost | githubusercontent.com/direktiv/direktiv | HTTPS | Configure Direktiv database from pre-configured settings (OPTIONAL) |
JumpHost / K8S Cluster IP | ghcr.io/projectcontour | HTTPS | Knative |
JumpHost / K8S Cluster IP | docker.io/envoyproxy | HTTPS | Knative |
JumpHost / K8S Cluster IP | gcr.io/knative-releases/ | HTTPS | Knative |
JumpHost / K8S Cluster IP | k8s.gcr.io/ingress-nginx/ | HTTPS | NGINX install (ingress load balancer, part of Direktiv helm chart) |
JumpHost / K8S Cluster IP | docker.io/jimmidyson/configmap-reload | HTTPS | Configmap reload for Kubernetes install (part of Direktiv helm chart) |
JumpHost / K8S Cluster IP | quay.io/prometheus | HTTPS | Prometheus install (part of Direktiv helm chart) |
JumpHost / K8S Cluster IP | docker.io/direktiv | HTTPS | Direktiv installation |
Running Requirements
There are several running firewall/connectivity requirements to ensure the ongoing operation of the environment. The tables below capture all of the external connectivity requirements for an operational state:
Source | Destination | Protocol / Port | Notes |
---|---|---|---|
Direktiv IP | direktiv.azurecr.io | HTTPS | Pulling Direktiv pre-built containers |
Direktiv IP | hub.docker.com/direktiv | HTTPS | Pulling Direktiv pre-built containers |
JumpHost | Direktiv VM | TCP: 443 / 6443 / 10250 / 2379-2380 | Kubernetes API Server, Kubelet metrics, HA |
Client | Direktiv IP | TCP: 443 | Direktiv UI & API access |
Installation Instructions
JumpHost Proxy Settings
If the environment requires a proxy to be configured, the following settings can be applied on the JumpHost. You can set up a permanent proxy for a single user by editing the ~/.bashrc file:
- First, log in to your Ubuntu system as the user for whom you want to set the proxy.
- Next, open the terminal and edit the ~/.bashrc file as shown below:
vi ~/.bashrc
- Add the following lines at the end of the file that matches your proxy server:
export http_proxy=username:password@proxy-server-ip:8080
export https_proxy=username:password@proxy-server-ip:8082
export ftp_proxy=username:password@proxy-server-ip:8080
export no_proxy=localhost,127.0.0.1
- Save and close the file when you are finished.
- Then to activate your new proxy settings for the current session, use the following command:
source ~/.bashrc
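To confirm the proxy variables are active in the current session, a quick check (assuming outbound HTTPS is allowed through the proxy) is:

echo $http_proxy $https_proxy        # should print the values set above
curl -I https://download.docker.com  # curl honors http_proxy/https_proxy automatically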
Install Docker (Jumphost)
Install Docker on the single VM deployment using the following steps.
For the latest installation instructions per distro please review: https://docs.docker.com/engine/install/
Uninstall old versions
Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:
sudo apt-get remove docker docker-engine docker.io containerd runc
It’s OK if apt-get reports that none of these packages are installed.
Set up the repository
- Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
- Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- Use the following command to set up the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker Engine
- Update the apt package index, and install the latest version of Docker Engine, containerd, and Docker Compose, or go to the next step to install a specific version:
sudo apt-get update
- Then install the packages:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
- Verify that Docker Engine is installed correctly by running the hello-world image. This might not work, depending on whether connectivity can be established to https://hub.docker.com
sudo docker run hello-world
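If hub.docker.com is not reachable, the installation can still be verified offline; both commands below only talk to the local Docker daemon:

sudo docker version   # client and server versions
sudo docker info      # daemon configuration and storage driver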
Install AWS CLI (Jumphost)
The AWS command-line interface is NOT installed by default. The AWS CLI is used to collect the Kubernetes cluster credentials needed to access the cluster.
NOTE: Install the AWS CLI by following the steps below, or refer to: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- Use the curl command to download the installer. The -o option specifies the file name that the downloaded package is written to. The following example command writes the downloaded file to the current directory with the local name awscliv2.zip:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
- Unzip the installer. If your Linux distribution doesn't have a built-in unzip command, use an equivalent to unzip it. The following example command unzips the package and creates a directory named aws under the current directory:
unzip awscliv2.zip
- Run the install program. The installation command uses a file named install in the newly unzipped aws directory. By default, the files are all installed to /usr/local/aws-cli, and a symbolic link is created in /usr/local/bin. The command includes sudo to grant write permissions to those directories:
sudo ./aws/install
- Confirm the installation with the following command:
# aws --version
aws-cli/2.7.27 Python/3.9.11 Linux/5.4.0-1024-aws exe/x86_64.ubuntu.20 prompt/off
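The CLI also needs credentials for an account permitted to create the EKS resources used in the following sections. A minimal interactive setup is shown below; the key values are placeholders:

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: ap-southeast-2
# Default output format [None]: json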
Install helm (Jumphost)
helm is used to install Direktiv and Knative components.
NOTE: For the latest installation instructions per distro, please refer to: https://helm.sh/docs/intro/install/
To install helm, the following steps need to be followed:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
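To confirm the installation, print the client version:

helm version --short   # exact version depends on the release installed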
Install kubectl (Jumphost)
kubectl is used to manage the Kubernetes cluster.
NOTE: For the latest installation instructions per distro, please refer to: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
To install kubectl, the following steps need to be followed:
- Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
- Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
- Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
- Update apt package index with the new repository and install kubectl:
sudo apt-get update
sudo apt-get install -y kubectl
- Verify the installation:
# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2022-08-23T17:44:59Z"
  compiler: gc
  gitCommit: a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2
  gitTreeState: clean
  gitVersion: v1.25.0
  goVersion: go1.19
  major: "1"
  minor: "25"
  platform: linux/amd64
kustomizeVersion: v4.5.7
Deploy AWS EKS (Jumphost)
The following steps show how to deploy a default AWS EKS service.
NOTE: For updated deployment steps on AWS, please refer to: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
AWS VPC
Create an Amazon VPC with public and private subnets that meets Amazon EKS requirements. Using the AWS CLI installed on the jumphost, run the following command:
aws cloudformation create-stack \
  --region region-code \
  --stack-name my-eks-vpc-stack \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
where:
- region-code is the region-code for the AWS environment, and
- my-eks-vpc-stack is the name of the VPC stack to create
Below is an example of creating the VPC stack in the ap-southeast-2 region with the name direktiv-vpc-stack:
# aws cloudformation create-stack --region ap-southeast-2 --stack-name direktiv-vpc-stack --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
{
    "StackId": "arn:aws:cloudformation:ap-southeast-2:338328518639:stack/direktiv-vpc-stack/5f586bf0-2956-11ed-8217-02614d89b2ac"
}
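Stack creation takes a few minutes. To block until it finishes, or to poll its status, the standard CloudFormation commands can be used (region and stack name as in the example above):

aws cloudformation wait stack-create-complete --region ap-southeast-2 --stack-name direktiv-vpc-stack
aws cloudformation describe-stacks --region ap-southeast-2 --stack-name direktiv-vpc-stack \
  --query 'Stacks[0].StackStatus'   # expect "CREATE_COMPLETE"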
AWS IAM Roles
The first thing to check before building an AWS Kubernetes cluster is the IAM roles and access configurations. There are two roles that must be configured before continuing:
- Cluster IAM role: role and permissions for the EKS cluster deployment and management
- Node IAM role: role and permissions for deploying and managing the nodes within the EKS cluster
Cluster IAM Role Creation (Jumphost)
Create a cluster IAM role and attach the required Amazon EKS IAM managed policy to it. Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service.
- Copy the following contents to a file named eks-cluster-role-trust-policy.json:
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }' > eks-cluster-role-trust-policy.json
- Create the Cluster IAM role:
aws iam create-role \
  --role-name direktivAmazonEKSClusterRole \
  --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
Produces the output:
{ "Role": { "Path": "/", "RoleName": "direktivAmazonEKSClusterRole", "RoleId": "AROAU5RPTP7X2M5MEPUJ4", "Arn": "arn:aws:iam::338328518639:role/direktivAmazonEKSClusterRole", "CreateDate": "2022-08-31T18:07:05+00:00", "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "eks.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } } }
- Attach the required Amazon EKS managed IAM policy to the role:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \
  --role-name direktivAmazonEKSClusterRole
Node IAM Role Creation (Jumphost)
Create a node IAM role and attach the required Amazon EKS IAM managed policy to it. The Amazon EKS node kubelet daemon makes calls to AWS APIs on your behalf. Nodes receive permissions for these API calls through an IAM instance profile and associated policies.
- Run the following command to create the node-role-trust-policy.json:
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }' > node-role-trust-policy.json
- Create the Node IAM role:
aws iam create-role \
  --role-name direktivAmazonEKSNodeRole \
  --assume-role-policy-document file://"node-role-trust-policy.json"
Produces the output:
{ "Role": { "Path": "/", "RoleName": "direktivAmazonEKSNodeRole", "RoleId": "AROAU5RPTP7X36ABSCUWY", "Arn": "arn:aws:iam::338328518639:role/direktivAmazonEKSNodeRole", "CreateDate": "2022-08-31T18:13:57+00:00", "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } } }
- Attach the required managed IAM policies to the role:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  --role-name direktivAmazonEKSNodeRole
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --role-name direktivAmazonEKSNodeRole
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --role-name direktivAmazonEKSNodeRole
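As a quick sanity check, the attached policies can be listed; all three policy ARNs above should appear in the output:

aws iam list-attached-role-policies --role-name direktivAmazonEKSNodeRole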
AWS EKS Cluster Build
Create the EKS cluster and worker nodes. Please note the following very important check:
CAUTION: To ensure that a public external IP address is assigned, please ensure that AWS Elastic IPs are available.
- Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters. Make sure that the AWS Region shown in the upper right of your console is the AWS Region that you want to create your cluster in. If it's not, choose the dropdown next to the AWS Region name and choose the AWS Region that you want to use.
- On the left hand menu, select “Clusters”
- In the configuration pane, select “Create cluster”
- Add the following options to the first configuration section:
- Name: direktiv-cluster
- Default Kubernetes version: select the default option
- Cluster service role: the role created in the pre-configuration steps (in our case direktivAmazonEKSClusterRole)
- Turn on envelope encryption: Disabled
- Tagging: optional and customer specific
- Select “Next”
- Add the following options to the networking configuration section:
- VPC: This should be the VPC created by the CloudFormation command at the start (direktiv-vpc-stack)
- Subnets: subnets are populated by default for the VPC stack
- Security groups: select all the security groups (include “default”)
- Cluster IP address family: IPv4 only
- Cluster endpoint access: Public and Private (if only Private is selected, make sure the Jumphost is in the same VPC)
- Amazon VPC CNI: default
- CoreDNS Build: default
- Kube-proxy: default
- Control plane logging: None selected
- Select “Create”
- Once the Cluster Status is “Active” (creation takes roughly 13 minutes), continue with the following steps:
- Navigate to the “Compute” -> “Node Groups”
- Select “Add node group”
- On the node group create window:
- Name: direktiv-node-group
- Node IAM Role: direktivAmazonEKSNodeRole
- Node group compute configuration:
- AMI Type: Amazon Linux 2 type (or whatever is preferred)
- Capacity: On-demand
- Instance types: See table below
- Disk size: 40GB
- Desired state: See table below
- Minimum size: See table below
- Maximum size: See table below
- Leave everything else as-is
- Specify networking:
- Subnets: this should be all the VPC subnets you selected previously
- Configure SSH access to the nodes if required
- Select “Create”
- This will start the node group creation
Please note: the node group configuration can be changed based on individual customer needs. The table below is a guideline ONLY for a medium sized Direktiv deployment:
Environment | Node size | Scale method | Node count |
---|---|---|---|
PoC / Development | 2 vCPUs, 8GB (Standard t3.large) | Autoscale | 1 - 2 |
Production | 2 vCPUs, 8GB (Standard t3.large) | Autoscale | 3+ |
Production | 2 vCPUs, 4 GB (Standard t3.medium) if using an additional node pool (see below) | Autoscale | 2 - 3 |
Production | 2 vCPUs, 8 GB (Standard t3.large) if using an additional node pool (see below) | Autoscale | 3+ |
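For reference, an equivalent node group can also be created from the jumphost with the AWS CLI instead of the console. The sketch below uses the names from this guide and the PoC sizing from the table above; the subnet IDs and account ID are placeholders that must be replaced with real values:

aws eks create-nodegroup \
  --region ap-southeast-2 \
  --cluster-name direktiv-cluster \
  --nodegroup-name direktiv-node-group \
  --node-role arn:aws:iam::<account-id>:role/direktivAmazonEKSNodeRole \
  --subnets subnet-<id-1> subnet-<id-2> \
  --instance-types t3.large \
  --disk-size 40 \
  --scaling-config minSize=1,maxSize=2,desiredSize=1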
Install Linkerd
For the installation of Linkerd and the rest of the open source components, the KUBECONFIG environment variable needs to be set:
- Create or update a kubeconfig file for your cluster using the AWS CLI installed previously. Replace region-code with the AWS Region the cluster is in. Replace my-cluster with the name of your cluster:
aws eks update-kubeconfig --region region-code --name my-cluster
- Example output is shown below:
aws eks update-kubeconfig --region ap-southeast-2 --name direktiv-cluster
Added new context arn:aws:eks:ap-southeast-2:338328518639:cluster/direktiv-cluster to /home/ubuntu/.kube/config
- Set the KUBECONFIG environment variable:
echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc
- Then, to activate the new environment variable for the current session, use the following command:
source ~/.bashrc
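At this point kubectl should be able to reach the cluster. The nodes created by the node group should report a Ready status:

kubectl get nodes   # all nodes should show STATUS "Ready"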
- Linkerd is used to secure the data between pods with mTLS. Generate certificates with the following command:
$ certDir=$(exe='step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure \
  && step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 87600h --no-password --insecure \
  --ca ca.crt --ca-key ca.key'; \
  sudo docker run --mount "type=bind,src=$(pwd),dst=/home/step" -i smallstep/step-cli /bin/bash -c "$exe"; \
  echo $(pwd));
- The command stores four files in /tmp, and their permissions need to be changed to make them readable by the current user (they are only accessible by root after creation; see the example after this list):
- /tmp/ca.crt
- /tmp/ca.key
- /tmp/issuer.crt
- /tmp/issuer.key
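A minimal sketch for the permission change, assuming the generation command above was run from /tmp so that the files landed there:

cd /tmp
sudo chown $(id -u):$(id -g) ca.crt ca.key issuer.crt issuer.key   # hand ownership to the current user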
- Install Linkerd CRDs:
helm repo add linkerd https://helm.linkerd.io/stable
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
- Install Linkerd:
helm install linkerd-control-plane \
  -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  linkerd/linkerd-control-plane --wait
- Create namespaces and annotate them; all annotated namespaces will use Linkerd. The following command creates the direktiv namespace and annotates it for Linkerd:
kubectl create namespace direktiv
kubectl annotate ns --overwrite=true direktiv linkerd.io/inject=enabled
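To confirm the annotation took effect before installing anything into the namespace:

kubectl get namespace direktiv -o yaml | grep linkerd.io/inject   # expect "linkerd.io/inject: enabled"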
Install PostgreSQL Database
We are using the CrunchyData Postgres operator for PoCs.
- To install the operator, add the chart:
$ helm repo add direktiv https://chart.direktiv.io
"direktiv" has been added to your repositories
- Install the operator:
$ helm install -n postgres --create-namespace postgres direktiv/pgo
NAME: postgres
LAST DEPLOYED: Wed Jun 14 23:02:52 2023
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for deploying PGO v5.3.1!
- After that, install the actual database for Direktiv:
$ kubectl apply -f https://raw.githubusercontent.com/direktiv/direktiv/main/kubernetes/install/db/basic.yaml
- This creates a 1GB database, which should be sufficient for a PoC. Wait until all pods in the postgres namespace are up and running:
# kubectl get pods -n postgres
NAME                         READY   STATUS    RESTARTS   AGE
direktiv-backup-6qjs-fd9g2   1/1     Running   0          15s
direktiv-instance1-wp2g-0    3/3     Running   0          105s
direktiv-repo-host-0         1/1     Running   0          104s
pgo-8659687c97-bsqcq         1/1     Running   0          13m
Install Knative
The following steps install Knative:
- Add the Knative operator:
$ kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.9.4/operator.yaml
- Install the Knative serving components:
$ kubectl create ns knative-serving
$ kubectl apply -f https://raw.githubusercontent.com/direktiv/direktiv/main/kubernetes/install/knative/basic.yaml
- Direktiv supports Contour as its network component. Install Contour:
$ kubectl apply --filename https://github.com/knative/net-contour/releases/download/knative-v1.9.3/contour.yaml
- Delete Contour External:
$ kubectl delete namespace contour-external
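Before moving on, it is worth confirming the serving components are healthy; all pods in the knative-serving namespace should eventually reach Running:

$ kubectl get pods -n knative-serving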
Install Direktiv
- Create a file with the database information if no external database will be used:
$ echo "database: host: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "host"}}' | base64 --decode)\" port: $(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "port"}}' | base64 --decode) user: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "user"}}' | base64 --decode)\" password: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "password"}}' | base64 --decode)\" name: \"$(kubectl get secrets -n postgres direktiv-pguser-direktiv -o 'go-template={{index .data "dbname"}}' | base64 --decode)\" sslmode: require" > direktiv.yaml
- Add an API key to the direktiv.yaml file:
echo 'apikey: "12345"' >> direktiv.yaml
The following output is an example of the direktiv.yaml file:
# cat direktiv.yaml
database:
  host: "direktiv-primary.postgres.svc"
  port: 5432
  user: "direktiv"
  password: ";Pr<}[^SegXWXM/1qO07cktl"
  name: "direktiv"
  sslmode: require
ingress-nginx:
  install: false
apikey: "12345"
- Install direktiv:
$ helm install -f direktiv.yaml -n direktiv direktiv direktiv/direktiv
- The instance is available at the IP shown under “External-IP” from this command:
$ kubectl -n direktiv get services direktiv-ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
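On AWS, the NGINX load balancer service is typically published as a DNS hostname rather than a raw IP. If the command above returns nothing, reading the hostname field of the same service should work:

$ kubectl -n direktiv get services direktiv-ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'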
- The login is the configured API key.