Enterprise Edition: Azure Kubernetes Service (AKS)


Component Overview

Direktiv can be deployed in the following three configurations:

  1. Open source: a single virtual machine running a k8s environment, or a public cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.
  2. Enterprise Edition Production VM(s) only: a single virtual machine (or three virtual machines for HA purposes) running a k8s environment (or clustered across multiple VMs), hosting the Direktiv pods and the containers used for integration.
  3. Enterprise Edition Production Kubernetes: a cloud provider Kubernetes service, hosting the Direktiv pods and the containers used for integration.

The diagram below illustrates the proposed option 3 for Production deployment:


[Diagram: Proposed Enterprise Edition deployment on Azure AKS]

The AKS deployment runs on the Azure Kubernetes Service. The installation will contain the following components:

  1. A Kubernetes environment running in AKS.
  2. A single instance of the Direktiv Enterprise Edition.
  3. A single-instance PostgreSQL database.

The following components are assumed to be available:

  1. Optional: a single GitLab instance (or access to a GitHub instance), if not already available in the customer environment.
  2. Optional: a single Docker repository for storing local images (if not available in the customer environment). Alternatively, Docker Hub can be used.

Connectivity / Firewall Requirements

The following requirements exist for the successful deployment of Direktiv:

Initial Installation Requirements

The following network configurations are critical; all components need to have the following services configured and working correctly:

  • Firewall: firewalls on the VMs need to be disabled or the ports described in the table below need to be opened.

The table below captures all of the external connectivity requirements for initial deployment.


| Source | Destination | Protocol / Port | Notes |
|---|---|---|---|
| JumpHost | download.docker.com, registry-1.docker.io, cloudfront.net, production.cloudflare.docker.com | HTTPS | Initial Docker install on the VM. If no access is allowed, download the Docker package from https://download.docker.com/linux/ubuntu/dists/ and install using https://docs.docker.com/engine/install/ubuntu/#install-from-a-package |
| JumpHost | raw.githubusercontent.com/helm | HTTPS | Initial helm installer |
| JumpHost | get.helm.sh | HTTPS | Helm package download. If no access is allowed, download the helm package from https://github.com/helm/helm/releases and install it manually using https://helm.sh/docs/intro/install/ |
| JumpHost | docker.io/smallstep/step-cli | HTTPS | Used to create the certificates used by Linkerd |
| JumpHost | chart.direktiv.io | HTTPS | Helm chart used for database install and Direktiv application install (OPTIONAL) |
| JumpHost | registry.developers.crunchydata.com, prod.developers.crunchydata.com | HTTPS | Database install for PostgreSQL (OPTIONAL) |
| JumpHost | githubusercontent.com/direktiv/direktiv | HTTPS | Configure Direktiv database from pre-configured settings (OPTIONAL) |
| JumpHost / K8S IP | ghcr.io/projectcontour | HTTPS | Knative |
| JumpHost / K8S IP | docker.io/envoyproxy | HTTPS | Knative |
| JumpHost / K8S IP | gcr.io/knative-releases/ | HTTPS | Knative |
| JumpHost / K8S IP | k8s.gcr.io/ingress-nginx/ | HTTPS | NGINX install (ingress load balancer, part of Direktiv helm chart) |
| JumpHost / K8S IP | docker.io/jimmidyson/configmap-reload | HTTPS | Configmap reload for Kubernetes install (part of Direktiv helm chart) |
| JumpHost / K8S IP | quay.io/prometheus | HTTPS | Prometheus install (part of Direktiv helm chart) |
| JumpHost / K8S IP | docker.io/direktiv | HTTPS | Direktiv installation |


Running Requirements

There are several ongoing firewall/connectivity requirements to ensure the continued operation of the environment. The table below captures all of the external connectivity requirements for an operational state:


| Source | Destination | Protocol / Port | Notes |
|---|---|---|---|
| Direktiv IP | direktiv.azurecr.io | HTTPS | Pulling Direktiv pre-built containers |
| Direktiv IP | hub.docker.com/direktiv | HTTPS | Pulling Direktiv pre-built containers |
| JumpHost | Direktiv VM | TCP: 443 / 6443 / 10250 / 2379-2380 | Kubernetes API server, kubelet metrics, HA |
| Client | Direktiv IP | TCP: 443 | Direktiv UI & API access |
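
If host firewalls remain enabled on the VMs, the ports above need to be opened explicitly. Below is a minimal sketch using ufw on Ubuntu, covering the Kubernetes ports from the table; adjust the rules to the source networks in use:

# Direktiv UI & API access and Kubernetes API server
sudo ufw allow 443/tcp
sudo ufw allow 6443/tcp
# Kubelet metrics
sudo ufw allow 10250/tcp
# etcd (HA)
sudo ufw allow 2379:2380/tcp
sudo ufw reload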

Installation Instructions

JumpHost Proxy Settings

If the environment requires a proxy to be configured, the following settings can be applied on the JumpHost. You can set up a permanent proxy for a single user by editing the ~/.bashrc file:

  • First, log in to your Ubuntu system as the user you want to set the proxy for.
  • Next, open the terminal interface and edit the ~/.bashrc file as shown below:
vi ~/.bashrc
  • Add the following lines at the end of the file, adjusted to match your proxy server:
export http_proxy="http://username:password@proxy-server-ip:8080"
export https_proxy="http://username:password@proxy-server-ip:8082"
export ftp_proxy="http://username:password@proxy-server-ip:8080"
export no_proxy="localhost,127.0.0.1"
  • Save and close the file when you are finished.
  • Then to activate your new proxy settings for the current session, use the following command:
source ~/.bashrc
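
To confirm that the proxy settings are active for the current session, a quick check such as the following can be used (assuming the proxy permits outbound HTTPS to the Docker download site):

# Show the proxy variables currently set
env | grep -i proxy
# Fetch response headers through the proxy
curl -I https://download.docker.com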

Install Docker (Jumphost)

Install Docker on the Jumphost using the following steps.

For the latest installation instructions per distro please review: https://docs.docker.com/engine/install/

Uninstall old versions

Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:

sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

Set up the repository

  • Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
  • Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  • Use the following command to set up the repository:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

  • Update the apt package index:
sudo apt-get update
  • Install the latest version of Docker Engine, containerd, and Docker Compose:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
  • Verify that Docker Engine is installed correctly by running the hello-world image. This might not work, depending on whether connectivity can be established to https://hub.docker.com
sudo docker run hello-world 
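
If the hello-world image cannot be pulled, the installation can still be verified locally without registry access:

# Confirm client and daemon versions
sudo docker version
# Confirm the Docker service is active
sudo systemctl status docker --no-pager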

Install Azure CLI (Jumphost)

The Azure command-line interface (CLI) is NOT installed by default. The Azure CLI is used to retrieve the Kubernetes cluster credentials needed to access the cluster.

NOTE: For the latest installation instructions, please refer to:
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
  • Install the Azure CLI client on an Ubuntu machine:
sudo apt install azure-cli
  • Authenticate the Azure CLI client with your credentials. This will also store the credentials for all subsequent commands used:
az login
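
To confirm the login succeeded and that the intended subscription is active, the following can be used (the subscription name is a placeholder):

az account show --output table
az account set --subscription "<subscription-name-or-id>"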

Install helm (Jumphost)

helm is used to install Direktiv and Knative components.

NOTE: For the latest installation instructions per distro, please refer to: https://helm.sh/docs/intro/install/

To install helm, the following steps need to be followed:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
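
Verify the helm installation (the reported version will vary):

helm version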

Install kubectl (Jumphost)

kubectl is used to manage the Kubernetes cluster.

NOTE: For the latest installation instructions per distro, please refer to: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

To install kubectl, the following steps need to be followed:

  • Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
  • Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  • Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  • Update apt package index with the new repository and install kubectl:
sudo apt-get update
sudo apt-get install -y kubectl
  • Verify the installation:
# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2022-08-23T17:44:59Z"
  compiler: gc
  gitCommit: a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2
  gitTreeState: clean
  gitVersion: v1.25.0
  goVersion: go1.19
  major: "1"
  minor: "25"
  platform: linux/amd64
kustomizeVersion: v4.5.7

Deploy Azure AKS (Jumphost)

The following steps show how to deploy a default Azure AKS service. 

NOTE: The first thing to check before building an Azure Kubernetes cluster is your subscription type. For the purpose of this document, we’ll assume that the subscription type is “Pay-As-You-Go”. This implies that there are no resource restrictions and adequate subscription capacity to deploy the Kubernetes cluster and node resources.

Azure Resource Group

An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are prompted to specify a location. This location is:

  • The storage location of your resource group metadata.
  • Where your resources will run in Azure if you don't specify another region during resource creation.

The following command creates a resource group:

az group create --name <resource-group> --location <location>

where:

  • resource-group is the resource group name, and
  • location is the location of the Azure instance (see the output from command az account list-locations)

Below is an example of creating the resource group in the westus2 region with the name direktivResourceGroup:


az group create --name direktivResourceGroup --location westus2
{
  "id": "/subscriptions/40eb3cb3-a114-4cd1-b584-5bbedd2126a2/resourceGroups/direktivResourceGroup",
  "location": "westus2",
  "managedBy": null,
  "name": "direktivResourceGroup",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

Azure AKS Cluster Build

Create the AKS cluster and worker nodes.

  • On the Azure Portal, select “Kubernetes services”
  • On the “Create” drop-down in the middle of the page, select “Create a Kubernetes cluster”
  • The following options are dependent on the subscription type and the type of environment Direktiv will run in, but for the purposes of a PoC, select the following options:
    • Select the “Subscription” for which the cluster has resources available (Pay-As-You-Go in the case of this deployment)
    • Resource Group: direktivResourceGroup (created in the previous section)
    • Cluster preset configuration: Standard ($$)
    • Kubernetes cluster name: direktiv-cluster
    • Region: US-West-2 <select whatever region was set during Resource Group creation>
    • Availability zones: None (PoC), All available (Production)
    • Kubernetes version: 1.22.11 (default) - select the default version
    • API server availability: 99.5% for PoC, 99.9% for Production
    • Node size: minimum recommended combinations are shown in the table below.
Please note: the node group configuration can be changed based on individual customer needs. The table below is a guideline ONLY for a medium-sized Direktiv deployment:

| Environment | Node size | Scale method | Node count |
|---|---|---|---|
| PoC / Development | 4 vCPUs, 16 GB (Standard B4ms) | Autoscale | 1 - 2 |
| Production | 4 vCPUs, 16 GB (Standard B4ms) | Autoscale | 3+ |
| Production | 2 vCPUs, 7 GB (Standard DS2_v2) if using an additional node pool (see below) | Autoscale | 2 - 3 |
| Node Pool | 2 vCPUs, 7 GB (Standard DS2_v2) if using an additional node pool (see below) | Autoscale | 3+ |


  • Node pools:
    • Create a node pool based on the settings in the table above
    • Enable virtual nodes: Unchecked
    • Node pool OS disk encryption: (Default) Encryption at-rest with a platform-managed key
  • Access:
    • Authentication & Authorization: Local accounts with Kubernetes RBAC (PoC), whatever the organisation uses (Production)
  • Networking:
    • Network configuration: Kubenet
    • DNS name prefix: Generated automatically, but can be changed to suit
    • Load balancer: Standard
    • Enable HTTP application routing: Unchecked
    • Enable private cluster: Unchecked
    • Set authorised IP ranges: Unchecked (PoC), Production implementation will have this set to allow (at minimum) internal ranges and Jumphost range access
    • Network policy: None (PoC), Calico (Production)
  • Integrations:
    • Container registry: None
    • Container monitoring: Disabled
    • Azure Policy: Disabled
  • Advanced:
    • Enable secret store CSI driver: Unchecked
  • Tags:
    • Tags: <whatever the organisation uses>
  • Review and create:
    • Create
The deployment process will start. Depending on the complexity of the setup and the number of nodes and node pools, this typically takes approximately 15 minutes (the example deployment started at 15:13 and completed at 15:27).
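
For reference, a comparable cluster can also be created from the Jumphost with the Azure CLI instead of the portal. The following is a minimal sketch only, using the resource group and PoC sizing from the table above; adjust the flags to suit the target environment:

# Create an AKS cluster with an autoscaling system node pool
az aks create \
  --resource-group direktivResourceGroup \
  --name direktiv-cluster \
  --node-count 2 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 2 \
  --node-vm-size Standard_B4ms \
  --network-plugin kubenet \
  --generate-ssh-keys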



Once the deployment is completed, click the “Connect to cluster” button. This will present you with all the steps necessary to connect to your cluster from your Linux Jumphost:




Below is the template used for the proof-of-concept deployment:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
      "resourceName": {
          "type": "string",
          "metadata": {
              "description": "The name of the Managed Cluster resource."
          }
      },
      "location": {
          "type": "string",
          "metadata": {
              "description": "The location of AKS resource."
          }
      },
      "dnsPrefix": {
          "type": "string",
          "metadata": {
              "description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
          }
      },
      "osDiskSizeGB": {
          "type": "int",
          "defaultValue": 0,
          "metadata": {
              "description": "Disk size (in GiB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize."
          },
          "minValue": 0,
          "maxValue": 1023
      },
      "kubernetesVersion": {
          "type": "string",
          "defaultValue": "1.7.7",
          "metadata": {
              "description": "The version of Kubernetes."
          }
      },
      "networkPlugin": {
          "type": "string",
          "allowedValues": [
              "azure",
              "kubenet"
          ],
          "metadata": {
              "description": "Network plugin used for building Kubernetes network."
          }
      },
      "enableRBAC": {
          "type": "bool",
          "defaultValue": true,
          "metadata": {
              "description": "Boolean flag to turn on and off of RBAC."
          }
      },
      "vmssNodePool": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Boolean flag to turn on and off of virtual machine scale sets"
          }
      },
      "windowsProfile": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Boolean flag to turn on and off of virtual machine scale sets"
          }
      },
      "nodeResourceGroup": {
          "type": "string",
          "metadata": {
              "description": "The name of the resource group containing agent pool nodes."
          }
      },
      "adminGroupObjectIDs": {
          "type": "array",
          "defaultValue": [],
          "metadata": {
              "description": "An array of AAD group object ids to give administrative access."
          }
      },
      "azureRbac": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Enable or disable Azure RBAC."
          }
      },
      "disableLocalAccounts": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Enable or disable local accounts."
          }
      },
      "enablePrivateCluster": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Enable private network access to the Kubernetes cluster."
          }
      },
      "enableHttpApplicationRouting": {
          "type": "bool",
          "defaultValue": true,
          "metadata": {
              "description": "Boolean flag to turn on and off http application routing."
          }
      },
      "enableAzurePolicy": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Boolean flag to turn on and off Azure Policy addon."
          }
      },
      "enableSecretStoreCSIDriver": {
          "type": "bool",
          "defaultValue": false,
          "metadata": {
              "description": "Boolean flag to turn on and off secret store CSI driver."
          }
      }
  },
  "resources": [
      {
          "apiVersion": "2022-06-01",
          "dependsOn": [],
          "type": "Microsoft.ContainerService/managedClusters",
          "location": "[parameters('location')]",
          "name": "[parameters('resourceName')]",
          "properties": {
              "kubernetesVersion": "[parameters('kubernetesVersion')]",
              "enableRBAC": "[parameters('enableRBAC')]",
              "dnsPrefix": "[parameters('dnsPrefix')]",
              "nodeResourceGroup": "[parameters('nodeResourceGroup')]",
              "agentPoolProfiles": [
                  {
                      "name": "agentpool",
                      "osDiskSizeGB": "[parameters('osDiskSizeGB')]",
                      "count": 2,
                      "enableAutoScaling": true,
                      "minCount": 1,
                      "maxCount": 2,
                      "vmSize": "Standard_DS2_v2",
                      "osType": "Linux",
                      "storageProfile": "ManagedDisks",
                      "type": "VirtualMachineScaleSets",
                      "mode": "System",
                      "maxPods": 110,
                      "availabilityZones": null,
                      "nodeTaints": [],
                      "enableNodePublicIP": false,
                      "tags": {}
                  }
              ],
              "networkProfile": {
                  "loadBalancerSku": "standard",
                  "networkPlugin": "[parameters('networkPlugin')]"
              },
              "disableLocalAccounts": "[parameters('disableLocalAccounts')]",
              "apiServerAccessProfile": {
                  "enablePrivateCluster": "[parameters('enablePrivateCluster')]"
              },
              "addonProfiles": {
                  "httpApplicationRouting": {
                      "enabled": "[parameters('enableHttpApplicationRouting')]"
                  },
                  "azurepolicy": {
                      "enabled": "[parameters('enableAzurePolicy')]"
                  },
                  "azureKeyvaultSecretsProvider": {
                      "enabled": "[parameters('enableSecretStoreCSIDriver')]",
                      "config": null
                  }
              }
          },
          "tags": {},
          "sku": {
              "name": "Basic",
              "tier": "Free"
          },
          "identity": {
              "type": "SystemAssigned"
          }
      }
  ],
  "outputs": {
      "controlPlaneFQDN": {
          "type": "string",
          "value": "[reference(concat('Microsoft.ContainerService/managedClusters/', parameters('resourceName'))).fqdn]"
      }
  }
}
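
If the template is used directly instead of the portal wizard, it can be deployed with the Azure CLI. Below is a sketch assuming the template above is saved as template.json; the parameter values are examples only:

az deployment group create \
  --resource-group direktivResourceGroup \
  --template-file template.json \
  --parameters resourceName=direktiv-cluster location=westus2 \
    dnsPrefix=direktiv-cluster-dns networkPlugin=kubenet \
    nodeResourceGroup=MC_direktivResourceGroup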

Install Direktiv Enterprise Edition (Jumphost)

The Direktiv Enterprise Edition is a commercial platform for running Direktiv. Access to the installation files is restricted and requires a license, as well as access to the repository download section (via GitHub or the Direktiv Service Portal).


For the installation of Direktiv, the KUBECONFIG environment variable needs to be set:

  • Create or update a kubeconfig file for your cluster using the Azure CLI previously installed. Replace <resource-group> with the Azure Resource Group created and <aks-cluster-name> with the name of the AKS cluster:
az aks get-credentials --resource-group <resource-group> --name <aks-cluster-name>

            Example output is shown below:

# az aks get-credentials --resource-group direktivResourceGroup --name direktiv-cluster
Merged "direktiv-cluster" as current context in /home/wwonigkeit/.kube/config
  • Set the KUBECONFIG environment variable:
echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc
  • Then, to activate the new environment variable for the current session, use the following command:
source ~/.bashrc
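
  • Optionally, verify that the cluster is reachable before continuing (node names and versions will differ per deployment):
kubectl get nodes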

  • Download and extract the Direktiv installer. It provides a basic configuration for all components needed for Direktiv. It is assumed that the person installing this has access to the direktiv-ee GitHub repository:
tar -xvf direktiv-ee.tar.gz
cd direktiv-ee/install
  • Set the following environmental variables:
    • DIREKTIV_HOST: sets the FQDN for the Direktiv instance.
    • DIREKTIV_DEV: if this environment variable has a value, the installation uses the localhost:5000 variant of the Direktiv images.
    • DIREKTIV_TOKEN: if this environment variable has a value, the installation uses it as the admin API key (this allows for elevated API admin access).
export DIREKTIV_HOST="dev.direktiv.io"
export DIREKTIV_DEV="localhost:5000"
export DIREKTIV_TOKEN="KxTp8d_gj5yj$%mPkvFY=f2rPN2K6N?F"
  • Run the script to prepare the installation:
./prepare.sh
  • The above command generates the installation YAML files. If a proxy is used, configure the following files with the appropriate settings:
~/direktiv-ee/install/05_knative# cat knative.yaml
# -- Knative version
version: 1.8.0
# -- Knative replicas
replicas: 1
# -- Proxy settings for controller
http_proxy: ""
https_proxy: ""
no_proxy: ""
# -- Custom certificate for controller. This needs to be a secret created before installation in the knative-serving namespace
certificate: ""
# -- Repositories which skip digest resolution
skip-digest: kind.local,ko.local,dev.local,localhost:5000,localhost:31212
  • Direktiv also needs to have the proxy settings configured:
~/direktiv-ee/install/06_direktiv# cat direktiv.yaml
pullPolicy: Always
debug: "false"

proxy:
  no-proxy: ""
  http-proxy: ""
  https-proxy: ""

database:
  replicas: 1
  image: "direktiv/ui-ee"

ui:
  replicas: 1
  image: "direktiv/ui-ee"

api:
  replicas: 1
  image: "direktiv/api-ee"
  additionalEnvs:
  - name: DIREKTIV_ROLE_ADMIN
    value: admin
  - name: DIREKTIV_TOKEN_SECRET
    valueFrom:
      secretKeyRef:
        name: tokensecret
        key: tokensecret
  - name: DIREKTIV_ADMIN_SECRET
    valueFrom:
      secretKeyRef:
        name: adminsecret
        key: adminsecret
  • Certificates are required for the DIREKTIV_HOST URL selected in the previous step. Direktiv will use self-signed certificates if none are provided, but this is not recommended. To run the installation with certificates (for example, for *.direktiv.io), replace the following 2 files with the server.key and server.crt files for the respective domain:
~/direktiv-ee/install/04_keycloak/certs# ls -l
total 28
-rw-r--r-- 1 root root 7124 Jan 30 04:44 direktiv.io.crt
-rwx------ 1 root root 1708 Jan 30 04:44 direktiv.io.key
drwxr-xr-x 2 root root 4096 Jan 30 05:16 keycloak
-r-------- 1 root root 7124 Jan 30 06:01 server.crt
-r-------- 1 root root 1708 Jan 30 06:01 server.key
~/direktiv-ee/install/04_keycloak/certs#
~/direktiv-ee/install/04_keycloak/certs# cp direktiv.io.crt server.crt
~/direktiv-ee/install/04_keycloak/certs# cp direktiv.io.key server.key
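  • Optionally, confirm that the certificate and key match before running the installer; both commands should print the same digest (this check assumes an RSA key):
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5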
  • Run the installer:
./install-all.sh
  • Wait until all pods are up and running; this can take a long time if the network is slow. The status can be verified using the following command:
# kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
direktiv-api-6f8567555b-pq7q8                 2/2     Running   0          2d16h
direktiv-flow-564c8fc4cc-jh5dh                3/3     Running   0          2d16h
direktiv-functions-6f6698d7fb-s7n9z           2/2     Running   0          2d16h
direktiv-prometheus-server-667b8c6d65-6nzxm   3/3     Running   0          2d16h
direktiv-ui-d947dccc-zlzxc                    2/2     Running   0          2d16h
knative-operator-58647bbfd5-w9kvc             1/1     Running   0          2d16h
operator-webhook-b866dc4c-6klqx               1/1     Running   0          2d16h
  • Add the hostname configured in the DIREKTIV_HOST environment variable to your DNS. The external IP address (or hostname for the load balancer) can be found using the following command:
kubectl get svc -n apisix apisix-gateway
  • Output shown below:
# kubectl get svc -n apisix apisix-gateway
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP                   PORT(S)                      AGE
apisix-gateway   LoadBalancer   10.43.159.35   145.40.102.235,145.40.99.47   80:30429/TCP,443:31423/TCP   2d16h
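
For a quick test before the DNS record is in place, the external IP can be mapped locally on the client machine (a temporary workaround only, using the example values from above):

echo "145.40.102.235 dev.direktiv.io" | sudo tee -a /etc/hosts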

Access Direktiv

To access Direktiv as the admin user, use the following credentials:

  • admin/password: user in admin group
  • admin/password: user in direktiv group

To access the Keycloak setup:

Next steps

The next steps in setting up the enterprise edition:
