TABLE OF CONTENTS
- Overview of the Linode Receiver
- Configuring the Linode receiver
- Installing the Linode receiver
- Testing the Linode receiver
Overview of the Linode Receiver
It’s pretty simple: the Direktiv receiver connects to the Linode API using a Personal Access Token created in the Linode console.
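Before deploying anything, it’s worth verifying the token against the Linode API directly. Here is a minimal sketch using curl and the Linode v4 account events endpoint (the endpoint and jq filter are illustrative for a sanity check, not necessarily what the receiver uses internally):

# Sanity-check a Personal Access Token against the Linode API;
# prints the most recent account event if the token is valid.
curl -s -H "Authorization: Bearer $LINODE_TOKEN" \
  https://api.linode.com/v4/account/events | jq '.data[0]'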
Configuring the Linode receiver
The receiver is installed and configured using the following YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: linode-receiver-cfg-cm
data:
  config.yaml: |
    linode:
      linodeAuthToken: <linode-token>
    direktiv:
      endpoint: https://<<direktiv-url>>/api/namespaces/<<namespace>>/broadcast
      insecureSkipVerify: true
      token: <<direktiv-token>>
      event-on-error: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linode-receiver
  labels:
    app: linode-receiver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linode-receiver
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
      labels:
        app: linode-receiver
    spec:
      volumes:
        - name: linodeconf
          configMap:
            name: linode-receiver-cfg-cm
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        runAsGroup: 65532
      containers:
        - name: linode-receiver
          image: gcr.io/direktiv/receivers/linode-receiver:1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: linodeconf
              mountPath: "/config"
              readOnly: false
The following information is needed:
- linodeAuthToken: the Personal Access Token, which can be generated from your Linode profile page.
- endpoint: your Direktiv URL and the namespace the events should be sent to (e.g. https://linode.direktiv.io/api/namespaces/linode-workflows/broadcast)
- token: your Direktiv authentication token, created at the namespace level and granted the appropriate permissions (eventSend)
For more information on permissions, see this Knowledge Base article.
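To confirm the endpoint and token are valid before the receiver starts sending events, you can hand-craft a CloudEvent and POST it to the broadcast URL yourself. A minimal sketch, assuming a bearer-style Authorization header (the exact header name and format depend on your Direktiv version, so check the API documentation):

# Hypothetical smoke test: broadcast a hand-written CloudEvent to the
# namespace. The Authorization header format is an assumption.
curl -s -X POST \
  -H "Authorization: Bearer $DIREKTIV_TOKEN" \
  -H "Content-Type: application/cloudevents+json" \
  -d '{"specversion": "1.0", "type": "token_create", "source": "direktiv/listener/linode", "id": "smoke-test-0001"}' \
  "https://linode.direktiv.io/api/namespaces/linode-workflows/broadcast"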
Installing the Linode receiver
For this blog article, I’m going to assume you have a Direktiv instance available and access to kubectl to install the receiver.
Create the YAML file as described above. In this case, I called it linode-receiver.yaml with the following configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: linode-receiver-cfg-cm
data:
  config.yaml: |
    linode:
      linodeAuthToken: 89dc3fc6b......b768b04b7224
    direktiv:
      endpoint: https://linode.direktiv.io/api/namespaces/linode-workflows/broadcast
      insecureSkipVerify: true
      token: 4.........KZn89FSk_Vslskjdi0md
      event-on-error: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linode-receiver
  labels:
    app: linode-receiver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linode-receiver
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
      labels:
        app: linode-receiver
    spec:
      volumes:
        - name: linodeconf
          configMap:
            name: linode-receiver-cfg-cm
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
        runAsGroup: 65532
      containers:
        - name: linode-receiver
          image: gcr.io/direktiv/receivers/linode-receiver:1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: linodeconf
              mountPath: "/config"
              readOnly: false
Next, simply run the following command:
kubectl apply -f linode-receiver.yaml
This will produce the following output:
configmap/linode-receiver-cfg-cm created
deployment.apps/linode-receiver created
This will install the receiver. You can verify that it’s all up and running:
# kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
direktiv-api-6f8567555b-pq7q8                 2/2     Running   0          13d
direktiv-flow-564c8fc4cc-jh5dh                3/3     Running   0          13d
direktiv-functions-6f6698d7fb-s7n9z           2/2     Running   0          13d
direktiv-prometheus-server-667b8c6d65-6nzxm   3/3     Running   0          13d
direktiv-ui-d947dccc-zlzxc                    2/2     Running   0          13d
knative-operator-58647bbfd5-w9kvc             1/1     Running   0          13d
linode-receiver-547b5fd95d-mphxx              1/1     Running   0          22s
operator-webhook-b866dc4c-6klqx               1/1     Running   0          13d
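If the receiver pod is running but events don’t show up, the container logs are the first place to look. This is standard kubectl, nothing receiver-specific (the log content itself will vary by version):

# Follow the receiver logs to confirm it can reach both the Linode API
# and the Direktiv broadcast endpoint.
kubectl logs -f deployment/linode-receiver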
Testing the Linode receiver
To make sure that we receive the events, let’s execute two actions in the Linode console and view the output.
The workflow
The workflow has a start definition that instructs Direktiv to kick off the flow whenever it receives either of two event types:
- token_create
- linode_boot
Depending on the type of event received, Direktiv will either simply print the event contents (for a token_create event) or, for a linode_boot event, query the Linode API for more information on the node:
description: A simple workflow that waits for Linode Events
start:
  type: eventsXor
  state: check-event
  events:
    - type: token_create
    - type: linode_boot
functions:
  - id: http-request
    image: gcr.io/direktiv/functions/http-request:1.0
    type: knative-workflow
states:
  - id: check-event
    log: jq(.)
    type: switch
    defaultTransition: print-event
    conditions:
      - condition: 'jq(.token_create != null)'
        transition: print-event
      - condition: 'jq(.linode_boot != null)'
        transition: print-node-info
  - id: print-node-info
    type: action
    log: jq(.)
    action:
      secrets: ["LINODE_TOKEN"]
      function: http-request
      input:
        debug: true
        url: 'jq("https://api.linode.com" + .linode_boot.data.entity.url)'
        headers:
          Authorization: "Bearer jq(.secrets.LINODE_TOKEN)"
  - id: print-event
    type: noop
    log: jq(.)
Create a new API Token
We created a new API token in the Linode console, which shows up in the console’s “Events” view:
In Direktiv, a new CloudEvent arrived at the same time:
The event type is token_create, and using a simple workflow that listens for the event and prints the details, we can see the content of the event:
{
  "token_create": {
    "data": {
      "action": "token_create",
      "entity": {
        "id": 44518363,
        "label": "new-personal-token",
        "status": "",
        "type": "token",
        "url": "/v4/profile/tokens/44518363"
      },
      "id": 429910244,
      "percent_complete": 0,
      "rate": null,
      "read": false,
      "secondary_entity": null,
      "seen": false,
      "status": "notification",
      "username": "wwonigkeit"
    },
    "id": "b4743aa2-39e9-401a-b981-4dc25eb2fae9",
    "source": "direktiv/listener/linode",
    "specversion": "1.0",
    "time": "2023-02-12T20:29:40Z",
    "traceparent": "00-d2e3b45cdd56bbc4304fe8c67d01bb06-f58741659808b5e1-00",
    "type": "token_create"
  }
}
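This payload is exactly what the check-event switch state evaluates. You can replay the same jq conditions from a shell, assuming the event above is saved locally as token-create-event.json (a hypothetical filename):

# The switch condition: non-null means the token_create branch is taken.
jq '.token_create != null' token-create-event.json
# => true

# Pull out a useful field, e.g. the label of the token that was created.
jq -r '.token_create.data.entity.label' token-create-event.json
# => new-personal-token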
Create a Linode Kubernetes cluster (example events)
In the process of creating a new Kubernetes cluster in Linode, we will receive a couple of notifications about devices booting successfully. What we want to do is print more information about the devices as they boot successfully.
In this case, I’m building a simple Kubernetes cluster in Linode as follows:
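This was done through the Linode Cloud Manager, but a rough linode-cli equivalent is sketched below (the cluster label and pool count are illustrative assumptions; the region and node type match the events shown later):

# Approximate linode-cli equivalent of the cluster build.
linode-cli lke cluster-create \
  --label direktiv-demo-cluster \
  --region us-east \
  --k8s_version 1.24 \
  --node_pools.type g6-dedicated-2 \
  --node_pools.count 3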
Monitoring the Linode console, you can see a whole flurry of events hitting the “Events” tab:
And in Direktiv, you can see the same events being received:
So let’s look at the contents of one of the linode_boot events:
{
  "linode_boot": {
    "data": {
      "action": "linode_boot",
      "entity": {
        "id": 42597067,
        "label": "lke92339-139853-63e94fe078ca",
        "status": "",
        "type": "linode",
        "url": "/v4/linode/instances/42597067"
      },
      "id": 429916562,
      "percent_complete": 0,
      "rate": null,
      "read": false,
      "secondary_entity": {
        "id": 45262245,
        "label": "Boot Config",
        "status": "",
        "type": "linode_config",
        "url": "/v4/linode/instances/42597067/configs/45262245"
      },
      "seen": false,
      "status": "scheduled",
      "username": "lke-service-account-429d3d60ac22"
    },
    "id": "47292c03-3306-4c84-a444-eeb158ebf26e",
    "source": "direktiv/listener/linode",
    "specversion": "1.0",
    "time": "2023-02-12T20:46:44Z",
    "traceparent": "00-56a60cd32edfabf1399cf422d4918d05-b7fa53fdca230aae-00",
    "type": "linode_boot"
  }
}
We can see a reference to an entity and a secondary_entity, each with a url field (which we assume points directly to the resource in the API). The workflow uses this url to query the Linode API for more information on the device in question (the device for which the linode_boot event was sent).
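Outside of the workflow, the same lookup is a one-line curl: take the url from the event’s entity and append it to the Linode API base (the instance id here is taken from the event above):

# Replicates the print-node-info lookup for the booted instance:
# GET https://api.linode.com + .linode_boot.data.entity.url
curl -s -H "Authorization: Bearer $LINODE_TOKEN" \
  https://api.linode.com/v4/linode/instances/42597067 | jq .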
The following extract from the workflow shows the logic:
- id: print-node-info
  type: action
  log: jq(.)
  action:
    secrets: ["LINODE_TOKEN"]
    function: http-request
    input:
      debug: true
      url: 'jq("https://api.linode.com" + .linode_boot.data.entity.url)'
      headers:
        Authorization: "Bearer jq(.secrets.LINODE_TOKEN)"
We can see the workflow running as soon as the event is received (we received 3 linode_boot events from the cluster build):
And now let’s look at the output of the workflow:
{
  "linode_boot": {
    "data": {
      "action": "linode_boot",
      "entity": {
        "id": 42597067,
        "label": "lke92339-139853-63e94fe078ca",
        "status": "",
        "type": "linode",
        "url": "/v4/linode/instances/42597067"
      },
      "id": 429916562,
      "percent_complete": 0,
      "rate": null,
      "read": false,
      "secondary_entity": {
        "id": 45262245,
        "label": "Boot Config",
        "status": "",
        "type": "linode_config",
        "url": "/v4/linode/instances/42597067/configs/45262245"
      },
      "seen": false,
      "status": "scheduled",
      "username": "lke-service-account-429d3d60ac22"
    },
    "id": "47292c03-3306-4c84-a444-eeb158ebf26e",
    "source": "direktiv/listener/linode",
    "specversion": "1.0",
    "time": "2023-02-12T20:46:44Z",
    "traceparent": "00-56a60cd32edfabf1399cf422d4918d05-b7fa53fdca230aae-00",
    "type": "linode_boot"
  },
  "return": [
    {
      "code": 200,
      "headers": {
        "Access-Control-Allow-Credentials": ["true"],
        "Access-Control-Allow-Headers": ["Authorization, Origin, X-Requested-With, Content-Type, Accept, X-Filter"],
        "Access-Control-Allow-Methods": ["HEAD, GET, OPTIONS, POST, PUT, DELETE"],
        "Access-Control-Allow-Origin": ["*"],
        "Access-Control-Expose-Headers": ["X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Status"],
        "Cache-Control": ["private, max-age=0, s-maxage=0, no-cache, no-store", "private, max-age=60, s-maxage=60"],
        "Content-Length": ["753"],
        "Content-Security-Policy": ["default-src 'none'"],
        "Content-Type": ["application/json"],
        "Date": ["Sun, 12 Feb 2023 21:05:15 GMT"],
        "Retry-After": ["60"],
        "Server": ["nginx"],
        "Strict-Transport-Security": ["max-age=31536000"],
        "Vary": ["Authorization, X-Filter", "Authorization, X-Filter"],
        "X-Accepted-Oauth-Scopes": ["linodes:read_only"],
        "X-Content-Type-Options": ["nosniff"],
        "X-Customer-Uuid": ["CB9A1A60-A762-465D-B7E5ABF5259C24BC"],
        "X-Frame-Options": ["DENY", "DENY"],
        "X-Oauth-Scopes": ["*"],
        "X-Ratelimit-Limit": ["800"],
        "X-Ratelimit-Remaining": ["799"],
        "X-Ratelimit-Reset": ["1676235976"],
        "X-Spec-Version": ["4.144.2"],
        "X-Xss-Protection": ["1; mode=block"]
      },
      "result": {
        "alerts": {
          "cpu": 180,
          "io": 10000,
          "network_in": 10,
          "network_out": 10,
          "transfer_quota": 80
        },
        "backups": {
          "available": false,
          "enabled": false,
          "last_successful": null,
          "schedule": {
            "day": null,
            "window": null
          }
        },
        "created": "2023-02-12T20:45:21",
        "group": "",
        "host_uuid": "196741ef4f3d7d4a2ac50aa5c05091b5704df0ce",
        "hypervisor": "kvm",
        "id": 42597067,
        "image": "linode/debian11-kube-v1.24.8",
        "ipv4": ["50.116.53.237", "192.168.180.215"],
        "ipv6": "2600:3c03::f03c:93ff:fe2a:b331/128",
        "label": "lke92339-139853-63e94fe078ca",
        "region": "us-east",
        "specs": {
          "disk": 81920,
          "gpus": 0,
          "memory": 4096,
          "transfer": 4000,
          "vcpus": 2
        },
        "status": "running",
        "tags": [],
        "type": "g6-dedicated-2",
        "updated": "2023-02-12T20:45:21",
        "watchdog_enabled": true
      },
      "status": "200 OK",
      "success": true
    }
  ]
}
Finally, let’s clean this up so that the workflow output is more readable. We can accomplish this by using a Direktiv transform to drop all the information we don’t need (such as the API response headers and the original event information).
The action now looks like this:
- id: print-node-info
  type: action
  log: jq(.)
  action:
    secrets: ["LINODE_TOKEN"]
    function: http-request
    input:
      debug: true
      url: 'jq("https://api.linode.com" + .linode_boot.data.entity.url)'
      headers:
        Authorization: "Bearer jq(.secrets.LINODE_TOKEN)"
  transform: jq(.return[0].result)
And the output is nicely sanitised:
{
  "alerts": {
    "cpu": 180,
    "io": 10000,
    "network_in": 10,
    "network_out": 10,
    "transfer_quota": 80
  },
  "backups": {
    "available": false,
    "enabled": false,
    "last_successful": null,
    "schedule": {
      "day": null,
      "window": null
    }
  },
  "created": "2023-02-12T20:45:21",
  "group": "",
  "host_uuid": "196741ef4f3d7d4a2ac50aa5c05091b5704df0ce",
  "hypervisor": "kvm",
  "id": 42597067,
  "image": "linode/debian11-kube-v1.24.8",
  "ipv4": ["50.116.53.237", "192.168.180.215"],
  "ipv6": "2600:3c03::f03c:93ff:fe2a:b331/128",
  "label": "lke92339-139853-63e94fe078ca",
  "region": "us-east",
  "specs": {
    "disk": 81920,
    "gpus": 0,
    "memory": 4096,
    "transfer": 4000,
    "vcpus": 2
  },
  "status": "running",
  "tags": [],
  "type": "g6-dedicated-2",
  "updated": "2023-02-12T20:45:21",
  "watchdog_enabled": true
}
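For reference, the transform does the same thing as running jq over the full, untransformed output from the previous step, assuming it is saved as workflow-output.json (a hypothetical filename):

# Strip the event and HTTP header noise, keeping only the instance data.
jq '.return[0].result' workflow-output.json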