Fission Kafka - IoT Demo

This sample project is inspired by Yugabyte's IoT fleet management demo. The original example was built with microservices; this demo transforms it into an example that uses Fission functions.

Architecture

(Architecture diagram: Fission Kafka - IoT Demo)

Functions

  1. IoT Data Producer: This function generates data about a fleet of vehicles and their latitude, longitude, speed, fuel level, etc. It can be triggered every N seconds, resembling a sensor sending data. The data is sent to a Kafka topic: iotdata (configured in an environment variable of the IoT Data Producer function; see the sketch after this list).

  2. IoT Data Consumer: This function retrieves data from the Kafka topic, runs some transformations, and persists the result into Redis. It is triggered for every message on the Kafka topic.

  3. There are 4 more functions which read the data stored in Redis and expose it at REST endpoints: count, fuelavg, data & speedavg.

  4. The Kafka consumer & the 4 REST endpoint functions are all in the same Spring project and form a single JAR archive; the entrypoint for each function is different.

  5. IoT Web: This last function queries the data from the REST endpoints and renders it using Chart.js, HTML & jQuery. The dashboard page itself is a function rendered using Python Flask & HTML.
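
The actual producer in this repo is a Fission function; purely as an illustration of the produce step, here is a minimal Python sketch. The kafka-python package and the field names are assumptions based on the description above, not the repo's actual code:

# Minimal sketch of the produce step, assuming the kafka-python package.
# Field names are illustrative, taken from the function description above.
import json
import os
import random
import uuid

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=os.environ.get("KAFKA_ADDR", "localhost:9092"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "vehicleId": str(uuid.uuid4()),
    "latitude": round(random.uniform(-90.0, 90.0), 6),
    "longitude": round(random.uniform(-180.0, 180.0), 6),
    "speed": random.randint(0, 120),      # km/h
    "fuelLevel": random.randint(0, 100),  # percent
}
producer.send("iotdata", event)  # topic name as configured for the demo
producer.flush()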

Result

The resulting dashboard shows various trends. It also provides a way to invoke the IoT Data Producer function, which sends data to Kafka. Sample screen:

(Dashboard screenshot: Fission Kafka - IoT Demo)

Setup

Prerequisite

  • A Kubernetes cluster with the latest version of Fission
  • The Fission installation should have the Kafka message queue trigger (MQT) enabled, configured with the address of the Kafka brokers.
  • Docker and the Fission CLI on your local machine

Setup Kafka

The easiest way to install Kafka is to start with the Quickstart of the Strimzi Kafka Operator.

$ kubectl create namespace kafka

$ kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

$ kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka 
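
To sanity-check that the broker is reachable (e.g. from a pod inside the cluster, where the service DNS name resolves), a quick Python check with the kafka-python package might look like this; the broker address below matches the one used later in this README and is an assumption about your setup:

# Quick broker reachability check, assuming the kafka-python package
# and that this runs inside the cluster.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    bootstrap_servers="my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092")
print(consumer.topics())  # lists existing topics if the broker is reachable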

Setup Redis

We will set up Redis without persistence and as a single instance (i.e. no master & slave replication). You can customize the command as per your needs based on instructions from here. Also note that, for simplicity, Redis has been set up without a password.

$ kubectl create namespace redis

$ helm repo add bitnami https://charts.bitnami.com/bitnami

$ helm install redis-single --namespace redis \
  --set usePassword=false \
  --set cluster.enabled=false \
  --set master.persistence.enabled=false \
  --set master.disableCommands={"FLUSHALL"} \
    bitnami/redis

Note: By default the Bitnami chart disables both FLUSHDB and FLUSHALL; the parameter --set master.disableCommands={"FLUSHALL"} overrides that list so only FLUSHALL stays disabled, leaving FLUSHDB enabled. FLUSHDB deletes everything in a database; it is needed for the Flush Redis Data action in this example, but should be used carefully. To disable the FLUSHDB command again, just remove the line --set master.disableCommands={"FLUSHALL"} \
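
To verify the Redis install from inside the cluster, a quick check with the redis-py client might look like this (a sketch; the service name matches the default used later in this README, and no password is set, matching the install above):

# Quick Redis reachability check, assuming the redis-py package and
# that this runs inside the cluster.
import redis

r = redis.Redis(host="redis-single-master.redis", port=6379)
print(r.ping())  # True if Redis is reachable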

Configuration Check/Change

Some key configuration values need to be checked:

  • You will also need to update the Fission installation, if you haven't already, to enable the Kafka MQT with the appropriate configuration of broker address etc.:
## Kafka: enable and configure the details
kafka:
  enabled: true
  # note: the address below is only for reference.
  # Please use the broker address of your Kafka cluster here.
  brokers: 'my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092' # or your-bootstrap-server.kafka:9092/9093
  authentication:
  • In specs/env-java.yaml check the following (a sketch of reading these values in code follows this list):

    • KAFKA_ADDR points to the appropriate address of the Kafka cluster created in the earlier section (the current default is my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092)

    • REDIS_ADDR points to the correct Redis address within the cluster (default: "redis-single-master.redis")

    • CONSUMER_SLEEP_DELAY is currently set to 300 seconds. This is the delay the consumer function introduces before consuming a message.

    • PRODUCER_MSG_PER_SECOND is set to 300 and is the number of Kafka messages the producer produces every second.
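
For reference, reading these settings inside a function comes down to plain environment-variable lookups; here is a minimal Python sketch (the Java functions in this repo do the equivalent via Spring configuration, so treat this as illustrative only):

# Sketch of reading the demo's configuration from environment variables;
# the defaults mirror the values documented above.
import os

kafka_addr = os.environ.get(
    "KAFKA_ADDR", "my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092")
redis_addr = os.environ.get("REDIS_ADDR", "redis-single-master.redis")
consumer_sleep_delay = int(os.environ.get("CONSUMER_SLEEP_DELAY", "300"))
producer_msg_per_second = int(os.environ.get("PRODUCER_MSG_PER_SECOND", "300"))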

Build & Deploy functions

The functions are built inside the cluster when you run the fission spec apply command.

  • All the environment, package and function definitions are in the specs directory. To create all of them, run:
$ fission spec apply 
  • You can check the details of the deployed functions:
$ fission fn list 

Testing

  • To generate some sample data, you can run the Kafka producer (kp) function a few times or create a trigger which invokes the function every N minutes:
# Creating trigger to produce data every 5 minutes
$ fission tt create --name kptrigger --function kp --cron '@every 5m'
trigger 'kptrigger' created
Current Server Time: 	2018-08-09T10:58:38Z
Next 1 invocation: 	2018-08-09T11:03:38Z
$ fission tt list
NAME      CRON      FUNCTION_NAME
kptrigger @every 5m kp

# Calling the producer function directly.
$ fission fn test --name kp
  • After the Kafka producer function has triggered at least once, you can open the dashboard function in a browser:

http://$FISSION_ROUTER/iot/index.html
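
If you prefer the command line over the browser, you can also query the REST endpoint functions directly; here is a hedged Python sketch using the requests package. The route paths are guesses derived from the function names, so check fission httptrigger list for the actual routes in your deployment:

# Sketch: query the four REST endpoint functions directly.
# The /iot/<endpoint> paths are assumptions based on the function names;
# verify them with `fission httptrigger list`.
import os

import requests

router = os.environ.get("FISSION_ROUTER", "localhost:8080")
for endpoint in ("count", "fuelavg", "data", "speedavg"):
    resp = requests.get(f"http://{router}/iot/{endpoint}")
    print(endpoint, resp.status_code, resp.text[:200])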