
Communication


Asynchronous Communication

The microservices of OSO (or of any other application) need to communicate with each other somehow. In OSO, as in any other microservice application, asynchronous communication should be used wherever possible.

Broker vs Brokerless

In brokerless communication each service needs to know every other service it wants to send a message to. This kind of communication

  • is easy to implement and debug
  • has a high throughput
  • and has a low latency

but in return it

  • couples the services tightly, as basically everyone needs to know everyone
  • makes it very hard to implement service discovery
  • creates a connection hell across all services.

When communicating via a broker, on the other hand,

  • load balancing is much easier
  • service discovery is not needed; the services only have to know and use the same key (topic, channel, etc.)
  • the services are more independent of each other: events go in on one side and come out on the other
  • the communication behaves more like a stream

With the drawbacks that

  • the broker has to be scalable
  • the broker becomes a central point of failure
  • latency is higher
  • resource utilisation is higher

All in all, broker-based communication will be used for OSO, as the pros far outweigh the cons. The loose coupling in particular is the key argument for any microservice architecture to prefer a broker over a brokerless design; the sketch below illustrates that decoupling.
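
To make the decoupling concrete, here is a minimal in-process sketch (Python, purely illustrative and not part of OSO): the producer only knows a topic name, never the consumers behind it.

```python
from collections import defaultdict
from typing import Callable


class Broker:
    """Tiny in-process stand-in for a message broker, for illustration only."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handler callables

    def subscribe(self, topic: str, handler: Callable[[bytes], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: bytes) -> None:
        # The producer addresses the topic, never a concrete consumer.
        for handler in self._subscribers[topic]:
            handler(message)


broker = Broker()
# A consumer subscribes to the topic without the producer knowing about it.
broker.subscribe("orders", lambda msg: print("order-service got:", msg))
# A producer publishes knowing only the topic name.
broker.publish("orders", b'{"orderId": 42}')
```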

The Broker to use

In short: there are many brokers available, but most of them can be ruled out for OSO with ease

  • IBM WebSphere MW is a paid product
  • the Solace appliance messaging system is limited in the free edition
  • the Tervela appliance messaging system does not turn up much information on a first search
  • Tibco RendezVous is limited in the free edition
  • Axeda does not turn up much information on a first search
  • OSIsoft PI also seems to be a paid product, and it is hard to find information on using or implementing it

This only leaves RabbitMQ and Apache Kafka. RabbitMQ (a short usage sketch follows this list)

  • is based on AMQP 0-9-1
  • you send plain byte data
  • consumers connect to queues and producers publish to exchanges; exchanges route the messages to the queues according to given rules
  • fire and forget
  • lightweight (in comparison)
  • designed for (mainly) vertical scaling and clustering
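
As a rough sketch of this queue/exchange model (Python with the pika client, which the original page does not prescribe; host, exchange and queue names are made up for illustration):

```python
import pika

# Placeholder connection; adjust host/credentials for a real broker.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# The producer only talks to the exchange; the exchange routes
# messages to whatever queues are bound to it.
channel.exchange_declare(exchange="oso.events", exchange_type="fanout")
channel.queue_declare(queue="order-service")
channel.queue_bind(queue="order-service", exchange="oso.events")

# Producer side: fire and forget onto the exchange.
channel.basic_publish(exchange="oso.events", routing_key="", body=b'{"event": "created"}')

# Consumer side: fetch from the queue the exchange routed the message to.
method, properties, body = channel.basic_get(queue="order-service", auto_ack=True)
print("received:", body)

connection.close()
```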

Apache Kafka, in comparison (a sketch follows this list as well),

  • offers several APIs: Producer, Consumer, Streams and Connect
  • push and pull using topics
  • consumers are grouped into a consumer group; a consumer group subscribes to a topic and distributes the events among its consumers
  • has built-in geo-replication and distribution
  • you send a set of key-value pairs (although deep down it is still byte data)
  • you have to run ZooKeeper (software for "managing" all your Kafka instances, which is considered anything but simple to use and understand)
  • designed for horizontal scaling (see ZooKeeper and the built-in geo-replication)
  • built-in event sourcing
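
A comparable sketch for the Kafka model (Python with the kafka-python client, again only an assumption for illustration; broker address and topic name are made up). Every consumer that joins the same group shares the topic's partitions, which is how events get distributed within a consumer group.

```python
from kafka import KafkaProducer, KafkaConsumer

# Producer side: send a key-value pair to a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("oso-orders", key=b"order-42", value=b'{"event": "created"}')
producer.flush()

# Consumer side: consumers with the same group_id share the topic's partitions.
consumer = KafkaConsumer(
    "oso-orders",
    group_id="order-service",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,  # stop iterating if nothing arrives
)
for record in consumer:
    print(record.key, record.value)
    break
```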

As one can see, there is no clear winner. While Kafka has much more functionality (which may not be needed), RabbitMQ is much less complicated and resource hungry. For OSO we decided to use Kafka; as the old saying goes, "an ounce of prevention is worth a pound of cure".

Synchronous Communication

Although you should always use asynchronous communication between services, there will always be those one or two situations where synchronous communication is simply needed. Therefore we had to decide how to do that.

The first choice is between WebSockets and HTTP.

  • with WebSockets you send raw data and therefore have to define your own protocol (or use an existing one), just like with plain socket communication
  • HTTP comes with many predefined semantics (the status codes plus the response content, for example)
  • WebSockets can push data, while HTTP only supports pull

As we have no need for explicit push notifications, we were not willing to implement the missing functionality on top of WebSockets, so they were no option for us.

Next we needed to choose which HTTP-based protocol to use.

REST

  • almost a standard
  • rigid endpoints, rigid responses, rigid requests: what can be done, which data will be sent and which data will be received is all predefined (a minimal example follows this list)
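
As a minimal illustration of that rigidity (Python with the requests library; the endpoint and response shape are hypothetical): the client calls a fixed URL and receives whatever fields the server has defined for it.

```python
import requests

# Hypothetical endpoint; path, verbs and response fields are fixed by the server.
response = requests.get("http://orders.oso.local/api/orders/42")
response.raise_for_status()
order = response.json()  # the client gets exactly the shape the endpoint defines
print(order)
```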

gRPC

  • easiest for clients
  • based on HTTP/2
  • all pros and cons of RPC
  • poor documentation: mostly sample code with little or no additional information
  • no standard API across languages
  • weird error handling compared to "normal" HTTP

Falcor

  • pure JSON
  • everything is described as one giant JSON model
  • the data can still come from various sources
  • functions are nested as nodes within the JSON graph
  • arrays are described as maps with the keys "0", "1", "2", ...
  • JSON arrays are used to describe the path to the data to retrieve
  • elements of those arrays can be filters (a range, a regex, etc.)

GraphQL

  • schemas are defined for retrieval
  • schemas can be merged from multiple sources
  • queries are sent to retrieve exactly the data of the schema I want, in the format I want (see the sketch after this list)
  • mutations are defined for commands
  • definitions are written in GraphQL's own C-like language
  • various filters and/or variables can be defined and used for a retrieval as needed
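
For contrast with the REST example above, an illustrative sketch of a GraphQL query sent over HTTP (Python with requests; endpoint and schema are hypothetical): the query names exactly the fields the client wants, nothing more.

```python
import requests

# The query asks only for the fields the client cares about.
query = """
query ($id: ID!) {
  order(id: $id) {
    id
    status
    items { name price }
  }
}
"""

response = requests.post(
    "http://orders.oso.local/graphql",
    json={"query": query, "variables": {"id": "42"}},
)
print(response.json()["data"])
```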

Although gRPC is used by Google and adopted by a few others, it still seems to be in an early phase. Together with the poor documentation, this made us decide against it. Falcor and GraphQL both address the same problem: get only, but all of, the data you want, in the shape you want it. Currently we do not see OSO developing a data hierarchy large enough to justify either of them, and even then it would only pay off if that data had to be transferred between the services, which we do not expect either. We decided to use REST, as it is almost a standard at the moment and the benefits of Falcor or GraphQL do not outweigh the cost of learning them.
