Kafka Connect File Pulse


Connect FilePulse is a multipurpose, scalable, and reliable Kafka Connect connector that makes it easy to parse, transform, and stream any file, in any format, into Apache Kafkaβ„’. It can read files from the local filesystem, Amazon S3, Azure Storage, and Google Cloud Storage.

Motivation

In many organizations, data is frequently exported, shared, and integrated from legacy systems through files in a wide variety of formats (e.g., CSV, XML, JSON, Avro). Dealing with all of these formats can quickly become a real challenge for enterprises, which often end up with a complex and hard-to-maintain data integration mess.

A modern approach consists of building a scalable data streaming platform as a central nervous system that decouples applications from each other. Apache Kafkaβ„’ is one of the most widely used technologies for building such a system, and the Apache Kafka project ships with Kafka Connect, a distributed, fault-tolerant, and scalable framework for connecting Kafka with external systems.

The Connect File Pulse project aims to provide an easy-to-use solution, based on Kafka Connect, for streaming any type of data file into the Apache Kafkaβ„’ platform.

Some of the features of Connect File Pulse are inspired by the ingestion capabilities of Elasticsearch and Logstash.

πŸš€ Key Features Overview

Connect FilePulse provides a set of built-in features for streaming files from multiple filesystems into Kafka. This includes, among other things:

  • Recursive scanning of local directories.
  • Reading files from Amazon S3, Azure Storage, and Google Cloud Storage.
  • Multiple input file formats (e.g., CSV, JSON, Avro, XML).
  • Grok expressions for parsing unstructured text.
  • Parsing and transforming data using built-in or custom processing filters.
  • Configurable error-handling policies.
  • Monitoring of files while they are being written into Kafka.
  • Pluggable strategies to clean up completed files.
  • And more.
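For illustration, a minimal source connector configuration for streaming local CSV files might look like the sketch below. It is modeled on the project's quick-start examples; exact property names can differ between File Pulse versions, and the connector name, topic, directory path, and separator are placeholder assumptions — check the documentation for your release.

```json
{
  "name": "filepulse-csv-example",
  "config": {
    "connector.class": "io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector",
    "tasks.max": "1",
    "topic": "example-csv-topic",
    "fs.listing.class": "io.streamthoughts.kafka.connect.filepulse.fs.LocalFSDirectoryListing",
    "fs.listing.directory.path": "/tmp/kafka-connect/examples/",
    "tasks.reader.class": "io.streamthoughts.kafka.connect.filepulse.fs.reader.LocalRowFileInputReader",
    "filters": "ParseLine",
    "filters.ParseLine.type": "io.streamthoughts.kafka.connect.filepulse.filter.DelimitedRowFilter",
    "filters.ParseLine.extract.column.name": "headers",
    "filters.ParseLine.separator": ";"
  }
}
```

A payload like this is typically submitted to a Kafka Connect worker's REST API (by default on port 8083) with a POST to the `/connectors` endpoint.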

πŸ™ Show your support

Do you think this project can help you or your team ingest data into Kafka? Please 🌟 this repository to support us!

🏁 How to get started?

The best way to learn Kafka Connect File Pulse is to follow the step-by-step Getting Started guide.

If you want to read more about using Connect File Pulse, the full documentation can be found here.

File Pulse is also available on Docker Hub 🐳

docker pull streamthoughts/kafka-connect-file-pulse:latest

πŸ’‘ Contributions

Any feedback, bug reports, and PRs are greatly appreciated! See our contribution guidelines.

License

This code base is available under the Apache License, version 2.0.