ArkFlow is a high-performance Rust-based stream processing engine designed to handle data streams efficiently. It supports multiple input/output sources and processors, making it versatile for various data processing tasks.
This article will delve into the features, installation, and usage of ArkFlow.
Features of ArkFlow
- High Performance: Built on Rust and utilizing the Tokio async runtime, ArkFlow offers excellent performance with low latency.
- Multiple Data Sources: Supports Kafka, MQTT, HTTP, files, and other input/output sources.
- Powerful Processing Capabilities: Includes built-in SQL queries, JSON processing, Protobuf encoding/decoding, batch processing, and more.
- Extensible: Modular design allows for easy extension with new components.
Installation
To use ArkFlow, follow these steps:
- Clone the Repository:

```bash
git clone https://github.com/chenquan/arkflow.git
cd arkflow
```
- Build the Project:

```bash
cargo build --release
```
- Run Tests:

```bash
cargo test
```
Quick Start
- Create a Configuration File (`config.yaml`):

```yaml
logging:
  level: info
streams:
  - input:
      type: "generate"
      context: '{ "timestamp": 1625000000000, "value": 10, "sensor": "temp_1" }'
      interval: 1s
      batch_size: 10
    pipeline:
      thread_num: 4
      processors:
        - type: "json_to_arrow"
        - type: "sql"
          query: "SELECT * FROM flow WHERE value >= 10"
        - type: "arrow_to_json"
    output:
      type: "stdout"
```
- Run ArkFlow:

```bash
./target/release/arkflow --config config.yaml
```
Configuration
ArkFlow uses YAML configuration files. Key configuration sections include:
- Logging: Set log levels (debug, info, warn, error).
- Streams: Define input, processing pipeline, and output configurations.
- Input Components: Supports Kafka, MQTT, HTTP, files, generators, and SQL databases.
- Processors: Includes JSON processing, SQL queries, Protobuf encoding/decoding, and batch processing.
- Output Components: Supports Kafka, MQTT, HTTP, files, and standard output.
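To illustrate how these sections fit together, here is a minimal sketch of a file-to-file stream. It follows the structure of the generator example above, but the file-specific field names (`path`) are assumptions for illustration and may differ from ArkFlow's actual schema:

```yaml
logging:
  level: info          # debug, info, warn, or error
streams:
  - input:
      type: "file"     # read records from a local file (hypothetical "path" field)
      path: "input.json"
    pipeline:
      thread_num: 2
      processors:
        - type: "json_to_arrow"            # parse JSON into Arrow batches
        - type: "sql"                      # filter with a SQL query over the "flow" table
          query: "SELECT * FROM flow WHERE value >= 10"
        - type: "arrow_to_json"            # serialize results back to JSON
    output:
      type: "file"     # write processed records out (hypothetical "path" field)
      path: "output.json"
```

The general shape is the same for every stream: an input, a pipeline of processors applied in order, and an output; consult the ArkFlow repository for the exact fields each component accepts.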
Examples
- Kafka to Kafka Data Processing:

```yaml
streams:
  - input:
      type: kafka
      brokers:
        - localhost:9092
      topics:
        - test-topic
      consumer_group: test-group
    pipeline:
      thread_num: 4
      processors:
        - type: json_to_arrow
        - type: sql
          query: "SELECT * FROM flow WHERE value > 100"
        - type: arrow_to_json
    output:
      type: kafka
      brokers:
        - localhost:9092
      topic: processed-topic
```
- Generate Test Data and Process:

```yaml
streams:
  - input:
      type: "generate"
      context: '{ "timestamp": 1625000000000, "value": 10, "sensor": "temp_1" }'
      interval: 1ms
      batch_size: 10000
    pipeline:
      thread_num: 4
      processors:
        - type: "json_to_arrow"
        - type: "sql"
          query: "SELECT count(*) FROM flow WHERE value >= 10 GROUP BY sensor"
        - type: "arrow_to_json"
    output:
      type: "stdout"
```
Conclusion
ArkFlow is a powerful tool for stream processing, offering flexibility and high performance. While it is not yet production-ready, it provides a robust framework for data processing tasks.
With its modular design and support for multiple data sources and processors, ArkFlow is an excellent choice for developers looking to build efficient data processing pipelines.