Proxy 600k msg/s of Kafka produce requests in just 13MB of RAM with a p99 of 1ms.

Kafka-compatible with PostgreSQL, S3 or SQLite storage

An open source, Apache Kafka® compatible streaming platform with Avro, JSON or Protocol Buffers schema validation, supporting PostgreSQL, S3 or SQLite storage with Parquet, Apache Iceberg or Delta Lake.

$ cargo install tansu --all-features

Tansu

Broker

Each broker is completely stateless and can act as the leader for any topic partition, transaction or consumer group. Storage is separate from the broker. Schema validation with open table support is built in. Brokers start quickly, so they can be spun down between API requests.
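
Because the broker speaks the Kafka wire protocol, any standard Kafka client can talk to it. Below is a minimal sketch using the Python kafka-python client; the broker address localhost:9092 and the "orders" topic are assumptions for illustration, not fixed above.

# Minimal produce against a Kafka-compatible broker.
# Assumes a Tansu broker on localhost:9092 and an existing "orders" topic.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"order-1", value=b'{"qty": 2}')
producer.flush()  # block until the broker acknowledges the batch
producer.close()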

Proxy

A high-volume, low-latency proxy for Kafka traffic that adds security, multi-tenancy, schema validation, throttling or batching.

Open Source

Available on GitHub under the Apache License.

CLI

A developer-friendly CLI that can be used to administer the broker or produce to schema-backed topics.

Storage

PostgreSQL

Multiple brokers can use the same PostgreSQL database as storage. Topic data is partitioned, splitting what is logically one large table into smaller physical pieces. Simple for existing operational teams to manage.
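
The partitioning described here is standard PostgreSQL declarative partitioning. The sketch below shows the technique in general; the table and column names are hypothetical and are not Tansu's internal schema, and the connection string is an assumption.

# Generic PostgreSQL declarative partitioning -- the table and column names
# are hypothetical illustrations, NOT Tansu's internal schema.
import psycopg2

conn = psycopg2.connect("postgresql://postgres@localhost/tansu")  # assumed DSN
with conn, conn.cursor() as cur:
    # One logical record table, physically split by topic partition.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS record (
            topic      text    NOT NULL,
            partition  integer NOT NULL,
            "offset"   bigint  NOT NULL,
            value      bytea
        ) PARTITION BY LIST (partition)
    """)
    cur.execute(
        "CREATE TABLE IF NOT EXISTS record_p0 PARTITION OF record FOR VALUES IN (0)"
    )
conn.close()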

SQLite

Super simple to set up. Embedded in the Tansu binary. Single broker only. Widely adopted and very fast. A single database file can easily reproduce an environment on demand.
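
Reproducing an environment amounts to copying that one file and pointing a broker (or any tooling) at the copy. A small sketch with Python's standard sqlite3 module; the file name tansu.db is an assumption.

# Snapshot and inspect the broker's SQLite database file.
# The file name "tansu.db" is an assumption for illustration.
import shutil
import sqlite3

shutil.copy("tansu.db", "snapshot.db")  # the snapshot is the whole environment
conn = sqlite3.connect("file:snapshot.db?mode=ro", uri=True)
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print([name for (name,) in tables])
conn.close()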

S3

AWS S3 is designed to exceed 99.999999999% (11 nines) of data durability. Multiple brokers can share the same S3 bucket via conditional writes, without an additional coordinator.
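
The coordinator-free design rests on S3 conditional writes: a PutObject with an If-None-Match: * precondition succeeds only if the key does not already exist, so concurrent writers can race safely. The sketch below shows that S3 primitive itself with boto3, not Tansu's code; the bucket and key names are assumptions.

# S3 conditional write: PutObject with If-None-Match: * fails with HTTP 412
# (PreconditionFailed) if the key already exists, so only one of several
# concurrent writers can create it. Bucket and key names are assumptions.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    s3.put_object(
        Bucket="tansu-example",
        Key="topics/orders/0/segment-0",
        Body=b"record batch bytes",
        IfNoneMatch="*",  # only succeed if nothing exists at this key yet
    )
    print("this writer won the race")
except ClientError as err:
    if err.response["Error"]["Code"] == "PreconditionFailed":
        print("another writer created the object first")
    else:
        raise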

Memory

Designed for ephemeral development or test environments. Quick to set up. Even quicker to tear down.

Schema

Broker Schema Validation

Avro, Protocol Buffers and JSON Schema backed topics are automatically validated by the broker. Validation is embedded in the broker, with no other moving parts.
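
From a client's point of view, broker-side validation is just a produce request that either succeeds or is rejected. A sketch assuming a topic "orders" backed by a JSON Schema that requires an integer "qty" field; the topic name, the schema and the exact error surfaced to the client are all assumptions.

# Produce to a (hypothetical) JSON Schema backed "orders" topic.
# A payload violating the topic's schema is rejected at produce time by the
# broker itself -- there is no separate registry or sidecar to run.
import json
from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

try:
    producer.send("orders", {"qty": 2}).get(timeout=10)      # conforms: accepted
    producer.send("orders", {"qty": "two"}).get(timeout=10)  # violates schema: rejected
except KafkaError as err:
    print("rejected by broker-side schema validation:", err)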

Open Table Format

Automatic conversion of schema-backed topics into Delta Lake, Apache Iceberg or Parquet open table/file formats. Sink topics can skip the Kafka metadata overhead, writing directly into the data lake.
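
Once topic data has landed in the lake, it is readable with ordinary table tooling and no Kafka client at all. A sketch reading a Delta Lake table with the Python deltalake package; the table URI is an assumption.

# Read topic data back out of the lake with plain table tooling -- no Kafka
# consumer involved. The table URI below is an assumption for illustration.
from deltalake import DeltaTable

table = DeltaTable("s3://tansu-example/lake/orders")
df = table.to_pandas()  # or table.to_pyarrow_table() for Arrow
print(df.head())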