Engineering·February 12, 2026

Scale SigNoz on ObsessionDB and ClickHouse® Cloud: No Sharding Required

Marc Leuthardt

SigNoz is one of the best open source observability platforms. It runs on ClickHouse. Now you can use it with ObsessionDB and ClickHouse Cloud. Why wasn't it possible before? Read on.

If you run SigNoz self-hosted today, you know the pain: getting started is easy. A single docker compose up gives you a fully functional SigNoz instance in seconds. But once you outgrow your hobby or dev project, you have to operate the full-blown stack. The observability tool that's supposed to reduce your operational burden comes with its own operational burden, especially if you run it at scale.

We built signoz-obsessiondb to fix that. It's open source and works with both ObsessionDB and ClickHouse Cloud.

What it means to operate SigNoz in production

SigNoz chose ClickHouse for good reason. Observability data — logs, traces, metrics — is a perfect fit for columnar storage: high-volume writes, analytical reads, time-series patterns. ClickHouse handles this better than almost anything else.

The problem is running it at scale. The official SigNoz docs show what a production-ready setup requires:

| Component | Replicas | CPU per Instance (cores) | Memory per Instance (GiB) | Node Count | Total CPU (cores) | Total Memory (GiB) |
|---|---|---|---|---|---|---|
| Collectors | 3 | 4 | 16 | 3 | 12 | 48 |
| ClickHouse | 2 shards | 16 | 32 (variable) | 2 | 32 | 64 (variable) |
| SigNoz Core | 1 | 4 | 8 | 1 | 4 | 8 |
| PostgreSQL | 1 | 2 | 8 | 1 | 2 | 8 |
| ZooKeeper | 3 | 2 | 8 | 3 | 6 | 24 |

That's 56 CPU cores and 152 GiB of memory across 10 nodes. On AWS with on-demand instances, you're looking at $1,000+ per month for the infrastructure alone, before you ingest a single byte of data. And this is the baseline for reliable operation, not a luxury setup.

A brief digression: how classic ClickHouse scaling works

Before we explain our solution, let's quickly cover how traditional ClickHouse scales — because SigNoz's schema assumes this model.

Classic ClickHouse scales through sharding: splitting data across multiple independent servers. Each shard holds a portion of your data in local MergeTree tables. To query across all shards, you create a Distributed table — a virtual table that sits on top, routes queries to every shard, and merges the results.

[Diagram: distributed_logs, a virtual table that routes queries, sits on top of three shards; each shard holds ~33% of the data in a local MergeTree table]

Traditional ClickHouse sharding: data is split across independent servers, and a Distributed table routes queries to every shard.

SigNoz's schema follows this pattern: it creates local tables (e.g. logs) on each shard and Distributed tables (e.g. distributed_logs) on top. Applications write to the Distributed table, which routes rows to the appropriate shard based on a sharding key.
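
A minimal sketch of that pattern (the cluster name, columns, and sharding key here are illustrative, not SigNoz's actual schema):

```sql
-- Each shard stores its slice of the data in a local MergeTree table.
CREATE TABLE logs_local ON CLUSTER my_cluster
(
    timestamp DateTime64(9),
    body      String
)
ENGINE = MergeTree
ORDER BY timestamp;

-- The Distributed table stores nothing itself: it routes each insert
-- to one shard (here by rand()) and fans SELECTs out to every shard.
CREATE TABLE distributed_logs ON CLUSTER my_cluster
AS logs_local
ENGINE = Distributed(my_cluster, currentDatabase(), logs_local, rand());
```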

This works, but it's operationally complex. Adding shards requires rebalancing data. Losing a shard means lost data (unless replicated, which doubles storage costs). Schema changes must coordinate across every shard using ON CLUSTER commands.

(For more on ClickHouse sharding and distributed tables, see the official docs.)

SharedMergeTree eliminates sharding entirely. That's what our fork builds on.

What we built

To make SigNoz work on ObsessionDB or ClickHouse Cloud, we forked the SigNoz schema migrator and modified it to:

  1. Create distributed_* tables as SharedMergeTree — the OpenTelemetry Collector writes here
  2. Create local table names (e.g. logs) as VIEWs pointing to the distributed tables — SigNoz Query Service reads here
  3. Redirect Materialized Views to write to the SharedMergeTree tables — since VIEWs can't receive inserts
[Diagram: upstream SigNoz (clustered) vs. our fork (standalone)]

Upstream SigNoz, clustered: the OTel Collector writes to distributed_logs (Distributed engine), which routes to logs (ReplicatedMergeTree, the real storage). The SigNoz Query Service reads from the same local table. Requires ZooKeeper, ON CLUSTER DDL, and the distributed DDL queue.

Our fork, standalone: the OTel Collector writes to distributed_logs (SharedMergeTree, the real storage). logs is a VIEW doing SELECT * FROM distributed_logs, so the Query Service reads the same data through a transparent redirect, and Materialized Views write directly to distributed_logs. No ZooKeeper. No cluster. No coordination layer.

This mapping is the key insight. SigNoz expects both local and distributed tables to exist. By making the "local" tables views of the SharedMergeTree tables, everything works without touching SigNoz's core code. The OTEL Collector writes to SharedMergeTree tables. The Query Service reads from views that point to the same data. Materialized Views are redirected to write directly to the SharedMergeTree tables, since the local table names are now VIEWs and can't receive inserts. No duplication, no sharding, no coordination layer.
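
Here's a minimal sketch of the resulting DDL for the logs pipeline. The database name matches SigNoz's signoz_logs, but the columns and the example materialized view are illustrative stand-ins for the full schema the forked migrator creates:

```sql
-- Abbreviated sketch; the real migrations emit SigNoz's full schema.
CREATE DATABASE IF NOT EXISTS signoz_logs;

-- (1) The "distributed" name is now a real storage table. On ClickHouse
--     Cloud and ObsessionDB, a MergeTree declaration is backed by
--     SharedMergeTree automatically.
CREATE TABLE signoz_logs.distributed_logs
(
    timestamp DateTime64(9),
    body      String
)
ENGINE = MergeTree
ORDER BY timestamp;

-- (2) The "local" name the Query Service reads becomes a transparent
--     view over the same data.
CREATE VIEW signoz_logs.logs AS
SELECT * FROM signoz_logs.distributed_logs;

-- (3) Materialized views must read from and write to real tables,
--     because a plain VIEW can't receive inserts (or trigger an MV).
CREATE TABLE signoz_logs.distributed_logs_minutes
(
    minute DateTime,
    body   String
)
ENGINE = MergeTree
ORDER BY minute;

CREATE MATERIALIZED VIEW signoz_logs.logs_minutes_mv
TO signoz_logs.distributed_logs_minutes
AS SELECT toStartOfMinute(timestamp) AS minute, body
FROM signoz_logs.distributed_logs;
```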

Why this matters for observability

Observability data grows fast. A moderately instrumented application generates gigabytes of traces and logs per day. At scale, that's terabytes per week. Traditional ClickHouse setups handle this, but the cost scales linearly with replication.

With separated storage and compute:

| Concern | Traditional ClickHouse | SharedMergeTree |
|---|---|---|
| Storage cost | Multiplied per replica | Data stored once in object storage |
| Scaling compute | Hours to days (copy all data to a new node) | Seconds (stateless nodes) |
| Coordination | ZooKeeper/Keeper required | None |
| Operational overhead | Shard balancing, replica sync | None |

For observability specifically, this means your monitoring infrastructure doesn't become its own monitoring problem. You get the full power of ClickHouse for querying traces, logs, and metrics without the operational tax that comes with running a distributed cluster.
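
As a quick illustration, ad-hoc queries run unchanged against the view. This uses the abbreviated schema from the sketch above (real SigNoz column names differ):

```sql
-- Error-mentioning log volume per minute over the last hour,
-- read through the logs view.
SELECT toStartOfMinute(timestamp) AS minute,
       count() AS hits
FROM signoz_logs.logs
WHERE body ILIKE '%error%'
  AND timestamp > now() - INTERVAL 1 HOUR
GROUP BY minute
ORDER BY minute;
```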

Works with ClickHouse Cloud too

This isn't ObsessionDB-only. The same setup works with ClickHouse Cloud, which also uses SharedMergeTree under the hood. If you're already on ClickHouse Cloud, you can run SigNoz against it with the same docker compose file.

We built this as a service to the ClickHouse community. Whether you choose ObsessionDB or ClickHouse Cloud, the benefit is the same: SigNoz without the operational overhead of managing a sharded cluster.

Getting started

The setup is fairly easy. Clone the repo, add your credentials, and run docker compose up. Full instructions are in the GitHub repository.

Alternatively, we offer it as a managed service. Get in touch and we'll have it set up in no time.

Pricing: SigNoz Cloud vs. ObsessionDB

Let's talk numbers. If you're running SigNoz Cloud and ingesting 5TB of log data per month, here's what you're looking at:

SIGNOZ CLOUD: 5TB LOGS

| Item | Unit Price | Usage | Subtotal |
|---|---|---|---|
| Logs | $0.40/GB | 5,000 GB | $2,000 |

Monthly estimate: $2,000/mo

Source: signoz.io/pricing

OBSESSIONDB: MANAGED SIGNOZ

| Item | Details | Cost |
|---|---|---|
| Compute | 2× Developer nodes, 16 GB each | $480 |
| Storage | ~0.5 TB compressed (~10× compression from 5 TB raw) | $13 |

Monthly estimate: $493/mo

Ingress and egress included. All prices in USD.

Save ~75% on your observability bill

Less than $500/mo with fully managed SigNoz on ObsessionDB vs. $2,000/mo with SigNoz Cloud for the same 5TB of log data.

What's next

The repo is open source and we welcome contributions, bug reports, and feature requests.

It's a showcase of what's possible with ObsessionDB. If you're running SigNoz at scale and want to simplify your ops or reduce your costs, we offer fully managed SigNoz instances on ObsessionDB.

Want to test your observability workload on ObsessionDB? We offer free test instances. Reach out at support@obsessiondb.com.


ObsessionDB is a cloud native ClickHouse® compatible database with separated storage and compute. SigNoz is an open source observability platform built on ClickHouse. This integration is a community contribution, not affiliated with SigNoz Inc.
