Telegraf drawn to M3DB

Telegraf and its 200+ plugins are great for collecting metrics. But what system downstream can handle input from tens of thousands of Telegraf agents?

15 November 2021
Jason Hepp
Pre-Sales Solution Architect at Aiven

In the past few years, Telegraf has become a go-to open source standard for collecting metrics from a wide variety of stacks, sensors and systems. It boasts over 200 plugins for collecting all kinds of system metrics and events, which it sends to an InfluxDB or Prometheus server for storage and visualization.

But all systems are not created equal. What happens when your metrics flow outstrips the computing capacity of your InfluxDB or Prometheus server?

Vertical scaling can work, up to a point. Horizontal scaling in InfluxDB requires an enterprise license, while open source Prometheus offers no viable built-in option at all.

So when you have hundreds, thousands or possibly tens of thousands of Telegraf agents producing metrics, what do you do? Have no fear, M3 to the rescue!

What is M3?

M3 is an open source time series database. Originally developed at Uber, M3 aims to provide easy horizontal scalability, central storage for all metrics, and compatibility with several standard industry interfaces for producing and consuming metrics. In particular, M3 offers 100% Prometheus Query Language (PromQL) compatibility and accepts writes in the InfluxDB line protocol format.
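To make "InfluxDB line protocol" concrete, here's a small Python sketch (not part of Telegraf or M3 themselves) that formats a measurement the way agents and write endpoints exchange it: a measurement name, comma-separated tags, space-separated fields, and a nanosecond timestamp.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one data point as an InfluxDB line protocol string."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_parts = []
    for k, v in sorted(fields.items()):
        if isinstance(v, bool):               # bool before int: bool is an int subclass
            field_parts.append(f"{k}={str(v).lower()}")
        elif isinstance(v, int):              # integers carry an "i" suffix
            field_parts.append(f"{k}={v}i")
        elif isinstance(v, float):
            field_parts.append(f"{k}={v}")
        else:                                 # strings are double-quoted
            field_parts.append(f'{k}="{v}"')
    return f"{measurement},{tag_str} {','.join(field_parts)} {ts_ns}"

line = to_line_protocol("cpu", {"host": "web01"}, {"usage_idle": 92.5},
                        1636934400000000000)
# -> cpu,host=web01 usage_idle=92.5 1636934400000000000
```

Telegraf produces lines like this for you; the sketch just shows the shape of the data that M3's InfluxDB-compatible endpoint understands.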

With its proven track record of storing tens of billions of time series, M3 is a perfect replacement for an InfluxDB or Prometheus backend, and it allows us to retain our current Telegraf agents.

How do we do this? Who offers M3 in the cloud?

Aiven for M3

Aiven is one of the first cloud providers to offer M3 as a service. Not only do we provide M3 as a service for our customers, we also use it internally to monitor all of our customer VMs and services (Aiven for Apache Kafka, Aiven for PostgreSQL, etc.) in a single central M3 service.

We deploy the Telegraf agent on all of our managed VMs to monitor dozens of server and process metrics, including CPU utilization, RAM utilization, network I/O and disk utilization, to name a few.
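As a sketch, the server metrics mentioned above map onto Telegraf's standard input plugins. A minimal configuration might look like this:

```toml
# Standard Telegraf input plugins for basic host monitoring
[[inputs.cpu]]
  percpu = true     # per-core CPU usage
  totalcpu = true   # aggregate CPU usage

[[inputs.mem]]      # RAM utilization

[[inputs.net]]      # network I/O per interface

[[inputs.disk]]     # disk utilization per mount point
```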

Sounds sweet? Head over to our Developer Portal for the details, but in general terms, here’s what you can do:

  1. Create a new Aiven for M3 service in the Aiven Console or CLI to collect metrics.
  2. Install the Telegraf agent on the systems you want to collect metrics from.
  3. Configure Telegraf to send those metrics to your M3 service.
  4. Create an Aiven for Grafana service to visualize the metrics.
  5. Enjoy the fruits of your labor!

Yep, that’s all. Granted, it’s a few clicks and keypresses away, but it’s not rocket science by any means.
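For step 3, a minimal Telegraf output section might look like the following. This is a sketch, not the exact Developer Portal recipe: the hostname, port, user and password are placeholders you'd replace with the connection details of your own Aiven for M3 service, and it relies on Telegraf's standard InfluxDB output plugin pointed at M3's InfluxDB-compatible write endpoint.

```toml
# Hypothetical example: forward Telegraf metrics to an Aiven for M3 service
# via M3's InfluxDB-compatible write endpoint. Replace USER, PASSWORD, host
# and port with the values shown in the Aiven Console for your service.
[[outputs.influxdb]]
  urls = ["https://USER:PASSWORD@my-m3-service.aivencloud.com:24947/api/v1/influxdb"]
  database = "default"            # the M3 namespace exposed to InfluxDB writers
  skip_database_creation = true   # M3 manages its namespaces itself
```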

Wrapping up

Not using Aiven services yet? Sign up now for your free trial at https://console.aiven.io/signup!

In the meantime, make sure you follow our changelog and blog RSS feeds or our LinkedIn and Twitter accounts to stay up-to-date with product and feature-related news.


Start your free 30 day trial!

Build your platform, and throw in any data you want for 30 days, with no ifs, ands, or buts.

