Let's face it: Most logs are just noise. We're drowning in a sea of meaningless data, and it's costing us time, money, and sanity. As developers, we've fallen into the habit of logging everything—every function call, every variable change, every heartbeat of our applications. The result? Terabytes of useless information that clogs our dashboards and clouds our judgment. But here's the kicker: We're paying a fortune to store this digital junk, all while making it harder to find the insights that truly matter.
But here's where it gets controversial: Is logging without purpose really observability, or is it just expensive littering? Even with advanced observability platforms that compress data like never before, logging everything still turns root cause analysis into a needle-in-a-haystack problem. It dilutes the signal, buries the insights, and costs us more in the long run. So, what's the solution? We need to log with intention. Focus on what helps us understand the system, debug real issues, or explain business impact—and silence everything else.
And this is the part most people miss: Logging isn’t about capturing every detail; it’s about capturing the right details. Every log line should be a deliberate choice, not a reflex. Ask yourself: Will this log help my future self at 3 a.m. when the system crashes? If not, delete it. Logs aren’t a narrative; they’re evidence. They should reveal what the system was thinking when things went wrong, not just add to the clutter.
Logs shouldn’t be an afterthought in the observability process. They’re not just a tool for confirmation but a map for discovery. Sometimes, the quickest path to insight is to dive into raw text—grep, filter, and trust your intuition. Logs invite curiosity, revealing nuances that metrics might overlook and context that traces can’t capture. Treat them as a living source of truth, not a last resort.
Context is everything. A log like “Error occurred” is useless without inputs, IDs, or state. Add context—request IDs, user IDs, input parameters, operation names. With OpenTelemetry, trace and span IDs are readily available. Use them. Logs connected to traces and metrics by trace IDs are infinitely more valuable than isolated lines of text. They transform noise into evidence, tightly scoped and directly tied to the path of a single request.
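As a minimal sketch of what "log with context" can look like, here is a stdlib-Python example; the field names, IDs, and the `log_event` helper are illustrative assumptions, not a standard API:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("payments")

def log_event(level: int, message: str, **context) -> str:
    """Emit one JSON log line carrying explicit context fields."""
    line = json.dumps({"message": message, **context})
    log.log(level, line)
    return line

# A context-rich error beats a bare "Error occurred":
log_event(
    logging.ERROR,
    "payment authorization failed",
    request_id="req-7f3a",  # hypothetical identifiers for illustration
    user_id="user-1042",
    trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
    operation="authorize_payment",
)
```

In a real OpenTelemetry setup, the trace and span IDs would come from the active span context rather than being passed in by hand.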
Structured logs are the future. Free-text logging is outdated. Structured logs—whether JSON, CSV, or key-value—aren’t just easier to query; they’re the foundation for analytics. Once logs have structure, patterns emerge: “This error spiked last week,” “This happens after event X,” “This warning correlates with this deployment.” The future of logging isn’t about reading one line; it’s about seeing the pattern across thousands.
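Once logs carry structure, "seeing the pattern across thousands" becomes a few lines of analysis instead of an eyeball scan. A toy sketch, assuming JSON-lines logs with hypothetical field names:

```python
import json
from collections import Counter

# Three structured log lines, as they might arrive from a log stream:
lines = [
    '{"level": "ERROR", "operation": "authorize_payment", "deploy": "v42"}',
    '{"level": "ERROR", "operation": "authorize_payment", "deploy": "v42"}',
    '{"level": "WARN", "operation": "refund", "deploy": "v41"}',
]

events = [json.loads(line) for line in lines]

# Which operation is erroring, and how often?
errors_by_op = Counter(e["operation"] for e in events if e["level"] == "ERROR")
print(errors_by_op.most_common(1))  # -> [('authorize_payment', 2)]
```

The same query against free-text logs would require a regex per message format; against structured logs it is a trivial group-by, which is exactly what column-oriented log stores do at scale.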
But here’s a counterpoint: While structured logging is powerful, many observability platforms offer schema-on-read, which is flexible but costly. Every query forces the system to scan and parse raw text, line by line, to infer a structure that should have existed from the start. These queries are slow, expensive, and harder to write. Prestructured logs avoid this inefficiency, enabling column-oriented storage and native aggregation—querying, visualizing, and correlating events in milliseconds instead of minutes.
Know when to measure, not just when to log. Not every event belongs in the log stream. Some things need structure and timing—exactly what spans and metrics are for. Measuring latency, user flow, or distributed causality? Emit a span. Spans capture duration, context, and relationships across services, telling you why something was slow or broken, not just that it happened. The same goes for metrics: turn repetitive logs into actionable signals you can alert on and aggregate efficiently.
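What a span adds over a log line can be sketched in plain Python; this is a toy illustration of the idea, not the OpenTelemetry API (names and the in-memory `RECORDS` exporter are assumptions):

```python
import time
from contextlib import contextmanager

RECORDS = []  # stand-in for a span exporter backend

@contextmanager
def span(name: str, **attributes):
    """Toy span: records duration, context, and outcome in one structure."""
    start = time.perf_counter()
    error = None
    try:
        yield
    except Exception as exc:
        error = repr(exc)
        raise
    finally:
        duration_ms = (time.perf_counter() - start) * 1000
        RECORDS.append({
            "span": name,
            "duration_ms": round(duration_ms, 2),
            "error": error,
            **attributes,
        })

with span("lookup_user", user_id="user-1042"):
    time.sleep(0.01)  # stand-in for a database call
```

One span record answers "why was this slow" (duration, attributes, error state) where several scattered log lines would only say that something happened; a real tracing SDK additionally links spans across services into a single request path.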
Log levels are for humans, not machines. Logging isn’t a personal debugging diary; it’s a shared artifact for your team. Every line should be clear and purposeful. Write logs for the next incident, not your current mood. For example:
- ERROR: Page a human. Something’s broken.
- WARN: Unexpected but survivable. Investigate later.
- INFO: Routine system behavior worth knowing.
- DEBUG/TRACE: Temporary developer insight—should rarely leave your laptop.
But here’s a thought-provoking question: Is trace logging ever justified? For instance, at ClickHouse Cloud, we trace-log extensively to diagnose performance issues and support customers at scale. It’s a deliberate exception, necessary for operating a distributed database serving thousands of workloads in real time. But for most applications, this level of verbosity isn’t observability—it’s noise.
Tools exist to help you log less and log smarter. Modern OpenTelemetry SDKs and the OpenTelemetry Collector let you be prescriptive about what you log. Instrument your code to emit only meaningful log lines, and filter or drop everything else at collection or ingest time. For example, the Collector's filter processor supports dropping unwanted logs using conditions on severity, resource attributes, or content patterns. If your platform allows it, filter at the agent, at a Collector gateway, or at ingest to prevent unnecessary logs from ever being written, stored, or indexed, saving compute, storage, and query costs.
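As a sketch of what that looks like in a Collector configuration (the processor name, service name, and pipeline wiring are illustrative; check the filter processor documentation for the exact OTTL conditions your Collector version supports):

```yaml
processors:
  filter/drop-noise:
    error_mode: ignore
    logs:
      log_record:
        # Drop DEBUG/INFO records from a known-chatty service
        - 'severity_number < SEVERITY_NUMBER_WARN and resource.attributes["service.name"] == "checkout"'
        # Drop heartbeat chatter everywhere
        - 'IsMatch(body, ".*heartbeat.*")'

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop-noise]
      exporters: [otlphttp]
```

Records matched by either condition are dropped before they are exported, so they never reach storage or indexing.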
Log with purpose, or don’t log at all. Observability isn’t about volume; it’s about clarity. Every log line should earn its place by explaining something that metrics and traces can’t. Logging without intent just burns money and buries insight. Be deliberate. Use structure. Add context. Know when to measure, when to trace, and when to say nothing. Modern tools make discipline easier, but the discipline still has to come from you.
In the end, great logging isn’t about capturing everything that happens. It’s about capturing what matters. So, here’s the question for you: How are you ensuring your logs are purposeful and not just noise? Share your thoughts in the comments—let’s spark a discussion!