Introduction
In modern application development, especially in microservices and distributed systems, logs are one of the most important sources of information for debugging, monitoring, and performance analysis. When applications run across multiple servers or containers, logs get scattered, making it difficult to trace issues.
Centralized logging solves this problem by collecting logs from different services and storing them in a single place where they can be searched, analyzed, and visualized.
One of the most popular solutions for centralized logging is the ELK Stack, which consists of Elasticsearch, Logstash, and Kibana.
This article provides a detailed, step-by-step guide on how to implement centralized logging using the ELK stack, along with examples, real-world use cases, advantages, and best practices.
What is ELK Stack?
ELK stands for:
Elasticsearch → Search and analytics engine
Logstash → Data processing and pipeline tool
Kibana → Visualization and dashboard tool
These three components work together to collect, process, store, and visualize logs.
How ELK Stack Works
Flow of Data
Applications generate logs
Logs are collected using agents (e.g., Filebeat)
Logstash processes and transforms logs
Elasticsearch stores and indexes logs
Kibana is used to search and visualize logs
This pipeline enables real-time log monitoring and analysis.
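The flow above can be sketched as plain Python functions, one per stage. This is a toy illustration only (the field names and hostname are made up, and nothing here uses the real components); it just shows how a raw line becomes a structured document in a dated index.

```python
from datetime import date

def generate():                 # application writes a log line
    return "2024-01-01 12:00:00 INFO Application started"

def ship(line):                 # Filebeat wraps the line in an event with metadata
    return {"message": line, "host": "web-01"}

def process(event):             # Logstash parses/enriches the event
    event["level"] = event["message"].split()[2]
    return event

store = {}                      # Elasticsearch groups documents into dated indices

def index(event):
    name = "app-logs-" + date.today().strftime("%Y.%m.%d")
    store.setdefault(name, []).append(event)
    return name

name = index(process(ship(generate())))
print(name, store[name][0]["level"])
```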
Step 1: Install Elasticsearch
Elasticsearch is responsible for storing and indexing logs. Note that it is not in the default Ubuntu repositories, so you need to add the Elastic APT repository (and its GPG key) before installing:
sudo apt update
sudo apt install elasticsearch
Start and enable the service:
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
Explanation
Elasticsearch runs as a service
It stores logs in a searchable format
It provides fast querying capabilities
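Elasticsearch's fast querying comes largely from an inverted index, which maps each term to the documents that contain it, so a search is a lookup rather than a scan over every log line. A minimal Python sketch of the idea (not Elasticsearch's actual implementation):

```python
from collections import defaultdict

# Three sample log messages, keyed by document id.
docs = {
    1: "payment service timeout",
    2: "user login succeeded",
    3: "payment retried after timeout",
}

# Build the inverted index: term -> set of document ids containing it.
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

# Querying is now a set lookup instead of scanning every document.
print(sorted(inverted["timeout"]))  # → [1, 3]
```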
Step 2: Install Logstash
Logstash collects and processes logs before sending them to Elasticsearch.
sudo apt install logstash
Create a Logstash configuration file (for example, /etc/logstash/conf.d/app.conf):
input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
Explanation
Input listens for logs from Beats (like Filebeat) on port 5044
Filter parses the JSON in each event's message field into structured fields
Output sends logs to Elasticsearch, writing to a new index each day (per the %{+YYYY.MM.dd} pattern)
After changing the configuration, restart Logstash: sudo systemctl restart logstash
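The effect of the json filter can be mimicked in a few lines of Python: the raw message field is parsed as JSON and its keys become top-level fields of the event. (When parsing fails, Logstash tags the event with _jsonparsefailure, which this sketch imitates.)

```python
import json

def apply_json_filter(event, source="message"):
    """Mimic Logstash's json filter: parse the `source` field as JSON
    and merge the resulting keys into the event."""
    try:
        event.update(json.loads(event[source]))
    except (KeyError, ValueError):
        # Mirror Logstash's behavior of tagging unparseable events.
        event["tags"] = event.get("tags", []) + ["_jsonparsefailure"]
    return event

event = {"message": '{"level": "ERROR", "service": "orders"}'}
print(apply_json_filter(event)["level"])  # prints "ERROR"
```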
Step 3: Install Kibana
Kibana provides a UI to visualize and analyze logs.
sudo apt install kibana
Start the service:
sudo systemctl start kibana
sudo systemctl enable kibana
Access Kibana:
http://localhost:5601
Explanation
Kibana connects to Elasticsearch
It allows searching logs using queries
It provides dashboards and visualizations
Step 4: Install Filebeat (Log Shipper)
Filebeat is used to collect logs from applications and send them to Logstash.
sudo apt install filebeat
Configure Filebeat
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

output.logstash:
  hosts: ["localhost:5044"]
Start Filebeat:
sudo systemctl start filebeat
sudo systemctl enable filebeat
Explanation
Filebeat tails every file matching the configured paths (here, all .log files under /var/log)
Each new line is shipped to Logstash on port 5044
Filebeat records its read position, so a restart does not lose or duplicate log lines
You can verify the setup with filebeat test config and filebeat test output
Step 5: Generate Logs from Application
Example in a .NET application:
using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app.log", rollingInterval: RollingInterval.Day)
    .CreateLogger();

Log.Information("Application started");
Explanation
Serilog (with the Serilog.Sinks.File package) writes log events to logs/app.log
rollingInterval: RollingInterval.Day starts a new file each day, which keeps files small
Filebeat picks these files up (if the path is included in filebeat.inputs) and ships them through the pipeline
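The Serilog example above writes plain-text lines, while the json filter from Step 2 expects JSON. As an illustration of structured JSON logging (a Python sketch using only the standard library; the field names are a common but arbitrary choice):

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line,
    ready for the Logstash json filter to parse."""
    def format(self, record):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
        })

# In production this handler would write to a file under Filebeat's paths;
# stdout is used here so the example is self-contained.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("Application started")
```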
Step 6: Visualize Logs in Kibana
Open Kibana dashboard
Create an index pattern (e.g., app-logs-*)
Use Discover tab to search logs
Build dashboards for monitoring
Example Queries
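A few Kibana Query Language (KQL) searches you might run in the Discover tab. The field names level, message, and service are illustrative and depend on the structure of your own logs:

```
level : "Error"
message : *timeout*
service : "orders" and level : "Error"
```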
Real-World Use Cases
Monitoring microservices logs in production
Debugging distributed systems
Tracking user activity
Detecting security issues
Advantages of ELK Stack
Centralized search across all services and hosts
Near real-time log analysis
Powerful visualizations and dashboards in Kibana
Scales horizontally as log volume grows
Open source with a large ecosystem of integrations (Beats, plugins)
Disadvantages of ELK Stack
Requires setup and maintenance
Can consume significant resources
Learning curve for beginners
Best Practices
Use structured logging (JSON format)
Rotate logs to avoid disk issues
Secure Elasticsearch with authentication
Monitor ELK performance
Use dashboards for better insights
Summary
Centralized logging using the ELK stack is an essential practice for modern applications. It helps developers and DevOps teams collect, process, and analyze logs efficiently. By combining Elasticsearch, Logstash, and Kibana, you can build a powerful logging system that improves debugging, monitoring, and overall system reliability.