Splunk
ELK!???
ELK at LinkedIn
● 100+ ELK clusters across 20+ teams and 6
data centers
● Some of our larger clusters have:
o 32+ billion docs (30+ TB)
o Daily indices averaging 3.0 billion docs (~3 TB)
ELK + Kafka
Summary: ELK is a popular open-source application stack for
visualizing and analyzing logs. ELK is currently used across
many teams within LinkedIn. The architecture we use is made up of
four components: Elasticsearch, Logstash, Kibana, and Kafka.
[Architecture diagram: Users → Kibana → Elasticsearch (data nodes); Logstash instances consume from Kafka and index into the Elasticsearch data nodes]
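The pipeline in the diagram can be sketched as a minimal Logstash config. This is illustrative only: the broker address, topic, and index names are placeholders, not LinkedIn's actual settings, and it uses the native kafka input plugin rather than the pipe-based approach shown in the extra slides.

```
input {
  kafka {
    # Hypothetical broker and topic; LinkedIn's actual setup differs
    bootstrap_servers => "kafka.example.com:9092"
    topics => ["app-logs"]
    codec => "json"
  }
}

output {
  elasticsearch {
    # Hypothetical data-node addresses
    hosts => ["es-data-1.example.com:9200", "es-data-2.example.com:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

Kafka sits between the producers and Logstash as a durable buffer, so a slow or restarting Logstash/Elasticsearch tier does not drop log events.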
Operational Challenges
● Data, lots of it.
o Transporting, queueing, storing, securing,
reliability…
o Ingesting & Indexing fast enough
o Scaling infrastructure
o Which data? (are we collecting the right data?)
o Formats, mapping, transformation
Data from many sources: Java, Scala, Python, Node.js, Go
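Logs from Java, Scala, Python, Node.js, and Go services rarely arrive in one shape, which is what makes formats, mapping, and transformation a challenge. A filter sketch like the following (field names are illustrative, not LinkedIn's schema) shows the kind of normalization involved:

```
filter {
  # Parse whatever timestamp field the service emits into @timestamp
  # ("log_timestamp" is an illustrative field name)
  date {
    match => ["log_timestamp", "ISO8601", "UNIX_MS"]
    target => "@timestamp"
  }
  # Map a source-specific field name onto a common one
  mutate {
    rename => { "svc" => "service" }
    remove_field => ["log_timestamp"]
  }
}
```

Without this kind of normalization, the same logical field ends up under different names and types per language stack, which breaks aggregated views across the infrastructure.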
Operational Challenges...
● Centralized vs Siloed Cluster Management
● Aggregated views of data across the entire
infrastructure
● Consistent view (trace up/down app stack)
● Scaling - horizontally or vertically?
● Monitoring, alerting, auto-remediating
The future of ELK at LinkedIn
● More ELK clusters being used by even more teams
● Clusters with 300+ billion docs (300+TB)
● Daily indices averaging 10+ billion docs (10+ TB) - moving to
hourly indices
● ~5,000 shards per cluster
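Moving from daily to hourly indices is largely a change to the index naming pattern in the elasticsearch output; smaller, more numerous indices keep per-index write load manageable at these volumes. A sketch (host and index prefix are illustrative):

```
output {
  elasticsearch {
    hosts => ["es.example.com:9200"]
    # Daily pattern: index => "logs-%{+YYYY.MM.dd}"
    # Adding an .HH component rolls over to a new index every hour
    index => "logs-%{+YYYY.MM.dd.HH}"
  }
}
```

The trade-off is shard count: 24 indices per day instead of one multiplies the number of shards the cluster must track, which is where figures like ~5,000 shards per cluster come from.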
Extra slides
The next two slides contain example Logstash
configs showing how we use the pipe input plugin
with the Kafka Console Consumer (KCC), and how to
monitor Logstash using the metrics filter.
KCC pipe input config
pipe {
  type => "mobile"
  command => "/opt/bin/kafka-console-consumer/kafka-console-consumer.sh \
    --formatter com.linkedin.avro.KafkaMessageJsonWithHexFormatter \
    --property schema.registry.url=http://schema-server.example.com:12250/schemaRegistry/schemas \
    --autocommit.interval.ms=60000 \
    --zookeeper zk.example.com:12913/kafka-metrics \
    --topic log_stash_event \
    --group logstash1"
  codec => "json"
}
Monitoring Logstash metrics
filter {
  metrics {
    meter => "events"
    add_tag => "metric"
  }
}
output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "Rate: %{events.rate_1m}"
      }
    }
  }
}