The description is as follows:
Elasticsearch is an open-source search engine and analytics store used by a variety of applications from search in e-commerce stores, to internal log management tools using the ELK stack (short for “Elasticsearch, Logstash, Kibana”).
A post on installation and configuration is here
Other Options
1. Jaeger + Elasticsearch.
2. Fluentd + Elasticsearch. An example is here
Elastic Stack - Log Management Solutions
The solutions are as follows, varying by whether Logstash and Filebeat are used:
1. Application -> Filebeat -> Logstash -> Elasticsearch
2. Application -> Filebeat -> Elasticsearch
3. Application (Java) + Logstash-logback-encoder -> Logstash -> Elasticsearch
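For option 2, Filebeat ships directly to Elasticsearch with no Logstash in between. A minimal filebeat.yml sketch might look like this (the paths and the Elasticsearch host name are assumptions):

```yaml
# Sketch of option 2: Filebeat reads log files and ships them
# directly to Elasticsearch, with no Logstash in between.
filebeat.inputs:
  - type: log
    paths:
      - '/var/log/*.log'               # assumed log location
output.elasticsearch:
  hosts: [ "elasticsearch:9200" ]      # assumed Elasticsearch address
```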
Logstash-logback-encoder
No Filebeat is needed. The application attaches a new appender to Logback and ships its logs straight to Logstash over the network (no intermediate log file), and Logstash forwards them to Elasticsearch.
Maven
We do it like this:
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>7.3</version>
  <scope>runtime</scope>
</dependency>
logback.xml looks like this. Here LogstashTcpSocketAppender is used as the appender.
<property name="STACK_TRACE_COUNT" value="15"/>
<property name="CLASS_NAME_LENGTH" value="40"/>

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>192.168.1.1:4560</destination>
  <addDefaultStatusListener>false</addDefaultStatusListener>
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <pattern>
        <pattern>{"app_name": "myapp", "app_version":"1.0.0", "hostname": "${HOSTNAME}"}</pattern>
      </pattern>
      <mdc/>
      <timestamp/>
      <message/>
      <threadName/>
      <logLevel/>
      <callerData/>
      <stackTrace>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
          <maxDepthPerThrowable>${STACK_TRACE_COUNT}</maxDepthPerThrowable>
          <shortenedClassNameLength>${CLASS_NAME_LENGTH}</shortenedClassNameLength>
          <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
      </stackTrace>
    </providers>
  </encoder>
</appender>

<root level="${ROOT_LEVEL}">
  <appender-ref ref="CONSOLE"/>
  <appender-ref ref="LOGSTASH"/>
</root>
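LogstashTcpSocketAppender pushes newline-delimited JSON over TCP, so the Logstash side needs a matching tcp input. A minimal pipeline sketch (the port follows the destination above; the Elasticsearch host is an assumption):

```conf
# Sketch: Logstash pipeline receiving events from LogstashTcpSocketAppender.
# The appender sends newline-delimited JSON, hence the json_lines codec.
input {
  tcp {
    port  => 4560                      # must match <destination> in logback.xml
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]  # assumed Elasticsearch address
  }
}
```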
Example
In logback.xml we do it like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <includeMdc>true</includeMdc>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON"/>
  </root>
</configuration>
The log output now looks like this:
{
  "@timestamp": "2023-06-17T13:41:01.134+01:00",
  "@version": "1",
  "message": "Hello World",
  "logger_name": "no.f12.application",
  "userId": "user-id-something",
  "documentId": "document-id",
  "documentType": "legal",
  "thread_name": "somethread",
  "level": "INFO",
  "level_value": 20000
}
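The userId, documentId and documentType fields in this output are not standard Logback fields; they typically come from the SLF4J MDC, which LogstashEncoder serializes because includeMdc is enabled. A sketch of the producing side (the class and key names are assumptions; slf4j-api and the config above must be on the classpath):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Sketch: putting request-scoped fields into the MDC so that
// LogstashEncoder (with includeMdc=true) emits them as JSON fields.
public class DocumentService {
    private static final Logger log = LoggerFactory.getLogger(DocumentService.class);

    void process(String userId, String documentId) {
        MDC.put("userId", userId);          // becomes "userId" in the JSON log line
        MDC.put("documentId", documentId);  // becomes "documentId"
        MDC.put("documentType", "legal");
        try {
            log.info("Hello World");        // encoder appends all current MDC entries
        } finally {
            MDC.clear();                    // avoid leaking context across threads
        }
    }
}
```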
Beats + Logstash + Elasticsearch + Kibana Option
The layout is as follows: it consists of Beats + Logstash + Elasticsearch + Kibana. The description is below. Although the diagrams label it "Beat", the container name is Filebeat.
Elasticsearch: Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
Logstash: Logstash is a lightweight, open-source, server-side data processing pipeline that allows you to collect data from a variety of sources, transform it on the fly, and send it to your desired destination.
Kibana: Kibana is a free and open frontend application that sits on top of the Elastic Stack, providing search and data visualization capabilities for data indexed in Elasticsearch.
Filebeat: Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.
Filebeat
Moved to the Filebeat post
Kibana
Moved to the Kibana post
Example - Using Everything Together
Let filebeat.cm.yaml be like this. It reads the /var/log/*.log files and sends them to Logstash:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    component: filebeat
data:
  conf.yaml: |
    filebeat.inputs:
      - type: log
        paths:
          - '/var/log/*.log'
    output:
      logstash:
        hosts: [ "logstash:5044" ]
Let our application and Filebeat be like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    component: busybox
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: busybox
  template:
    metadata:
      labels:
        component: busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          args:
            - sh
            - -c
            - >
              while true; do
                echo $(date) - filebeat log >> /var/log/access.log;
                sleep 10;
              done
          volumeMounts:
            - name: log
              mountPath: /var/log
        - name: filebeat
          image: elastic/filebeat:7.16.3
          args:
            - -c
            - /etc/filebeat/conf.yaml
            - -e
          volumeMounts:
            - name: filebeat-config
              mountPath: /etc/filebeat
            - name: log
              mountPath: /var/log
      volumes:
        - name: log
          emptyDir: {}
        - name: filebeat-config
          configMap:
            name: filebeat-config
The explanation is as follows:
In the Pod above we mount the Filebeat configuration file into the /etc/filebeat/conf.yaml file and use the args to specify that configuration file for Filebeat.
Our application container writes a log line to the file /var/log/access.log every 10s. We use an emptyDir volume to share storage between the two containers.
Let logstash.cm.yaml be like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash
  labels:
    component: logstash
data:
  access-log.conf: |
    input {
      beats {
        port => "5044"
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch:9200" ]
      }
    }
Let logstash.yaml be like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    component: logstash
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: logstash
  template:
    metadata:
      labels:
        component: logstash
    spec:
      containers:
        - name: logstash
          image: logstash:7.16.3
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: logstash-config
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: logstash-config
          configMap:
            name: logstash
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  labels:
    component: logstash
spec:
  ports:
    - port: 5044
  selector:
    component: logstash
Let Elasticsearch be like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.16.3
          ports:
            - containerPort: 9200
              name: client
            - containerPort: 9300
              name: nodes
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -Xmx256m -Xms256m
            - name: discovery.type
              value: single-node
          resources:
            requests:
              memory: 500Mi
              cpu: 0.5
            limits:
              memory: 500Mi
              cpu: 0.5
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  ports:
    - port: 9200
      name: client
    - port: 9300
      name: nodes
  selector:
    component: elasticsearch
Let Kibana be like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: kibana:7.16.3
          ports:
            - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  ports:
    - port: 5601
  selector:
    component: kibana
The explanation is as follows:
Now, go to menu Stack Management > Index patterns and create an index pattern, then go to menu Discover and you’ll see the logs we collected from the busybox container.
Example - Kibana
We do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  replicas: 1
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.5.2
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 1024m
              memory: 512Mi
          env:
            - name: ELASTICSEARCH_HOSTS
              value: '["http://elastic-svc:9200"]'
            - name: SERVER_NAME
              value: 'https://kibana.example.com'
          ports:
            - containerPort: 5601
              name: kibana
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
  namespace: default
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
    - port: 5601
      targetPort: 5601
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: kibana-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  tls:
    - hosts:
        - kibana.example.com
      secretName: kibana-tls
  rules:
    - host: kibana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-svc
                port:
                  number: 5601