Logging

Clusters with our logging addon use an Elasticsearch / Fluentd / Kibana (EFK) stack. The stack comes installed and ready to use; you only need to add filters for your application.

Store application logs

Pod logs will not be picked up and stored in the Elasticsearch database unless the application has the label fluentd: "true":

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    fluentd: "true"
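Since Fluentd matches on pod labels, the label should also be set on the Deployment's pod template. A minimal complete Deployment could look like this; the name, image, and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                  # placeholder name
  labels:
    app: nginx
    fluentd: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        fluentd: "true"        # label on the pods, so their logs are collected
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # placeholder image
```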

Create a log output filter

Log output filters need to be added to the ConfigMap fluentd-filters in the logging namespace, for example:

apiVersion: v1
data:
  filters.conf: |
    <filter kubernetes.**>
        @type parser
        key_name log
        reserve_data true
        emit_invalid_record_to_error false
        <parse>
            @type regexp
            expression /^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"(?:\s+(?<http_x_forwarded_for>[^ ]+))?)?$/
            time_format %d/%b/%Y:%H:%M:%S %z
        </parse>
    </filter>
kind: ConfigMap
metadata:
  name: fluentd-filters
  namespace: logging
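The filter above parses Nginx-style access log lines. If your application emits JSON logs instead, a simpler parser filter can go into the same ConfigMap; this is a sketch, assuming one JSON object per log line:

```
<filter kubernetes.**>
    @type parser
    key_name log
    reserve_data true
    emit_invalid_record_to_error false
    <parse>
        @type json
    </parse>
</filter>
```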

Kibana Dashboard

The Kibana dashboard can be accessed through your ASERGO Dashboard.

The default username is elastic; the password can be retrieved with kubectl:

$ kubectl get secret -n default fluentd-es-elastic-user \
-o go-template='{{.data.elastic | base64decode }}'