Monitor your logging pipeline with Prometheus Operator

Architecture

You can configure the Logging operator to expose metrics endpoints for Fluentd, Fluent Bit, and syslog-ng using ServiceMonitor resources. That way, a Prometheus Operator running in the same cluster can automatically discover and scrape your logging metrics.
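When serviceMonitor is enabled, the operator generates ServiceMonitor objects that point Prometheus at the metrics endpoints. The sketch below shows roughly what such an object looks like; the exact names, labels, and selectors are illustrative, not the operator's actual output:

```yaml
# Illustrative sketch only -- the real ServiceMonitor is generated by the
# Logging operator, and its names and labels may differ.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: default-logging-simple-fluentd-metrics   # hypothetical name
  namespace: logging
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fluentd            # hypothetical selector
  endpoints:
    - port: metrics
      path: /metrics
      interval: 15s
      scrapeTimeout: 5s
```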

Metrics Variables

You can configure the following metrics-related options in the spec.fluentd.metrics, spec.syslogNG.metrics, and spec.fluentbit.metrics sections of your Logging resource.

| Variable Name | Type | Required | Default | Description |
|---------------|------|----------|---------|-------------|
| interval | string | No | "15s" | Scrape interval |
| timeout | string | No | "5s" | Scrape timeout |
| port | int | No | - | Metrics port |
| path | string | No | - | Metrics path |
| serviceMonitor | bool | No | false | Enable to create a ServiceMonitor for the Prometheus Operator |
| prometheusAnnotations | bool | No | false | Add Prometheus annotations to the fluent pods |

For example:

spec:
  fluentd:
    metrics:
      serviceMonitor: true
  fluentbit:
    metrics:
      serviceMonitor: true
  syslogNG:
    metrics:
      serviceMonitor: true
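The other options from the table can be set in the same sections. The values below are illustrative, not recommendations:

```yaml
# Illustrative values only -- tune interval and timeout to your needs.
spec:
  fluentd:
    metrics:
      serviceMonitor: true
      interval: 30s
      timeout: 10s
      prometheusAnnotations: true
```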

For more details on installing the Prometheus operator and configuring and accessing metrics, see the following procedures.

Install Prometheus Operator with Helm

  1. Create logging namespace

    kubectl create namespace logging
    
  2. Install Prometheus Operator

     helm upgrade --install --wait --create-namespace --namespace logging monitor stable/prometheus-operator \
        --set "grafana.dashboardProviders.dashboardproviders\\.yaml.apiVersion=1" \
        --set "grafana.dashboardProviders.dashboardproviders\\.yaml.providers[0].orgId=1" \
        --set "grafana.dashboardProviders.dashboardproviders\\.yaml.providers[0].type=file" \
        --set "grafana.dashboardProviders.dashboardproviders\\.yaml.providers[0].disableDeletion=false" \
        --set "grafana.dashboardProviders.dashboardproviders\\.yaml.providers[0].options.path=/var/lib/grafana/dashboards/default" \
        --set "grafana.dashboards.default.logging.gnetId=7752" \
        --set "grafana.dashboards.default.logging.revision=5" \
        --set "grafana.dashboards.default.logging.datasource=Prometheus" \
        --set "prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=False"
    

    The prometheus-operator install may take a few minutes; please be patient. The metrics features of the Logging operator depend on resources created by the Prometheus Operator (for example, the ServiceMonitor custom resource definition). If these do not exist in the cluster, the Logging operator may malfunction. For details, see the Prometheus Operator documentation.

Install Logging Operator with Helm

  1. Install the Logging operator into the logging namespace:

    helm upgrade --install --wait --create-namespace --namespace logging logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator
    

    Expected output:

    Release "logging-operator" does not exist. Installing it now.
    Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.3.0
    Digest: sha256:c2ece861f66a3a2cb9788e7ca39a267898bb5629dc98429daa8f88d7acf76840
    NAME: logging-operator
    LAST DEPLOYED: Wed Aug  9 11:02:12 2023
    NAMESPACE: logging
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    

    Note: Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repo is public. Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522

Install Minio

  1. Create Minio Credential Secret

    kubectl -n logging create secret generic logging-s3 --from-literal=accesskey='AKIAIOSFODNN7EXAMPLE' --from-literal=secretkey='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
    
  2. Deploy Minio

    kubectl -n logging apply -f - <<"EOF"
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: minio-deployment
      namespace: logging
    spec:
      selector:
        matchLabels:
          app: minio
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: minio
        spec:
          containers:
          - name: minio
            image: minio/minio
            args:
            - server
            - /storage
            readinessProbe:
              httpGet:
                path: /minio/health/ready
                port: 9000
              initialDelaySeconds: 10
              periodSeconds: 5
            env:
            - name: MINIO_REGION
              value: 'test_region'
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: logging-s3
                  key: accesskey
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: logging-s3
                  key: secretkey
            ports:
            - containerPort: 9000
          volumes:
            - name: logging-s3
              secret:
                secretName: logging-s3
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nginx-demo-minio
      namespace: logging
    spec:
      selector:
        app: minio
      ports:
      - protocol: TCP
        port: 9000
        targetPort: 9000
    
    EOF
    
  3. Create logging resource

    kubectl apply -f - <<"EOF"
    apiVersion: logging.banzaicloud.io/v1beta1
    kind: Logging
    metadata:
      name: default-logging-simple
    spec:
      fluentd:
        metrics:
          serviceMonitor: true
      fluentbit:
        metrics:
          serviceMonitor: true
      controlNamespace: logging
    EOF
    

    Note: ClusterOutput and ClusterFlow resources are only accepted in the controlNamespace.

  4. Create Minio output definition

    kubectl -n logging apply -f - <<"EOF"
    apiVersion: logging.banzaicloud.io/v1beta1
    kind: Output
    metadata:
      name: demo-output
    spec:
      s3:
        aws_key_id:
          valueFrom:
            secretKeyRef:
              key: accesskey
              name: logging-s3
        aws_sec_key:
          valueFrom:
            secretKeyRef:
              key: secretkey
              name: logging-s3
        buffer:
          timekey: 10s
          timekey_use_utc: true
          timekey_wait: 0s
        force_path_style: "true"
        path: logs/${tag}/%Y/%m/%d/
        s3_bucket: demo
        s3_endpoint: http://nginx-demo-minio.logging.svc.cluster.local:9000
        s3_region: test_region
    EOF
    

    Note: For production setups, we recommend using a longer timekey interval to avoid generating too many objects.
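A production-oriented buffer section might look like the following sketch; the exact values are illustrative and depend on your log volume and on how quickly logs need to appear in the bucket:

```yaml
# Illustrative production values -- longer timekey means fewer, larger objects.
buffer:
  timekey: 1h
  timekey_use_utc: true
  timekey_wait: 10m
```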

  5. Create flow resource

    kubectl -n logging apply -f - <<"EOF"
    apiVersion: logging.banzaicloud.io/v1beta1
    kind: Flow
    metadata:
      name: demo-flow
    spec:
      filters:
        - tag_normaliser: {}
        - parser:
            remove_key_name_field: true
            reserve_data: true
            parse:
              type: nginx
      match:
        - select:
            labels:
              app.kubernetes.io/instance: log-generator
              app.kubernetes.io/name: log-generator
      localOutputRefs:
        - demo-output
    EOF
    
  6. Install log-generator to produce logs with the label app.kubernetes.io/name: log-generator

    helm upgrade --install --wait --create-namespace --namespace logging log-generator oci://ghcr.io/kube-logging/helm-charts/log-generator
    

Validation

Minio

  1. Get Minio login credentials

    kubectl -n logging get secrets logging-s3 -o json | jq '.data | map_values(@base64d)'
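If jq is not available, you can decode a single key with base64 alone. The snippet below substitutes a literal for the kubectl output (the base64 encoding of the example access key from step 1 of the Minio setup), so the decoding step can be seen in isolation:

```shell
# Equivalent in effect to:
#   kubectl -n logging get secret logging-s3 -o jsonpath='{.data.accesskey}' | base64 --decode
# The literal below is the base64 encoding of the example access key.
encoded='QUtJQUlPU0ZPRE5ON0VYQU1QTEU='
printf '%s' "$encoded" | base64 --decode; echo
```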
    
  2. Forward Service

    kubectl -n logging port-forward svc/nginx-demo-minio 9000
    
  3. Open the Minio Dashboard: http://localhost:9000

    Minio dashboard

Prometheus

  1. Forward Service

    kubectl -n logging port-forward svc/monitor-prometheus-operato-prometheus 9090
    
  2. Open the Prometheus Dashboard: http://localhost:9090

    Prometheus dashboard

Grafana

  1. Get Grafana login credentials

    kubectl get secret --namespace logging monitor-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    

    Default username: admin

  2. Forward Service

    kubectl -n logging port-forward svc/monitor-grafana 3000:80
    
  3. Open Grafana Dashboard: http://localhost:3000

    Grafana dashboard