Store NGINX access logs in Elasticsearch with Logging operator
This guide describes how to collect application and container logs in Kubernetes using the Logging operator, and how to send them to Elasticsearch.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which logs to forward to the output, and sends the selected log messages to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy Elasticsearch
First, deploy Elasticsearch in your Kubernetes cluster. The following procedure is based on the Elastic Cloud on Kubernetes quickstart, but there are some minor configuration changes, and we install everything into the logging namespace.
- Install the Elasticsearch operator. (All four steps are sketched after this list.)
- Create the logging namespace.
- Install the Elasticsearch cluster into the logging namespace.
- Install Kibana into the logging namespace.
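The kubectl commands and manifests for these steps are not included in this copy of the guide. The following is a minimal sketch based on the Elastic Cloud on Kubernetes (ECK) quickstart; the ECK and Elasticsearch versions, the cluster name quickstart, and the single-node layout are assumptions, so adjust them to your environment.

```bash
# Install the ECK operator (CRDs and operator). The version in the URLs is an
# example; use the version from the current ECK quickstart.
kubectl create -f https://download.elastic.co/downloads/eck/2.12.1/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.12.1/operator.yaml

# Create the logging namespace.
kubectl create namespace logging

# Install a single-node Elasticsearch cluster into the logging namespace.
kubectl apply -n logging -f - <<"EOF"
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.13.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
EOF

# Install Kibana into the logging namespace and point it at the cluster.
kubectl apply -n logging -f - <<"EOF"
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.13.0
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
```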
Deploy the Logging operator and a demo application
Install the Logging operator and a demo application to provide sample log messages.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete the following steps.
Note: You need Helm v3.8 or later to be able to install the chart from an OCI registry.
- Install the Logging operator into the logging namespace. A sketch of the helm command follows the note below; on success, helm reports the release as deployed.
Note:
- Helm has a known issue in version 3.13.0 that requires users to log in to the registry, even though the repository is public. Upgrade to 3.13.1 or higher to avoid having to log in, see: https://github.com/kube-logging/logging-operator/issues/1522
- If you’re installing the Helm chart from Terraform, reference the repository as repository = "oci://ghcr.io/kube-logging/helm-charts/" (without the logging-operator suffix). Otherwise, you’ll get a 403 Forbidden error.
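The install command itself is missing from this copy. A minimal sketch, assuming you install the chart straight from the project's OCI registry (pin a chart version with --version if you need a specific release):

```bash
# Install (or upgrade) the Logging operator into the logging namespace.
# Requires Helm v3.8 or later for OCI registry support.
helm upgrade --install --wait \
  --namespace logging --create-namespace \
  logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator
```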
Configure the Logging operator
- Create the logging resource (first sketch after this list). Note: You can use the ClusterOutput and ClusterFlow resources only in the controlNamespace.
- Create an Elasticsearch output definition (second sketch after this list). Note: In production environments, use a longer timekey interval to avoid generating too many objects.
- Create a flow resource (third sketch after this list). Mind the label selector in the match field: it selects the set of pods of the demo application that we install in the next step.
- Install the demo application (fourth sketch after this list).
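The manifests for these four steps were stripped from this copy of the guide; the sketches below show plausible minimal versions, with the resource names (demo, es-output, es-flow) chosen only for illustration. First, a Logging resource that deploys Fluentd and Fluent Bit and uses logging as the control namespace:

```bash
kubectl apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: demo
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
EOF
```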
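Next, an Elasticsearch output. The host and the password secret follow the naming convention ECK uses for a cluster called quickstart; the short timekey matches the note above about using a longer interval in production:

```bash
kubectl apply -n logging -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output
spec:
  elasticsearch:
    host: quickstart-es-http.logging.svc.cluster.local
    port: 9200
    scheme: https
    ssl_verify: false
    user: elastic
    password:
      valueFrom:
        secretKeyRef:
          name: quickstart-es-elastic-user
          key: elastic
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
EOF
```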
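Then a flow whose match selector targets the demo application installed in the next step (the label value log-generator is an assumption based on the demo application's chart name), parses the NGINX-format log lines, and routes them to the output above:

```bash
kubectl apply -n logging -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: es-flow
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
  match:
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - es-output
EOF
```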
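Finally, the demo application. The kube-logging project publishes a log-generator chart that emits sample NGINX access-log lines; installing it from the same OCI registry is one option:

```bash
# Install the demo application that produces the sample NGINX access logs.
helm upgrade --install --wait \
  --namespace logging \
  log-generator oci://ghcr.io/kube-logging/helm-charts/log-generator
```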
Validate the deployment
To validate that the deployment was successful, complete the following steps.
- Check the Fluentd logs (first sketch after this list). Fluentd logs were written to the container filesystem up until Logging operator version 4.3; this was changed to stdout in 4.4. See FluentOutLogrotate for details on why this was changed and how you can re-enable the old behavior if needed.
- Use the command shown in the second sketch after this list to retrieve the password of the elastic user.
- Enable port forwarding to the Kibana Dashboard Service (also in the second sketch after this list).
- Open the Kibana dashboard in your browser at https://localhost:5601 and log in as elastic using the retrieved password.
- By default, the Logging operator sends the incoming log messages into an index called fluentd. Create an Index Pattern that includes this index (for example, fluentd*), then select Menu > Kibana > Discover. You should see the dashboard and some sample log messages from the demo application.
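The verification commands are missing from this copy. The first sketch below assumes the Logging resource was named demo, which determines the name of the Fluentd pod:

```bash
# Tail the Fluentd logs; the StatefulSet name is derived from the Logging resource name.
kubectl logs -f -n logging demo-fluentd-0
```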
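The second sketch retrieves the elastic user's password and forwards the Kibana service, again assuming the Elasticsearch and Kibana resources were both named quickstart:

```bash
# Print the password of the elastic user (the secret name follows the
# <cluster-name>-es-elastic-user convention used by ECK).
kubectl -n logging get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}'

# Forward the Kibana HTTP service to https://localhost:5601.
kubectl -n logging port-forward svc/quickstart-kb-http 5601
```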
If you don’t get the expected result, you can find help in the troubleshooting section.