Transport all logs into Amazon S3 with Logging operator
This guide describes how to collect all the container logs in Kubernetes using the Logging operator, and how to send them to Amazon S3.
The following figure gives you an overview of how the system works. The Logging operator collects the logs from the application, selects which log messages to forward, and sends them to the output. For more details about the Logging operator, see the Logging operator overview.
Deploy the Logging operator
Install the Logging operator.
Deploy the Logging operator with Helm
To install the Logging operator using Helm, complete these steps.
Note: For the Helm-based installation you need Helm v3.2.1 or later.
- Add the chart repository of the Logging operator using the following commands:
- Install the Logging operator into the `logging` namespace:
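Adding the chart repository might look like the following. The repository name and URL are assumptions here; check the project's installation documentation for the canonical location.

```shell
# Add the Logging operator chart repository and refresh the local chart cache.
# Repository name and URL are assumptions; verify against the project's docs.
helm repo add kube-logging https://kube-logging.github.io/helm-charts
helm repo update
```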
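The install step can then be sketched as below; the release name `logging-operator` and the chart reference are assumptions, and `--create-namespace` creates the `logging` namespace if it does not exist yet.

```shell
# Install (or upgrade) the Logging operator into the "logging" namespace.
# Release and chart names are assumptions; adjust to your environment.
helm upgrade --install --wait \
  --create-namespace --namespace logging \
  logging-operator kube-logging/logging-operator
```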
Configure the Logging operator
- Create an AWS `secret`. If you have your `$AWS_ACCESS_KEY_ID` and `$AWS_SECRET_ACCESS_KEY` environment variables set, you can use the following snippet; otherwise, set up the secret manually.
- Create the `logging` resource. Note: You can use the `ClusterOutput` and `ClusterFlow` resources only in the `controlNamespace`.
- Create an S3 `output` definition. Note: In a production environment, use a longer `timekey` interval to avoid generating too many objects.
- Create a `flow` resource. (Mind the label selector in the `match` that selects the set of pods that we will install in the next step.)
- Install log-generator to produce logs with the label `app.kubernetes.io/name: log-generator`.
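The secret from the first step can be created directly from the environment variables. The secret name `logging-s3` and the key names are assumptions; whatever you choose must match the secret references in your S3 output definition.

```shell
# Create a generic secret holding the AWS credentials.
# The secret name ("logging-s3") and key names ("awsAccessKeyId",
# "awsSecretAccessKey") are assumptions; keep them consistent with the Output.
kubectl -n logging create secret generic logging-s3 \
  --from-literal "awsAccessKeyId=$AWS_ACCESS_KEY_ID" \
  --from-literal "awsSecretAccessKey=$AWS_SECRET_ACCESS_KEY"
```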
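A minimal `Logging` resource might look like the following sketch; the resource name `demo` is an assumption, and `controlNamespace` points at the namespace the operator was installed into.

```shell
# Create the Logging resource: it deploys fluentbit/fluentd and designates
# the control namespace in which ClusterOutput/ClusterFlow resources may live.
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: demo
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
EOF
```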
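An S3 `Output` could be sketched as below. The bucket name, region, and path are placeholders, and the secret references assume a secret named `logging-s3` with the keys shown. The quoted heredoc delimiter keeps the shell from expanding `${tag}`, which fluentd interprets itself. The short `timekey` is for the demo only.

```shell
# Create the S3 Output. Bucket, region, and path are placeholder values.
# The 10m timekey suits a demo; use a longer interval in production to
# avoid generating too many objects.
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: logging-s3
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: logging-s3
          key: awsSecretAccessKey
    s3_bucket: example-logging-bucket
    s3_region: eu-central-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m
      timekey_wait: 30s
      timekey_use_utc: true
EOF
```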
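A matching `Flow` might look like this; the flow name `s3-flow` and the output name it references are assumptions, and the `match` selector picks up pods carrying the log-generator label.

```shell
# Create the Flow: the match selector routes logs from pods labeled
# app.kubernetes.io/name=log-generator to the referenced S3 output.
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: s3-flow
spec:
  match:
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - s3-output
EOF
```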
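Installing log-generator could look like the following; the chart location is an assumption, so check the project's documentation for where the demo chart is published.

```shell
# Install the log-generator demo workload; it emits logs and carries the
# app.kubernetes.io/name=log-generator label that the Flow matches on.
# The chart reference is an assumption; verify against the project's docs.
helm upgrade --install --wait --namespace logging \
  log-generator kube-logging/log-generator
```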
Validate the deployment
Check fluentd logs (errors with AWS credentials should be visible here):
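One way to tail the fluentd logs is with a label selector; the label used below is an assumption based on common operator defaults, so adjust it if your pods are labeled differently.

```shell
# Tail the fluentd logs; authentication or bucket errors from the S3
# plugin show up here. The label selector is an assumption.
kubectl -n logging logs -f -l app.kubernetes.io/name=fluentd
```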
Check the output. The logs will be available in the bucket on a path like:
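The exact object keys depend on the output's `path` and buffer `timekey` settings. As a purely illustrative shape (not literal output), with a date-based `path` the uploaded objects would look roughly like:

```
logs/<tag>/<year>/<month>/<day>/<timestamp>_<index>.gz
```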
If you don’t get the expected result, you can find help in the troubleshooting section.