Configuring OpenShift Logging using LokiStack on ROSA and (soon) ARO
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
A guide to shipping logs on OpenShift using the new LokiStack setup. Recently, the default logging stack in OpenShift switched from Elasticsearch/Fluentd/Kibana to one based on LokiStack/Vector/OCP Console. LokiStack requires an object store in order to function, and this guide walks you through the steps required to set one up.
Overview of the components of OpenShift Cluster Logging

Prerequisites
- OpenShift CLI (oc)
- Rights to install operators on the cluster
- Access to create S3 buckets (AWS/ROSA), a Blob Storage container (Azure), or a Storage Bucket (GCP)
Setting up your environment for ROSA
- Create environment variables to use later in this process by running the following commands:
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster \
-o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export AWS_PAGER=""
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | awk -F '.' '{print $2}')
$ export LOKISTACK_BUCKET_NAME=${CLUSTER_NAME}-lokistack-storage
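Before moving on, it is worth echoing the variables to confirm none of them came back empty; any blank value will cause the aws commands below to fail in confusing ways:
echo "Region:  ${REGION}"
echo "OIDC:    ${OIDC_ENDPOINT}"
echo "Account: ${AWS_ACCOUNT_ID}"
echo "Cluster: ${CLUSTER_NAME}"
echo "Bucket:  ${LOKISTACK_BUCKET_NAME}"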
Create the relevant AWS resources for LokiStack:
- Create a bucket for the LokiStack Operator to consume
aws s3 mb --region ${REGION} s3://${LOKISTACK_BUCKET_NAME}
Create a policy document for the LokiStack Operator to consume
cat << EOF > policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LokiStorage",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::${LOKISTACK_BUCKET_NAME}",
        "arn:aws:s3:::${LOKISTACK_BUCKET_NAME}/*"
      ]
    }
  ]
}
EOF
Create the IAM Access Policy by running the following command:
POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \
  --output text iam create-policy \
  --policy-name "${CLUSTER_NAME}-lokistack-access-policy" \
  --policy-document file://policy.json)
echo $POLICY_ARN
Create the LokiStack installation (OpenShift 4.14 or higher on AWS (ROSA))
Create an IAM Role trust policy document by running the following command:
cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": ["system:serviceaccount:openshift-logging:loki"]
        }
      },
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity"
    }
  ]
}
EOF
Create an IAM Role to link the trust policy to the IAM Access Policy by running the following command:
ROLE_ARN=$(aws iam create-role --role-name "${CLUSTER_NAME}-lokistack-access-role" \
  --assume-role-policy-document file://trust-policy.json \
  --query Role.Arn --output text)
echo $ROLE_ARN
Save this ROLE_ARN for the installation of the Loki Operator later.
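Note that the create-role call above only establishes the trust relationship; it does not grant the role any S3 permissions. Assuming you want the Loki service account to use the access policy created earlier, attach it to the role:
aws iam attach-role-policy --role-name "${CLUSTER_NAME}-lokistack-access-role" \
  --policy-arn ${POLICY_ARN}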
Create the LokiStack installation (OpenShift 4.13 or lower on AWS (ROSA) and non-STS clusters)
Create an IAM user that will allow your LokiStack to access the bucket using the following command:
aws iam create-user --user-name "${CLUSTER_NAME}-lokistack-access-user"
Attach your policy to your new user using the following command:
aws iam attach-user-policy --user-name "${CLUSTER_NAME}-lokistack-access-user" --policy-arn ${POLICY_ARN}
Create an AWS Access key and Secret key for your IAM user using the following command:
AWS_KEYS=$(aws iam create-access-key --user-name "${CLUSTER_NAME}-lokistack-access-user")
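The secret portion of the key pair is only returned once, at creation time, so consider saving the JSON somewhere safe before your shell session ends (the filename here is arbitrary):
echo "${AWS_KEYS}" > lokistack-access-keys.json
chmod 600 lokistack-access-keys.json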
You are now ready to proceed to the next step.
Install the OpenShift Cluster Logging Operator
Create a namespace for the OpenShift Logging Operator
oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
Install the Loki Operator by creating the following object, specifying the Role ARN we generated above (on a non-STS cluster the ROLEARN environment variable is not used and can be omitted):
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator
  namespace: openshift-operators-redhat
spec:
  channel: "stable-5.9"
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: loki-operator.v5.9.3
  config:
    env:
      - name: ROLEARN
        value: ${ROLE_ARN}
EOF
Verify Operator Installation
oc get csv -n openshift-operators-redhat
Example Output
NAME                   DISPLAY         VERSION   REPLACES               PHASE
loki-operator.v5.9.3   Loki Operator   5.9.3     loki-operator.v5.9.2   Succeeded
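If you would rather block until the CSV reports Succeeded instead of polling manually, a jsonpath-based wait should work (this assumes a recent oc release, where jsonpath waits are supported):
oc -n openshift-operators-redhat wait csv/loki-operator.v5.9.3 \
  --for=jsonpath='{.status.phase}'=Succeeded --timeout=5m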
If you are using OpenShift 4.14 or higher on AWS (ROSA)
Create a secret for the LokiStack Operator to consume by running the following command:
oc -n openshift-logging create secret generic "logging-loki-aws" \
  --from-literal=bucketnames="${LOKISTACK_BUCKET_NAME}" \
  --from-literal=region="${REGION}" \
  --from-literal=audience="openshift" \
  --from-literal=role_arn="${ROLE_ARN}"
Create a LokiStack installation by creating the following object:
oc create -f - <<EOF
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.extra-small
  storage:
    schemas:
      - effectiveDate: '2023-10-15'
        version: v13
    secret:
      name: logging-loki-aws
      type: s3
      credentialMode: token
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
EOF
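The stack can take a few minutes to come up. One way to wait on it, assuming the LokiStack resource reports the usual Ready condition:
oc -n openshift-logging wait lokistack/logging-loki \
  --for=condition=Ready --timeout=10m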
If you are using OpenShift 4.13 or lower on AWS (ROSA), or are using a Non-STS cluster
Extract the AWS Access Key ID and Secret Access Key from the variable you created above using the following command:
AWS_ACCESS_KEY_ID=$(echo $AWS_KEYS | jq -r '.AccessKey.AccessKeyId')
AWS_SECRET_ACCESS_KEY=$(echo $AWS_KEYS | jq -r '.AccessKey.SecretAccessKey')
Create a secret for the LokiStack Operator to consume by running the following command:
oc -n openshift-logging create secret generic "logging-loki-aws" \
  --from-literal=bucketnames="${LOKISTACK_BUCKET_NAME}" \
  --from-literal=region="${REGION}" \
  --from-literal=access_key_id="${AWS_ACCESS_KEY_ID}" \
  --from-literal=access_key_secret="${AWS_SECRET_ACCESS_KEY}" \
  --from-literal=endpoint="https://s3.${REGION}.amazonaws.com"
Create a LokiStack installation by creating the following object:
oc create -f - <<EOF
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.extra-small
  storage:
    schemas:
      - effectiveDate: '2023-10-15'
        version: v13
    secret:
      name: logging-loki-aws
      type: s3
      credentialMode: static
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
EOF
Configuring LokiStack
Confirm your LokiStack is running successfully by running the following command:
oc get pods -n openshift-logging
Note: If you see pods in Pending state, confirm that you have sufficient resources in the cluster to run a LokiStack. If you are running a small cluster, try adding one or two m5.4xlarge machines to your cluster like so:
rosa create machinepool -c ${CLUSTER_NAME} --name=lokistack-mp --replicas=2 --instance-type=m5.4xlarge
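Once the machine pool is created, you can watch the new nodes register before re-checking the pods:
oc get nodes -w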
An overview of LokiStack sizing can be found here: https://docs.openshift.com/rosa/observability/logging/log_storage/installing-log-storage.html#loki-deployment-sizing_installing-log-storage
Install the Red Hat OpenShift Logging Operator by creating the following objects:
The Cluster Logging OperatorGroup
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
EOF
A Subscription object to subscribe the namespace to the Red Hat OpenShift Logging Operator
oc create -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable"
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
Verify the Operator installation; the PHASE should be Succeeded:
oc get csv -n openshift-logging
Example Output
NAME                     DISPLAY                     VERSION   REPLACES                 PHASE
cluster-logging.v5.9.3   Red Hat OpenShift Logging   5.9.3     cluster-logging.v5.9.2   Succeeded
loki-operator.v5.9.3     Loki Operator               5.9.3     loki-operator.v5.9.2     Succeeded
Create an OpenShift Logging instance, specifying the logStore:
oc create -f - <<EOF
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "lokistack"
    lokistack:
      name: logging-loki
EOF
- Here, we have configured the ClusterLogging Operator to use the existing LokiStack we created in the cluster as its log store. If you were using Elasticsearch (now deprecated) as your log store, this would point at Elasticsearch instead.
Edit your OpenShift Logging instance, adding the collection section to create vector collection pods:
oc replace -f - <<EOF
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "lokistack"
    lokistack:
      name: logging-loki
  collection:
    type: "vector"
    vector: {}
EOF
Confirm you can see collector pods starting up using the following command. There should be one per node.
oc get pods -n openshift-logging | grep collector
Example output:
collector-49qnt   1/1   Running   0   11m
collector-gvd5x   1/1   Running   0   11m
collector-qfqxs   1/1   Running   0   11m
collector-r7scm   1/1   Running   0   11m
collector-zlzpf   1/1   Running   0   11m
Edit your OpenShift Logging instance, adding the visualization section to show logs in the console:
oc replace -f - <<EOF
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "lokistack"
    lokistack:
      name: logging-loki
  collection:
    type: "vector"
    vector: {}
  visualization:
    type: "ocp-console"
    ocpConsole: {}
EOF
Ensure the Console Plugin is enabled by running the following command:
oc get consoles.operator.openshift.io cluster -o yaml | grep logging-view-plugin \
  || oc patch consoles.operator.openshift.io cluster --type=merge \
  --patch '{ "spec": { "plugins": ["logging-view-plugin"]}}'
Example output:
console.operator.openshift.io/cluster patched
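Note that the merge patch above replaces spec.plugins wholesale, so it will drop any console plugins that were already enabled. If your cluster has other plugins, a JSON patch that appends to the list is safer (this assumes spec.plugins already exists as an array):
oc patch consoles.operator.openshift.io cluster --type=json \
  --patch '[{"op": "add", "path": "/spec/plugins/-", "value": "logging-view-plugin"}]'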
Here, we have added the logging-view-plugin to allow us to view logs in the OpenShift console. You can check that the plugin pod has been created using the following command:
oc get pods -n openshift-logging | grep logging-view-plugin
Example output:
logging-view-plugin-bd5978d6d-9sc5v   1/1   Running   0   8m41s
Confirm you can see the Logging section of the console under the Observe tab.
At this point OpenShift logging is installed and configured and is ready to receive logs.
Install the ClusterLogForwarder Custom Resource
Separately from the ClusterLogging storage system, the OpenShift Cluster Logging Operator provides the ClusterLogForwarder, which allows you to describe which log types are sent where. We will now configure it to collect logs from our cluster and forward them to our log store.
Create a basic ClusterLogForwarder using the following command:
oc create -f - <<EOF
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  pipelines:
    - name: infrastructure-logs
      inputRefs:
        - infrastructure
      outputRefs:
        - default
EOF
This example selects all infrastructure logs and forwards them to "default", which is a reference to our LokiStack log store. If we go to the Console and browse to Observe -> Logs, then change the dropdown from "application" to "infrastructure", we can now see logs:
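You can also confirm the forwarder was accepted by inspecting its status conditions, which should report the pipeline as Ready:
oc -n openshift-logging get clusterlogforwarder instance -o yaml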

Adjust your ClusterLogForwarder to pick up Application logs from a specific namespace by running the following command:
oc replace -f - <<EOF
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  inputs:
    - name: openshift-dns-logs
      application:
        namespaces:
          - openshift-dns
  pipelines:
    - name: infrastructure-logs
      inputRefs:
        - infrastructure
      outputRefs:
        - default
    - name: application-logs
      inputRefs:
        - openshift-dns-logs
      outputRefs:
        - default
EOF
This example creates a new input that selects the openshift-dns namespace and forwards it to our log store. If you refresh the Logging tab and select "application" in the dropdown, you will now see your logs.
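If logs do not appear, checking the collector for errors is a reasonable first step; the collector runs as a DaemonSet named collector, so a quick way to sample one pod's output is:
oc -n openshift-logging logs daemonset/collector --tail=20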
For more examples or configuration options please see the documentation here: https://docs.openshift.com/rosa/observability/logging/log_collection_forwarding/configuring-log-forwarding.html
Cleanup
Remove the ClusterLogForwarder Instance:
oc -n openshift-logging delete clusterlogforwarder --all
Remove the ClusterLogging Instance:
oc -n openshift-logging delete clusterlogging --all
Remove the LokiStack Instance:
oc -n openshift-logging delete lokistack --all
Remove the Cluster Logging Operator:
oc -n openshift-logging delete subscription cluster-logging
oc -n openshift-logging delete csv cluster-logging.v5.9.3
Remove the LokiStack Operator:
oc -n openshift-operators-redhat delete subscription loki-operator
oc -n openshift-operators-redhat delete csv loki-operator.v5.9.3
Cleanup the openshift-logging namespace
oc delete namespace openshift-logging
Cleanup your AWS Bucket. The --force flag empties the bucket before removing it, since a non-empty bucket cannot be deleted:
aws s3 rb s3://${LOKISTACK_BUCKET_NAME} --force
Cleanup your AWS Policy. A managed policy cannot be deleted while it is still attached to a role or user, so run the detach command from the relevant section below first:
aws iam delete-policy --policy-arn ${POLICY_ARN}
If you are using OpenShift 4.14 or higher on AWS (ROSA)
Cleanup your AWS Role
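AWS will refuse to delete a role that still has managed policies attached, so detach the access policy first:
aws iam detach-role-policy --role-name "${CLUSTER_NAME}-lokistack-access-role" \
  --policy-arn ${POLICY_ARN}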
aws iam delete-role --role-name ${CLUSTER_NAME}-lokistack-access-role
If you are using OpenShift 4.13 or lower on AWS (ROSA), or are using a Non-STS cluster
Cleanup your AWS user
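Similarly, the user must have the policy detached and its access keys deleted before IAM will remove it:
aws iam detach-user-policy --user-name "${CLUSTER_NAME}-lokistack-access-user" \
  --policy-arn ${POLICY_ARN}
aws iam delete-access-key --user-name "${CLUSTER_NAME}-lokistack-access-user" \
  --access-key-id $(echo $AWS_KEYS | jq -r '.AccessKey.AccessKeyId')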
aws iam delete-user --user-name "${CLUSTER_NAME}-lokistack-access-user"