Lab: Enabling SCRAM Authentication
Task 1: Configure Broker for SCRAM
Configure the SCRAM Listener.
Use the previously configured ‘tls’ listener and enable authentication for SCRAM-SHA-512.
https://strimzi.io/docs/operators/latest/deploying#assembly-securing-access-str
CR schema references:
https://strimzi.io/docs/operators/latest/configuring#type-GenericKafkaListener-reference
Hint 1
Use
authentication:
  type: scram-sha-512
Hint 2
Update the existing ‘tls’ listener in your cluster definition to enable SCRAM authentication:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
# ...
spec:
  kafka:
    # ...
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: scram-sha-512
        configuration:
          brokerCertChainAndKey:
            secretName: cluster-1-tls-managed
            certificate: tls.crt
            key: tls.key
Hint 3
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: cluster-1
spec:
  replicas: 3
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 50Gi
        kraftMetadata: shared
        deleteClaim: false
        class: managed-csi
  resources:
    requests:
      memory: 2Gi
      cpu: "200m"
    limits:
      memory: 2Gi
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: cluster-1
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared
        deleteClaim: false
        class: managed-csi
  resources:
    requests:
      memory: 4Gi
      cpu: "500m"
    limits:
      memory: 4Gi
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: cluster-1
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafkaExporter:
    topicRegex: "^[^_].*$"
    groupRegex: "^[^_].*$"
  kafka:
    version: 4.1.0
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: scram-sha-512
        configuration:
          brokerCertChainAndKey:
            secretName: cluster-1-tls-managed
            certificate: tls.crt
            key: tls.key
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
  entityOperator:
    topicOperator: {}
    userOperator: {}
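After saving the full definition, you can apply it and wait for Strimzi to roll the brokers. This is a minimal sketch; the filename cluster-1.yaml is an assumption, and {your namespace} is your personal namespace as elsewhere in this lab:

```shell
# Apply the updated cluster definition (filename is an assumption)
kubectl apply -f cluster-1.yaml -n {your namespace}

# Wait until the operator has reconciled the change and the cluster is Ready again
kubectl wait kafka/cluster-1 --for=condition=Ready --timeout=300s -n {your namespace}
```

Rolling all brokers can take a few minutes, so a generous timeout avoids a spurious failure.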
Task 2: Create a SCRAM User
Create a simple KafkaUser CR for the user ‘client-01’.
The user is only needed for authentication, not authorization, so you can omit the “authorization” section.
https://strimzi.io/docs/operators/latest/deploying#assembly-using-the-user-operator-str
Solution
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: client-01
  labels:
    strimzi.io/cluster: cluster-1
spec:
  authentication:
    type: scram-sha-512
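Once applied, you can check that the User Operator reconciled the user and created its credentials secret. A sketch, assuming the CR above was saved as kafka-user.yaml (the filename is an assumption):

```shell
kubectl apply -f kafka-user.yaml -n {your namespace}

# The KafkaUser should report READY=True once reconciled
kubectl get kafkauser client-01 -n {your namespace}

# The User Operator creates a secret with the same name as the user
kubectl get secret client-01 -n {your namespace}
```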
Task 3: Get the SASL JAAS config
When it created the secret storing the new KafkaUser’s password, the Strimzi User Operator also added a key to the same secret containing a ready-to-use JAAS config string.
You can use the Kubernetes Dashboard to copy the content of the secret.
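If you prefer the command line over the Dashboard, the same value can be read with kubectl. The key name sasl.jaas.config is what Strimzi’s User Operator writes into the user secret; note the escaped dots in the jsonpath expression:

```shell
# Decode the ready-to-use JAAS string from the user secret
kubectl get secret client-01 -n {your namespace} \
  -o jsonpath='{.data.sasl\.jaas\.config}' | base64 -d
```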
Task 4: Create Client Configuration
Change the values within the existing ConfigMap you set up to store the client configuration.
Add these configs:
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config={enter the content of secret/client-01, key sasl.jaas.config}
security.protocol=SASL_SSL
Hint 1
Change the existing config map and also change the bootstrap server.
Add a JAAS SASL configuration containing the username and password of the KafkaUser you created.
Hint 2
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-1-client-cfg
data:
  client.properties: |
    bootstrap.servers=cluster-1-kafka-bootstrap.{your namespace}.svc:9093
    sasl.mechanism=SCRAM-SHA-512
    sasl.jaas.config={enter the content of secret/client-01, key sasl.jaas.config}
    security.protocol=SASL_SSL
    ssl.truststore.location=/run/secrets/ca/truststore.jks
    ssl.truststore.type=JKS
    ssl.truststore.password=changeit
    parse.key=true
    key.separator=:
Hint 3
Ask the trainer.
Info
After deploying the changed ConfigMap, it can take up to 1-2 minutes for the new properties file to appear in the running Debug CLI pod.
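To check whether the updated file has propagated, you can print it from inside the Debug CLI pod. The deployment name debug-cli is an assumption; substitute whatever your lab environment calls the pod:

```shell
# Show the client config as the Debug CLI pod currently sees it
kubectl exec -n {your namespace} deploy/debug-cli -- \
  cat /config/cluster-1/client.properties
```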
Task 5: Publish Some Messages
Using the new configuration file and the Debug CLI pod, publish some messages to the topic test-0.
Info
Remember to adjust the port number in the argument to --bootstrap-server to the SCRAM listener port (9093) if your previous configuration used a different listener.
Solution
Replace {your namespace} with your personal namespace.
$ kafka-console-producer \
    --bootstrap-server cluster-1-kafka-bootstrap.{your namespace}.svc:9093 \
    --producer.config /config/cluster-1/client.properties \
    --topic test-0 < /opt/samples/messages_glados_still_alive.txt
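To verify the messages actually arrived, you can read them back with the console consumer using the same client configuration (it reuses the SCRAM credentials; the timeout value is just a suggestion):

```shell
$ kafka-console-consumer \
    --bootstrap-server cluster-1-kafka-bootstrap.{your namespace}.svc:9093 \
    --consumer.config /config/cluster-1/client.properties \
    --topic test-0 --from-beginning --timeout-ms 10000
```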
