Lab: Set Up Clients
Task 1: Add Topics
Add the following topic with this configuration:
name: flight.tracker.updates.1
cleanup policy: delete
retention period: 1 hour
partitions: 6
replicas: 3
min in sync replicas: a value that still allows rolling cluster updates (one broker can be taken offline without blocking acks=all producers)
Hint 1
Use the KafkaTopic CR from Strimzi as in the last exercise. Details can also be found here.
Topic config details can be found here.
Hint 2
partitions: 6
replicas: 3
config:
  cleanup.policy: "delete"
  retention.ms: "3600000"
  min.insync.replicas: "2"
Hint 3
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: flight.tracker.updates.1
  labels:
    strimzi.io/cluster: cluster-1
spec:
  partitions: 6
  replicas: 3
  config:
    cleanup.policy: "delete"
    retention.ms: "3600000" # 1 hour = 3,600,000 ms
    min.insync.replicas: "2" # replicas - 1: one broker can be down during a rolling update
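Once the manifest is applied, the Strimzi Topic Operator should reconcile it and mark the KafkaTopic as Ready. A quick check could look like this (the file name topic.yml is just an assumption, use whatever you saved the CR as):
kubectl apply -n {your namespace} -f topic.yml
kubectl get kafkatopic flight.tracker.updates.1 -n {your namespace}
kubectl wait kafkatopic/flight.tracker.updates.1 --for=condition=Ready -n {your namespace} --timeout=60s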
Task 2: Install Database
Install a Postgres database in your namespace with the following config:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 16
  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          wal_level: "logical"
          max_replication_slots: 10
          max_wal_senders: 10
  users:
    - name: pgadmin
      databases: ["pgadmin"]
      options: "SUPERUSER"
    - name: connect
      databases: ["connect"]
      options: "SUPERUSER"
  instances:
    - name: instance1
      replicas: 1
      resources: # compute resources for the Postgres container
        requests:
          cpu: "500m"
          memory: "2Gi"
        limits:
          memory: "2Gi"
      dataVolumeClaimSpec: # storage for the data volume
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
        storageClassName: managed-csi
Hint 1
Apply the config to your namespace
Hint 2
kubectl apply -n {your namespace} -f {postgres def}.yml
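To see whether the database actually comes up, watch the pods the operator creates. A rough sketch, assuming PGO's standard cluster label and the cluster name hippo from above:
kubectl get postgrescluster hippo -n {your namespace}
kubectl get pods -n {your namespace} -l postgres-operator.crunchydata.com/cluster=hippo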
Hint 3
ask the trainer
Task 3: Install Clients
Install the workshop data generator in your namespace.
- Create a deployment with one instance.
- Use the following name: workshop-generator
- It should use the following image: krassestecontainerreistry.azurecr.io/clients/workshop-generator:latest
- It should use 512Mi RAM and 500m CPU.
- Use the following env parameters:
- name: SPRING_DATASOURCE_URL
  value: "jdbc:postgresql://hippo-pods.{YOUR NAMESPACE}.svc.cluster.local:5432/connect"
- name: SPRING_DATASOURCE_USERNAME
  valueFrom:
    secretKeyRef:
      name: hippo-pguser-connect
      key: user
- name: SPRING_DATASOURCE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: hippo-pguser-connect
      key: password
- name: SPRING_KAFKA_BOOTSTRAP_SERVERS
  value: "{YOUR CLUSTER}:9093"
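The username and password are read from the secret the Crunchy operator creates for the connect user (hippo-pguser-connect). If you want to double-check that the user and password keys are populated before wiring them in, something like this should work:
kubectl get secret hippo-pguser-connect -n {your namespace} -o jsonpath='{.data.user}' | base64 -d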
Hint 1
Start with a default deployment and work your way up from there. Here is the link to the doc.
Hint 2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {XXX}
  labels:
    app: {XXX}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {XXX}
  template:
    metadata:
      labels:
        app: {XXX}
    spec:
      containers:
        - name: {XXX}
          image: {XXX}
          imagePullPolicy: Always
          env:
            {XXX}
          resources:
            {XXX}
Hint 3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workshop-generator
  labels:
    app: workshop-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: workshop-generator
  template:
    metadata:
      labels:
        app: workshop-generator
    spec:
      containers:
        - name: workshop-generator
          image: krassestecontainerreistry.azurecr.io/clients/workshop-generator:latest
          imagePullPolicy: Always
          env:
            - name: SPRING_DATASOURCE_URL
              value: "jdbc:postgresql://hippo-pods.{YOUR NAMESPACE}.svc.cluster.local:5432/connect"
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: hippo-pguser-connect
                  key: user
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: hippo-pguser-connect
                  key: password
            - name: SPRING_KAFKA_BOOTSTRAP_SERVERS
              value: "{YOUR CLUSTER}:9093"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
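After applying the manifest, check that the generator pod starts and stays up. A rough sketch, assuming the deployment is saved as generator.yml:
kubectl apply -n {your namespace} -f generator.yml
kubectl rollout status deployment/workshop-generator -n {your namespace}
kubectl logs deployment/workshop-generator -n {your namespace} --tail=50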
Task 4: Verify the clients work
Consume the flight.tracker.updates.1 topic using the CLI tool.
Hint 1
Use the CLI from the previous exercise.
Hint 2
kafka-console-consumer --consumer.config /config/cluster-1/client-plaintext-noauth.properties --bootstrap-server {bootstrap server}.svc:9093 --topic flight.tracker.updates.1 --group cli-flight-tracker
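If the generator was already producing before you attached the consumer, you can read the topic from the start instead of only new records (this only takes effect while the cli-flight-tracker group has no committed offsets):
kafka-console-consumer --consumer.config /config/cluster-1/client-plaintext-noauth.properties --bootstrap-server {bootstrap server}.svc:9093 --topic flight.tracker.updates.1 --group cli-flight-tracker --from-beginning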
Hint 3
ask the trainer