tonglin0325's homepage

Creating a Kafka cluster on minikube

Installing a Kafka cluster on minikube breaks down into four steps.

1. Install minikube on a Mac#

The minikube installed here uses the VirtualBox driver; that is, minikube runs inside a VM started by VirtualBox.

Reference: Installing minikube on a Mac

2. Create local persistent volumes for ZooKeeper and Kafka#

Reference: Installing Kafka with Helm

ZooKeeper and Kafka both need to persist data to disk, so they depend on PVs; here we create Kubernetes local PVs. Note that if volumeBindingMode is set to WaitForFirstConsumer, a PVC is only bound to a PV once a pod is created; without a pod, the PVC stays in the Pending state.
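For comparison, here is a sketch of the same StorageClass with delayed binding, assuming you want PVCs to stay Pending until a consuming pod is scheduled (the StorageClass actually used in this post keeps Immediate; the name below is hypothetical):

```yaml
# Illustrative variant only — differs from the local-storage class used below
# solely in volumeBindingMode.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage-wffc   # hypothetical name for the delayed-binding variant
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```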

The StorageClass YAML, local-storage.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Retain

Create a StorageClass named local-storage:

kubectl apply -f ./local-storage.yaml

Enter the VirtualBox VM and create the Linux directories shown below; the minikube VM's username/password is docker/tcuser.
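The directories to create are the local PV paths referenced in the YAML below (/tmp/zookeeper/data-{0,1,2} and /tmp/kafka/data-{0,1,2}); a minimal sketch, run inside `minikube ssh`:

```shell
# Create the host paths that the local PVs below point at.
# Paths match the `local.path` fields in the PV specs of this post.
mkdir -p /tmp/zookeeper/data-0 /tmp/zookeeper/data-1 /tmp/zookeeper/data-2
mkdir -p /tmp/kafka/data-0 /tmp/kafka/data-1 /tmp/kafka/data-2
```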


Under the local-storage StorageClass, create three local PVs to store ZooKeeper's data, zookeeper-local-pv.yaml:

kubectl apply -f ./zookeeper-local-pv.yaml

In the YAML, minikube is the name of the k8s node, which you can check with kubectl get nodes.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/zookeeper/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/zookeeper/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/zookeeper/data-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube

Create the ZooKeeper PVCs:

kubectl apply -f ./zookeeper-local-pvc.yaml

zookeeper-local-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-kafka-zookeeper-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-kafka-zookeeper-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-kafka-zookeeper-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
  volumeMode: Filesystem


Under the local-storage StorageClass, create three local PVs to store Kafka's data, kafka-local-pv.yaml:

kubectl apply -f ./kafka-local-pv.yaml

In the YAML, minikube is the name of the k8s node, which you can check with kubectl get nodes.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/kafka/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/kafka/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/kafka/data-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube

Create the Kafka PVCs:

kubectl apply -f ./kafka-local-pvc.yaml

kafka-local-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-kafka-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-kafka-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-kafka-2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
  volumeMode: Filesystem

Check the PVs. Note that every PV's status is Available right after creation; once the PVCs are created, the status changes to Bound.

If a PVC is deleted, the PV's status becomes Released; at that point the PV can no longer be bound by another PVC, so you need to edit the PV and remove its claimRef field.
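Instead of kubectl edit, the stale claimRef can also be removed non-interactively with kubectl patch; a sketch, where the PV name is just an example — substitute the Released PV reported by `kubectl get pv`:

```shell
# Hypothetical PV name; replace with the actual Released PV.
PV_NAME=data-kafka-0
# JSON Patch that drops the stale claimRef so the PV returns to Available.
PATCH='[{"op": "remove", "path": "/spec/claimRef"}]'
# Only attempt the patch when kubectl is actually available.
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch pv "$PV_NAME" --type=json -p "$PATCH"
else
  echo "kubectl not found; would run: kubectl patch pv $PV_NAME --type=json -p '$PATCH'"
fi
```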

For the various PV and PVC states, see the article: Kubernetes 中 PV 和 PVC 的状态变化 (state changes of PVs and PVCs in Kubernetes)

Reference: https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

kubectl get pv -A
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS    REASON   AGE
data-kafka-zookeeper-0   5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-1   local-storage            10h
data-kafka-zookeeper-1   5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-0   local-storage            10h
data-kafka-zookeeper-2   5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-2   local-storage            10h
datadir-kafka-0          5Gi        RWO            Retain           Bound    default/data-kafka-0             local-storage            10h
datadir-kafka-1          5Gi        RWO            Retain           Bound    default/data-kafka-1             local-storage            10h
datadir-kafka-2          5Gi        RWO            Retain           Bound    default/data-kafka-2             local-storage            10h


3. Download the bitnami/kafka chart and deploy the Kafka cluster with Helm#

Add the repos:

helm repo add incubator https://charts.helm.sh/incubator
helm repo add bitnami https://charts.bitnami.com/bitnami

Download the chart:

helm fetch bitnami/kafka

The latest chart version at the time of download was kafka-15.3.2.tgz.

The latest chart version and the corresponding Kafka version can be checked on this site:

https://artifacthub.io/packages/helm/bitnami/kafka

Or use the search command to list the downloadable versions:

helm search repo bitnami/kafka -l
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/kafka 25.3.1 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.3.0 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.2.0 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.12 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.11 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.10 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.9 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.8 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.7 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.6 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.5 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.4 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.3 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.2 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.1 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.1.0 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.0.1 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 25.0.0 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.14 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.13 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.12 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.11 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.10 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.9 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.8 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.7 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.6 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.5 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.4 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.3 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.2 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.1 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 24.0.0 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.7 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.6 3.5.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.5 3.5.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.4 3.5.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.3 3.5.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.2 3.5.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.1 3.5.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 23.0.0 3.5.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.1.6 3.4.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.1.5 3.4.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.1.4 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.1.3 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.1.2 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.1.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.0.3 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.0.2 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.0.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 22.0.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.6 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.5 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.4 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.3 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.2 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.4.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.3.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.3.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.2.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.1.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.1.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.0.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 21.0.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.1.1 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.1.0 3.4.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.6 3.3.2 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.5 3.3.2 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.4 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.3 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.2 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.1 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 20.0.0 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.1.5 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.1.4 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.1.3 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.1.2 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.1.1 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.1.0 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.0.2 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.0.1 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 19.0.0 3.3.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.5.0 3.2.3 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.4.4 3.2.3 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.4.3 3.2.3 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.4.2 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.4.1 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.4.0 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.3.1 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.3.0 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.2.0 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.1.3 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.1.2 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.1.1 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.8 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.7 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.6 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.5 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.4 3.2.1 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.3 3.2.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.2 3.2.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 18.0.0 3.2.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 17.2.6 3.2.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 17.2.5 3.2.0 Apache Kafka is a distributed streaming platfor...
bitnami/kafka 17.2.3 3.2.0 Apache Kafka is a distributed streaming platfor...

To pin a specific version, use the following command:

helm fetch bitnami/kafka --version 17.2.3

Modify values.yaml. The main changes are the Kafka and ZooKeeper persistence configuration (adding the local-storage StorageClass), changing the service type to ClusterIP, and configuring the ports exposed on the host.

Reference: Exposing Kafka ports from a k8s cluster
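To make the changes concrete, here is a hedged sketch of a small override file rather than editing the full values.yaml in place; the key names follow the bitnami/kafka 15.x chart (verify against your chart version), and the 5Gi sizes match the PVs created above:

```yaml
# my-values.yaml — illustrative overrides only; key names assumed from
# the bitnami/kafka 15.x chart, double-check against your values.yaml.
persistence:
  enabled: true
  storageClass: local-storage
  size: 5Gi
service:
  type: ClusterIP
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
```

Such a file could then be passed with `helm install -f my-values.yaml` instead of editing the unpacked chart directly.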

## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass

## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: ""

## @section Common parameters

## @param kubeVersion Override Kubernetes version
##
kubeVersion: ""
## @param nameOverride String to partially override common.names.fullname
##
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname
##
fullnameOverride: ""
## @param clusterDomain Default Kubernetes cluster domain
##
clusterDomain: cluster.local
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
## Enable diagnostic mode in the statefulset
##
diagnosticMode:
  ## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
  ##
  enabled: false
  ## @param diagnosticMode.command Command to override all containers in the statefulset
  ##
  command:
    - sleep
  ## @param diagnosticMode.args Args to override all containers in the statefulset
  ##
  args:
    - infinity

## @section Kafka parameters

## Bitnami Kafka image version
## ref: https://hub.docker.com/r/bitnami/kafka/tags/
## @param image.registry Kafka image registry
## @param image.repository Kafka image repository
## @param image.tag Kafka image tag (immutable tags are recommended)
## @param image.pullPolicy Kafka image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
## @param image.debug Specify if debug values should be set
##
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 3.1.0-debian-10-r20
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## e.g:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []
  ## Set to true if you would like to see extra information on logs
  ##
  debug: false
## @param config Configuration file for Kafka. Auto-generated based on other parameters when not specified
## Specify content for server.properties
## NOTE: This will override any KAFKA_CFG_ environment variables (including those set by the chart)
## The server.properties is auto-generated based on other parameters when this parameter is not specified
## e.g:
## config: |-
##   broker.id=-1
##   listeners=PLAINTEXT://:9092
##   advertised.listeners=PLAINTEXT://KAFKA_IP:9092
##   num.network.threads=3
##   num.io.threads=8
##   socket.send.buffer.bytes=102400
##   socket.receive.buffer.bytes=102400
##   socket.request.max.bytes=104857600
##   log.dirs=/bitnami/kafka/data
##   num.partitions=1
##   num.recovery.threads.per.data.dir=1
##   offsets.topic.replication.factor=1
##   transaction.state.log.replication.factor=1
##   transaction.state.log.min.isr=1
##   log.flush.interval.messages=10000
##   log.flush.interval.ms=1000
##   log.retention.hours=168
##   log.retention.bytes=1073741824
##   log.segment.bytes=1073741824
##   log.retention.check.interval.ms=300000
##   zookeeper.connect=ZOOKEEPER_SERVICE_NAME
##   zookeeper.connection.timeout.ms=6000
##   group.initial.rebalance.delay.ms=0
##
config: ""
## @param existingConfigmap ConfigMap with Kafka Configuration
## NOTE: This will override `config` AND any KAFKA_CFG_ environment variables
##
existingConfigmap: ""
## @param log4j An optional log4j.properties file to overwrite the default of the Kafka brokers
## An optional log4j.properties file to overwrite the default of the Kafka brokers
## ref: https://github.com/apache/kafka/blob/trunk/config/log4j.properties
##
log4j: ""
## @param existingLog4jConfigMap The name of an existing ConfigMap containing a log4j.properties file
## The name of an existing ConfigMap containing a log4j.properties file
## NOTE: this will override `log4j`
##
existingLog4jConfigMap: ""
## @param heapOpts Kafka Java Heap size
##
heapOpts: -Xmx1024m -Xms1024m
## @param deleteTopicEnable Switch to enable topic deletion or not
##
deleteTopicEnable: false
## @param autoCreateTopicsEnable Switch to enable auto creation of topics. Enabling auto creation of topics not recommended for production or similar environments
##
autoCreateTopicsEnable: true
## @param logFlushIntervalMessages The number of messages to accept before forcing a flush of data to disk
##
logFlushIntervalMessages: _10000
## @param logFlushIntervalMs The maximum amount of time a message can sit in a log before we force a flush
##
logFlushIntervalMs: 1000
## @param logRetentionBytes A size-based retention policy for logs
##
logRetentionBytes: _1073741824
## @param logRetentionCheckIntervalMs The interval at which log segments are checked to see if they can be deleted
##
logRetentionCheckIntervalMs: 300000
## @param logRetentionHours The minimum age of a log file to be eligible for deletion due to age
##
logRetentionHours: 168
## @param logSegmentBytes The maximum size of a log segment file. When this size is reached a new log segment will be created
##
logSegmentBytes: _1073741824
## @param logsDirs A comma separated list of directories under which to store log files
##
logsDirs: /bitnami/kafka/data
## @param maxMessageBytes The largest record batch size allowed by Kafka
##
maxMessageBytes: _1000012
## @param defaultReplicationFactor Default replication factors for automatically created topics
##
defaultReplicationFactor: 1
## @param offsetsTopicReplicationFactor The replication factor for the offsets topic
##
offsetsTopicReplicationFactor: 1
## @param transactionStateLogReplicationFactor The replication factor for the transaction topic
##
transactionStateLogReplicationFactor: 1
## @param transactionStateLogMinIsr Overridden min.insync.replicas config for the transaction topic
##
transactionStateLogMinIsr: 1
## @param numIoThreads The number of threads doing disk I/O
##
numIoThreads: 8
## @param numNetworkThreads The number of threads handling network requests
##
numNetworkThreads: 3
## @param numPartitions The default number of log partitions per topic
##
numPartitions: 1
## @param numRecoveryThreadsPerDataDir The number of threads per data directory to be used for log recovery at startup and flushing at shutdown
##
numRecoveryThreadsPerDataDir: 1
## @param socketReceiveBufferBytes The receive buffer (SO_RCVBUF) used by the socket server
##
socketReceiveBufferBytes: 102400
## @param socketRequestMaxBytes The maximum size of a request that the socket server will accept (protection against OOM)
##
socketRequestMaxBytes: _104857600
## @param socketSendBufferBytes The send buffer (SO_SNDBUF) used by the socket server
##
socketSendBufferBytes: 102400
## @param zookeeperConnectionTimeoutMs Timeout in ms for connecting to ZooKeeper
##
zookeeperConnectionTimeoutMs: 6000
## @param zookeeperChrootPath Path which puts data under some path in the global ZooKeeper namespace
## ref: https://kafka.apache.org/documentation/#brokerconfigs_zookeeper.connect
##
zookeeperChrootPath: ""
## @param authorizerClassName The Authorizer is configured by setting authorizer.class.name=kafka.security.authorizer.AclAuthorizer in server.properties
##
authorizerClassName: ""
## @param allowEveryoneIfNoAclFound By default, if a resource has no associated ACLs, then no one is allowed to access that resource except super users
##
allowEveryoneIfNoAclFound: true
## @param superUsers You can add super users in server.properties
##
superUsers: User:admin
## Authentication parameters
## https://github.com/bitnami/bitnami-docker-kafka#security
##
auth:
## Authentication protocol for client and inter-broker communications
## This table shows the security provided on each protocol:
## | Method | Authentication | Encryption via TLS |
## | plaintext | None | No |
## | tls | None | Yes |
## | mtls | Yes (two-way authentication) | Yes |
## | sasl | Yes (via SASL) | No |
## | sasl_tls | Yes (via SASL) | Yes |
## @param auth.clientProtocol Authentication protocol for communications with clients. Allowed protocols: `plaintext`, `tls`, `mtls`, `sasl` and `sasl_tls`
## @param auth.externalClientProtocol Authentication protocol for communications with external clients. Defaults to value of `auth.clientProtocol`. Allowed protocols: `plaintext`, `tls`, `mtls`, `sasl` and `sasl_tls`
## @param auth.interBrokerProtocol Authentication protocol for inter-broker communications. Allowed protocols: `plaintext`, `tls`, `mtls`, `sasl` and `sasl_tls`
##
clientProtocol: plaintext
# Note: empty by default for backwards compatibility reasons, find more information at
# https://github.com/bitnami/charts/pull/8902/
externalClientProtocol: ""
interBrokerProtocol: plaintext
## SASL configuration
##
sasl:
## @param auth.sasl.mechanisms SASL mechanisms when either `auth.interBrokerProtocol`, `auth.clientProtocol` or `auth.externalClientProtocol` are `sasl`. Allowed types: `plain`, `scram-sha-256`, `scram-sha-512`
##
mechanisms: plain,scram-sha-256,scram-sha-512
## @param auth.sasl.interBrokerMechanism SASL mechanism for inter broker communication.
##
interBrokerMechanism: plain
## JAAS configuration for SASL authentication.
##
jaas:
## @param auth.sasl.jaas.clientUsers Kafka client user list
##
## clientUsers:
## - user1
## - user2
##
clientUsers:
- user
## @param auth.sasl.jaas.clientPasswords Kafka client passwords. This is mandatory if more than one user is specified in clientUsers
##
## clientPasswords:
## - password1
## - password2
##
clientPasswords: []
## @param auth.sasl.jaas.interBrokerUser Kafka inter broker communication user for SASL authentication
##
interBrokerUser: admin
## @param auth.sasl.jaas.interBrokerPassword Kafka inter broker communication password for SASL authentication
##
interBrokerPassword: ""
## @param auth.sasl.jaas.zookeeperUser Kafka ZooKeeper user for SASL authentication
##
zookeeperUser: ""
## @param auth.sasl.jaas.zookeeperPassword Kafka ZooKeeper password for SASL authentication
##
zookeeperPassword: ""
## @param auth.sasl.jaas.existingSecret Name of the existing secret containing credentials for clientUsers, interBrokerUser and zookeeperUser
## Create this secret by running the command below, where SECRET_NAME is the name of the secret you want to create:
## kubectl create secret generic SECRET_NAME --from-literal=client-passwords=CLIENT_PASSWORD1,CLIENT_PASSWORD2 --from-literal=inter-broker-password=INTER_BROKER_PASSWORD --from-literal=zookeeper-password=ZOOKEEPER_PASSWORD
##
existingSecret: ""
## TLS configuration
##
tls:
## @param auth.tls.type Format to use for TLS certificates. Allowed types: `jks` and `pem`
##
type: jks
## @param auth.tls.existingSecrets Array existing secrets containing the TLS certificates for the Kafka brokers
## When using 'jks' format for certificates, each secret should contain a truststore and a keystore.
## Create these secrets following the steps below:
## 1) Generate your truststore and keystore files. Helpful script: https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh
## 2) Rename your truststore to `kafka.truststore.jks`.
## 3) Rename your keystores to `kafka-X.keystore.jks` where X is the ID of each Kafka broker.
## 4) Run the command below one time per broker to create its associated secret (SECRET_NAME_X is the name of the secret you want to create):
## kubectl create secret generic SECRET_NAME_0 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-0.keystore.jks
## kubectl create secret generic SECRET_NAME_1 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-1.keystore.jks
## ...
##
## When using 'pem' format for certificates, each secret should contain a public CA certificate, a public certificate and one private key.
## Create these secrets following the steps below:
## 1) Create a certificate key and signing request per Kafka broker, and sign the signing request with your CA
## 2) Rename your CA file to `kafka.ca.crt`.
## 3) Rename your certificates to `kafka-X.tls.crt` where X is the ID of each Kafka broker.
## 4) Rename your keys to `kafka-X.tls.key` where X is the ID of each Kafka broker.
## 5) Run the command below one time per broker to create its associated secret (SECRET_NAME_X is the name of the secret you want to create):
## kubectl create secret generic SECRET_NAME_0 --from-file=ca.crt=./kafka.ca.crt --from-file=tls.crt=./kafka-0.tls.crt --from-file=tls.key=./kafka-0.tls.key
## kubectl create secret generic SECRET_NAME_1 --from-file=ca.crt=./kafka.ca.crt --from-file=tls.crt=./kafka-1.tls.crt --from-file=tls.key=./kafka-1.tls.key
## ...
##
existingSecrets: []
## @param auth.tls.autoGenerated Automatically generate self-signed TLS certificates for Kafka brokers. Currently only supported if `auth.tls.type` is `pem`
## Note: ignored when using 'jks' format or `auth.tls.existingSecrets` is not empty
##
autoGenerated: false
## @param auth.tls.password Password to access the JKS files or PEM key when they are password-protected.
## Note: ignored when using 'existingSecret'.
##
password: ""
## @param auth.tls.existingSecret Name of the secret containing the password to access the JKS files or PEM key when they are password-protected. (`key`: `password`)
##
existingSecret: ""
## @param auth.tls.jksTruststoreSecret Name of an existing secret containing your truststore, if it is missing from or different from the ones in `auth.tls.existingSecrets`
## Note: ignored when using 'pem' format for certificates.
##
jksTruststoreSecret: ""
## @param auth.tls.jksKeystoreSAN The secret key from the `auth.tls.existingSecrets` containing the keystore with a SAN certificate
## The SAN certificate in it should be issued with Subject Alternative Names for all headless services:
## - kafka-0.kafka-headless.kafka.svc.cluster.local
## - kafka-1.kafka-headless.kafka.svc.cluster.local
## - kafka-2.kafka-headless.kafka.svc.cluster.local
## Note: ignored when using 'pem' format for certificates.
##
jksKeystoreSAN: ""
## @param auth.tls.jksTruststore The secret key from the `auth.tls.existingSecrets` or `auth.tls.jksTruststoreSecret` containing the truststore
## Note: ignored when using 'pem' format for certificates.
##
jksTruststore: ""
## @param auth.tls.endpointIdentificationAlgorithm The endpoint identification algorithm to validate server hostname using server certificate
## Disable server host name verification by setting it to an empty string.
## ref: https://docs.confluent.io/current/kafka/authentication_ssl.html#optional-settings
##
endpointIdentificationAlgorithm: https
## @param listeners The address(es) the socket server listens on. Auto-calculated when set to an empty array
## When it's set to an empty array, the listeners will be configured
## based on the authentication protocols (auth.clientProtocol, auth.externalClientProtocol and auth.interBrokerProtocol parameters)
##
listeners: []
## @param advertisedListeners The address(es) (hostname:port) the broker will advertise to producers and consumers. Auto-calculated when set to an empty array
## When it's set to an empty array, the advertised listeners will be configured
## based on the authentication protocols (auth.clientProtocol, auth.externalClientProtocol and auth.interBrokerProtocol parameters)
##
advertisedListeners: []
## @param listenerSecurityProtocolMap The protocol->listener mapping. Auto-calculated when set to nil
## When it's nil, the listeners will be configured based on the authentication protocols (auth.clientProtocol, auth.externalClientProtocol and auth.interBrokerProtocol parameters)
##
listenerSecurityProtocolMap: ""
## @param allowPlaintextListener Allow use of the PLAINTEXT listener
##
allowPlaintextListener: true
## @param interBrokerListenerName The listener that the brokers should communicate on
##
interBrokerListenerName: INTERNAL
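## NOTE (illustration, assuming the default protocols above are left as
## `plaintext` and external access is enabled): the chart is expected to render
## a listener configuration equivalent to:
## listeners=INTERNAL://:9093,CLIENT://:9092,EXTERNAL://:9094
## listenerSecurityProtocolMap=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT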
## @param command Override Kafka container command
##
command:
- /scripts/setup.sh
## @param args Override Kafka container arguments
##
args: []
## @param extraEnvVars Extra environment variables to add to Kafka pods
## ref: https://github.com/bitnami/bitnami-docker-kafka#configuration
## e.g:
## extraEnvVars:
## - name: KAFKA_CFG_BACKGROUND_THREADS
## value: "10"
##
extraEnvVars: []
## @param extraEnvVarsCM ConfigMap with extra environment variables
##
extraEnvVarsCM: ""
## @param extraEnvVarsSecret Secret with extra environment variables
##
extraEnvVarsSecret: ""

## @section Statefulset parameters

## @param replicaCount Number of Kafka nodes
##
replicaCount: 3
## @param minBrokerId Minimal broker.id value, nodes increment their `broker.id` respectively
## Brokers increment their ID starting at this minimal value.
## E.g., with `minBrokerId=100` and 3 nodes, IDs will be 100, 101, 102 for brokers 0, 1, and 2, respectively.
##
minBrokerId: 0
## @param containerPorts.client Kafka client container port
## @param containerPorts.internal Kafka inter-broker container port
## @param containerPorts.external Kafka external container port
##
containerPorts:
client: 9092
internal: 9093
external: 9094
## Configure extra options for Kafka containers' liveness, readiness and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
## @param livenessProbe.enabled Enable livenessProbe on Kafka containers
## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param livenessProbe.periodSeconds Period seconds for livenessProbe
## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
## @param readinessProbe.enabled Enable readinessProbe on Kafka containers
## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param readinessProbe.periodSeconds Period seconds for readinessProbe
## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
enabled: true
initialDelaySeconds: 5
failureThreshold: 6
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
## @param startupProbe.enabled Enable startupProbe on Kafka containers
## @param startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
## @param startupProbe.periodSeconds Period seconds for startupProbe
## @param startupProbe.timeoutSeconds Timeout seconds for startupProbe
## @param startupProbe.failureThreshold Failure threshold for startupProbe
## @param startupProbe.successThreshold Success threshold for startupProbe
##
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
failureThreshold: 15
successThreshold: 1
## @param customLivenessProbe Custom livenessProbe that overrides the default one
##
customLivenessProbe: {}
## @param customReadinessProbe Custom readinessProbe that overrides the default one
##
customReadinessProbe: {}
## @param customStartupProbe Custom startupProbe that overrides the default one
##
customStartupProbe: {}
## @param lifecycleHooks lifecycleHooks for the Kafka container to automate configuration before or after startup
##
lifecycleHooks: {}
## Kafka resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param resources.limits The resources limits for the container
## @param resources.requests The requested resources for the container
##
resources:
limits: {}
requests: {}
## Kafka pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enable security context for the pods
## @param podSecurityContext.fsGroup Set Kafka pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## Kafka containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enable Kafka containers' Security Context
## @param containerSecurityContext.runAsUser Set Kafka containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Kafka containers' Security Context runAsNonRoot
## e.g:
## containerSecurityContext:
## enabled: true
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## @param hostAliases Kafka pods host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param hostNetwork Specify if host network should be enabled for Kafka pods
##
hostNetwork: false
## @param hostIPC Specify if host IPC should be enabled for Kafka pods
##
hostIPC: false
## @param podLabels Extra labels for Kafka pods
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param podAnnotations Extra annotations for Kafka pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
##
type: ""
## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## @param affinity Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param tolerations Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param topologySpreadConstraints Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
topologySpreadConstraints: {}
## @param terminationGracePeriodSeconds Seconds the pod needs to gracefully terminate
## ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution
##
terminationGracePeriodSeconds: ""
## @param podManagementPolicy The StatefulSet controller supports relaxing its ordering guarantees while preserving its uniqueness and identity guarantees. There are two valid pod management policies: OrderedReady and Parallel
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
##
podManagementPolicy: Parallel
## @param priorityClassName Name of the existing priority class to be used by kafka pods
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## @param schedulerName Name of the k8s scheduler (other than default)
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param updateStrategy.type Kafka statefulset strategy type
## @param updateStrategy.rollingUpdate Kafka statefulset rolling update configuration parameters
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategy:
type: RollingUpdate
rollingUpdate: {}
## @param extraVolumes Optionally specify extra list of additional volumes for the Kafka pod(s)
## e.g:
## extraVolumes:
## - name: kafka-jaas
## secret:
## secretName: kafka-jaas
##
extraVolumes: []
## @param extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Kafka container(s)
## extraVolumeMounts:
## - name: kafka-jaas
## mountPath: /bitnami/kafka/config/kafka_jaas.conf
## subPath: kafka_jaas.conf
##
extraVolumeMounts: []
## @param sidecars Add additional sidecar containers to the Kafka pod(s)
## e.g:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @param initContainers Add additional init containers to the Kafka pod(s)
## e.g:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## Kafka Pod Disruption Budget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
## @param pdb.create Deploy a pdb object for the Kafka pod
## @param pdb.minAvailable Minimum number/percentage of available Kafka replicas
## @param pdb.maxUnavailable Maximum number/percentage of unavailable Kafka replicas
##
pdb:
create: false
minAvailable: ""
maxUnavailable: 1

## @section Traffic Exposure parameters

## Service parameters
##
service:
## @param service.type Kubernetes Service type
##
type: ClusterIP
## @param service.ports.client Kafka svc port for client connections
## @param service.ports.internal Kafka svc port for inter-broker connections
## @param service.ports.external Kafka svc port for external connections
##
ports:
client: 9092
internal: 9093
external: 9094
## @param service.nodePorts.client Node port for the Kafka client connections
## @param service.nodePorts.external Node port for the Kafka external connections
## NOTE: choose port between <30000-32767>
##
nodePorts:
client: ""
external: ""
## @param service.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param service.clusterIP Kafka service Cluster IP
## e.g.:
## clusterIP: None
##
clusterIP: ""
## @param service.loadBalancerIP Kafka service Load Balancer IP
## ref: https://kubernetes.io/docs/user-guide/services/#type-loadbalancer
##
loadBalancerIP: ""
## @param service.loadBalancerSourceRanges Kafka service Load Balancer sources
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## @param service.externalTrafficPolicy Kafka service external traffic policy
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: Cluster
## @param service.annotations Additional custom annotations for Kafka service
##
annotations: {}
## @param service.extraPorts Extra ports to expose in the Kafka service (normally used with the `sidecar` value)
##
extraPorts: []
## External Access to Kafka brokers configuration
##
externalAccess:
## @param externalAccess.enabled Enable Kubernetes external cluster access to Kafka brokers
##
enabled: true
## External IPs auto-discovery configuration
## An init container is used to auto-detect LB IPs or node ports by querying the K8s API
## Note: RBAC might be required
##
autoDiscovery:
## @param externalAccess.autoDiscovery.enabled Enable using an init container to auto-detect external IPs/ports by querying the K8s API
##
enabled: false
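## NOTE: enabling auto-discovery is expected to also require `rbac.create=true`
## (see the Other Parameters section below) so that the init container may
## query the K8s API. For example, when installing with --set overrides:
## helm install kafka bitnami/kafka \
##   --set externalAccess.enabled=true \
##   --set externalAccess.autoDiscovery.enabled=true \
##   --set rbac.create=true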
## Bitnami Kubectl image
## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
## @param externalAccess.autoDiscovery.image.registry Init container auto-discovery image registry
## @param externalAccess.autoDiscovery.image.repository Init container auto-discovery image repository
## @param externalAccess.autoDiscovery.image.tag Init container auto-discovery image tag (immutable tags are recommended)
## @param externalAccess.autoDiscovery.image.pullPolicy Init container auto-discovery image pull policy
## @param externalAccess.autoDiscovery.image.pullSecrets Init container auto-discovery image pull secrets
##
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.23.3-debian-10-r19
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param externalAccess.autoDiscovery.resources.limits The resources limits for the auto-discovery init container
## @param externalAccess.autoDiscovery.resources.requests The requested resources for the auto-discovery init container
##
resources:
limits: {}
requests: {}
## Parameters to configure K8s service(s) used to externally access Kafka brokers
## Note: A new service per broker will be created
##
service:
## @param externalAccess.service.type Kubernetes Service type for external access. It can be NodePort or LoadBalancer
##
type: NodePort
## @param externalAccess.service.ports.external Kafka port used for external access when service type is LoadBalancer
##
ports:
external: 9094
## @param externalAccess.service.loadBalancerIPs Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount
## e.g:
## loadBalancerIPs:
## - X.X.X.X
## - Y.Y.Y.Y
##
loadBalancerIPs: []
## @param externalAccess.service.loadBalancerNames Array of load balancer Names for each Kafka broker. Length must be the same as replicaCount
## e.g:
## loadBalancerNames:
## - broker1.external.example.com
## - broker2.external.example.com
##
loadBalancerNames: []
## @param externalAccess.service.loadBalancerAnnotations Array of load balancer annotations for each Kafka broker. Length must be the same as replicaCount
## e.g:
## loadBalancerAnnotations:
## - external-dns.alpha.kubernetes.io/hostname: broker1.external.example.com.
## - external-dns.alpha.kubernetes.io/hostname: broker2.external.example.com.
##
loadBalancerAnnotations: []
## @param externalAccess.service.loadBalancerSourceRanges Address(es) that are allowed when service is LoadBalancer
## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## @param externalAccess.service.nodePorts Array of node ports used for each Kafka broker. Length must be the same as replicaCount
## e.g:
## nodePorts:
## - 30001
## - 30002
##
nodePorts: [30001,30002,30003]
## @param externalAccess.service.useHostIPs Use service host IPs to configure Kafka external listener when service type is NodePort
##
useHostIPs: false
## @param externalAccess.service.usePodIPs Use the MY_POD_IP address to configure the Kafka external listener
##
usePodIPs: false
## @param externalAccess.service.domain Domain or external IP used to configure the Kafka external listener when service type is NodePort
## If not specified, the container will try to get the Kubernetes node's external IP
##
domain: "stage-kafka.info"
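## NOTE (illustration for the values used in this post): with replicaCount=3,
## domain=stage-kafka.info and nodePorts=[30001,30002,30003], each broker
## should advertise an external listener along the lines of:
## kafka-0 -> EXTERNAL://stage-kafka.info:30001
## kafka-1 -> EXTERNAL://stage-kafka.info:30002
## kafka-2 -> EXTERNAL://stage-kafka.info:30003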
## @param externalAccess.service.annotations Service annotations for external access
##
annotations: {}
## @param externalAccess.service.extraPorts Extra ports to expose in the Kafka external service
##
extraPorts: []
## Network policies
## Ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
## @param networkPolicy.enabled Specifies whether a NetworkPolicy should be created
##
enabled: false
## @param networkPolicy.allowExternal Don't require client label for connections
## When set to false, only pods with the correct client label will have network access to the port Kafka is
## listening on. When true, Kafka accepts connections from any source (with the correct destination port).
##
allowExternal: true
## @param networkPolicy.explicitNamespacesSelector A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed
## If explicitNamespacesSelector is missing or set to {}, only client pods that are in the NetworkPolicy's
## namespace and that match the other criteria (i.e. carry the correct label) can reach Kafka.
## To make Kafka reachable from clients in other namespaces, use this LabelSelector to select those
## namespaces; note that the NetworkPolicy's own namespace should also be explicitly included.
##
## e.g:
## explicitNamespacesSelector:
## matchLabels:
## role: frontend
## matchExpressions:
## - {key: role, operator: In, values: [frontend]}
##
explicitNamespacesSelector: {}
## @param networkPolicy.externalAccess.from customize the from section for External Access on tcp-external port
## e.g:
## - ipBlock:
## cidr: 172.9.0.0/16
## except:
## - 172.9.1.0/24
##
externalAccess:
from: []
## @param networkPolicy.egressRules.customRules [object] Custom network policy rule
##
egressRules:
## Additional custom egress rules
## e.g:
## customRules:
## - to:
## - namespaceSelector:
## matchLabels:
## label: example
customRules: []

## @section Persistence parameters

## Enable persistence using Persistent Volume Claims
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
## @param persistence.enabled Enable Kafka data persistence using PVC, note that ZooKeeper persistence is unaffected
##
enabled: true
## @param persistence.existingClaim A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
## The value is evaluated as a template
##
existingClaim: ""
## @param persistence.storageClass PVC Storage Class for Kafka data volume
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
storageClass: local-storage
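## NOTE (ties into the local PVs created earlier in this post): with
## storageClass=local-storage, the StatefulSet's PVCs will only bind to
## matching pre-created local PVs. The PVC names are assumed to follow
## data-<release name>-<ordinal>, e.g. for a release named "kafka":
## data-kafka-0, data-kafka-1, data-kafka-2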
## @param persistence.accessModes Persistent Volume Access Modes
##
accessModes:
- ReadWriteOnce
## @param persistence.size PVC Storage Request for Kafka data volume
##
size: 5Gi
## @param persistence.annotations Annotations for the PVC
##
annotations: {}
## @param persistence.selector Selector to match an existing Persistent Volume for Kafka data PVC. If set, the PVC can't have a PV dynamically provisioned for it
## selector:
## matchLabels:
## app: my-app
##
selector: {}
## @param persistence.mountPath Mount path of the Kafka data volume
##
mountPath: /bitnami/kafka
## Log Persistence parameters
##
logPersistence:
## @param logPersistence.enabled Enable Kafka logs persistence using PVC, note that ZooKeeper persistence is unaffected
##
enabled: false
## @param logPersistence.existingClaim A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
## The value is evaluated as a template
##
existingClaim: ""
## @param logPersistence.storageClass PVC Storage Class for Kafka logs volume
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
storageClass: ""
## @param logPersistence.accessModes Persistent Volume Access Modes
##
accessModes:
- ReadWriteOnce
## @param logPersistence.size PVC Storage Request for Kafka logs volume
##
size: 5Gi
## @param logPersistence.annotations Annotations for the PVC
##
annotations: {}
## @param logPersistence.selector Selector to match an existing Persistent Volume for Kafka log data PVC. If set, the PVC can't have a PV dynamically provisioned for it
## selector:
## matchLabels:
## app: my-app
##
selector: {}
## @param logPersistence.mountPath Mount path of the Kafka logs volume
##
mountPath: /opt/bitnami/kafka/logs

## @section Volume Permissions parameters
##

## Init containers parameters:
## volumePermissions: Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each node
##
volumePermissions:
## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume
##
enabled: false
## @param volumePermissions.image.registry Init container volume-permissions image registry
## @param volumePermissions.image.repository Init container volume-permissions image repository
## @param volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended)
## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
## @param volumePermissions.image.pullSecrets Init container volume-permissions image pull secrets
##
image:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 10-debian-10-r339
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init container resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param volumePermissions.resources.limits Init container volume-permissions resource limits
## @param volumePermissions.resources.requests Init container volume-permissions resource requests
##
resources:
limits: {}
requests: {}
## Init container's Security Context
## Note: the chown of the data folder is done to containerSecurityContext.runAsUser
## and not the below volumePermissions.containerSecurityContext.runAsUser
## @param volumePermissions.containerSecurityContext.runAsUser User ID for the init container
##
containerSecurityContext:
runAsUser: 0

## @section Other Parameters

## ServiceAccount for Kafka
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## @param serviceAccount.create Enable creation of ServiceAccount for Kafka pods
##
create: true
## @param serviceAccount.name The name of the service account to use. If not set and `create` is `true`, a name is generated
## If not set and create is true, a name is generated using the kafka.serviceAccountName template
##
name: ""
## @param serviceAccount.automountServiceAccountToken Allows auto mount of ServiceAccountToken on the serviceAccount created
## Can be set to false if pods using this serviceAccount do not need to use K8s API
##
automountServiceAccountToken: true
## @param serviceAccount.annotations Additional custom annotations for the ServiceAccount
##
annotations: {}
## Role Based Access Control
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
## @param rbac.create Whether to create & use RBAC resources or not
## binding Kafka ServiceAccount to a role
## that allows Kafka pods querying the K8s API
##
create: false

## @section Metrics parameters

## Prometheus Exporters / Metrics
##
metrics:
## Prometheus Kafka exporter: exposes complementary metrics to JMX exporter
##
kafka:
## @param metrics.kafka.enabled Whether or not to create a standalone Kafka exporter to expose Kafka metrics
##
enabled: false
## Bitnami Kafka exporter image
## ref: https://hub.docker.com/r/bitnami/kafka-exporter/tags/
## @param metrics.kafka.image.registry Kafka exporter image registry
## @param metrics.kafka.image.repository Kafka exporter image repository
## @param metrics.kafka.image.tag Kafka exporter image tag (immutable tags are recommended)
## @param metrics.kafka.image.pullPolicy Kafka exporter image pull policy
## @param metrics.kafka.image.pullSecrets Specify docker-registry secret names as an array
##
image:
registry: docker.io
repository: bitnami/kafka-exporter
tag: 1.4.2-debian-10-r147
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []

## @param metrics.kafka.certificatesSecret Name of the existing secret containing the optional certificate and key files
## for Kafka exporter client authentication
##
certificatesSecret: ""
## @param metrics.kafka.tlsCert The secret key from the certificatesSecret if 'client-cert' key different from the default (cert-file)
##
tlsCert: cert-file
## @param metrics.kafka.tlsKey The secret key from the certificatesSecret if 'client-key' key different from the default (key-file)
##
tlsKey: key-file
## @param metrics.kafka.tlsCaSecret Name of the existing secret containing the optional ca certificate for Kafka exporter client authentication
##
tlsCaSecret: ""
## @param metrics.kafka.tlsCaCert The secret key from the certificatesSecret or tlsCaSecret if 'ca-cert' key different from the default (ca-file)
##
tlsCaCert: ca-file
## @param metrics.kafka.extraFlags Extra flags to be passed to Kafka exporter
## e.g:
## extraFlags:
## tls.insecure-skip-tls-verify: ""
## web.telemetry-path: "/metrics"
##
extraFlags: {}
## @param metrics.kafka.command Override Kafka exporter container command
##
command: []
## @param metrics.kafka.args Override Kafka exporter container arguments
##
args: []
## @param metrics.kafka.containerPorts.metrics Kafka exporter metrics container port
##
containerPorts:
metrics: 9308
## Kafka exporter resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param metrics.kafka.resources.limits The resources limits for the container
## @param metrics.kafka.resources.requests The requested resources for the container
##
resources:
limits: {}
requests: {}
## Kafka exporter pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param metrics.kafka.podSecurityContext.enabled Enable security context for the pods
## @param metrics.kafka.podSecurityContext.fsGroup Set Kafka exporter pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## Kafka exporter containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param metrics.kafka.containerSecurityContext.enabled Enable Kafka exporter containers' Security Context
## @param metrics.kafka.containerSecurityContext.runAsUser Set Kafka exporter containers' Security Context runAsUser
## @param metrics.kafka.containerSecurityContext.runAsNonRoot Set Kafka exporter containers' Security Context runAsNonRoot
## e.g:
## containerSecurityContext:
## enabled: true
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## @param metrics.kafka.hostAliases Kafka exporter pods host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param metrics.kafka.podLabels Extra labels for Kafka exporter pods
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param metrics.kafka.podAnnotations Extra annotations for Kafka exporter pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param metrics.kafka.podAffinityPreset Pod affinity preset. Ignored if `metrics.kafka.affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param metrics.kafka.podAntiAffinityPreset Pod anti-affinity preset. Ignored if `metrics.kafka.affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node metrics.kafka.affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
## @param metrics.kafka.nodeAffinityPreset.type Node affinity preset type. Ignored if `metrics.kafka.affinity` is set. Allowed values: `soft` or `hard`
##
type: ""
## @param metrics.kafka.nodeAffinityPreset.key Node label key to match. Ignored if `metrics.kafka.affinity` is set.
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## @param metrics.kafka.nodeAffinityPreset.values Node label values to match. Ignored if `metrics.kafka.affinity` is set.
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## @param metrics.kafka.affinity Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: metrics.kafka.podAffinityPreset, metrics.kafka.podAntiAffinityPreset, and metrics.kafka.nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param metrics.kafka.nodeSelector Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param metrics.kafka.tolerations Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param metrics.kafka.schedulerName Name of the k8s scheduler (other than default) for Kafka exporter
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param metrics.kafka.extraVolumes Optionally specify extra list of additional volumes for the Kafka exporter pod(s)
## e.g:
## extraVolumes:
## - name: kafka-jaas
## secret:
## secretName: kafka-jaas
##
extraVolumes: []
## @param metrics.kafka.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Kafka exporter container(s)
## extraVolumeMounts:
## - name: kafka-jaas
## mountPath: /bitnami/kafka/config/kafka_jaas.conf
## subPath: kafka_jaas.conf
##
extraVolumeMounts: []
## @param metrics.kafka.sidecars Add additional sidecar containers to the Kafka exporter pod(s)
## e.g:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @param metrics.kafka.initContainers Add init containers to the Kafka exporter pods
## e.g:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []
## Kafka exporter service configuration
##
service:
## @param metrics.kafka.service.ports.metrics Kafka exporter metrics service port
##
ports:
metrics: 9308
## @param metrics.kafka.service.clusterIP Static clusterIP or None for headless services
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
##
clusterIP: ""
## @param metrics.kafka.service.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param metrics.kafka.service.annotations [object] Annotations for the Kafka exporter service
##
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.kafka.service.ports.metrics }}"
prometheus.io/path: "/metrics"
## Kafka exporter pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## @param metrics.kafka.serviceAccount.create Enable creation of ServiceAccount for Kafka exporter pods
##
create: true
## @param metrics.kafka.serviceAccount.name The name of the service account to use. If not set and `create` is `true`, a name is generated
## If not set and create is true, a name is generated using the kafka.metrics.kafka.serviceAccountName template
##
name: ""
## @param metrics.kafka.serviceAccount.automountServiceAccountToken Allows auto mount of ServiceAccountToken on the serviceAccount created
## Can be set to false if pods using this serviceAccount do not need to use K8s API
##
automountServiceAccountToken: true
## Prometheus JMX exporter: exposes the majority of Kafka's metrics
##
jmx:
## @param metrics.jmx.enabled Whether or not to expose JMX metrics to Prometheus
##
enabled: false
## Bitnami JMX exporter image
## ref: https://hub.docker.com/r/bitnami/jmx-exporter/tags/
## @param metrics.jmx.image.registry JMX exporter image registry
## @param metrics.jmx.image.repository JMX exporter image repository
## @param metrics.jmx.image.tag JMX exporter image tag (immutable tags are recommended)
## @param metrics.jmx.image.pullPolicy JMX exporter image pull policy
## @param metrics.jmx.image.pullSecrets Specify docker-registry secret names as an array
##
image:
registry: docker.io
repository: bitnami/jmx-exporter
tag: 0.16.1-debian-10-r208
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## e.g:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Prometheus JMX exporter containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param metrics.jmx.containerSecurityContext.enabled Enable Prometheus JMX exporter containers' Security Context
## @param metrics.jmx.containerSecurityContext.runAsUser Set Prometheus JMX exporter containers' Security Context runAsUser
## @param metrics.jmx.containerSecurityContext.runAsNonRoot Set Prometheus JMX exporter containers' Security Context runAsNonRoot
## e.g:
## containerSecurityContext:
## enabled: true
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## @param metrics.jmx.containerPorts.metrics Prometheus JMX exporter metrics container port
##
containerPorts:
metrics: 5556
## Prometheus JMX exporter resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param metrics.jmx.resources.limits The resources limits for the JMX exporter container
## @param metrics.jmx.resources.requests The requested resources for the JMX exporter container
##
resources:
limits: {}
requests: {}
## Prometheus JMX exporter service configuration
##
service:
## @param metrics.jmx.service.ports.metrics Prometheus JMX exporter metrics service port
##
ports:
metrics: 5556
## @param metrics.jmx.service.clusterIP Static clusterIP or None for headless services
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
##
clusterIP: ""
## @param metrics.jmx.service.sessionAffinity Control where client requests go, to the same pod or round-robin
## Values: ClientIP or None
## ref: https://kubernetes.io/docs/user-guide/services/
##
sessionAffinity: None
## @param metrics.jmx.service.annotations [object] Annotations for the Prometheus JMX exporter service
##
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.jmx.service.ports.metrics }}"
prometheus.io/path: "/"
## @param metrics.jmx.whitelistObjectNames Allows setting which JMX objects to expose via the JMX exporter
## Only whitelisted values will be exposed via the JMX exporter. They must also be exposed via Rules. To expose all metrics
## (warning: this is excessive, and the metrics are not formatted in a Prometheus style), (1) set `whitelistObjectNames: []`
## and (2) comment out `overrideConfig` above.
##
whitelistObjectNames:
- kafka.controller:*
- kafka.server:*
- java.lang:*
- kafka.network:*
- kafka.log:*
## @param metrics.jmx.config [string] Configuration file for JMX exporter
## Specify content for jmx-kafka-prometheus.yml. Evaluated as a template
##
## Credits to the incubator/kafka chart for the JMX configuration.
## https://github.com/helm/charts/tree/master/incubator/kafka
##
config: |-
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
lowercaseOutputName: true
lowercaseOutputLabelNames: true
ssl: false
{{- if .Values.metrics.jmx.whitelistObjectNames }}
whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
{{- end }}
## @param metrics.jmx.existingConfigmap Name of existing ConfigMap with JMX exporter configuration
## NOTE: This will override metrics.jmx.config
##
existingConfigmap: ""
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
## @param metrics.serviceMonitor.enabled if `true`, creates a Prometheus Operator ServiceMonitor (requires `metrics.kafka.enabled` or `metrics.jmx.enabled` to be `true`)
##
enabled: false
## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
##
namespace: ""
## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
##
interval: ""
## @param metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
##
scrapeTimeout: ""
## @param metrics.serviceMonitor.labels Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
##
labels: {}
## @param metrics.serviceMonitor.selector Prometheus instance selector labels
## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
##
selector: {}
## @param metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping
##
relabelings: []
## @param metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion
##
metricRelabelings: []
## @param metrics.serviceMonitor.honorLabels Specify honorLabels parameter to add the scrape endpoint
##
honorLabels: false
## @param metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in Prometheus
##
jobLabel: ""

## @section Kafka provisioning parameters

## Kafka provisioning
##
provisioning:
## @param provisioning.enabled Enable kafka provisioning Job
##
enabled: false
## @param provisioning.numPartitions Default number of partitions for topics when unspecified
##
numPartitions: 1
## @param provisioning.replicationFactor Default replication factor for topics when unspecified
##
replicationFactor: 1
## @param provisioning.topics Kafka provisioning topics
## - name: topic-name
## partitions: 1
## replicationFactor: 1
## ## https://kafka.apache.org/documentation/#topicconfigs
## config:
## max.message.bytes: 64000
## flush.messages: 1
##
topics: []
## @param provisioning.command Override provisioning container command
##
command: []
## @param provisioning.args Override provisioning container arguments
##
args: []
## @param provisioning.podAnnotations Extra annotations for Kafka provisioning pods
##
podAnnotations: {}
## @param provisioning.podLabels Extra labels for Kafka provisioning pods
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Kafka provisioning resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## @param provisioning.resources.limits The resources limits for the Kafka provisioning container
## @param provisioning.resources.requests The requested resources for the Kafka provisioning container
##
resources:
limits: {}
requests: {}
## Kafka provisioning pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param provisioning.podSecurityContext.enabled Enable security context for the pods
## @param provisioning.podSecurityContext.fsGroup Set Kafka provisioning pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## Kafka provisioning containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param provisioning.containerSecurityContext.enabled Enable Kafka provisioning containers' Security Context
## @param provisioning.containerSecurityContext.runAsUser Set Kafka provisioning containers' Security Context runAsUser
## @param provisioning.containerSecurityContext.runAsNonRoot Set Kafka provisioning containers' Security Context runAsNonRoot
## e.g:
## containerSecurityContext:
## enabled: true
## capabilities:
## drop: ["NET_RAW"]
## readOnlyRootFilesystem: true
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## @param provisioning.schedulerName Name of the k8s scheduler (other than default) for kafka provisioning
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param provisioning.extraVolumes Optionally specify extra list of additional volumes for the Kafka provisioning pod(s)
## e.g:
## extraVolumes:
## - name: kafka-jaas
## secret:
## secretName: kafka-jaas
##
extraVolumes: []
## @param provisioning.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Kafka provisioning container(s)
## extraVolumeMounts:
## - name: kafka-jaas
## mountPath: /bitnami/kafka/config/kafka_jaas.conf
## subPath: kafka_jaas.conf
##
extraVolumeMounts: []
## @param provisioning.sidecars Add additional sidecar containers to the Kafka provisioning pod(s)
## e.g:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @param provisioning.initContainers Add init containers to the Kafka provisioning pod(s)
## e.g:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
initContainers: []

## @section ZooKeeper chart parameters

## ZooKeeper chart configuration
## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
##
zookeeper:
## @param zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart
##
enabled: true
## @param zookeeper.replicaCount Number of ZooKeeper nodes
##
replicaCount: 3
## ZooKeeper authentication
##
auth:
## @param zookeeper.auth.enabled Enable ZooKeeper auth
##
enabled: false
## @param zookeeper.auth.clientUser User that ZooKeeper clients will use to auth
##
clientUser: ""
## @param zookeeper.auth.clientPassword Password that ZooKeeper clients will use to auth
##
clientPassword: ""
## @param zookeeper.auth.serverUsers Comma, semicolon or whitespace separated list of users to be created. Specify them as a string, for example: "user1,user2,admin"
##
serverUsers: ""
## @param zookeeper.auth.serverPasswords Comma, semicolon or whitespace separated list of passwords to assign to users when created. Specify them as a string, for example: "pass4user1, pass4user2, pass4admin"
##
serverPasswords: ""
## ZooKeeper Persistence parameters
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
## @param zookeeper.persistence.storageClass Persistent Volume storage class
## @param zookeeper.persistence.accessModes Persistent Volume access modes
## @param zookeeper.persistence.size Persistent Volume size
##
persistence:
enabled: true
storageClass: local-storage
accessModes:
- ReadWriteOnce
size: 5Gi

## External Zookeeper Configuration
## All of these values are only used if `zookeeper.enabled=false`
##
externalZookeeper:
## @param externalZookeeper.servers List of external zookeeper servers to use
##
servers: []
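As an example of overriding these values, the `provisioning` section documented above can have the chart pre-create a topic at install time. A sketch, using the schema from the chart's own comments (the topic name and settings here are hypothetical):

```yaml
provisioning:
  enabled: true
  topics:
    - name: test-topic          # hypothetical topic name
      partitions: 3
      replicationFactor: 3
      config:                   # per-topic overrides, see kafka.apache.org/documentation/#topicconfigs
        max.message.bytes: 64000
```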

Deploy with Helm. This must be run from the directory where the fetched chart was unpacked:

(⎈ |minikube:default)➜  /Users/lintong/coding/helm/kafka git:(master) ✗ $ helm install kafka -f values.yaml .

Check the PVs and PVCs; they are now in the Bound state. Note that with `volumeBindingMode: Immediate` the claims may bind to the PVs in arbitrary order: here `data-kafka-zookeeper-0` is bound to the PV named `data-kafka-zookeeper-1` and vice versa.

kubectl get pv,pvc -A
NAME                                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS    REASON   AGE
persistentvolume/data-kafka-zookeeper-0   5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-1   local-storage            10h
persistentvolume/data-kafka-zookeeper-1   5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-0   local-storage            10h
persistentvolume/data-kafka-zookeeper-2   5Gi        RWO            Retain           Bound    default/data-kafka-zookeeper-2   local-storage            10h
persistentvolume/datadir-kafka-0          5Gi        RWO            Retain           Bound    default/data-kafka-0             local-storage            10h
persistentvolume/datadir-kafka-1          5Gi        RWO            Retain           Bound    default/data-kafka-1             local-storage            10h
persistentvolume/datadir-kafka-2          5Gi        RWO            Retain           Bound    default/data-kafka-2             local-storage            10h

NAMESPACE   NAME                                           STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
default     persistentvolumeclaim/data-kafka-0             Bound    datadir-kafka-0          5Gi        RWO            local-storage   22m
default     persistentvolumeclaim/data-kafka-1             Bound    datadir-kafka-1          5Gi        RWO            local-storage   22m
default     persistentvolumeclaim/data-kafka-2             Bound    datadir-kafka-2          5Gi        RWO            local-storage   22m
default     persistentvolumeclaim/data-kafka-zookeeper-0   Bound    data-kafka-zookeeper-1   5Gi        RWO            local-storage   10h
default     persistentvolumeclaim/data-kafka-zookeeper-1   Bound    data-kafka-zookeeper-0   5Gi        RWO            local-storage   10h
default     persistentvolumeclaim/data-kafka-zookeeper-2   Bound    data-kafka-zookeeper-2   5Gi        RWO            local-storage   10h
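The PV-to-PVC cross-binding can also be checked programmatically. A minimal sketch: the `kubectl get pv` rows above are pasted in as a string for illustration; against a live cluster the same parsing would work on `kubectl get pv --no-headers` output.

```python
# Extract the PV -> claim mapping from `kubectl get pv` output.
pv_output = """\
data-kafka-zookeeper-0   5Gi   RWO   Retain   Bound   default/data-kafka-zookeeper-1   local-storage   10h
data-kafka-zookeeper-1   5Gi   RWO   Retain   Bound   default/data-kafka-zookeeper-0   local-storage   10h
data-kafka-zookeeper-2   5Gi   RWO   Retain   Bound   default/data-kafka-zookeeper-2   local-storage   10h"""

binding = {}
for line in pv_output.splitlines():
    fields = line.split()
    # Columns: NAME CAPACITY ACCESS-MODES RECLAIM-POLICY STATUS CLAIM ...
    pv_name, claim = fields[0], fields[5]
    binding[pv_name] = claim

# With Immediate binding the match order is arbitrary:
# PV ...-0 holds claim ...-1 and vice versa.
print(binding["data-kafka-zookeeper-0"])  # default/data-kafka-zookeeper-1
```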

Check the pod status:

kubectl get pod -A
NAMESPACE   NAME                READY   STATUS    RESTARTS   AGE
default     kafka-0             1/1     Running   2          7m14s
default     kafka-1             1/1     Running   3          7m14s
default     kafka-2             1/1     Running   3          7m14s
default     kafka-zookeeper-0   1/1     Running   0          7m14s
default     kafka-zookeeper-1   1/1     Running   0          7m14s
default     kafka-zookeeper-2   1/1     Running   0          7m14s


4. Connect to the Kafka cluster#

Check the services. kafka-0-external, kafka-1-external, and kafka-2-external are all of type NodePort, so they can be reached from outside the cluster via the minikube IP:

kubectl get service -A
NAMESPACE   NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default     kafka                      ClusterIP   10.101.193.49    <none>        9092/TCP                     11m
default     kafka-0-external           NodePort    10.107.22.127    <none>        9094:30001/TCP               11m
default     kafka-1-external           NodePort    10.106.222.15    <none>        9094:30002/TCP               11m
default     kafka-2-external           NodePort    10.111.166.28    <none>        9094:30003/TCP               11m
default     kafka-headless             ClusterIP   None             <none>        9092/TCP,9093/TCP            11m
default     kafka-zookeeper            ClusterIP   10.110.168.238   <none>        2181/TCP,2888/TCP,3888/TCP   11m
default     kafka-zookeeper-headless   ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   11m

Get the minikube IP:

(⎈ |minikube:default)➜  /Users/lintong $ minikube ip
192.168.99.100
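Putting the two together: each kafka-N-external service maps container port 9094 to a fixed NodePort, so an external client's bootstrap list is the minikube IP paired with those NodePorts. A sketch of this; the kafka-python producer call is an assumption and is left commented out so the snippet runs without a live broker:

```python
minikube_ip = "192.168.99.100"          # from `minikube ip` above
node_ports = [30001, 30002, 30003]      # from kafka-{0,1,2}-external above

# External bootstrap addresses for any Kafka client.
bootstrap_servers = [f"{minikube_ip}:{p}" for p in node_ports]
print(",".join(bootstrap_servers))

# A client would then connect with, e.g. (requires the kafka-python package):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers=bootstrap_servers)
# producer.send("test-topic", b"hello from outside the cluster")
```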

Connect using Offset Explorer; the connection succeeds.