Installing Kafka and Zookeeper Clusters on k8s with Helm

We deploy highly available Zookeeper and Kafka clusters with Helm. Kafka stores its metadata in Zookeeper, so the Zookeeper cluster has to be set up first and the Kafka cluster deployed afterwards.

An earlier article covered installing a Kafka cluster from the binary packages. These days most environments run on Kubernetes and favor fast deployment and quick project bootstrapping, so below we use Helm to stand up a Kafka cluster and configure persistent storage.

k8s StorageClass persistence and the binary Kafka installation are not covered here; see the articles below:

Persistent Storage: StorageClass

Message Queue: Kafka (unfinished)

Keep in mind that containerized Kafka is constrained by the configuration of the underlying physical hosts; think carefully before relying on it for high-concurrency scenarios.
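If you do run Kafka on Kubernetes, it helps to pin CPU and memory for the broker pods so they are not starved by noisy neighbors. A minimal values.yaml sketch, assuming the Bitnami charts' standard resources block (the numbers are only illustrative and should be sized to your hosts):

# Illustrative resource requests/limits for each broker pod (adjust to your hardware)
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    cpu: "4"
    memory: 8Gi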

Installing Helm

Related component versions

  • Kubernetes 1.24.0
  • Containerd v1.6.4
# Download
wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz

# Extract
tar zxvf helm-v3.6.1-linux-amd64.tar.gz

# Install
mv linux-amd64/helm /usr/local/bin/

# Verify
helm version

Deploying the Zookeeper Cluster with Helm

# Add the bitnami repo
helm repo add bitnami https://charts.bitnami.com/bitnami

# Search the repo for charts
helm search repo bitnami

# Pull the zookeeper chart
helm pull bitnami/zookeeper

# Extract
tar zxvf zookeeper-11.4.2.tgz

# Enter the zookeeper chart directory
cd zookeeper
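Before editing anything, it can be handy to dump the chart's full default values for reference; a quick sketch using standard Helm commands:

# Save the chart's default values for side-by-side comparison
helm show values bitnami/zookeeper > zookeeper-default-values.yaml

# Or inspect the values.yaml bundled in the extracted chart
less values.yaml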

Next, configure the timezone, persistent storage, replica count, and so on for Zookeeper in values.yaml:

# Set the container timezone
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"

# Allow anonymous connections (enabled by default)
allowAnonymousLogin: true
---

# Disable authentication (disabled by default)
auth:
  enabled: false
---

# Set the replica count
replicaCount: 3 
---

# Configure persistence as needed
persistence:
  enabled: true
  storageClass: "rook-ceph-block"  # can be omitted if a default StorageClass exists
  accessModes:
    - ReadWriteOnce
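If you would rather not edit values.yaml, the same settings can be passed with --set flags at install time. A sketch equivalent to the edits above (the value paths mirror the snippet and should be double-checked against your chart version's values.yaml):

helm install zookeeper bitnami/zookeeper -n kafka --create-namespace \
  --set replicaCount=3 \
  --set auth.enabled=false \
  --set allowAnonymousLogin=true \
  --set persistence.enabled=true \
  --set persistence.storageClass=rook-ceph-block \
  --set "extraEnvVars[0].name=TZ" \
  --set "extraEnvVars[0].value=Asia/Shanghai"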

Create the kafka namespace

[root@k8s-02 zookeeper]# kubectl create ns kafka

Create the Zookeeper cluster with Helm

[root@k8s-02 zookeeper]# helm install zookeeper -n kafka .
# The environment and version details below are from this deployment
NAME: zookeeper
LAST DEPLOYED: Tue May 23 13:40:12 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 11.4.2
APP VERSION: 3.8.1

# The chart NOTES follow; the commands in them can be used later to check the Zookeeper cluster
** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.kafka.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace kafka -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace kafka svc/zookeeper 2181:2181 &
    zkCli.sh 127.0.0.1:2181
[root@k8s-02 zookeeper]#
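Before checking the pods, you can wait for the StatefulSet rollout to finish; a sketch using standard kubectl commands (the label selector comes from the NOTES above):

# Block until all three Zookeeper replicas are rolled out
kubectl rollout status statefulset/zookeeper -n kafka

# Or wait on pod readiness directly
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=zookeeper -n kafka --timeout=300s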

Check the pod status

[root@k8s-02 zookeeper]# kubectl get all -n kafka
NAME              READY   STATUS    RESTARTS   AGE
pod/zookeeper-0   1/1     Running   0          52s
pod/zookeeper-1   1/1     Running   0          51s
pod/zookeeper-2   1/1     Running   0          49s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/zookeeper            ClusterIP   10.110.142.203   <none>        2181/TCP,2888/TCP,3888/TCP   52s
service/zookeeper-headless   ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   52s

NAME                         READY   AGE
statefulset.apps/zookeeper   3/3     52s

Check the PVCs

[root@k8s-02 zookeeper]# kubectl get pvc  |grep zook
data-zookeeper-0   Bound    pvc-997a81c1-6986-4620-88f4-2270247354f5   8Gi        RWO            nfs-storage    7d23h
data-zookeeper-1   Bound    pvc-a6012ebb-1f70-43d1-ac1b-8deaec660efe   8Gi        RWO            nfs-storage    7d23h
data-zookeeper-2   Bound    pvc-f6300de4-8cd9-4807-a5fd-2655deb05139   8Gi        RWO            nfs-storage    7d23h
[root@k8s-02 zookeeper]#

Check the Zookeeper cluster status

[root@k8s-02 zookeeper]# kubectl exec -it -n kafka zookeeper-0 -- bash
I have no name!@zookeeper-0:/$
I have no name!@zookeeper-0:/$ zkServer.sh status
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
I have no name!@zookeeper-0:/$
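A healthy three-node ensemble should report exactly one leader and two followers. A small sketch that runs the same check against every pod from outside:

# Print the role of each Zookeeper node (expect one "leader" and two "follower")
for i in 0 1 2; do
  echo "=== zookeeper-$i ==="
  kubectl exec -n kafka zookeeper-$i -- zkServer.sh status
done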

Deploying the Kafka Cluster with Helm

Pull the kafka chart

helm pull bitnami/kafka

Extract the kafka chart

[root@k8s-02 ~]# tar xf kafka-22.1.3.tgz

Enter the kafka chart directory

[root@k8s-02 ~]# cd kafka
[root@k8s-02 kafka]# ls
Chart.lock  charts  Chart.yaml  README.md  templates  values.yaml

Edit values.yaml

# Set the container timezone
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
---

# Replica count
replicaCount: 3
---

# Persistent storage
persistence:
  enabled: true
  storageClass: "rook-ceph-block"  # can be omitted if a default StorageClass exists
  accessModes:
    - ReadWriteOnce
  size: 8Gi
---
kraft:
  ## @param kraft.enabled Switch to enable or disable the Kraft mode for Kafka
  ##
  enabled: false   # set to false so Kafka uses Zookeeper instead of KRaft
---
# Configure the external Zookeeper connection
zookeeper:
  enabled: false                   # do not deploy the chart's built-in Zookeeper (default is false)

externalZookeeper:                 # external Zookeeper
  servers: zookeeper               # name of the Zookeeper Service
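If Kafka were installed in a different namespace from Zookeeper, the short Service name would not resolve; in that case use the fully qualified name from the Zookeeper NOTES instead, for example:

externalZookeeper:
  servers: zookeeper.kafka.svc.cluster.local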

Optional settings

## Allow topic deletion (enable as needed)
deleteTopicEnable: true

## Log retention time (default one week)
logRetentionHours: 168

## Default replication factor for automatically created topics
defaultReplicationFactor: 2

## Replication factor for the internal offsets topic
offsetsTopicReplicationFactor: 2

## Replication factor for the transaction state log topic
transactionStateLogReplicationFactor: 2

## min.insync.replicas for the transaction state log
transactionStateLogMinIsr: 2

## Default number of partitions for new topics
numPartitions: 3
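After the release is deployed (next step), these broker-level settings can be verified with the stock kafka-configs.sh tool. A sketch, assuming the broker ids follow the pod ordinals (0, 1, 2) and a Kafka version recent enough to support --all:

# Dump the effective configuration of broker 0 and pick out the values set above
kubectl exec -n kafka kafka-0 -- kafka-configs.sh \
  --bootstrap-server kafka:9092 \
  --entity-type brokers --entity-name 0 --describe --all \
  | grep -E 'delete.topic.enable|log.retention.hours|default.replication.factor|num.partitions'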

Create the Kafka cluster

[root@k8s-02 kafka]# helm install kafka -n kafka .

# The output is as follows
W0523 13:52:58.673090   28827 warnings.go:70] spec.template.spec.containers[0].env[39].name: duplicate name "KAFKA_ENABLE_KRAFT"
NAME: kafka
LAST DEPLOYED: Tue May 23 13:52:58 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 22.1.3
APP VERSION: 3.4.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092
    kafka-1.kafka-headless.kafka.svc.cluster.local:9092
    kafka-2.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r33 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

Exec into the Kafka cluster and create a topic to verify the configuration

## Exec into the Kafka cluster
kubectl exec -it -n kafka kafka-0 -- bash

# Create a topic
kafka-topics.sh --create --bootstrap-server kafka:9092 --topic abcdocker

# List topics
kafka-topics.sh --list --bootstrap-server kafka:9092

# Describe the topic
kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic abcdocker

# The settings from values.yaml have taken effect: 3 partitions, 3 replicas, and a 168-hour retention period by default

I have no name!@kafka-0:/$ kafka-topics.sh --bootstrap-server kafka:9092  --describe --topic abcdocker
Topic: abcdocker    TopicId: jcJtxY1NSr-nSloax8oPnA PartitionCount: 3   ReplicationFactor: 3    Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
    Topic: abcdocker    Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0
    Topic: abcdocker    Partition: 1    Leader: 0   Replicas: 0,1,2 Isr: 0,1,2
    Topic: abcdocker    Partition: 2    Leader: 2   Replicas: 2,0,1 Isr: 2,0,1
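As a final smoke test, still inside the kafka-0 pod, produce a message to the abcdocker topic and read it back with the console tools (server and topic names as used above):

# Produce a single test message
echo "hello abcdocker" | kafka-console-producer.sh --bootstrap-server kafka:9092 --topic abcdocker

# Read it back and exit after one message
kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic abcdocker --from-beginning --max-messages 1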
