Basic Environment Preparation
JDK download: http://www.oracle.com/technetwork/java/javase/downloads/index.html
Zookeeper download: https://www.apache.org/dyn/closer.cgi/zookeeper/
Kafka download: http://kafka.apache.org/downloads
Server configuration:
CentOS 7.6, 8 cores / 32 GB RAM (16 GB or more recommended)
node1: 10.10.50.3 (es1+kafka1+zk1)
node2: 10.10.50.13 (es2+kafka2+zk2)
node3: 10.10.50.5 (es3+kafka3+zk3)
1. Install the Java environment and sync the time
yum install -y java-1.8.0-openjdk.x86_64 ntpdate epel-release
/bin/cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate us.pool.ntp.org
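The ntpdate run above only syncs the clock once; to keep it in sync you could schedule it via cron. A minimal sketch (us.pool.ntp.org is just an example server):
# Re-sync the clock every hour via cron
(crontab -l 2>/dev/null; echo '0 * * * * /usr/sbin/ntpdate us.pool.ntp.org >/dev/null 2>&1') | crontab -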
2. Kernel tuning
cat > /etc/security/limits.d/20-nproc.conf <<EOF
* soft nproc 4096
* hard nproc 4096
root soft nproc unlimited
EOF
cat > /etc/security/limits.d/90-nproc.conf <<EOF
* soft nproc 4096
root soft nproc unlimited
EOF
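The limits in /etc/security/limits.d/ only apply to new login sessions; after logging in again you can verify the process limit:
# Should print 4096 for a non-root user
ulimit -u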
cat > /etc/sysctl.conf <<EOF
fs.file-max = 999999
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 40960
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
EOF
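The new kernel parameters are not active until they are loaded; apply them immediately with:
# Load /etc/sysctl.conf without rebooting
sysctl -p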
3. Set the hostnames
# node1
echo 'node1' > /etc/hostname
# node2
echo 'node2' > /etc/hostname
# node3
echo 'node3' > /etc/hostname
cat >> /etc/hosts <<EOF
10.10.50.3 node1
10.10.50.13 node2
10.10.50.5 node3
EOF
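Writing /etc/hostname only takes effect after a reboot; to apply the new hostname immediately you can also run hostnamectl, for example on node1:
hostnamectl set-hostname node1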
4. Configure passwordless SSH login
ssh-keygen
ssh-copy-id node2
ssh-copy-id node3
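To confirm passwordless login works from node1, a quick check:
# Should print the remote hostnames without asking for a password
ssh node2 hostname
ssh node3 hostname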
5. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
All of the above steps must be performed on all three nodes.
Zookeeper Cluster Installation
The Zookeeper cluster can be deployed independently, or you can use the Zookeeper bundled with Kafka. For better service isolation, this guide installs Zookeeper separately.
The Zookeeper version must match the Kafka version in use. The version correspondence below is for reference only.
Kafka version | Zookeeper version | Spring Boot version
---|---|---
kafka_2.12-2.4.0 | zookeeper-3.5.6.jar | - |
kafka_2.12-2.3.1 | zookeeper-3.4.14.jar | springboot2.2.2 |
kafka_2.12-2.3.0 | zookeeper-3.4.14.jar | springboot2.2.2 |
kafka_2.12-1.1.1 | zookeeper-3.4.10.jar | - |
kafka_2.12-1.1.0 | zookeeper-3.4.10.jar | - |
kafka_2.12-1.0.2 | zookeeper-3.4.10.jar | - |
kafka_2.12-1.0.0 | zookeeper-3.4.10.jar | - |
kafka_2.12-0.11.0.0 | zookeeper-3.4.10.jar | - |
kafka_2.12-0.10.2.2 | zookeeper-3.4.9.jar | - |
kafka_2.11-0.10.0.0 | zookeeper-3.4.6.jar | - |
kafka_2.11-0.9.0.0 | zookeeper-3.4.6.jar | - |
The Kafka version installed here is kafka_2.11-2.4.1, and the corresponding Zookeeper version is 3.5.7.
Other versions can be downloaded from: http://archive.apache.org/dist/zookeeper/
1. Download Zookeeper
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz
tar zxvf apache-zookeeper-3.5.7-bin.tar.gz
mv apache-zookeeper-3.5.7-bin /usr/local/zookeeper-3.5.7
2. Create the data and log directories
mkdir -p /data/zookeeper/logs
mkdir -p /data/zookeeper/data
3. Set the environment variables and reload the profile
# vim /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.7
export PATH=$ZOOKEEPER_HOME/bin:$PATH
# source /etc/profile
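A quick way to confirm the new PATH is in effect:
# Should print /usr/local/zookeeper-3.5.7/bin/zkServer.sh
which zkServer.sh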
4. Configure each node
# cd /usr/local/zookeeper-3.5.7/conf
# cp -f zoo_sample.cfg zoo.cfg
# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
# The first port (default 2888) is used for communication between the leader and the followers; the second port (default 3888) is used for leader election, both at initial cluster startup and when a new election is triggered after the leader fails.
# echo '1' > /data/zookeeper/data/myid  # on server1 only; the value differs per node and must match server.1 above
# echo '2' > /data/zookeeper/data/myid  # on server2 only; must match server.2 above
# echo '3' > /data/zookeeper/data/myid  # on server3 only; must match server.3 above
5. Start and stop the service
zkServer.sh start
zkServer.sh stop
zkCli.sh
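After Zookeeper has been started on all three nodes, verify the cluster roles; one node should report leader and the other two follower:
# Run on each node; prints Mode: leader or Mode: follower
zkServer.sh status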
6. Enable startup at boot
# cat > /usr/lib/systemd/system/zookeeper.service <<EOF
[Unit]
Description=zookeeper server daemon
After=network.target
[Service]
Type=forking
ExecStart=/usr/local/zookeeper-3.5.7/bin/zkServer.sh start
ExecReload=/usr/local/zookeeper-3.5.7/bin/zkServer.sh restart
ExecStop=/usr/local/zookeeper-3.5.7/bin/zkServer.sh stop
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# systemctl start zookeeper
# systemctl enable zookeeper
Kafka Cluster Setup
1. Download Kafka
wget https://mirror.bit.edu.cn/apache/kafka/2.4.1/kafka_2.11-2.4.1.tgz
tar zxvf kafka_2.11-2.4.1.tgz
mv kafka_2.11-2.4.1 /usr/local/
2. Configuration
# vim /usr/local/kafka_2.11-2.4.1/config/server.properties
broker.id=1
listeners=PLAINTEXT://node1:9092 ## use this node's actual hostname or IP
advertised.listeners=PLAINTEXT://node1:9092 ## use this node's actual hostname or IP
num.network.threads=9 # number of CPU threads + 1
num.io.threads=16 # a multiple of the number of CPU threads
socket.send.buffer.bytes=10240000
socket.receive.buffer.bytes=10240000
message.max.bytes=100000000 # maximum message size in bytes; 100000000 is roughly 100 MB
socket.request.max.bytes=1048576000
queued.max.requests=5000
log.dirs=/data/kafka-logs
num.partitions=3 # default number of partitions for newly created topics
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3 # replication factor; must not exceed the number of brokers in the cluster
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
delete.topic.enable=true # allow topics to be deleted
log.cleaner.enable=true # enable log cleaning
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168 # maximum retention time for segment files, in hours; data older than 7 days is deleted
log.segment.bytes=1073741824 # size of each log segment file in bytes; default is 1 GB
log.retention.check.interval.ms=300000
zookeeper.connect=node1:2181,node2:2181,node3:2181
zookeeper.connection.timeout.ms=6000
zookeeper.session.timeout.ms=6000
zookeeper.sync.time.ms =2000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=false # disable automatic topic creation
3. The Kafka broker heap defaults to 1 GB; for production, 2 GB or 4 GB is recommended
# vim /usr/local/kafka_2.11-2.4.1/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
4. Copy the Kafka directory to the other two nodes
scp -r /usr/local/kafka_2.11-2.4.1 node2:/usr/local/
scp -r /usr/local/kafka_2.11-2.4.1 node3:/usr/local/
# On node2 and node3, update listeners, advertised.listeners, and broker.id, as shown below
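A minimal sketch of those per-node edits using sed, shown here for node2 (assumes the values written above; adjust accordingly for node3):
# Run on node2 after the scp
sed -i 's/^broker.id=1/broker.id=2/' /usr/local/kafka_2.11-2.4.1/config/server.properties
sed -i 's#//node1:9092#//node2:9092#g' /usr/local/kafka_2.11-2.4.1/config/server.properties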
5. Configure the environment variables
# vim /etc/profile
export KAFKA_HOME=/usr/local/kafka_2.11-2.4.1
export PATH=$KAFKA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
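As in the Zookeeper section, reload the profile so the new variables take effect:
# source /etc/profile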
6. Start and stop the Kafka service, and enable startup at boot
# Start Kafka: start the primary node (node1) first, then node2 and node3
#node1
kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.4.1/config/server.properties
kafka-server-stop.sh
#node2
kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.4.1/config/server.properties
kafka-server-stop.sh
#node3
kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.4.1/config/server.properties
kafka-server-stop.sh
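Once all three brokers are up, the cluster can be verified by creating and listing a test topic (the topic name test is just an example):
# Create a topic with 3 partitions and 3 replicas, then list all topics
kafka-topics.sh --create --bootstrap-server node1:9092 --replication-factor 3 --partitions 3 --topic test
kafka-topics.sh --list --bootstrap-server node1:9092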
Enable startup at boot:
# cat > /usr/lib/systemd/system/kafka.service <<EOF
[Unit]
Description=kafka server daemon
After=network.target zookeeper.service
[Service]
Type=forking
ExecStart=/usr/local/kafka_2.11-2.4.1/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.4.1/config/server.properties
ExecReload=/bin/sh -c '/usr/local/kafka_2.11-2.4.1/bin/kafka-server-stop.sh && sleep 2 && /usr/local/kafka_2.11-2.4.1/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.4.1/config/server.properties'
ExecStop=/usr/local/kafka_2.11-2.4.1/bin/kafka-server-stop.sh
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# systemctl start kafka
# systemctl enable kafka