Step-by-Step Kubernetes Binary Deployment (Part 2): flannel Network Configuration (Single Node)

Dannyvon 2020-05-05


Preface

The previous article walked through building the etcd cluster for a single-node Kubernetes binary deployment. This article picks up where that one left off and continues deploying the single-node Kubernetes cluster, completing the flannel network configuration that handles communication between the cluster's hosts.

Environment Preparation

First, install docker-ce on the two node machines. You can refer to my earlier article on docker deployment, "Unveiling Docker: Basic Theory and an Installation Walkthrough" (揭开docker的面纱——基础理论梳理和安装流程演示). Here I installed it directly with a shell script; for the registry mirror, it is best to use an accelerator address you have requested yourself from Alibaba Cloud or elsewhere.

Last time I suspended the virtual machines in the lab environment, so at this point I recommend checking that the nodes can reach the Internet, then verifying the health of the three-node etcd cluster. Of the three machines, node01 is used as the example here.

[ opt]# ping www.baidu.com
# on both node machines, verify the docker service is running
[ opt]# systemctl status docker.service 
# check the etcd cluster health
[ ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" cluster-health 
member a25c294d3a391c7c is healthy: got healthy result from https://192.168.0.128:2379
member b2db359ffad36ee5 is healthy: got healthy result from https://192.168.0.129:2379
member eddae83baed564ba is healthy: got healthy result from https://192.168.0.130:2379
cluster is healthy

The line "cluster is healthy" confirms that the etcd cluster is currently healthy.

Configuring the flannel Network

On the master node: write the allocated subnet range into etcd for flannel to use.

# write the network config
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

# command output
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

# read the config back to verify
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" get /coreos.com/network/config
# command output
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
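Note that etcdctl stores the value as an opaque string, so a malformed config only surfaces later when flanneld fails to start. A small pre-check sketch (not part of the original procedure, and assuming python3 is available on the master) that validates the JSON before handing it to etcdctl:

```shell
# Hypothetical pre-check: validate the flannel config JSON locally before
# writing it to etcd, since etcd accepts any string and flanneld only
# complains at startup.
CONFIG='{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
if echo "$CONFIG" | python3 -m json.tool > /dev/null; then
    echo "config is valid JSON"
else
    echo "config is NOT valid JSON" >&2
fi
```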

Next, deploy flannel on the node machines. First we need the software package; the configuration is identical on both nodes, so node01 is again used as the example.
Package download:
Link: https://pan.baidu.com/s/1etCPIGRQ1ZUxcNaCxChaCQ
Extraction code: 65ml

[ ~]# ls
anaconda-ks.cfg                     initial-setup-ks.cfg  模板  图片  下载  桌面
flannel-v0.10.0-linux-amd64.tar.gz 
[ ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
# these are the files extracted from the package

Create the Kubernetes working directory on both nodes, then move the two executables into the bin directory:

[ ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[ ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

We also need to write the configuration file and the systemd unit; a shell script handles both:

vim flannel.sh

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

Run the script, passing the etcd endpoints as its argument:

[ ~]# bash flannel.sh https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
# output:
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

Now configure docker to connect to flannel:

# edit the docker systemd unit
[ ~]# vim /usr/lib/systemd/system/docker.service
# reference the environment file generated by flannel (line 14 of the unit)
14 EnvironmentFile=/run/flannel/subnet.env
# add the $DOCKER_NETWORK_OPTIONS parameter to the start command (line 15)
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

Take a look at the subnet.env file:

[ ~]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.56.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.56.1/24 --ip-masq=false --mtu=1450"
# --bip sets the docker0 bridge IP (and thus the container subnet) at startup
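The subnet.env file that mk-docker-opts.sh generates is plain shell-style KEY="value" assignments, which is why docker can consume it via EnvironmentFile=. A minimal sketch reproducing that mechanism with the values shown above (written to an illustrative /tmp path so it does not touch the real /run/flannel/subnet.env):

```shell
# Sketch of how docker consumes subnet.env: the file can be sourced like
# any shell fragment. Values copied from the example output above.
cat <<'EOF' > /tmp/subnet.env.demo
DOCKER_OPT_BIP="--bip=172.17.56.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.56.1/24 --ip-masq=false --mtu=1450"
EOF
. /tmp/subnet.env.demo
# MTU is 1450 because VXLAN encapsulation adds 50 bytes of headers
# (14 outer Ethernet + 20 IP + 8 UDP + 8 VXLAN) to the 1500-byte link MTU.
echo "docker will start with:$DOCKER_NETWORK_OPTIONS"
```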

Restart the docker service:

[ ~]# systemctl daemon-reload
[ ~]# systemctl restart docker

Inspect the flannel network:

[ ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:fb:e2:37:f9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.129  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe1d:9287  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:1d:92:87  txqueuelen 1000  (Ethernet)
        RX packets 1068818  bytes 1195325321 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 461088  bytes 43526519 (41.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# check that the flannel.1 subnet below matches subnet.env above
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::74a5:98ff:fe3f:4bf7  prefixlen 64  scopeid 0x20<link>
        ether 76:a5:98:3f:4b:f7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26 overruns 0  carrier 0  collisions 0

On my node02 the flannel subnet is 172.17.91.0; from node01, test pinging that subnet's gateway:

[ ~]# ping 172.17.91.1
PING 172.17.91.1 (172.17.91.1) 56(84) bytes of data.
64 bytes from 172.17.91.1: icmp_seq=1 ttl=64 time=0.436 ms
64 bytes from 172.17.91.1: icmp_seq=2 ttl=64 time=0.343 ms
64 bytes from 172.17.91.1: icmp_seq=3 ttl=64 time=1.19 ms
64 bytes from 172.17.91.1: icmp_seq=4 ttl=64 time=0.439 ms
^C

A successful ping proves that flannel is providing the routing between the nodes.
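Both node subnets are /24 slices that flanneld leased out of the 172.17.0.0/16 Network written to etcd earlier, which is why traffic between them can be routed over flannel.1. A quick prefix check is enough to illustrate this (an illustration only, using the subnet values observed above; a /24 inside a /16 always shares the first two octets):

```shell
# Illustration: flannel hands each node a /24 out of 172.17.0.0/16, so
# every node subnet shares the 172.17. prefix. Subnets taken from the
# two nodes in this walkthrough.
for subnet in 172.17.56.0/24 172.17.91.0/24; do
    case $subnet in
        172.17.*) echo "$subnet is inside the flannel Network 172.17.0.0/16" ;;
        *)        echo "$subnet is OUTSIDE the flannel Network" ;;
    esac
done
```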

Now start a container on each of the two nodes to test whether the containers can communicate with each other over the network:

[ ~]#  docker run -it centos:7 /bin/bash
# this drops you straight into the container
[ /]# yum install -y net-tools
[ /]# ifconfig                
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
        RX packets 9511  bytes 7631125 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4561  bytes 249617 (243.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# the second container's address (on node02)
[ /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.91.2  netmask 255.255.255.0  broadcast 172.17.91.255
        ether 02:42:ac:11:5b:02  txqueuelen 0  (Ethernet)
        RX packets 9456  bytes 7629047 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4802  bytes 262568 (256.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Test whether the two containers can ping each other:

[ /]# ping 172.17.91.2
PING 172.17.91.2 (172.17.91.2) 56(84) bytes of data.
64 bytes from 172.17.91.2: icmp_seq=1 ttl=62 time=0.555 ms
64 bytes from 172.17.91.2: icmp_seq=2 ttl=62 time=0.361 ms
64 bytes from 172.17.91.2: icmp_seq=3 ttl=62 time=0.435 ms

If the ping succeeds, containers on the two nodes can now communicate with each other.
