Kubernetes Cluster Environment Setup (4)

guan000 2020-06-14


1. Kubernetes Cluster Environment Setup

1.1.1 Install and deploy the master-node service kube-apiserver

1. Deploy the kube-apiserver cluster

Hostname    Role                IP
hdss-21     kube-apiserver      10.0.0.21
hdss-22     kube-apiserver      10.0.0.22
hdss-11     L4 load balancer    10.0.0.11
hdss-12     L4 load balancer    10.0.0.12

nginx acts as the layer-4 load balancer, and keepalived holds a VIP in front of it; the VIP proxies the two kube-apiserver instances, making the control plane highly available.
2. Download and upload the kubernetes-server tarball

Download from: https://github.com/kubernetes/kubernetes/
[ ~]# cd /opt/src
[ /opt/src]# ll
total 442992
-rw-r--r-- 1 root root   9850227 Apr 27 14:37 etcd-v3.1.20-linux-amd64.tar.gz
-rw-r--r-- 1 root root 443770238 Apr 27 14:44 kubernetes-server-linux-amd64-v1.15.2.tar.gz

3. Extract and create a symlink

# On hdss-21
[ /opt/src]# tar zxf kubernetes-server-linux-amd64-v1.15.2.tar.gz -C /opt/
[ /opt/src]# mv /opt/kubernetes /opt/kubernetes-v1.15.2
[ /opt/src]# ln -s /opt/kubernetes-v1.15.2/ /opt/kubernetes
[ /opt/src]# ll -ld /opt/kubernetes
lrwxrwxrwx 1 root root 24 Jun 13 21:05 /opt/kubernetes -> /opt/kubernetes-v1.15.2/
[ /opt/src]# cd /opt/kubernetes/
[ /opt/kubernetes]# ll
total 27184
drwxr-xr-x 2 root root        6 Aug  5  2019 addons
-rw-r--r-- 1 root root 26625140 Aug  5  2019 kubernetes-src.tar.gz
-rw-r--r-- 1 root root  1205293 Aug  5  2019 LICENSES
drwxr-xr-x 3 root root       17 Aug  5  2019 server

# Repeat on hdss-22
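The symlink makes a future upgrade (or rollback) a single atomic re-point of the link rather than a re-install. A minimal sketch of the pattern under throwaway /tmp paths, with a hypothetical v1.16.0 directory standing in for a newer release:

```shell
# Demo of the version-switch pattern (hypothetical version dirs, throwaway paths).
demo=$(mktemp -d)
mkdir -p "$demo/kubernetes-v1.15.2" "$demo/kubernetes-v1.16.0"
ln -sfn "$demo/kubernetes-v1.15.2" "$demo/kubernetes"
echo "before: $(readlink "$demo/kubernetes")"
ln -sfn "$demo/kubernetes-v1.16.0" "$demo/kubernetes"   # upgrade = re-point the link
current=$(readlink "$demo/kubernetes")
echo "after:  $current"
rm -rf "$demo"
```

Services keep referring to /opt/kubernetes, so nothing else needs to change when the link moves.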
  4. Since this is a binary install, some bundled tarballs are unused and can be deleted (or kept):
[ /opt/kubernetes]# rm -f kubernetes-src.tar.gz # source tarball
[ /opt/kubernetes]# cd server/bin/
[ /opt/kubernetes/server/bin]# ls 
apiextensions-apiserver              kube-apiserver                      kubectl                kube-scheduler.docker_tag
cloud-controller-manager             kube-apiserver.docker_tag           kubelet                kube-scheduler.tar
cloud-controller-manager.docker_tag  kube-apiserver.tar                  kube-proxy             mounter
cloud-controller-manager.tar         kube-controller-manager             kube-proxy.docker_tag
hyperkube                            kube-controller-manager.docker_tag  kube-proxy.tar
kubeadm                              kube-controller-manager.tar         kube-scheduler
# Delete both the .tar files (container images) and the docker_tag files (image tags)
[ /opt/kubernetes/server/bin]# rm -f *.tar
[ /opt/kubernetes/server/bin]# rm -f *_tag
[ /opt/kubernetes/server/bin]# ll
total 884636
-rwxr-xr-x 1 root root  43534816 Aug  5  2019 apiextensions-apiserver
-rwxr-xr-x 1 root root 100548640 Aug  5  2019 cloud-controller-manager
-rwxr-xr-x 1 root root 200648416 Aug  5  2019 hyperkube
-rwxr-xr-x 1 root root  40182208 Aug  5  2019 kubeadm
-rwxr-xr-x 1 root root 164501920 Aug  5  2019 kube-apiserver
-rwxr-xr-x 1 root root 116397088 Aug  5  2019 kube-controller-manager
-rwxr-xr-x 1 root root  42985504 Aug  5  2019 kubectl
-rwxr-xr-x 1 root root 119616640 Aug  5  2019 kubelet
-rwxr-xr-x 1 root root  36987488 Aug  5  2019 kube-proxy
-rwxr-xr-x 1 root root  38786144 Aug  5  2019 kube-scheduler
-rwxr-xr-x 1 root root   1648224 Aug  5  2019 mounter
  5. Sign the client certificate (apiserver-client): used for apiserver-to-etcd communication, where the apiserver is the client and etcd is the server.
# On hdss-201
[ ~]# cd /opt/certs/
[ /opt/certs]# vim client-csr.json
[ /opt/certs]# cat client-csr.json 
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "guizhou",
            "L": "guiyang",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[ /opt/certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
2020/06/13 21:23:17 [INFO] generate received request
2020/06/13 21:23:17 [INFO] received CSR
2020/06/13 21:23:17 [INFO] generating key: rsa-2048
2020/06/13 21:23:17 [INFO] encoded CSR
2020/06/13 21:23:17 [INFO] signed certificate with serial number 164742628745058938739196176750276413219457623573
2020/06/13 21:23:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[ /opt/certs]# ll
total 56
-rw-r--r-- 1 root root  585 Apr 27 13:49 apiserver-csr.json
-rw-r--r-- 1 root root  840 Jun 12 21:24 ca-config.json
-rw-r--r-- 1 root root  993 Jun 10 21:49 ca.csr
-rw-r--r-- 1 root root  345 Jun 10 21:48 ca-csr.json
-rw------- 1 root root 1675 Jun 10 21:49 ca-key.pem
-rw-r--r-- 1 root root 1346 Jun 10 21:49 ca.pem
-rw-r--r-- 1 root root  993 Jun 13 21:23 client.csr
-rw-r--r-- 1 root root  280 Jun 13 21:22 client-csr.json
-rw------- 1 root root 1675 Jun 13 21:23 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 13 21:23 client.pem
-rw-r--r-- 1 root root 1062 Jun 12 21:33 etcd-peer.csr
-rw-r--r-- 1 root root  363 Jun 12 21:27 etcd-peer-csr.json
-rw------- 1 root root 1679 Jun 12 21:33 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jun 12 21:33 etcd-peer.pem
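Before shipping client.pem to the masters, it is worth confirming what was actually signed. A sketch, assuming openssl is installed; the throwaway self-signed cert below only exists so the command is demonstrable anywhere, and on hdss-201 you would point it at client.pem instead:

```shell
# Throwaway self-signed cert with the same CN/O/OU as client-csr.json,
# generated only so the inspection command can run anywhere.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=k8s-node/O=od/OU=ops" 2>/dev/null
# On hdss-201: openssl x509 -in client.pem -noout -subject -dates
subject=$(openssl x509 -in "$tmp/cert.pem" -noout -subject)
echo "$subject"
rm -rf "$tmp"
```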
  6. Create the CSR (certificate signing request) JSON config for the apiserver server certificate; its hosts list includes the HA VIP 10.0.0.10.
[ /opt/certs]# vim apiserver-csr.json 
[ /opt/certs]# cat apiserver-csr.json 
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.0.0.10",  #高可用vip
        "10.0.0.21",
        "10.0.0.22",
        "10.0.0.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "guizhou",
            "L": "guiyang",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[ /opt/certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
2020/06/13 21:35:49 [INFO] generate received request
2020/06/13 21:35:49 [INFO] received CSR
2020/06/13 21:35:49 [INFO] generating key: rsa-2048
2020/06/13 21:35:49 [INFO] encoded CSR
2020/06/13 21:35:49 [INFO] signed certificate with serial number 529471702081305162274454544664990111192752224227
2020/06/13 21:35:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[ /opt/certs]# ll
total 68
-rw-r--r-- 1 root root 1249 Jun 13 21:35 apiserver.csr
-rw-r--r-- 1 root root  566 Jun 13 21:31 apiserver-csr.json
-rw------- 1 root root 1679 Jun 13 21:35 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 13 21:35 apiserver.pem
-rw-r--r-- 1 root root  840 Jun 12 21:24 ca-config.json
-rw-r--r-- 1 root root  993 Jun 10 21:49 ca.csr
-rw-r--r-- 1 root root  345 Jun 10 21:48 ca-csr.json
-rw------- 1 root root 1675 Jun 10 21:49 ca-key.pem
-rw-r--r-- 1 root root 1346 Jun 10 21:49 ca.pem
-rw-r--r-- 1 root root  993 Jun 13 21:23 client.csr
-rw-r--r-- 1 root root  280 Jun 13 21:22 client-csr.json
-rw------- 1 root root 1675 Jun 13 21:23 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 13 21:23 client.pem
-rw-r--r-- 1 root root 1062 Jun 12 21:33 etcd-peer.csr
-rw-r--r-- 1 root root  363 Jun 12 21:27 etcd-peer-csr.json
-rw------- 1 root root 1679 Jun 12 21:33 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jun 12 21:33 etcd-peer.pem
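A missing entry in the hosts list surfaces later as TLS verification failures when clients hit that address, so it pays to check that every SAN made it into apiserver.pem. A self-contained sketch with a throwaway cert (assumes openssl 1.1.1+ for -addext); on hdss-201 run the x509 command against apiserver.pem instead:

```shell
tmp=$(mktemp -d)
# Throwaway cert carrying a subset of the SANs from apiserver-csr.json,
# generated only so the check is demonstrable anywhere.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -subj "/CN=k8s-apiserver" \
  -addext "subjectAltName=IP:10.0.0.10,IP:10.0.0.21,DNS:kubernetes.default.svc.cluster.local" \
  2>/dev/null
# On hdss-201: openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"
sans=$(openssl x509 -in "$tmp/cert.pem" -noout -text | grep -A1 "Subject Alternative Name")
echo "$sans"
rm -rf "$tmp"
```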
  7. Copy the certificates
# On hdss-21
[ /opt/kubernetes/server/bin]# mkdir certs
[ /opt/kubernetes/server/bin]# cd certs/
[ /opt/kubernetes/server/bin/certs]# scp hdss-201:/opt/certs/ca.pem ./
's password: 
ca.pem                                                                                             100% 1346   878.7KB/s   00:00    
[ /opt/kubernetes/server/bin/certs]# scp hdss-201:/opt/certs/ca-key.pem ./
's password: 
ca-key.pem                                                                                         100% 1675     2.0MB/s   00:00    
[ /opt/kubernetes/server/bin/certs]# scp hdss-201:/opt/certs/client.pem ./
's password: 
client.pem                                                                                         100% 1363     1.6MB/s   00:00    
[ /opt/kubernetes/server/bin/certs]# scp hdss-201:/opt/certs/client-key.pem ./
's password: 
client-key.pem                                                                                     100% 1675     2.2MB/s   00:00    
[ /opt/kubernetes/server/bin/certs]# scp hdss-201:/opt/certs/apiserver.pem ./
's password: 
apiserver.pem                                                                                      100% 1598     1.3MB/s   00:00    
[ /opt/kubernetes/server/bin/certs]# scp hdss-201:/opt/certs/apiserver-key.pem ./
's password: 
apiserver-key.pem                                                                                  100% 1679     1.7MB/s   00:00 

[ /opt/kubernetes/server/bin/certs]# ll
total 24
-rw------- 1 root root 1679 Jun 13 21:49 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 13 21:48 apiserver.pem
-rw------- 1 root root 1675 Jun 13 21:47 ca-key.pem
-rw-r--r-- 1 root root 1346 Jun 13 21:46 ca.pem
-rw------- 1 root root 1675 Jun 13 21:48 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 13 21:48 client.pem

# Repeat on hdss-22
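The six scp commands above can be collapsed into one loop. Demonstrated with a local cp into throwaway directories so it runs anywhere; on hdss-21/22 swap the cp for the scp source used above:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
certs="ca.pem ca-key.pem client.pem client-key.pem apiserver.pem apiserver-key.pem"
for f in $certs; do touch "$src/$f"; done     # stand-ins for the real certs
for f in $certs; do
    cp "$src/$f" "$dst/"    # on hdss-21: scp "hdss-201:/opt/certs/$f" ./
done
copied=$(ls "$dst" | wc -l)
echo "copied $copied files"
rm -rf "$src" "$dst"
```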

8. Create the audit-policy configuration

[ /opt/kubernetes/server/bin/certs]# cd ..
[ /opt/kubernetes/server/bin]# mkdir config
[ /opt/kubernetes/server/bin]# cd config/
[ /opt/kubernetes/server/bin/config]# vim audit.yaml 
[ /opt/kubernetes/server/bin/config]# cat audit.yaml 
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

# Repeat on hdss-22
  9. Write the startup script
# On hdss-21
[ /opt/kubernetes/server/bin/config]# cd ..
[ /opt/kubernetes/server/bin]# ./kube-apiserver --help # list the available flags
The Kubernetes API server validates and configures data
for the api objects which include pods, services, replicationcontrollers, and
others. The API Server services REST operations and provides the frontend to the
cluster's shared state through which all other components interact.

[ /opt/kubernetes/server/bin]# vim /opt/kubernetes/server/bin/kube-apiserver.sh
[ /opt/kubernetes/server/bin]# cat /opt/kubernetes/server/bin/kube-apiserver.sh 
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./config/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./certs/ca.pem \
  --requestheader-client-ca-file ./certs/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./certs/ca.pem \
  --etcd-certfile ./certs/client.pem \
  --etcd-keyfile ./certs/client-key.pem \
  --etcd-servers https://10.0.0.12:2379,https://10.0.0.21:2379,https://10.0.0.22:2379 \
  --service-account-key-file ./certs/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./certs/client.pem \
  --kubelet-client-key ./certs/client-key.pem \
  --log-dir /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./certs/apiserver.pem \
  --tls-private-key-file ./certs/apiserver-key.pem \
  --v 2

# Make the script executable
[ /opt/kubernetes/server/bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
[ /opt/kubernetes/server/bin]# ll /opt/kubernetes/server/bin/kube-apiserver.sh
-rwxr-xr-x 1 root root 1093 Jun 13 22:31 /opt/kubernetes/server/bin/kube-apiserver.sh

# Create the log directory
[ /opt/kubernetes/server/bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
[ /opt/kubernetes/server/bin]# ll /data/logs/kubernetes/
total 0
drwxr-xr-x 2 root root 6 Jun 13 22:33 kube-apiserver
# Repeat on hdss-22
  10. Create the supervisord program file
[ /opt/kubernetes/server/bin]# vim /etc/supervisord.d/kube-apiserver.ini
[ /opt/kubernetes/server/bin]# cat /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-21]                                     ; on hdss-22, name this kube-apiserver-22
command=/opt/kubernetes/server/bin/kube-apiserver.sh            ; the program (relative uses PATH, can take args)
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                            ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; restart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log        ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)

[ /opt/kubernetes/server/bin]# supervisorctl update
[ /opt/kubernetes/server/bin]# netstat -luntp | grep kube-api
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      1657/./kube-apiserv 
tcp6       0      0 :::6443                 :::*                    LISTEN      1657/./kube-apiserv 
# Repeat on hdss-22
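A quick way to probe those listeners without netstat is bash's /dev/tcp redirection (a bash-only feature; the ports follow the netstat output above):

```shell
# Returns 0 if something accepts a TCP connection on host:port.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
port_open 127.0.0.1 6443 && echo "secure port: up"   || echo "secure port: down"
port_open 127.0.0.1 8080 && echo "insecure port: up" || echo "insecure port: down"
```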

2. Install and Deploy the Layer-4 Reverse Proxy for the Master Nodes

2.1.1 Install and deploy the layer-4 reverse proxy for the master nodes

  1. Why this is needed

Deployed on hdss-11 and hdss-12: the VIP 10.0.0.10 listens on port 7443 and reverse-proxies port 6443 of the apiservers on hdss-21 and hdss-22.

  2. Install and configure nginx
# On hdss-11
[ ~]# yum install -y nginx
[ ~]# vim /etc/nginx/nginx.conf
# Because this is a layer-4 proxy, the stream block must go at the very end of nginx.conf, outside the http block (be sure not to put it inside http)
stream {  
    upstream kube-apiserver {
        server 10.0.0.21:6443     max_fails=3 fail_timeout=30s;
        server 10.0.0.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
# Repeat on hdss-12
  3. Check the configuration and start nginx
# On hdss-11
[ ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[ ~]# systemctl start nginx
[ ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[ ~]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-06-14 22:07:26 CST; 18s ago
 Main PID: 5120 (nginx)
   CGroup: /system.slice/nginx.service
           ├─5120 nginx: master process /usr/sbin/nginx
           └─5121 nginx: worker process

Jun 14 22:07:26 hdss-11.host.com systemd[1]: Starting The nginx HTTP and reverse proxy server...
Jun 14 22:07:26 hdss-11.host.com nginx[5114]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Jun 14 22:07:26 hdss-11.host.com nginx[5114]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Jun 14 22:07:26 hdss-11.host.com systemd[1]: Failed to parse PID from file /run/nginx.pid: Invalid argument
Jun 14 22:07:26 hdss-11.host.com systemd[1]: Started The nginx HTTP and reverse proxy server.
[ ~]# netstat -luntp|grep 7443
tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      5120/nginx: master  

# Same configuration on hdss-12
  4. Install and configure keepalived for high availability
# On hdss-11
[ ~]# yum install keepalived -y
Create the health-check script:
[ /etc/keepalived]# vim /etc/keepalived/check_port.sh
[ /etc/keepalived]# cat /etc/keepalived/check_port.sh
#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi
[ /etc/keepalived]# chmod +x /etc/keepalived/check_port.sh
[ /etc/keepalived]# ll /etc/keepalived/check_port.sh
-rwxr-xr-x 1 root root 281 Jun 14 22:37 /etc/keepalived/check_port.sh
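Two caveats in the script above: `grep $CHK_PORT` also matches ports that merely contain the digits (17443 would satisfy a check for 7443), and a missing argument falls through to exit 0, which keepalived reads as a healthy check. A hardened variant, written as a function so it is easy to exercise (same ss dependency as the original):

```shell
check_port() {
    local port=$1
    if [ -z "$port" ]; then
        echo "Check Port Cant Be Empty!"
        return 1                  # fail closed on a missing argument
    fi
    # Anchor on ":PORT " so 7443 does not also match 17443.
    if [ "$(ss -lnt | grep -c ":${port} ")" -eq 0 ]; then
        echo "Port $port Is Not Used,End."
        return 1
    fi
    return 0
}
check_port 7443 || echo "check failed (expected on a host where the proxy is not listening)"
```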

Configure keepalived:
[ /etc/keepalived]# vim keepalived.conf 
[ /etc/keepalived]# cat keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id 10.0.0.11

}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.11
    nopreempt    # note: nopreempt is normally paired with state BACKUP on both nodes

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.10/24 dev eth0 label eth0:1
    }
}
# On hdss-12, only the keepalived config needs to change; everything else is the same
[ /etc/keepalived]# cat keepalived.conf 
! Configuration File for keepalived
global_defs {
	router_id 10.0.0.12
	script_user root
        enable_script_security 
}
vrrp_script chk_nginx {
	script "/etc/keepalived/check_port.sh 7443"
	interval 2
	weight -20
}
vrrp_instance VI_1 {
	state BACKUP
	interface eth0
	virtual_router_id 251
	mcast_src_ip 10.0.0.12
	priority 90
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass 11111111
	}
	track_script {
		chk_nginx
	}
	virtual_ipaddress {
	   10.0.0.10/24 dev eth0 label eth0:1
	}
}
  5. Start keepalived
# On hdss-11
[ /etc/keepalived]# systemctl start keepalived
[ /etc/keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[ /etc/keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-06-14 22:50:08 CST; 18s ago
 Main PID: 5324 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─5324 /usr/sbin/keepalived -D
           ├─5325 /usr/sbin/keepalived -D
           └─5326 /usr/sbin/keepalived -D

Jun 14 22:50:10 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:10 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:10 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:10 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:15 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:15 hdss-11.host.com Keepalived_vrrp[5326]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 10.0.0.10
Jun 14 22:50:15 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:15 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:15 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10
Jun 14 22:50:15 hdss-11.host.com Keepalived_vrrp[5326]: Sending gratuitous ARP on eth0 for 10.0.0.10

[ /etc/keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6e:66:ce brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.10/24 scope global secondary eth0:1  # VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6e:66ce/64 scope link 
       valid_lft forever preferred_lft forever
  6. Test failover
# On hdss-11
[ /etc/keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6e:66:ce brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.10/24 scope global secondary eth0:1 # VIP
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6e:66ce/64 scope link 
       valid_lft forever preferred_lft forever

Stop hdss-11 (or its keepalived service); the VIP should fail over:

# On hdss-12
[ /etc/keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3e:fb:87 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.12/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.10/24 scope global secondary eth0:1  # VIP has moved here
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe3e:fb87/64 scope link 
       valid_lft forever preferred_lft forever
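Rather than eyeballing `ip add` on both machines, the VIP holder can be checked in a script. A small helper, with the interface name eth0 and VIP 10.0.0.10 taken from the configs above:

```shell
# Returns 0 if the given interface currently carries the given address.
has_vip() {
    ip -4 addr show "$1" 2>/dev/null | grep -q "inet $2/"
}
if has_vip eth0 10.0.0.10; then
    echo "this node holds the VIP"
else
    echo "VIP is elsewhere"
fi
```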
