thug 2019-11-17
| IP | Hostname | Role |
| --- | --- | --- |
| 172.16.180.251 | k8s-master | master node |
| 172.16.180.252 | k8s-node1 | worker node 1 |
| 172.16.180.253 | k8s-node2 | worker node 2 |
All three nodes need the configuration and preparation in this section. Strictly speaking, the worker nodes do not need every resource downloaded and you could skip the unneeded parts, but for simplicity we treat all nodes the same here.
```bash
# Set SELinux in permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Stop and disable firewalld
systemctl disable firewalld --now
```
```bash
# Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Load the br_netfilter kernel module
modprobe br_netfilter
lsmod | grep br_netfilter
```
```bash
# base repo
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
curl -o CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo

# docker repo
curl -o docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# update cache
yum clean all
yum makecache
yum repolist
```
```bash
swapoff -a
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p
```
Swap will be mounted again after a reboot, so you also need to comment out the swap line in /etc/fstab.
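As a minimal sketch of commenting that line out with `sed` (the device names below are hypothetical examples; on a real node back up /etc/fstab first and point `FSTAB` at it):

```bash
# Demonstrated on a throwaway copy; on a real node set FSTAB=/etc/fstab
# after backing it up (cp /etc/fstab /etc/fstab.bak).
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF

# Prefix any uncommented line that mounts swap with '#'
sed -i '/\sswap\s/ s/^[^#]/#&/' "$FSTAB"

cat "$FSTAB"
```

Afterwards the swap entry should start with `#` while other entries are untouched.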
```bash
# Install the latest version
yum install docker-ce
# Enable and start docker
systemctl enable docker --now
# Check service status
systemctl status docker
```
```bash
# Install the latest versions
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable and start kubelet
systemctl enable --now kubelet
```
```bash
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ih25wpox.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
```
Now we can pull the images; remember to re-tag each one after pulling:

```bash
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker tag mirrorgooglecontainers/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
```
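Since the pull/tag pairs are repetitive, they can also be generated with a small loop. A minimal sketch, assuming the same image list and versions as above; it only prints the commands, so you can review them before piping the output to `sh`:

```bash
K8S_VERSION=v1.14.3

# "source|target" image pairs, mirroring the list above
pairs=(
  "mirrorgooglecontainers/kube-apiserver-amd64:${K8S_VERSION}|k8s.gcr.io/kube-apiserver:${K8S_VERSION}"
  "mirrorgooglecontainers/kube-controller-manager-amd64:${K8S_VERSION}|k8s.gcr.io/kube-controller-manager:${K8S_VERSION}"
  "mirrorgooglecontainers/kube-scheduler-amd64:${K8S_VERSION}|k8s.gcr.io/kube-scheduler:${K8S_VERSION}"
  "mirrorgooglecontainers/kube-proxy-amd64:${K8S_VERSION}|k8s.gcr.io/kube-proxy:${K8S_VERSION}"
  "mirrorgooglecontainers/pause-amd64:3.1|k8s.gcr.io/pause:3.1"
  "mirrorgooglecontainers/etcd-amd64:3.3.10|k8s.gcr.io/etcd:3.3.10"
  "coredns/coredns:1.3.1|k8s.gcr.io/coredns:1.3.1"
  "mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1|k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
)

cmds=()
for p in "${pairs[@]}"; do
  src=${p%%|*}   # mirror image to pull
  dst=${p##*|}   # k8s.gcr.io tag that kubeadm expects
  cmds+=("docker pull ${src}" "docker tag ${src} ${dst}")
done

printf '%s\n' "${cmds[@]}"
```

Saving this as, say, `pull-images.sh` and running `bash pull-images.sh | sh` would execute all sixteen commands.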
Tip: replace the IP address below (172.16.180.251) with your own machine's IP.
```bash
kubeadm init --pod-network-cidr=10.224.0.0/16 \
  --kubernetes-version=v1.14.3 \
  --apiserver-advertise-address 172.16.180.251 \
  --service-cidr=10.225.0.0/16
```
- `--kubernetes-version`: the Kubernetes version to install;
- `--apiserver-advertise-address`: the IP address kube-apiserver listens on;
- `--pod-network-cidr`: the Pod network CIDR range;
- `--service-cidr`: the Service network CIDR range.
After running the kubeadm init command above, wait a few minutes.
If you hit an error, fix whatever the message points at. For example, kubeadm errors out if swap is still enabled, if the machine has too few CPUs, or if the network is unreachable; the error messages are usually specific enough to resolve quickly.
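The common causes above can be checked up front. A hedged sketch (the 2-CPU minimum matches kubeadm's usual preflight requirement; each check is skipped if the tool is unavailable on your system):

```bash
# Check CPU count: kubeadm's preflight requires at least 2 CPUs on the master
cpus=$(nproc)
if [ "$cpus" -lt 2 ]; then
  echo "WARN: found only $cpus CPU(s); kubeadm wants >= 2"
fi

# Check swap: 'swapon --show' prints nothing when swap is fully off
if command -v swapon >/dev/null 2>&1 && [ -n "$(swapon --show 2>/dev/null)" ]; then
  echo "WARN: swap is still enabled; run 'swapoff -a'"
fi

# Check SELinux mode if the tool exists
if command -v getenforce >/dev/null 2>&1 && [ "$(getenforce)" = "Enforcing" ]; then
  echo "WARN: SELinux is enforcing; run 'setenforce 0'"
fi

echo "preflight checks done"
```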
Once the command finishes, you will see output similar to the following:
```
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.16.180.251:6443 --token apykm6.1rfitzg6x4bmd0ll \
    --discovery-token-ca-cert-hash sha256:114e913f3d254475c33325e7f18b9f49f3ce97244b782a9c871c70f6a0d7c750
```
The output above tells us there is still some setup to do:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Install flannel:
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```
After a short wait, the node status should change to Ready:
```
# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   23m   v1.14.3
```
If your node stays NotReady for a long time, run `kubectl get pod -n kube-system` to check pod status; that usually reveals the problem, e.g. the flannel image failed to download.
Run the same kubeadm join command on both worker nodes (the exact command is in the output printed when the master finished installing):
```bash
kubeadm join 172.16.180.251:6443 --token apykm6.1rfitzg6x4bmd0ll \
  --discovery-token-ca-cert-hash sha256:114e913f3d254475c33325e7f18b9f49f3ce97244b782a9c871c70f6a0d7c750
```
Copy /etc/kubernetes/admin.conf from the master to the same path on each worker node, then run on the node:
```bash
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```
A moment after this completes, you can check the state of all nodes from the master:
```
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   16h   v1.14.3
k8s-node1    Ready    <none>   16h   v1.14.3
k8s-node2    Ready    <none>   16h   v1.14.3
```
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
```

If you cannot reach the upstream registry, or it is too slow, save the yaml file locally and change the image address inside it to a domestic mirror.
To make the Dashboard reachable from outside the cluster, run:
```bash
kubectl -n kube-system edit service kubernetes-dashboard
```
Change `type: ClusterIP` to `type: NodePort` and save. Then check which port was exposed:
```
$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   10.100.124.90   <nodes>       443:31707/TCP   21h
```
Now we can access the Dashboard via a node's IP and the exposed port (in this example: https://172.16.180.251:31707).