xiunai 2020-03-23
https://www.alibabacloud.com/forum/read-830
This article illustrates how to quickly deploy a secure Kubernetes cluster on Alibaba Cloud VPC, and how to install and use the kubeadm tool, which is part of k8s-release1.5. (Note: only a beta version of kubeadm is available at the moment. Any suggestions for improvement are welcome.) This article explains how to quickly install kubernetes-1.5.1 on Alibaba Cloud. Kubeadm uses gcr.io/google-containers as its default image repository, which is not accessible from China. In addition, the code supporting user-defined image repositories is not included in kubeadm-release1.5, so we compiled a kubeadm version that supports user-defined image repositories from the master branch and put it in Alibaba Cloud OSS shared storage for everybody to download and use. For the same reason - the official yum source of Kubernetes is not accessible from China - we have also downloaded the required rpm packages and made them available on Alibaba Cloud OSS.
Preconditions
• One or more ECS instances running CentOS 7
• Every ECS instance should have at least 1 GB of RAM (otherwise the available memory for applications may be insufficient); a quick check is sketched after this list
• All the ECS instances must be in the same VPC
• Have the AccessKey and AccessKeySecret of your Alibaba Cloud account ready. You can obtain them from the AccessKey management page in the Alibaba Cloud console.
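Before proceeding, a quick sanity check on each ECS instance can confirm the OS and memory preconditions above; this is only a sketch and assumes a standard CentOS 7 image.
// The release file should report CentOS Linux release 7.x
# cat /etc/redhat-release
// The "total" value in the Mem row should be at least 1024 (MB)
# free -m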
Installation steps
Create ECS in Alibaba Cloud Console
• Log in to Alibaba Cloud Console and create a VPC network. For detailed steps, see VPC.
• Log in to Alibaba Cloud Console, and create at least two ECS instances under the newly created VPC network. Be sure to select CentOS 7 as the operating system. For details, see ECS.
Quickly create a K8S cluster using the default configuration
If you just want to try out K8S, we provide a one-click deployment script for quickly setting up a K8S cluster with the default configuration. The script supports CentOS 7. The following commands should be executed after you log in to the master or node instance through SSH.
1. You can deploy a minimal secure K8S cluster with a single command. The command below will start a K8S master on CentOS 7. Note: The token in the output will be used for adding nodes.
# curl -sSL "http://k8s.oss-cn-shanghai.aliyuncs.com/admin-1.5.1.sh" | sudo bash -s master
2. The command below will start a K8S node on CentOS 7 and add the node to the master created in the previous step. Note: Replace xxxxx.xxxxxxxx with the token from the previous step.
# curl -sSL "http://k8s.oss-cn-shanghai.aliyuncs.com/admin-1.5.1.sh" | sudo bash -s join --token xxxxx.xxxxxxxxx 119.123.211.22
3. The command below will destroy a K8S node or master and erase the data.
# curl -sSL "http://k8s.oss-cn-shanghai.aliyuncs.com/admin-1.5.1.sh" | sudo bash -s down
4. Create a Flannel VPC network for K8S.
# curl -sSL "http://k8s.oss-cn-shanghai.aliyuncs.com/kube/flannel-vpc.yml" -o flannel-vpc.yml
# vi flannel-vpc.yml
// Find [replace with your own id] and replace it with your own ACCESS_KEY_ID and ACCESS_KEY_SECRET
# kubectl create -f flannel-vpc.yml
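After creating the resources, you can check that the flannel DaemonSet pods (named kube-flannel-ds-*, as shown later in this article) reach the Running state on every node; this is just a verification sketch.
# kubectl --namespace=kube-system get po | grep flannel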
5. You can access the Kubernetes dashboard via node IP:NodePort. The NodePort can be queried through kubectl.
[ ~]# kubectl --namespace=kube-system describe svc kubernetes-dashboard|grep NodePort
Type: NodePort
NodePort: <unset> 31158/TCP
Enjoy your dashboard.
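Optionally, as a quick reachability check, you can combine a node's IP with the NodePort reported above; the snippet below is a sketch that uses the port 31158 from the sample output (substitute your own values) and assumes the port is reachable from where you run curl, which may require opening it in the ECS security group.
// Replace <node-ip> with a node's IP address and 31158 with your cluster's NodePort
# curl -I http://<node-ip>:31158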
Customize the creation of a K8S cluster
You can refer to the steps below to learn how to create a cluster manually. The following components are required on every K8S node; they will be installed in Step 2.
• Docker: The container runtime that Kubernetes depends on
• Kubelet: The core agent component of Kubernetes. One kubelet runs on every node and manages the lifecycle of pods and the node.
• Kubectl: The command line control tool of Kubernetes. In this setup it is used on the master.
• Kubeadm: Used for bootstrapping Kubernetes. It initializes a K8S cluster.
Customize the creation process
Step 1:
Log in to the target instance: Log in to the newly created ECS instance through SSH with root permissions. Root login is the default for ECS in general; if you are not logged in as root, switch to root using sudo su -.
Step 2:
Install the K8S software packages:
• The official K8S installation packages are hosted overseas and cannot be downloaded directly to the ECS instance, so we have downloaded a copy and made it available in public Alibaba Cloud OSS storage for everybody to download. We are working on an official K8S installation image, which will be available soon. For now, you can install the K8S software on CentOS using the commands below.
Pay attention to the installation order.
1. Install socat
# yum install -y socat
2. Install docker
# curl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh -
3. Download kubectl
# curl -sSL http://k8s.oss-cn-shanghai.aliyuncs.com/kube/rpm/kubectl-1.5.1.x86_64.rpm -o kubectl-1.5.1.x86_64.rpm
4. Download kubelet
# curl -sSL http://k8s.oss-cn-shanghai.aliyuncs.com/kube/rpm/kubelet-1.5.1.x86_64.rpm -o kubelet-1.5.1.x86_64.rpm
5. Download kubernetes-cni plug-in
# curl -sSL http://k8s.oss-cn-shanghai.aliyuncs.com/kube/rpm/kubernetes-cni-0.3.0.1-1.07a8a2.x86_64.rpm -o kubernetes-cni-0.3.0.1-1.07a8a2.x86_64.rpm
6. Download kubeadm
# curl -sSL http://k8s.oss-cn-shanghai.aliyuncs.com/kube/rpm/kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm -o kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
7. Start installation
# rpm -ivh kubectl-1.5.1.x86_64.rpm kubelet-1.5.1.x86_64.rpm kubernetes-cni-0.3.0.1-1.07a8a2.x86_64.rpm kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
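After the rpm installation completes, a quick check that the binaries are on the PATH and report the expected versions can save debugging time later; these are standard version commands.
# docker version
# kubectl version --client
# kubelet --version
# kubeadm version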
Before starting kubelet, edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add an additional parameter for kubelet:
// so that kubelet won't try to pull the pause-amd64:3.0 image from the overseas K8S repository when starting pods
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
• To make your changes take effect, execute the following command: systemctl enable kubelet && systemctl start kubelet. At this point, kubelet will be in a restart loop, waiting for the /etc/kubernetes/kubelet.conf configuration file to appear. You can verify this from the output of journalctl -f -u kubelet.service.
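If you prefer a non-interactive edit, the flag can be appended with sed before enabling kubelet; this is only a sketch and assumes the drop-in contains a line beginning with ExecStart=/usr/bin/kubelet, which is typical for this package. Remember to reload systemd after changing unit files.
// Append the pause-image flag to the kubelet ExecStart line
# sed -i 's|^ExecStart=/usr/bin/kubelet|& --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0|' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# systemctl daemon-reload
# systemctl enable kubelet && systemctl start kubelet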
Step 3:
Initialize the master node: The master node is where the control components run, including apiserver, etcd, scheduler, controller-manager, and proxy. These components are pulled and run by kubelet as containers. Select one ECS instance on which kubeadm and kubelet have been installed in the previous steps, and execute the following commands:
# export KUBE_REPO_PREFIX=registry.cn-hangzhou.aliyuncs.com/google-containers \
KUBE_HYPERKUBE_IMAGE=registry.cn-hangzhou.aliyuncs.com/google-containers/hyperkube-amd64:v1.5.1 \
KUBE_DISCOVERY_IMAGE=registry.cn-hangzhou.aliyuncs.com/google-containers/kube-discovery-amd64:1.0 \
KUBE_ETCD_IMAGE=registry.cn-hangzhou.aliyuncs.com/google-containers/etcd-amd64:3.0.4
# kubeadm init --pod-network-cidr="10.24.0.0/16"
By setting these environment variables (KUBE_REPO_PREFIX, KUBE_HYPERKUBE_IMAGE, KUBE_DISCOVERY_IMAGE, and KUBE_ETCD_IMAGE), we tell kubeadm to pull the K8S component images from the repositories mirrored on Alibaba Cloud. This works around the problem that the official overseas K8S repository is not accessible from China. If the --api-advertise-addresses=<ip-address> parameter is not specified, kubeadm will automatically detect an available IP address as the listening IP address of the master node. During initialization, kubeadm will:
• Create the certificates and keys needed for secure communication. Note: The token in the output will be used for adding nodes in the next step.
• Generate the kubelet-related configuration files in /etc/kubernetes, as well as the static pod manifests used to start the K8S components (apiserver, kube-proxy, etcd, scheduler).
You can view more configuration parameters using kubeadm -h. By default, the cluster will not schedule pods to the master node out of security concerns. Where scheduling pods to the master node is required (for example, to deploy a single-node K8S testing cluster), you need to run the following command to enable scheduling to the master node.
# kubectl taint nodes --all dedicated-
This command will remove the “dedicated” taint on all the nodes, meaning you can schedule pods to any node, including the master node.
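After initialization, a quick check from the master can confirm that the control-plane containers are up and the node has registered; these are standard kubectl commands.
// The master should be listed (it may report NotReady until a pod network is installed in Step 5)
# kubectl get nodes
// The control-plane pods (apiserver, etcd, scheduler, controller-manager) should be Running
# kubectl --namespace=kube-system get po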
Step 4:
Add a node to the cluster: The node is where pods and containers are actually scheduled and run. To add a node to your cluster, log in to the node via SSH, switch to root, and run the kubeadm join command:
# kubeadm join --token <token> <master-ip>
Wait for a few minutes. If the command returns successfully, the node has been added. You can then execute kubectl --namespace=kube-system get no on the master node to display all the nodes in the cluster and their health status. Note that your cluster is not ready yet: you still need to create a network for your pods before you can deploy applications.
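If a node does not appear or stays unhealthy, the kubelet logs on that node are usually the first place to look; these are standard systemd commands, not specific to this setup.
// On the node, check that kubelet is active and inspect its recent logs
# systemctl status kubelet
# journalctl -u kubelet.service --since "10 minutes ago"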
Step 5:
Add a network plug-in for your cluster: You also need to add a network plug-in to your cluster to enable communication between all the pods in it. Currently, flannel, weave, canal and calico support the K8S CNI network model. Note: you can add only one network plug-in to a cluster.
Flannel: You can use flannel for K8S clusters in a VPC. First, have your AccessKey and AccessKeySecret ready, then log in to the master node via SSH and execute the following commands:
# curl -sSL http://k8s.oss-cn-shanghai.aliyuncs.com/kube/flannel-vpc.yml -o flannel.yml
# vi flannel.yml
// Find [replace with your own id] and replace it with your own ACCESS_KEY_ID and ACCESS_KEY_SECRET.
# kubectl create -f flannel.yml
# kubectl --namespace=kube-system get po
NAME                                    READY     STATUS    RESTARTS   AGE
dummy-2340867639-owm2g                  1/1       Running   0          1d
etcd-192.168.1.191                      1/1       Running   0          1d
kube-apiserver-192.168.1.191            1/1       Running   0          1d
kube-controller-manager-192.168.1.191   1/1       Running   0          1d
kube-dns-3378589527-rifti               4/4       Running   0          1d
kube-flannel-ds-5ud36                   2/2       Running   0          1d
kube-flannel-ds-br3zb                   2/2       Running   0          1d
kube-proxy-jcbcy                        1/1       Running   0          1d
kube-proxy-uhzsn                        1/1       Running   0          1d
kube-scheduler-192.168.1.191            1/1       Running   3          1d
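Once the flannel pods are Running on every node, you can optionally sanity-check pod-to-pod connectivity. The sketch below is hypothetical: it assumes the busybox image can be pulled on your nodes, and the pod name and target pod IP must be substituted with values reported by kubectl get po -o wide.
// Start a throwaway busybox pod (kubectl run creates a deployment, so the pod name gets a random suffix)
# kubectl run net-test --image=busybox --command -- sleep 3600
// List pod IPs and the nodes they landed on
# kubectl get po -o wide
// Ping another pod's IP from inside the test pod
# kubectl exec <net-test-pod-name> -- ping -c 3 <another-pod-ip>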
WeaveNet: You can also add Weave network support to your K8S cluster by executing the following command:
# kubectl apply -f http://k8s.oss-cn-shanghai.aliyuncs.com/kube/weave-kube-1.7.2
daemonset "weave-net" created
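As with flannel, you can confirm that the weave-net DaemonSet has scheduled a pod on every node; this assumes the manifest deploys into the kube-system namespace, as the standard weave-kube manifest does.
# kubectl --namespace=kube-system get ds weave-net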
Step 6:
Deploy Kubernetes-dashboard
# kubectl apply -f http://k8s.oss-cn-shanghai.aliyuncs.com/kube/kubernetes-dashboard1.5.0.yaml
# kubectl --namespace=kube-system describe svc kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.1.56.28
Port: <unset> 80/TCP
NodePort: <unset> 30624/TCP <=== Access the dashboard via http://node IP:30624
Endpoints: 10.24.4.79:9090
Session Affinity: None
Step 7: (optional)
Run a demo program: Clone a demo program from github.com. For more information about the demo, see the README on GitHub.
# git clone https://github.com/microservices-demo/microservices-demo
# kubectl apply -f microservices-demo/deploy/kubernetes/manifests/sock-shop-ns.yml -f microservices-demo/deploy/kubernetes/manifests
# kubectl --namespace=sock-shop get po
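Once the demo pods are Running, the shop's web front end is typically exposed through a NodePort service; the command below assumes the service name front-end in the sock-shop namespace, as used by the demo manifests (check the repository's README if yours differ).
// Find the NodePort of the front-end service, then browse to http://<node IP>:<NodePort>
# kubectl --namespace=sock-shop describe svc front-end | grep NodePort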
Summary
1. We submitted a user-defined image repository pull request (PR) to the Kubernetes community to address kubeadm's lack of support for user-defined image repositories during Kubernetes deployment. You can set the KUBE_REPO_PREFIX environment variable to specify your own image repository for deploying Kubernetes.
2. We also submitted another PR to the flannel community to request flannel's adaptation to Alibaba Cloud VPC, facilitating integration with Kubernetes.