There are several ways to install k8s: yum installation, kubeadm installation, minikube installation, binary installation (production environments mostly use binaries for precise control over the install), and so on. This article is part of an introductory series; a yum-based installation was covered earlier in 《k8s入门系列之集群yum安装篇》. Here we run through a kubeadm installation once to understand the process. For serious learning, testing, and production environments this method is not recommended; a yum or binary installation is preferable, because it exposes the working principles and internals of k8s in far more detail.
Platform: CentOS Linux release 7.5.1804 (Core)
master: 10.1.14.12
node1: 10.1.14.17
node2: 10.1.14.14
I. Preparation on all three machines
1. Stop the firewall service, to avoid conflicts with the iptables rules Docker manages for containers.
systemctl stop firewalld
systemctl disable firewalld
2. Disable SELinux:
Edit /etc/selinux/config and set SELINUX=disabled.
The change takes effect after a reboot. Relying only on a temporary disable (setenforce 0) is not recommended, since it does not survive a reboot.
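As a shortcut, the file can also be patched non-interactively; a minimal sketch, assuming the stock CentOS 7 config file contents:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
getenforce   # shows the mode currently in effect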
3. Turn off swap:
Temporarily:
swapoff -a
Permanently (comment out the swap entry in fstab):
sed -i 's/.*swap.*/#&/' /etc/fstab
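A quick sanity check that swap is really off:
free -m          # the Swap row should show 0 total / 0 used
cat /proc/swaps  # should list no active swap devices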
4. Add hosts entries so the machines can reach each other by hostname:
vi /etc/hosts
10.1.14.12 master
10.1.14.17 node01
10.1.14.14 node02
5. Set up passwordless SSH from the master to the two nodes
ssh-keygen                        # generate a key pair
cd /root/.ssh/
ssh-copy-id -i id_rsa.pub node01
ssh-copy-id -i id_rsa.pub node02
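A quick check that both the hosts entries and the keys work; each node's hostname should print without a password prompt:
for h in node01 node02; do ssh $h hostname; done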
6. Configure NTP:
yum install ntpdate -y
systemctl enable ntpdate.service
systemctl start ntpdate.service
One-off sync:
ntpdate time7.aliyun.com
Recurring sync, scheduled via crontab -e:
*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1
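To check the current clock offset without actually stepping the clock, ntpdate supports a query-only dry run:
ntpdate -q time7.aliyun.com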
7. Tune kernel limits; this is best done as part of standard system initialization (some vendors' machine images already ship with these defaults tuned)
echo "* soft nofile 65536">> /etc/security/limits.conf echo "* hard nofile 65536" >> /etc/security/limits.conf echo "* soft nproc 65536" >> /etc/security/limits.conf echo "* hard nproc 65536" >> /etc/security/limits.conf echo "* soft memlock unlimited" >> /etc/security/limits.conf echo "* hard memlock unlimited" >> /etc/security/limits.conf
Enable bridged traffic to pass through iptables:
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
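Note that modprobe does not persist across reboots; one way to load the module at boot (using systemd's modules-load.d mechanism), plus a check that the sysctls took effect:
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl net.bridge.bridge-nf-call-iptables   # should print = 1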
8. Configure the yum repositories
EPEL repo:
yum install -y epel-release
Kubernetes yum repo, using the Aliyun mirror:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Docker yum repo, also from the Aliyun mirror (the file must land in /etc/yum.repos.d/ to take effect):
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all && yum makecache fast
9. Install dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
II. Cluster installation and deployment with kubeadm
1. Component installation
Install etcd on the master node:
yum install etcd -y
systemctl restart etcd
systemctl enable etcd
Because an external etcd is used, the etcd service has to be installed on the master node; here it is a single-node etcd. Whether it is an etcd cluster or a single instance, over http or https, any of these works as long as it is configured properly in kubeadm.
In this experiment the single-node etcd listens on https://10.1.14.12:2379.
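For reference, pointing kubeadm at an external etcd is done through a config file passed to kubeadm init --config; a minimal sketch for the v1alpha3 API used by kubeadm 1.12 (the certificate paths are placeholders, and the TLS fields can be dropped entirely for plain http endpoints):
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
networking:
  podSubnet: 10.244.0.0/16
etcd:
  external:
    endpoints:
    - https://10.1.14.12:2379
    caFile: /etc/etcd/ssl/ca.pem          # placeholder path
    certFile: /etc/etcd/ssl/client.pem    # placeholder path
    keyFile: /etc/etcd/ssl/client-key.pem # placeholder path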
On the master and both nodes:
yum install docker-ce kubelet kubeadm kubectl ipvsadm -y
systemctl restart kubelet
systemctl restart docker
systemctl enable kubelet
systemctl enable docker
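The command above installs whatever is newest in the repo; since the cluster below is initialized as v1.12.2, it can be safer to pin matching package versions (assuming the mirror still carries them):
yum list kubeadm --showduplicates | sort -r    # see which versions are available
yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2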
2. Installing and configuring a Kubernetes 1.12.2 cluster with kubeadm
kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.1.14.12
My VMs here have unrestricted internet access, so the images can be pulled directly. Without such access, pull the images from an Aliyun mirror, retag them to what kubeadm expects, and then run kubeadm init.
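A sketch of that workaround, assuming the registry.cn-hangzhou.aliyuncs.com/google_containers mirror carries these images; the exact image list for your version can be printed with kubeadm config images list:
#!/bin/bash
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers   # assumed mirror prefix
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 \
           kube-scheduler:v1.12.2 kube-proxy:v1.12.2 \
           pause:3.1 etcd:3.2.24 coredns:1.2.2; do
    docker pull $MIRROR/$img                  # pull from the mirror
    docker tag  $MIRROR/$img k8s.gcr.io/$img  # retag to the name kubeadm expects
    docker rmi  $MIRROR/$img                  # drop the mirror tag
done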
If the command finishes with output like the following, initialization succeeded:
[root@master docker]# kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.1.14.12
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.14.12]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.1.14.12 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.007740 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: wzh9es.gaz6xloz7omrswvs
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.1.14.12:6443 --token wzh9es.gaz6xloz7omrswvs --discovery-token-ca-cert-hash sha256:0a2135bbdfd174d5f9e4d4afb8d15b54f066ac261722e1ab7171cee62b7b158b
The output carries a lot of information and is worth reading over carefully; in particular, record the kubeadm join command at the end, which is what adds nodes to the cluster.
As the output instructs, run the following before using the cluster (it can be pasted straight from the prompt):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
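When working as root, an alternative is to point kubectl directly at the admin kubeconfig (this lasts only for the current shell session):
export KUBECONFIG=/etc/kubernetes/admin.conf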
Once that is done, check the node status:
[root@master ~]# kubectl get node
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   8m5s   v1.12.2
The node shows NotReady because no pod network has been deployed yet. Deploy flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
Check again:
[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   16m   v1.12.2
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-ftvc8         1/1     Running   0          15m
kube-system   coredns-576cbf47c7-rtm6f         1/1     Running   0          15m
kube-system   etcd-master                      1/1     Running   0          75s
kube-system   kube-apiserver-master            1/1     Running   0          66s
kube-system   kube-controller-manager-master   1/1     Running   0          77s
kube-system   kube-flannel-ds-amd64-488hv      1/1     Running   0          99s
kube-system   kube-proxy-d4v7j                 1/1     Running   0          15m
kube-system   kube-scheduler-master            1/1     Running   0          77s
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
3. Node configuration
On node01 and node02, run the join command recorded earlier to add them to the cluster:
kubeadm join 10.1.14.12:6443 --token wzh9es.gaz6xloz7omrswvs --discovery-token-ca-cert-hash sha256:0a2135bbdfd174d5f9e4d4afb8d15b54f066ac261722e1ab7171cee62b7b158b
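Bootstrap tokens expire after 24 hours by default; if the token has expired by the time a node joins, a fresh join command can be generated on the master:
kubeadm token create --print-join-command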
Check from the master:
[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   21m     v1.12.2
node01   Ready    <none>   2m38s   v1.12.2
node02   Ready    <none>   2m35s   v1.12.2
4. Viewing the cluster configuration
[root@master ~]# kubeadm config view
apiServerExtraArgs:
  authorization-mode: Node,RBAC
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""
5. Reinstalling the cluster
kubeadm reset
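kubeadm reset tears down what kubeadm init or kubeadm join set up on that machine, after which the init/join steps can be repeated. It does not clean up everything; a sketch of leftovers that often need removing by hand (interface names assume the flannel setup used here):
iptables -F && iptables -t nat -F      # flush rules kube-proxy left behind
ip link delete cni0                    # remove the CNI bridge, if present
ip link delete flannel.1               # remove the flannel VXLAN device
rm -rf /var/lib/cni/ /etc/cni/net.d/   # stale CNI state and configs
rm -rf $HOME/.kube/config              # old admin credentials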
This installation went smoothly thanks to the following references:
kubeadm安装k8s测试环境: https://blog.csdn.net/orangleliu/article/details/81284633
Kubernetes 学习总结 2 安装与入门: https://www.itency.com/topic/show.do?id=592252