Installing K8s from Binary Packages
Prerequisites: K8s needs at least two servers, one master node and one node. Set up the yum repositories first.

CentOS repository (Tsinghua mirror):
[centosplus]
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/7.9.2009/os/x86_64/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

K8s repository:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

Docker repository:
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
sudo service docker start

Disable the firewall:
systemctl stop firewalld

On the master server, generate a key pair (just press Enter at every prompt), then copy the public key to all node hosts:
ssh-keygen
ssh-copy-id 192.168.1.197
ssh-copy-id 192.168.1.198

Disable the swap partition (on both master and nodes): open /etc/fstab with vi and comment out the swap line.

If the other two machines are cloned systems, replace the UUID in the cloned NIC config with a fresh one generated by uuidgen, then restart the network.

Enable kernel forwarding (on both master and nodes):
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
[root@yeng ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply it:
sysctl -p /etc/sysctl.d/k8s.conf

Master and nodes must all have synchronized time:
yum install ntp
ntpdate pool.ntp.org
systemctl start ntpd
systemctl enable ntpd

Set up a cron job to update the time; do this on every machine:
crontab -e
0 */4 * * * ntpdate pool.ntp.org
systemctl restart crond
systemctl enable crond

The six packages used below were downloaded in advance and stored on a network drive.
-------------------------------------------------------------
Three machines running CentOS 7.9, all resolving each other, with the firewall and SELinux disabled:
[root@k8s-master ~]# vim /etc/hosts
192.168.1.199 k8s-master
192.168.1.198 k8s-node1
192.168.1.197 k8s-node2

Configure on the master:
------------------------------------------------------------------------------
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master1 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master1 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master1 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@k8s-master1 ~]# mkdir cert
[root@k8s-master1 ~]# cd cert/
[root@k8s-master1 cert]# vim ca-config.json    # CA signing policy; the file does not exist yet, create it
[root@k8s-master1 cert]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
[root@k8s-master1 cert]# vim ca-csr.json    # certificate signing request for the CA itself
[root@k8s-master1 cert]# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
[root@k8s-master1 cert]# vim server-csr.json    # certificate signing request for the servers
[root@k8s-master1 cert]# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.1.199",
    "192.168.1.198",
    "192.168.1.197"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
[root@k8s-master1 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master1 cert]# ls *pem
ca-key.pem ca.pem server-key.pem server.pem
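Before distributing these certificates, it can be worth confirming what was actually signed. An optional quick check using the cfssl-certinfo binary installed above (or openssl, if you prefer); the three etcd IPs should appear in the server certificate:

[root@k8s-master1 cert]# cfssl-certinfo -cert server.pem    # "hosts" should list 192.168.1.199/198/197
[root@k8s-master1 cert]# openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'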
-------------------------------------------------------------
Install etcd (run on all three machines):
# mkdir /opt/etcd/{bin,cfg,ssl} -p
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz    # downloaded in advance
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
# cd /opt/etcd/cfg/
# vim etcd
# cat /opt/etcd/cfg/etcd

On the master node:
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.199:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.199:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.199:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.199:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.199:2380,etcd02=https://192.168.1.198:2380,etcd03=https://192.168.1.197:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On the second machine (node1):
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.198:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.198:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.198:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.198:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.199:2380,etcd02=https://192.168.1.198:2380,etcd03=https://192.168.1.197:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On the third machine (node2):
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.197:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.197:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.197:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.197:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.199:2380,etcd02=https://192.168.1.198:2380,etcd03=https://192.168.1.197:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# vim /usr/lib/systemd/system/etcd.service
# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificates generated earlier to the locations referenced in the config (scp the ones generated on the master to the other two machines):
# cd /root/cert/
# cp ca*pem server*pem /opt/etcd/ssl
[root@k8s-master cert]# scp ca*pem server*pem k8s-node1:/opt/etcd/ssl
[root@k8s-master cert]# scp ca*pem server*pem k8s-node2:/opt/etcd/ssl

Start etcd on all three machines:
# systemctl daemon-reload
# systemctl start etcd
# systemctl enable etcd

Run the following check on every node:
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.199:2379,https://192.168.1.198:2379,https://192.168.1.197:2379" cluster-health

Output like the following on each node means it succeeded:
member 18218cfabd4e0dea is healthy: got healthy result from https://10.206.240.111:2379
member 541c1c40994c939b is healthy: got healthy result from https://10.206.240.189:2379
member a342ea2798d20705 is healthy: got healthy result from https://10.206.240.188:2379
cluster is healthy
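Beyond cluster-health, an optional smoke test is to write a key on one node and read it back on another; a small sketch (the key name /test/hello is arbitrary and not part of the original procedure):

ETCDCTL="/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints=https://192.168.1.199:2379"
$ETCDCTL set /test/hello world
$ETCDCTL get /test/hello    # run on another node; should print: world
$ETCDCTL rm /test/hello     # clean up the test key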
-----------------------------------------------------------------------------------------
On the master node:
[root@k8s-master ~]# scp -r cert/ k8s-node1:/root/    # copy the generated certificates to the remaining machines
[root@k8s-master ~]# scp -r cert/ k8s-node2:/root/
[root@k8s-master ~]# cd cert/

The following must be run inside the cert directory, otherwise it errors out:
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.1.199:2379,https://192.168.1.198:2379,https://192.168.1.197:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

This output means it succeeded:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
--------------------------------------------------------------------------------
On every node:
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -pv /opt/kubernetes/bin
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
# mkdir -pv /opt/kubernetes/cfg/
# vim /opt/kubernetes/cfg/flanneld
# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.199:2379,https://192.168.1.198:2379,https://192.168.1.197:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

vim /usr/lib/systemd/system/flanneld.service
cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

cd /usr/lib/systemd/system/
cp docker.service docker.service.bak
vim docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Copying the certificate files from the master to node1 and node2: node1 and node2 have no certificates of their own and flannel needs them, but we already copied them when building the etcd cluster above, so this step can be skipped.

# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl daemon-reload
# systemctl restart docker

Check that it took effect:
ps -ef | grep docker
root 3632 1 1 22:19 ? 00:00:00 /usr/bin/dockerd --bip=172.17.77.1/24 --ip-masq=false --mtu=1450
and look at ip a.

Notes:
1. Make sure docker0 and flannel.1 are in the same subnet.
2. Test connectivity between nodes by reaching another node's docker0 IP from the current node.
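After flanneld starts, the ExecStartPost line in the unit above has mk-docker-opts.sh write the docker options derived from flannel's subnet lease into /run/flannel/subnet.env, which docker.service then sources. An optional check; the path comes from the units above, while the concrete addresses shown are illustrative and differ per node:

# cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.17.77.1/24 --ip-masq=false --mtu=1450"
# the --bip value here should match the dockerd command line and docker0's address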
Example: from node1, ping the IP on node2's docker0.
--------------------------------------------------------------------------------------
On the master node, create the certificates for the api-server; other services must authenticate with a certificate when they talk to the api-server.

Create the CA certificate:
[root@k8s-master1 ~]# mkdir -p /opt/crt/
[root@k8s-master1 ~]# cd /opt/crt/
# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
[root@k8s-master1 crt]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 crt]# vim server-csr.json
# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.1.199",
    "192.168.1.198",
    "192.168.1.197",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
[root@k8s-master1 crt]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
[root@k8s-master1 crt]# vim kube-proxy-csr.json
# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
[root@k8s-master1 crt]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master1 crt]# ls *pem
ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
-----------------------------------------------------------------------------------
Deploy the apiserver component. Download the binary package on the master node:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md
Downloading this one package (kubernetes-server-linux-amd64.tar.gz) is enough; it contains all the required components.
# wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz
mkdir /opt/kubernetes/{bin,cfg,ssl} -pv
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

Copy the certificates from the machine that generated them to master1 and master2. Important: this step is only for setups with two masters; our certificates were generated on master1 and we only have one master, so skip the scp:
# scp server.pem server-key.pem ca.pem ca-key.pem k8s-master1:/opt/kubernetes/ssl/
# scp server.pem server-key.pem ca.pem ca-key.pem k8s-master2:/opt/kubernetes/ssl/
[root@k8s-master1 bin]# cd /opt/crt/
# cp server.pem server-key.pem ca.pem ca-key.pem /opt/kubernetes/ssl/

[root@k8s-master1 crt]# cd /opt/kubernetes/cfg/
# vim token.csv
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
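The first field of token.csv is an arbitrary 32-character hex token (the remaining fields are the user name, UID, and group). If you would rather generate your own value than reuse the example one, a common sketch is:

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
2b5cff2a554b1bff11f47e1c08061f3f    # sample output; substitute your value into token.csv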
[root@k8s-master1 cfg]# pwd
/opt/kubernetes/cfg
[root@k8s-master1 cfg]# vim kube-apiserver
[root@k8s-master1 cfg]# cat kube-apiserver    # type this in by hand and remove all stray whitespace; it cannot be pasted as-is
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.199:2379,https://192.168.1.198:2379,https://192.168.1.197:2379 \
--bind-address=192.168.1.199 \
--secure-port=6443 \
--advertise-address=192.168.1.199 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

[root@k8s-master1 cfg]# cd /usr/lib/systemd/system
# vim kube-apiserver.service
# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver
The output must look like this:
kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since 六 2022-04-16 18:41:58 CST; 6s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 2557 (kube-apiserver)
Tasks: 10
Memory: 165.4M
CGroup: /system.slice/kube-apiserver.service
└─2557 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https...
4月 16 18:42:02 master kube-apiserver[2557]: I0416 18:42:02.104292 2557 compact.go:54] comp...79]
4月 16 18:42:02 master kube-apiserver[2557]: I0416 18:42:02.104320 2557 store.go:1397] Moni...nts
4月 16 18:42:02 master kube-apiserver[2557]: I0416 18:42:02.104331 2557 master.go:418] Enab...o".
4月 16 18:42:02 master kube-apiserver[2557]: W0416 18:42:02.555492 2557 genericapiserver.go...es.
4月 16 18:42:03 master kube-apiserver[2557]: W0416 18:42:03.100413 2557 genericapiserver.go...es.
4月 16 18:42:03 master kube-apiserver[2557]: W0416 18:42:03.108900 2557 genericapiserver.go...es.
4月 16 18:42:03 master kube-apiserver[2557]: W0416 18:42:03.134188 2557 genericapiserver.go...es.
4月 16 18:42:04 master kube-apiserver[2557]: W0416 18:42:04.100904 2557 genericapiserver.go...es.
4月 16 18:42:04 master kube-apiserver[2557]: [restful] 2022/04/16 18:42:04 log.go:33: [restful...api
4月 16 18:42:04 master kube-apiserver[2557]: [restful] 2022/04/16 18:42:04 log.go:33: [restful...ui/
Hint: Some lines were ellipsized, use -l to show in full.
------------------------------------------------------------------------------------------
Deploy the scheduler component on the master node. Create the scheduler config file:
[root@k8s-master1 cfg]# vim /opt/kubernetes/cfg/kube-scheduler
# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
[root@k8s-master1 cfg]# cd /usr/lib/systemd/system/
# vim kube-scheduler.service
# cat /usr/lib/systemd/system/kube-scheduler.service
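The original notes do not show the contents of this unit file; a minimal sketch that follows the same pattern as the kube-apiserver and kube-controller-manager units in this guide:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target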
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler
As above, the status must be active.
-------------------------------------------------------------------------------
Deploy the controller-manager component on the master node. Create the controller-manager config file:
[root@k8s-master1 ~]# cd /opt/kubernetes/cfg/
[root@k8s-master1 cfg]# vim kube-controller-manager
# cat /opt/kubernetes/cfg/kube-controller-manager    # same rule as above: no stray whitespace; type it in by hand rather than pasting
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

[root@k8s-master1 cfg]# cd /usr/lib/systemd/system/
[root@k8s-master1 system]# vim kube-controller-manager.service
# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager
Again, the status must be active.

All components are now up. Check the current cluster component status with kubectl:
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
If you hit an x509 certificate error, delete the cached environment by running: rm -rf $HOME/.kube
---------------------------------------------------------------------------
Deploying components on the Node machines
Once the master apiserver has TLS authentication enabled, a node's kubelet can only join the cluster and talk to the apiserver with a valid CA-signed certificate. When there are many nodes, signing certificates becomes tedious, hence the TLS Bootstrapping mechanism: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the kubelet's certificate is signed dynamically by the apiserver.
--------------------------------------------------------------------------
The following steps are done on the master node.
Bind the kubelet-bootstrap user to the system cluster role:
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the kubeconfig files. In the directory where the kubernetes certificates were generated, run the following. Specify the apiserver address (with a single master, use the master's IP; in an HA cluster, use the internal load balancer address):
[root@k8s-master1 ~]# cd /opt/crt/
[root@k8s-master1 crt]# KUBE_APISERVER="https://192.168.1.199:6443"
[root@k8s-master1 crt]# BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
[root@k8s-master crt]# /opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
[root@k8s-master crt]# /opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
[root@k8s-master crt]# /opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Create the kube-proxy kubeconfig file:
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 crt]# ls
bootstrap.kubeconfig kube-proxy.kubeconfig

Copy both files into /opt/kubernetes/cfg on the nodes:
[root@k8s-master1 crt]# scp *.kubeconfig k8s-node1:/opt/kubernetes/cfg/
[root@k8s-master1 crt]# scp *.kubeconfig k8s-node2:/opt/kubernetes/cfg/
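Before shipping the kubeconfigs to the nodes, an optional sanity check is to view what was assembled (kubectl masks the embedded certificate data; the server line should point at https://192.168.1.199:6443):

[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config view --kubeconfig=bootstrap.kubeconfig
[root@k8s-master1 crt]# /opt/kubernetes/bin/kubectl config view --kubeconfig=kube-proxy.kubeconfig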
Copy the server package from the master to the nodes:
[root@k8s-master1 ~]# scp kubernetes-server-linux-amd64.tar.gz k8s-node1:/root/
[root@k8s-master1 ~]# scp kubernetes-server-linux-amd64.tar.gz k8s-node2:/root/
----------------------------------------------------------------------------
The following steps are done on the node machines.
[root@k8s-node1 ~]# tar xzf kubernetes-server-linux-amd64.tar.gz
[root@k8s-node1 ~]# cd kubernetes/server/bin/
[root@k8s-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/

First pull the pause image:
[root@k8s-node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
[root@k8s-node2 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kubelet
Again, this cannot be pasted as-is; remove all stray whitespace. Do this on every node, substituting that node's own IP:
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.198 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kubelet.config
No stray whitespace, don't paste directly; do this on every node with its own IP:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.198
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false

# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable kubelet
# systemctl start kubelet
# systemctl status kubelet
It must be in the active state.
---------------------------------------------------------------
Back on the master, check:
[root@k8s-master ~]# /opt/kubernetes/bin/kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-F5AQ8SeoyloVrjPuzSbzJnFKQaUsier7EGvNFXLKTqM   17s   kubelet-bootstrap   Pending
node-csr-bjeHSWXOuUDSHganJPL_hDz_8jjYhM2FQyTkbA9pM0Q   18s   kubelet-bootstrap   Pending

Approve the nodes' join requests on the master. After starting, a kubelet has not yet joined the cluster; the node must be approved manually. For each request signed by a node listed above:
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl certificate approve XXXXID
Note: XXXXID is the value from the NAME column above.
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr--1TVDzcozo7NoOD3WS2t9xLQqNunsVXj_i2AQ5x1mbs   1m    kubelet-bootstrap   Approved,Issued
node-csr-L0wqvr69oy8rzXwFm1u1uNx4aEMOOvd_RWPxaAERn_w   27m   kubelet-bootstrap   Approved,Issued

After approving, check the cluster nodes again:
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.246.164   Ready    <none>   1m    v1.11.10
192.168.246.165   Ready    <none>   17s   v1.11.10
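Approving requests one at a time is fine for two nodes; with more nodes, a one-line sketch like the following approves every pending CSR at once (it only combines the kubectl subcommands already used above):

/opt/kubernetes/bin/kubectl get csr | grep Pending | awk '{print $1}' | xargs /opt/kubernetes/bin/kubectl certificate approve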
------------------------------------------------------------------------------
Deploy the kube-proxy component. Create the kube-proxy config file, again on all node machines, i.e. on both of them:
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kube-proxy
# cat /opt/kubernetes/cfg/kube-proxy    # no stray whitespace, don't paste directly; use each node's own IP
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.198 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

[root@k8s-node1 ~]# cd /usr/lib/systemd/system
# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy
It must be in the active state.
--------------------------------------------------------------------
Check on the master:
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.246.164   Ready    <none>   19m   v1.11.10
192.168.246.165   Ready    <none>   18m   v1.11.10
[root@k8s-master1 ~]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

Test by running an nginx container. Run a test example (install the docker service on the master node first) to judge whether the cluster works. On the master node:
/opt/kubernetes/bin/kubectl run nginx --image=daocloud.io/nginx --replicas=3
Three containers, three replicas; if they come up, everything is fine.
/opt/kubernetes/bin/kubectl get deployment
Check each pod's IP:
/opt/kubernetes/bin/kubectl get pod -o wide
Test reachability from a node machine (the nodes have the flannel network installed):
ping 172.17.88.3
Expose it externally:
/opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
Check the service's port:
[root@master ~]# /opt/kubernetes/bin/kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        2h
nginx        NodePort    10.0.0.73    <none>        88:35654/TCP   12m
Visit a node IP plus the port to open the nginx page; in a browser, go to http://192.168.1.198:35654/
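The same check can be done from the command line instead of a browser; the NodePort (35654 here) is whatever your own kubectl get service output reported:

curl -I http://192.168.1.198:35654/
# an HTTP/1.1 200 OK response header means nginx is reachable through the NodePort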
------------------------------------------------------------------------------
Deploy the web management interface: Dashboard (Web UI)
* dashboard-deployment.yaml    # deploys the Pod that provides the web service
* dashboard-rbac.yaml          # authorizes access to the apiserver to fetch information
* dashboard-service.yaml       # publishes the service for external access

Create a directory on the master node:
[root@k8s-master ~]# mkdir webui
[root@k8s-master ~]# cd webui/
[root@k8s-master webui]# vim dashboard-deployment.yaml
[root@k8s-master webui]# cat dashboard-deployment.yaml
Be careful: everything below is YAML. The format is strict (indentation matters, no tabs), otherwise the resources cannot be created.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
[root@k8s-master webui]# vim dashboard-rbac.yaml
[root@k8s-master webui]# cat dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
[root@k8s-master webui]# vim dashboard-service.yaml
[root@k8s-master webui]# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
[root@k8s-master webui]# /opt/kubernetes/bin/kubectl create -f dashboard-rbac.yaml
[root@k8s-master webui]# /opt/kubernetes/bin/kubectl create -f dashboard-deployment.yaml
[root@k8s-master webui]# /opt/kubernetes/bin/kubectl create -f dashboard-service.yaml

Wait a few minutes and check the resource status:
/opt/kubernetes/bin/kubectl get pod -n kube-system
Look up the UI page's IP and port:
[root@master webui]# /opt/kubernetes/bin/kubectl get service -n kube-system
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.0.0.63    <none>        80:41929/TCP   15m
Check which node the container is running on:
[root@master webui]# /opt/kubernetes/bin/kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
kubernetes-dashboard-d9545b947-ftlh6   1/1     Running   0          2m    172.17.41.3   192.168.1.197   <none>
Visit http://192.168.1.197:41929

With only one instance, the web management page has no redundancy. In the web page, go to Overview, then Deployments, click the three dots to the right of the deployment named kubernetes-dashboard, change replicas: 2 under spec, then click Update below; it will automatically deploy another instance on the other node. Verify from the master:
[root@master webui]# /opt/kubernetes/bin/kubectl get pods --namespace=kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-d9545b947-42sf8   1/1     Running   0          2m
kubernetes-dashboard-d9545b947-vn9vh   1/1     Running   1          1h
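The replica change described above can also be made from the command line instead of the web UI; this is the CLI equivalent of editing replicas in the deployment's spec:

/opt/kubernetes/bin/kubectl scale deployment kubernetes-dashboard --replicas=2 -n kube-system
/opt/kubernetes/bin/kubectl get pods -n kube-system    # should show two dashboard pods once the new one is Running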
Notes for later shutdowns and restarts:
These problems are caused by the network not being updated after docker is stopped or restarted.
When bringing up the master node, start the Kubernetes components first and docker afterwards. (If stopping or restarting docker broke the network, restart both master and nodes, minding the restart order below.)
Restart order on the primary master node:
systemctl enable docker
systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager
systemctl restart etcd kube-apiserver kube-scheduler kube-controller-manager
systemctl restart flanneld docker    # network-related services last: flanneld and docker reset the network

Restart order on the node machines:
systemctl restart kubelet kube-proxy
systemctl restart flanneld docker    # network-related services last: flanneld and docker reset the network
systemctl enable flanneld kubelet kube-proxy docker

If that still doesn't fix it, delete and re-create the dashboard:
kubectl delete -f dashboard-deployment.yaml
kubectl delete -f dashboard-service.yaml
kubectl create -f dashboard-deployment.yaml
kubectl create -f dashboard-service.yaml
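Since the order matters and is easy to fumble during an outage, here is a small optional helper sketch for the master (the script name and the final status check are my own additions, not part of the original notes):

#!/bin/bash
# restart-k8s-master.sh: restart master components in the order described above
set -e
systemctl restart etcd kube-apiserver kube-scheduler kube-controller-manager
# network-related services go last so flanneld regenerates docker's network options
systemctl restart flanneld docker
# print the state of every unit; exits non-zero if any of them failed to come up
systemctl is-active etcd kube-apiserver kube-scheduler kube-controller-manager flanneld docker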
------------------------------------------------------
If you installed from these binary packages, you can skip this adjustment; privileged creation is supported by default. Otherwise, make every node and the master support privileged container creation:
vim /etc/kubernetes/config
Change allow-privileged=false to true.

There are two ways to create containers.

One is directly in the UI; remember to tick the privileged option. When creating a container in the web UI, choose the Service option, internal or external. (In production there is usually another proxy such as nginx in front.) The Service's main job is to pin a stable IP and a stable label to the deployed application, so you don't have to chase changing IPs and ports; otherwise, when an application container in the cluster dies and is recreated, its IP changes, which is hard to manage. Production environments choose the external option, so the outside world can reach the application via a node address plus the service's external port.

The other way is to reuse the existing yaml files (a sketch of such a file follows at the end of this section):
[root@master webui]# ll
总用量 12
-rw-r--r-- 1 root root 1146 4月 16 21:47 dashboard-deployment.yaml
-rw-r--r-- 1 root root  612 4月 16 21:32 dashboard-rbac.yaml
-rw-r--r-- 1 root root  338 4月 16 21:33 dashboard-service.yaml
Copy one of the existing yaml files, adjust the parameters, then apply it:
/opt/kubernetes/bin/kubectl create -f nginx.yaml
To delete, swap create for delete. If deleting from the web UI instead, delete in order: first the deployment, then the replica set, then the container.
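The modified file itself isn't shown in the original notes; a minimal sketch of what such an nginx.yaml could look like, reusing the image and ports from the test deployment earlier (the names, replica count, and label are illustrative):

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: daocloud.io/nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort      # reachable from outside via any node IP plus the allocated port
  selector:
    app: nginx
  ports:
  - port: 88
    targetPort: 80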
夜雨聆风