Detailed walkthrough: deploying and installing a k8s cluster on CentOS


1. Overview and Environment

1.1 Overview

k8s (Kubernetes) is an open-source container cluster system for deploying, scaling, and managing containerized applications; its goal is to make working with containers simple and efficient. Its main capabilities include:

  • Self-healing: if a container crashes, k8s responds quickly by starting a new container to replace it
  • Elastic scaling: dynamically add or remove containers in the running cluster
  • Service discovery: a service can find the services it depends on through automatic discovery
  • Load balancing: if a service runs multiple containers, requests are automatically load-balanced across them
  • Version rollback: if a new version turns out to be broken, you can roll back to the previous version immediately
  • Storage orchestration: storage volumes can be created automatically based on a container's own needs

1.2 Environment

Tips: three virtual machines in total, deployed as a cluster with one master and two worker nodes.

Host OS: Windows 10
VM OS:   CentOS Linux release 7.9.2009 (Core)

2. k8s Cluster Deployment

2.1 Environment Initialization

  1. Check the operating system version: k8s requires CentOS 7.5 or later. Check it with the following command.
cat /etc/redhat-release
  2. Configure the hosts file. This must be done on every machine in the cluster; for one master and two worker nodes, configure it as follows.
## Open the hosts file
vim /etc/hosts
## Add the following entries to the hosts file
192.168.29.128 master
192.168.29.129 node1
192.168.29.130 node2
## Run the following three commands on every machine to verify the configuration
ping master
ping node1
ping node2
  3. Synchronize time: the server clocks across the cluster must stay consistent.
## Install chrony
yum -y install chrony
## Start chronyd
systemctl start chronyd
## Enable start at boot
systemctl enable chronyd
  4. Disable the iptables and firewalld services. k8s and Docker use many ports at runtime, so to save trouble, disable them outright.
## Stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
  5. Disable selinux, a Linux security service; if it stays on, the k8s installation runs into all sorts of problems.
## Open the config file
vim /etc/selinux/config
## Change the value of SELINUX to disabled
SELINUX=disabled
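The vim edit above can also be done non-interactively. A sketch, demonstrated on a sample copy of the file; on the real node, point the same sed at /etc/selinux/config and reboot (running setenforce 0 additionally switches SELinux to permissive for the current session, without a reboot):

```shell
# Sample standing in for /etc/selinux/config (assumption: SELINUX starts out as enforcing)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux-config.sample
# Flip SELINUX to disabled, the same change the manual vim edit makes
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux-config.sample
cat selinux-config.sample
```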
  6. Disable the swap partition.

Tips: the swap partition is virtual memory: once physical memory is exhausted, disk space is treated as if it were RAM. An enabled swap device hurts system performance badly, so kubernetes requires every node to disable it. If for some reason swap really cannot be turned off, that must be declared explicitly via parameters during cluster installation.

## Open the config file
vim /etc/fstab
## Comment out the line below
/dev/mapper/centos-swap swap
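Commenting out the swap entry can likewise be scripted. A sketch on a sample fstab (the device names are the CentOS defaults); on the real node, run the same sed against /etc/fstab, and run swapoff -a to turn swap off immediately for the current boot:

```shell
# Sample standing in for /etc/fstab
cat > fstab.sample <<'EOF'
/dev/mapper/centos-root /                       xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
EOF
# Prefix the swap entry with '#' so it is ignored at boot
sed -i '/\sswap\s/ s/^/#/' fstab.sample
grep swap fstab.sample
```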
  7. Adjust Linux kernel parameters to enable bridge filtering and IP forwarding.
## Open the config file; it normally does not exist yet, so just vim it and save
vim /etc/sysctl.d/kubernetes.conf
## Add the following to the file
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
## Save the file, then reload the configuration by running the following in order
## Reload the configuration
sysctl -p
## Load the bridge-filter module
modprobe br_netfilter
## Check that the bridge-filter module loaded successfully
lsmod | grep br_netfilter
  8. Configure ipvs support.

Tips: a service in k8s has two proxy modes, one based on iptables and one based on ipvs. Of the two, ipvs performs noticeably better, but using it requires loading the ipvs kernel modules manually.

## Install ipset and ipvsadm
yum -y install ipset ipvsadm
## Write the modules to be loaded into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
## Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
## Run the script
sh +x /etc/sysconfig/modules/ipvs.modules
## Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
  9. Reboot the CentOS system with the reboot command.

2.2 Installing Docker

  1. Switch the package source: Docker's servers are abroad and slow, so switch to a domestic mirror.
## If wget is not found when running this command, install it first with yum -y install wget
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  2. List the Docker versions the current mirror provides.
yum list docker-ce --showduplicates
  3. Install a specific docker-ce version.

Note the docker-ce version number: k8s compatibility varies across Docker versions. If you need a different version, look up the k8s/Docker compatibility matrix online first.

## Pass --setopt=obsoletes=0, otherwise yum automatically installs a newer version
yum -y install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7
  4. Add a configuration file that points at Alibaba Cloud's registry mirror for faster image pulls.
## Create the directory
mkdir /etc/docker
## Create the file and write its contents
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts":["native.cgroupdriver=systemd"],
  "registry-mirrors":["https://gxdoa2yz.mirror.aliyuncs.com"]
}
EOF
  5. Start the Docker service and enable it at boot. If Docker was already running, restart it instead (systemctl restart docker) so the daemon.json changes take effect.
systemctl start docker
systemctl enable docker
  6. Check that Docker started successfully, and its version.
docker --version

2.3 Installing the k8s Components

There are three ways to set up a k8s cluster:

  • The kubeadm tool, which provides kubeadm init and kubeadm join and stands up a k8s cluster quickly.
  • Binary packages: download every component from the official site and deploy each one by hand to assemble k8s.
  • minikube, a tool for quickly standing up a single-node k8s.

The first is the most convenient; the second is worth trying for people who know k8s inside out; the third is not suitable for production.

  1. Configure the k8s package repository, switching to a domestic mirror to prevent connection failures or painfully slow downloads during installation.
## Open the config file; create it if it does not exist
vim /etc/yum.repos.d/kubernetes.repo
## Add the following configuration to the file
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
      https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  2. Install kubeadm, kubelet, and kubectl.
yum -y install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0
  3. Configure the cgroup driver for kubelet.
## Open the config file; create it if it does not exist
vim /etc/sysconfig/kubelet
## Add the following to the file
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
  4. Enable kubelet to start at boot.
systemctl enable kubelet
  5. Pulling images from abroad is slow for k8s, so pull them from Alibaba Cloud and retag them with the names k8s expects; that way cluster initialization will not try to pull from k8s.gcr.io again.
  • First run kubeadm config images list to see which images need to be pulled
  • Then try docker pull; it normally fails with the error below
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  • So fall back to pulling from Alibaba Cloud and then renaming the images
## First run
images=(
kube-apiserver:v1.17.4
kube-controller-manager:v1.17.4
kube-scheduler:v1.17.4
kube-proxy:v1.17.4
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
## Then run
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
  6. Initialize the cluster.

Tips: run this on the master node only.

Note: the last line of this command's output is a join command. Save it first; the node machines will need to run it later to join the cluster.

kubeadm init \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.29.128

Tips:
apiserver-advertise-address: the cluster advertise address (the master machine's IP)
image-repository: the default registry k8s.gcr.io is unreachable from inside China, so point at the Alibaba Cloud registry instead
kubernetes-version: the k8s version; it must match the kubeadm, kubelet, and kubectl versions installed above
service-cidr: the cluster-internal virtual network, the unified entry point for accessing Pods
pod-network-cidr: the Pod network; it must match the yaml of the CNI network component deployed below

  7. Create the required files.

Tips: run this on the master node only.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
  8. Run the join command saved in step 6 on the node1 and node2 machines.

Tips: before running it, make sure node1 and node2 have completed every installation and configuration step prior to step 6. The IP, token, and hash below come from the author's own kubeadm init output; use the values from yours. If the token has expired (it is valid for 24 hours by default), generate a fresh join command on the master with kubeadm token create --print-join-command.

kubeadm join 192.168.73.101:6443 --token 4gtt23.kkcw9hg5083iwclc \
--discovery-token-ca-cert-hash sha256:34fbfcf18649a5841474c2dc4b9ff90c02fc05de0798ed690e1754437be35a05
  9. When all of the above is done, run kubectl get nodes on the master to check the node count and status; at this point every node shows NotReady.

2.4 Installing the Network Plugin

At the end of 2.3, kubectl get nodes on the master lists every node, but they are all NotReady; this section fixes that.

  1. Upload the kube-flannel.yml file (its full content follows; it is long) to the master node, anywhere you like. Then edit the file, replacing every quay.io with quay-mirror.qiniu.com (the listing below already has this change applied).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
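If you download the upstream manifest instead of copying the listing above, the quay.io substitution described in the step above can be done with a single sed. A sketch, demonstrated on a two-line sample file; on the master, point the same sed at your kube-flannel.yml:

```shell
# Two image lines standing in for the full manifest
printf 'image: quay.io/coreos/flannel:v0.11.0-amd64\nimage: quay.io/coreos/flannel:v0.11.0-arm64\n' > flannel.sample.yml
# Replace every quay.io reference with the qiniu mirror
sed -i 's|quay\.io|quay-mirror.qiniu.com|g' flannel.sample.yml
cat flannel.sample.yml
```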
  2. Pull the flannel image.

Tips: every node in the cluster needs to pull this image.

## Pull the tag the manifest above references (v0.11.0); if quay.io times out, pull the same tag from quay-mirror.qiniu.com instead
docker pull quay.io/coreos/flannel:v0.11.0-amd64
  3. Apply the kube-flannel.yml configuration.

Tips: run this on the master node only.

kubectl apply -f kube-flannel.yml
  4. On every node, edit the /var/lib/kubelet/kubeadm-flags.env file and delete --network-plugin=cni from it.
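The edit above is a one-line deletion, so it also scripts cleanly. A sketch on a sample file (the KUBELET_KUBEADM_ARGS contents here are illustrative, not copied from a real node); on each real node, point the same sed at /var/lib/kubelet/kubeadm-flags.env:

```shell
# Illustrative stand-in for /var/lib/kubelet/kubeadm-flags.env
printf 'KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1"\n' > kubeadm-flags.sample.env
# Drop the --network-plugin=cni flag, leaving the rest of the line intact
sed -i 's/ --network-plugin=cni//' kubeadm-flags.sample.env
cat kubeadm-flags.sample.env
```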

  5. Then run the following commands to restart kubelet.

Tips: after these commands finish, it takes a short while for the change to take effect.

systemctl daemon-reload
systemctl restart kubelet
  6. Check again with kubectl get nodes; every node's status has changed to Ready.

That wraps everything up: the k8s cluster is installed and deployed successfully. The whole process is complex and involves a lot of moving parts. Don't rush and don't fret; take it step by step, and may it work for you on the first try.

3. Testing Cluster Availability

Verify that the k8s environment installed correctly by deploying an nginx instance.

  1. Install nginx with kubectl.
kubectl create deployment nginx --image=nginx:1.14-alpine

Tips: on success it prints deployment.apps/nginx created.

  2. Expose the port.
kubectl expose deployment nginx --port=80 --type=NodePort

Tips: on success it prints service/nginx exposed.

  3. View the pods and service information.
kubectl get pods,service

Tips: the output includes a line like service/nginx NodePort 10.96.7.203 <none> 80:30133/TCP 6s.

  4. Use docker ps on the worker nodes to find which node the nginx container was actually scheduled onto. (Alternatively, kubectl get pods -o wide on the master shows the node directly.)

  5. Using the node IP found in step 4 and the port from step 3, open the address in a browser; if the nginx welcome page appears, everything works.

Tips: for example, if step 4 finds the node IP 192.168.29.130 and step 3 shows port 30133, then visiting 192.168.29.130:30133 in a browser (or curl http://192.168.29.130:30133) brings up the nginx welcome page.


Author: 程序猿洞晓
Copyright: Unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit 程序猿洞晓 when reposting.