1. Overview and Environment
1.1 Overview
Kubernetes (k8s) is an open-source container cluster system for deploying, scaling, and managing containerized applications; its goal is to make working with containers simple and efficient. Its main capabilities include:
- Self-healing: if a container crashes, k8s quickly responds by starting a new container to replace it
- Elastic scaling: dynamically add or remove containers in the running cluster
- Service discovery: a service can automatically find the services it depends on
- Load balancing: if a service runs multiple containers, requests are automatically load-balanced across them
- Version rollback: if a new version turns out to be broken, you can immediately roll back to the previous one
- Storage orchestration: storage volumes can be created automatically based on a container's own requirements
1.2 Environment
Tips: there are three virtual machines in total, deployed as a cluster with one master and two nodes.
| Host OS | VM OS |
|---|---|
| Windows 10 | CentOS Linux release 7.9.2009 (Core) |
2. k8s Cluster Deployment
2.1 Initialize the Environment
- Check the operating system version. k8s requires CentOS 7.5 or later; check with the following command.
cat /etc/redhat-release
- Configure the `hosts` file. Every machine in the cluster needs this configuration; for one master and two worker nodes it looks like this.
## Open the hosts file
vim /etc/hosts
## Add the following entries to the hosts file
192.168.29.128 master
192.168.29.129 node1
192.168.29.130 node2
## Run the following three commands on every machine to verify the configuration
ping master
ping node1
ping node2
- Time synchronization: the clocks of all servers in the cluster must stay consistent (a quick check is shown after the commands).
## Install chrony
yum -y install chrony
## Start chronyd
systemctl start chronyd
## Enable chronyd at boot
systemctl enable chronyd
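As an optional sanity check (an addition, not part of the original steps), confirm that chrony has a usable time source and the system time looks right:
## List chrony's time sources; an asterisk marks the currently selected source
chronyc sources -v
## Show the current system time
date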
- Disable the `iptables` and `firewalld` services. k8s and Docker need many ports at runtime; to save the trouble of opening them one by one, simply disable the firewall (a quick check follows the commands).
## Stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
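As an optional check (an addition, not part of the original steps), confirm that firewalld is no longer running:
## Should report the service as inactive (dead)
systemctl status firewalld --no-pager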
- Disable `selinux`, the Linux security service; if it is left on, the k8s installation runs into all kinds of problems (see the note after the config change).
## Open the configuration file
vim /etc/selinux/config
## Change the value of SELINUX to disabled
SELINUX=disabled
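The change in /etc/selinux/config only takes effect after a reboot (which happens at the end of this section anyway). If you also want SELinux out of the way for the current session, the usual commands are (an addition, not part of the original steps):
## Switch SELinux to permissive mode for the running session
setenforce 0
## Verify the current mode; prints Permissive now, Disabled after the reboot
getenforce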
- Disable the `swap` partition (see the note after the config change).
Tips: the `swap` partition is virtual memory; once physical memory is used up, disk space is treated as if it were RAM. Having a swap device enabled hurts system performance badly, so kubernetes requires swap to be disabled on every node. If for some reason you really cannot turn swap off, you must state that explicitly with the appropriate parameters during cluster installation.
## Open the configuration file
vim /etc/fstab
## Comment out the following line
/dev/mapper/centos-swap swap
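The fstab change only takes effect after a reboot. To turn swap off immediately and verify it (an addition, not part of the original steps):
## Disable all swap devices for the current session
swapoff -a
## The Swap line should show 0 total once swap is off
free -m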
- Modify the Linux kernel parameters to enable bridge filtering and IP forwarding (a note on making the module load persistent follows below)
## Open the configuration file; it normally does not exist yet, so just vim it and save
vim /etc/sysctl.d/kubernetes.conf
## Add the following lines to the file
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
## Save the file, then run the following commands in order
## Load the bridge netfilter module first, otherwise the bridge settings cannot be applied
modprobe br_netfilter
## Reload the configuration (sysctl -p without a path only reads /etc/sysctl.conf, so pass the file explicitly)
sysctl -p /etc/sysctl.d/kubernetes.conf
## Check that the bridge netfilter module is loaded
lsmod | grep br_netfilter
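Note that `modprobe` does not survive the reboot at the end of this section. A common way to have the module loaded at every boot, added here as an assumption rather than taken from the original steps, is a systemd modules-load drop-in:
## Ask systemd-modules-load to load br_netfilter at boot
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF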
- Configure `ipvs` support
Tips: a k8s `service` can run in one of two proxy modes, one based on `iptables` and one based on `ipvs`. Of the two, `ipvs` performs noticeably better, but to use it the `ipvs` kernel modules have to be loaded manually.
## Install ipset and ipvsadm
yum -y install ipset ipvsadm
## Write the modules that need to be loaded into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
## Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
## Run the script
sh +x /etc/sysconfig/modules/ipvs.modules
## Check whether the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
- Reboot the CentOS system with the `reboot` command.
2.2 Install Docker
- Switch the package mirror. The default Docker repository is hosted abroad and is very slow, so switch to a domestic mirror.
## If this command fails because wget is missing, install it with yum -y install wget
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
- List the Docker versions available from the current mirror
yum list docker-ce --showduplicates
- Install a specific `docker-ce` version
Pay attention to the `docker-ce` version number, since k8s compatibility differs between versions; if you need a different version, look up the k8s/Docker compatibility matrix online first.
## Pass --setopt=obsoletes=0, otherwise yum automatically installs a newer version
yum -y install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7
- Add a configuration file that sets the cgroup driver and an Alibaba Cloud registry mirror
## Create the directory
mkdir /etc/docker
## Create the file and write its contents
cat <<EOF > /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"],
"registry-mirrors":["https://gxdoa2yz.mirror.aliyuncs.com"]
}
EOF
- Start the Docker service and enable it at boot
systemctl start docker
systemctl enable docker
- Check that Docker started successfully and its version number (a cgroup-driver check follows the command)
docker --version
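Since daemon.json sets the cgroup driver to systemd (matching the kubelet configuration later on), it is worth confirming Docker picked it up. This check is an optional addition, not part of the original steps:
## Should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"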
2.3 Install the k8s Components
There are three ways to set up a k8s cluster:
- The `kubeadm` tool, which provides `kubeadm init` and `kubeadm join` and lets you stand up a k8s cluster quickly.
- Binary packages: download every component from the official site and deploy each one by hand to assemble k8s.
- `minikube`, a tool for quickly setting up a single-node k8s.
The first option is the most convenient; the second is only worth attempting if you know k8s very well; the third is not suitable for production.
- Configure the k8s package repository, switching to a domestic mirror to avoid connection failures or very slow downloads during installation
## Open the configuration file; create it if it does not exist
vim /etc/yum.repos.d/kubernetes.repo
## Add the following configuration to the file
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
- Install `kubeadm`, `kubelet`, and `kubectl` (a version check follows the command)
yum -y install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0
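Optionally (an addition, not part of the original steps), confirm that the installed versions match what was requested:
## All three should report v1.17.4
kubeadm version
kubelet --version
kubectl version --client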
- Configure `kubelet`'s cgroup driver
## Open the configuration file; create it if it does not exist
vim /etc/sysconfig/kubelet
## Add the following content to the configuration file
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
- Enable `kubelet` to start at boot
systemctl enable kubelet
- k8s pulls its images from servers abroad, which is slow. Pull them from Alibaba Cloud instead and retag them to the names k8s expects, so that cluster initialization does not try to pull from k8s.gcr.io again (a verification command follows the loop below).
- First list the images that need to be pulled with `kubeadm config images list`.
- Then try a plain `docker pull`; it normally fails with the following error:
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
- So fall back to pulling from Alibaba Cloud and renaming the images afterwards.
## First run
images=(
kube-apiserver:v1.17.4
kube-controller-manager:v1.17.4
kube-scheduler:v1.17.4
kube-proxy:v1.17.4
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
## Then run
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
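As an optional check (an addition, not from the original steps), confirm the retagged images are now present locally:
## Every image in the list above should appear under the k8s.gcr.io prefix
docker images | grep k8s.gcr.io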
- Cluster initialization
Tips: run this on the master node only.
Note: the last line of this command's output contains a `join` command; save it, because it is what the node machines will run later to join the cluster.
kubeadm init \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.29.128
Tips:
- apiserver-advertise-address: the address the cluster advertises (the master machine's IP)
- image-repository: the default registry `k8s.gcr.io` is unreachable from China, so point to the Alibaba Cloud registry instead
- kubernetes-version: the k8s version; must match the `kubeadm`, `kubelet`, and `kubectl` versions installed above
- service-cidr: the cluster-internal virtual network, the unified entry point for accessing Pods
- pod-network-cidr: the Pod network; must match the value in the CNI network plugin's yaml deployed below
- Create the necessary files
Tips: run this on the master node only.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
- Run the `join` command saved in step 6 on node1 and node2 (the command below is only an example; use the exact one printed by your own `kubeadm init`; a note on regenerating it follows the example)
Tips: before running it, make sure node1 and node2 have completed every installation and configuration step prior to step 6.
kubeadm join 192.168.73.101:6443 --token 4gtt23.kkcw9hg5083iwclc \
--discovery-token-ca-cert-hash sha256:34fbfcf18649a5841474c2dc4b9ff90c02fc05de0798ed690e1754437be35a05
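If the saved join command has been lost or its token has expired, a fresh one can be generated on the master (an optional addition, not part of the original steps):
## Print a new join command with a freshly created token
kubeadm token create --print-join-command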
- Once all of the above is done, run `kubectl get nodes` on the master machine to check the number of nodes and their status; at this point every node is in the `NotReady` state.
2.4 Installing the Network Plugin
At the end of 2.3, `kubectl get nodes` on the master lists all the nodes, but they are all `NotReady`; this needs to be fixed.
- Upload the `kube-flannel.yml` file (its full contents are below; it is long) to the master node, anywhere you like. Then edit it and replace `quay.io` with `quay-mirror.qiniu.com`.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
    # SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"cniVersion": "0.2.0",
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- arm64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-arm
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- arm
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-s390x
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- Pull the flannel image
Tips: every node in the cluster needs to pull this image, and the registry and tag pulled should match the image referenced in your kube-flannel.yml (the manifest above references quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64, while the command below pulls a different registry and tag, so adjust one or the other to be consistent).
docker pull quay.io/coreos/flannel:v0.12.0-amd64
- Apply the `kube-flannel.yml` configuration
Tips: run this on the master node only.
kubectl apply -f kube-flannel.yml
- On every node, edit the `/var/lib/kubelet/kubeadm-flags.env` file and remove `--network-plugin=cni` from it, then restart kubelet with the following commands (a pod status check is shown after them).
Tips: after these commands finish, it takes a little while for the change to take effect.
systemctl daemon-reload
systemctl restart kubelet
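To watch the flannel pods come up on the master (an optional check, not part of the original steps):
## The kube-flannel-ds-* pods should eventually reach the Running state
kubectl get pods -n kube-system -o wide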
- Check the status again with `kubectl get nodes`; every node should now be in the `Ready` state.
That is everything: the k8s cluster is installed and running. The whole process is complicated and touches a lot of pieces, so take it slowly and don't get frustrated. Good luck getting it right on the first try.
3. Testing the Cluster
Verify that the k8s environment works by deploying nginx.
- Install nginx with `kubectl`
kubectl create deployment nginx --image=nginx:1.14-alpine
Tips: on success, this prints `deployment.apps/nginx created`.
- Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort
Tips: on success, this prints `service/nginx exposed`.
- View the `pods` and `service` information
kubectl get pods,service
Tips: the output contains a line similar to `service/nginx NodePort 10.96.7.203 <none> 80:30133/TCP 6s`.
- Use `docker ps` on the worker nodes to find out which node the nginx container actually landed on.
- Using the node IP found in the previous step and the port from step 3, open the address in a browser; if the nginx welcome page appears, the cluster works (a simpler alternative and cleanup commands follow below).
Tips: for example, if the node found in step 4 has IP 192.168.29.130 and step 3 reported port 30133, then browsing to 192.168.29.130:30133 should display the nginx welcome page.
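A simpler way to find the node (an optional addition, not part of the original steps) is to ask k8s directly, and the test resources can be removed once the check is done:
## Show which node the nginx pod was scheduled on
kubectl get pods -o wide
## Clean up the test service and deployment afterwards
kubectl delete service nginx
kubectl delete deployment nginx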