Yudao Project Deployment Walkthrough

1. Install Java and Maven

The project requires Java 17, so download the Linux tar.gz build from the official site and upload it to the Linux server, either with FinalShell's file-upload feature or with lrzsz (yum -y install lrzsz) from the terminal.

# Download the JDK build for your OS from the vendor's site (the archive below is OpenLogic OpenJDK 17)
## Upload the file to /opt and unpack it
tar -xvf openlogic-openjdk-17.0.12+7-linux-x64.tar.gz

## Add the environment variables system-wide
# JAVA_HOME must point at the directory the archive extracted to
# (the quoted 'EOF' keeps the variables from being expanded while writing)
cat >>/etc/profile<<'EOF'
export JAVA_HOME=/opt/openlogic-openjdk-17.0.12+7-linux-x64
export CLASSPATH=.:${JAVA_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
EOF

## Load the new variables into the current shell
source /etc/profile
## Java is now on the PATH; check the version
java --version

## Install Maven
cd /opt
wget https://mirrors.tuna.tsinghua.edu.cn/apache/maven/maven-3/3.8.9/binaries/apache-maven-3.8.9-bin.tar.gz
tar -zxvf apache-maven-3.8.9-bin.tar.gz
ln -s /opt/apache-maven-3.8.9 /opt/maven
# Add Maven to the per-user environment in ~/.bashrc
vim ~/.bashrc
export M2_HOME=/opt/maven
export PATH=$M2_HOME/bin:$PATH
# Reload the environment
source ~/.bashrc

# Switch the download source to a domestic mirror; if there is no settings.xml under ~/.m2 yet, copy one there
mkdir -p ~/.m2
cp /opt/maven/conf/settings.xml ~/.m2/settings.xml
vim ~/.m2/settings.xml
# Add this inside the <mirrors> tag, in the first position, otherwise Maven keeps using the default repositories
<mirror>
    <id>aliyun</id>
    <mirrorOf>*</mirrorOf>
    <name>Aliyun Public Repository</name>
    <url>https://maven.aliyun.com/repository/public</url>
</mirror>
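A quick sanity check that the JDK, Maven, and the mirror are wired up correctly (the junit coordinate below is just an arbitrary small artifact used to exercise the download path):

mvn -v                                             # should report Maven 3.8.x running on JDK 17
mvn dependency:get -Dartifact=junit:junit:4.13.2   # should resolve via maven.aliyun.com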

2. Build the project

## Clone the project; this uses the git command, so install git if it's missing. The working directory here is under /opt (created with mkdir; name the path as you like)
git clone https://gitee.com/zhijiantianya/yudao-cloud.git

## Enter the project under /opt
cd yudao-cloud/
## List the branches
[root@localhost yudao-cloud]# git branch -a
* master
  master-jdk17
  remotes/origin/HEAD -> origin/master
  remotes/origin/master
  remotes/origin/master-jdk17
## There is a JDK 17 branch; switch to it
git checkout -b master-jdk17 origin/master-jdk17
## Check again: the * now sits on the jdk17 branch
[root@localhost yudao-cloud]# git branch
  master
* master-jdk17

3. Middleware setup

3.1 Install Docker

## The middleware (Redis, MySQL, Nacos, and so on) will run on Docker
# The later steps require Docker 25 or newer
sudo yum -y install docker-ce docker-ce-cli containerd.io
# Check the Docker version
docker -v
# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
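On a fresh CentOS install, yum may not know the docker-ce package at all, since it lives in Docker's own repository. A minimal sketch for adding the repo first, assuming CentOS with yum-utils available from the base repos:

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io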

Configure Docker registry mirrors

# Configure the registry mirrors
sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://docker.1ms.run",
    "https://docker.xuanyuan.me"
  ]
}
EOF

# Reload the daemon and restart Docker to apply the mirrors
sudo systemctl daemon-reload
sudo systemctl restart docker

3.2 Middleware installation

Install MySQL

## Pull MySQL 8.3 and run it as a container
docker run -d -p 3306:3306 \
  --restart=unless-stopped \
  --name=yudao_mysql \
  -e MYSQL_ROOT_PASSWORD=123456 \
  -v "/etc/localtime:/etc/localtime" \
  -v yc_mysql:/var/lib/mysql \
  mysql:8.3
# The MySQL settings live in each service's application-local.yaml, e.g. /opt/yudao-cloud/yudao-module-system/yudao-module-system-biz/src/main/resources/application-local.yaml; the commands below patch all of them in bulk
# First, list the files that contain the default JDBC URL
find ./ -name application-local.yaml -exec grep -l 'jdbc:mysql://127.0.0.1:3306' {} +
## Once confirmed, use sed to replace 127.0.0.1 with the host IP 192.168.69.128, keeping the :3306 port
find ./ -name application-local.yaml -print0 | xargs -0 sed -i 's|jdbc:mysql://127.0.0.1:3306|jdbc:mysql://192.168.69.128:3306|g'
# Change the database password in the configs (it must match the password the MySQL container actually uses)
find ./ -name application-local.yaml -print0 | xargs -0 sed -i 's|password: 123456|password: treeman|g'
# Open the port
firewall-cmd --zone=public --add-port=3306/tcp --permanent
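Note that --permanent only writes the rule to disk; the running firewall does not pick it up until a reload. Worth running after each firewall-cmd call in this guide:

firewall-cmd --reload                     # apply the permanent rules now
firewall-cmd --zone=public --list-ports   # confirm 3306/tcp (etc.) is listed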



Create the ruoyi-vue-pro database in MySQL,
then initialize it by importing ruoyi-vue-pro.sql from the backend project's sql directory.
# Enter the MySQL container; the password is 123456
docker exec -it yudao_mysql mysql -u root -p
# Create the ruoyi-vue-pro database, then type exit; to leave
CREATE DATABASE `ruoyi-vue-pro` CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
# The SQL script needs a small edit
vim sql/mysql/ruoyi-vue-pro.sql
# Add USE `ruoyi-vue-pro`; in front of SET NAMES utf8mb4; and SET FOREIGN_KEY_CHECKS = 0;
...
USE `ruoyi-vue-pro`;

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;
...

# Create the required tables from the project's SQL script
docker exec -i yudao_mysql mysql -u root -p123456 ruoyi-vue-pro < sql/mysql/ruoyi-vue-pro.sql
# Enter the container to verify
docker exec -it yudao_mysql mysql -u root -p

# If tables are listed, the import succeeded
USE `ruoyi-vue-pro`;
SHOW TABLES;
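An optional non-interactive check that the server is alive and the import went through, using the password set above:

docker exec yudao_mysql mysqladmin -u root -p123456 ping                        # expect: mysqld is alive
docker exec yudao_mysql mysql -u root -p123456 -e 'SHOW TABLES' ruoyi-vue-pro   # lists the imported tables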

Install Redis

## Deploy Redis on Docker
docker run -d -p 6379:6379 \
  --restart=unless-stopped \
  --name=yudao_redis \
  -v "/etc/localtime:/etc/localtime" \
  redis
# Point the config files at the new Redis address
# First, list the files that contain the default host line (the trailing '# 地' is part of the matched text in those yaml files)
find ./ -name application-local.yaml -exec grep -l 'host: 127.0.0.1 # 地' {} +
## Once confirmed, use sed to replace 127.0.0.1 with the host IP 192.168.69.128
find ./ -name application-local.yaml -print0 | xargs -0 sed -i 's|host: 127.0.0.1 # 地|host: 192.168.69.128 # 地|g'
# Open the port
firewall-cmd --zone=public --add-port=6379/tcp --permanent
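A one-line liveness check; the second form goes through the published port and assumes the host IP is reachable from inside the container:

docker exec yudao_redis redis-cli ping                     # expect: PONG
docker exec yudao_redis redis-cli -h 192.168.69.128 ping   # same check via the published 6379 port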

Install Nacos

## Deploy Nacos on Docker
## -e MODE=standalone sets the environment variable MODE to standalone; the application inside the container reads it and adjusts its behavior accordingly

docker run -d -p 8848:8848 \
  -p 9848:9848 \
  --restart=unless-stopped \
  --name=yudao_nacos \
  -e MODE=standalone \
  -e NACOS_AUTH_ENABLE=false \
  -v "/etc/localtime:/etc/localtime" \
  nacos/nacos-server:v2.3.1

######## Point the config files at the new Nacos address
# First, list the files that contain the default server-addr
find ./ -name application-local.yaml -exec grep -l 'server-addr: 127.0.0.1:8848' {} +
## Once confirmed, replace it with sed
find ./ -name application-local.yaml -print0 | xargs -0 sed -i 's|server-addr: 127.0.0.1:8848|server-addr: 192.168.69.128:8848|g'

# Open the ports
firewall-cmd --zone=public --add-port=8848/tcp --permanent
firewall-cmd --zone=public --add-port=9848/tcp --permanent
# Check the running containers with docker ps; once Nacos is up, its console is at 192.168.69.128:8848/nacos
# Create a new namespace there (usually matching the project's environment setup)

Once it is up, open http://<IP>:8848/nacos, e.g. http://192.168.69.128:8848/nacos/.

In the Namespaces view, create a new namespace named dev.
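Before starting the backend it's worth confirming Nacos answers over HTTP. A hedged check: the readiness endpoint below exists on Nacos 2.x builds; if it 404s on yours, fetching /nacos/ and expecting a 200 works just as well:

curl -s http://192.168.69.128:8848/nacos/v1/console/health/readiness   # expect an OK/200 response
curl -sI http://192.168.69.128:8848/nacos/ | head -n 1                 # fallback: HTTP/1.1 200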

4. Start the backend services

4.1 Run the gateway service

## Start the backend services
## The config files were modified, so the project must be rebuilt
mvn clean install package '-Dmaven.test.skip=true'

## After the build, every service has a target directory holding its runnable jar
yum -y install screen
## screen creates multiple virtual terminals (like desktop windows); screen -list shows the session names for switching
screen -S name -X quit   # kill a session
screen -S name           # create a session
screen -ls               # list all sessions
screen -r name           # reattach to a session
Ctrl + a, then d         # detach from the current session

# Create a window and run the gateway service on port 48080
screen -R gateway
firewall-cmd --zone=public --add-port=48080/tcp --permanent
java -jar yudao-gateway/target/yudao-gateway.jar

The gateway service is up. It provides the API gateway: user authentication, service routing, canary releases, access logging, exception handling, and more.
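If screen feels heavy, the same services can be started detached with plain nohup instead; a sketch (the log file name is arbitrary):

nohup java -jar yudao-gateway/target/yudao-gateway.jar > gateway.log 2>&1 &
tail -f gateway.log   # follow the startup log; Ctrl+C stops the tail, not the service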

4.2 Run the system service

# Detach from the window with Ctrl + a, then d
# Start another service, this one on port 48081
screen -R system
cd /opt/yudao-cloud/
java -jar ./yudao-module-system/yudao-module-system-server/target/yudao-module-system-server.jar

## If requests to a running service's port hang or time out, the firewall probably hasn't opened it; the command below opens the port
firewall-cmd --zone=public --add-port=48081/tcp --permanent

The system service is up; it implements the system-management module.


4.3 Run the infra service

screen -S infra
firewall-cmd --zone=public --add-port=48082/tcp --permanent
java -jar yudao-module-infra/yudao-module-infra-biz/target/yudao-module-infra-biz.jar

The infra service covers the system's infrastructure concerns.

5. Containerize the services: build images

5.1 Build the gateway image

## Package the backend services as images
## The project ships ready-made Dockerfiles; use them to build
cd /opt/yudao-cloud/yudao-gateway
docker build -t yudao_gateway .
docker images   ## list the images
## Run the image
docker run -d \
  --restart=unless-stopped \
  --network=host \
  --name yudao_gateway \
  -v "/etc/localtime:/etc/localtime" \
  yudao_gateway
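For orientation only: the project's own Dockerfile is authoritative, but a backend service image of this kind typically boils down to a few lines like the hypothetical sketch below (base image and paths are assumptions, not the project's actual file):

# Hypothetical minimal Dockerfile for a backend service (illustration only)
FROM eclipse-temurin:17-jre
COPY target/yudao-gateway.jar /app.jar
EXPOSE 48080
ENTRYPOINT ["java", "-jar", "/app.jar"]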

5.2 Build the system image

## Likewise, build the other images you need
cd /opt/yudao-cloud/yudao-module-system/yudao-module-system-server/
docker build -t yudao_system .
docker images   ## list the images
## Run the image
docker run -d \
  --restart=unless-stopped \
  --network=host \
  --name yudao_system \
  -v "/etc/localtime:/etc/localtime" \
  yudao_system

5.3 Build the infra image

## Likewise, build the other images you need
cd /opt/yudao-cloud/yudao-module-infra/yudao-module-infra-server
docker build -t yudao_infra_server .
docker images   ## list the images
## Run the image
docker run -d \
  --restart=unless-stopped \
  --network=host \
  --name yudao_infra_server \
  -v "/etc/localtime:/etc/localtime" \
  yudao_infra_server

5.4 Install Node.js

## Install Node.js
## Node.js 20 prebuilt binaries live at https://nodejs.org/en/download/prebuilt-binaries;
## here we follow the official instructions and manage the install with nvm instead
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
# Instead of restarting the shell
\. "$HOME/.nvm/nvm.sh"
# Use nvm to install Node 20
nvm install 20

# Verify that the Node.js version is 20
node -v       # Should print "v20.19.1".
nvm current   # Should print "v20.19.1".

5.5 Build the frontend project

# Go to /opt
cd /opt/
# Clone the frontend project
git clone https://gitee.com/yudaocode/yudao-ui-admin-vue3.git

# npm on CentOS 7 can run into problems;
# see https://www.cnblogs.com/yuwen01/p/18067005 for fixes
# Update the npm registry
npm config set registry https://registry.npmjs.org
# Install pnpm
npm install -g pnpm
# Node.js may fail with: npm error code ECONNRESET
# If so, switch registries (npm config set registry https://registry.npmmirror.com) and check the firewall
# Build the project from the frontend project root
cd /opt/yudao-ui-admin-vue3
pnpm install --fetch-timeout=60000
pnpm run build:local

The frontend build succeeded.

# Write the Dockerfile
vim Dockerfile
# Add these two lines to the file
FROM nginx
ADD ./dist /usr/share/nginx/html/
# Open the port that will be exposed
firewall-cmd --zone=public --add-port=8080/tcp --permanent
# Build the frontend image
docker build -t yudao_ui_admin .
# Start the image
docker run --restart=unless-stopped --name yudao_ui_admin -d -p 8080:80 yudao_ui_admin

Browse to <IP>:8080, e.g. 192.168.69.128:8080.
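One caveat with the stock nginx image: the admin UI is a single-page app, so refreshing a deep link such as /system/user can 404 unless nginx falls back to index.html. A sketch of a default.conf that could be copied into the image; this is an assumption, not part of the project:

# Hypothetical /etc/nginx/conf.d/default.conf for the SPA (illustration only)
server {
    listen 80;
    root /usr/share/nginx/html;
    location / {
        try_files $uri $uri/ /index.html;   # fall back to the SPA entry point
    }
}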

6. Harbor image registry

Install docker-compose

## Harbor also needs the docker-compose binary; on GitHub, search for compose, open docker/compose, and download the Linux build
cd /opt
wget https://github.com/docker/compose/releases/download/v2.35.0/docker-compose-linux-x86_64
mv docker-compose-linux-x86_64 docker-compose   # rename the binary
chmod +x docker-compose                         # make it executable
mv ./docker-compose /usr/local/bin/             # move it onto the PATH
docker-compose -v                               # verify it works: prints the version

Install the Harbor image registry

Harbor is an open-source project for building a private Docker image registry, commonly used inside companies to manage their own images: most corporate networks disallow direct internet access, so teams typically run a registry of their own.

## Download page: https://github.com/goharbor/harbor/releases
## This setup uses the online installer: grab the package with "online" in its name and upload it to /opt
cd /opt
wget https://github.com/goharbor/harbor/releases/download/v2.11.1/harbor-online-installer-v2.11.1.tgz
tar -xvf harbor-online-installer-v2.11.1.tgz
cd harbor
cp harbor.yml.tmpl harbor.yml   # copy the config template
vim harbor.yml                  ## edit the config

hostname: 192.168.69.128   # set this field to the host IP
## Comment out the https block below (no certificate available), then save and quit
# https related config
#https:
  # https port for harbor, default is 443
  # port: 443
  # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path
  # enable strong ssl ciphers (default: false)
  # strong_ssl_ciphers: false

./install.sh   # run the installer and wait for the downloads to finish

# Once installed, Harbor is already running; browse to the Harbor host's IP on port 80, e.g. 192.168.69.128:80
# The default credentials are admin / Harbor12345

7. Set up and deploy Kubernetes

7.1 Install docker/containerd on master and workers

# The following applies to both the master and the workers
# 1. Disable the swap partition
swapoff -a       # takes effect immediately
vim /etc/fstab   # make it permanent
# Comment out the swap line so it stays off after a reboot

# Docker on every node must be version 25 or newer; older versions won't work
yum install -y docker-ce
docker -v   # confirm the version is 25 or newer
Docker version 26.1.4, build 5650f9b   # mine is 26
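kubeadm's preflight checks also expect bridge traffic to be visible to iptables and IP forwarding to be enabled; these settings are commonly applied on all nodes (a sketch):

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system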

Install containerd (all nodes)

yum install -y containerd
# Create the config directory
mkdir -p /etc/containerd
# Generate the default config file
containerd config default > /etc/containerd/config.toml

# Edit /etc/containerd/config.toml and add the settings below
vim /etc/containerd/config.toml

# Make sure disabled_plugins does not contain cri (or is empty)
disabled_plugins = []

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry.docker-cn.com", "https://hub-mirror.c.163.com", "https://mirror.baidubce.com"]

# Restart and enable at boot
systemctl daemon-reexec
systemctl restart containerd
systemctl enable --now containerd
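One more containerd setting worth checking: on systemd distros, kubelet and containerd must agree on the cgroup driver, and the default config generated above sets SystemdCgroup = false, which often leaves kubelet crash-looping after kubeadm init. A one-line fix to apply on every node:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd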

Rename the hosts (optional)

# Rename the master to k8smaster and add hosts records
hostnamectl set-hostname k8smaster
# Configure hosts
cat >> /etc/hosts <<EOF
192.168.69.128 k8smaster
192.168.69.130 k8snode1
EOF

7.2 Install the Kubernetes components (all nodes)

Switch to a domestic package source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Install k8s and initialize

# Install the k8s components on all nodes
# Version 1.28 as the example (master and workers must run the same version)
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2

# Pin the three packages to this version
# List the installed package versions
rpm -qa | grep kube
# yum needs the versionlock plugin to pin versions; install it
sudo yum install yum-plugin-versionlock -y
# Lock the versions
sudo yum versionlock kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
# List the locked packages
yum versionlock list
# Unlock when necessary (locking is recommended: yum update then leaves them alone, avoiding master/worker version drift)
sudo yum versionlock delete kubelet kubeadm kubectl

At this point the worker nodes have everything they need installed; the master continues below.

kubeadm initialization

# Before initializing, open the ports and enable kubelet at boot
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
systemctl enable kubelet.service

If initialization complains that containerd is not running, delete the toml file and restart:

# On a "container runtime is not running" error, try removing the toml file
# (this discards the registry mirrors configured earlier; re-add them afterwards if needed)
rm -rf /etc/containerd/config.toml
# Restart the service
systemctl restart containerd
# Check that the status is active (running)
systemctl status containerd
# Clean up leftovers from earlier init attempts
kubeadm reset -f
rm -rf ~/.kube /etc/kubernetes/manifests /etc/kubernetes/*.conf /var/lib/etcd

firewall-cmd --zone=public --add-port=443/tcp --permanent

Pull the images

# Pull the images (if this fails, jump to 7.3 and pull by hand; if it succeeds, skip 7.3)
kubeadm config images pull

Continue the initialization

# Continue initializing with kubeadm
# apiserver-advertise-address is the master host's IP
kubeadm init \
  --apiserver-advertise-address=192.168.69.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///run/containerd/containerd.sock

# Create the directory kubectl expects
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# Install the flannel network plugin
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

7.3 Pull the init images manually

If kubeadm config images pull failed above, the images must be pulled by hand; if it succeeded without errors, skip this part.

kubeadm config images list   # list the images kubeadm needs

registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/coredns/coredns:v1.10.1

Take each image name after registry.k8s.io/ and append it to `ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/` to pull it manually through containerd from the Aliyun mirror. Note that the list shows pause:3.9, but this kubelet actually wants 3.6.

# All operations use containerd's ctr command; remember the -n k8s.io namespace
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/coredns:v1.10.1
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/etcd:3.5.9-0

Check the freshly pulled images

crictl images | grep google_containers
# Everything needed has been pulled
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns
registry.cn-hangzhou.aliyuncs.com/google_containers/pause

Re-tag the images with the prefix kubeadm expects, registry.k8s.io:

ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2 registry.k8s.io/kube-apiserver:v1.28.2
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2 registry.k8s.io/kube-controller-manager:v1.28.2
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2 registry.k8s.io/kube-scheduler:v1.28.2
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2 registry.k8s.io/kube-proxy:v1.28.2
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0

7.4 kubeadm initialization

# Clean up leftovers from earlier init attempts
kubeadm reset -f
rm -rf ~/.kube /etc/kubernetes/manifests /etc/kubernetes/*.conf /var/lib/etcd

kubeadm init \
  --apiserver-advertise-address=192.168.69.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///run/containerd/containerd.sock

# Following the post-init hints, create the directory kubectl expects
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

A successful init prints a join command for the cluster (save it). When several container runtimes coexist, --cri-socket selects which one to use.

Run the printed join command on the workers

# If the join command wasn't saved, create a new token
kubeadm token create --print-join-command
# Copy the command to the worker and run it there
kubeadm join 192.168.69.128:6443 --token utn1dj.4hq7lzi76j2ers73 --discovery-token-ca-cert-hash sha256:f35bc1a085d300145cc2fe3762a1b16626aa586fcc71c8f813fbb067b3dff679 --cri-socket unix:///run/containerd/containerd.sock
# If a previous join failed on this worker, remove its leftovers before running join again
kubeadm reset -f
rm -rf /etc/cni /var/lib/cni /var/lib/kubelet /etc/kubernetes
systemctl restart containerd
# Workers don't run the API server themselves; kubectl needs a kubeconfig that points at the master
# Copy the master's config file over to the worker (mind the IP)
# Install scp if it's missing (not covered here); run this on the worker
# If the target directory doesn't exist, create it on the worker first, then copy again
mkdir -p /root/.kube
scp root@192.168.69.128:/etc/kubernetes/admin.conf /root/.kube/config

--cri-socket unix:///var/run/cri-dockerd.sock is only needed to select cri-dockerd when several container runtimes coexist on the machine; with a single runtime, or when the default already works, it can be omitted.

Copy the command printed by init to the workers and run it.

A worker only needs to run that command as root to join the cluster.

On success, the output suggests checking the nodes with get.

kubectl get nodes shows the node list.

On the master, get nodes will initially report NotReady.

A network add-on is required: Kubernetes Pods cannot talk to each other by default, and a CNI plugin builds the overlay network between them.

Without the network, Pods cannot reach the kube-apiserver, and the nodes stay NotReady.

So kubeadm init only builds the cluster skeleton; installing a network plugin is a mandatory follow-up step.

# Download and apply the flannel manifest
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Or paste the contents below into kube-flannel.yml
vim kube-flannel.yml
Copy the following into kube-flannel.yml; if you run a custom network, adjust it before applying.
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: ghcr.io/flannel-io/flannel:v0.26.7
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: ghcr.io/flannel-io/flannel:v0.26.7
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
# Edit the image addresses in the file with vim or sed (originally quay.io; switch to registry.cn-hangzhou.aliyuncs.com)
# Switch to a domestic source
sed -i 's#quay.io/coreos#registry.cn-hangzhou.aliyuncs.com/ljzflannel#g' kube-flannel.yml

# Apply the manifest
kubectl apply -f kube-flannel.yml
# Check the pod status
kubectl get pods -A
# Once all kube-flannel-ds-xxx pods are Running, the nodes turn Ready shortly after

Pull flannel manually

# If flannel cannot pull its images, try fetching them with docker and importing them into containerd
docker pull ghcr.io/flannel-io/flannel-cni-plugin:latest
docker pull ghcr.io/flannel-io/flannel:v0.26.7
docker save -o flannel-cni-plugin:latest.tar ghcr.io/flannel-io/flannel-cni-plugin:latest
docker save -o flannel-v0.26.7.tar ghcr.io/flannel-io/flannel:v0.26.7
sudo ctr -n k8s.io images import flannel-v0.26.7.tar
sudo ctr -n k8s.io images import flannel-cni-plugin:latest.tar
sudo ctr -n k8s.io images ls | grep flannel
# Delete the previously deployed flannel pods; kubectl redeploys them automatically
#kubectl delete pod ingress-nginx-admission-create-sx4gj -n ingress-nginx --grace-period=0 --force
#kubectl delete pod ingress-nginx-admission-patch-8ztcv -n ingress-nginx --grace-period=0 --force
kubectl delete pod -n kube-flannel -l app=flannel
kubectl get pods -n kube-flannel -w

8. Install k8s cluster add-ons

A few commonly used components make the cluster much easier to manage; we install all of them:

  • Helm (package manager)

  • Dashboard (web UI)

  • Ingress-NGINX (unified cluster entry point)

  • MetalLB (LoadBalancer IPs for a bare-metal LAN)

8.1 Install Helm

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Verify the installation
helm version

8.2 Install the Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
If GitHub is unreachable, create the file with vim recommended.yaml and paste in the contents below.
recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Create an admin account

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF

Get a login token

kubectl -n kubernetes-dashboard create token admin-user
# The string it prints is the token used to log in
eyJhbGciOiJSUzI1NiIsImtpZCI6IjlGNXRIV3h4blF3QkwyMlk5RUlYQ0ZTY2FUa05YR0d0VGUyWXc3OTVldkEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzYxMzcxMTg2LCJpYXQiOjE3NjEzNjc1ODYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMDg5YTZiNmUtMDhlZi00MmUyLWIzMDctYjc4MmZlMzdhZTUyIn19LCJuYmYiOjE3NjEzNjc1ODYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Tj8-hwfWil4MqaXNfL6mKzVWDDp6YiJFZMtsArcyT3_tmIdVKKEUvRsdkz70tWW3YXkpoqXKjNJtK9pcBikcu373YfoM0AreM-dBAJG5VLDVnKVo0jBmriLo4x8bfGTlqnLmVLaHgYcWVfeRBemn8BtI6E7kmbT1Ej6Byfh0jrGVLhqskZ3LmfhunAhrzP8o8fXQSGMxpvIKM-qYiqlhQFi40MxY52fqRpfQvWTAWI_-kGKWNTOwkHhctqcBmQhdNVWrok1iZ1VWapEmEcm9wjcAd5HTZ2dMSCN_kIuyJmRIL4JAGJL-RJMN5XkWuGdxZLaojswhkSSgfmmf0sFtUw
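The manifest above exposes the dashboard only as a ClusterIP service, so the token alone won't get a browser in. One way to reach it is to switch the service to NodePort (a sketch):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard   # note the mapped port, then browse to https://<node-ip>:<port>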

8.3 Install Ingress-NGINX

wget -O nginx_ingress.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml
# Switch to domestic images
sed -i 's$registry.k8s.io/ingress-nginx/controller:v1.10.0.*$registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.10.0$' nginx_ingress.yaml
sed -i 's$registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0.*$registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.4.0$' nginx_ingress.yaml
# Deploy ingress-nginx
kubectl apply -f nginx_ingress.yaml
# Check that the pods are Running
kubectl get pod -n ingress-nginx

8.4 Install MetalLB with Helm

# Add the repo
helm repo add metallb https://metallb.github.io/metallb
# Install
helm install metallb metallb/metallb -n metallb-system --create-namespace

After installation it reports that MetalLB is now running in the cluster and points you at the official docs for configuration.

We need to configure the range of externally reachable IPs that MetalLB may hand out to the cluster.

# Create a new yaml file
vim metallb.yaml
metallb.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.137.200-192.168.137.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-lb
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
kubectl apply -f metallb.yaml

9. Push the images to the local Harbor registry

Harbor was installed locally in step 6; now the packaged project images need to be pushed into it.

Docker must also be told to treat the registry's LAN IP as a trusted (insecure) address, or the connection will fail. The insecure-registries entry in the file below is the addition; note that JSON allows no comments, so the file itself stays clean:

vim /etc/docker/daemon.json
{
  "insecure-registries": ["172.20.223.160"],
  "registry-mirrors": [
    "https://docker.1ms.run",
    "https://docker.xuanyuan.me"
  ]
}

Log in to the local registry. The IP here is the host where Harbor is installed; I use 192.168.69.128.

docker login 192.168.69.128
# Re-tag the images with the Harbor registry address
docker tag yudao_ui_admin 192.168.69.128/library/yudao_ui_admin
docker tag yudao_infra 192.168.69.128/library/yudao_infra
docker tag yudao_system 192.168.69.128/library/yudao_system
docker tag yudao_gateway 192.168.69.128/library/yudao_gateway
# Push the images to the registry
docker push 192.168.69.128/library/yudao_ui_admin
docker push 192.168.69.128/library/yudao_infra
docker push 192.168.69.128/library/yudao_system
docker push 192.168.69.128/library/yudao_gateway
# Check that the images are stored in the Harbor registry
curl -u admin:Harbor12345 http://192.168.69.128:80/v2/_catalog
# The response below means the upload succeeded
{"repositories":["library/yudao_gateway","library/yudao_infra","library/yudao_system","library/yudao_ui_admin"]}

10. Start the project on k8s

Run the backend services

# Create deployments pulling from the private image registry
kubectl create deployment yudao-gateway --image=172.20.223.160/library/yudao_gateway
kubectl create deployment yudao-system --image=172.20.223.160/library/yudao_system
kubectl create deployment yudao-infra --image=172.20.223.160/library/yudao_infra
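One pitfall here: the kubelet pulls through containerd, not Docker, so the insecure-registries entry in /etc/docker/daemon.json does nothing for these deployments. Each node's containerd needs its own trust entry for the plain-HTTP Harbor. A sketch for containerd 1.6+, assuming config_path is set to /etc/containerd/certs.d in config.toml:

mkdir -p /etc/containerd/certs.d/172.20.223.160
cat > /etc/containerd/certs.d/172.20.223.160/hosts.toml <<EOF
server = "http://172.20.223.160"

[host."http://172.20.223.160"]
  skip_verify = true
EOF
systemctl restart containerd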

Ingress routes by host name, so domains are required for requests to be forwarded to the right backend service.

Look up the ingress's externally exposed IP

kubectl get svc ingress-nginx-controller -n ingress-nginx
# Output:
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.109.141.16   192.168.137.200   80:32333/TCP,443:30667/TCP   2d1h

# 192.168.137.200 is the ingress's external IP, allocated from the MetalLB address pool configured earlier.

Edit the hosts file

Path: C:\Windows\System32\drivers\etc

Open the hosts file (it has no extension) and append the IPs and domains, for example:

192.168.137.200 a.treeman.org
192.168.137.200 www.treeman.org

Because access now goes through the domains, the frontend project must be modified, rebuilt, and redeployed.

Edit the frontend project's .env.local and point it at the chosen domain:

# Local development environment: used when running everything (frontend, backend, app) locally, with no external dependencies
NODE_ENV=development
VITE_DEV=true
# Request base URL
VITE_BASE_URL='http://a.treeman.org'
# Upload type: server - upload through the backend; client - direct upload from the frontend, S3-compatible services only
VITE_UPLOAD_TYPE=server
# API path
VITE_API_URL=/admin-api
# Strip debugger statements
VITE_DROP_DEBUGGER=false
# Strip console.log
VITE_DROP_CONSOLE=false
# Emit sourcemaps
VITE_SOURCEMAP=false
# Build base path
VITE_BASE_PATH=/
# Mall H5 member-site domain
VITE_MALL_H5_DOMAIN='http://a.treeman.org:3000'
# Captcha switch
VITE_APP_CAPTCHA_ENABLE=false

Rebuild the frontend project

cd /opt/yudao-ui-admin-vue3 
pnpm run build:local
# Build a new image, tagged v1
docker build -t yudao_ui_admin:v1 .
# Re-tag it with the registry path
docker tag yudao_ui_admin:v1 172.20.223.160/library/yudao_ui_admin:v1
# Push it to the registry
docker push 172.20.223.160/library/yudao_ui_admin:v1

# Create the frontend deployment
kubectl create deployment yudao-ui-admin --image=172.20.223.160/library/yudao_ui_admin:v1

Create the Services (SVC) for the gateway and the frontend

svc_yudao.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: yudao-gateway
  name: yudao-gateway
spec:
  ports:
  - port: 48080
    protocol: TCP
    targetPort: 48080
  selector:
    app: yudao-gateway
  type: ClusterIP

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: yudao-ui-admin
  name: yudao-ui-admin
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: yudao-ui-admin
  type: ClusterIP
kubectl apply -f svc_yudao.yaml

Configure the Ingress that maps the two domains to their services

ingress_yudao.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: yudao
spec:
  ingressClassName: nginx
  rules:
  - host: a.treeman.org
    http:
      paths:
      - backend:
          service:
            name: yudao-gateway
            port:
              number: 48080
        path: /
        pathType: Prefix
  - host: www.treeman.org
    http:
      paths:
      - backend:
          service:
            name: yudao-ui-admin
            port:
              number: 80
        path: /
        pathType: Prefix
kubectl apply -f ingress_yudao.yaml
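The routing can be verified straight from any machine that reaches the MetalLB IP, before touching the Windows hosts file, by sending the Host header by hand:

curl -H 'Host: www.treeman.org' http://192.168.137.200/           # should return the frontend's index.html
curl -H 'Host: a.treeman.org' http://192.168.137.200/admin-api/   # should be answered by the gateway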

Once everything is up, browse to www.treeman.org to reach the frontend project.
