Cluster Server Plan
| Hostname | IP Address | Installed Components |
|---|---|---|
| k8s-master1-60 | 172.30.42.60 | kube-apiserver, kubelet, kube-proxy, kube-controller-manager, kube-scheduler, etcd, docker, cri-docker |
| k8s-master2-61 | 172.30.42.61 | kube-apiserver, kubelet, kube-proxy, kube-controller-manager, kube-scheduler, etcd, docker, cri-docker |
| k8s-master3-62 | 172.30.42.62 | kube-apiserver, kubelet, kube-proxy, kube-controller-manager, kube-scheduler, etcd, docker, cri-docker |
| k8s-work1-63 | 172.30.42.63 | kubelet, kube-proxy, docker, cri-docker |
| k8s-work2-64 | 172.30.42.64 | kubelet, kube-proxy, docker, cri-docker |
| HA-1 | 172.30.42.66 | nginx + keepalived |
| HA-2 | 172.30.42.67 | nginx + keepalived |
| VIP | 172.30.42.68 | (virtual IP, no components) |
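The certificates generated later include the master hostnames, so every node should be able to resolve every other node's name. A minimal sketch assuming no internal DNS (append on every node):
cat >> /etc/hosts << EOF
172.30.42.60 k8s-master1-60
172.30.42.61 k8s-master2-61
172.30.42.62 k8s-master3-62
172.30.42.63 k8s-work1-63
172.30.42.64 k8s-work2-64
EOF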
About the k8s-ha-install open-source project
k8s-ha-install is an open-source project for installing highly available Kubernetes clusters. It supports deploying Kubernetes with either binaries or kubeadm, with an emphasis on availability and stability, and makes it straightforward to deploy and manage clusters in production.
Project URL: https://github.com/dotbalo/k8s-ha-install.git
1: Clone the project code
[root@k8s-master1-60 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git
[root@k8s-master1-60 ~]# cd k8s-ha-install/
#List the release branches and check out the one matching the version you are installing
[root@k8s-master1-60 k8s-ha-install]# git branch -a
[root@k8s-master1-60 k8s-ha-install]# git checkout manual-installation-v1.30.x
[root@k8s-master1-60 k8s-ha-install]# git branch -l
* manual-installation-v1.30.x
master
Kubernetes binary release downloads
https://github.com/kubernetes/kubernetes/releases
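For example, the v1.30.2 server tarball extracted later in this guide can be fetched as follows (adjust the version to match the branch you checked out):
wget https://dl.k8s.io/v1.30.2/kubernetes-server-linux-amd64.tar.gz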
Install Docker on all nodes
1: Add the Docker repository
yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
2: List the installable versions
yum list docker-ce --showduplicates | sort -n
3: Install a specific docker-ce version
yum install -y docker-ce-24.0.9-1.el8
4: Enable and start docker-ce
systemctl enable --now docker.service
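The kubelet configuration later in this guide sets cgroupDriver: systemd, and Docker must use the same cgroup driver or the kubelet will fail to run pods. A suggested /etc/docker/daemon.json (the log options are optional defaults, not part of the original guide):
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"}
}
EOF
systemctl restart docker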
Install cri-dockerd on all nodes
Kubernetes 1.24 and later removed the built-in dockershim, so running Docker as the container runtime requires the cri-dockerd shim.
#Releases: https://github.com/Mirantis/cri-dockerd/releases/
1: Download cri-dockerd
wget https://mirrors.chenby.cn/https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.10/cri-dockerd-0.3.10.amd64.tgz
2:解压cri-docker
tar xvf cri-dockerd-*.amd64.tgz
cp -r cri-dockerd/ /usr/bin/
chmod +x /usr/bin/cri-dockerd/cri-dockerd
3: Create the cri-docker service unit
cd /usr/lib/systemd/system
[root@k8s-master1-60 system]# vi cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
4: Create the cri-docker socket unit
[root@k8s-master1-60 system]# cd /usr/lib/systemd/system
[root@k8s-master1-60 system]# vi cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
5: Enable and start cri-docker
systemctl enable --now cri-docker.service
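Before continuing, it is worth confirming that both units are active and that the CRI socket the kubelet will point at later actually exists:
systemctl is-active docker cri-docker.service cri-docker.socket
ls -l /run/cri-dockerd.sock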
Chapter 1: Deploying the HA layer (nginx + keepalived)
1. Download the nginx source tarball
wget https://nginx.org/download/nginx-1.22.1.tar.gz
2. Install the build dependencies
yum -y install gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel
3. Extract the nginx source
tar -zxvf nginx-1.22.1.tar.gz && cd nginx-1.22.1
4. Configure the build
./configure --prefix=/data/nginx --sbin-path=/data/nginx/sbin/nginx --conf-path=/data/nginx/conf/nginx.conf --error-log-path=/data/nginx/logs/error.log --http-log-path=/data/nginx/logs/access.log --pid-path=/data/nginx/nginx.pid --lock-path=/data/nginx/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module
5. Compile and install
make && make install
6. Create the required cache directory
mkdir /var/cache/nginx/ -p
7. Create the nginx user
useradd -M -s /sbin/nologin nginx
8. Raise the system file-descriptor limits
cat >> /etc/security/limits.conf << EOF
* soft nofile 1000000
* hard nofile 1000000
* soft nproc unlimited
* hard nproc unlimited
EOF
ulimit -SHn 65535
9. Test the configuration and start nginx
/data/nginx/sbin/nginx -t
/data/nginx/sbin/nginx
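The guide starts nginx by hand; if you prefer systemd management, a minimal unit sketch assuming the /data/nginx prefix chosen above:
cat > /usr/lib/systemd/system/nginx.service << EOF
[Unit]
Description=nginx - kube-apiserver load balancer
After=network.target
[Service]
Type=forking
PIDFile=/data/nginx/nginx.pid
ExecStartPre=/data/nginx/sbin/nginx -t
ExecStart=/data/nginx/sbin/nginx
ExecReload=/data/nginx/sbin/nginx -s reload
ExecStop=/data/nginx/sbin/nginx -s quit
[Install]
WantedBy=multi-user.target
EOF
If nginx is already running from the manual start above, stop it first (/data/nginx/sbin/nginx -s quit) before systemctl enable --now nginx.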
10. Install keepalived
yum install keepalived -y
#Prepare the configuration
cd /etc/keepalived/
cp keepalived.conf.sample keepalived.conf
#keepalived MASTER configuration (HA-1)
[root@ha-1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    nopreempt
    interface ens18
    virtual_router_id 68
    priority 100
    advert_int 1
    unicast_src_ip 172.30.42.66
    unicast_peer {
        172.30.42.67
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.30.42.68/24 dev ens18 label ens18:1
    }
    track_interface {
        ens18
    }
    track_script {
        chk_apiserver
    }
}
#keepalived BACKUP configuration (HA-2)
[root@ha-2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens18
    virtual_router_id 68
    priority 80
    advert_int 1
    unicast_src_ip 172.30.42.67
    unicast_peer {
        172.30.42.66
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.30.42.68/24 dev ens18 label ens18:1
    }
    track_interface {
        ens18
    }
    track_script {
        chk_apiserver
    }
}
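Both configurations reference /etc/keepalived/check_apiserver.sh, which is not shown in the original. A minimal sketch that fails the VRRP health check when the local nginx proxy port stops listening (create it on both HA nodes and make it executable before starting keepalived):
cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash
# Exit non-zero (failing the vrrp_script) if nothing listens on the nginx stream proxy port
if ! ss -lnt | grep -q ':8443 '; then
    exit 1
fi
exit 0
EOF
chmod +x /etc/keepalived/check_apiserver.sh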
#Enable and start keepalived
systemctl enable --now keepalived
#nginx configuration - keep this file in sync with the ha-2 node
[root@ha-1 ~]# cat /data/nginx/conf/nginx.conf
user nginx;
worker_processes auto;
events {
    worker_connections 1024;
}
stream {
    upstream kube-api {
        server 172.30.42.60:6443 max_fails=3 fail_timeout=30s;
        server 172.30.42.61:6443 max_fails=3 fail_timeout=30s;
        server 172.30.42.62:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8443;
        proxy_connect_timeout 1s;
        proxy_pass kube-api;
    }
}
http {
    include mime.types;
    default_type application/octet-stream;
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    log_format detailed '[$time_local] '
                        'client: $remote_addr '
                        'method: $request_method '
                        'URL: $request_uri '
                        'protocol: $server_protocol '
                        'status: $status '
                        'body_bytes: $body_bytes_sent '
                        'referer: $http_referer '
                        'user_agent: $http_user_agent';
    access_log logs/access.log detailed;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    include conf.d/*.conf;
}
#Reload nginx
[root@ha-1 ~]# /data/nginx/sbin/nginx -s reload
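To verify the HA layer, check that the VIP is bound on the MASTER node and that the stream proxy is listening:
ip addr show ens18 | grep 172.30.42.68
ss -lntp | grep 8443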
Chapter 2: Kubernetes node OS and kernel tuning
#Run this script on every k8s node
[root@k8s-master1-60 ~]# sh k8s.sh
#Script contents
[root@k8s-master1-60 ~]# cat k8s.sh
#! /bin/bash
echo "===================Disabling firewalld========================="
systemctl disable --now firewalld
echo "===================Disabling dnsmasq========================="
systemctl disable --now dnsmasq
# uname -s always returns "Linux", so detect Kylin from /etc/os-release instead
if grep -qi kylin /etc/os-release; then
    systemctl disable --now NetworkManager
    echo "===================Kylin: NetworkManager disabled========================="
fi
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
ulimit -SHn 65535
limit_src="/etc/security/limits.conf"
limit_txt=$(cat <<EOF
# appended to the end of /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
)
echo "$limit_txt" >> "$limit_src"
echo "===================开始安装ipvs==============="
yum install ipvsadm ipset sysstat conntrack libseccomp -y
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ipip
modprobe -- br_netfilter
ipvs_txt=$(cat <<EOF
# modules to load at boot
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
br_netfilter
EOF
)
echo "$ipvs_txt" >> /etc/modules-load.d/ipvs.conf
systemctl enable --now systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack
k8s_txt=$(cat <<EOF
# enable forwarding plus common network kernel tuning
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
)
echo "$k8s_txt" >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
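The script must run on every k8s node, not only master1. One way to push and execute it, assuming root SSH access between nodes:
for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do scp k8s.sh $i:/root/ && ssh $i "sh /root/k8s.sh"; done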
Chapter 3: etcd deployment
Download etcd
1: Download the etcd release tarball
[root@k8s-master1-60 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
2: Extract the tarball
[root@k8s-master1-60 ~]# tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz
[root@k8s-master1-60 ~]# cd etcd-v3.4.13-linux-amd64/
[root@k8s-master1-60 etcd-v3.4.13-linux-amd64]#
[root@k8s-master1-60 etcd-v3.4.13-linux-amd64]# ls
Documentation etcd etcdctl etcdutl README-etcdctl.md README-etcdutl.md README.md READMEv2-etcdctl.md
[root@k8s-master1-60 etcd-v3.4.13-linux-amd64]# cp etcd etcdctl /usr/local/bin/
3: Distribute the binaries to the other two etcd nodes
[root@k8s-master1-60 etcd-v3.4.13-linux-amd64]# for i in 172.30.42.61 172.30.42.62; do scp /usr/local/bin/etcd* $i:/usr/local/bin/; done
Create the etcd working directories
[root@k8s-master1-60 ~]# mkdir -p /etc/etcd/{conf,ssl,data}
Create the etcd certificates
1: Download the cfssl tools
[root@k8s-master1-60 ~]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
[root@k8s-master1-60 ~]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
[root@k8s-master1-60 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
2: Generate the etcd CA certificate and key
[root@k8s-master1-60 ~]# cd k8s-ha-install/pki/
[root@k8s-master1-60 pki]# pwd
/root/k8s-ha-install/pki
[root@k8s-master1-60 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
[root@k8s-master1-60 pki]# cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master1-60,k8s-master2-61,k8s-master3-62,172.30.42.60,172.30.42.61,172.30.42.62 -profile=kubernetes etcd-csr.json | cfssljson --bare /etc/etcd/ssl/etcd
3: View the generated certificates
[root@k8s-master1-60 pki]# ls /etc/etcd/ssl/
etcd-ca.csr etcd-ca-key.pem etcd-ca.pem etcd.csr etcd-key.pem etcd.pem
4: Distribute the certificates to the other etcd nodes
[root@k8s-master1-60 pki]# for i in 172.30.42.61 172.30.42.62; do scp /etc/etcd/ssl/etcd* $i:/etc/etcd/ssl/; done
Create the etcd configuration files
In production, always run an odd number of etcd members; even counts make split-brain more likely.
The configurations are largely the same, but the member name and IP addresses differ on each master node - the three files are not interchangeable.
#Node 1 configuration
[root@k8s-master1-60 pki]# cd /etc/etcd/conf/
[root@k8s-master1-60 conf]# vi etcd.config.yaml
name: 'etcd1'
data-dir: /var/lib/etcd
wal-dir: /etc/etcd/data
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.30.42.60:2380'
listen-client-urls: 'https://172.30.42.60:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.30.42.60:2380'
advertise-client-urls: 'https://172.30.42.60:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://172.30.42.60:2380,etcd2=https://172.30.42.61:2380,etcd3=https://172.30.42.62:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
#Node 2 configuration
[root@k8s-master2-61 conf]# vi etcd.config.yaml
name: 'etcd2'
data-dir: /var/lib/etcd
wal-dir: /etc/etcd/data
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.30.42.61:2380'
listen-client-urls: 'https://172.30.42.61:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.30.42.61:2380'
advertise-client-urls: 'https://172.30.42.61:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://172.30.42.60:2380,etcd2=https://172.30.42.61:2380,etcd3=https://172.30.42.62:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
#Node 3 configuration
[root@k8s-master3-62 conf]# vi etcd.config.yaml
name: 'etcd3'
data-dir: /var/lib/etcd
wal-dir: /etc/etcd/data
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.30.42.62:2380'
listen-client-urls: 'https://172.30.42.62:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.30.42.62:2380'
advertise-client-urls: 'https://172.30.42.62:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://172.30.42.60:2380,etcd2=https://172.30.42.61:2380,etcd3=https://172.30.42.62:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/etcd/ssl/etcd.pem'
  key-file: '/etc/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
Create and start the etcd service on all master nodes
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/conf/etcd.config.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
#Start etcd on all master nodes
systemctl daemon-reload
systemctl enable --now etcd
#Check the etcd cluster status
[root@k8s-master1-60 ]# /usr/local/bin/etcdctl --write-out=table --endpoints="https://172.30.42.60:2379,https://172.30.42.61:2379,https://172.30.42.62:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/etcd/ssl/etcd-ca.pem endpoint status
[root@k8s-master1-60 ]# /usr/local/bin/etcdctl --write-out=table --endpoints="https://172.30.42.60:2379,https://172.30.42.61:2379,https://172.30.42.62:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/etcd/ssl/etcd-ca.pem endpoint health
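Optionally list the members to confirm all three joined the cluster:
[root@k8s-master1-60 ]# /usr/local/bin/etcdctl --write-out=table --endpoints="https://172.30.42.60:2379" --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --cacert=/etc/etcd/ssl/etcd-ca.pem member list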
Chapter 4: Installing the Kubernetes control-plane components
Generate the Kubernetes CA certificate and key
#Create the required directory on all master nodes
[root@k8s-master1-60 ]# mkdir -p /etc/kubernetes/pki
[root@k8s-master1-60 ]# cd /root/k8s-ha-install/pki
# Generate the Kubernetes CA certificate and key
[root@k8s-master1-60 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
Generate the bootstrap token file
cat > /etc/kubernetes/pki/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Generate the apiserver certificate
10.96.0.1 is the first address of the Kubernetes service CIDR; if you change the service network, change this address accordingly. 172.30.42.68 is the VIP; for a non-HA cluster, use Master01's IP instead.
[root@k8s-master1-60 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,172.30.42.68,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,172.30.42.60,172.30.42.61,172.30.42.62 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
Generate the apiserver aggregation (front-proxy) certificates
[root@k8s-master1-60 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
[root@k8s-master1-60 pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
Generate the kube-controller-manager certificate
[root@k8s-master1-60 pki]#
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# Note: for a non-HA cluster, replace 172.30.42.68:8443 with master01's address and the apiserver port (default 6443)
# set-cluster: define the cluster entry
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://172.30.42.68:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set-credentials: define the user entry
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Define a context tying the cluster and user together
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Use this context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
Generate the scheduler certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# Note: for a non-HA cluster, replace 172.30.42.68:8443 with master01's address and the apiserver port (default 6443)
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://172.30.42.68:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Generate the admin certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: for a non-HA cluster, replace 172.30.42.68:8443 with master01's address and the apiserver port (default 6443)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.30.42.68:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
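Generate the ServiceAccount key pair
The kube-apiserver and kube-controller-manager units below reference /etc/kubernetes/pki/sa.pub and sa.key, which the cfssl steps above do not create; generate them with openssl (these two files also bring the total below to 24):
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub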
Distribute the certificates to the other two master nodes
#Files generated so far (24 in total, counting token.csv and the sa key pair)
[root@k8s-master1-60 ~]# ls /etc/kubernetes/pki/| wc -l
24
[root@k8s-master1-60 ~]# for i in 172.30.42.61 172.30.42.62; do scp /etc/kubernetes/pki/* $i:/etc/kubernetes/pki/; done
Installing the master components: kube-apiserver, kube-controller-manager, kube-scheduler
1: Create the required directories on all nodes
[root@k8s-master1-60 ~]# mkdir -p /etc/kubernetes/manifests/ /var/lib/kubelet/ /var/log/kubernetes/
2: Extract the kubernetes server tarball
[root@k8s-master1-60 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1-60 ~]# cd kubernetes/server/bin/
3: Copy the binaries to /usr/local/bin/
[root@k8s-master1-60 ~]# cp -a kubectl kube-controller-manager kube-apiserver kube-scheduler kube-proxy kubelet /usr/local/bin/
4: Distribute the binaries to the other two master nodes
[root@k8s-master1-60 ~]# for i in 172.30.42.61 172.30.42.62; do scp kubectl kube-controller-manager kube-apiserver kube-scheduler kube-proxy kubelet $i:/usr/local/bin/; done
Installing kube-apiserver
#Master01 configuration
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=172.30.42.60 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://172.30.42.60:2379,https://172.30.42.61:2379,https://172.30.42.62:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User \
--token-auth-file=/etc/kubernetes/pki/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
#Master02 configuration
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=172.30.42.61 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://172.30.42.60:2379,https://172.30.42.61:2379,https://172.30.42.62:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User \
--token-auth-file=/etc/kubernetes/pki/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
#Master03 configuration
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=172.30.42.62 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://172.30.42.60:2379,https://172.30.42.61:2379,https://172.30.42.62:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User \
--token-auth-file=/etc/kubernetes/pki/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
#Enable and start kube-apiserver on all master nodes
systemctl daemon-reload && systemctl enable --now kube-apiserver
#Check the service status
systemctl status kube-apiserver
Installing kube-controller-manager
#Configure kube-controller-manager.service on all master nodes
[root@k8s-master2-61 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--secure-port=10257 \
--bind-address=0.0.0.0 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=192.168.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
#Enable and start kube-controller-manager on all master nodes
systemctl daemon-reload && systemctl enable --now kube-controller-manager.service
#Check the service status
systemctl status kube-controller-manager.service
Installing kube-scheduler
#Configure kube-scheduler.service on all master nodes
[root@k8s-master2-61 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--bind-address=0.0.0.0 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
#Enable and start kube-scheduler on all master nodes
systemctl daemon-reload && systemctl enable --now kube-scheduler.service
#Check the service status
systemctl status kube-scheduler.service
Create kubelet-bootstrap.kubeconfig
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/pki/token.csv)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.30.42.68:8443 --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig
# Grant the bootstrap user permission to request certificates
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
#Copy admin.kubeconfig as the default kubectl config
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
Check the cluster component status
[root@k8s-master1-60 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
Deploying the kubelet on all nodes
1: Copy the required certificates and bootstrap kubeconfig to the other nodes
[root@k8s-master1-60 ~]# for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do scp /etc/kubernetes/pki/ca.pem $i:/etc/kubernetes/pki/; done
[root@k8s-master1-60 ~]# for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do scp /etc/kubernetes/kubelet-bootstrap.kubeconfig $i:/etc/kubernetes/; done
2: Create the kubelet.service unit on all nodes
[root@k8s-master1-60 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target firewalld.service cri-docker.service docker.socket containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/pki \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet-conf.yml \
--container-runtime-endpoint=unix:///run/cri-dockerd.sock \
--node-labels=node.kubernetes.io/node= \
--node-ip=172.30.42.60
# Set --node-ip to the node's own IP address (master1 shown here; change it on each node)
[Install]
WantedBy=multi-user.target
3: Create the kubelet configuration file on all nodes
[root@k8s-master1-60 ~]# cat /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
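The unit and configuration above show master1 only. One way to push both files to the remaining nodes and patch --node-ip per node (a sketch assuming root SSH; here each target's address doubles as its node IP):
for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do
    scp /usr/lib/systemd/system/kubelet.service $i:/usr/lib/systemd/system/
    scp /etc/kubernetes/kubelet-conf.yml $i:/etc/kubernetes/
    ssh $i "sed -i 's/--node-ip=172.30.42.60/--node-ip=$i/' /usr/lib/systemd/system/kubelet.service"
done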
4: Enable and start the kubelet
systemctl daemon-reload && systemctl enable --now kubelet
#Tail the kubelet logs
journalctl -u kubelet -f
#A message like the following is normal at this stage, because Calico has not been installed yet:
"Unable to update cni config" err="no networks found in /etc/cni/net.d"
5: Approve the pending CSRs so the nodes can join the cluster
[root@k8s-master1-60 ~]# kubectl get csr
[root@k8s-master1-60 ~]# kubectl certificate approve csr-svl28
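The CSR name (csr-svl28 above) is random and differs on every run; to approve all pending requests at once:
kubectl get csr -o name | xargs kubectl certificate approve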
6: Check node status
[root@k8s-master1-60 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1-60 NotReady <none> 6s v1.30.2
k8s-master2-61 NotReady <none> 6s v1.30.2
k8s-master3-62 NotReady <none> 6s v1.30.2
k8s-work1-63 NotReady <none> 6s v1.30.2
k8s-work2-64 NotReady <none> 6s v1.30.2
Deploying kube-proxy on all nodes
1: Create the kube-proxy certificate
[root@k8s-master1-60 ~]# cd k8s-ha-install/pki/
[root@k8s-master1-60 pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
## Create the kubeconfig file
[root@k8s-master1-60 pki]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.30.42.68:8443 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-master1-60 pki]# kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/pki/kube-proxy.pem --client-key=/etc/kubernetes/pki/kube-proxy-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-master1-60 pki]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
[root@k8s-master1-60 pki]# kubectl config use-context default --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2: Distribute the kube-proxy certificate and kubeconfig to the cluster nodes
[root@k8s-master1-60 ~]# for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do scp /etc/kubernetes/pki/kube-proxy* $i:/etc/kubernetes/pki/; done
[root@k8s-master1-60 ~]# for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do scp /etc/kubernetes/kube-proxy.kubeconfig $i:/etc/kubernetes/; done
3: Create the kube-proxy.yaml configuration on all nodes
[root@k8s-master1-60 ]# cd /etc/kubernetes/
[root@k8s-master1-60 kubernetes]# cat kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 192.168.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
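As with the kubelet files, push this configuration to the remaining nodes (assuming root SSH):
for i in 172.30.42.61 172.30.42.62 172.30.42.63 172.30.42.64; do scp /etc/kubernetes/kube-proxy.yaml $i:/etc/kubernetes/; done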
4: Create the kube-proxy service unit on all nodes (the quoted 'EOF' stops the shell from expanding ${KUBERNETES_MASTER} while writing the file)
[root@k8s-master1-60 ]# cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
Environment="KUBERNETES_MASTER=https://172.30.42.68:8443"
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--cluster-cidr=192.168.0.0/12 \
--master=${KUBERNETES_MASTER} \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
5: Enable and start kube-proxy
[root@k8s-master1-60 ]# systemctl daemon-reload && systemctl enable --now kube-proxy
[root@k8s-master1-60 ]# systemctl status kube-proxy
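Because mode is "ipvs", the proxy rules can be inspected directly; the kubernetes service address (10.96.0.1:443) should list the three apiservers as real servers:
ipvsadm -Ln
kubectl get svc kubernetes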