Scaling Out Kubernetes Masters (Binary Deployment)

Published 2020-06-28 | 1197 reads | Category: System Operations

Contents

  • 1. High-Availability Architecture (Scaling Out to Multi-Master)
  • 2. Install Docker
    • 2.1 Extract the binary package
    • 2.2 Manage Docker with systemd
    • 2.3 Create the configuration file
  • 3. Deploy the Master02 Node (192.168.171.134)
    • 3.1 Create the etcd certificate directory
    • 3.2 Copy files
    • 3.3 Delete certificate files
    • 3.4 Modify configuration files and hostname
    • 3.5 Start services and enable them at boot
    • 3.6 Check cluster status
    • 3.7 Approve the kubelet certificate request
  • 4. Deploy the Load Balancer
    • 4.1 Install packages
    • 4.2 Nginx configuration file
    • 4.3 Keepalived configuration files
      • 4.3.1 Master configuration file (Nginx Master)
      • 4.3.2 Backup configuration file (Nginx Backup)
      • 4.3.3 Nginx health-check script
      • 4.3.4 Start services and enable them at boot
      • 4.3.5 Verify Keepalived status
    • 4.4 Nginx + Keepalived failover test
    • 4.5 Access the load balancer
  • 5. Point All Worker Nodes at the LB VIP
    • 5.1 List all nodes
    • 5.2 Modify Node configuration
    • 5.3 Check node status

This article continues from the previous post, Kubernetes单master架构部署(二进制) (binary deployment of a single-master architecture).

1. High-Availability Architecture (Scaling Out to Multi-Master)

As a container-cluster system, Kubernetes already provides application-level high availability: health checks plus restart policies give Pods self-healing, the scheduler spreads Pods across Nodes while maintaining the desired replica count, and when a Node fails its Pods are automatically brought up on other Nodes.

For the cluster itself, high availability involves two further layers: the etcd database and the Kubernetes Master components. We have already built a three-node etcd cluster for high availability, so this article covers making the Master nodes highly available.

The Master node acts as the cluster's control center, continuously communicating with the kubelet on every worker node to keep the whole cluster healthy. If the Master fails, no cluster management is possible through kubectl or the API.

A Master runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through leader election, so Master HA mainly concerns kube-apiserver. Since kube-apiserver serves an HTTP API, making it highly available works just like a web server: put a load balancer in front of it, and it can also be scaled horizontally.

According to the overall server plan, this high-availability setup involves the following servers:

Role                     IP                                      Components
k8s-master01             192.168.171.131                         kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master02             192.168.171.134                         kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node01               192.168.171.132                         kubelet, kube-proxy, docker, etcd
k8s-node02               192.168.171.133                         kubelet, kube-proxy, docker, etcd
Load Balancer (Master)   192.168.171.141, 192.168.171.88 (VIP)   Nginx, Keepalived
Load Balancer (Backup)   192.168.171.142                         Nginx, Keepalived

Multi-Master architecture diagram: (figure omitted)

2. Install Docker

Download Docker:

wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

Run the following on all nodes. A binary installation is used here; installing with yum works just as well.

2.1 Extract the binary package

tar zxvf docker-19*

mv docker/* /usr/bin
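
A quick sanity check that the binaries are in place (optional):

# Both should print their versions if the move succeeded
dockerd --version
containerd --version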

2.2 Manage Docker with systemd

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# This binary install has no docker.socket or containerd.service units,
# so dockerd is started directly and spawns its own containerd.
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

2.3 Create the configuration file

mkdir /etc/docker
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://6o9frx9d.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
  • registry-mirrors: Aliyun registry mirror, used to speed up image pulls
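
With the unit and configuration in place, start Docker, enable it at boot, and confirm the daemon picked up daemon.json:

systemctl daemon-reload
systemctl start docker
systemctl enable docker
# Should report "Cgroup Driver: systemd" if daemon.json was applied
docker info | grep -i 'cgroup driver'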

3. Deploy the Master02 Node (192.168.171.134)

Master02 is configured exactly like the already-deployed Master01, so we only need to copy all of Master01's files to Master02, change the IP addresses and hostname in its configuration, and start the services.

3.1 Create the etcd certificate directory

On Master02, create the etcd certificate directory:

mkdir -p /opt/etcd/ssl

3.2 Copy files

Copy all Kubernetes files and the etcd certificates from Master01 to Master02:

scp -r /opt/kubernetes root@192.168.171.134:/opt
scp -r /opt/cni/ root@192.168.171.134:/opt
scp -r /opt/etcd/ssl root@192.168.171.134:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.171.134:/usr/lib/systemd/system
scp /usr/bin/kubectl  root@192.168.171.134:/usr/bin
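
If more masters are added later, the same copies are easy to script; a minimal sketch, with the new master's IP in a hypothetical MASTER_IP variable:

# Push Master01's files to a new master in one loop
MASTER_IP=192.168.171.134
for path in /opt/kubernetes /opt/cni /opt/etcd/ssl; do
  scp -r "$path" root@"$MASTER_IP":"$(dirname "$path")"
done
scp /usr/lib/systemd/system/kube* root@"$MASTER_IP":/usr/lib/systemd/system
scp /usr/bin/kubectl root@"$MASTER_IP":/usr/bin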

3.3 Delete certificate files

Delete the copied kubelet certificate and kubeconfig files; they were issued to Master01 during TLS bootstrapping and are bound to that node, so Master02 must request its own:

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

3.4 Modify configuration files and hostname

In the apiserver, kubelet, and kube-proxy configuration files, change the IP addresses and node name to this node's own:

vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.171.134 \
--advertise-address=192.168.171.134 \
...

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master02

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master02
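
The same changes can be applied non-interactively; a sketch, assuming the copied files still carry Master01's IP and node name:

# Set the system hostname for this node
hostnamectl set-hostname k8s-master02
cd /opt/kubernetes/cfg
# Point the apiserver flags at this node's IP
sed -i 's#192.168.171.131#192.168.171.134#g' kube-apiserver.conf
# Update the node name registered by kubelet and kube-proxy
sed -i 's#k8s-master01#k8s-master02#' kubelet.conf kube-proxy-config.yml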

3.5 Start services and enable them at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
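
Equivalently, as a compact loop:

for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
  systemctl start "$svc" && systemctl enable "$svc"
done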

3.6 Check cluster status

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

3.7 Approve the kubelet certificate request

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU   85m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

kubectl certificate approve node-csr-JYNknakEa_YpHz797oKaN-ZTk43nD51Zc9CJkBLcASU
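
If several bootstrap requests are pending, they can all be approved in one pass (a convenience sketch):

kubectl get csr -o name | xargs kubectl certificate approve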

kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    <none>   34h   v1.18.3
k8s-master02   Ready    <none>   83m   v1.18.3
k8s-node01     Ready    <none>   33h   v1.18.3
k8s-node02     Ready    <none>   33h   v1.18.3

4. Deploy the Load Balancer

kube-apiserver high-availability architecture: (figure omitted)

  • Nginx is a mainstream web server and reverse proxy; here its layer-4 (stream) proxying load-balances the kube-apiservers.
  • Keepalived is mainstream high-availability software; by binding a VIP it provides active/standby failover between the two load-balancer servers.

4.1 Install packages

yum install epel-release -y
yum install nginx keepalived -y

4.2 Nginx configuration file

The configuration is identical on the master and the backup:

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two Master kube-apiserver components
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.171.131:6443;   # Master01 APISERVER IP:PORT
       server 192.168.171.134:6443;   # Master02 APISERVER IP:PORT
    }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
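
Before starting Nginx, it is worth validating the configuration. If the stream block is rejected as an unknown directive, the stream module is packaged separately on some distributions (on CentOS, presumably the nginx-mod-stream package):

nginx -t   # expect "syntax is ok" and "test is successful"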

4.3 Keepalived configuration files

4.3.1 Master configuration file (Nginx Master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # Priority; the backup server uses 90
    advert_int 1    # VRRP advertisement (heartbeat) interval; default is 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.171.88/24
    }
    track_script {
        check_nginx
    }
}
EOF
  • vrrp_script: the nginx health-check script (its result decides whether to fail over)
  • virtual_ipaddress: the virtual IP (VIP)

4.3.2 Backup configuration file (Nginx Backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.171.88/24
    }
    track_script {
        check_nginx
    }
}
EOF

4.3.3 Nginx health-check script

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Note: Keepalived judges nginx by the script's exit code (0 = healthy, non-zero = unhealthy).
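
Once nginx is running (started in the next step), the script can be sanity-checked by hand:

bash /etc/keepalived/check_nginx.sh; echo $?   # 0 while nginx is up
pkill nginx
bash /etc/keepalived/check_nginx.sh; echo $?   # 1 with nginx stopped
systemctl start nginx                          # restore nginx afterwards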

4.3.4 Start services and enable them at boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

4.3.5 Verify Keepalived status

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.171.141/24 brd 192.168.171.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.171.88/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:f72c/64 scope link
       valid_lft forever preferred_lft forever

The virtual IP 192.168.171.88 is bound to the ens33 interface, so Keepalived is working correctly.

4.4 Nginx + Keepalived failover test

Stop Nginx on the master node and check whether the VIP fails over to the backup server.

On the Nginx Master, run pkill nginx.
On the Nginx Backup, run ip addr and confirm the VIP is now bound there.
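
To watch the failover as it happens, the VIP can be polled from the backup load balancer (a sketch):

# Refreshes once a second; the VIP line appears after failover
watch -n1 "ip -4 addr show ens33 | grep 192.168.171.88"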

4.5 Access the load balancer

From any node in the K8s cluster, test by curling the Kubernetes version endpoint through the VIP:

curl -k https://192.168.171.88:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.3",
  "gitCommit": "2e7996e3e2712684bc73f0dec0200d64eec7fe40",
  "gitTreeState": "clean",
  "buildDate": "2020-05-20T12:43:34Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The Kubernetes version information comes back correctly, so the load balancer is working. The request flow is: curl -> VIP (nginx) -> apiserver.

The Nginx log also shows which apiserver IP each request was forwarded to:

tail /var/log/nginx/k8s-access.log -f
192.168.171.141 192.168.171.131:6443 - [30/May/2020:11:15:10 +0800] 200 422
192.168.171.141 192.168.171.134:6443 - [30/May/2020:11:15:26 +0800] 200 422

5. Point All Worker Nodes at the LB VIP

Consider this: although we added Master02 and the load balancers, we scaled out from a single-master architecture, so every Node component still connects to Master01. Unless they are switched to the VIP behind the load balancer, the Master remains a single point of failure.

So the next step is to change every Node component's configuration from 192.168.171.131 to 192.168.171.88 (the VIP).

5.1 List all nodes

These are the nodes reported by kubectl get node:

Role           IP
k8s-master01   192.168.171.131
k8s-master02   192.168.171.134
k8s-node01     192.168.171.132
k8s-node02     192.168.171.133

5.2 Modify Node configuration

Run the following on all of the Worker Nodes listed above:

sed -i 's#192.168.171.131:6443#192.168.171.88:6443#' /opt/kubernetes/cfg/*
systemctl restart kubelet
systemctl restart kube-proxy
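
To push the change from a single machine instead, a sketch over the node list (assumes root SSH access to each node):

for node in 192.168.171.132 192.168.171.133; do
  ssh root@"$node" "sed -i 's#192.168.171.131:6443#192.168.171.88:6443#' /opt/kubernetes/cfg/* \
    && systemctl restart kubelet kube-proxy"
done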

5.3 Check node status

kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    <none>   34h    v1.18.3
k8s-master02   Ready    <none>   101m   v1.18.3
k8s-node01     Ready    <none>   33h    v1.18.3
k8s-node02     Ready    <none>   33h    v1.18.3

With that, a complete highly available Kubernetes cluster has been deployed!

PS: If you are on a public cloud, keepalived is generally not supported there; use the cloud provider's load-balancer product instead (an internal one is enough, and often free). The architecture is the same: load-balance the kube-apiservers of multiple Masters directly.
