Quickly Deploying a Kubernetes Cluster with RKE
RKE is a CNCF-certified open-source Kubernetes distribution that runs entirely inside Docker containers. It addresses the most common sources of Kubernetes installation complexity by removing most host dependencies and providing a stable path for deployment, upgrades, rollbacks, and node scaling.

This article walks through building a highly available Kubernetes cluster with RKE and managing it through Rancher.
# 1. Environment Preparation
* Rancher 2.7.1
* RKE 1.4
* Kubernetes v1.26.6
* Docker 24.0.4
| No. | Hostname | IP | Role |
| --- | --- | --- | --- |
| 1 | rke-k8s-master1 | 172.16.213.95 | controlplane,etcd |
| 2 | rke-k8s-master2 | 172.16.213.96 | controlplane,etcd |
| 3 | rke-k8s-master3 | 172.16.213.97 | controlplane,etcd |
| 4 | rke-k8s-worker1 | 172.16.213.161 | worker,rancher |
| 5 | rke-k8s-worker2 | 172.16.213.163 | worker,rancher |
| 6 | rke-k8s-worker3 | 172.16.213.165 | worker,rancher |
## 1.1 Host Setup
* Configure /etc/hosts

```shell
cat >> /etc/hosts << EOF
172.16.213.95 rke-k8s-master1
172.16.213.96 rke-k8s-master2
172.16.213.97 rke-k8s-master3
172.16.213.161 rke-k8s-worker1
172.16.213.163 rke-k8s-worker2
172.16.213.165 rke-k8s-worker3
172.16.213.104 rke-k8s-nginx
EOF
```
* Configure the user

```shell
# Add a user. RKE must run as a regular (non-root) user;
# running as root makes the next step fail.
useradd rancher
usermod -aG docker rancher

# firewalld interferes with syncing images, so disable it
systemctl stop firewalld && systemctl disable firewalld
```

* Install dependencies

```shell
yum install -y bash-completion conntrack-tools ipset ipvsadm libseccomp nfs-utils psmisc socat wget vim net-tools ntpdate
```
* Time synchronization

```shell
echo '*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null' | tee /var/spool/cron/root
```
* Disable SELinux, swap, and the firewall

```shell
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0 && sed -i '/swap/d' /etc/fstab
systemctl stop firewalld && systemctl disable firewalld
```
* Load the kernel module

```shell
modprobe br_netfilter
```
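Note that `modprobe` only loads the module for the current boot. On a systemd-based distribution such as CentOS 7, the load can be made persistent with a `modules-load.d` entry — a small sketch (the file name is an arbitrary choice):

```shell
# Persist the br_netfilter module load across reboots (systemd-based systems)
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
```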
* Adjust kernel parameters with sysctl

```shell
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p   # apply the settings immediately
```
* Raise the open-file ulimits

```shell
mkdir -pv /etc/systemd/system.conf.d/
cat > /etc/systemd/system.conf.d/30-k8s-ulimits.conf <<EOF
[Manager]
DefaultLimitCORE=infinity
DefaultLimitNOFILE=100000
DefaultLimitNPROC=100000
EOF
```
## 1.2 Container Runtime Configuration
* Install the container runtime

```shell
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce-23.0.6-1.el7
systemctl enable docker

mkdir -p /etc/docker && mkdir -pv /data/docker-root
tee /etc/docker/daemon.json <<'EOF'
{
  "oom-score-adjust": -1000,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "bip": "172.20.1.0/16",
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://o4uba187.mirror.aliyuncs.com"],
  "data-root": "/data/docker-root",
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker

# Change the owner of the docker socket so the rancher user can use it
chown -R rancher:docker /run/docker.sock
```
* Configure passwordless SSH login

```shell
# The rancher user must exist on every node. Generate a key on master1,
# then distribute it to all hosts with ssh-copy-id (including master1 itself).
ssh-keygen

ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub [email protected]
```
* Install the rke binary

```shell
# Upload the rke binary to /usr/local/bin/rke, then make it executable
chmod a+x /usr/local/bin/rke
```
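If the host has internet access, the binary can also be fetched directly from the RKE GitHub releases page instead of being uploaded manually — a sketch (the exact version tag below is an assumption; pick the release that matches your target Kubernetes version):

```shell
# Hypothetical direct download of an rke 1.4.x release; adjust the tag as needed
wget -O /usr/local/bin/rke https://github.com/rancher/rke/releases/download/v1.4.6/rke_linux-amd64
chmod a+x /usr/local/bin/rke
rke --version   # sanity check
```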
## 1.3 Configure RKE and Install Kubernetes

* Initialize the cluster configuration
```
$ rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 6
[+] SSH Address of host (1) [none]: 172.16.213.95
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (172.16.213.95) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (172.16.213.95) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (172.16.213.95) [ubuntu]: rancher
[+] Is host (172.16.213.95) a Control Plane host (y/n)? [y]: y
[+] Is host (172.16.213.95) a Worker host (y/n)? [n]: n
[+] Is host (172.16.213.95) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (172.16.213.95) [none]: rke-k8s-master1
[+] Internal IP of host (172.16.213.95) [none]:
[+] Docker socket path on host (172.16.213.95) [/run/docker.sock]:
[+] SSH Address of host (2) [none]: ...
[+] Docker socket path on host (172.16.213.96) [/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.24.8-rancher1]: registry.cn-hangzhou.aliyuncs.com/rancher/hyperkube:v1.24.8-rancher1
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
```
The resulting configuration file:

```yaml
$ cat /home/rancher/cluster.yml
# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 172.16.213.95
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: "rke-k8s-master1"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.96
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: "rke-k8s-master2"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.97
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: "rke-k8s-master3"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.161
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: "rke-k8s-worker1"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.163
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: "rke-k8s-worker2"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 172.16.213.165
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: "rke-k8s-worker3"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    pod_security_configuration: ""
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
  tolerations: []
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/mirrored-coreos-etcd:v3.5.6
  alpine: rancher/rke-tools:v0.1.89
  nginx_proxy: rancher/rke-tools:v0.1.89
  cert_downloader: rancher/rke-tools:v0.1.89
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.89
  kubedns: rancher/mirrored-k8s-dns-kube-dns:1.22.20
  dnsmasq: rancher/mirrored-k8s-dns-dnsmasq-nanny:1.22.20
  kubedns_sidecar: rancher/mirrored-k8s-dns-sidecar:1.22.20
  kubedns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.6
  coredns: rancher/mirrored-coredns-coredns:1.9.4
  coredns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.6
  nodelocal: rancher/mirrored-k8s-dns-node-cache:1.22.20
  kubernetes: rancher/hyperkube:v1.26.6-rancher1
  flannel: rancher/mirrored-flannel-flannel:v0.21.4
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher8
  calico_node: rancher/mirrored-calico-node:v3.25.0
  calico_cni: rancher/calico-cni:v3.25.0-rancher1
  calico_controllers: rancher/mirrored-calico-kube-controllers:v3.25.0
  calico_ctl: rancher/mirrored-calico-ctl:v3.25.0
  calico_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.25.0
  canal_node: rancher/mirrored-calico-node:v3.25.0
  canal_cni: rancher/calico-cni:v3.25.0-rancher1
  canal_controllers: rancher/mirrored-calico-kube-controllers:v3.25.0
  canal_flannel: rancher/mirrored-flannel-flannel:v0.21.4
  canal_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.25.0
  weave_node: weaveworks/weave-kube:2.8.1
  weave_cni: weaveworks/weave-npc:2.8.1
  pod_infra_container: rancher/mirrored-pause:3.7
  ingress: rancher/nginx-ingress-controller:nginx-1.7.0-rancher1
  ingress_backend: rancher/mirrored-nginx-ingress-controller-defaultbackend:1.5-rancher1
  ingress_webhook: rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
  metrics_server: rancher/mirrored-metrics-server:v0.6.3
  windows_pod_infra_container: rancher/mirrored-pause:3.7
  aci_cni_deploy_container: noiro/cnideploy:5.2.7.1.81c2369
  aci_host_container: noiro/aci-containers-host:5.2.7.1.81c2369
  aci_opflex_container: noiro/opflex:5.2.7.1.81c2369
  aci_mcast_container: noiro/opflex:5.2.7.1.81c2369
  aci_ovs_container: noiro/openvswitch:5.2.7.1.81c2369
  aci_controller_container: noiro/aci-containers-controller:5.2.7.1.81c2369
  aci_gbp_server_container: noiro/gbp-server:5.2.7.1.81c2369
  aci_opflex_server_container: noiro/opflex-server:5.2.7.1.81c2369
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
enable_cri_dockerd: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
  tolerations: []
  default_backend: null
  default_http_backend_priority_class_name: ""
  nginx_ingress_controller_priority_class_name: ""
  default_ingress_class: null
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  ignore_proxy_env_vars: false
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
  tolerations: []
  metrics_server_priority_class_name: ""
restore:
  restore: false
  snapshot_name: ""
rotate_encryption_key: false
dns: null
```
* Bring up (or later, update) the cluster

```shell
rke up --config cluster.yml
```
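A successful `rke up` leaves two additional files next to `cluster.yml`: `kube_config_cluster.yml` (the cluster's kubeconfig) and `cluster.rkestate` (the cluster state, required by future `rke up` runs). Both should be backed up; the kubeconfig can be copied into place for kubectl:

```shell
# Generated by rke up, alongside cluster.yml:
#   kube_config_cluster.yml - kubeconfig for the new cluster
#   cluster.rkestate        - cluster state; keep it safe for future rke up runs
mkdir -p ~/.kube
cp kube_config_cluster.yml ~/.kube/config
```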
# 2. Verifying the Kubernetes Cluster
## 2.1 Configure the Kubernetes Command-Line Tool
* Install kubectl

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install kubectl on every master (kubectl can manage the cluster from wherever it is installed, but the masters are the recommended place):

```shell
yum install kubectl-1.26.6 -y
```
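Optionally, shell completion for kubectl can be enabled as well (the `bash-completion` package was already installed in step 1.1):

```shell
# Enable kubectl tab completion for the current user's bash sessions
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```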
* Set up the kubeconfig authentication file. Keep it safe: it contains critical cluster credentials and must not be leaked. The data below is from a test environment.
```shell
[rancher@rke-k8s-master1 ~]$ mkdir -pv ~/.kube   # create the directory manually
cat > ~/.kube/config << EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0VENDQWNtZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEl6TURjd09ERTBNRFkwTTFvWERUTXpNRGN3TlRFME1EWTBNMW93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt0UDlOb1JLbmZNCmtEalpDTVpVak56STgzYXpjUnI2NW9FUjVYQnI4cnJMaThKamQ3OWdrMWMzZElkSTBCN056Q2dvZTROc25oTzUKZ2pKcnk4N1E0YVMvdERSWGNWRXZGSENUMVZxL0xpSUlCRml3RVFQdXc0SGVBSmZHdnRaZVdPUkEvMUMveDdpNQpKdEVlUTB3djhlTVpoc3M5RHB2OVhJOHE1SGJtUzRRVStITzQ2eXNPYjR4Rk5YRkY4RE1OWnNoaFFLY1hDRHQ1CkZNT0xWS0lXU3JmM1BYbkRtV1JBbFcwNEdWV3lFZ2p0TWJZbjllVFM4RUQwTldIVWZzOXIwcDlqRUtMb1BEL04KeDRRSndxQ3BvcmVBdEd0THA4RmV0UUJvQldaRUo2UUxIYWpiaE9paWdMY2l6WGtyY3F2KzVueHlMRDNHM1hrcApWQkUzbER2QStXOENBd0VBQWFOQ01FQXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdIUVlEVlIwT0JCWUVGRG5keDh5aDV6aUh0YTVWcTJFRk9KU00yZGkrTUEwR0NTcUdTSWIzRFFFQkN3VUEKQTRJQkFRQlN5K205SnpPSitXcVhlbjRWMUVFU1BJOWsvbm5XKzVaUGZDUkptOWxhTEozN1JSTDI3YlpGZnBRcQoxbGVMN3pHZGUzWnc4VTNJNk5uc1g2SmF6NlJDS2d4YWxoYjBKYnJQazJGcTZXNWdPVk0xZ3hFSm9Xd2tRVHVwCkM1bnB5TW12ZUJyMjFJMUZOOEVKOHpJUmliNVRDTmR3ZmNOVmR1SmtJY2Nld0hSV3oyNTJEMHNiVzB1aXNOWm8KMlpQbWpYUUwrSEtWdTgxN2NBNU11U3UxSjZVZTluNThodEhWbnpxL3VrRDJ4Zzh6Z0dBeTYxQWk1ZFF3dGJlbQpIbjhXSXZ5NzlKMU5aYnh1V1RkZmV3MklYQWFJNUZzUzAxUlJJWEZmNTIxcFVJZDdQNS9BYTFhWE02eU9JZXI2Cm9kb3BwbGlVK2I2bStsa1laVzhFd1J0T3lCZ3UKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: "https://172.16.213.95:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-admin-local"
  name: "local"
current-context: "local"
users:
- name: "kube-admin-local"
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDakNDQWZLZ0F3SUJBZ0lJVDhBeWIvRHJFVEl3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHlNekEzTURneE5EQTJORE5hRncwek16QTNNRFV4TkRFNE1qUmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXU3OGpmVjBwRFByRnoxZS91a0pwakJkWmppYmIKUVlOMWtualJrbC9TSDdDdlVLUisrNXl0cnZoVHFUcHE5VWp0b1k0Q3lHZVFmVCtUeDlRdEUrTGVMakhPRUZSUQpUbkEyWWd4VksvbGFOT1FGNUtlM24ydW9RQndpdFFubXpRTEs3U3ZiVlQ3Kzhwa2FLaUpOVk96WmltRGg4WkhKClhwU0JFWlY3US91aG0yMnF0cTlIYVdPay9BSmRZR3g1MjVMc3lsdURHY3RVZ0g1cC9LVlpJNU9sNCs0NGdQWloKcDg1eFRvWThuNXkyeWNqMG9JVzByR2dDdUQrcnFyZmhtMG0xU3NwMG1VNjU0N3dPMTNCWTNjNGhjWHRBNjZWUQpGeWlTaWxPNW03QVVkVm5wWnRMaEk4WTdxNGFWcE02YThuaUZVandjdjZYdE15Y0Q4a2xsakFnbW5RSURBUUFCCm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3SHdZRFZSMGoKQkJnd0ZvQVVPZDNIektIbk9JZTFybFdyWVFVNGxJeloyTDR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUg0dQpaU3BOa0NFYkEzOHR6cmdXUy9HYmI0UXRPR2N5TXlmTkZ3ejczcnJaTmJSUTFPajNPdi81bHdYYWg1cCtQMzBRClRtRFVDb3prSW56T004dXV3R00vQVFXKzJDQTFRTEJ6Uk8vbFlyQkNhWnpxL2ljbWI4YnNUMzhhdjFRR3dncncKRnpjWTVjM21iYUZNTnFrb1pCZVloWDN3aDcyT2RDU0xLaWZkYmY0RUNuU0g5MTc1U050VC9QSGNiMzNwVC8yRwpEOFpOU2xxclhEZWhJZzFPN0lmU2wwS0RkL2NRY2lOUkZDT1pMQVFsSThkV2dvWVVUK1RRSGlSWGswQWdYeVRrCk5qZFFIaXQyNkdkV2k2b2laZjlPL0hjbzZ4K0s1WnJ5NFZjbVNCbnVxSnV3WjZ0QmVIdUpMK0NNVm9ybjR5bWQKN3BjUEZOQUZLcFJPUjBlcEZrcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdTc4amZWMHBEUHJGejFlL3VrSnBqQmRaamliYlFZTjFrbmpSa2wvU0g3Q3ZVS1IrCis1eXRydmhUcVRwcTlVanRvWTRDeUdlUWZUK1R4OVF0RStMZUxqSE9FRlJRVG5BMllneFZLL2xhTk9RRjVLZTMKbjJ1b1FCd2l0UW5telFMSzdTdmJWVDcrOHBrYUtpSk5WT3paaW1EaDhaSEpYcFNCRVpWN1EvdWhtMjJxdHE5SAphV09rL0FKZFlHeDUyNUxzeWx1REdjdFVnSDVwL0tWWkk1T2w0KzQ0Z1BaWnA4NXhUb1k4bjV5MnljajBvSVcwCnJHZ0N1RCtycXJmaG0wbTFTc3AwbVU2NTQ3d08xM0JZM2M0aGNYdEE2NlZRRnlpU2lsTzVtN0FVZFZucFp0TGgKSThZN3E0YVZwTTZhOG5pRlVqd2N2Nlh0TXljRDhrbGxqQWdtblFJREFRQUJBb0lCQUNqNjZxTTFqMzFPeTVpYgpmYlVKUkFLWklpb2VIeU9vcnlRZWpSZ1hKRVZZaXB2ZW0vME4wUGR0S3MyNGU1bzRwZTNxa243dDVDTUNtcDQyCm1QUkxROVh2ZHh3bld6UVQyRHNFbUI2MkdkT0xwaUduM2pQRkN2K2JaSlFCcWtnN2dOSE9EZDBJbUJ1YUFaVUsKMGJoa3pvTWU3SktQRU5ZOU1nTUZqdGRpK0g1MVRDcDEwMzZXOXlvYkprVzEwcEtocjhST1VRN2l5TWQySkljMgpQSENXaHUzRnVLUHRsNUxmTHRmY3BUMW55STBVb24yNlE5c05wajFtWjJsLzZmbFJQWXRDRVlpZERFM0YwZ3NwClVYeWowNzNtaEQyejNEMEVHWXJLMDdBaC9lMmhzU0RlaU02SFp2a0RVa0FON3VISytXZi9iYlplRURsbFU5ZmgKejc4VGtrRUNnWUVBd2J3azhSRlBMNFhmU2Q3RVJ1K21Ea0lWNk41ekFYSzF6Z0grWGtabThleTFTMzN3eHBFdAptditSNnVNWTd6Q3ZhVGxhSVdZUVNMQkE0eTJJazRnSW1pV2RFN0xtcCtDVmZhWmJtd0lHbXFoZ0Y1Y2lmWEs3ClpoeWVDVUF0T21Lb0psMlJwN29EdzJ1Q29QR04zcS91MGY4TUxhdVJWNWtlNjdNTU0zQUZ3SzBDZ1lFQStCWk0KQmxlYlBKOTlHbmh3WUhEQzdxT0NBeWJKRjVUaGQ0QkVUdnBCaGtXenJ4U2dpeWI0dGZoR3JKTWpvRWlIaGF0NwpwaFBlRERDdHlqb0VEMlpiMkV1YlI5VnB0OS9KUjRoamF3VDRVbFoxOGN1ZnVjVHY2UnM0bjhsaElvN1RwSnBwCitwajdwK1kxZUF5SUxUM0RKSCt0NDlmdnZvRGNOYmU4NkRnd2k3RUNnWUVBc2NXY1BGeit4WVBaYmVadFF3NWEKMk5DSlhFTHJVeFBZZ2UzUVpOL0RUUkZCRnNHODgraDU2YlhFUnI0V3ZqMTFhRi9KTmNaN0FNaEM4bk53MUxmSgo5UEM0M3orVmFjeXFRRDhyNWVRSS9WZXR2VmZndlM1UGlaYU82YndyQkYxTklNOVJmWkF5TGRyMFpnemhlc3NECm9VeWc5ek5zemUzaXNyTjhhYUxNbEkwQ2dZRUF2ZnI5THlJcGUrdzZ0bW1pelFldEQyaGhLSjZzQWdYOS96QlgKbnc5ZjNENUdVbjMrVDNHQnBvQkJSdWpLc0hTNmEyK2RtZG0vQWlESkJZTVdGdUR3MXB0WGgxUHp5RjUwV2ZZbApCQkJqUlZKMnNicVlUMzl6cFZRMk1ZN2Fkc2RmWmI3bUI0VGR1bjY5VlhoclZCSG0vVzFWTVpUc1FEdVg1djhVCmg5UjN3SkVDZ1lCcjNRVjAvSUFxYVpINzZ6VWxmaWVKeEZSUFplZFl4TTYvcGl3MjdmOHVYWCs3YUVCM1VSSUUKSXR6QlJKcmNPb1ZoVEZITjdDaDdiVnZIbndhdTdYWkVmRWZEOWZmbjAvWnozbEdzSmIraEdsdXlrVzQ4L2RXawpRa3BNQ1AwU2tyVHRXdk0vbk1sUTBNOGxmWXd3ZTZBL1dEdVN3WDIvc3NGMldWQkV4UVJZMEE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
EOF
```
Query node status with kubectl:

```
[root@rke-k8s-master1 ~]# kubectl get nodes
NAME              STATUS   ROLES               AGE   VERSION
rke-k8s-master1   Ready    controlplane,etcd   46m   v1.26.6
rke-k8s-master2   Ready    controlplane,etcd   46m   v1.26.6
rke-k8s-master3   Ready    controlplane,etcd   46m   v1.26.6
rke-k8s-worker1   Ready    worker              46m   v1.26.6
rke-k8s-worker2   Ready    worker              46m   v1.26.6
rke-k8s-worker3   Ready    worker              32m   v1.26.6
```
## 2.2 View Resource Information
```
[root@rke-k8s-master1 ~]# kubectl get pods -A
NAMESPACE       NAME                                       READY   STATUS      RESTARTS      AGE
ingress-nginx   nginx-ingress-controller-gl2nv             1/1     Running     6 (36m ago)   46m
ingress-nginx   nginx-ingress-controller-x9bvz             1/1     Running     6 (40m ago)   46m
ingress-nginx   nginx-ingress-controller-z4m2r             1/1     Running     0             33m
kube-system     calico-kube-controllers-777cbf8f9c-94kpj   1/1     Running     0             46m
kube-system     canal-8stv9                                2/2     Running     0             33m
kube-system     canal-cnn56                                2/2     Running     2 (40m ago)   46m
kube-system     canal-thlr7                                2/2     Running     2 (36m ago)   46m
kube-system     canal-vrhlj                                2/2     Running     0             46m
kube-system     canal-wtq6g                                2/2     Running     0             46m
kube-system     canal-znbd9                                2/2     Running     0             46m
kube-system     coredns-66b64c55d4-ftn6v                   1/1     Running     0             40m
kube-system     coredns-66b64c55d4-kdn9r                   1/1     Running     1 (36m ago)   46m
kube-system     coredns-autoscaler-5567d8c485-4288l        1/1     Running     1 (36m ago)   46m
kube-system     metrics-server-7886b5f87c-d4gx6            1/1     Running     1 (36m ago)   41m
kube-system     rke-coredns-addon-deploy-job-m5fcz         0/1     Completed   0             46m
kube-system     rke-ingress-controller-deploy-job-9kqb8    0/1     Completed   0             46m
kube-system     rke-metrics-addon-deploy-job-g4xx9         0/1     Completed   0             46m
kube-system     rke-network-plugin-deploy-job-zzfnl        0/1     Completed   0             46m
```
# 3. Deploying Applications
## 3.1 Install Rancher, the Kubernetes Web Management Tool
* Install Helm

```shell
cd /usr/local/bin
sudo wget https://rancher-mirror.rancher.cn/helm/v3.10.3/helm-v3.10.3-linux-amd64.tar.gz
sudo tar -zxf helm-v3.10.3-linux-amd64.tar.gz
sudo cp linux-amd64/helm ./
sudo chmod +x helm
sudo chown -R rancher:rancher helm
```
* Create the certificate directory

```shell
mkdir -p /data1/rancher/cert
cd /data1/rancher/cert
```
## 3.2 One-Shot Certificate Generation Script

Generate the certificates and copy them into the directory created above.
```shell
#!/bin/bash -e

help ()
{
    echo  ' ================================================================ '
    echo  ' --ssl-domain: primary domain for the SSL certificate; defaults to rancher.toutiao.com if not given; can be ignored when the server is accessed by IP;'
    echo  ' --ssl-trusted-ip: an SSL certificate normally only trusts domain names; to access the server by IP, add extension IPs here, comma-separated;'
    echo  ' --ssl-trusted-domain: to allow access via additional domains, add extension domains (SSL_TRUSTED_DOMAIN), comma-separated;'
    echo  ' --ssl-size: SSL key size in bits, default 2048;'
    echo  ' --ssl-cn: country code (2-letter), default CN;'
    echo  ' Usage example:'
    echo  ' ./create_self-signed-cert.sh --ssl-domain=rancher.toutiao.com --ssl-trusted-domain=rancher.toutiao.com \ '
    echo  ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=36500'
    echo  ' ================================================================'
}

case "$1" in
    -h|--help) help; exit;;
esac

if [[ $1 == '' ]];then
    help;
    exit;
fi

CMDOPTS="$*"
for OPTS in $CMDOPTS;
do
    key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
    value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
    case "$key" in
        --ssl-domain) SSL_DOMAIN=$value ;;
        --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
        --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
        --ssl-size) SSL_SIZE=$value ;;
        --ssl-date) SSL_DATE=$value ;;
        --ca-date) CA_DATE=$value ;;
        --ssl-cn) CN=$value ;;
    esac
done

# CA settings
CA_DATE=${CA_DATE:-36500}
CA_KEY=${CA_KEY:-cakey.pem}
CA_CERT=${CA_CERT:-cacerts.pem}
CA_DOMAIN=cattle-ca

# SSL settings
SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
SSL_DATE=${SSL_DATE:-36500}
SSL_SIZE=${SSL_SIZE:-2048}

## Country code (2-letter), default CN
CN=${CN:-CN}

SSL_KEY=$SSL_DOMAIN.key
SSL_CSR=$SSL_DOMAIN.csr
SSL_CERT=$SSL_DOMAIN.crt

echo -e "\033[32m ---------------------------- \033[0m"
echo -e "\033[32m | Generating the SSL cert  | \033[0m"
echo -e "\033[32m ---------------------------- \033[0m"

if [[ -e ./${CA_KEY} ]]; then
    echo -e "\033[32m ====> 1. Existing CA key found; backing up "${CA_KEY}" as "${CA_KEY}"-bak, then recreating \033[0m"
    mv ${CA_KEY} "${CA_KEY}"-bak
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
else
    echo -e "\033[32m ====> 1. Generating new CA key ${CA_KEY} \033[0m"
    openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
fi

if [[ -e ./${CA_CERT} ]]; then
    echo -e "\033[32m ====> 2. Existing CA cert found; backing up "${CA_CERT}" as "${CA_CERT}"-bak, then recreating \033[0m"
    mv ${CA_CERT} "${CA_CERT}"-bak
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
else
    echo -e "\033[32m ====> 2. Generating new CA cert ${CA_CERT} \033[0m"
    openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
fi

echo -e "\033[32m ====> 3. Generating the openssl config ${SSL_CONFIG} \033[0m"
cat > ${SSL_CONFIG} <<EOM
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOM

if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} || -n ${SSL_DOMAIN} ]]; then
    cat >> ${SSL_CONFIG} <<EOM
subjectAltName = @alt_names
[alt_names]
EOM
    IFS=","
    dns=(${SSL_TRUSTED_DOMAIN})
    dns+=(${SSL_DOMAIN})
    for i in "${!dns[@]}"; do
        echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
    done

    if [[ -n ${SSL_TRUSTED_IP} ]]; then
        ip=(${SSL_TRUSTED_IP})
        for i in "${!ip[@]}"; do
            echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
        done
    fi
fi

echo -e "\033[32m ====> 4. Generating the server SSL KEY ${SSL_KEY} \033[0m"
openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}

echo -e "\033[32m ====> 5. Generating the server SSL CSR ${SSL_CSR} \033[0m"
openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}

echo -e "\033[32m ====> 6. Generating the server SSL CERT ${SSL_CERT} \033[0m"
openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
    -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
    -days ${SSL_DATE} -extensions v3_req \
    -extfile ${SSL_CONFIG}

echo -e "\033[32m ====> 7. Certificate generation complete \033[0m"
echo
echo -e "\033[32m ====> 8. Printing the results in YAML format \033[0m"
echo "----------------------------------------------------------"
echo "ca_key: |"
cat $CA_KEY | sed 's/^/  /'
echo
echo "ca_cert: |"
cat $CA_CERT | sed 's/^/  /'
echo
echo "ssl_key: |"
cat $SSL_KEY | sed 's/^/  /'
echo
echo "ssl_csr: |"
cat $SSL_CSR | sed 's/^/  /'
echo
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo
echo -e "\033[32m ====> 9. Appending the CA cert to the cert file \033[0m"
cat ${CA_CERT} >> ${SSL_CERT}
echo "ssl_cert: |"
cat $SSL_CERT | sed 's/^/  /'
echo
echo -e "\033[32m ====> 10. Renaming the server certificate \033[0m"
echo "cp ${SSL_DOMAIN}.key tls.key"
cp ${SSL_DOMAIN}.key tls.key
echo "cp ${SSL_DOMAIN}.crt tls.crt"
cp ${SSL_DOMAIN}.crt tls.crt
```
* Generate the certificates (these are for Rancher, not the Kubernetes cluster) with a 100-year validity

```shell
[root@rke-k8s-master1 data]# chmod a+x create_self-signed-cert.sh
[root@rke-k8s-master1 data]# ./create_self-signed-cert.sh --ssl-domain=rancher.toutiao.com --ssl-trusted-domain=www.toutiao.com --ssl-size=2048 --ssl-date=36500
```
* Copy the certificates

```shell
[root@rke-k8s-master1 create]# cp tls.* ../cert/
[root@rke-k8s-master1 data]# cp ca* rancher/cert/
[root@rke-k8s-master1 data]# chown -R rancher:docker rancher
```
## 3.3 Create the Secrets in Kubernetes
```shell
cd /data1/rke/rancher/cert/
kubectl create ns cattle-system

# Create the ingress TLS secret
kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=./tls.crt --key=./tls.key

# Create the CA certificate secret
kubectl -n cattle-system create secret generic tls-ca --from-file=./cacerts.pem

# Point helm at a Rancher chart mirror
helm repo add rancher-latest http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/latest
```
## 3.4 Install Rancher
```shell
[rancher@rke-k8s-master1 cert]$ helm repo add rancher-latest http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/latest
"rancher-latest" has been added to your repositories

[rancher@rke-k8s-master1 cert]$ helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set systemDefaultRegistry=registry.cn-hangzhou.aliyuncs.com \
  --set hostname=rancher.toutiao.com \
  --set ingress.tls.source=secret \
  --set privateCA=true
```
* Wait for the installation to complete
````
[root@rke-k8s-master1 cert]# helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set systemDefaultRegistry=registry.cn-hangzhou.aliyuncs.com \
  --set hostname=rancher.toutiao.com \
  --set ingress.tls.source=secret \
  --set privateCA=true
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME: rancher
LAST DEPLOYED: Sat Jul  8 23:17:22 2023
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://rancher.toutiao.com to get started.

If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:

```
echo https://rancher.toutiao.com/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
```

To get just the bootstrap password on its own, run:

```
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```

Happy Containering!

[root@rke-k8s-master1 cert]# kubectl get ingress -A
NAMESPACE       NAME      CLASS   HOSTS                 ADDRESS                                        PORTS     AGE
cattle-system   rancher   nginx   rancher.toutiao.com   172.16.213.161,172.16.213.163,172.16.213.165   80, 443   10m
````
## 3.5 Install Monitoring (from the Rancher UI) and Check the Pods
```shell
kubectl --namespace cattle-monitoring-system get pods -l "release=rancher-monitoring"
```
## 3.6 Adding or Removing Nodes
1. Edit cluster.yml: add the new node entries or change the roles of existing nodes, then save the file.
2. Run RKE with the update-only flag:

```shell
rke up --update-only --config ./cluster.yml
```
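For instance, adding a fourth worker is just another entry in the `nodes:` list of cluster.yml (the IP and hostname below are hypothetical; the optional fields that RKE defaults are omitted for brevity):

```yaml
# Hypothetical new worker appended under nodes: in cluster.yml
- address: 172.16.213.167
  port: "22"
  role:
  - worker
  hostname_override: "rke-k8s-worker4"
  user: rancher
  docker_socket: /run/docker.sock
  ssh_key_path: ~/.ssh/id_rsa
```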
## 3.7 Cluster Certificate Management
```shell
# Rotate all certificates
rke cert rotate [--config cluster.yml]

# Rotate the CA certificate and all service certificates
rke cert rotate --rotate-ca

# Rotate a single service's certificate
rke cert rotate --service etcd

# Check a certificate's validity period
openssl x509 -in /etc/kubernetes/ssl/kube-apiserver.pem -noout -dates
```

After rotating all certificates, the KUBECONFIG file changes, so replace the old copy:

```shell
cp kube_config_cluster.yml $HOME/.kube/config
```
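To see what the `-dates` check prints without touching the cluster, it can be run against a throwaway self-signed certificate — a minimal demo (the paths under /tmp and the CN are arbitrary):

```shell
# Generate a throwaway key and certificate valid for 365 days,
# then inspect its notBefore/notAfter dates
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -dates
```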
# 4. Configure hosts and Access Rancher
Once installed, Rancher can be reached through any worker node's address; here we simply put an nginx reverse proxy in front of the workers.
```shell
# Note the quoted 'EOF': it stops the shell from expanding nginx's $variables
cat > /etc/nginx/conf.d/rancher.conf <<'EOF'
upstream rke-worker {
    server 172.16.213.161:443;
    server 172.16.213.163:443;
    server 172.16.213.165:443;
}

server {
    listen 443 ssl;
    server_name rancher.toutiao.com;
    access_log /usr/share/nginx/logs/rancher.access.log main;
    error_log /usr/share/nginx/logs/rancher.error.log;
    index index.html index.htm;
    ssl_certificate cert/server.pem;
    ssl_certificate_key cert/server.key;
    ssl_session_timeout 5m;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://rke-worker;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
```
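After writing the configuration, validate the syntax and reload nginx on the proxy host (this assumes nginx is already installed there, e.g. on rke-k8s-nginx, 172.16.213.104):

```shell
# Validate the configuration, then reload only if the check passes
nginx -t && systemctl reload nginx
```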
On first access you will be prompted to set a password; the default user is admin.
Copyright notice:
Author: freeclashnode
Link: https://www.freeclashnode.com/news/article-1967.htm
Source: FreeClashNode
The copyright belongs to the author. Please do not reproduce without permission.