Rancher Cluster Installation
Last updated on November 22, 2024
🧙 Questions
Install a Kubernetes cluster with RKE and manage it with Rancher
☄️ Ideas
Prerequisites
- At least three servers
- The servers must run CentOS 7
- The servers must be able to ping each other
hostname | Spec | Public IP | Private IP |
---|---|---|---|
master | 4-core 8 GB | 59.110.13.66 | 172.23.86.67 |
slave1 | 4-core 8 GB | 101.201.100.123 | 172.23.86.65 |
slave2 | 4-core 8 GB | 101.201.100.110 | 172.23.86.66 |
1. Create the sudo user ispong
Note: Run on all three servers
useradd ispong
passwd ispong
vim /etc/sudoers
# Search for the line: ## Allow root to run any commands anywhere
# and add below it: ispong ALL=(ALL) ALL
su ispong
2. Disable the firewall
Note: Run on all three servers
sudo systemctl stop firewalld
# Also disable it so it stays off after a reboot
sudo systemctl disable firewalld
sudo systemctl status firewalld
3. Disable SELinux
Note: Run on all three servers
# Check the current SELinux status
getenforce
sudo vim /etc/selinux/config
# Set SELINUX=disabled (takes effect after a reboot)
# To switch to permissive mode immediately without rebooting:
# sudo setenforce 0
4. Set the server hostnames
Note: Run on all three servers
# On the master server
sudo hostnamectl set-hostname master
# Start a new shell session so it picks up the new hostname
newgrp ispong
# On the slave1 server
# sudo hostnamectl set-hostname slave1
# On the slave2 server
# sudo hostnamectl set-hostname slave2
# Verify
uname -a
# The following entries must be identical on all three servers
sudo vim /etc/hosts
# 172.23.86.67 master
# 172.23.86.65 slave1
# 172.23.86.66 slave2
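The hosts entries above can also be appended script-style, which avoids duplicates when the step is re-run; a minimal sketch that uses a temp file as a stand-in for /etc/hosts so it is safe to dry-run:

```shell
# Stand-in for /etc/hosts; point this at the real file on an actual node
hosts=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts"
# Append each cluster entry only if that exact line is not already present
for entry in '172.23.86.67 master' '172.23.86.65 slave1' '172.23.86.66 slave2'; do
  grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
cat "$hosts"
```

Because of the grep guard, running the loop twice leaves the file unchanged.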
5. Upgrade OpenSSH
Note: Run on all three servers
sudo yum -y update openssh
ssh -V
6. Synchronize the server clocks
Note: Run on all three servers
sudo systemctl status ntpd
sudo ntpdate -u ntp.aliyun.com
sudo hwclock --systohc
7. Install Docker
Note: Run on all three servers
# Remove any older Docker packages
sudo yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
# Install yum-utils
sudo yum install -y yum-utils
# Add the Aliyun Docker CE yum repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild the yum cache
sudo yum makecache all
# Install Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io
# Enable Docker on boot
sudo systemctl enable docker
# Start the Docker service
sudo systemctl start docker
# Configure the Aliyun registry mirror for Docker
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://3fe1zqfu.mirror.aliyuncs.com"],
"data-root":"/data/docker"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker
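A malformed /etc/docker/daemon.json will stop Docker from starting, so it can help to syntax-check the JSON before restarting; a sketch using a temp file, assuming python3 is available:

```shell
# Write the mirror config to a temp file and validate it before touching /etc/docker
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": ["https://3fe1zqfu.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}
EOF
# json.tool exits non-zero on a syntax error, so this only prints on success
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: valid JSON"
```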
# Downgrade Docker to 19.03, a release this version of RKE supports
sudo systemctl stop docker
sudo yum downgrade --setopt=obsoletes=0 -y docker-ce-19.03.13-3.el7 docker-ce-cli-19.03.13-3.el7 containerd.io
sudo systemctl start docker
# Add the `ispong` user to the docker group
sudo gpasswd -a ispong docker
# Pick up the new group membership without logging out
newgrp docker
sudo chmod a+rw /var/run/docker.sock
8. Set up passwordless SSH
Note: Only needed on the master node, run as the ispong user
ssh-keygen
ssh-copy-id ispong@master
ssh-copy-id ispong@slave1
ssh-copy-id ispong@slave2
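The three copies can also be scripted in a loop; a dry-run sketch that only prints the commands (drop the echo to actually run them once the hosts are reachable):

```shell
# Dry run: print one ssh-copy-id command per cluster host
for host in master slave1 slave2; do
  echo ssh-copy-id "ispong@$host"
done
```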
9. Pre-create the installation directory
Note: Run on all three servers
sudo mkdir -p /var/lib/rancher/etcd/
sudo chown -R ispong:ispong /var/lib/rancher/etcd/
10. Install RKE
# Download the binary locally, then upload it to the master:
# wget https://github.com/rancher/rke/releases/download/v1.1.11/rke_linux-amd64
scp rke_linux-amd64 ispong@59.110.13.66:~/
sudo mv rke_linux-amd64 /usr/bin/rke
sudo chmod +x /usr/bin/rke
rke -version
11. Write the Kubernetes cluster config
Note:
Do not use 127.0.0.1. Both address and internal_address can be set to the private IP; it is best to avoid public IPs, because Alibaba Cloud gates ports through security groups and you would have to open every port by hand, which is tedious.
# Generate a template with the CLI
# rke config --name cluster.yml
# Or create the file by hand
vim cluster.yml
nodes:
- address: 172.23.86.67
  internal_address: 172.23.86.67
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: master
  user: ispong
- address: 172.23.86.65
  internal_address: 172.23.86.65
  role:
  - worker
  - etcd
  hostname_override: slave1
  user: ispong
- address: 172.23.86.66
  internal_address: 172.23.86.66
  role:
  - worker
  - etcd
  hostname_override: slave2
  user: ispong
services:
  etcd: # stores cluster state and data
    image: ""
    extra_args:
      data-dir: "/var/lib/etcd" # etcd data directory inside the container
    extra_binds:
    - /data/etcd:/var/lib/etcd/:z # bind-mount to a host path
    snapshot: true # enable snapshots
    retention: 24h # keep snapshots for 24 hours
    creation: 6h # take a snapshot every 6 hours
  kube-api: # handles requests for all Kubernetes objects
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: "1-65535" # NodePort service port range; default is 30000-32767
    pod_security_policy: false
    always_pull_images: false
  kube-controller: # CIDR pools used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler: # schedules cluster workloads
  kubelet: # the Kubernetes node agent that manages containers on each node
    extra_args:
      max-pods: "250" # maximum number of pods per node
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy: # manages endpoints for the TCP/UDP ports Kubernetes creates
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.66
  nginx_proxy: rancher/rke-tools:v0.1.66
  cert_downloader: rancher/rke-tools:v0.1.66
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.66
  kubedns: rancher/k8s-dns-kube-dns:1.15.2
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.2
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.2
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.9
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.7
  kubernetes: rancher/hyperkube:v1.18.10-rancher1
  flannel: rancher/coreos-flannel:v0.12.0
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/calico-node:v3.13.4
  calico_cni: rancher/calico-cni:v3.13.4
  calico_controllers: rancher/calico-kube-controllers:v3.13.4
  calico_ctl: rancher/calico-ctl:v3.13.4
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  canal_node: rancher/calico-node:v3.13.4
  canal_cni: rancher/calico-cni:v3.13.4
  canal_flannel: rancher/coreos-flannel:v0.12.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  weave_node: weaveworks/weave-kube:2.6.4
  weave_cni: weaveworks/weave-npc:2.6.4
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.35.0-rancher2
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.4
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: true # skip the Docker version check
kubernetes_version: "" # pin a specific Kubernetes version
private_registries: [] # private image registries
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
cluster_name: "ispong-cluster" # cluster name
cloud_provider:
  name: ""
prefix_path: "/data/rke" # rke install prefix; default is /opt/rke
addon_job_timeout: 0
bastion_host: # bastion (jump) host configuration
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
restore:
  restore: false
  snapshot_name: ""
dns: null
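Before running rke up, a useful pre-flight is to pull the node addresses out of cluster.yml so each one can be checked for reachability (e.g. with ping or ssh); a self-contained sketch in which a trimmed copy of the nodes section stands in for the real file:

```shell
# Trimmed stand-in for cluster.yml; on a real node, point awk at cluster.yml itself
cat > /tmp/cluster-sketch.yml <<'EOF'
nodes:
- address: 172.23.86.67
- address: 172.23.86.65
- address: 172.23.86.66
EOF
# Print the address of every node entry
awk '$1 == "-" && $2 == "address:" {print $3}' /tmp/cluster-sketch.yml
```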
12. Install the Kubernetes cluster
Note:
Do not run the install with sudo; otherwise rke uses root's SSH keys and cannot reach the worker nodes.
You may see a "chmod: /var/lib/rancher/etcd/: No such file or directory" error; ignore it and keep running the install command.
rke up --config cluster.yml
13. Install kubectl
# wget https://dl.k8s.io/v1.20.6/kubernetes-client-linux-amd64.tar.gz
scp kubernetes-client-linux-amd64.tar.gz ispong@59.110.13.66:~/
tar -vzxf kubernetes-client-linux-amd64.tar.gz
sudo mv kubernetes/client/bin/kubectl /usr/bin/kubectl
sudo chmod +x /usr/bin/kubectl
14. Configure kubectl
mkdir -p ~/.kube
cp kube_config_cluster.yml ~/.kube/config
kubectl version
# Verify the cluster is running
kubectl get pods --all-namespaces
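The copy step above can be sketched with an existence check; in this sketch a temp directory stands in for $HOME and a stub file for the kube_config_cluster.yml that rke generated:

```shell
# Stand-ins so the sketch runs anywhere: a temp HOME and a stub kubeconfig
home=$(mktemp -d)
echo 'apiVersion: v1' > "$home/kube_config_cluster.yml"
# Same shape as the real step: mkdir -p ~/.kube && cp kube_config_cluster.yml ~/.kube/config
mkdir -p "$home/.kube"
cp "$home/kube_config_cluster.yml" "$home/.kube/config"
test -f "$home/.kube/config" && echo "kubeconfig installed"
```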
15. Install Rancher
Note:
Prefer not to bind ports 80 and 443; they may be needed by the main website
Open ports 8080 and 4443 in the Alibaba Cloud security group
Prefer not to use rancher/rancher:stable: Rancher 2.5+ bundles an embedded K3s, and bringing that K3s back up after a server restart is a hassle
sudo docker run -d --name rancher --privileged --restart=unless-stopped -p 8080:80 -p 4443:443 rancher/rancher:v2.4.4
docker ps | grep rancher/rancher:v2.4.4
16. Configure Rancher
Note:
Open https://59.110.13.66:4443 in a browser
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ispong
# Run the import command that the Rancher UI generates when adding the cluster
curl --insecure -sfL https://172.23.86.67:4443/v3/import/2qksm8lxgpvwdgvhgn8ms92gk62ntlp6q28mxpw24mzbhnlzh22xpx_c-6ttjp.yaml | kubectl apply -f -