Logical Topology (work in progress)
Requirements (work in progress)
Workflow
Networking and firewall setup. Execute on each node as the root user.
## Map node names in /etc/hosts. Append (do not overwrite, or you lose the localhost entries):
cat >> /etc/hosts <<EOF
10.1.2.203 k8s01.master01
10.1.2.204 k8s01.worker01
10.1.2.205 k8s01.worker02
EOF
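A quick sanity check that each mapping resolves (the `k8s01.*` names are the ones defined above):

```shell
# Verify each node name from /etc/hosts resolves to an address
for h in k8s01.master01 k8s01.worker01 k8s01.worker02; do
  getent hosts "$h" || echo "$h missing from /etc/hosts"
done
```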
## Disabling ufw service.
systemctl stop ufw
systemctl disable ufw
## Disabling swap support (the kubelet will not run with swap enabled).
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # keep swap disabled across reboots
## Allowing required ports.
The exact steps depend on your firewall environment.
Open ports 6443, 2379-2380, and 10250-10252 on the master node.
Open ports 10250 and 30000-32767 on the worker nodes.
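If you keep a host firewall instead of disabling ufw, the port lists above translate into iptables rules along these lines. This is only a sketch; adapt it to whatever firewall tooling you actually run:

```shell
# Port lists from the text above, in iptables --dport syntax
master_ports="6443 2379:2380 10250:10252"
worker_ports="10250 30000:32767"
# Print the rules for review; run them as root on the matching node to apply
for p in $master_ports; do
  echo "master: iptables -A INPUT -p tcp --dport $p -j ACCEPT"
done
for p in $worker_ports; do
  echo "worker: iptables -A INPUT -p tcp --dport $p -j ACCEPT"
done
```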
## Letting iptables see bridged traffic.
## Make sure the br_netfilter module is loaded; check with lsmod | grep br_netfilter
## and load it explicitly with modprobe br_netfilter if needed.
## For your Linux node's iptables to correctly see bridged traffic, ensure
## net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
lsmod | grep br_netfilter || modprobe br_netfilter
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
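After `sysctl --system`, both bridge keys should report `1`. This loop prints each value, or `unset` when the key is not available yet (the keys only exist once br_netfilter is loaded):

```shell
# Post-check: both bridge sysctls should print 1 once br_netfilter is loaded
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  sysctl -n "$key" 2>/dev/null || echo "unset"
done
```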
Node preparation. Execute on each node with the kubernetes user.
## Update and upgrading system.
apt update -y; apt upgrade -y; apt autoremove -y; apt dist-upgrade -y; reboot
## Install required packages and docker
apt install -y docker.io; docker version
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
## Adding kubernetes repository.
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
## Install kubectl, kubelet, and kubeadm.
apt-get update -y
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
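To confirm the pin took effect (`apt-mark showhold` lists each held package on its own line):

```shell
# Check that kubelet, kubeadm and kubectl are all on hold
held=$(apt-mark showhold 2>/dev/null || true)
for pkg in kubelet kubeadm kubectl; do
  echo "$held" | grep -qx "$pkg" && echo "$pkg held" || echo "$pkg NOT held"
done
```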
Configure the cgroup driver for Docker. Execute on all nodes with the kubernetes user.
mkdir -p /etc/docker
cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
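A syntax error in daemon.json stops the Docker daemon from starting at all, so it is worth validating the file before restarting Docker. This sketch assumes python3 is installed and uses its stdlib JSON parser:

```shell
# Validate /etc/docker/daemon.json before (re)starting Docker
python3 -m json.tool /etc/docker/daemon.json >/dev/null 2>&1 \
  && echo "daemon.json OK" \
  || echo "daemon.json missing or not valid JSON"
```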
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
Kubernetes master setup. Execute on the master node with the kubernetes user.
## (Recommended) If you plan to upgrade this single control-plane kubeadm cluster to high availability,
## specify --control-plane-endpoint to set the shared endpoint for all control-plane nodes.
## Such an endpoint can be either a DNS name or the IP address of a load balancer.
## Init would then look like: kubeadm init --control-plane-endpoint "LoadBalancerIP:Port" --upload-certs --pod-network-cidr 192.168.0.0/16
## You must prepare the load balancer yourself, using HAProxy or nginx.
## Master initialization
kubeadm init --pod-network-cidr=192.168.0.0/16
## Copy the configuration for a non-root user #SKIP
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
## For the root user, run this instead:
export KUBECONFIG=/etc/kubernetes/admin.conf
## Verify connectivity to the container image registry
kubeadm config images pull
## Install Network Plugin. Choose one.
## Flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
watch -n1 kubectl get pods --all-namespaces   ## wait until all pods are Running and Ready
## Calico (https://docs.projectcalico.org/getting-started/kubernetes/quickstart)
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
watch -n1 kubectl get pods --all-namespaces   ## wait until all pods are Running and Ready
## Displaying token and token-ca-cert-hash for worker initialization.
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
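The discovery hash above is just the SHA-256 digest of the CA public key in DER form. You can dry-run the same pipeline against a throwaway self-signed certificate to see the expected output format (64 hex characters):

```shell
# Generate a throwaway cert as a stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test-ca" \
  -keyout /tmp/test-ca.key -out /tmp/test-ca.crt -days 1 2>/dev/null
# Same pipeline as above, pointed at the test cert
hash=$(openssl x509 -pubkey -in /tmp/test-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"
```

In practice, `kubeadm token create --print-join-command` prints a ready-made join command with both the token and the hash filled in.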
## Joining an additional control-plane node (only when the cluster was initialized with --control-plane-endpoint)
kubeadm join 10.1.2.206:443 --token xxx \
    --discovery-token-ca-cert-hash sha256:yyy \
    --control-plane --certificate-key zzz
## Retrieve a new certificate key with: kubeadm init phase upload-certs --upload-certs
Kubernetes worker setup. Execute on each worker node with the kubernetes user.
## Join to master
kubeadm join --token [TOKEN] [NODE-MASTER]:6443 --discovery-token-ca-cert-hash sha256:[TOKEN-CA-CERT-HASH]
## Verify from the master node (the worker has no kubeconfig by default)
kubectl config view
kubectl cluster-info
kubectl get nodes
Kubernetes Dashboard Setup. Execute on the master node with the kubernetes user.
## Installation
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
## Verification
kubectl get svc --all-namespaces -o wide
kubectl -n kubernetes-dashboard get svc
## Set permissive RBAC permissions (not for production: this grants cluster-admin to all service accounts)
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
## Accessing the Kubernetes Dashboard
kubectl get secrets; kubectl describe secrets [secret-name]
kubectl --kubeconfig config/admin.conf -n kube-system describe secret $(kubectl --kubeconfig config/admin.conf -n kube-system get secret | grep admin-user | awk '{print $1}')
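Instead of the permissive binding above, a narrower option is a dedicated dashboard login ServiceAccount bound to cluster-admin, as the upstream dashboard docs describe. A sketch (the `admin-user` name and the `kubernetes-dashboard` namespace follow the dashboard's recommended.yaml; this manifest requires the running cluster):

```shell
# Create a dashboard login ServiceAccount bound to cluster-admin
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Reach the dashboard through the API server proxy:
# kubectl proxy   # then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```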
## (Optional) Allow scheduling regular pods on the master node
kubectl taint nodes --all node-role.kubernetes.io/master-
Manage the cluster using kubectl from outside the master control plane.
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
Reference : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/