K8s deployment problems I ran into
systemctl start kubelet failed, and journalctl -xeu kubelet showed the failure message:
Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup>
The full message is:
misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
I learned from https://github.com/kubernetes/kubernetes/issues/43805
and changed Docker's cgroup driver to systemd by editing /usr/lib/systemd/system/docker.service, adding --exec-opt native.cgroupdriver=systemd to the dockerd command line.
Then systemctl daemon-reload; systemctl start kubelet succeeded.
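For reference, a minimal sketch of the edited line; the other dockerd flags are the ones the Docker CE package typically ships and may differ on your system, only the --exec-opt part is the actual change:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
Note the new flag only takes effect after systemctl daemon-reload and a restart of the docker service.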
By the way, docker info | grep -i cgroup will show which cgroup driver Docker is using.
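The kubelet side can be checked too; on a kubeadm-provisioned node the driver usually ends up in /var/lib/kubelet/config.yaml (that path is an assumption about a kubeadm-style install):
grep cgroupDriver /var/lib/kubelet/config.yaml ## kubelet side; must report the same driver as docker info, otherwise the kubelet refuses to start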
Node not ready, reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
This means I need a network plugin. I read
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
and executed the installation steps from
https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
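At the time I followed it, the quickstart boiled down to roughly these two commands (the tigera-operator.yaml URL is recalled from the quickstart and may have changed since; custom-resources.yaml is the same file edited below):
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml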
But I still got the NotReady message. Finally I realized I had missed a detail: the quickstart says you may need to change the default IP pool CIDR to match your pod network CIDR, and I had skipped that.
So I downloaded https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml with wget and edited the cidr; after waiting about 20 minutes, all the nodes were Ready.
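The part of custom-resources.yaml that matters is the IP pool inside the Installation resource. The values below other than cidr are the quickstart defaults as I remember them and may differ between Calico versions, and 10.244.0.0/16 is only an example that must match the --pod-network-cidr given to kubeadm init:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 10.244.0.0/16 ## change this from the default to your pod network CIDR
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
To watch the rollout, kubectl get pods -n calico-system and kubectl get nodes show when everything flips to Running/Ready.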
k logs -p -n kubeapps kubeapps-internal-apprepository-controller-685685876b-kk2kl failed with:
Error from server: Get "https://192.168.0.204:10250/containerLogs/kubeapps/kubeapps-internal-apprepository-controller-685685876b-kk2kl/controller?previous=true": dial tcp 192.168.0.204:10250: i/o timeout
I had already checked that firewalld was disabled.
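A quick way to confirm it is a connectivity problem to the kubelet port is to probe it from the machine running kubectl (assuming nc is installed; 192.168.0.204:10250 is taken from the error above):
nc -vz 192.168.0.204 10250 ## hangs or times out while the problem exists, connects once it is fixed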
Some people said flushing iptables would work, but it did not really work for me:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start docker
systemctl start kubelet
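To check whether the flush actually removed the offending rules, the remaining rules can be listed (plain iptables options, nothing specific to this setup):
iptables -S ## filter table
iptables -t nat -S ## nat table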
I solved this by re-stopping firewalld with:
systemctl start firewalld ## this first failed with "unit file firewalld.service is masked", fixed by systemctl unmask firewalld
firewall-cmd --add-port=10250/tcp ## after this, kubectl logs worked
systemctl stop firewalld ## kubectl logs kept working fine
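(If you prefer to keep firewalld running instead of stopping it, the persistent equivalent would be something like firewall-cmd --permanent --add-port=10250/tcp followed by firewall-cmd --reload; that is an alternative I did not take here.)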
I think the earlier iptables flush had not actually cleared the blocking rules.
I finally found out why kubectl logs worked after re-stopping firewalld but failed again after an OS reboot:
after a reboot, the iptables rules go back to the state they were in when firewalld was disabled. The rules added just before disabling firewalld are not saved.
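One way to see this is to dump the ruleset right after the problem reappears (i.e. after a reboot) and search for anything touching the kubelet port; the commands are generic, 10250 is just the port from the error:
iptables -S | grep 10250
nft list ruleset | grep 10250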
The difference was hiding in /etc/sysconfig/nftables.conf or /etc/sysconfig/iptables-config.
I solved it by emptying /etc/sysconfig/nftables.conf with echo > /etc/sysconfig/nftables.conf
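After the next reboot, a quick check that the fix stuck (the paths and pod name are the same ones used above):
cat /etc/sysconfig/nftables.conf ## should be empty
nft list ruleset | grep 10250 ## should print nothing
kubectl logs -p -n kubeapps kubeapps-internal-apprepository-controller-685685876b-kk2kl ## should work again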