Kube Init Fails: Ubuntu 24.04, Kubernetes v1.30 and v1.32, Kube-vip v0.9.0

Kubernetes (k8s) is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. In this article, we explore the issue of `kubeadm init` failing on Ubuntu 24.04 with Kubernetes v1.30 and v1.32 when Kube-vip v0.9.0 is deployed for the control-plane VIP.

Expected Behavior

The expected behavior is that `kubeadm init` successfully initializes the Kubernetes control plane, after which containerized applications can be deployed and managed on the cluster.

Environment

  • OS/Distro: Ubuntu 24.04
  • Kubernetes Version: 1.30 and 1.32
  • Kube-vip Version: 0.9.0
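
The versions above can be confirmed on the node before initialization; the commands below are a quick check, assuming kubeadm, kubelet, and kubectl are already installed.

# Confirm the OS release and Kubernetes tooling versions on the control-plane node
lsb_release -a              # should report Ubuntu 24.04
kubeadm version -o short    # e.g. v1.30.x or v1.32.x
kubelet --version
kubectl version --client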

Kube-vip YAML

The kube-vip.yaml file below is the static Pod manifest for the kube-vip container. It is typically placed in /etc/kubernetes/manifests so that the kubelet starts kube-vip during `kubeadm init`, and it defines the Pod's metadata and spec (network interface, VIP address, leader-election settings, and the mounted /etc/kubernetes/admin.conf kubeconfig).

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: enp0s18
    - name: vip_subnet
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: svc_enable
      value: "true"
    - name: svc_leasename
      value: plndr-svcs-lock
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.0.151
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.9.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
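
A manifest like the one above is normally generated by the kube-vip image itself. The following is a sketch based on the upstream kube-vip documentation, assuming containerd's ctr CLI is available and reusing the interface enp0s18 and VIP 192.168.0.151 from this environment:

# Generate the static Pod manifest with the kube-vip container (flags per kube-vip docs)
export VIP=192.168.0.151
export INTERFACE=enp0s18
export KVVERSION=v0.9.0
ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION
ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip \
  /kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml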

Error Messages

The error messages from the kube-vip container, captured with `crictl logs`, are as follows:

2025/04/22 02:55:38 INFO kube-vip.io version=v0.9.0 build=7d7036fae92ffe32f0a04de7f834a0fb7e457ce2
2025/04/22 02:55:38 INFO starting namespace=kube-system Mode=ARP "Control Plane"=true Services=true
2025/04/22 02:55:38 INFO using node name name=alpha01
2025/04/22 02:55:38 INFO prometheus HTTP server started
2025/04/22 02:55:38 INFO Starting Kube-vip Manager with the ARP engine
2025/04/22 02:55:38 INFO Start ARP/NDP advertisement
2025/04/22 02:55:38 INFO Starting UPNP Port Refresher
2025/04/22 02:55:38 INFO Starting ARP/NDP advertisement
2025/04/22 02:55:38 INFO beginning services leadership namespace=kube-system "lock name"=plndr-svcs-lock id=alpha01
I0422 02:55:38.719822       1 leaderelection.go:257] attempting to acquire leader lease kube-system/plndr-svcs-lock...
2025/04/22 02:55:38 INFO cluster membership namespace=kube-system lock=plndr-cp-lock id=alpha01
I0422 02:55:38.721509       1 leaderelection.go:257] attempting to acquire leader lease kube-system/plndr-cp-lock...
E0422 02:55:38.730183       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:38.730189       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:40.073514       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:40.364230       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:42.082935       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:42.450623       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:43.672727       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:43.762162       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:44.999196       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:45.226952       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:46.869765       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:47.267810       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plnd...
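
These logs were collected from the kube-vip container with crictl. A minimal example, assuming crictl is configured for the node's container runtime and the container ID placeholder is filled in:

# Find the kube-vip container ID, then dump its logs
crictl ps -a --name kube-vip
crictl logs <kube-vip-container-id>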

**Kube Init Fails: Ubuntu 24.04, Kubernetes v1.30 and v1.32, Kube-vip v0.9.0: Q&A**

**Q: What is the expected behavior of `kubeadm init`?**
A: The expected behavior is that `kubeadm init` successfully initializes the Kubernetes control plane, after which containerized applications can be deployed and managed on the cluster.

**Q: What is the environment in which the issue is occurring?**
A: The environment is Ubuntu 24.04 with Kubernetes v1.30 and v1.32, and Kube-vip v0.9.0.

**Q: What are the error messages from the Kube-vip container?**
A: The key error messages are:

E0422 02:55:38.730183       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
E0422 02:55:38.730189       1 leaderelection.go:436] error retrieving resource lock kube-system/plndr-svcs-lock: leases.coordination.k8s.io "plndr-svcs-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"


**Q: What is the cause of the error?**
A: The user "kubernetes-admin" (the identity in the mounted /etc/kubernetes/admin.conf) does not have permission to get "leases" resources in the API group "coordination.k8s.io" in the namespace "kube-system", so kube-vip's leader election cannot acquire its locks. Note that since Kubernetes v1.29, kubeadm no longer puts the admin.conf credential in the system:masters group; it is bound to cluster-admin only by a ClusterRoleBinding that kubeadm creates during init, which is why kube-vip can hit this RBAC error while the cluster is still bootstrapping.
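
To confirm the missing permission, the RBAC check can be reproduced directly against the API server. A minimal sketch, assuming /etc/kubernetes/admin.conf is readable on the control-plane node:

# Ask the API server whether the admin.conf identity may read leases in kube-system
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  auth can-i get leases.coordination.k8s.io -n kube-system
# While the problem is present this prints "no"; after the fix it prints "yes"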

**Q: How can the issue be resolved?**
A: The issue can be resolved by granting the necessary permissions to the user "kubernetes-admin" to get the resource "leases" in the API group "coordination.k8s.io" in the namespace "kube-system".

**Q: What are the steps to resolve the issue?**
A: The steps to resolve the issue are:

1. Run `kubectl create clusterrolebinding kubernetes-admin --clusterrole=cluster-admin --user=kubernetes-admin` to grant the necessary permissions to the user "kubernetes-admin" (a narrower alternative is sketched after this list).
2. Run `kubectl get clusterrolebinding kubernetes-admin -o yaml` to verify that the binding exists.
3. Run `kubeadm init` again to initialize the Kubernetes cluster (if the previous attempt left the node partially initialized, run `kubeadm reset` first).
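
As a narrower alternative to binding cluster-admin, only the lease permissions kube-vip needs can be granted with a namespaced Role. This is a sketch rather than the fix from the original report; the name kube-vip-leases is illustrative.

# Grant only lease access in kube-system to the admin.conf identity (illustrative role name)
kubectl create role kube-vip-leases -n kube-system \
  --verb=get,list,watch,create,update \
  --resource=leases.coordination.k8s.io
kubectl create rolebinding kube-vip-leases -n kube-system \
  --role=kube-vip-leases --user=kubernetes-admin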

**Q: What are the potential consequences of not resolving the issue?**
A: The potential consequences of not resolving the issue are:

* The Kubernetes cluster will not be initialized, so containerized applications cannot be deployed or managed on it.
* The user "kubernetes-admin" will not have the necessary permissions to perform administrative tasks on the Kubernetes cluster.

**Q: How can the issue be prevented in the future?**
A: The issue can be prevented in the future by:

* Granting the user "kubernetes-admin" permission to get "leases" resources in the API group "coordination.k8s.io" in the namespace "kube-system" before running `kubeadm init`.
* Verifying that the permissions have been granted before running `kubeadm init`.
* Running `kubeadm init` with the `--v=5` flag to enable verbose logging and troubleshoot any issues that arise (see the example below).
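
For reference, a minimal verbose invocation, assuming the VIP 192.168.0.151 from the manifest above is also used as the control-plane endpoint (that flag is an assumption about this setup, not stated in the original report):

# Re-run initialization with verbose logging; the endpoint points at the kube-vip VIP
sudo kubeadm init \
  --control-plane-endpoint 192.168.0.151:6443 \
  --upload-certs \
  --v=5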