What steps did you take and what happened:
I've just installed OpenEBS as part of k0s on an AWS EC2 instance with two disks: the host disk and a separate EBS data volume. Everything seems to be working fine, but one of the NDM pods sits at a constant 20% CPU usage. Looking at the logs, it appears to be stuck in a loop querying the host/node disks.
Looking at another server with the same NDM version but a simpler, single-disk setup, the exact same thing is happening.
What did you expect to happen:
I expected the NDM process not to consume CPU constantly in a loop.
The output of the following commands will help us better understand what's going on:
[Pasting long output into a GitHub gist or other pastebin is fine.]
kubectl get pods -n openebs
NAME                                           READY   STATUS    RESTARTS      AGE
openebs-localpv-provisioner-6ccc9d6fc9-kcnhs   1/1     Running   9 (19h ago)   20h
openebs-ndm-jpvpw                              1/1     Running   0             26m
openebs-ndm-operator-7bd6898d96-vz54r          1/1     Running   9 (19h ago)   20h
kubectl get blockdevices -n openebs -o yaml
kubectl get blockdeviceclaims -n openebs -o yaml
kubectl logs <ndm daemon pod name> -n openebs
Just including two loops here; the log goes on like this permanently:
https://gist.github.com/magnetised/c1f2bef4242b663721d87898f8416d65
lsblk
(from nodes where the NDM daemonset is running)

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
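One way to see the per-pod CPU usage described above is kubectl top; this assumes metrics-server is installed in the cluster, and the exact numbers will of course vary:

kubectl top pod -n openebs --containers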
Environment:
- OpenEBS version: openebs.io/version=3.0.0, node-disk-manager:1.7.0
- Kubernetes version (kubectl version): k0s v1.23.6+k0s.0
- Cloud provider or hardware configuration: AWS EC2 instance
  - host root partition: nvme0n1
  - OpenEBS volume: nvme1n1, with a single partition nvme1n1p1 mounted at /var/openebs
- OS (/etc/os-release):

Exact same issue with a vanilla k0s v1.27.2+k0s.0 installation with the OpenEBS extension enabled (openebs/node-disk-manager:1.9.0): NDM consumes over 60% of CPU while idle, with no PVs, nothing. This is really bad.

Hi, we had the same issue on-premises, and it was caused by the presence of "/dev/sr1" on the VM, so I think you should update the filter to exclude unusable devices.
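For anyone hitting the same thing: NDM's device filters live in its ConfigMap, so the suggestion above amounts to extending the path filter's exclude list. A minimal sketch, assuming the stock openebs-ndm-config layout (the ConfigMap name and the default exclude list may differ between installs; /dev/sr1 is the device from the previous comment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config   # name may vary per install
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        # default exclude list with /dev/sr1 appended, so virtual
        # CD-ROM devices beyond /dev/sr0 are ignored as well
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/sr1,/dev/ram,/dev/md,/dev/dm-,/dev/rbd,/dev/zd"

NDM reads this config at startup, so the daemonset pods need a restart to pick up the change.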