These are not different LVM processes: the container process uses the same LVM binary that the node itself has. We achieve this with wrapper scripts that point to the node's lvm binary. The ConfigMap (a set of shell scripts) used for this has the following contents, where `/host` refers to the root directory of the node:
get_bin_path: |
  #!/bin/sh
  bin_name=$1
  # Resolve the binary's path on the host via the host's own `which`.
  if [ -x /host/bin/which ]; then
    echo $(chroot /host /bin/which $bin_name | cut -d ' ' -f 1)
  elif [ -x /host/usr/bin/which ]; then
    echo $(chroot /host /usr/bin/which $bin_name | cut -d ' ' -f 1)
  else
    # Print the resolved path like the branches above, rather than
    # executing it (the original was missing the echo here).
    echo $(chroot /host which $bin_name | cut -d ' ' -f 1)
  fi
lvcreate: |
  #!/bin/sh
  path=$(/sbin/lvm-eg/get_bin_path "lvcreate")
  chroot /host $path "$@"
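As a quick sanity check of the resolution step, the same `which | cut` pipeline can be run without the chroot (an assumption for illustration: here it resolves against the container's own root instead of `/host`):

```shell
#!/bin/sh
# Resolve a binary's path the same way get_bin_path does, minus the chroot.
bin_name=sh
path=$(which "$bin_name" | cut -d ' ' -f 1)
echo "$path"
```

The `cut -d ' ' -f 1` keeps only the first word of the `which` output, guarding against implementations that print extra text after the path.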
Also, the logs in the pastebin link above show errors because the VG lock had not been acquired, and hence the creation commands fail. Once the lock is acquired, `strace -f` shows the command getting stuck. Check out this link for full details ->
https://pastebin.com/raw/DwQfdmr8
P.S: At OpenEBS we are trying to provide LVM storage to cloud-native workloads with the help of Kubernetes CSI drivers. Since these drivers run as pods and handle dynamic provisioning of Kubernetes volumes (storage) for applications, the LVM commands need to be run from inside the pod. Reference ->
https://github.com/openebs/lvm-localpv
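For the wrappers above to work, the node's root filesystem has to be mounted into the node-plugin container at `/host`. A minimal sketch of such a mount (the names here are illustrative, not the actual lvm-localpv manifest):

```
volumes:
- name: host-root
  hostPath:
    path: /        # node's root filesystem
    type: Directory
containers:
- name: csi-node-plugin
  volumeMounts:
  - name: host-root
    mountPath: /host   # becomes the chroot target for the wrappers
```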
Regards