linux-lvm.redhat.com archive mirror
* [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage
@ 2019-02-02 23:54 Nir Soffer
  2019-02-04 16:25 ` Mike Snitzer
  0 siblings, 1 reply; 5+ messages in thread
From: Nir Soffer @ 2019-02-02 23:54 UTC (permalink / raw)
  To: linux-lvm; +Cc: Denis Chaplygin, Mike Snitzer, David Teigland, Vojtech Juranek


We are working on enabling 4k block size in the oVirt block storage domain,
built using a VG on multipath devices on shared storage.

We have incomplete support for 4k, added in 2011, for this bug:

    https://bugzilla.redhat.com/732980

When creating or extending a VG, we check that all PVs are using the same
logical and physical block size, and we store both logical and physical
block size in the VG tags.
We get the block sizes from
/sys/block/dm-X/queue/{logical,physical}_block_size.
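For reference, reading these sysfs attributes is trivial; a minimal Python
sketch (the helper name is hypothetical, not the actual vdsm code; sysfs
reports the sizes in bytes):

    def read_block_sizes(dm_name):
        # Return (logical, physical) block size in bytes, e.g. for "dm-3".
        sizes = []
        for attr in ("logical_block_size", "physical_block_size"):
            with open("/sys/block/%s/queue/%s" % (dm_name, attr)) as f:
                sizes.append(int(f.read()))
        return tuple(sizes)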

We also enforce that the device physical block size is not smaller than the
logical block size. This check was added in this patch, which tried to
enable block size != 512. There is no explanation in the patch or in the
review comments why we need to validate this.


https://github.com/oVirt/vdsm/commit/7e79153705891a91a06eb31cd642fb209d10ff86

When we start to use a VG, we validate that all the devices are using the
stored logical and physical block size.

In vdsm itself, we use the logical block size to manage vdsm metadata,
assuming that writing and reading one block of logical block size bytes is
atomic, and that different hosts can read and write different blocks at the
same time.
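To illustrate the assumption, writing one volume's block without touching
its neighbors could look like this sketch (not the vdsm code; assumes Linux
and Python 3, with an anonymous mmap used only to get a page aligned buffer
for O_DIRECT):

    import mmap
    import os

    def write_block(path, offset, data, block_size=512):
        # Write exactly one logical block at an aligned offset.
        assert len(data) == block_size and offset % block_size == 0
        buf = mmap.mmap(-1, block_size)
        buf[:] = data
        fd = os.open(path, os.O_WRONLY | os.O_DIRECT)
        try:
            os.lseek(fd, offset, os.SEEK_SET)
            if os.writev(fd, [buf]) != block_size:
                raise IOError("short write at offset %d" % offset)
        finally:
            os.close(fd)
            buf.close()

If the device honors single-block atomicity, two hosts writing different
offsets this way never corrupt each other's blocks.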

The relevant code validating PV block sizes is here:


https://github.com/oVirt/vdsm/blob/8b043e402f41d8a82b9f832be5f582b8520b38bc/lib/vdsm/storage/lvm.py#L1110

Reading the comments in bug 732980, I don't see anything about physical
block size. It looks like this is an unnecessary check, and we should check
only the logical block size.

Regarding mixing devices with different logical block size, according to

    https://bugzilla.redhat.com/show_bug.cgi?id=732980#c8

we should not extend an LV over devices with a different logical block size,
as this will change the device logical block size (e.g. change from 512 to
4k), and the change may break upper layers that already use the device and
assume the previous logical block size.

Based on this, I think we are ok with limiting a VG to devices with the same
logical block size, so any LV can be extended onto any device.

I think this code should change to:

1. When creating a VG, check that all PVs use the same logical block size
2. Store the logical block size in the VG tag
3. When extending the VG, check that the new PVs use the same logical block
size
4. When starting to use a VG, check that the stored logical block size
matches the PVs logical block size
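
A minimal sketch of checks 1, 3 and 4 (the helper names and tag handling
are hypothetical, not the actual vdsm API; BLKSSZGET is the Linux ioctl
that reports a device's logical block size):

    import fcntl
    import struct

    BLKSSZGET = 0x1268  # from <linux/fs.h>

    def logical_block_size(dev_path):
        # Ask the device itself for its logical block size in bytes.
        with open(dev_path, "rb") as f:
            buf = fcntl.ioctl(f.fileno(), BLKSSZGET, struct.pack("i", 0))
        return struct.unpack("i", buf)[0]

    def common_block_size(pv_paths):
        # Checks 1 and 3: all PVs must share one logical block size.
        sizes = {logical_block_size(p) for p in pv_paths}
        if len(sizes) != 1:
            raise RuntimeError("mixed logical block sizes: %s" % sizes)
        return sizes.pop()

    def check_vg(tagged_size, pv_paths):
        # Check 4: the size stored in the VG tag (check 2) must match.
        if common_block_size(pv_paths) != tagged_size:
            raise RuntimeError("VG tag does not match PV block size")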

What do you think?

Nir



* Re: [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage
  2019-02-02 23:54 [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage Nir Soffer
@ 2019-02-04 16:25 ` Mike Snitzer
  2019-02-13  9:14   ` Vojtech Juranek
  0 siblings, 1 reply; 5+ messages in thread
From: Mike Snitzer @ 2019-02-04 16:25 UTC (permalink / raw)
  To: Nir Soffer; +Cc: David Teigland, Vojtech Juranek, Denis Chaplygin, linux-lvm

On Sat, Feb 02 2019 at  6:54pm -0500,
Nir Soffer <nsoffer@redhat.com> wrote:

>    We are working on enabling 4k block size in the oVirt block storage
>    domain, built using a VG on multipath devices on shared storage.
>    We have incomplete support for 4k, added in 2011, for this bug:
>        [1]https://bugzilla.redhat.com/732980
>    When creating or extending a VG, we check that all PVs are using the
>    same logical and physical block size, and we store both logical and
>    physical block size in the VG tags.
>    We get the block sizes from
>    /sys/block/dm-X/queue/{logical,physical}_block_size.
>    We also enforce that the device physical block size is not smaller than
>    the logical block size. This check was added in this patch, trying to
>    enable block size != 512. There is no explanation in the patch or in
>    the review comments why we need to validate this.
> 
>    [2]https://github.com/oVirt/vdsm/commit/7e79153705891a91a06eb31cd642fb209d10ff86
>    When we start to use a VG, we validate that all the devices are using the
>    stored logical
>    and physical block size.
>    In vdsm itself, we use the logical block size to manage vdsm metadata,
>    assuming that writing
>    and reading one block of logical block size bytes is atomic, and we can
>    read and write
>    different blocks from different hosts at the same time.
>    The relevant code validating PV block sizes is here:
> 
>    [3]https://github.com/oVirt/vdsm/blob/8b043e402f41d8a82b9f832be5f582b8520b38bc/lib/vdsm/storage/lvm.py#L1110
>    Reading the comments in bug 732980, I don't see anything about physical
>    block size. It looks like this is an unnecessary check, and we should
>    check only the logical block size.
>    Regarding mixing devices with different logical block size, according to
>        [4]https://bugzilla.redhat.com/show_bug.cgi?id=732980#c8
>    we should not extend an LV over devices with different block size, as
>    this will change the device logical block size (e.g. change from 512 to
>    4k), and the change may break upper layers that already use the device
>    and assume the previous logical block size.

This idea that 4K writes to a 512b physical drive aren't going to be
atomic, and that that is going to be the basis for some upper level
failure is handwaving and overly paranoid TBH.

>    Based on this, I think we are ok with limiting a VG to devices with the
>    same logical block size, so any LV can be extended to any device.
>    I think this code should change to:
>    1. When creating a VG, check that all PVs use the same logical block size
>    2. Store the logical block size in the VG tag
>    3. When extending the VG, check that the new PVs use the same logical
>    block size
>    4. When starting to use a VG, check that the stored logical block size
>    matches the PVs logical block size
>    What do you think?

I think you shouldn't care.  Or please show me a case where all this
concern matters.

Mike


* Re: [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage
  2019-02-04 16:25 ` Mike Snitzer
@ 2019-02-13  9:14   ` Vojtech Juranek
  2019-02-13 20:39     ` Mike Snitzer
  0 siblings, 1 reply; 5+ messages in thread
From: Vojtech Juranek @ 2019-02-13  9:14 UTC (permalink / raw)
  To: linux-lvm; +Cc: Nir Soffer, Denis Chaplygin, David Teigland, Mike Snitzer


Hi Mike,

> 
> Nir Soffer <nsoffer@redhat.com> wrote:
> >    We are working on enabling 4k block size in the oVirt block storage
> >    domain, built using a VG on multipath devices on shared storage.
> >
> >    We have incomplete support for 4k, added in 2011, for this bug:
> >        [1]https://bugzilla.redhat.com/732980
> >
> >    When creating or extending a VG, we check that all PVs are using the
> >    same logical and physical block size, and we store both logical and
> >    physical block size in the VG tags.
> >    We get the block sizes from
> >    /sys/block/dm-X/queue/{logical,physical}_block_size.
> >    We also enforce that the device physical block size is not smaller
> >    than the logical block size. This check was added in this patch,
> >    trying to enable block size != 512. There is no explanation in the
> >    patch or in the review comments why we need to validate this.
> >
> >    [2]https://github.com/oVirt/vdsm/commit/7e79153705891a91a06eb31cd642fb209d10ff86
> >    When we start to use a VG, we validate that all the devices are using
> >    the stored logical and physical block size.
> >    In vdsm itself, we use the logical block size to manage vdsm metadata,
> >    assuming that writing
> >    and reading one block of logical block size bytes is atomic, and we can
> >    read and write
> >    different blocks from different hosts at the same time.
> >    The relevant code validating PV block sizes is here:
> >    
> >    [3]https://github.com/oVirt/vdsm/blob/8b043e402f41d8a82b9f832be5f582b8520b38bc/lib/vdsm/storage/lvm.py#L1110
> >    Reading the comments in bug 732980, I don't see anything about
> >    physical block size. It looks like this is an unnecessary check, and
> >    we should check only the logical block size.
> >    Regarding mixing devices with different logical block size, according
> >    to
> >
> >        [4]https://bugzilla.redhat.com/show_bug.cgi?id=732980#c8
> >
> >    we should not extend an LV over devices with different block size, as
> >    this will change the device logical block size (e.g. change from 512
> >    to 4k), and the change may break upper layers that already use the
> >    device and assume the previous logical block size.
> 
> This idea that 4K writes to a 512b physical drive aren't going to be
> atomic, and that that is going to be the basis for some upper level
> failure is handwaving and overly paranoid TBH.
> 
> >    Based on this, I think we are ok with limiting a VG to devices with
> >    the same logical block size, so any LV can be extended to any device.
> >    I think this code should change to:
> >    1. When creating a VG, check that all PVs use the same logical block
> >    size
> >    2. Store the logical block size in the VG tag
> >    3. When extending the VG, check that the new PVs use the same logical
> >    block size
> >    4. When starting to use a VG, check that the stored logical block
> >    size matches the PVs logical block size
> >    What do you think?
> 
> I think you shouldn't care.  Or please show me a case where all this
> concern matters.

I'm sorry, but I'm still quite confused about what needs to be checked and
what does not.

In [1] you wrote 

"So the appropriate VDSM constraint is to not allow a larger 
logical_block_size device (4K) to be added to a VG that has only ever 
contained small logical_block_size (512b) devices."

and 

"If an LV is already in use then the admin needs to avoid extending the LV in 
a way that upper layers may get upset with." 

and here that we shouldn't care. Could you please be more specific about
what one needs to check (regarding block sizes) when creating or extending
a VG and starting to use it?

Thanks
Vojta

[1] https://bugzilla.redhat.com/show_bug.cgi?id=732980


> Mike
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/




* Re: [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage
  2019-02-13  9:14   ` Vojtech Juranek
@ 2019-02-13 20:39     ` Mike Snitzer
  2019-02-13 21:41       ` Nir Soffer
  0 siblings, 1 reply; 5+ messages in thread
From: Mike Snitzer @ 2019-02-13 20:39 UTC (permalink / raw)
  To: Vojtech Juranek; +Cc: Nir Soffer, Denis Chaplygin, David Teigland, linux-lvm

On Wed, Feb 13 2019 at  4:14am -0500,
Vojtech Juranek <vjuranek@redhat.com> wrote:

> Hi Mike,
> 
> > 
> > Nir Soffer <nsoffer@redhat.com> wrote:
> > >    We are working on enabling 4k block size in the oVirt block storage
> > >    domain, built using a VG on multipath devices on shared storage.
> > >
> > >    We have incomplete support for 4k, added in 2011, for this bug:
> > >        [1]https://bugzilla.redhat.com/732980
> > >
> > >    When creating or extending a VG, we check that all PVs are using the
> > >    same logical and physical block size, and we store both logical and
> > >    physical block size in the VG tags.
> > >    We get the block sizes from
> > >    /sys/block/dm-X/queue/{logical,physical}_block_size.
> > >    We also enforce that the device physical block size is not smaller
> > >    than the logical block size. This check was added in this patch,
> > >    trying to enable block size != 512. There is no explanation in the
> > >    patch or in the review comments why we need to validate this.
> > >
> > >    [2]https://github.com/oVirt/vdsm/commit/7e79153705891a91a06eb31cd642fb209d10ff86
> > >    When we start to use a VG, we validate that all the devices are
> > >    using the stored logical and physical block size.
> > >    In vdsm itself, we use the logical block size to manage vdsm metadata,
> > >    assuming that writing
> > >    and reading one block of logical block size bytes is atomic, and we can
> > >    read and write
> > >    different blocks from different hosts at the same time.
> > >    The relevant code validating PV block sizes is here:
> > >    
> > >    [3]https://github.com/oVirt/vdsm/blob/8b043e402f41d8a82b9f832be5f582b8520b38bc/lib/vdsm/storage/lvm.py#L1110
> > >    Reading the comments in bug 732980, I don't see anything about
> > >    physical block size. It looks like this is an unnecessary check,
> > >    and we should check only the logical block size.
> > >    Regarding mixing devices with different logical block size,
> > >    according to
> > >
> > >        [4]https://bugzilla.redhat.com/show_bug.cgi?id=732980#c8
> > >
> > >    we should not extend an LV over devices with different block size,
> > >    as this will change the device logical block size (e.g. change from
> > >    512 to 4k), and the change may break upper layers that already use
> > >    the device and assume the previous logical block size.
> > 
> > This idea that 4K writes to a 512b physical drive aren't going to be
> > atomic, and that that is going to be the basis for some upper level
> > failure is handwaving and overly paranoid TBH.
> > 
> > >    Based on this, I think we are ok with limiting a VG to devices with
> > >    the same logical block size, so any LV can be extended to any device.
> > >    I think this code should change to:
> > >    1. When creating a VG, check that all PVs use the same logical block
> > >    size
> > >    2. Store the logical block size in the VG tag
> > >    3. When extending the VG, check that the new PVs use the same logical
> > >    block size
> > >    4. When starting to use a VG, check that the stored logical block
> > >    size matches the PVs logical block size
> > >    What do you think?
> > 
> > I think you shouldn't care.  Or please show me a case where all this
> > concern matters.
> 
> I'm sorry, but I'm still quite confused about what needs to be checked and
> what does not.
> 
> In [1] you wrote 
> 
> "So the appropriate VDSM constraint is to not allow a larger 
> logical_block_size device (4K) to be added to a VG that has only ever 
> contained small logical_block_size (512b) devices."
> 
> and 
> 
> "If an LV is already in use then the admin needs to avoid extending the LV in 
> a way that upper layers may get upset with." 
> 
> and here that we shouldn't care. Could you please be more specific about
> what one needs to check (regarding block sizes) when creating or extending
> a VG and starting to use it?
> 
> Thanks
> Vojta
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=732980

Ha, only going back 8 years in the archive for that BZ!

I'd need to revisit all the details of what VDSM/oVirt are so concerned
about relative to just _always_ using 4K for the sanlock volumes.

My contention is the constraint likely wasn't ever _really_ needed.  But
maybe it was.. again, I'll look back at the BZ in more detail to see
what I'm missing.

Concerns about 4K issued to 512b physical devices _not_ being atomic
(could have 5 of the 8 512b sectors written, so the 3 stale sectors could
cause issues).  IIRC I shared those concerns with Martin Petersen before
(Martin is an upstream Linux SCSI maintainer) and he felt the atomicity
concerns were overstated.  Thinking now, it was possibly for devices
that advertise 4K physical and 512b logical.  Whereas issuing 4K to a
512b/512b device could easily not be atomic for that 4K IO.

I can revisit this with Martin.  Also, I'm happy to adjust my
understanding based on further anecdotal real-world evidence that
issuing 4K IOs to a 512b device and expecting any 4K IO operation to be
atomic is _wrong_.

Thanks,
Mike


* Re: [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage
  2019-02-13 20:39     ` Mike Snitzer
@ 2019-02-13 21:41       ` Nir Soffer
  0 siblings, 0 replies; 5+ messages in thread
From: Nir Soffer @ 2019-02-13 21:41 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Denis Chaplygin, Vojtech Juranek, David Teigland, linux-lvm


On Wed, Feb 13, 2019 at 10:40 PM Mike Snitzer <snitzer@redhat.com> wrote:

> On Wed, Feb 13 2019 at  4:14am -0500,
> Vojtech Juranek <vjuranek@redhat.com> wrote:
>
> > Hi Mike,
> >
> > >
> > > Nir Soffer <nsoffer@redhat.com> wrote:
> > > >    We are working on enabling 4k block size in the oVirt block
> > > >    storage domain, built using a VG on multipath devices on shared
> > > >    storage.
> > > >
> > > >    We have incomplete support for 4k, added in 2011, for this bug:
> > > >        [1]https://bugzilla.redhat.com/732980
> > > >
> > > >    When creating or extending a VG, we check that all PVs are using
> > > >    the same logical and physical block size, and we store both
> > > >    logical and physical block size in the VG tags.
> > > >    We get the block sizes from
> > > >    /sys/block/dm-X/queue/{logical,physical}_block_size.
> > > >    We also enforce that the device physical block size is not
> > > >    smaller than the logical block size. This check was added in this
> > > >    patch, trying to enable block size != 512. There is no explanation
> > > >    in the patch or in the review comments why we need to validate
> > > >    this.
> > > >
> > > >    [2]https://github.com/oVirt/vdsm/commit/7e79153705891a91a06eb31cd642fb209d10ff86
> > > >    When we start to use a VG, we validate that all the devices are
> > > >    using the stored logical and physical block size.
> > > >    In vdsm itself, we use the logical block size to manage vdsm
> > > >    metadata, assuming that writing and reading one block of logical
> > > >    block size bytes is atomic, and we can read and write different
> > > >    blocks from different hosts at the same time.
> > > >    The relevant code validating PV block sizes is here:
> > > >
> > > >    [3]https://github.com/oVirt/vdsm/blob/8b043e402f41d8a82b9f832be5f582b8520b38bc/lib/vdsm/storage/lvm.py#L1110
> > > >    Reading the comments in bug 732980, I don't see anything about
> > > >    physical block size. It looks like this is an unnecessary check,
> > > >    and we should check only the logical block size.
> > > >    Regarding mixing devices with different logical block size,
> > > >    according to
> > > >
> > > >        [4]https://bugzilla.redhat.com/show_bug.cgi?id=732980#c8
> > > >
> > > >    we should not extend an LV over devices with different block
> > > >    size, as this will change the device logical block size (e.g.
> > > >    change from 512 to 4k), and the change may break upper layers that
> > > >    already use the device and assume the previous logical block size.
> > >
> > > This idea that 4K writes to a 512b physical drive aren't going to be
> > > atomic, and that that is going to be the basis for some upper level
> > > failure is handwaving and overly paranoid TBH.
> > >
> > > >    Based on this, I think we are ok with limiting a VG to devices
> > > >    with the same logical block size, so any LV can be extended to
> > > >    any device.
> > > >    I think this code should change to:
> > > >    1. When creating a VG, check that all PVs use the same logical
> > > >    block size
> > > >    2. Store the logical block size in the VG tag
> > > >    3. When extending the VG, check that the new PVs use the same
> > > >    logical block size
> > > >    4. When starting to use a VG, check that the stored logical block
> > > >    size matches the PVs logical block size
> > > >    What do you think?
> > >
> > > I think you shouldn't care.  Or please show me a case where all this
> > > concern matters.
> >
> > I'm sorry, but I'm still quite confused about what needs to be checked
> > and what does not.
> >
> > In [1] you wrote
> >
> > "So the appropriate VDSM constraint is to not allow a larger
> > logical_block_size device (4K) to be added to a VG that has only ever
> > contained small logical_block_size (512b) devices."
> >
> > and
> >
> > "If an LV is already in use then the admin needs to avoid extending the
> LV in
> > a way that upper layers may get upset with."
> >
> > and here that we shouldn't care. Could you please be more specific about
> > what one needs to check (regarding block sizes) when creating or
> > extending a VG and starting to use it?
> >
> > Thanks
> > Vojta
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=732980
>
> Ha, only going back 8 years in the archive for that BZ!
>

Thanks for looking at this.

> I'd need to revisit all the details of what VDSM/oVirt are so concerned
> about relative to just _always_ using 4K for the sanlock volumes.
>

For sanlock volumes we don't care, we trust David to get this right :-)

The issue is vdsm metadata.

> My contention is the constraint likely wasn't ever _really_ needed.  But
> maybe it was.. again, I'll look back at the BZ in more detail to see
> what I'm missing.
>
> Concerns about 4K issued to 512b physical devices _not_ being atomic
> (could have 5 of the 8 512b sectors written, so the 3 stale sectors could
> cause issues).  IIRC I shared those concerns with Martin Petersen before
> (Martin is an upstream Linux SCSI maintainer) and he felt the atomicity
> concerns were overstated.  Thinking now, it was possibly for devices
> that advertise 4K physical and 512b logical.  Whereas issuing 4K to a
> 512b/512b device could easily not be atomic for that 4K IO.
>
> I can revisit this with Martin.  Also, I'm happy to adjust my
> understanding based on further anecdotal real-world evidence that
> issuing 4K IOs to a 512b device and expecting any 4K IO operation to be
> atomic is _wrong_.
>

Here is more info on why we care about atomic writes to 512 bytes blocks.

One use case is managing vdsm volume metadata. In the current version we
keep one 512 bytes block for every vdsm volume, stored on a special
"metadata" LV. The number of the block is kept in the LV tags.

Here is an example:

# lvs -o lv_name,tags fb5cab8c-08ba-4781-9532-ccc78ddb21ec
  LV                                   LV Tags

  3ad2d445-6505-4442-915b-ab3a6a2fd55b
IU_c4622768-4173-403a-811c-096376d28c26,MD_7,PU_00000000-0000-0000-0000-000000000000
  416573b6-caf0-49b8-ba36-8b64336d742f
IU_1f05ff49-e97b-4a13-a973-59260dd13b87,MD_8,PU_00000000-0000-0000-0000-000000000000
  ...
  metadata


The metadata of the LV 3ad2d445-6505-4442-915b-ab3a6a2fd55b is stored
at offset 7 * 512 (MD_7) in the metadata LV.
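
The slot arithmetic is simply (an illustration, not vdsm code; 8192 is the
planned block size independent slot size described later in this mail):

    SLOT_SIZE_OLD = 512   # current format: one 512 bytes slot per volume
    SLOT_SIZE_NEW = 8192  # planned format, same for 512b and 4k devices

    def metadata_offset(md_slot, slot_size=SLOT_SIZE_OLD):
        # Byte offset of a volume's metadata from its MD_<n> LV tag.
        return md_slot * slot_size

    metadata_offset(7)  # 3584, i.e. bs=512 skip=7 in the dd example below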

# dd if=/dev/fb5cab8c-08ba-4781-9532-ccc78ddb21ec/metadata bs=512 count=1 skip=7
DOMAIN=fb5cab8c-08ba-4781-9532-ccc78ddb21ec
CTIME=1542309274
FORMAT=RAW
DISKTYPE=ISOF
LEGALITY=LEGAL
SIZE=6291456
VOLTYPE=LEAF
DESCRIPTION={"DiskAlias":"Fedora-Server-dvd-x86_64-29-1.2.iso","DiskDescription":"Uploaded
disk"}
IMAGE=c4622768-4173-403a-811c-096376d28c26
PUUID=00000000-0000-0000-0000-000000000000
MTIME=0
POOL_UUID=
TYPE=PREALLOCATED
GEN=0
EOF
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00085428 s, 599 kB/s

We use sanlock to synchronize access to the metadata LV, but this LV is
active on many hosts at the same time, and different hosts are reading and
writing volume metadata at the same time.

We may have 2 storage jobs reading and writing the blocks at offsets 7 and 8.
If the writes are not atomic, one host can overwrite another host's write.

To support 4k drives, we are modifying this format to keep 8k per volume, so
we can have the same format regardless of whether the underlying block size
is 512 bytes or 4k. However, we still have to support the old format using
one 512 bytes block per volume.

We can simplify the code to always read and write 4k blocks, but I believe
we may get short reads and writes, and handling them may be more complicated
than always writing exactly one block.
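
For example, reading one metadata block while refusing short reads could
look like this sketch (not the vdsm code; Linux, Python 3, with an anonymous
mmap used only to get a page aligned buffer for O_DIRECT):

    import mmap
    import os

    def read_block(path, offset, block_size=512):
        # Read one aligned block; treat a short read as an error.
        buf = mmap.mmap(-1, block_size)
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        try:
            os.lseek(fd, offset, os.SEEK_SET)
            if os.readv(fd, [buf]) != block_size:
                raise IOError("short read at offset %d" % offset)
            return buf[:]
        finally:
            os.close(fd)
            buf.close()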

The underlying storage that we try to support is anything that can be shared
using FC/SAS/iSCSI. We want to be compatible with the most stupid storage.

Nir



end of thread, other threads:[~2019-02-13 21:41 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-02 23:54 [linux-lvm] Mixing devices with different logical or physical block size in oVirt LVM based storage Nir Soffer
2019-02-04 16:25 ` Mike Snitzer
2019-02-13  9:14   ` Vojtech Juranek
2019-02-13 20:39     ` Mike Snitzer
2019-02-13 21:41       ` Nir Soffer
