* max_sectors_kb limitations with VDO and dm-thin
@ 2019-04-19 14:40 Ryan Norwood
  2019-04-23 10:11 ` Zdenek Kabelac
  2019-04-24 19:45 ` Mike Snitzer
  0 siblings, 2 replies; 11+ messages in thread
From: Ryan Norwood @ 2019-04-19 14:40 UTC (permalink / raw)
  To: dm-devel


We have been using dm-thin layered above VDO and have noticed that our
performance is not optimal for large sequential writes as max_sectors_kb
and max_hw_sectors_kb for all thin devices are set to 4k due to the VDO
layer beneath.

This effectively eliminates the performance optimizations for sequential
writes that skip both zeroing and COW overhead when a write fully overlaps a
thin chunk, as all bios are split into 4k, which will always be less than the
64k thin chunk minimum.
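
For reference, this is roughly how we are checking the limits at each
layer (the loop and device names below are illustrative, not our actual
devices):

    for d in dm-VDO dm-THINPOOL dm-THIN; do
        echo "$d: $(cat /sys/block/$d/queue/max_sectors_kb)" \
             "$(cat /sys/block/$d/queue/max_hw_sectors_kb)"
    done

Every device stacked above VDO reports 4 for both values.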

Is this known behavior? Is there any way around this issue?

We are using RHEL 7.5 with kernel 3.10.0-862.20.2.el7.x86_64.

Thanks!


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-19 14:40 max_sectors_kb limitations with VDO and dm-thin Ryan Norwood
@ 2019-04-23 10:11 ` Zdenek Kabelac
  2019-04-23 17:02   ` Ryan Norwood
  2019-04-24 19:45 ` Mike Snitzer
  1 sibling, 1 reply; 11+ messages in thread
From: Zdenek Kabelac @ 2019-04-23 10:11 UTC (permalink / raw)
  To: Ryan Norwood, dm-devel

Dne 19. 04. 19 v 16:40 Ryan Norwood napsal(a):
> We have been using dm-thin layered above VDO and have noticed that our 
> performance is not optimal for large sequential writes as max_sectors_kb 
> and max_hw_sectors_kb for all thin devices are set to 4k due to the VDO layer 
> beneath.
> 
> This effectively eliminates the performance optimizations for sequential 
> writes to skip both zeroing and COW overhead when a write fully overlaps a 
> thin chunk as all bios are split into 4k which always be less than the 64k 
> thin chunk minimum.
> 
> Is this known behavior? Is there any way around this issue?

Hi

If you require the highest performance, I'd suggest avoiding VDO.
VDO trades performance for better space utilization.
It works on 4KiB blocks, so by design it's going to be slow.

I'd also probably not mix two provisioning technologies together; there
is a nontrivial number of problematic states when the whole device stack
runs out of real physical space.

Regards

Zdenek

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-23 10:11 ` Zdenek Kabelac
@ 2019-04-23 17:02   ` Ryan Norwood
       [not found]     ` <CAMeeMh-v+xpn2YDizFQg4cKHZgC=aCdJwLfAMeGLGT1gB1ZURw@mail.gmail.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Ryan Norwood @ 2019-04-23 17:02 UTC (permalink / raw)
  To: Zdenek Kabelac; +Cc: dm-devel



I have added vdo-devel to the conversation:
https://www.redhat.com/archives/vdo-devel/2019-April/msg00017.html

Here is some more info to describe the specific issue:

A dm-thin volume is configured with a chunk/block size that determines the
minimum allocation size that it can track, between 64KiB and 1GiB. If an
application performs a write to a dm-thin block device, and that IO
operation completely overlaps a thin block, dm-thin will skip zeroing the
newly allocated block before performing the write. This is a pretty big
performance optimization, as it effectively halves IO for large sequential
writes. When a block device has a snapshot, the data is referenced by both
the original block and the snapshot. If a write is issued, dm-thin will
normally allocate a new chunk, copy the old data to that new chunk, then
perform the write. If the new write completely overlaps a chunk, it will
skip the copy.

So, for example, a dm-thin block device is created in a thin pool with a
512k block size. An application performs a 4k sequential write at the
beginning of the volume, which requires allocating a new block. dm-thin will
do the following:

1) allocate 512k block
2) write 0's to the block
3) perform the 4k write

This does 516k of writes for a 4k write (ouch). If the write is at least
512k, it will skip the zeroing and just do the write.

Similarly, assume there is a dm-thin block device with a snapshot and data
is shared between the two. Again, the application performs a 4k write:

1) allocate a new 512k block
2) copy 512k from the old block to the new
3) perform the 4k write

This does 512k in reads and 516k in writes (big ouch). If the write is at
least 512k, it will skip all the overhead.
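
(For context, both the chunk size and the zeroing behaviour are chosen when
the pool is created; a rough LVM sketch, with placeholder VG/LV names and
sizes, not our actual configuration:)

    # thin pool with a 512k chunk size, zeroing left enabled
    lvcreate -T vg0/pool0 -L 1T -c 512k -Z y
    # a thin volume in that pool, plus a snapshot that shares its data
    lvcreate -T vg0/pool0 -V 10T -n thin0
    lvcreate -s vg0/thin0 -n thin0_snap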

Now fast forward to VDO. Normally the IO size is determined by the
max_sectors_kb setting in /sys/block/DEVICE/queue. This value is inherited
by stacked DM devices and can be modified by the user up to the hardware
limit max_hw_sectors_kb, which also appears to be inherited by stacked DM
devices. VDO sets this value to 4k, which in turn forces all layers stacked
above it to also have a 4k maximum. If you take my previous example but
place VDO beneath the dm-thin volume, all IO, sequential or otherwise, will
be split down to 4k, which completely eliminates the performance
optimizations that dm-thin provides.
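
One way to see the splitting in practice (a sketch; the device path and job
parameters are placeholders, run against a scratch volume) is to drive a
large sequential write and watch the average request size:

    fio --name=seqwrite --filename=/dev/mapper/vg0-thin0 --rw=write \
        --bs=1M --direct=1 --ioengine=libaio --iodepth=16 --size=4G
    # in another shell: with the 4k limit in effect, avgrq-sz for the thin
    # and VDO devices should sit near 8 sectors (4k) rather than ~2048 (1M)
    iostat -x 1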

1) Is this known behavior?
2) Is there a possible workaround?


On Tue, Apr 23, 2019 at 6:11 AM Zdenek Kabelac <zkabelac@redhat.com> wrote:

> Dne 19. 04. 19 v 16:40 Ryan Norwood napsal(a):
> > We have been using dm-thin layered above VDO and have noticed that our
> > performance is not optimal for large sequential writes as max_sectors_kb
> > and max_hw_sectors_kb for all thin devices are set to 4k due to the VDO
> layer
> > beneath.
> >
> > This effectively eliminates the performance optimizations for sequential
> > writes to skip both zeroing and COW overhead when a write fully overlaps
> a
> > thin chunk as all bios are split into 4k which always be less than the
> 64k
> > thin chunk minimum.
> >
> > Is this known behavior? Is there any way around this issue?
>
> Hi
>
> If you require highest performance - I'd suggest to avoid using VDO.
> VDO replaces performance with better space utilization.
> It works on 4KiB block - so by design it's going to be slow.
>
> I'd also probably not mix 2 provisioning technologies together - there
> is nontrivial amount of problematic states when the whole device stack
> runs out of real physical space.
>
> Regards
>
> Zdenek
>


* Re: max_sectors_kb limitations with VDO and dm-thin
       [not found]       ` <CAFTvtQnLDU_MKzse5t4xc_CVWENsZtwQQSTAFyj-45tPg5STZQ@mail.gmail.com>
@ 2019-04-24 14:46         ` Ryan Norwood
  2019-04-24 21:27           ` Mike Snitzer
  0 siblings, 1 reply; 11+ messages in thread
From: Ryan Norwood @ 2019-04-24 14:46 UTC (permalink / raw)
  To: dm-devel, vdo-devel



On Wed, Apr 24, 2019 at 9:08 AM Ryan Norwood <ryan.p.norwood@gmail.com>
wrote:

> Thank you for your help.
>
> You are correct, it appears that the problem occurs when there is a RAID 5
> or RAID 50 volume beneath VDO.
>
> NAME           KNAME    RA   SIZE ALIGNMENT  MIN-IO  OPT-IO PHY-SEC LOG-SEC RQ-SIZE SCHED    WSAME
> sdh            sdh     128 977.5G         0     512       0     512     512     128 deadline    0B
> └─sed6         dm-6    128 977.5G         0     512       0     512     512     128             0B
>   └─md127      md127 12288   5.7T         0 1048576 6291456     512     512     128             0B
>     └─vdo_data dm-17   128   5.7T         0 1048576 6291456     512     512     128             0B
>       └─vdo    dm-18   128  57.3T         0    4096    4096    4096    4096     128             0B
>
> /sys/block/md126/queue/max_hw_sectors_kb:2147483647
> /sys/block/md126/queue/max_integrity_segments:0
> /sys/block/md126/queue/max_sectors_kb:512
> /sys/block/md126/queue/max_segments:64
> /sys/block/md126/queue/max_segment_size:4096
>
> /sys/block/dm-17/queue/max_hw_sectors_kb:512
> /sys/block/dm-17/queue/max_integrity_segments:0
> /sys/block/dm-17/queue/max_sectors_kb:512
> /sys/block/dm-17/queue/max_segments:64
> /sys/block/dm-17/queue/max_segment_size:4096
>
> /sys/block/dm-18/queue/max_hw_sectors_kb:4
> /sys/block/dm-18/queue/max_integrity_segments:0
> /sys/block/dm-18/queue/max_sectors_kb:4
> /sys/block/dm-18/queue/max_segments:64
> /sys/block/dm-18/queue/max_segment_size:4096
>
> NAME           KNAME    RA   SIZE ALIGNMENT  MIN-IO  OPT-IO PHY-SEC LOG-SEC RQ-SIZE SCHED    WSAME
> sdq            sdq     128 977.5G         0     512       0     512     512     128 deadline    0B
> └─sed15        dm-15   128 977.5G         0     512       0     512     512     128             0B
>   └─vdo        dm-16   128  57.3T         0    4096    4096    4096    4096     128             0B
>
> /sys/block/sdq/queue/max_hw_sectors_kb:256
> /sys/block/sdq/queue/max_integrity_segments:0
> /sys/block/sdq/queue/max_sectors_kb:256
> /sys/block/sdq/queue/max_segments:64
> /sys/block/sdq/queue/max_segment_size:65536
>
> /sys/block/dm-15/queue/max_hw_sectors_kb:256
> /sys/block/dm-15/queue/max_integrity_segments:0
> /sys/block/dm-15/queue/max_sectors_kb:256
> /sys/block/dm-15/queue/max_segments:64
> /sys/block/dm-15/queue/max_segment_size:4096
>
> /sys/block/dm-16/queue/max_hw_sectors_kb:256
> /sys/block/dm-16/queue/max_integrity_segments:0
> /sys/block/dm-16/queue/max_sectors_kb:256
> /sys/block/dm-16/queue/max_segments:64
> /sys/block/dm-16/queue/max_segment_size:4096
>
> On Tue, Apr 23, 2019 at 9:11 PM Sweet Tea Dorminy <sweettea@redhat.com>
> wrote:
>
>> One piece of this that I'm not following:
>>
>> > Now fast forward to VDO. Normally the IO size is determined by the
>> max_sectors_kb setting in /sys/block/DEVICE/queue. This value is inherited
>> for stacked DM devices and can be modified by the user up to the hardware
>> limit max_hw_sectors_kb, which also appears to be inherited for stacked DM
>> devices. VDO sets this value to 4k which in turn forces all layers stacked
>> above it to also have a 4k maximum. If you take my previous example but
>> place VDO beneath the dm-thin volume, all IO sequential or otherwise will
>> be split down to 4k which will completely eliminate all the performance
>> optimizations that dm-thin provides.
>>
>> I am unable to find a place that VDO is setting max_sectors, and
>> indeed I cannot reproduce this -- I stack VDO atop various disks of
>> max_hw_sectors_kb of 256, 512, or 1280, and VDO reports max_sectors_kb
>> of [underlying max_hw_sectors_kb]. I'm suspicious that it's some other
>> setting that is going wonky... can you recheck whether max_sectors_kb
>> is changing between (device under VDO) and (VDO device)?
>>
>


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-19 14:40 max_sectors_kb limitations with VDO and dm-thin Ryan Norwood
  2019-04-23 10:11 ` Zdenek Kabelac
@ 2019-04-24 19:45 ` Mike Snitzer
  2019-04-24 21:18   ` Mike Snitzer
  1 sibling, 1 reply; 11+ messages in thread
From: Mike Snitzer @ 2019-04-24 19:45 UTC (permalink / raw)
  To: Ryan Norwood; +Cc: dm-devel

On Fri, Apr 19 2019 at 10:40am -0400,
Ryan Norwood <ryan.p.norwood@gmail.com> wrote:

>    We have been using dm-thin layered above VDO and have noticed that our
>    performance is not optimal for large sequential writes as max_sectors_kb
>    and max_hw_sectors_kb for all thin devices are set to 4k due to the VDO
>    layer beneath.
>    This effectively eliminates the performance optimizations for sequential
>    writes to skip both zeroing and COW overhead when a write fully overlaps a
>    thin chunk as all bios are split into 4k which always be less than the 64k
>    thin chunk minimum.
>    Is this known behavior? Is there any way around this issue?

Are you creating the thin-pool to use a 4K thinp blocksize?  If not,
I'll have to look to see why the block core's block limits stacking
would impose these limits from the underlying data device.

>    We are using RHEL 7.5 with kernel 3.10.0-862.20.2.el7.x86_64.

OK, please let me know what the thin-pool's blocksize is.
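
(Either of these should show it; treat the snippet below as a sketch rather
than exact output:)

    lvs -a -o lv_name,chunk_size
    # or read it straight from the thin-pool table line; the block size is
    # the field right after the data device, in 512b sectors
    dmsetup table | grep thin-pool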

Thanks,
Mike


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-24 19:45 ` Mike Snitzer
@ 2019-04-24 21:18   ` Mike Snitzer
  0 siblings, 0 replies; 11+ messages in thread
From: Mike Snitzer @ 2019-04-24 21:18 UTC (permalink / raw)
  To: Ryan Norwood; +Cc: dm-devel

On Wed, Apr 24 2019 at  3:45pm -0400,
Mike Snitzer <snitzer@redhat.com> wrote:

> On Fri, Apr 19 2019 at 10:40am -0400,
> Ryan Norwood <ryan.p.norwood@gmail.com> wrote:
> 
> >    We have been using dm-thin layered above VDO and have noticed that our
> >    performance is not optimal for large sequential writes as max_sectors_kb
> >    and max_hw_sectors_kb for all thin devices are set to 4k due to the VDO
> >    layer beneath.
> >    This effectively eliminates the performance optimizations for sequential
> >    writes to skip both zeroing and COW overhead when a write fully overlaps a
> >    thin chunk as all bios are split into 4k which always be less than the 64k
> >    thin chunk minimum.
> >    Is this known behavior? Is there any way around this issue?
> 
> Are you creating the thin-pool to use a 4K thinp blocksize?  If not,
> I'll have to look to see why the block core's block limits stacking
> would impose these limits of the underlying data device.
> 
> >    We are using RHEL 7.5 with kernel 3.10.0-862.20.2.el7.x86_64.
> 
> OK, please let me know what the thin-pool's blocksize is.

Never mind, I see you have done so in another portion of this thread.


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-24 14:46         ` Ryan Norwood
@ 2019-04-24 21:27           ` Mike Snitzer
  2019-04-24 22:22             ` Mike Snitzer
  0 siblings, 1 reply; 11+ messages in thread
From: Mike Snitzer @ 2019-04-24 21:27 UTC (permalink / raw)
  To: Ryan Norwood; +Cc: vdo-devel, dm-devel


On Wed, Apr 24 2019 at 10:46am -0400,
Ryan Norwood <ryan.p.norwood@gmail.com> wrote:

>    On Wed, Apr 24, 2019 at 9:08 AM Ryan Norwood <[1]ryan.p.norwood@gmail.com>
>    wrote:
> 
>      Thank you for your help.
>      You are correct, it appears that the problem occurs when there is a RAID
>      5 or RAID 50 volume beneath VDO.
>      NAME      KNAME    RA   SIZE ALIGNMENT  MIN-IO  OPT-IO PHY-SEC LOG-SEC
>      RQ-SIZE SCHED    WSAME
>      sdh
>       sdh     128 977.5G         0     512       0     512     512     128
>      deadline    0B
>      +-sed6
>      dm-6    128 977.5G         0     512       0     512     512     128
>               0B
>        +-md127
>       md127 12288   5.7T         0 1048576 6291456     512     512     128
>               0B
>          +-vdo_data
>      dm-17   128   5.7T         0 1048576 6291456     512     512     128
>               0B
>            +-vdo
>       dm-18   128  57.3T         0    4096    4096    4096    4096     128
>               0B
>      /sys/block/md126/queue/max_hw_sectors_kb:2147483647
>      /sys/block/md126/queue/max_integrity_segments:0
>      /sys/block/md126/queue/max_sectors_kb:512
>      /sys/block/md126/queue/max_segments:64
>      /sys/block/md126/queue/max_segment_size:4096
>      /sys/block/dm-17/queue/max_hw_sectors_kb:512
>      /sys/block/dm-17/queue/max_integrity_segments:0
>      /sys/block/dm-17/queue/max_sectors_kb:512
>      /sys/block/dm-17/queue/max_segments:64
>      /sys/block/dm-17/queue/max_segment_size:4096
>      /sys/block/dm-18/queue/max_hw_sectors_kb:4
>      /sys/block/dm-18/queue/max_integrity_segments:0
>      /sys/block/dm-18/queue/max_sectors_kb:4
>      /sys/block/dm-18/queue/max_segments:64
>      /sys/block/dm-18/queue/max_segment_size:4096
>      NAME      KNAME    RA   SIZE ALIGNMENT  MIN-IO  OPT-IO PHY-SEC LOG-SEC
>      RQ-SIZE SCHED    WSAME
>      sdq       sdq     128 977.5G         0     512       0     512     512
>         128 deadline    0B
>      +-sed15   dm-15   128 977.5G         0     512       0     512     512
>         128             0B
>        +-vdo   dm-16   128  57.3T         0    4096    4096    4096    4096
>         128             0B
>      /sys/block/sdq/queue/max_hw_sectors_kb:256
>      /sys/block/sdq/queue/max_integrity_segments:0
>      /sys/block/sdq/queue/max_sectors_kb:256
>      /sys/block/sdq/queue/max_segments:64
>      /sys/block/sdq/queue/max_segment_size:65536
>      /sys/block/dm-15/queue/max_hw_sectors_kb:256
>      /sys/block/dm-15/queue/max_integrity_segments:0
>      /sys/block/dm-15/queue/max_sectors_kb:256
>      /sys/block/dm-15/queue/max_segments:64
>      /sys/block/dm-15/queue/max_segment_size:4096
>      /sys/block/dm-16/queue/max_hw_sectors_kb:256
>      /sys/block/dm-16/queue/max_integrity_segments:0
>      /sys/block/dm-16/queue/max_sectors_kb:256
>      /sys/block/dm-16/queue/max_segments:64
>      /sys/block/dm-16/queue/max_segment_size:4096

[please don't top-post]

The above examples are hard to parse due to premature line wrapping.

Would appreciate seeing the IO stack in terms of:
dmsetup ls --tree -o blkdevname
dmsetup table

Feel free to truncate the output of both commands to just show one entire
example of the IO stack in question.

thanks,
Mike


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-24 21:27           ` Mike Snitzer
@ 2019-04-24 22:22             ` Mike Snitzer
  2019-04-25 11:58               ` Ryan Norwood
  0 siblings, 1 reply; 11+ messages in thread
From: Mike Snitzer @ 2019-04-24 22:22 UTC (permalink / raw)
  To: Ryan Norwood; +Cc: vdo-devel, dm-devel

On Wed, Apr 24 2019 at  5:27pm -0400,
Mike Snitzer <snitzer@redhat.com> wrote:

> 
> On Wed, Apr 24 2019 at 10:46am -0400,
> Ryan Norwood <ryan.p.norwood@gmail.com> wrote:
> 
> >    On Wed, Apr 24, 2019 at 9:08 AM Ryan Norwood <[1]ryan.p.norwood@gmail.com>
> >    wrote:
> > 
> >      Thank you for your help.
> >      You are correct, it appears that the problem occurs when there is a RAID
> >      5 or RAID 50 volume beneath VDO.
> >      NAME      KNAME    RA   SIZE ALIGNMENT  MIN-IO  OPT-IO PHY-SEC LOG-SEC
> >      RQ-SIZE SCHED    WSAME
> >      sdh
> >       sdh     128 977.5G         0     512       0     512     512     128
> >      deadline    0B
> >      +-sed6
> >      dm-6    128 977.5G         0     512       0     512     512     128
> >               0B
> >        +-md127
> >       md127 12288   5.7T         0 1048576 6291456     512     512     128
> >               0B
> >          +-vdo_data
> >      dm-17   128   5.7T         0 1048576 6291456     512     512     128
> >               0B
> >            +-vdo
> >       dm-18   128  57.3T         0    4096    4096    4096    4096     128
> >               0B

<snip>

> >      /sys/block/dm-18/queue/max_hw_sectors_kb:4
> >      /sys/block/dm-18/queue/max_sectors_kb:4

These are getting set as a side-effect of MD raid imposing the need for
merge_bvec (in the context of RHEL7.x only, not upstream); otherwise DM
goes conservative and forces the IO to be constrained to a single page.
Please see:

drivers/md/dm-table.c:dm_set_device_limits() at the end:

        /*
         * Check if merge fn is supported.
         * If not we'll force DM to use PAGE_SIZE or
         * smaller I/O, just to be safe.
         */
        if (dm_queue_merge_is_compulsory(q) && !ti->type->merge)
                blk_limits_max_hw_sectors(limits,
                                          (unsigned int) (PAGE_SIZE >> 9));

With MD raid in the IO stack, dm_queue_merge_is_compulsory() will return
true, so the VDO target not providing ti->type->merge causes this issue.
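
(A quick sanity check that it is this clamp, and not a limit propagated up
from the hardware, is to compare the value against PAGE_SIZE:)

    getconf PAGESIZE                              # 4096 -> PAGE_SIZE >> 9 = 8 sectors = 4k
    cat /sys/block/dm-18/queue/max_hw_sectors_kb  # reports 4 only with MD raid under VDO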

Please file a BZ at bugzilla.redhat.com against VDO and I'll continue to
work with the VDO developers to get this fixed for you for RHEL7.5, etc.

Thanks,
Mike


* Re: max_sectors_kb limitations with VDO and dm-thin
  2019-04-24 22:22             ` Mike Snitzer
@ 2019-04-25 11:58               ` Ryan Norwood
  2019-04-25 18:11                 ` [Vdo-devel] " Gionatan Danti
  0 siblings, 1 reply; 11+ messages in thread
From: Ryan Norwood @ 2019-04-25 11:58 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: vdo-devel, dm-devel



Great, I will do that. Thank you for all your help!

On Wed, Apr 24, 2019 at 6:22 PM Mike Snitzer <snitzer@redhat.com> wrote:

> On Wed, Apr 24 2019 at  5:27pm -0400,
> Mike Snitzer <snitzer@redhat.com> wrote:
>
> >
> > On Wed, Apr 24 2019 at 10:46am -0400,
> > Ryan Norwood <ryan.p.norwood@gmail.com> wrote:
> >
> > >    On Wed, Apr 24, 2019 at 9:08 AM Ryan Norwood <[1]
> ryan.p.norwood@gmail.com>
> > >    wrote:
> > >
> > >      Thank you for your help.
> > >      You are correct, it appears that the problem occurs when there is
> a RAID
> > >      5 or RAID 50 volume beneath VDO.
> > >      NAME      KNAME    RA   SIZE ALIGNMENT  MIN-IO  OPT-IO PHY-SEC
> LOG-SEC
> > >      RQ-SIZE SCHED    WSAME
> > >      sdh
> > >       sdh     128 977.5G         0     512       0     512     512
>  128
> > >      deadline    0B
> > >      +-sed6
> > >      dm-6    128 977.5G         0     512       0     512     512
>  128
> > >               0B
> > >        +-md127
> > >       md127 12288   5.7T         0 1048576 6291456     512     512
>  128
> > >               0B
> > >          +-vdo_data
> > >      dm-17   128   5.7T         0 1048576 6291456     512     512
>  128
> > >               0B
> > >            +-vdo
> > >       dm-18   128  57.3T         0    4096    4096    4096    4096
>  128
> > >               0B
>
> <snip>
>
> > >      /sys/block/dm-18/queue/max_hw_sectors_kb:4
> > >      /sys/block/dm-18/queue/max_sectors_kb:4
>
> These are getting set as a side-effect of MD raid imposing the need for
> merge_bvec (in the context of RHEL7.x only, not upstream) otherwise it
> goes conservative and forces the IO to be contrained to a single page,
> please see:
>
> drivers/md/dm-table.c:dm_set_device_limits() at the end:
>
>         /*
>          * Check if merge fn is supported.
>          * If not we'll force DM to use PAGE_SIZE or
>          * smaller I/O, just to be safe.
>          */
>         if (dm_queue_merge_is_compulsory(q) && !ti->type->merge)
>                 blk_limits_max_hw_sectors(limits,
>                                           (unsigned int) (PAGE_SIZE >> 9));
>
> With MD raid in the IO stack, dm_queue_merge_is_compulsory() will return
> true, so the VDO target not providing ti->type->merge causes this issue.
>
> Please file a BZ at bugzilla.redhat.com against VDO and I'll continue to
> work with the VDO developers to get this fixed for you for RHEL7.5, etc.
>
> Thanks,
> Mike
>


* Re: [Vdo-devel] max_sectors_kb limitations with VDO and dm-thin
  2019-04-25 11:58               ` Ryan Norwood
@ 2019-04-25 18:11                 ` Gionatan Danti
  2019-04-25 18:26                   ` Ryan Norwood
  0 siblings, 1 reply; 11+ messages in thread
From: Gionatan Danti @ 2019-04-25 18:11 UTC (permalink / raw)
  To: Ryan Norwood; +Cc: vdo-devel, dm-devel, Mike Snitzer

On 25-04-2019 13:58, Ryan Norwood wrote:
> Great, I will do that. Thank you for all your help!

For future reference, can you post the link to the BZ issue here?
Thanks.

-- 
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8


* Re: [Vdo-devel] max_sectors_kb limitations with VDO and dm-thin
  2019-04-25 18:11                 ` [Vdo-devel] " Gionatan Danti
@ 2019-04-25 18:26                   ` Ryan Norwood
  0 siblings, 0 replies; 11+ messages in thread
From: Ryan Norwood @ 2019-04-25 18:26 UTC (permalink / raw)
  To: Gionatan Danti; +Cc: vdo-devel, dm-devel, Mike Snitzer



Here is the BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1703073

On Thu, Apr 25, 2019 at 2:11 PM Gionatan Danti <g.danti@assyoma.it> wrote:

> Il 25-04-2019 13:58 Ryan Norwood ha scritto:
> > Great, I will do that. Thank you for all your help!
>
> For future reference, can you post here the link of the BZ issue?
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti@assyoma.it - info@assyoma.it
> GPG public key ID: FF5F32A8
>

