* blk-mq: bitmap tag: performance degradation?
From: Alexander Gordeev @ 2014-06-04 10:35 UTC
  To: Jens Axboe; +Cc: linux-kernel, Ming Lei

Hi Jens, et al

With the new bitmap tags I am observing a performance degradation on a
'null_blk' device with a queue depth of 512. This is the 'fio' config used:

[global]
bs=4k
size=16g

[nullb]
filename=/dev/nullb0
direct=1
rw=randread
numjobs=8
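
For reference, a setup along these lines matches the above (a sketch;
the module parameters other than the 512 queue depth, and the job file
name, are assumptions):

  # blk-mq null_blk instance with a 512-deep tag space per hw queue
  modprobe null_blk queue_mode=2 hw_queue_depth=512
  # gather the cycle/cache counters quoted below
  perf stat -e cycles:k,L1-dcache-load-misses,cache-misses:k fio nullb.fio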


I tried machines with 16 and 48 CPUs, and it seems the more
CPUs we have, the worse the result. Here are the 48-CPU numbers:

3.15.0-rc4+

   READ: io=131072MB, aggrb=3128.7MB/s, minb=400391KB/s, maxb=407204KB/s,
mint=41201msec, maxt=41902msec

   548,549,235,428 cycles:k
     3,759,335,303 L1-dcache-load-misses
       419,021,008 cache-misses:k                                              

      39.659121371 seconds time elapsed

3.15.0-rc1.for-3.16-blk-mq-tagging+

   READ: io=131072MB, aggrb=1951.8MB/s, minb=249824KB/s, maxb=255851KB/s,
mint=65574msec, maxt=67156msec

 1,063,669,976,651 cycles:k
     4,572,746,591 L1-dcache-load-misses
     1,127,037,813 cache-misses:k                                              

      69.446112553 seconds time elapsed

Thanks!

-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* Re: blk-mq: bitmap tag: performance degradation?
From: Jens Axboe @ 2014-06-04 14:18 UTC
  To: Alexander Gordeev; +Cc: linux-kernel, Ming Lei

On 2014-06-04 04:35, Alexander Gordeev wrote:
> Hi Jens, et al
>
> With new bitmap tags I am observing performance degradation on 'null_blk'
> device with 512 queue depth. This is 'fio' config used:
>
> [global]
> bs=4k
> size=16g
>
> [nullb]
> filename=/dev/nullb0
> direct=1
> rw=randread
> numjobs=8
>
>
> I tried machines with 16 and 48 CPUs and it seems the more
> CPUs we have the worse the result. Here is 48 CPUs one:
>
> 3.15.0-rc4+
>
>     READ: io=131072MB, aggrb=3128.7MB/s, minb=400391KB/s, maxb=407204KB/s,
> mint=41201msec, maxt=41902msec
>
>     548,549,235,428 cycles:k
>       3,759,335,303 L1-dcache-load-misses
>         419,021,008 cache-misses:k
>
>        39.659121371 seconds time elapsed
>
> 3.15.0-rc1.for-3.16-blk-mq-tagging+
>
>     READ: io=131072MB, aggrb=1951.8MB/s, minb=249824KB/s, maxb=255851KB/s,
> mint=65574msec, maxt=67156msec
>
>   1,063,669,976,651 cycles:k
>       4,572,746,591 L1-dcache-load-misses
>       1,127,037,813 cache-misses:k
>
>        69.446112553 seconds time elapsed

A null_blk test is the absolute best case for percpu_ida, since there 
are enough tags and everything is localized. The above test is more 
useful for testing blk-mq itself than any real-world application of the 
tagging.

I've done considerable testing on both 2- and 4-socket boxes (32 and 64 
CPUs), and bitmap tagging is better across a much wider range of 
workloads. This includes even high-tag-depth devices like nvme, as well 
as more normal ranges like mtip32xx and scsi-mq setups.

-- 
Jens Axboe



* Re: blk-mq: bitmap tag: performance degradation?
From: Alexander Gordeev @ 2014-06-05 14:01 UTC
  To: Jens Axboe; +Cc: linux-kernel, Ming Lei

On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
> A null_blk test is the absolute best case for percpu_ida, since
> there are enough tags and everything is localized. The above test is
> more useful for testing blk-mq than any real world application of
> the tagging.
> 
> I've done considerable testing on both 2 and 4 socket (32 and 64
> CPUs) and bitmap tagging is better in a much wider range of
> applications. This includes even high tag depth devices like nvme,
> and more normal ranges like mtip32xx and scsi-mq setups.

Just for the record: bitmap tags on a 48-CPU box with an NVMe device
indeed show almost the same performance and cache-miss rate as the
stock kernel.


-- 
Regards,
Alexander Gordeev
agordeev@redhat.com


* Re: blk-mq: bitmap tag: performance degradation?
From: Jens Axboe @ 2014-06-05 14:03 UTC
  To: Alexander Gordeev; +Cc: linux-kernel, Ming Lei

On 2014-06-05 08:01, Alexander Gordeev wrote:
> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>> A null_blk test is the absolute best case for percpu_ida, since
>> there are enough tags and everything is localized. The above test is
>> more useful for testing blk-mq than any real world application of
>> the tagging.
>>
>> I've done considerable testing on both 2 and 4 socket (32 and 64
>> CPUs) and bitmap tagging is better in a much wider range of
>> applications. This includes even high tag depth devices like nvme,
>> and more normal ranges like mtip32xx and scsi-mq setups.
>
> Just for the record: bitmap tags on a 48 CPU box with NVMe device
> indeed shows almost the same performance/cache rate as the stock
> kernel.

Thanks for confirming. It's one of the dangers of null_blk: it's not 
always a very accurate simulation of what a real device will do. I think 
it's mostly a completion-side thing; it would be great to have a small 
device that supported MSI-X and could be used as an IRQ trigger :-)

-- 
Jens Axboe



* Re: blk-mq: bitmap tag: performance degradation?
From: Ming Lei @ 2014-06-05 14:16 UTC
  To: Jens Axboe; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>
>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>
>>> A null_blk test is the absolute best case for percpu_ida, since
>>> there are enough tags and everything is localized. The above test is
>>> more useful for testing blk-mq than any real world application of
>>> the tagging.
>>>
>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>> CPUs) and bitmap tagging is better in a much wider range of
>>> applications. This includes even high tag depth devices like nvme,
>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>
>>
>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>> indeed shows almost the same performance/cache rate as the stock
>> kernel.
>
>
> Thanks for confirming. It's one of the dangers of null_blk, it's not always
> a very accurate simulation of what a real device will do. I think it's
> mostly a completion side thing, would be great with a small device that
> supported msi-x and could be used as an irq trigger :-)

Maybe null_blk in IRQ_TIMER mode is closer to a real device, and I
guess the result may differ from the IRQ_NONE/IRQ_SOFTIRQ modes.
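
Switching the completion mode is just a module parameter, e.g. (a
sketch, other parameters omitted):

  # irqmode: 0=IRQ_NONE, 1=IRQ_SOFTIRQ (default), 2=IRQ_TIMER
  modprobe null_blk queue_mode=2 irqmode=2 completion_nsec=10000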

Thanks,
-- 
Ming Lei


* Re: blk-mq: bitmap tag: performance degradation?
From: Jens Axboe @ 2014-06-05 17:17 UTC
  To: Ming Lei; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On 06/05/2014 08:16 AM, Ming Lei wrote:
> On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>>
>>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>>
>>>> A null_blk test is the absolute best case for percpu_ida, since
>>>> there are enough tags and everything is localized. The above test is
>>>> more useful for testing blk-mq than any real world application of
>>>> the tagging.
>>>>
>>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>>> CPUs) and bitmap tagging is better in a much wider range of
>>>> applications. This includes even high tag depth devices like nvme,
>>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>>
>>>
>>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>>> indeed shows almost the same performance/cache rate as the stock
>>> kernel.
>>
>>
>> Thanks for confirming. It's one of the dangers of null_blk, it's not always
>> a very accurate simulation of what a real device will do. I think it's
>> mostly a completion side thing, would be great with a small device that
>> supported msi-x and could be used as an irq trigger :-)
> 
> Maybe null_blk at IRQ_TIMER mode is more close to
> a real device, and I guess the result may be different with
> mode IRQ_NONE/IRQ_SOFTIRQ.

It'd be closer in behavior, but the results might then be skewed by
hitting the timer way too hard. And it'd be a general slowdown, again
possibly skewing the comparison. But I haven't tried timer completion,
to see if that yields more accurate modelling for this test, so it might
actually be a lot better.

-- 
Jens Axboe



* Re: blk-mq: bitmap tag: performance degradation?
From: Ming Lei @ 2014-06-05 23:33 UTC
  To: Jens Axboe; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On Fri, Jun 6, 2014 at 1:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 06/05/2014 08:16 AM, Ming Lei wrote:
>> On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>>>
>>>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>>>
>>>>> A null_blk test is the absolute best case for percpu_ida, since
>>>>> there are enough tags and everything is localized. The above test is
>>>>> more useful for testing blk-mq than any real world application of
>>>>> the tagging.
>>>>>
>>>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>>>> CPUs) and bitmap tagging is better in a much wider range of
>>>>> applications. This includes even high tag depth devices like nvme,
>>>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>>>
>>>>
>>>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>>>> indeed shows almost the same performance/cache rate as the stock
>>>> kernel.
>>>
>>>
>>> Thanks for confirming. It's one of the dangers of null_blk, it's not always
>>> a very accurate simulation of what a real device will do. I think it's
>>> mostly a completion side thing, would be great with a small device that
>>> supported msi-x and could be used as an irq trigger :-)
>>
>> Maybe null_blk at IRQ_TIMER mode is more close to
>> a real device, and I guess the result may be different with
>> mode IRQ_NONE/IRQ_SOFTIRQ.
>
> It'd be closer in behavior, but the results might then be skewed by
> hitting the timer way too hard. And it'd be a general slowdown, again
> possibly skewing it. But I haven't tried with the timer completion, to
> see if that yields more accurate modelling for this test, so it might
> actually be a lot better.

My test on a 16-core VM (host: 2 sockets, 16 cores):

1, bitmap tag allocation (3.15-rc7-next):
- softirq mode: 759K IOPS
- timer mode: 409K IOPS

2, percpu_ida allocation (3.15-rc7):
- softirq mode: 1116K IOPS
- timer mode: 411K IOPS

Also, on real hardware I remember there was no such big difference
between softirq mode and timer mode.

[global]
direct=1
size=128G
bsrange=4k-4k
timeout=20
numjobs=16
ioengine=libaio
iodepth=64
filename=/dev/nullb0
group_reporting=1

[f2]
stonewall
rw=randread


Thanks,
-- 
Ming Lei


* Re: blk-mq: bitmap tag: performance degradation?
From: Jens Axboe @ 2014-06-06  1:55 UTC
  To: Ming Lei; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On 2014-06-05 17:33, Ming Lei wrote:
> On Fri, Jun 6, 2014 at 1:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 06/05/2014 08:16 AM, Ming Lei wrote:
>>> On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>>> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>>>>
>>>>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>>>>
>>>>>> A null_blk test is the absolute best case for percpu_ida, since
>>>>>> there are enough tags and everything is localized. The above test is
>>>>>> more useful for testing blk-mq than any real world application of
>>>>>> the tagging.
>>>>>>
>>>>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>>>>> CPUs) and bitmap tagging is better in a much wider range of
>>>>>> applications. This includes even high tag depth devices like nvme,
>>>>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>>>>
>>>>>
>>>>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>>>>> indeed shows almost the same performance/cache rate as the stock
>>>>> kernel.
>>>>
>>>>
>>>> Thanks for confirming. It's one of the dangers of null_blk, it's not always
>>>> a very accurate simulation of what a real device will do. I think it's
>>>> mostly a completion side thing, would be great with a small device that
>>>> supported msi-x and could be used as an irq trigger :-)
>>>
>>> Maybe null_blk at IRQ_TIMER mode is more close to
>>> a real device, and I guess the result may be different with
>>> mode IRQ_NONE/IRQ_SOFTIRQ.
>>
>> It'd be closer in behavior, but the results might then be skewed by
>> hitting the timer way too hard. And it'd be a general slowdown, again
>> possibly skewing it. But I haven't tried with the timer completion, to
>> see if that yields more accurate modelling for this test, so it might
>> actually be a lot better.
>
> My test on a 16core VM(host: 2 sockets, 16core):
>
> 1, bitmap tag allocation(3.15-rc7-next):
> - softirq mode: 759K IOPS
> - timer mode: 409K IOPS
>
> 2, percpu_ida allocation(3.15-rc7)
> - softirq mode: 1116K IOPS
> - timer mode: 411K IOPS

It's hard to say if this is close, or whether we are just timer bound
at that point.

What other parameters did you load null_blk with (number of queues,
queue depth)?


-- 
Jens Axboe



* Re: blk-mq: bitmap tag: performance degradation?
From: Ming Lei @ 2014-06-06  2:03 UTC
  To: Jens Axboe; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On Fri, Jun 6, 2014 at 9:55 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 2014-06-05 17:33, Ming Lei wrote:
>>
>> On Fri, Jun 6, 2014 at 1:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
>>>
>>> On 06/05/2014 08:16 AM, Ming Lei wrote:
>>>>
>>>> On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>>>>
>>>>> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>>>>>
>>>>>>>
>>>>>>> A null_blk test is the absolute best case for percpu_ida, since
>>>>>>> there are enough tags and everything is localized. The above test is
>>>>>>> more useful for testing blk-mq than any real world application of
>>>>>>> the tagging.
>>>>>>>
>>>>>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>>>>>> CPUs) and bitmap tagging is better in a much wider range of
>>>>>>> applications. This includes even high tag depth devices like nvme,
>>>>>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>>>>>> indeed shows almost the same performance/cache rate as the stock
>>>>>> kernel.
>>>>>
>>>>>
>>>>>
>>>>> Thanks for confirming. It's one of the dangers of null_blk, it's not
>>>>> always
>>>>> a very accurate simulation of what a real device will do. I think it's
>>>>> mostly a completion side thing, would be great with a small device that
>>>>> supported msi-x and could be used as an irq trigger :-)
>>>>
>>>>
>>>> Maybe null_blk at IRQ_TIMER mode is more close to
>>>> a real device, and I guess the result may be different with
>>>> mode IRQ_NONE/IRQ_SOFTIRQ.
>>>
>>>
>>> It'd be closer in behavior, but the results might then be skewed by
>>> hitting the timer way too hard. And it'd be a general slowdown, again
>>> possibly skewing it. But I haven't tried with the timer completion, to
>>> see if that yields more accurate modelling for this test, so it might
>>> actually be a lot better.
>>
>>
>> My test on a 16core VM(host: 2 sockets, 16core):
>>
>> 1, bitmap tag allocation(3.15-rc7-next):
>> - softirq mode: 759K IOPS
>> - timer mode: 409K IOPS
>>
>> 2, percpu_ida allocation(3.15-rc7)
>> - softirq mode: 1116K IOPS
>> - timer mode: 411K IOPS
>
>
> It's hard to say if this is close, or whether we are just timer bound at
> that point.
>
> What other parameters did you load null_blk with (unmber of queues, queue
> depth)?

depth: 256, submit queues: 1
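
So, modulo the irqmode switch, the loads would have been along these
lines (a sketch of the command, not a verbatim paste):

  modprobe null_blk queue_mode=2 submit_queues=1 hw_queue_depth=256 irqmode=1
  modprobe null_blk queue_mode=2 submit_queues=1 hw_queue_depth=256 irqmode=2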

Thanks,
-- 
Ming Lei


* Re: blk-mq: bitmap tag: performance degradation?
From: Ming Lei @ 2014-06-06  2:35 UTC
  To: Jens Axboe; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On Fri, Jun 6, 2014 at 9:55 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 2014-06-05 17:33, Ming Lei wrote:
>>
>> On Fri, Jun 6, 2014 at 1:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
>>>
>>> On 06/05/2014 08:16 AM, Ming Lei wrote:
>>>>
>>>> On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>>>>
>>>>> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>>>>>
>>>>>>>
>>>>>>> A null_blk test is the absolute best case for percpu_ida, since
>>>>>>> there are enough tags and everything is localized. The above test is
>>>>>>> more useful for testing blk-mq than any real world application of
>>>>>>> the tagging.
>>>>>>>
>>>>>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>>>>>> CPUs) and bitmap tagging is better in a much wider range of
>>>>>>> applications. This includes even high tag depth devices like nvme,
>>>>>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>>>>>> indeed shows almost the same performance/cache rate as the stock
>>>>>> kernel.
>>>>>
>>>>>
>>>>>
>>>>> Thanks for confirming. It's one of the dangers of null_blk, it's not
>>>>> always
>>>>> a very accurate simulation of what a real device will do. I think it's
>>>>> mostly a completion side thing, would be great with a small device that
>>>>> supported msi-x and could be used as an irq trigger :-)
>>>>
>>>>
>>>> Maybe null_blk at IRQ_TIMER mode is more close to
>>>> a real device, and I guess the result may be different with
>>>> mode IRQ_NONE/IRQ_SOFTIRQ.
>>>
>>>
>>> It'd be closer in behavior, but the results might then be skewed by
>>> hitting the timer way too hard. And it'd be a general slowdown, again
>>> possibly skewing it. But I haven't tried with the timer completion, to
>>> see if that yields more accurate modelling for this test, so it might
>>> actually be a lot better.
>>
>>
>> My test on a 16core VM(host: 2 sockets, 16core):
>>
>> 1, bitmap tag allocation(3.15-rc7-next):
>> - softirq mode: 759K IOPS
>> - timer mode: 409K IOPS
>>
>> 2, percpu_ida allocation(3.15-rc7)
>> - softirq mode: 1116K IOPS
>> - timer mode: 411K IOPS
>
>
> It's hard to say if this is close, or whether we are just timer bound at
> that point.

You are right, my previous test was probably timer bound, but that
can be eased by increasing the timer period.

I ran the test again with the completion_nsec parameter increased
to 235000 from the default 10000:
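
i.e. a load along these lines (a sketch; only completion_nsec changed,
the other parameters are as above):

  modprobe null_blk queue_mode=2 submit_queues=1 hw_queue_depth=256 \
           irqmode=2 completion_nsec=235000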

1, null_blk (timer mode), 3.15-rc7:
- CPU utilization of each fio job: 80% ~ 90%
- 860K IOPS

2, null_blk (timer mode), 3.15-rc7-next:
- CPU utilization of each fio job: 70% ~ 80%
- 940K IOPS

With that, bitmap-based allocation can be observed to be a bit
better than percpu_ida.

Thanks,
-- 
Ming Lei


* Re: blk-mq: bitmap tag: performance degradation?
From: Jens Axboe @ 2014-06-06  2:40 UTC
  To: Ming Lei; +Cc: Alexander Gordeev, Linux Kernel Mailing List

On 2014-06-05 20:35, Ming Lei wrote:
> On Fri, Jun 6, 2014 at 9:55 AM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 2014-06-05 17:33, Ming Lei wrote:
>>>
>>> On Fri, Jun 6, 2014 at 1:17 AM, Jens Axboe <axboe@kernel.dk> wrote:
>>>>
>>>> On 06/05/2014 08:16 AM, Ming Lei wrote:
>>>>>
>>>>> On Thu, Jun 5, 2014 at 10:03 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>>>>>
>>>>>> On 2014-06-05 08:01, Alexander Gordeev wrote:
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jun 04, 2014 at 08:18:42AM -0600, Jens Axboe wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> A null_blk test is the absolute best case for percpu_ida, since
>>>>>>>> there are enough tags and everything is localized. The above test is
>>>>>>>> more useful for testing blk-mq than any real world application of
>>>>>>>> the tagging.
>>>>>>>>
>>>>>>>> I've done considerable testing on both 2 and 4 socket (32 and 64
>>>>>>>> CPUs) and bitmap tagging is better in a much wider range of
>>>>>>>> applications. This includes even high tag depth devices like nvme,
>>>>>>>> and more normal ranges like mtip32xx and scsi-mq setups.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Just for the record: bitmap tags on a 48 CPU box with NVMe device
>>>>>>> indeed shows almost the same performance/cache rate as the stock
>>>>>>> kernel.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Thanks for confirming. It's one of the dangers of null_blk, it's not
>>>>>> always
>>>>>> a very accurate simulation of what a real device will do. I think it's
>>>>>> mostly a completion side thing, would be great with a small device that
>>>>>> supported msi-x and could be used as an irq trigger :-)
>>>>>
>>>>>
>>>>> Maybe null_blk at IRQ_TIMER mode is more close to
>>>>> a real device, and I guess the result may be different with
>>>>> mode IRQ_NONE/IRQ_SOFTIRQ.
>>>>
>>>>
>>>> It'd be closer in behavior, but the results might then be skewed by
>>>> hitting the timer way too hard. And it'd be a general slowdown, again
>>>> possibly skewing it. But I haven't tried with the timer completion, to
>>>> see if that yields more accurate modelling for this test, so it might
>>>> actually be a lot better.
>>>
>>>
>>> My test on a 16core VM(host: 2 sockets, 16core):
>>>
>>> 1, bitmap tag allocation(3.15-rc7-next):
>>> - softirq mode: 759K IOPS
>>> - timer mode: 409K IOPS
>>>
>>> 2, percpu_ida allocation(3.15-rc7)
>>> - softirq mode: 1116K IOPS
>>> - timer mode: 411K IOPS
>>
>>
>> It's hard to say if this is close, or whether we are just timer bound at
>> that point.
>
> You are right, my previous test should be timer bound, but it
> should be eased by increasing timer period.
>
> I do the test again with increasing parameter of completion_nsec
> to 235000 from default 10000:
>
> 1, nullblk(timer mode)3.15-rc7:
> - each fio cpu utilization: 80% ~ 90%
> - 860K IOPS
>
> 2, nullbk(timer mode)3.15-rc7-next
> - each fio cpu utilization: 70~80%
> - 940K IOPS
>
> Then bitmap based allocation can be observed to be a bit
> better than percpu ida.

That's more in line with the real-device testing I did. If tags are 
plentiful, it's a wash between the two. But once you exceed 50% tag 
utilization, percpu_ida starts to degrade, in some cases very badly. 
This is especially apparent on bigger 2-socket or 4-socket boxes.
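
A rough way to reproduce that regime with null_blk (a sketch; the
numbers are illustrative, not from my tests) is to give the device far
fewer tags than the workload keeps in flight:

  # 64 tags vs ~1024 requests in flight -> heavy tag contention
  modprobe null_blk queue_mode=2 submit_queues=1 hw_queue_depth=64
  fio --name=t --filename=/dev/nullb0 --direct=1 --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=64 --numjobs=16 --group_reporting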

-- 
Jens Axboe


