linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
@ 2010-07-07 15:22 Corrado Zoccolo
  2010-07-07 15:56 ` Corrado Zoccolo
  2010-07-07 17:03 ` Jeff Moyer
  0 siblings, 2 replies; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-07 15:22 UTC (permalink / raw)
  To: Jens Axboe, Linux-Kernel; +Cc: Jeff Moyer, Vivek Goyal

Hi Jens,
patch 8e55063 "cfq-iosched: fix corner cases in idling logic" is
suspected of causing some regressions on high-end hardware.
The two patches in this series:
- [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
- [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
fix two issues that I have identified in how RQ_NOIDLE is used by the
upper layers.
The first patch makes sure that an RQ_NOIDLE request coming after a
sequence of possibly idling requests from the same queue on the
no-idle tree will clear the noidle_tree_requires_idle flag.
The second patch enables RQ_NOIDLE for queues on the idling tree,
restoring the pre-8e55063 behaviour.

Another option to consider is a partial revert of 8e55063, if the
corner cases we are trying to handle are not frequent enough to
justify the added complexity.
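
For reference, the completion-path logic introduced by 8e55063, as it
looks in cfq_completed_request() in the current tree, is roughly the
following (a paraphrase, not an exact quote of the code):

	if (sync && cfqq_empty && !cfq_close_cooperator(cfqd, cfqq)) {
		cfqd->noidle_tree_requires_idle |=
			!(rq->cmd_flags & REQ_NOIDLE);
		/*
		 * Idling is armed for SYNC_WORKLOAD, and for the no-idle
		 * tree only if at least one !REQ_NOIDLE request was seen
		 * during this workload slice, or if this is the only
		 * queue in the group.
		 */
		if (cfqd->serving_type == SYNC_WORKLOAD
		    || cfqd->noidle_tree_requires_idle
		    || cfqq->cfqg->nr_cfqq == 1)
			cfq_arm_slice_timer(cfqd);
	}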

Thanks,
Corrado


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 15:22 [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling Corrado Zoccolo
@ 2010-07-07 15:56 ` Corrado Zoccolo
  2010-07-07 17:03 ` Jeff Moyer
  1 sibling, 0 replies; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-07 15:56 UTC (permalink / raw)
  To: Jens Axboe, Linux-Kernel

Fixed Jens' email address, and resending the patches based on the for-2.6.36 tree.
On Wed, Jul 7, 2010 at 5:22 PM, Corrado Zoccolo <czoccolo@gmail.com> wrote:
> Hi Jens,
> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
> suspected for some regressions on high end hardware.
> The two patches from this series:
> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
> fix two issues that I have identified, related to how RQ_NOIDLE is
> used by the upper layers.
> First patch makes sure that a RQ_NOIDLE coming after a sequence of
> possibly idling requests from the same queue on the no-idle tree will
> clear the noidle_tree_requires_idle flag.
> Second patch enables RQ_NOIDLE for queues in the idling tree,
> restoring the behaviour pre-8e55063 patch.
>
> An other option to consider is the partial revert of 8e55063, if the
> corner cases we are trying to handle are not frequent enough to
> justify this added complexity.
>
> Thanks,
> Corrado
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 15:22 [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling Corrado Zoccolo
  2010-07-07 15:56 ` Corrado Zoccolo
@ 2010-07-07 17:03 ` Jeff Moyer
  2010-07-07 17:39   ` Corrado Zoccolo
                     ` (2 more replies)
  1 sibling, 3 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-07 17:03 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

Corrado Zoccolo <czoccolo@gmail.com> writes:

> Hi Jens,
> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
> suspected for some regressions on high end hardware.
> The two patches from this series:
> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
> fix two issues that I have identified, related to how RQ_NOIDLE is
> used by the upper layers.
> First patch makes sure that a RQ_NOIDLE coming after a sequence of
> possibly idling requests from the same queue on the no-idle tree will
> clear the noidle_tree_requires_idle flag.
> Second patch enables RQ_NOIDLE for queues in the idling tree,
> restoring the behaviour pre-8e55063 patch.

Hi, Corrado,

I ran your kernel through my tests.  Here are the results, up against
vanilla, deadline, and the blk_yield patch set:

                  just     just |   mixed   mixed
                fs_mark    fio  | fs_mark    fio
--------------------------------+----------------
deadline         529.44   151.4 |   450.0    78.2
vanilla cfq      107.88   164.4 |     6.6   137.2
blk_yield cfq    530.82   158.7 |   113.2    78.6
corrado cfq       80.82   138.1 |     4.5   130.7

fs_mark results are in files/second, fio results are in MB/s.  All
results are the average of 5 runs.  In order to get results for the
mixed workload for both vanilla and Corrado's kernels, I had to extend
the runtime from 30s to 300s.

So, the changes proposed in this thread actually make performance worse
across the board.

I re-ran my tests against a RHEL 5 kernel (which is based on 2.6.18),
and it shows that fs_mark performance is much better than stock CFQ in
2.6.35-rc3, and the mixed workload results are much the same as they are
now (which is to say, the fs_mark process is completely starved by the
sequential reader).  So, that problem has existed for a long time.

I'm still in the process of collecting data from production servers and
will report back with my findings there.

Cheers,
Jeff


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 17:03 ` Jeff Moyer
@ 2010-07-07 17:39   ` Corrado Zoccolo
  2010-07-07 20:06     ` Jeff Moyer
  2010-07-07 17:50   ` Vivek Goyal
  2010-07-08 14:35   ` Vivek Goyal
  2 siblings, 1 reply; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-07 17:39 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
>
>> Hi Jens,
>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>> suspected for some regressions on high end hardware.
>> The two patches from this series:
>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>> fix two issues that I have identified, related to how RQ_NOIDLE is
>> used by the upper layers.
>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
>> possibly idling requests from the same queue on the no-idle tree will
>> clear the noidle_tree_requires_idle flag.
>> Second patch enables RQ_NOIDLE for queues in the idling tree,
>> restoring the behaviour pre-8e55063 patch.
>
> Hi, Corrado,
>
> I ran your kernel through my tests.  Here are the results, up against
> vanilla, deadline, and the blk_yield patch set:
>
Hi Jeff,
can you also add cfq with 8e55063 reverted to the testing mix?

>                 just    just
>                fs_mark  fio        mixed
> -------------------------------+--------------
> deadline        529.44   151.4 | 450.0    78.2
> vanilla cfq     107.88   164.4 |   6.6   137.2
> blk_yield cfq   530.82   158.7 | 113.2    78.6
> corrado cfq      80.82   138.1 |   4.5   130.7

So it doesn't seem to help. I wonder if other parts of that commit are
affecting those workloads.

>
> fs_mark results are in files/second, fio results are in MB/s.  All
> results are the average of 5 runs.  In order to get results for the
> mixed workload for both vanilla and Corrado's kernels, I had to extend
> the runtime from 30s to 300s.
>
> So, the changes proposed in this thread actually make performance worse
> across the board.
>
> I re-ran my tests against a RHEL 5 kernel (which is based on 2.6.18),
> and it shows that fs_mark performance is much better than stock CFQ in
> 2.6.35-rc3, and the mixed workload results are much the same as they are
> now (which is to say, the fs_mark process is completely starved by the
> sequential reader).  So, that problem has existed for a long time.
>
> I'm still in the process of collecting data from production servers and
> will report back with my findings there.

Thanks,
Corrado

>
> Cheers,
> Jeff
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 17:03 ` Jeff Moyer
  2010-07-07 17:39   ` Corrado Zoccolo
@ 2010-07-07 17:50   ` Vivek Goyal
  2010-07-08 14:35   ` Vivek Goyal
  2 siblings, 0 replies; 27+ messages in thread
From: Vivek Goyal @ 2010-07-07 17:50 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Corrado Zoccolo, Jens Axboe, Linux-Kernel

On Wed, Jul 07, 2010 at 01:03:08PM -0400, Jeff Moyer wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
> 
> > Hi Jens,
> > patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
> > suspected for some regressions on high end hardware.
> > The two patches from this series:
> > - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
> > - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
> > fix two issues that I have identified, related to how RQ_NOIDLE is
> > used by the upper layers.
> > First patch makes sure that a RQ_NOIDLE coming after a sequence of
> > possibly idling requests from the same queue on the no-idle tree will
> > clear the noidle_tree_requires_idle flag.
> > Second patch enables RQ_NOIDLE for queues in the idling tree,
> > restoring the behaviour pre-8e55063 patch.
> 
> Hi, Corrado,
> 
> I ran your kernel through my tests.  Here are the results, up against
> vanilla, deadline, and the blk_yield patch set:
> 
>                  just    just
>                 fs_mark  fio        mixed	
> -------------------------------+--------------
> deadline        529.44   151.4 | 450.0    78.2
> vanilla cfq     107.88   164.4 |   6.6   137.2
> blk_yield cfq   530.82   158.7 | 113.2    78.6
> corrado cfq      80.82   138.1 |   4.5   130.7
> 
> fs_mark results are in files/second, fio results are in MB/s.  All
> results are the average of 5 runs.  In order to get results for the
> mixed workload for both vanilla and Corrado's kernels, I had to extend
> the runtime from 30s to 300s.
> 
> So, the changes proposed in this thread actually make performance worse
> across the board.

This is really surprising. It should have at least helped in the
fs_mark-only case.

I think what is happening is that we are idling on the fsync queue
(because it is the last queue in the group). After some time the jbd
thread will submit some IO, and we will not preempt the fsync thread.
That's why I had also implemented logic to allow preemption in the case
of group idle, and that had helped.
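
(For illustration only, and not the actual patch referred to above:
that preemption idea maps to an extra check in cfq_should_preempt(),
along these lines.)

	/*
	 * Hypothetical sketch: if the active queue is the last one in
	 * its group (so we are only group-idling on it) and the new
	 * request is sync and marked REQ_NOIDLE (e.g. the jbd commit),
	 * let it preempt instead of waiting out the idle window.
	 */
	if (cfqq->cfqg->nr_cfqq == 1 && rq_is_sync(rq) &&
	    (rq->cmd_flags & REQ_NOIDLE))
		return true;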

> 
> I re-ran my tests against a RHEL 5 kernel (which is based on 2.6.18),
> and it shows that fs_mark performance is much better than stock CFQ in
> 2.6.35-rc3, and the mixed workload results are much the same as they are
> now (which is to say, the fs_mark process is completely starved by the
> sequential reader).  So, that problem has existed for a long time.

If we just stop idling on WRITE_SYNC, we should be back to almost the
2.6.18 CFQ behavior.
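
In code, that maps to something like the following in
cfq_update_idle_window() (just a sketch; WRITE_SYNC requests carry the
noidle hint, so keying off REQ_NOIDLE covers them):

	/*
	 * Sketch: never enable the idle window for a queue whose next
	 * request is marked REQ_NOIDLE (as WRITE_SYNC requests are), so
	 * we stop idling on fsync/O_SYNC writers.
	 */
	if (cfqq->next_rq && (cfqq->next_rq->cmd_flags & REQ_NOIDLE))
		enable_idle = 0;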

> 
> I'm still in the process of collecting data from production servers and
> will report back with my findings there.

That would be great.

Vivek


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 17:39   ` Corrado Zoccolo
@ 2010-07-07 20:06     ` Jeff Moyer
  2010-07-08 14:38       ` Corrado Zoccolo
  2010-07-09 10:33       ` Corrado Zoccolo
  0 siblings, 2 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-07 20:06 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

Corrado Zoccolo <czoccolo@gmail.com> writes:

> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>
>>> Hi Jens,
>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>>> suspected for some regressions on high end hardware.
>>> The two patches from this series:
>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>>> fix two issues that I have identified, related to how RQ_NOIDLE is
>>> used by the upper layers.
>>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
>>> possibly idling requests from the same queue on the no-idle tree will
>>> clear the noidle_tree_requires_idle flag.
>>> Second patch enables RQ_NOIDLE for queues in the idling tree,
>>> restoring the behaviour pre-8e55063 patch.
>>
>> Hi, Corrado,
>>
>> I ran your kernel through my tests.  Here are the results, up against
>> vanilla, deadline, and the blk_yield patch set:
>>
> Hi Jeff,
> can you also add cfq with 8e55063 reverted to the testing mix?

Sure, the results now look like this:

                  just     just |   mixed   mixed
                fs_mark    fio  | fs_mark    fio
--------------------------------+----------------
deadline         529.44   151.4 |   450.0    78.2
vanilla cfq      107.88   164.4 |     6.6   137.2
blk_yield cfq    530.82   158.7 |   113.2    78.6
corrado cfq      110.16   220.6 |     7.0   159.8
8e55063 revert   559.66   198.9 |    16.1   153.3

I had accidentally run your patch set (corrado cfq) on ext3, so the
numbers were a bit off (everything else was run against ext4).  The
corrected numbers above reflect the performance on ext4, which is much
better for the sequential reader, but still not great for the fs_mark
run.  Reverting 8e55063 definitely gets us into better shape.  However,
if we care about the mixed workload, then it won't be enough.

It's worth noting that I can't explain the jump from 151MB/s for
deadline to 220MB/s for corrado cfq.  I'm not sure how driving a
sequential read at a queue depth of one can vary at all.  Those are the
averages of 5 runs, and this storage should be accessible only by me,
so I am at a loss.

Cheers,
Jeff


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 17:03 ` Jeff Moyer
  2010-07-07 17:39   ` Corrado Zoccolo
  2010-07-07 17:50   ` Vivek Goyal
@ 2010-07-08 14:35   ` Vivek Goyal
  2010-07-08 14:38     ` Jeff Moyer
  2010-07-08 14:45     ` Corrado Zoccolo
  2 siblings, 2 replies; 27+ messages in thread
From: Vivek Goyal @ 2010-07-08 14:35 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Corrado Zoccolo, Jens Axboe, Linux-Kernel

On Wed, Jul 07, 2010 at 01:03:08PM -0400, Jeff Moyer wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
> 
> > Hi Jens,
> > patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
> > suspected for some regressions on high end hardware.
> > The two patches from this series:
> > - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
> > - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
> > fix two issues that I have identified, related to how RQ_NOIDLE is
> > used by the upper layers.
> > First patch makes sure that a RQ_NOIDLE coming after a sequence of
> > possibly idling requests from the same queue on the no-idle tree will
> > clear the noidle_tree_requires_idle flag.
> > Second patch enables RQ_NOIDLE for queues in the idling tree,
> > restoring the behaviour pre-8e55063 patch.
> 
> Hi, Corrado,
> 
> I ran your kernel through my tests.  Here are the results, up against
> vanilla, deadline, and the blk_yield patch set:
> 
>                  just    just
>                 fs_mark  fio        mixed	
> -------------------------------+--------------
> deadline        529.44   151.4 | 450.0    78.2
> vanilla cfq     107.88   164.4 |   6.6   137.2
> blk_yield cfq   530.82   158.7 | 113.2    78.6
> corrado cfq      80.82   138.1 |   4.5   130.7
> 
> fs_mark results are in files/second, fio results are in MB/s.  All
> results are the average of 5 runs.  In order to get results for the
> mixed workload for both vanilla and Corrado's kernels, I had to extend
> the runtime from 30s to 300s.
> 
> So, the changes proposed in this thread actually make performance worse
> across the board.
> 
> I re-ran my tests against a RHEL 5 kernel (which is based on 2.6.18),
> and it shows that fs_mark performance is much better than stock CFQ in
> 2.6.35-rc3, and the mixed workload results are much the same as they are
> now (which is to say, the fs_mark process is completely starved by the
> sequential reader).  So, that problem has existed for a long time.
> 
> I'm still in the process of collecting data from production servers and
> will report back with my findings there.

Hi Jeff and all,

How about if we simply get rid of idling on RQ_NOIDLE threads (as
Corrado's patch series does) and not try to solve the problem of fsync
being starved in the presence of sequential readers?  It might just be
a theoretical problem that not many people are running into.  That's
how CFQ has behaved for a long time, and if nobody is complaining then
we probably don't have to fix it.

Thanks
Vivek


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 20:06     ` Jeff Moyer
@ 2010-07-08 14:38       ` Corrado Zoccolo
  2010-07-09 10:33       ` Corrado Zoccolo
  1 sibling, 0 replies; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-08 14:38 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

On Wed, Jul 7, 2010 at 10:06 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
>
>> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>>
>>>> Hi Jens,
>>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>>>> suspected for some regressions on high end hardware.
>>>> The two patches from this series:
>>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>>>> fix two issues that I have identified, related to how RQ_NOIDLE is
>>>> used by the upper layers.
>>>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
>>>> possibly idling requests from the same queue on the no-idle tree will
>>>> clear the noidle_tree_requires_idle flag.
>>>> Second patch enables RQ_NOIDLE for queues in the idling tree,
>>>> restoring the behaviour pre-8e55063 patch.
>>>
>>> Hi, Corrado,
>>>
>>> I ran your kernel through my tests.  Here are the results, up against
>>> vanilla, deadline, and the blk_yield patch set:
>>>
>> Hi Jeff,
>> can you also add cfq with 8e55063 reverted to the testing mix?
>
> Sure, the results now look like this:
>
>                 just    just
>                fs_mark  fio        mixed
> -------------------------------+--------------
> deadline        529.44   151.4 | 450.0    78.2
> vanilla cfq     107.88   164.4 |   6.6   137.2
> blk_yield cfq   530.82   158.7 | 113.2    78.6
> corrado cfq     110.16   220.6 |   7.0   159.8
> 8e55063 revert  559.66   198.9 |  16.1   153.3
>
> I had accidentally run your patch set (corrado cfq) on ext3, so the
> numbers were a bit off (everything else was run against ext4).  The
> corrected numbers above reflect the performance on ext4, which is much
> better for the sequential reader, but still not great for the fs_mark
> run.  Reverting 8e55063 definitely gets us into better shape.  However,
> if we care about the mixed workload, then it won't be enough.

I wonder why reverting 8e55063 gives such a big improvement on fsync
ops. Maybe, before 8e55063, we ended up not idling even when
cfq_arm_slice_timer was called, because other requests were still
pending? I think your patch that allows both async and sync requests
to be in flight at the same time could help here.

>
> It's worth noting that I can't explain that jump from 151MB/s for
> deadline vs 220MB/s for corrado cfq.  I'm not sure how you can vary
> driving a single queue depth sequential read at all.  Those are the
> averages of 5 runs and this storage should be solely accessible by me,
> so I am at a loss.

I guess ext4 tries to be smart and issues some background reads of the
fs data structures needed to keep reading the sequential file without
interruption.
Those reads will be far from the current head position, so servicing
them immediately (as deadline would) can cause a degradation, while
delaying them until you are servicing other random requests (as cfq
would) can help.

>
> Cheers,
> Jeff
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-08 14:35   ` Vivek Goyal
@ 2010-07-08 14:38     ` Jeff Moyer
  2010-07-08 14:45     ` Corrado Zoccolo
  1 sibling, 0 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-08 14:38 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Corrado Zoccolo, Jens Axboe, Linux-Kernel

Vivek Goyal <vgoyal@redhat.com> writes:

> On Wed, Jul 07, 2010 at 01:03:08PM -0400, Jeff Moyer wrote:
>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>> 
>> > Hi Jens,
>> > patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>> > suspected for some regressions on high end hardware.
>> > The two patches from this series:
>> > - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>> > - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>> > fix two issues that I have identified, related to how RQ_NOIDLE is
>> > used by the upper layers.
>> > First patch makes sure that a RQ_NOIDLE coming after a sequence of
>> > possibly idling requests from the same queue on the no-idle tree will
>> > clear the noidle_tree_requires_idle flag.
>> > Second patch enables RQ_NOIDLE for queues in the idling tree,
>> > restoring the behaviour pre-8e55063 patch.
>> 
>> Hi, Corrado,
>> 
>> I ran your kernel through my tests.  Here are the results, up against
>> vanilla, deadline, and the blk_yield patch set:
>> 
>>                  just    just
>>                 fs_mark  fio        mixed	
>> -------------------------------+--------------
>> deadline        529.44   151.4 | 450.0    78.2
>> vanilla cfq     107.88   164.4 |   6.6   137.2
>> blk_yield cfq   530.82   158.7 | 113.2    78.6
>> corrado cfq      80.82   138.1 |   4.5   130.7
>> 
>> fs_mark results are in files/second, fio results are in MB/s.  All
>> results are the average of 5 runs.  In order to get results for the
>> mixed workload for both vanilla and Corrado's kernels, I had to extend
>> the runtime from 30s to 300s.
>> 
>> So, the changes proposed in this thread actually make performance worse
>> across the board.
>> 
>> I re-ran my tests against a RHEL 5 kernel (which is based on 2.6.18),
>> and it shows that fs_mark performance is much better than stock CFQ in
>> 2.6.35-rc3, and the mixed workload results are much the same as they are
>> now (which is to say, the fs_mark process is completely starved by the
>> sequential reader).  So, that problem has existed for a long time.
>> 
>> I'm still in the process of collecting data from production servers and
>> will report back with my findings there.
>
> Hi Jeff and all,
>
> How about if we simply get rid of idling on RQ_NOIDLE threads (as
> corrado's patch series does) and not try to solve the problem of fsync
> being starved in the presence of sequential readers. I mean it might just
> be a theoritical problem and not many people are running into it. That's
> how CFQ has been behaving for long-2 time and if nobody is complaining
> then we probably don't have to fix it.

I would instead suggest we just revert that one commit, if this is the
route we're going to go.  Please keep in mind, though, that folks who
may have experienced this issue may also have just switched to deadline.

Cheers,
Jeff


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-08 14:35   ` Vivek Goyal
  2010-07-08 14:38     ` Jeff Moyer
@ 2010-07-08 14:45     ` Corrado Zoccolo
  1 sibling, 0 replies; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-08 14:45 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Jeff Moyer, Jens Axboe, Linux-Kernel

On Thu, Jul 8, 2010 at 4:35 PM, Vivek Goyal <vgoyal@redhat.com> wrote:
> On Wed, Jul 07, 2010 at 01:03:08PM -0400, Jeff Moyer wrote:
>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>
>> > Hi Jens,
>> > patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>> > suspected for some regressions on high end hardware.
>> > The two patches from this series:
>> > - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>> > - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>> > fix two issues that I have identified, related to how RQ_NOIDLE is
>> > used by the upper layers.
>> > First patch makes sure that a RQ_NOIDLE coming after a sequence of
>> > possibly idling requests from the same queue on the no-idle tree will
>> > clear the noidle_tree_requires_idle flag.
>> > Second patch enables RQ_NOIDLE for queues in the idling tree,
>> > restoring the behaviour pre-8e55063 patch.
>>
>> Hi, Corrado,
>>
>> I ran your kernel through my tests.  Here are the results, up against
>> vanilla, deadline, and the blk_yield patch set:
>>
>>                  just    just
>>                 fs_mark  fio        mixed
>> -------------------------------+--------------
>> deadline        529.44   151.4 | 450.0    78.2
>> vanilla cfq     107.88   164.4 |   6.6   137.2
>> blk_yield cfq   530.82   158.7 | 113.2    78.6
>> corrado cfq      80.82   138.1 |   4.5   130.7
>>
>> fs_mark results are in files/second, fio results are in MB/s.  All
>> results are the average of 5 runs.  In order to get results for the
>> mixed workload for both vanilla and Corrado's kernels, I had to extend
>> the runtime from 30s to 300s.
>>
>> So, the changes proposed in this thread actually make performance worse
>> across the board.
>>
>> I re-ran my tests against a RHEL 5 kernel (which is based on 2.6.18),
>> and it shows that fs_mark performance is much better than stock CFQ in
>> 2.6.35-rc3, and the mixed workload results are much the same as they are
>> now (which is to say, the fs_mark process is completely starved by the
>> sequential reader).  So, that problem has existed for a long time.
>>
>> I'm still in the process of collecting data from production servers and
>> will report back with my findings there.
>
> Hi Jeff and all,
>
> How about if we simply get rid of idling on RQ_NOIDLE threads (as
> corrado's patch series does) and not try to solve the problem of fsync
> being starved in the presence of sequential readers. I mean it might just
> be a theoritical problem and not many people are running into it. That's
> how CFQ has been behaving for long-2 time and if nobody is complaining
> then we probably don't have to fix it.

8e55063 was done to fix theoretical problems as well :)
I think, instead, that Jeff's approach of yielding the queue when
better knowledge is available is good, and this set of patches is not
intended as a replacement. It is intended just to fix some regressions
introduced by a previous commit, and I hope it can work together with
Jeff's patch.
Clearly, if RQ_NOIDLE is used only in the places that Jeff is already
handling, then it is better to remove RQ_NOIDLE handling completely,
and my patch set becomes obsolete.

Thanks,
Corrado
>
> Thanks
> Vivek
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-07 20:06     ` Jeff Moyer
  2010-07-08 14:38       ` Corrado Zoccolo
@ 2010-07-09 10:33       ` Corrado Zoccolo
  2010-07-09 13:23         ` Vivek Goyal
                           ` (2 more replies)
  1 sibling, 3 replies; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-09 10:33 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

[-- Attachment #1: Type: text/plain, Size: 3785 bytes --]

On Wed, Jul 7, 2010 at 10:06 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
>
>> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>>
>>>> Hi Jens,
>>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>>>> suspected for some regressions on high end hardware.
>>>> The two patches from this series:
>>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>>>> fix two issues that I have identified, related to how RQ_NOIDLE is
>>>> used by the upper layers.
>>>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
>>>> possibly idling requests from the same queue on the no-idle tree will
>>>> clear the noidle_tree_requires_idle flag.
>>>> Second patch enables RQ_NOIDLE for queues in the idling tree,
>>>> restoring the behaviour pre-8e55063 patch.
>>>
>>> Hi, Corrado,
>>>
>>> I ran your kernel through my tests.  Here are the results, up against
>>> vanilla, deadline, and the blk_yield patch set:
>>>
>> Hi Jeff,
>> can you also add cfq with 8e55063 reverted to the testing mix?
>
> Sure, the results now look like this:
>
>                 just    just
>                fs_mark  fio        mixed
> -------------------------------+--------------
> deadline        529.44   151.4 | 450.0    78.2
> vanilla cfq     107.88   164.4 |   6.6   137.2
> blk_yield cfq   530.82   158.7 | 113.2    78.6
> corrado cfq     110.16   220.6 |   7.0   159.8
> 8e55063 revert  559.66   198.9 |  16.1   153.3
>
> I had accidentally run your patch set (corrado cfq) on ext3, so the
> numbers were a bit off (everything else was run against ext4).  The
> corrected numbers above reflect the performance on ext4, which is much
> better for the sequential reader, but still not great for the fs_mark
> run.  Reverting 8e55063 definitely gets us into better shape.  However,
> if we care about the mixed workload, then it won't be enough.

I am wondering why deadline performs so well in the fs_mark workload.
Is it because it doesn't distinguish between sync and async writes?
Maybe we can achieve something similar by putting all sync writes that
are marked REQ_NOIDLE on the no-idle tree. This, coupled with making
jbd(2) perform sync writes, should make the yield automatic, since
they would all live on the same tree, for which we don't idle between
queues, and it should provide fairness with respect to a sequential
reader (which lives on the other tree).

Can you test the attached patch, where I also added your change to
make jbd(2) perform sync writes?

Thanks,
Corrado

>
> It's worth noting that I can't explain that jump from 151MB/s for
> deadline vs 220MB/s for corrado cfq.  I'm not sure how you can vary
> driving a single queue depth sequential read at all.  Those are the
> averages of 5 runs and this storage should be solely accessible by me,
> so I am at a loss.
>
> Cheers,
> Jeff
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda

[-- Attachment #2: 0001-p.o.c.-fairness-between-seq-reader-and-sync-writers.patch --]
[-- Type: application/octet-stream, Size: 3312 bytes --]

From 88e4c26c382e5ea8466b4bae3ca6efae33eeaa91 Mon Sep 17 00:00:00 2001
From: Corrado Zoccolo <czoccolo@gmail.com>
Date: Fri, 9 Jul 2010 12:28:00 +0200
Subject: [PATCH] p.o.c.: fairness between seq reader and sync writers

Force all queues that have REQ_NOIDLE requests to be put in the noidle
tree. This allows seamless switching between them, but ensures fairness
when competing with sequential readers.
---
 block/cfq-iosched.c |   18 ++++--------------
 fs/jbd/commit.c     |    2 +-
 fs/jbd2/commit.c    |    2 +-
 3 files changed, 6 insertions(+), 16 deletions(-)

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index eb4086f..49dada4 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -216,7 +216,6 @@ struct cfq_data {
 	enum wl_type_t serving_type;
 	unsigned long workload_expires;
 	struct cfq_group *serving_group;
-	bool noidle_tree_requires_idle;
 
 	/*
 	 * Each priority tree is sorted by next_request position.  These
@@ -2126,7 +2125,6 @@ static void choose_service_tree(struct cfq_data *cfqd, struct cfq_group *cfqg)
 	slice = max_t(unsigned, slice, CFQ_MIN_TT);
 	cfq_log(cfqd, "workload slice:%d", slice);
 	cfqd->workload_expires = jiffies + slice;
-	cfqd->noidle_tree_requires_idle = false;
 }
 
 static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd)
@@ -3108,7 +3106,9 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 	if (cfqq->queued[0] + cfqq->queued[1] >= 4)
 		cfq_mark_cfqq_deep(cfqq);
 
-	if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
+	if (cfqq->next_rq && (cfqq->next_rq->cmd_flags & REQ_NOIDLE))
+		enable_idle = 0;
+	else if (!atomic_read(&cic->ioc->nr_tasks) || !cfqd->cfq_slice_idle ||
 	    (!cfq_cfqq_deep(cfqq) && CFQQ_SEEKY(cfqq)))
 		enable_idle = 0;
 	else if (sample_valid(cic->ttime_samples)) {
@@ -3421,17 +3421,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq)
 			cfq_slice_expired(cfqd, 1);
 		else if (sync && cfqq_empty &&
 			 !cfq_close_cooperator(cfqd, cfqq)) {
-			cfqd->noidle_tree_requires_idle |=
-				!(rq->cmd_flags & REQ_NOIDLE);
-			/*
-			 * Idling is enabled for SYNC_WORKLOAD.
-			 * SYNC_NOIDLE_WORKLOAD idles at the end of the tree
-			 * only if we processed at least one !REQ_NOIDLE request
-			 */
-			if (cfqd->serving_type == SYNC_WORKLOAD
-			    || cfqd->noidle_tree_requires_idle
-			    || cfqq->cfqg->nr_cfqq == 1)
-				cfq_arm_slice_timer(cfqd);
+			cfq_arm_slice_timer(cfqd);
 		}
 	}
 
diff --git a/fs/jbd/commit.c b/fs/jbd/commit.c
index 28a9dda..d97a0c6 100644
--- a/fs/jbd/commit.c
+++ b/fs/jbd/commit.c
@@ -317,7 +317,7 @@ void journal_commit_transaction(journal_t *journal)
 	int first_tag = 0;
 	int tag_flag;
 	int i;
-	int write_op = WRITE;
+	int write_op = WRITE_SYNC;
 
 	/*
 	 * First job: lock down the current transaction and wait for
diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index 75716d3..a078744 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -369,7 +369,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
 	int tag_bytes = journal_tag_bytes(journal);
 	struct buffer_head *cbh = NULL; /* For transactional checksums */
 	__u32 crc32_sum = ~0;
-	int write_op = WRITE;
+	int write_op = WRITE_SYNC;
 
 	/*
 	 * First job: lock down the current transaction and wait for
-- 
1.6.4.4



* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-09 10:33       ` Corrado Zoccolo
@ 2010-07-09 13:23         ` Vivek Goyal
  2010-07-09 14:07         ` Jeff Moyer
  2010-07-13 19:38         ` Jeff Moyer
  2 siblings, 0 replies; 27+ messages in thread
From: Vivek Goyal @ 2010-07-09 13:23 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Jeff Moyer, Jens Axboe, Linux-Kernel

On Fri, Jul 09, 2010 at 12:33:36PM +0200, Corrado Zoccolo wrote:
> On Wed, Jul 7, 2010 at 10:06 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> > Corrado Zoccolo <czoccolo@gmail.com> writes:
> >
> >> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> >>> Corrado Zoccolo <czoccolo@gmail.com> writes:
> >>>
> >>>> Hi Jens,
> >>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
> >>>> suspected for some regressions on high end hardware.
> >>>> The two patches from this series:
> >>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
> >>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
> >>>> fix two issues that I have identified, related to how RQ_NOIDLE is
> >>>> used by the upper layers.
> >>>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
> >>>> possibly idling requests from the same queue on the no-idle tree will
> >>>> clear the noidle_tree_requires_idle flag.
> >>>> Second patch enables RQ_NOIDLE for queues in the idling tree,
> >>>> restoring the behaviour pre-8e55063 patch.
> >>>
> >>> Hi, Corrado,
> >>>
> >>> I ran your kernel through my tests.  Here are the results, up against
> >>> vanilla, deadline, and the blk_yield patch set:
> >>>
> >> Hi Jeff,
> >> can you also add cfq with 8e55063 reverted to the testing mix?
> >
> > Sure, the results now look like this:
> >
> >                 just    just
> >                fs_mark  fio        mixed
> > -------------------------------+--------------
> > deadline        529.44   151.4 | 450.0    78.2
> > vanilla cfq     107.88   164.4 |   6.6   137.2
> > blk_yield cfq   530.82   158.7 | 113.2    78.6
> > corrado cfq     110.16   220.6 |   7.0   159.8
> > 8e55063 revert  559.66   198.9 |  16.1   153.3
> >
> > I had accidentally run your patch set (corrado cfq) on ext3, so the
> > numbers were a bit off (everything else was run against ext4).  The
> > corrected numbers above reflect the performance on ext4, which is much
> > better for the sequential reader, but still not great for the fs_mark
> > run.  Reverting 8e55063 definitely gets us into better shape.  However,
> > if we care about the mixed workload, then it won't be enough.
> 
> Wondering why deadline performs so well in the fs_mark workload. Is it
> because it doesn't distinguish between sync and async writes?
> Maybe we can achieve something similar by putting all sync writes
> (that are marked as REQ_NOIDLE) in the noidle tree? This, coupled with
> making jbd(2) perform sync writes, should make the yield automatic,
> since they all live in the same tree for which we don't idle between
> queues, and should be able to provide fairness compared to a
> sequential reader (that lives in the other tree).
> 

This makes sense conceptually, at least at the CFQ level. By putting
O_SYNC/fsync writes on the sync-noidle tree we will not be able to take
advantage of the sequential nature of a queue, but Christoph mentioned
that sequential writes in general should be lumped together and then
sent down to CFQ instead of being issued as small writes after some
delay. So this is probably not an issue.

What I am not sure about is the impact of switching the jbd thread's
writes from async to sync (WRITE ---> WRITE_SYNC), especially if
somebody is also journalling data (data=journal).

But it is definitely worth trying, because then we don't have to idle
on individual WRITE_SYNC queues, and the fsync performance issue should
be solved as well.

Thanks
Vivek


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-09 10:33       ` Corrado Zoccolo
  2010-07-09 13:23         ` Vivek Goyal
@ 2010-07-09 14:07         ` Jeff Moyer
  2010-07-09 19:45           ` Corrado Zoccolo
  2010-07-13 19:38         ` Jeff Moyer
  2 siblings, 1 reply; 27+ messages in thread
From: Jeff Moyer @ 2010-07-09 14:07 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

Corrado Zoccolo <czoccolo@gmail.com> writes:

> On Wed, Jul 7, 2010 at 10:06 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>
>>> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>>>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>>>
>>>>> Hi Jens,
>>>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>>>>> suspected for some regressions on high end hardware.
>>>>> The two patches from this series:
>>>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>>>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>>>>> fix two issues that I have identified, related to how RQ_NOIDLE is
>>>>> used by the upper layers.
>>>>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
>>>>> possibly idling requests from the same queue on the no-idle tree will
>>>>> clear the noidle_tree_requires_idle flag.
>>>>> Second patch enables RQ_NOIDLE for queues in the idling tree,
>>>>> restoring the behaviour pre-8e55063 patch.
>>>>
>>>> Hi, Corrado,
>>>>
>>>> I ran your kernel through my tests.  Here are the results, up against
>>>> vanilla, deadline, and the blk_yield patch set:
>>>>
>>> Hi Jeff,
>>> can you also add cfq with 8e55063 reverted to the testing mix?
>>
>> Sure, the results now look like this:
>>
>>                 just    just
>>                fs_mark  fio        mixed
>> -------------------------------+--------------
>> deadline        529.44   151.4 | 450.0    78.2
>> vanilla cfq     107.88   164.4 |   6.6   137.2
>> blk_yield cfq   530.82   158.7 | 113.2    78.6
>> corrado cfq     110.16   220.6 |   7.0   159.8
>> 8e55063 revert  559.66   198.9 |  16.1   153.3
>>
>> I had accidentally run your patch set (corrado cfq) on ext3, so the
>> numbers were a bit off (everything else was run against ext4).  The
>> corrected numbers above reflect the performance on ext4, which is much
>> better for the sequential reader, but still not great for the fs_mark
>> run.  Reverting 8e55063 definitely gets us into better shape.  However,
>> if we care about the mixed workload, then it won't be enough.
>
> Wondering why deadline performs so well in the fs_mark workload. Is it
> because it doesn't distinguish between sync and async writes?

It performs well because it doesn't do any idling.

> Maybe we can achieve something similar by putting all sync writes
> (that are marked as REQ_NOIDLE) in the noidle tree? This, coupled with
> making jbd(2) perform sync writes, should make the yield automatic,
> since they all live in the same tree for which we don't idle between
> queues, and should be able to provide fairness compared to a
> sequential reader (that lives in the other tree).
>
> Can you test the attached patch, where I also added your changes to
> make jbd(2) to perform sync writes?

I'm not sure what kernel you generated that patch against.  I'm working
with 2.6.35-rc3 or later, and your patch does not apply there.

Cheers,
Jeff


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-09 14:07         ` Jeff Moyer
@ 2010-07-09 19:45           ` Corrado Zoccolo
  2010-07-09 20:48             ` Jeff Moyer
  0 siblings, 1 reply; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-09 19:45 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

On Fri, Jul 9, 2010 at 4:07 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
>
>> On Wed, Jul 7, 2010 at 10:06 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>>
>>>> On Wed, Jul 7, 2010 at 7:03 PM, Jeff Moyer <jmoyer@redhat.com> wrote:
>>>>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>>>>>
>>>>>> Hi Jens,
>>>>>> patch 8e55063 "cfq-iosched: fix corner cases in idling logic", is
>>>>>> suspected for some regressions on high end hardware.
>>>>>> The two patches from this series:
>>>>>> - [PATCH 1/2] cfq-iosched: fix tree-wide handling of rq_noidle
>>>>>> - [PATCH 2/2] cfq-iosched: RQ_NOIDLE enabled for SYNC_WORKLOAD
>>>>>> fix two issues that I have identified, related to how RQ_NOIDLE is
>>>>>> used by the upper layers.
>>>>>> First patch makes sure that a RQ_NOIDLE coming after a sequence of
>>>>>> possibly idling requests from the same queue on the no-idle tree will
>>>>>> clear the noidle_tree_requires_idle flag.
>>>>>> Second patch enables RQ_NOIDLE for queues in the idling tree,
>>>>>> restoring the behaviour pre-8e55063 patch.
>>>>>
>>>>> Hi, Corrado,
>>>>>
>>>>> I ran your kernel through my tests.  Here are the results, up against
>>>>> vanilla, deadline, and the blk_yield patch set:
>>>>>
>>>> Hi Jeff,
>>>> can you also add cfq with 8e55063 reverted to the testing mix?
>>>
>>> Sure, the results now look like this:
>>>
>>>                 just    just
>>>                fs_mark  fio        mixed
>>> -------------------------------+--------------
>>> deadline        529.44   151.4 | 450.0    78.2
>>> vanilla cfq     107.88   164.4 |   6.6   137.2
>>> blk_yield cfq   530.82   158.7 | 113.2    78.6
>>> corrado cfq     110.16   220.6 |   7.0   159.8
>>> 8e55063 revert  559.66   198.9 |  16.1   153.3
>>>
>>> I had accidentally run your patch set (corrado cfq) on ext3, so the
>>> numbers were a bit off (everything else was run against ext4).  The
>>> corrected numbers above reflect the performance on ext4, which is much
>>> better for the sequential reader, but still not great for the fs_mark
>>> run.  Reverting 8e55063 definitely gets us into better shape.  However,
>>> if we care about the mixed workload, then it won't be enough.
>>
>> Wondering why deadline performs so well in the fs_mark workload. Is it
>> because it doesn't distinguish between sync and async writes?
>
> It performs well because it doesn't do any idling.
>
>> Maybe we can achieve something similar by putting all sync writes
>> (that are marked as REQ_NOIDLE) in the noidle tree? This, coupled with
>> making jbd(2) perform sync writes, should make the yield automatic,
>> since they all live in the same tree for which we don't idle between
>> queues, and should be able to provide fairness compared to a
>> sequential reader (that lives in the other tree).
>>
>> Can you test the attached patch, where I also added your changes to
>> make jbd(2) to perform sync writes?
>
> I'm not sure what kernel you generated that patch against.  I'm working
> with 2.6.35-rc3 or later, and your patch does not apply there.
It's Jens' block/for-2.6.36 tree.

>
> Cheers,
> Jeff
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-09 19:45           ` Corrado Zoccolo
@ 2010-07-09 20:48             ` Jeff Moyer
  0 siblings, 0 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-09 20:48 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Jens Axboe, Linux-Kernel, Vivek Goyal

Corrado Zoccolo <czoccolo@gmail.com> writes:

>> I'm not sure what kernel you generated that patch against.  I'm working
>> with 2.6.35-rc3 or later, and your patch does not apply there.

> It's Jens' block/for-2.6.36 tree.

OK.  I'll get back to you on this next week.  My storage is kind of
broken right now.  :(

-Jeff


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-09 10:33       ` Corrado Zoccolo
  2010-07-09 13:23         ` Vivek Goyal
  2010-07-09 14:07         ` Jeff Moyer
@ 2010-07-13 19:38         ` Jeff Moyer
  2010-07-13 19:56           ` Vivek Goyal
  2 siblings, 1 reply; 27+ messages in thread
From: Jeff Moyer @ 2010-07-13 19:38 UTC (permalink / raw)
  To: Corrado Zoccolo, axboe; +Cc: Linux-Kernel, Vivek Goyal

Corrado Zoccolo <czoccolo@gmail.com> writes:

> Can you test the attached patch, where I also added your changes to
> make jbd(2) to perform sync writes?

I got new storage, so I have new numbers.  I only re-ran deadline and
vanilla cfq for the fs_mark-only test.  The average of 10 runs, in
files/sec, comes out like so:

deadline:    571.98
vanilla cfq: 107.42
patched cfq: 460.9

Mixed workload results with your suggested patch:

fs_mark: 15.65 files/sec
fio: 132.5 MB/s

So, again, not looking great for the mixed workload, but the patch
does improve the fs_mark-only case.  Looking at the blktrace data shows
that the jbd2 thread preempts the fs_mark thread at all the right
times.  The only thing holding throughput back is the notion that we
should only dispatch from one queue at a time (even though the storage
is capable of serving both the reads and writes simultaneously).

I added in the patch that allows the simultaneous dispatch of both reads
and writes, and here are the results from that run:

fs_mark: 15.975 files/sec
fio: 132.4 MB/s

So, it looks like that didn't help.  The reason this patch doesn't come
close to the yield patch in the mixed workload is because the yield
patch set allows the fs_mark process to continue to issue I/O.  With
your patch, the fs_mark process does 64KB of I/O, the jbd2 thread does
the journal commit, and then the fio process runs again.  Given that the
fs_mark process typically only uses a small fraction of its time slice,
you end up with an unfair balance.

Now, we still have to decide whether that's a problem that needs
solving.  I tried to gather data from the field, but I've been unable to
conclusively say whether an application issues this sort of dependent
I/O.

As such, I am happy with this patch.  If we see that we need something
like the blk_yield approach, then I'm happy to resurrect that work.

Jens, do you find that an agreeable solution?  If so, you can add my
signed-off-by and tested-by to the patch that Corrado posted.

Cheers,
Jeff


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-13 19:38         ` Jeff Moyer
@ 2010-07-13 19:56           ` Vivek Goyal
  2010-07-13 20:30             ` Jeff Moyer
  0 siblings, 1 reply; 27+ messages in thread
From: Vivek Goyal @ 2010-07-13 19:56 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Corrado Zoccolo, axboe, Linux-Kernel

On Tue, Jul 13, 2010 at 03:38:11PM -0400, Jeff Moyer wrote:
> Corrado Zoccolo <czoccolo@gmail.com> writes:
> 
> > Can you test the attached patch, where I also added your changes to
> > make jbd(2) to perform sync writes?
> 
> I got new storage, so I have new numbers.  I only re-ran deadline and
> vanilla cfq for the fs_mark only test.  The average of 10 runs comes out
> like so:
> 
> deadline:    571.98
> vanilla cfq: 107.42
> patched cfq: 460.9
> 
> Mixed workload results with your suggested patch:
> 
> fs_mark: 15.65 files/sec
> fio: 132.5 MB/s
> 
> So, again, not looking great for the mixed workload, but the patch
> does improve the fs_mark only case.  Looking at the blktrace data shows
> that the jbd2 thread preempts the fs_mark thread at all the right
> times.  The only thing holding throughput back is the whole notion that
> we need to only dispatch from one queue (even though the storage is
> capable of serving both the reads and writes simultaneously).
> 
> I added in the patch that allows the simultaneous dispatch of both reads
> and writes, and here are the results from that run:
> 
> fs_mark: 15.975 files/sec
> fio: 132.4 MB/s
> 
> So, it looks like that didn't help.  The reason this patch doesn't come
> close to the yield patch in the mixed workload is because the yield
> patch set allows the fs_mark process to continue to issue I/O.  With
> your patch, the fs_mark process does 64KB of I/O, the jbd2 thread does
> the journal commit, and then the fio process runs again.  Given that the
> fs_mark process typically only uses a small fraction of its time slice,
> you end up with an unfair balance.

Hi Jeff,

This is a little strange. Given that both the fs_mark and jbd threads
are now on the sync-noidle tree, we should have idled on the
sync-noidle tree to provide fairness, and that should have made sure
that fs_mark/jbd get to do more IO and the slice is not lost to the
fio thread.

I am not sure what is happening in practice, though. Only you can look
at the traces more closely and see whether the timer is being armed or
not.
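
(For reference, the mechanism relied on here: queues on the sync-noidle
tree have no per-queue idle window, but cfq_should_idle() still idles
when the completing queue is the last one left on its service tree,
roughly:)

	/*
	 * Paraphrase of the tail of cfq_should_idle(), not verbatim:
	 * idle if this queue is the last one on its service tree, so
	 * the sync-noidle workload as a whole is not preempted by the
	 * sequential reader's tree.
	 */
	return service_tree->count == 1;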

Thanks
Vivek


* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-13 19:56           ` Vivek Goyal
@ 2010-07-13 20:30             ` Jeff Moyer
  2010-07-13 20:42               ` Vivek Goyal
  2010-07-13 21:00               ` Jeff Moyer
  0 siblings, 2 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-13 20:30 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Corrado Zoccolo, axboe, Linux-Kernel

Vivek Goyal <vgoyal@redhat.com> writes:

> On Tue, Jul 13, 2010 at 03:38:11PM -0400, Jeff Moyer wrote:
>> Corrado Zoccolo <czoccolo@gmail.com> writes:
>> 
>> > Can you test the attached patch, where I also added your changes to
>> > make jbd(2) to perform sync writes?
>> 
>> I got new storage, so I have new numbers.  I only re-ran deadline and
>> vanilla cfq for the fs_mark only test.  The average of 10 runs comes out
>> like so:
>> 
>> deadline:    571.98
>> vanilla cfq: 107.42
>> patched cfq: 460.9
>> 
>> Mixed workload results with your suggested patch:
>> 
>> fs_mark: 15.65 files/sec
>> fio: 132.5 MB/s
>> 
>> So, again, not looking great for the mixed workload, but the patch
>> does improve the fs_mark only case.  Looking at the blktrace data shows
>> that the jbd2 thread preempts the fs_mark thread at all the right
>> times.  The only thing holding throughput back is the whole notion that
>> we need to only dispatch from one queue (even though the storage is
>> capable of serving both the reads and writes simultaneously).
>> 
>> I added in the patch that allows the simultaneous dispatch of both reads
>> and writes, and here are the results from that run:
>> 
>> fs_mark: 15.975 files/sec
>> fio: 132.4 MB/s
>> 
>> So, it looks like that didn't help.  The reason this patch doesn't come
>> close to the yield patch in the mixed workload is because the yield
>> patch set allows the fs_mark process to continue to issue I/O.  With
>> your patch, the fs_mark process does 64KB of I/O, the jbd2 thread does
>> the journal commit, and then the fio process runs again.  Given that the
>> fs_mark process typically only uses a small fraction of its time slice,
>> you end up with an unfair balance.
>
> Hi Jeff,
>
> This is little strange. Given the fact that now both fs_mark and jbd
> threads are on sync-noidle tree, we should have idled on sync-noidle
> tree to provide fairness and that should have made sure that fs_mark/jbd
> do more IO and slice is not lost to fio thread.
>
> Not sure what is happening though in practice. Only you can look at
> traces more closely and see if timer is being armed or not. 

Vivek, if you want to look at traces, just ask.  I'd be happy to show
them to you, upload them, whatever.  I'm not sure why you think
otherwise (though I wouldn't blame you for not wanting to look at
them!).

Now, to answer your question, the jbd2 thread runs and issues a barrier,
which causes a forced dispatch of requests.  After that a new queue is
selected, and since the fs_mark thread is blocked on the journal commit,
it's always the fio process that gets to run.
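
In other words, something like this toy model of the sequence (plain
user-space C just to illustrate the point, not block-layer code; all of
the names here are made up):

/*
 * After the barrier forces a dispatch, queue selection only has the fio
 * queue to choose from: fs_mark is blocked on the journal commit and
 * cannot refill its queue in time.
 */
#include <stdio.h>

struct ioqueue {
        const char *name;
        int pending;            /* requests sitting in the scheduler */
};

static void forced_dispatch(struct ioqueue *q, int n)
{
        for (int i = 0; i < n; i++)
                q[i].pending = 0;       /* barrier drains everything queued */
}

static const char *select_next(const struct ioqueue *q, int n)
{
        for (int i = 0; i < n; i++)
                if (q[i].pending)
                        return q[i].name;
        return "none";
}

int main(void)
{
        struct ioqueue q[] = {
                { "fs_mark", 1 },       /* will block on the commit */
                { "jbd2",    1 },
                { "fio",     4 },
        };
        int n = sizeof(q) / sizeof(q[0]);

        forced_dispatch(q, n);
        q[2].pending = 4;               /* only fio has new reads ready */

        printf("next queue to run: %s\n", select_next(q, n));
        return 0;
}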

This, of course, raises the question of why the blk_yield patches didn't
run into the same problem.  Looking back at some saved traces, I don't
see WBS (write barrier sync) requests, so I wonder if barriers weren't
supported by my last storage system.

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-13 20:30             ` Jeff Moyer
@ 2010-07-13 20:42               ` Vivek Goyal
  2010-07-19 16:08                 ` Jeff Moyer
  2010-07-13 21:00               ` Jeff Moyer
  1 sibling, 1 reply; 27+ messages in thread
From: Vivek Goyal @ 2010-07-13 20:42 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Corrado Zoccolo, axboe, Linux-Kernel

On Tue, Jul 13, 2010 at 04:30:23PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@redhat.com> writes:
> 
> > On Tue, Jul 13, 2010 at 03:38:11PM -0400, Jeff Moyer wrote:
> >> Corrado Zoccolo <czoccolo@gmail.com> writes:
> >> 
> >> > Can you test the attached patch, where I also added your changes to
> >> > make jbd(2) to perform sync writes?
> >> 
> >> I got new storage, so I have new numbers.  I only re-ran deadline and
> >> vanilla cfq for the fs_mark only test.  The average of 10 runs comes out
> >> like so:
> >> 
> >> deadline:    571.98
> >> vanilla cfq: 107.42
> >> patched cfq: 460.9
> >> 
> >> Mixed workload results with your suggested patch:
> >> 
> >> fs_mark: 15.65 files/sec
> >> fio: 132.5 MB/s
> >> 
> >> So, again, not looking great for the mixed workload, but the patch
> >> does improve the fs_mark only case.  Looking at the blktrace data shows
> >> that the jbd2 thread preempts the fs_mark thread at all the right
> >> times.  The only thing holding throughput back is the whole notion that
> >> we need to only dispatch from one queue (even though the storage is
> >> capable of serving both the reads and writes simultaneously).
> >> 
> >> I added in the patch that allows the simultaneous dispatch of both reads
> >> and writes, and here are the results from that run:
> >> 
> >> fs_mark: 15.975 files/sec
> >> fio: 132.4 MB/s
> >> 
> >> So, it looks like that didn't help.  The reason this patch doesn't come
> >> close to the yield patch in the mixed workload is because the yield
> >> patch set allows the fs_mark process to continue to issue I/O.  With
> >> your patch, the fs_mark process does 64KB of I/O, the jbd2 thread does
> >> the journal commit, and then the fio process runs again.  Given that the
> >> fs_mark process typically only uses a small fraction of its time slice,
> >> you end up with an unfair balance.
> >
> > Hi Jeff,
> >
> > This is little strange. Given the fact that now both fs_mark and jbd
> > threads are on sync-noidle tree, we should have idled on sync-noidle
> > tree to provide fairness and that should have made sure that fs_mark/jbd
> > do more IO and slice is not lost to fio thread.
> >
> > Not sure what is happening though in practice. Only you can look at
> > traces more closely and see if timer is being armed or not. 
> 
> Vivek, if you want to look at traces, just ask.  I'd be happy to show
> them to you, upload them, whatever.  I'm not sure why you think
> otherwise (though I wouldn't blame you for not wanting to look at
> them!).

I don't mind looking at the traces. Do let me know where I can access them.

> 
> Now, to answer your question, the jbd2 thread runs and issues a barrier,
> which causes a forced dispatch of requests.  After that a new queue is
> selected, and since the fs_mark thread is blocked on the journal commit,
> it's always the fio process that gets to run.

Ok, that explains it.  So somehow, after the barrier, fio always wins,
as it issues its next read request before fs_mark is able to issue its
next set of writes.

> 
> This, of course, raises the question of why the blk_yield patches didn't
> run into the same problem.  Looking back at some saved traces, I don't
> see WBS (write barrier sync) requests, so I wonder if barriers weren't
> supported by my last storage system.

I think that the blk_yield patches will also run into the same issue if
barriers are enabled.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-13 20:30             ` Jeff Moyer
  2010-07-13 20:42               ` Vivek Goyal
@ 2010-07-13 21:00               ` Jeff Moyer
  1 sibling, 0 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-13 21:00 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Corrado Zoccolo, axboe, Linux-Kernel

Jeff Moyer <jmoyer@redhat.com> writes:

> This, of course, raises the question of why the blk_yield patches didn't
> run into the same problem.  Looking back at some saved traces, I don't
> see WBS (write barrier sync) requests, so I wonder if barriers weren't
> supported by my last storage system.

So, I tested Corrado's approach with -o nobarrier, and here are the
results:

fs_mark: 363.291 files/sec
fio: 38.5 MB/s

I don't have time to analyze the data right now, and it's 600MB worth of
binary output.  If you want, I can upload a representative sample
somewhere, let me know.

Anyway, I'll post an analysis tomorrow.

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-13 20:42               ` Vivek Goyal
@ 2010-07-19 16:08                 ` Jeff Moyer
  2010-07-19 20:31                   ` Vivek Goyal
  2010-07-20 14:11                   ` Christoph Hellwig
  0 siblings, 2 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-19 16:08 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Corrado Zoccolo, axboe, Linux-Kernel

Vivek Goyal <vgoyal@redhat.com> writes:

> On Tue, Jul 13, 2010 at 04:30:23PM -0400, Jeff Moyer wrote:
>> Vivek Goyal <vgoyal@redhat.com> writes:

> I don't mind looking at traces. Do let me know where can I access those.

Forwarded privately.

>> Now, to answer your question, the jbd2 thread runs and issues a barrier,
>> which causes a forced dispatch of requests.  After that a new queue is
>> selected, and since the fs_mark thread is blocked on the journal commit,
>> it's always the fio process that gets to run.
>
> Ok, that explains it.  So somehow after the barrier, fio always wins
> as issues next read request before the fs_mark is able to issue the
> next set of writes.
>
>> 
>> This, of course, raises the question of why the blk_yield patches didn't
>> run into the same problem.  Looking back at some saved traces, I don't
>> see WBS (write barrier sync) requests, so I wonder if barriers weren't
>> supported by my last storage system.
>
> I think that blk_yield patches will also run into the same issue if
> barriers are enabled.

Agreed.

Here are the results again with barriers disabled for Corrado's patch:

fs_mark: 348.2 files/sec
fio: 53324.6 KB/s

Remember that deadline was seeing 450 files/sec and 78 MB/s.  So, in
this case, the buffered reader appears to be starved.  Looking into this
further, I found that the journal thread is running with I/O priority 0,
while the fio and fs_mark processes are running at the default (4).
Because the jbd thread has a higher I/O priority, its requests are
always closer to the front of the sort list, and thus the sync-noidle
workload is chosen more often than the sync workload.  This essentially
results in an elevated I/O priority for the fs_mark process as well.
While troubling, that problem is not directly related to the problem
we're looking at.
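
A crude model of that effect (user-space only; the real cfq key
calculation is different and the numbers below are arbitrary, so treat
this purely as an illustration):

/*
 * Each queue gets a sort key that includes a priority-based offset, and
 * the service tree whose front queue has the smallest key is served
 * next.  A prio-0 jbd queue therefore keeps pulling the sync-noidle
 * tree to the front.
 */
#include <stdio.h>

static unsigned long sort_key(unsigned long now, int ioprio)
{
        /* illustrative only: lower ioprio value => smaller offset */
        return now + (unsigned long)ioprio * 100;
}

int main(void)
{
        unsigned long now = 10000;
        unsigned long noidle_front = sort_key(now, 0);  /* jbd, prio 0 */
        unsigned long sync_front   = sort_key(now, 4);  /* fio, prio 4 */

        printf("serve %s workload next\n",
               noidle_front <= sync_front ? "sync-noidle" : "sync");
        return 0;
}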

So, I'm still in favor of Corrado's approach.  Are there any remaining
dissenting opinions on this?

Cheers,
Jeff

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-19 16:08                 ` Jeff Moyer
@ 2010-07-19 20:31                   ` Vivek Goyal
  2010-07-20 14:02                     ` Jeff Moyer
  2010-07-20 14:11                   ` Christoph Hellwig
  1 sibling, 1 reply; 27+ messages in thread
From: Vivek Goyal @ 2010-07-19 20:31 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Corrado Zoccolo, axboe, Linux-Kernel

On Mon, Jul 19, 2010 at 12:08:23PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@redhat.com> writes:
> 
> > On Tue, Jul 13, 2010 at 04:30:23PM -0400, Jeff Moyer wrote:
> >> Vivek Goyal <vgoyal@redhat.com> writes:
> 
> > I don't mind looking at traces. Do let me know where can I access those.
> 
> Forwarded privately.
> 
> >> Now, to answer your question, the jbd2 thread runs and issues a barrier,
> >> which causes a forced dispatch of requests.  After that a new queue is
> >> selected, and since the fs_mark thread is blocked on the journal commit,
> >> it's always the fio process that gets to run.
> >
> > Ok, that explains it.  So somehow after the barrier, fio always wins
> > as issues next read request before the fs_mark is able to issue the
> > next set of writes.
> >
> >> 
> >> This, of course, raises the question of why the blk_yield patches didn't
> >> run into the same problem.  Looking back at some saved traces, I don't
> >> see WBS (write barrier sync) requests, so I wonder if barriers weren't
> >> supported by my last storage system.
> >
> > I think that blk_yield patches will also run into the same issue if
> > barriers are enabled.
> 
> Agreed.
> 
> Here are the results again with barriers disabled for Corrado's patch:
> 
> fs_mark: 348.2 files/sec
> fio: 53324.6 KB/s
> 
> Remember that deadline was seeing 450 files/sec and 78 MB/s.  So, in
> this case, the buffered reader appears to be starved.  Looking into this
> further, I found that the journal thread is running with I/O priority 0,
> while the fio and fs_mark processes are running at the default (4).
> Because the jbd thread has a higher I/O priority, its requests are
> always closer to the front of the sort list, and thus the sync-noidle
> workload is chosen more often than the sync workload.  This essentially
> results in an elevated I/O priority for the fs_mark process as well.
> While troubling, that problem is not directly related to the problem
> we're looking at.
> 
> So, I'm still in favor of Corrado's approach.  Are there any remaining
> dissenting opinions on this?

Nope. I am fine with moving all WRITE_SYNC requests with RQ_NOIDLE to the
sync-noidle tree and also marking jbd writes as WRITE_SYNC. By bringing
dependent threads onto a single service tree, we don't have to worry about
slice yielding.
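
Conceptually, the classification being agreed on here is something like
the following (rough user-space sketch, not kernel code; the flag and
tree names are only illustrative, and it assumes, as above, that jbd
commits issued as WRITE_SYNC carry the same no-idle hint):

/*
 * Any synchronous write carrying the no-idle hint is served from the
 * sync-noidle tree, so the journal thread and the thread doing
 * fsync()/O_SYNC writes end up on the same service tree and no slice
 * yielding between them is needed.
 */
#include <stdbool.h>
#include <stdio.h>

enum tree { SYNC_TREE, SYNC_NOIDLE_TREE, ASYNC_TREE };

struct req {
        bool sync;
        bool write;
        bool noidle;    /* RQ_NOIDLE-style hint */
};

static enum tree classify(const struct req *rq)
{
        if (!rq->sync)
                return ASYNC_TREE;
        if (rq->write && rq->noidle)
                return SYNC_NOIDLE_TREE;
        return SYNC_TREE;
}

int main(void)
{
        struct req jbd_commit  = { .sync = true, .write = true,  .noidle = true  };
        struct req osync_write = { .sync = true, .write = true,  .noidle = true  };
        struct req seq_read    = { .sync = true, .write = false, .noidle = false };

        printf("jbd commit   -> %s\n",
               classify(&jbd_commit) == SYNC_NOIDLE_TREE ? "sync-noidle" : "sync");
        printf("O_SYNC write -> %s\n",
               classify(&osync_write) == SYNC_NOIDLE_TREE ? "sync-noidle" : "sync");
        printf("seq. read    -> %s\n",
               classify(&seq_read) == SYNC_NOIDLE_TREE ? "sync-noidle" : "sync");
        return 0;
}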

Acked-by: Vivek Goyal <vgoyal@redhat.com>

Thanks
Vivek

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-19 20:31                   ` Vivek Goyal
@ 2010-07-20 14:02                     ` Jeff Moyer
  0 siblings, 0 replies; 27+ messages in thread
From: Jeff Moyer @ 2010-07-20 14:02 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Vivek Goyal, axboe, Linux-Kernel

Vivek Goyal <vgoyal@redhat.com> writes:

> On Mon, Jul 19, 2010 at 12:08:23PM -0400, Jeff Moyer wrote:
>> So, I'm still in favor of Corrado's approach.  Are there any remaining
>> dissenting opinions on this?
>
> Nope. I am fine with moving all WRITE_SYNC with RQ_NOIDLE to sync-noidle
> tree and also marking jbd writes as WRITE_SYNC. By bringing dependent
> threads on single service tree, we don't have to worry about slice
> yielding.
>
> Acked-by: Vivek Goyal <vgoyal@redhat.com>

Corrado, would you mind reposting the patches and adding:

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Tested-by: Jeff Moyer <jmoyer@redhat.com>

Thanks!
Jeff

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-19 16:08                 ` Jeff Moyer
  2010-07-19 20:31                   ` Vivek Goyal
@ 2010-07-20 14:11                   ` Christoph Hellwig
  2010-07-20 14:26                     ` Vivek Goyal
  1 sibling, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2010-07-20 14:11 UTC (permalink / raw)
  To: Jeff Moyer; +Cc: Vivek Goyal, Corrado Zoccolo, axboe, Linux-Kernel

Didn't you guys have a previous iteration of the fixes that gets
rid of REQ_NOIDLE by improving the heuristics inside cfq?  That
would be much, much preferred from the filesystem point of view.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-20 14:11                   ` Christoph Hellwig
@ 2010-07-20 14:26                     ` Vivek Goyal
  2010-07-20 19:10                       ` Corrado Zoccolo
  0 siblings, 1 reply; 27+ messages in thread
From: Vivek Goyal @ 2010-07-20 14:26 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jeff Moyer, Corrado Zoccolo, axboe, Linux-Kernel

On Tue, Jul 20, 2010 at 10:11:03AM -0400, Christoph Hellwig wrote:
> Didn't you guys have a previous iteration of the fixes that gets
> rid of REQ_NOIDLE by improving the heuristics inside cfq?  That
> would be much, much preferred from the filesystem point of view.

Actually, in this patch I was thinking we can probably get rid of the
RQ_NOIDLE flag and just check for WRITE_SYNC. Any WRITE_SYNC queue
gets served on the sync-noidle tree. I am wondering whether we will
also face the jbd thread issue with direct writes. If yes, then not
special-casing direct IO writes and treating them the same as O_SYNC
writes will make sense.

I really wish we had blktraces of some standard workloads stored
somewhere, which we could simply replay using "btreplay" to come to
some kind of conclusion whenever we are faced with such decisions.

Thanks
Vivek


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-20 14:26                     ` Vivek Goyal
@ 2010-07-20 19:10                       ` Corrado Zoccolo
  2010-07-20 19:32                         ` Vivek Goyal
  0 siblings, 1 reply; 27+ messages in thread
From: Corrado Zoccolo @ 2010-07-20 19:10 UTC (permalink / raw)
  To: Vivek Goyal; +Cc: Christoph Hellwig, Jeff Moyer, axboe, Linux-Kernel

On Tue, Jul 20, 2010 at 4:26 PM, Vivek Goyal <vgoyal@redhat.com> wrote:
> On Tue, Jul 20, 2010 at 10:11:03AM -0400, Christoph Hellwig wrote:
>> Didn't you guys have a previous iteration of the fixes that gets
>> rid of REQ_NOIDLE by improving the heuristics inside cfq?  That
>> would be much, much preferred from the filesystem point of view.
I think the previous iteration required more complex heuristics, while
this one uses existing ones to handle one more class of problems.
I understand that you still see the complexity from the fs side, but
Vivek's proposal may also help there. It only needs to be tested thoroughly.

>
> Actually in this patch, I was thinking we can probably get rid of
> RQ_NOIDLE flag and just check for WRITE_SYNC. Any WRITE_SYNC queue
> gets served on sync-noidle tree. I am wondering will we not face jbd
> thread issues with direct writes also? If yes, then not special casing
> direct IO writes and treat them same as O_SYNC writes will make sense.

Probably it is better to submit this first, since it is already
tested, and then have a different patch that finishes the work.
This will help when bisecting for possible regressions, since I'm not
sure why the other writes are not already marked with RQ_NOIDLE (maybe
it was introduced for some good reason to distinguish the two sets,
and we won't know unless we find the workload where it helped).
I'll resend the current patch with Jeff's Reviewed-by and Tested-by tags.

Corrado

>
> I really wished that we had some blktrace of some standard workloads
> stored somewhere which we could simply replay using "btreplay" and come
> to some kind of conclusion whenever we are faced with taking such
> decisions.
>
> Thanks
> Vivek
>
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@gmail.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
  2010-07-20 19:10                       ` Corrado Zoccolo
@ 2010-07-20 19:32                         ` Vivek Goyal
  0 siblings, 0 replies; 27+ messages in thread
From: Vivek Goyal @ 2010-07-20 19:32 UTC (permalink / raw)
  To: Corrado Zoccolo; +Cc: Christoph Hellwig, Jeff Moyer, axboe, Linux-Kernel

On Tue, Jul 20, 2010 at 09:10:56PM +0200, Corrado Zoccolo wrote:
> On Tue, Jul 20, 2010 at 4:26 PM, Vivek Goyal <vgoyal@redhat.com> wrote:
> > On Tue, Jul 20, 2010 at 10:11:03AM -0400, Christoph Hellwig wrote:
> >> Didn't you guys have a previous iteration of the fixes that gets
> >> rid of REQ_NOIDLE by improving the heuristics inside cfq?  That
> >> would be much, much preferred from the filesystem point of view.
> I think the previous iteration required more complex heuristics, while
> this one uses existing ones to handle one more class of problems.
> I understand that you still see the complexity from the fs side, but
> Vivek's proposal may help also there. It only needs to be tested thoroughly.
> 
> >
> > Actually in this patch, I was thinking we can probably get rid of
> > RQ_NOIDLE flag and just check for WRITE_SYNC. Any WRITE_SYNC queue
> > gets served on sync-noidle tree. I am wondering will we not face jbd
> > thread issues with direct writes also? If yes, then not special casing
> > direct IO writes and treat them same as O_SYNC writes will make sense.
> 
> Probably it is better to submit this first, since it is already
> tested, and then have a different patch that can finish the work
> This will help when bisecting for possible regressions, since I'm not
> sure why the other writes are not already marked with RQ_NOIDLE (maybe
> it was introduced for some good reason to distinguish the two sets,
> and we won't know unless we find the workload where it helped).
> I'll resend the current patch with Jeff's reviewed and tested tags.
> 

I am fine with pushing this patch as it is first, and then, once we have
an answer to the question of whether the direct IO path and the
O_SYNC/fsync path need the same or different treatment in the IO
scheduler, we can fix the RQ_NOIDLE flag issue as well.

Vivek

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2010-07-20 19:32 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-07-07 15:22 [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling Corrado Zoccolo
2010-07-07 15:56 ` Corrado Zoccolo
2010-07-07 17:03 ` Jeff Moyer
2010-07-07 17:39   ` Corrado Zoccolo
2010-07-07 20:06     ` Jeff Moyer
2010-07-08 14:38       ` Corrado Zoccolo
2010-07-09 10:33       ` Corrado Zoccolo
2010-07-09 13:23         ` Vivek Goyal
2010-07-09 14:07         ` Jeff Moyer
2010-07-09 19:45           ` Corrado Zoccolo
2010-07-09 20:48             ` Jeff Moyer
2010-07-13 19:38         ` Jeff Moyer
2010-07-13 19:56           ` Vivek Goyal
2010-07-13 20:30             ` Jeff Moyer
2010-07-13 20:42               ` Vivek Goyal
2010-07-19 16:08                 ` Jeff Moyer
2010-07-19 20:31                   ` Vivek Goyal
2010-07-20 14:02                     ` Jeff Moyer
2010-07-20 14:11                   ` Christoph Hellwig
2010-07-20 14:26                     ` Vivek Goyal
2010-07-20 19:10                       ` Corrado Zoccolo
2010-07-20 19:32                         ` Vivek Goyal
2010-07-13 21:00               ` Jeff Moyer
2010-07-07 17:50   ` Vivek Goyal
2010-07-08 14:35   ` Vivek Goyal
2010-07-08 14:38     ` Jeff Moyer
2010-07-08 14:45     ` Corrado Zoccolo
