* [PATCH v2] RAID1: Avoid unnecessary loop to decrease conf->nr_queued in raid1d()
From: Coly Li @ 2016-11-16 14:19 UTC (permalink / raw)
To: linux-raid; +Cc: Coly Li, Shaohua Li, Neil Brown
Commit ccfc7bf1f0 correctly points out that bios on conf->bio_end_io_list
also contribute to the conf->nr_queued counter, but the fix can be
improved. The original fix replaces list_add() with a while() loop that
iterates over every bio on conf->bio_end_io_list and decreases
conf->nr_queued. A few lines further down, however, there is already
another while() loop that iterates over every node of the tmp list,
which holds the original content of conf->bio_end_io_list. We can
decrease conf->nr_queued there instead, and so avoid the earlier extra
while() loop, which consumes extra CPU cycles and holds a spin lock for
a longer time when conf->bio_end_io_list is not tiny.
This patch moves the conf->nr_queued decrement into the
while (!list_empty(&tmp)) loop; this avoids an extra loop execution and
avoids holding conf->device_lock for too long.
As suggested by Neil, this version of the patch uses the
list_splice_init() interface to operate on conf->bio_end_io_list.
Signed-off-by: Coly Li <colyli@suse.de>
Cc: Shaohua Li <shli@fb.com>
Cc: Neil Brown <neilb@suse.de>
---
drivers/md/raid1.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
Index: linux-raid1/drivers/md/raid1.c
===================================================================
--- linux-raid1.orig/drivers/md/raid1.c
+++ linux-raid1/drivers/md/raid1.c
@@ -2387,17 +2387,17 @@ static void raid1d(struct md_thread *thr
!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
LIST_HEAD(tmp);
spin_lock_irqsave(&conf->device_lock, flags);
- if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
- while (!list_empty(&conf->bio_end_io_list)) {
- list_move(conf->bio_end_io_list.prev, &tmp);
- conf->nr_queued--;
- }
- }
+ if (!test_bit(MD_CHANGE_PENDING, &mddev->flags))
+ list_splice_init(&conf->bio_end_io_list, &tmp);
spin_unlock_irqrestore(&conf->device_lock, flags);
+
while (!list_empty(&tmp)) {
r1_bio = list_first_entry(&tmp, struct r1bio,
retry_list);
list_del(&r1_bio->retry_list);
+ spin_lock_irqsave(&conf->device_lock, flags);
+ conf->nr_queued--;
+ spin_unlock_irqrestore(&conf->device_lock, flags);
if (mddev->degraded)
set_bit(R1BIO_Degraded, &r1_bio->state);
if (test_bit(R1BIO_WriteError, &r1_bio->state))
* Re: [PATCH v2] RAID1: Avoid unnecessary loop to decrease conf->nr_queued in raid1d()
From: Coly Li @ 2016-11-16 14:36 UTC (permalink / raw)
To: linux-raid; +Cc: Shaohua Li, Neil Brown
On 2016/11/16 10:19 PM, Coly Li wrote:
[snip]
> ---
> drivers/md/raid1.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> Index: linux-raid1/drivers/md/raid1.c
> ===================================================================
> --- linux-raid1.orig/drivers/md/raid1.c
> +++ linux-raid1/drivers/md/raid1.c
> @@ -2387,17 +2387,17 @@ static void raid1d(struct md_thread *thr
[snip]
> while (!list_empty(&tmp)) {
> r1_bio = list_first_entry(&tmp, struct r1bio,
> retry_list);
> list_del(&r1_bio->retry_list);
> + spin_lock_irqsave(&conf->device_lock, flags);
> + conf->nr_queued--;
> + spin_unlock_irqrestore(&conf->device_lock, flags);
[snip]
I am now working on another two patches, for a simpler I/O barrier and
lockless I/O submission in raid1, where conf->nr_queued will become an
atomic_t. The spin lock expense will then no longer exist. Just FYI.
Coly
* Re: [PATCH v2] RAID1: Avoid unnecessary loop to decrease conf->nr_queued in raid1d()
From: Shaohua Li @ 2016-11-16 20:05 UTC (permalink / raw)
To: Coly Li; +Cc: linux-raid, Shaohua Li, Neil Brown
On Wed, Nov 16, 2016 at 10:36:32PM +0800, Coly Li wrote:
> On 2016/11/16 10:19 PM, Coly Li wrote:
> [snip]
> > ---
> > drivers/md/raid1.c | 9 +++++----
> > 1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > Index: linux-raid1/drivers/md/raid1.c
> > ===================================================================
> > --- linux-raid1.orig/drivers/md/raid1.c
> > +++ linux-raid1/drivers/md/raid1.c
> > @@ -2387,17 +2387,17 @@ static void raid1d(struct md_thread *thr
> [snip]
> > while (!list_empty(&tmp)) {
> > r1_bio = list_first_entry(&tmp, struct r1bio,
> > retry_list);
> > list_del(&r1_bio->retry_list);
> > + spin_lock_irqsave(&conf->device_lock, flags);
> > + conf->nr_queued--;
> > + spin_unlock_irqrestore(&conf->device_lock, flags);
> [snip]
>
> I am now working on another two patches, for a simpler I/O barrier and
> lockless I/O submission in raid1, where conf->nr_queued will become an
> atomic_t. The spin lock expense will then no longer exist. Just FYI.
I'd like to hold this patch until you post the simpler I/O barrier, as
the patch itself currently doesn't make the process faster (lock/unlock
is much heavier than the loop).
Thanks,
Shaohua
* Re: [PATCH v2] RAID1: Avoid unnecessary loop to decrease conf->nr_queued in raid1d()
From: Coly Li @ 2016-11-21 10:16 UTC (permalink / raw)
To: Shaohua Li; +Cc: linux-raid, Neil Brown
On 2016/11/17 4:05 AM, Shaohua Li wrote:
> On Wed, Nov 16, 2016 at 10:36:32PM +0800, Coly Li wrote:
>> On 2016/11/16 10:19 PM, Coly Li wrote:
>> [snip]
>>> ---
>>> drivers/md/raid1.c | 9 +++++----
>>> 1 file changed, 5 insertions(+), 4 deletions(-)
>>>
>>> Index: linux-raid1/drivers/md/raid1.c
>>> ===================================================================
>>> --- linux-raid1.orig/drivers/md/raid1.c
>>> +++ linux-raid1/drivers/md/raid1.c
>>> @@ -2387,17 +2387,17 @@ static void raid1d(struct md_thread *thr
>> [snip]
>>> while (!list_empty(&tmp)) {
>>> r1_bio = list_first_entry(&tmp, struct r1bio,
>>> retry_list);
>>> list_del(&r1_bio->retry_list);
>>> + spin_lock_irqsave(&conf->device_lock, flags);
>>> + conf->nr_queued--;
>>> + spin_unlock_irqrestore(&conf->device_lock, flags);
>> [snip]
>>
>> I am now working on another two patches, for a simpler I/O barrier and
>> lockless I/O submission in raid1, where conf->nr_queued will become an
>> atomic_t. The spin lock expense will then no longer exist. Just FYI.
>
> I'd like to hold this patch until you post the simpler I/O barrier, as
> the patch itself currently doesn't make the process faster (lock/unlock
> is much heavier than the loop).
Yeah, I will combine this patch with the new barrier patch and send out
an RFC patch set with two patches: one is the new barrier patch, the
other is the lockless wait_barrier() patch.
Thanks.
Coly