* [RFC PATCH] writeback: move list_lock down into the for loop
@ 2016-02-26 16:46 Yang Shi
2016-02-29 15:06 ` Michal Hocko
From: Yang Shi @ 2016-02-26 16:46 UTC (permalink / raw)
To: tj, jack, axboe, fengguang.wu, akpm
Cc: linux-kernel, linux-mm, linaro-kernel, yang.shi
The list_lock was moved outside the for loop by commit
e8dfc30582995ae12454cda517b17d6294175b07 ("writeback: elevate queue_io()
into wb_writeback()"); however, that commit's log says "No behavior change",
so it sounds safe to acquire the list_lock inside the for loop, as the code
did before.
Leave the tracepoints outside the critical section, since tracepoints
already disable preemption.
Signed-off-by: Yang Shi <yang.shi@linaro.org>
---
Tested with LTP on an 8-core Cortex-A57 machine.
fs/fs-writeback.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1f76d89..9b7b5f6 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1623,7 +1623,6 @@ static long wb_writeback(struct bdi_writeback *wb,
work->older_than_this = &oldest_jif;
blk_start_plug(&plug);
- spin_lock(&wb->list_lock);
for (;;) {
/*
* Stop writeback when nr_pages has been consumed
@@ -1661,15 +1660,19 @@ static long wb_writeback(struct bdi_writeback *wb,
oldest_jif = jiffies;
trace_writeback_start(wb, work);
+
+ spin_lock(&wb->list_lock);
if (list_empty(&wb->b_io))
queue_io(wb, work);
if (work->sb)
progress = writeback_sb_inodes(work->sb, wb, work);
else
progress = __writeback_inodes_wb(wb, work);
- trace_writeback_written(wb, work);
wb_update_bandwidth(wb, wb_start);
+ spin_unlock(&wb->list_lock);
+
+ trace_writeback_written(wb, work);
/*
* Did we write something? Try for more
@@ -1693,15 +1696,14 @@ static long wb_writeback(struct bdi_writeback *wb,
*/
if (!list_empty(&wb->b_more_io)) {
trace_writeback_wait(wb, work);
+ spin_lock(&wb->list_lock);
inode = wb_inode(wb->b_more_io.prev);
- spin_lock(&inode->i_lock);
spin_unlock(&wb->list_lock);
+ spin_lock(&inode->i_lock);
/* This function drops i_lock... */
inode_sleep_on_writeback(inode);
- spin_lock(&wb->list_lock);
}
}
- spin_unlock(&wb->list_lock);
blk_finish_plug(&plug);
return nr_pages - work->nr_pages;
--
2.0.2
* Re: [RFC PATCH] writeback: move list_lock down into the for loop
2016-02-26 16:46 [RFC PATCH] writeback: move list_lock down into the for loop Yang Shi
@ 2016-02-29 15:06 ` Michal Hocko
2016-02-29 17:27 ` Shi, Yang
From: Michal Hocko @ 2016-02-29 15:06 UTC (permalink / raw)
To: Yang Shi
Cc: tj, jack, axboe, fengguang.wu, akpm, linux-kernel, linux-mm,
linaro-kernel
On Fri 26-02-16 08:46:25, Yang Shi wrote:
> The list_lock was moved outside the for loop by commit
> e8dfc30582995ae12454cda517b17d6294175b07 ("writeback: elevate queue_io()
> into wb_writeback()"); however, that commit's log says "No behavior change",
> so it sounds safe to acquire the list_lock inside the for loop, as the code
> did before.
> Leave the tracepoints outside the critical section, since tracepoints
> already disable preemption.
The patch says what it does but completely misses the why part.
>
> Signed-off-by: Yang Shi <yang.shi@linaro.org>
> ---
> Tested with LTP on an 8-core Cortex-A57 machine.
>
> fs/fs-writeback.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 1f76d89..9b7b5f6 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -1623,7 +1623,6 @@ static long wb_writeback(struct bdi_writeback *wb,
> work->older_than_this = &oldest_jif;
>
> blk_start_plug(&plug);
> - spin_lock(&wb->list_lock);
> for (;;) {
> /*
> * Stop writeback when nr_pages has been consumed
> @@ -1661,15 +1660,19 @@ static long wb_writeback(struct bdi_writeback *wb,
> oldest_jif = jiffies;
>
> trace_writeback_start(wb, work);
> +
> + spin_lock(&wb->list_lock);
> if (list_empty(&wb->b_io))
> queue_io(wb, work);
> if (work->sb)
> progress = writeback_sb_inodes(work->sb, wb, work);
> else
> progress = __writeback_inodes_wb(wb, work);
> - trace_writeback_written(wb, work);
>
> wb_update_bandwidth(wb, wb_start);
> + spin_unlock(&wb->list_lock);
> +
> + trace_writeback_written(wb, work);
>
> /*
> * Did we write something? Try for more
> @@ -1693,15 +1696,14 @@ static long wb_writeback(struct bdi_writeback *wb,
> */
> if (!list_empty(&wb->b_more_io)) {
> trace_writeback_wait(wb, work);
> + spin_lock(&wb->list_lock);
> inode = wb_inode(wb->b_more_io.prev);
> - spin_lock(&inode->i_lock);
> spin_unlock(&wb->list_lock);
> + spin_lock(&inode->i_lock);
> /* This function drops i_lock... */
> inode_sleep_on_writeback(inode);
> - spin_lock(&wb->list_lock);
> }
> }
> - spin_unlock(&wb->list_lock);
> blk_finish_plug(&plug);
>
> return nr_pages - work->nr_pages;
> --
> 2.0.2
>
--
Michal Hocko
SUSE Labs
* Re: [RFC PATCH] writeback: move list_lock down into the for loop
2016-02-29 15:06 ` Michal Hocko
@ 2016-02-29 17:27 ` Shi, Yang
2016-02-29 17:33 ` Michal Hocko
From: Shi, Yang @ 2016-02-29 17:27 UTC (permalink / raw)
To: Michal Hocko
Cc: tj, jack, axboe, fengguang.wu, akpm, linux-kernel, linux-mm,
linaro-kernel
On 2/29/2016 7:06 AM, Michal Hocko wrote:
> On Fri 26-02-16 08:46:25, Yang Shi wrote:
>> The list_lock was moved outside the for loop by commit
>> e8dfc30582995ae12454cda517b17d6294175b07 ("writeback: elevate queue_io()
>> into wb_writeback()"); however, that commit's log says "No behavior change",
>> so it sounds safe to acquire the list_lock inside the for loop, as the code
>> did before.
>> Leave the tracepoints outside the critical section, since tracepoints
>> already disable preemption.
>
> The patch says what it does but completely misses the why part.
I was just wondering whether the finer-grained lock might yield slightly
better performance, i.e. more preemption opportunities and lower latency.
Thanks,
Yang
>
>>
>> Signed-off-by: Yang Shi <yang.shi@linaro.org>
>> ---
>> Tested with LTP on an 8-core Cortex-A57 machine.
>>
>> fs/fs-writeback.c | 12 +++++++-----
>> 1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
>> index 1f76d89..9b7b5f6 100644
>> --- a/fs/fs-writeback.c
>> +++ b/fs/fs-writeback.c
>> @@ -1623,7 +1623,6 @@ static long wb_writeback(struct bdi_writeback *wb,
>> work->older_than_this = &oldest_jif;
>>
>> blk_start_plug(&plug);
>> - spin_lock(&wb->list_lock);
>> for (;;) {
>> /*
>> * Stop writeback when nr_pages has been consumed
>> @@ -1661,15 +1660,19 @@ static long wb_writeback(struct bdi_writeback *wb,
>> oldest_jif = jiffies;
>>
>> trace_writeback_start(wb, work);
>> +
>> + spin_lock(&wb->list_lock);
>> if (list_empty(&wb->b_io))
>> queue_io(wb, work);
>> if (work->sb)
>> progress = writeback_sb_inodes(work->sb, wb, work);
>> else
>> progress = __writeback_inodes_wb(wb, work);
>> - trace_writeback_written(wb, work);
>>
>> wb_update_bandwidth(wb, wb_start);
>> + spin_unlock(&wb->list_lock);
>> +
>> + trace_writeback_written(wb, work);
>>
>> /*
>> * Did we write something? Try for more
>> @@ -1693,15 +1696,14 @@ static long wb_writeback(struct bdi_writeback *wb,
>> */
>> if (!list_empty(&wb->b_more_io)) {
>> trace_writeback_wait(wb, work);
>> + spin_lock(&wb->list_lock);
>> inode = wb_inode(wb->b_more_io.prev);
>> - spin_lock(&inode->i_lock);
>> spin_unlock(&wb->list_lock);
>> + spin_lock(&inode->i_lock);
>> /* This function drops i_lock... */
>> inode_sleep_on_writeback(inode);
>> - spin_lock(&wb->list_lock);
>> }
>> }
>> - spin_unlock(&wb->list_lock);
>> blk_finish_plug(&plug);
>>
>> return nr_pages - work->nr_pages;
>> --
>> 2.0.2
>>
>
* Re: [RFC PATCH] writeback: move list_lock down into the for loop
2016-02-29 17:27 ` Shi, Yang
@ 2016-02-29 17:33 ` Michal Hocko
From: Michal Hocko @ 2016-02-29 17:33 UTC (permalink / raw)
To: Shi, Yang
Cc: tj, jack, axboe, fengguang.wu, akpm, linux-kernel, linux-mm,
linaro-kernel
On Mon 29-02-16 09:27:44, Shi, Yang wrote:
> On 2/29/2016 7:06 AM, Michal Hocko wrote:
> >On Fri 26-02-16 08:46:25, Yang Shi wrote:
> >>The list_lock was moved outside the for loop by commit
> >>e8dfc30582995ae12454cda517b17d6294175b07 ("writeback: elevate queue_io()
> >>into wb_writeback()"); however, that commit's log says "No behavior change",
> >>so it sounds safe to acquire the list_lock inside the for loop, as the code
> >>did before.
> >>Leave the tracepoints outside the critical section, since tracepoints
> >>already disable preemption.
> >
> >The patch says what it does but completely misses the why part.
>
> I was just wondering whether the finer-grained lock might yield slightly
> better performance, i.e. more preemption opportunities and lower latency.
If this is supposed to be a performance enhancement, then some numbers
would definitely make it easier to get in, or at least some arguments to
back your theory. Basing your argument on a 4+ year old commit doesn't
really seem sound... Just to make it clear, I am not opposing the patch;
I just stumbled over it, and the changelog was simply too terrible not to
respond to.
--
Michal Hocko
SUSE Labs
Thread overview: 4+ messages
2016-02-26 16:46 [RFC PATCH] writeback: move list_lock down into the for loop Yang Shi
2016-02-29 15:06 ` Michal Hocko
2016-02-29 17:27 ` Shi, Yang
2016-02-29 17:33 ` Michal Hocko