From: Mel Gorman <mgorman@suse.de>
To: Dave Chinner <david@fromorbit.com>
Cc: Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>, XFS <xfs@oss.sgi.com>,
	Christoph Hellwig <hch@infradead.org>,
	Johannes Weiner <jweiner@redhat.com>,
	Wu Fengguang <fengguang.wu@intel.com>, Jan Kara <jack@suse.cz>,
	Rik van Riel <riel@redhat.com>,
	Minchan Kim <minchan.kim@gmail.com>
Subject: Re: [RFC PATCH 0/5] Reduce filesystem writeback from page reclaim (again)
Date: Thu, 14 Jul 2011 08:37:42 +0100	[thread overview]
Message-ID: <20110714073742.GS7529@suse.de> (raw)
In-Reply-To: <20110714003340.GZ23038@dastard>

On Thu, Jul 14, 2011 at 10:33:40AM +1000, Dave Chinner wrote:
> On Wed, Jul 13, 2011 at 03:31:22PM +0100, Mel Gorman wrote:
> > (Revisiting this from a year ago and following on from the thread
> > "Re: [PATCH 03/27] xfs: use write_cache_pages for writeback
> > clustering". Posting a prototype to see if anything obvious is
> > being missed)
> 
> Hi Mel,
> 
> Thanks for picking this up again. The results are definitely
> promising, but I'd like to see a comparison against simply not doing
> IO from memory reclaim at all combined with the enhancements in this
> patchset.

Covered elsewhere. In these tests we are already writing 0 pages, so it
won't make a difference. I'm wary of eliminating writes entirely unless
kswapd has a way of prioritising the pages the flusher writes back,
because of the risk of a premature OOM kill.
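
To illustrate the sort of hook I have in mind, here is a rough sketch
only; flusher_prioritise_inode() is a hypothetical helper, not an
existing API:

	static void reclaim_defer_to_flusher(struct page *page)
	{
		struct inode *inode = page_mapping(page)->host;

		/*
		 * Rather than calling ->writepage from reclaim context,
		 * tag the backing inode so the flusher threads write it
		 * back ahead of their other work. Hypothetical helper.
		 */
		flusher_prioritise_inode(inode);

		/* Reclaim the page promptly once writeback completes */
		SetPageReclaim(page);
	}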

> After all, that's what I keep asking for (so we can get
> rid of .writepage altogether), and if the numbers don't add up, then
> I'll shut up about it. ;)
> 

Christoph covered this.

> .....
> 
> > use-once LRU logic). The command line for fs_mark looked something like
> > 
> > ./fs_mark  -d  /tmp/fsmark-2676  -D  100  -N  150  -n  150  -L  25  -t  1  -S0  -s  10485760
> > 
> > The machine was booted with "nr_cpus=1 mem=512M" as, according to
> > Dave, this triggers the worst behaviour.
> ....
> > During testing, a number of monitors were running to gather information
> > from ftrace in particular. This disrupts the results, of course,
> > because recording the information generates IO in itself, but I'm
> > ignoring that for the moment so the effect of the patches can be seen.
> > 
> > I've posted the raw reports for each filesystem at
> > 
> > http://www.csn.ul.ie/~mel/postings/reclaim-20110713/writeback-ext3/sandy/comparison.html
> > http://www.csn.ul.ie/~mel/postings/reclaim-20110713/writeback-ext4/sandy/comparison.html
> > http://www.csn.ul.ie/~mel/postings/reclaim-20110713/writeback-btrfs/sandy/comparison.html
> > http://www.csn.ul.ie/~mel/postings/reclaim-20110713/writeback-xfs/sandy/comparison.html
> .....
> > Average files per second is increased by a nice percentage, albeit
> > just within the standard deviation. Considering the type of test this
> > is, variability was inevitable, but I will double check without
> > monitoring.
> > 
> > The overhead (time spent in non-filesystem-related activities) is
> > reduced a *lot* and is a lot less variable.
> 
> Given that userspace is doing the same amount of work in all test
> runs, that implies that the userspace process is retaining its
> working set hot in the cache over syscalls with this patchset.
> 

It's one possibility. The more likely one is that fs_mark's anonymous
pages are getting swapped out, leading to variability. If IO is less
seeky as a result of the change, the swap-ins and swap-outs would be
faster.

> > Direct reclaim work is significantly reduced going from 37% of all
> > pages scanned to 1% with all patches applied. This implies that
> > processes are getting stalled less.
> 
> And that directly implicates page scanning during direct reclaim as
> the prime contributor to turfing the application's working set out
> of the CPU cache....
> 

It's a possibility.

> > Page writes by reclaim is what is motivating this series. It goes
> > from 14511 pages to 4084 which is a big improvement. We'll see later
> > if these were anonymous or file-backed pages.
> 
> Which were anon pages, so this is a major improvement. However,
> given that there were no dirty pages written directly by memory
> reclaim, perhaps we don't need to do IO at all from here and
> throttling is all that is needed?  ;)
> 

I wouldn't bet my life on it due to the potential premature OOM kill
problem if we cannot reclaim pages at all :)
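
Throttling in that context would look something like this minimal
sketch, where the 50% threshold is purely illustrative:

	/*
	 * If most of the pages scanned are dirty or under writeback,
	 * stall briefly and let the flusher threads make progress
	 * rather than issuing IO from reclaim.
	 */
	static void maybe_throttle_reclaim(unsigned long nr_dirty,
					   unsigned long nr_writeback,
					   unsigned long nr_scanned)
	{
		if (nr_dirty + nr_writeback > nr_scanned / 2)
			congestion_wait(BLK_RW_ASYNC, HZ / 10);
	}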

> > Direct reclaim writes were never a problem according to this.
> 
> That's true, but we disable direct reclaim for other reasons, namely
> that writeback from direct reclaim blows the stack.
> 

Correct. I should have been clearer and said direct reclaim wasn't
a problem in terms of queueing pages for IO.
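
Schematically, the restriction amounts to something like the following
sketch, simplified from what the series actually does:

	/*
	 * May this reclaim context write back a dirty page? Direct
	 * reclaimers can arrive here with a deep stack, so only kswapd
	 * is allowed to issue filesystem writeback.
	 */
	static bool may_write_from_reclaim(struct page *page)
	{
		/* Anonymous pages still go to swap from either context */
		if (!page_is_file_cache(page))
			return true;

		return current_is_kswapd();
	}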

-- 
Mel Gorman
SUSE Labs
