From: Mel Gorman <mel@csn.ul.ie>
To: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Linux Kernel List <linux-kernel@vger.kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Minchan Kim <minchan.kim@gmail.com>,
	Wu Fengguang <fengguang.wu@intel.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Subject: Re: [PATCH 0/8] Reduce latencies and improve overall reclaim efficiency v2
Date: Mon, 18 Oct 2010 14:55:36 +0100
Message-ID: <20101018135535.GC30667@csn.ul.ie>
In-Reply-To: <4CB721A1.4010508@linux.vnet.ibm.com>

On Thu, Oct 14, 2010 at 05:28:33PM +0200, Christian Ehrhardt wrote:

> Seeing the patches Mel sent a few weeks ago, I realized that this series
> might be at least partially related to my reports in 1Q 2010 - so I ran my
> test case on a few kernels to provide you with some more backing data.

Thanks very much for revisiting this.

> Results are always the average of three iozone runs, as iozone is known to be somewhat noisy - especially when affected by the issue I am trying to show here.
> As discussed in detail in older threads, the setup uses 16 disks and scales the number of concurrent iozone processes.
> Processes are evenly distributed so that there is always one process per disk.
> In the past we reported a 40% to 80% degradation for the sequential read case on 2.6.32, which can still be seen.
> What we found was that page cache allocations with the GFP_COLD flag loop for a long time between try_to_free, get_page and reclaim: because freeing makes some progress each pass, the GFP_COLD allocations keep looping and retrying.
> In addition, my case had no writes at all, which forced congestion_wait to wait out the full timeout every time.
> 
> Kernel (git)             4 procs    8 procs   16 procs   deviation (#16 case)                   comment
> linux-2.6.30              902694    1396073    1892624                 base                              base
> linux-2.6.32              752008     990425     932938               -50.7%     impact as reported in 1Q 2010
> linux-2.6.35               63532      71573      64083               -96.6%                    got even worse
> linux-2.6.35.6            176485     174442     212102               -88.8%  fixes useful, but still far away
> linux-2.6.36-rc4-trace    119683     188997     187012               -90.1%                         still bad 

FWIW, I wouldn't expect the trace kernel to help. It's only adding the
markers but not doing anything useful with them.
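
If it is useful, you can confirm that the tracepoints are actually compiled
into a given kernel by listing them through the standard ftrace interface;
something like this should show them:

grep -e 'vmscan:' -e 'writeback:' /sys/kernel/debug/tracing/available_events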

> linux-2.6.36-rc4-fix      884431    1114073    1470659               -22.3%           Mel's fixes help a lot!
> 
> That is all from the test case I used when I reported the issue earlier this year.
> The short summary is that Mel's patch series helps a lot for my test case.
> 

This is good to hear. We're going in the right direction at least.

> So I guess, Mel, you now want some traces of the last two cases, right?
> Could you give me some minimal advice on what exactly you need and how to capture it?
> 

Yes please. Do something like the following before starting the test:

# Mount debugfs if it is not already mounted
mount -t debugfs none /sys/kernel/debug

# Enable all the vmscan tracepoints plus the two congestion-wait events
echo 1 > /sys/kernel/debug/tracing/events/vmscan/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/writeback_congestion_wait/enable
echo 1 > /sys/kernel/debug/tracing/events/writeback/writeback_wait_iff_congested/enable

# Stream the trace buffer to a log file in the background
cat /sys/kernel/debug/tracing/trace_pipe > trace.log &

Then rerun the test, gzip trace.log and drop it on some publicly accessible
webserver. I can rerun the analysis scripts and see if something odd
falls out.
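
If it is easier to automate, the whole capture could be wrapped up along
these lines - just a sketch, with the workload line a placeholder for
whatever your iozone harness normally runs:

#!/bin/sh
# Sketch of a trace capture wrapper. The workload line below is a
# placeholder - substitute your real iozone invocation.
TRACING=/sys/kernel/debug/tracing

mountpoint -q /sys/kernel/debug || mount -t debugfs none /sys/kernel/debug

echo 1 > $TRACING/events/vmscan/enable
echo 1 > $TRACING/events/writeback/writeback_congestion_wait/enable
echo 1 > $TRACING/events/writeback/writeback_wait_iff_congested/enable

# Stream the trace buffer to disk while the workload runs
cat $TRACING/trace_pipe > trace.log &
CATPID=$!

run-your-iozone-test-here	# placeholder for the real workload

# Disable the events and stop the reader before packaging the log
echo 0 > $TRACING/events/vmscan/enable
echo 0 > $TRACING/events/writeback/writeback_congestion_wait/enable
echo 0 > $TRACING/events/writeback/writeback_wait_iff_congested/enable
kill $CATPID
gzip trace.log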

> In addition, everything worked fine, so you can add both of these, however you like:
> Reported-by: <ehrhardt@linux.vnet.ibm.com>
> Tested-by: <ehrhardt@linux.vnet.ibm.com>
> 
> Note: it might be worth mentioning that the write case has improved a lot since 2.6.30.
> It is not directly related to the read degradations, but the gains are up to 150% (write) and 272% (rewrite).
> So not everything is bad :-)
> 

Every cloud has a silver lining I guess :)

> Any further comments or questions?
> 

The log might help me further in figuring out how and why we are losing
time. When/if the patches move from -mm to mainline, it'd also be worth
retesting as there is some churn in this area and we need to know whether
we are heading in the right direction or not. If all goes according to plan,
kernel 2.6.37-rc1 will be of interest. Thanks again.
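
P.S. If you want to eyeball the log yourself before uploading it, a rough
total of the time spent in congestion_wait can be pulled out with something
like the following. It assumes the event prints a usec_delayed= field,
which it should once this series is applied:

# assumes usec_delayed= as printed by the congestion_wait event
awk -F'usec_delayed=' '
    /writeback_congestion_wait/ { split($2, a, " "); total += a[1] }
    END { printf "total congestion_wait: %.2f seconds\n", total / 1000000 }
' trace.log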

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
