From: Greg KH <gregkh@linuxfoundation.org>
To: Mel Gorman <mgorman@suse.de>
Cc: Stable <stable@vger.kernel.org>, Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 30/34] mm: vmscan: Do not force kswapd to scan small targets
Date: Wed, 25 Jul 2012 14:44:28 -0700	[thread overview]
Message-ID: <20120725214428.GA6502@kroah.com> (raw)
In-Reply-To: <20120725213508.GE9222@suse.de>

On Wed, Jul 25, 2012 at 10:35:08PM +0100, Mel Gorman wrote:
> On Wed, Jul 25, 2012 at 12:59:48PM -0700, Greg KH wrote:
> > On Mon, Jul 23, 2012 at 02:38:43PM +0100, Mel Gorman wrote:
> > > commit ad2b8e601099a23dffffb53f91c18d874fe98854 upstream - WARNING: this is a substitute patch.
> > > 
> > > Stable note: Not tracked in Bugzilla. This is a substitute for an
> > > 	upstream commit addressing a completely different issue that
> > > 	accidentally contained an important fix. The workload this patch
> > > 	helps was memcached when IO is started in the background. memcached
> > > 	should stay resident but without this patch it gets swapped more
> > > 	than it should. Sometimes this manifested as a drop in throughput,
> > > 	but mostly it was observed through /proc/vmstat.
> > > 
> > > Commit [246e87a9: memcg: fix get_scan_count() for small targets] was
> > > meant to fix a problem whereby small scan targets on memcg were ignored,
> > > causing the reclaim priority to rise too sharply. It forced scanning of
> > > small targets to take place for both memcg reclaim and kswapd.
> > > 
> > > From the time it was introduced it caused excessive reclaim by kswapd
> > > with workloads being pushed to swap that previously would have stayed
> > > resident. This was accidentally fixed by commit [ad2b8e60: mm: memcg:
> > > remove optimization of keeping the root_mem_cgroup LRU lists empty] but
> > > that patchset is not suitable for backporting.
> > > 
> > > The original patch came with no information on what workloads it benefits,
> > > but its cost is obvious in that it forces scanning to take place on lists
> > > that would otherwise have been ignored, such as small anonymous inactive
> > > lists. This patch partially reverts 246e87a9 so that small lists are not
> > > force-scanned, which means that IO-intensive workloads with small amounts
> > > of anonymous memory will not be swapped.
> > > 
> > > Signed-off-by: Mel Gorman <mgorman@suse.de>
> > > ---
> > >  mm/vmscan.c |    3 ---
> > >  1 file changed, 3 deletions(-)
> > 
> > I don't understand this patch.  The original
> > ad2b8e601099a23dffffb53f91c18d874fe98854 commit touched the file
> > mm/memcontrol.c and seemed to do something quite different from what you
> > have done below.
> > 
> 
> The main problem is I'm an idiot and "missed" when copying & pasting, and
> followed through with the mistake. The actual commit of interest was the one
> after it, [b95a2f2d: mm: vmscan: convert global reclaim to per-memcg LRU lists].
> 
> That patch has this hunk in it
> 
> @@ -1886,7 +1886,7 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
>          * latencies, so it's better to scan a minimum amount there as
>          * well.
>          */
> -       if (current_is_kswapd())
> +       if (current_is_kswapd() && mz->zone->all_unreclaimable)
>                 force_scan = true;
>         if (!global_reclaim(sc))
>                 force_scan = true;
> 
> This change makes it very difficult for kswapd to force scan, which was
> the fix I was interested in, but the series is not suitable for backporting.
> This has since changed again in 3.5-rc1 due to commit [90126375: mm/vmscan:
> push lruvec pointer into get_scan_count()] where this check became
> 
> 	if (current_is_kswapd() && zone->all_unreclaimable)
> 
> Superficially that looks OK to backport, but it is not, due to a subtle
> difference in how zone is looked up in the new context.
> 
> Can you use this patch as a replacement? It is functionally much closer
> to what happens upstream while still backporting the actual fix of
> interest.

Yes, that makes more sense as that is what the patch you included does
:)

I'll go queue it up now, thanks for the backport.

greg k-h
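
(A minimal standalone C model of the force_scan decision discussed above, not
kernel source. The *_stub() helpers are hypothetical stand-ins for
current_is_kswapd(), zone->all_unreclaimable and global_reclaim(sc); only the
shape of the checks is taken from the hunk quoted in the reply. Compiling and
running it shows kswapd no longer forcing a scan of a small, still-reclaimable
zone.)

/*
 * Standalone model of the force_scan decision in get_scan_count(); the
 * *_stub() predicates are stand-ins chosen so the example compiles on
 * its own and exercises the interesting case.
 */
#include <stdbool.h>
#include <stdio.h>

static bool current_is_kswapd_stub(void)      { return true;  } /* reclaim is from kswapd  */
static bool zone_all_unreclaimable_stub(void) { return false; } /* zone still reclaimable  */
static bool global_reclaim_stub(void)         { return true;  } /* not memcg-limit reclaim */

/* 246e87a9 behaviour: kswapd always forces scanning, even of tiny LRU lists. */
static bool force_scan_246e87a9(void)
{
	bool force_scan = false;

	if (current_is_kswapd_stub())
		force_scan = true;
	if (!global_reclaim_stub())	/* memcg reclaim still forces a minimum scan */
		force_scan = true;
	return force_scan;
}

/*
 * b95a2f2d behaviour (the hunk quoted above): kswapd only forces scanning of
 * zones already marked all_unreclaimable.  The -stable substitute appears to
 * go one step further and drop the kswapd branch entirely (3 deletions in
 * mm/vmscan.c).
 */
static bool force_scan_b95a2f2d(void)
{
	bool force_scan = false;

	if (current_is_kswapd_stub() && zone_all_unreclaimable_stub())
		force_scan = true;
	if (!global_reclaim_stub())
		force_scan = true;
	return force_scan;
}

int main(void)
{
	printf("kswapd on a small, still-reclaimable zone: old=%d new=%d\n",
	       force_scan_246e87a9(), force_scan_b95a2f2d());
	return 0;
}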

Thread overview: 121+ messages
2012-07-23 13:38 [PATCH 00/34] Memory management performance backports for -stable V2 Mel Gorman
2012-07-23 13:38 ` Mel Gorman
2012-07-23 13:38 ` [PATCH 01/34] mm: vmstat: cache align vm_stat Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 02/34] mm: memory hotplug: Check if pages are correctly reserved on a per-section basis Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 03/34] mm: Reduce the amount of work done when updating min_free_kbytes Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-24 22:47   ` Greg KH
2012-07-24 22:47     ` Greg KH
2012-07-25  7:57     ` Mel Gorman
2012-07-25  7:57       ` Mel Gorman
2012-07-23 13:38 ` [PATCH 04/34] mm: vmscan: fix force-scanning small targets without swap Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 05/34] vmscan: clear ZONE_CONGESTED for zone with good watermark Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 06/34] vmscan: add shrink_slab tracepoints Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 07/34] vmscan: shrinker->nr updates race and go wrong Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 08/34] vmscan: reduce wind up shrinker->nr when shrinker can't do work Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 09/34] mm: limit direct reclaim for higher order allocations Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 10/34] mm: Abort reclaim/compaction if compaction can proceed Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 11/34] mm: compaction: trivial clean up in acct_isolated() Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 12/34] mm: change isolate mode from #define to bitwise type Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 13/34] mm: compaction: make isolate_lru_page() filter-aware Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 14/34] mm: zone_reclaim: " Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 15/34] mm: migration: clean up unmap_and_move() Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-25 15:45   ` Greg KH
2012-07-25 15:45     ` Greg KH
2012-07-25 16:04     ` Mel Gorman
2012-07-25 16:04       ` Mel Gorman
2012-07-25 18:03       ` Greg KH
2012-07-25 18:03         ` Greg KH
2012-07-23 13:38 ` [PATCH 16/34] mm: compaction: Allow compaction to isolate dirty pages Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-25 15:47   ` Greg KH
2012-07-25 15:47     ` Greg KH
2012-07-25 16:07     ` Mel Gorman
2012-07-25 16:07       ` Mel Gorman
2012-07-23 13:38 ` [PATCH 17/34] mm: compaction: Determine if dirty pages can be migrated without blocking within ->migratepage Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 18/34] mm: page allocator: Do not call direct reclaim for THP allocations while compaction is deferred Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 19/34] mm: compaction: make isolate_lru_page() filter-aware again Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 20/34] kswapd: avoid unnecessary rebalance after an unsuccessful balancing Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 21/34] kswapd: assign new_order and new_classzone_idx after wakeup in sleeping Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 22/34] mm: compaction: Introduce sync-light migration for use by compaction Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 23/34] mm: vmscan: When reclaiming for compaction, ensure there are sufficient free pages available Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 24/34] mm: vmscan: Do not OOM if aborting reclaim to start compaction Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 25/34] mm: vmscan: Check if reclaim should really abort even if compaction_ready() is true for one zone Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-25 19:51   ` Greg KH
2012-07-25 19:51     ` Greg KH
2012-07-23 13:38 ` [PATCH 26/34] vmscan: promote shared file mapped pages Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 27/34] vmscan: activate executable pages after first usage Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 28/34] mm/vmscan.c: consider swap space when deciding whether to continue reclaim Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 29/34] mm: test PageSwapBacked in lumpy reclaim Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 30/34] mm: vmscan: Do not force kswapd to scan small targets Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-25 19:59   ` Greg KH
2012-07-25 19:59     ` Greg KH
2012-07-25 21:35     ` Mel Gorman
2012-07-25 21:35       ` Mel Gorman
2012-07-25 21:44       ` Greg KH [this message]
2012-07-25 21:44         ` Greg KH
2012-07-23 13:38 ` [PATCH 31/34] cpusets: avoid looping when storing to mems_allowed if one node remains set Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 32/34] cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 33/34] cpuset: mm: Reduce large amounts of memory barrier related damage v3 Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-23 13:38 ` [PATCH 34/34] mm/hugetlb: fix warning in alloc_huge_page/dequeue_huge_page_vma Mel Gorman
2012-07-23 13:38   ` Mel Gorman
2012-07-24  5:58 ` [PATCH 00/34] Memory management performance backports for -stable V2 Mike Galbraith
2012-07-24  5:58   ` Mike Galbraith
2012-07-24  8:10   ` Mel Gorman
2012-07-24  8:10     ` Mel Gorman
2012-07-24 13:18   ` Hillf Danton
2012-07-24 13:18     ` Hillf Danton
2012-07-24 13:27     ` Mel Gorman
2012-07-24 13:27       ` Mel Gorman
2012-07-24 13:34       ` Hillf Danton
2012-07-24 13:34         ` Hillf Danton
2012-07-24 13:53         ` Mel Gorman
2012-07-24 13:53           ` Mel Gorman
2012-07-24 14:11           ` Hillf Danton
2012-07-24 14:11             ` Hillf Danton
2012-07-24 13:52     ` Mike Galbraith
2012-07-24 13:52       ` Mike Galbraith
2012-07-24 14:18       ` Hillf Danton
2012-07-24 14:18         ` Hillf Danton
2012-07-24 14:41         ` Mike Galbraith
2012-07-24 14:41           ` Mike Galbraith
2012-07-25 22:30 ` Greg KH
2012-07-25 22:30   ` Greg KH
2012-07-25 22:48   ` Mel Gorman
2012-07-25 22:48     ` Mel Gorman
2012-07-30  1:13 ` Ben Hutchings
  -- strict thread matches above, loose matches on Subject: below --
2012-07-19 14:36 [PATCH 00/34] Memory management performance backports for -stable Mel Gorman
2012-07-19 14:36 ` [PATCH 30/34] mm: vmscan: Do not force kswapd to scan small targets Mel Gorman
2012-07-19 14:36   ` Mel Gorman
2012-07-19 20:37   ` Jonathan Nieder
2012-07-19 22:08     ` Mel Gorman
