From: Mel Gorman <mgorman@techsingularity.net>
To: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>, Rik van Riel <riel@surriel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Minchan Kim <minchan@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 00/34] Move LRU page reclaim from zones to nodes v9
Date: Fri, 19 Aug 2016 16:55:46 +0100
Message-ID: <20160819155546.GQ8119@techsingularity.net>
In-Reply-To: <20160819153259.nszkbsk7dnfzfv5i@redhat.com>

On Fri, Aug 19, 2016 at 05:32:59PM +0200, Andrea Arcangeli wrote:
> On Fri, Aug 19, 2016 at 03:53:59PM +0100, Mel Gorman wrote:
> > Compaction is not the same as LRU management.
> 
> Sure, but compaction is invoked by reclaim and if reclaim is node-wide,
> it makes more sense for compaction to be node-wide as well.
> 

It might be desirable, but it's not necessarily effective. Reclaim/compaction
was always, at best, a heuristic that replaced lumpy reclaim.

> Otherwise what do you compact? Just the higher zone, or all of them?
> 

Right now, all of them, taking into account whether compaction is likely
to succeed in each zone.
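
For reference, the shape of that walk in try_to_compact_pages() is roughly
the following. This is a simplified sketch, not line-for-line kernel
source; tracing, locking and the deferral bookkeeping on success/failure
are elided:

	/* gfp_mask, order, prio and alloc_flags come from the caller. */
	struct zoneref *z;
	struct zone *zone;

	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
					ac->high_zoneidx, ac->nodemask) {
		enum compact_result status;

		/* Back off from zones that recently failed to compact. */
		if (compaction_deferred(zone, order))
			continue;

		status = compact_zone_order(zone, order, gfp_mask, prio,
					    alloc_flags, ac_classzone_idx(ac));

		/* COMPACT_PARTIAL: a page of this order should now exist. */
		if (status == COMPACT_PARTIAL)
			break;
	}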

> > That is not guaranteed. At the time of migration, it is unknown if the
> > original allocation had addressing limitations or not. I did not audit
> > the address-limited allocations to see if any of them allow migration.
> > 
> > The filesystems would be the ones that need careful auditing. There are
> > some places that add lowmem pages to the LRU but far less obvious if any
> > of them would successfully migrate.
> 
> True, but that's a tradeoff. This whole patchset is about optimizing
> the common case of allocations from the highest possible
> classzone_idx: if a system has no older hardware, lowmem
> classzone_idx allocations practically never happen.
> 

I'll have to take your word for it. It is still the case that the highest
zone is preferred for allocations, but the timing will differ depending on
when reclaim is triggered and what is reclaimed.

As long as it is reclaiming, page age will still take priority. Unless the
LRU almost perfectly interleaves pages between zones, the effect of
node-reclaim will be that reclaim/compaction is likely to succeed for at
least one zone, with success rates similar to zone reclaim.

> If that tradeoff is valid, retaining the migratability of memory not
> allocated with the highest classzone_idx is an optimization that goes
> in the opposite direction.
> 
> Retaining such an "optimization" means increasing the likelihood that
> high order allocations from lower zones succeed, yes, but it screws
> with the main concept of:
> 
>      reclaim node wide at high order -> failure to allocate -> compaction node wide
> 
> So if the tradeoff works for reclaim_node I don't see why we should
> "optimize" for the opposite case in compaction.
> 
> > That is likely true as long as migration is always towards higher address.
> 
> Well with a node-wide LRU we got rid of any "towards higher address"

The allocation preferences of the zonelist continue to favour higher
zones. If anything, there is a stronger preference because zone-lru used
the fair-zone allocation policy to interleave pages between zones to avoid
page age inversion problems with smaller sized high zones.

> bias. So why worry about these concepts for high order allocations
> provided by compaction, if order 0 allocations provided purely by
> shrink_node won't care at all about such a concept any longer?
> 

At worst, reclaim/compaction is slightly weakened. As reclaim is in
LRU order, reclaim/compaction will still make progress for at least one
zone at a time.

> It sounds backwards to even worry about "towards higher address" in
> compaction, which is only relevant for providing high order allocations,
> when zero order 4kb allocations will no longer care about going
> "towards higher address" at all.
> 

Towards higher addresses in compaction is an implementation detail when
it's not going across zones. Specifically, it was the easiest way to avoid
using the same pageblocks as both migration sources and targets.
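
Roughly, in compact_zone() terms: the migrate scanner walks up from the
low end of the zone while the free scanner walks down from the high end,
and compaction finishes when the two meet, so a pageblock is never a
migration source and target at the same time. A simplified sketch, not
line-for-line kernel code:

	/* cc is the struct compact_control for this zone. */
	while (cc->migrate_pfn < cc->free_pfn) {
		/* Isolate movable pages starting from the low end. */
		if (isolate_migratepages(zone, cc) == ISOLATE_ABORT)
			break;

		/*
		 * Migration targets are supplied by compaction_alloc(),
		 * which takes free pages from the high end of the zone.
		 */
		migrate_pages(&cc->migratepages, compaction_alloc,
			      compaction_free, (unsigned long)cc,
			      cc->mode, MR_COMPACTION);
	}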

> > An audit of all additions to the LRU that are address-limited allocations
> > is required to determine if any of those pages can migrate.
> 
> Agreed.
> 
> Either that, or the pageblock needs to be marked with the classzone_idx
> that it can tolerate, and per-allocation-classzone highpfn,lowpfn
> markers need to be added instead of being global.
> 

That would be somewhat severe. A single address-restricted allocation
would prevent any pages in that pageblock from migrating out of the zone,
and the classzone_idx could only be cleared once the entire pageblock was
freed.

> > I'm not sure I understand. The zone allocation preference has the same
> > meaning as it always had.
> 
> What I mean is that the zonelist is like:
> 
> =n
> 
> 	[ node0->zone0, node0->zone1, node1->zone0, node1->zone1 ]
> 
> or:
> 
> =z
> 
> 	[ node0->zone0, node1->zone0, node0->zone1, node1->zone1 ]
> 
> So why call in this order (first case above):
> 
> 1  shrink_node(node0->zone0->node, allocation_classzone_idx /* to limit */)
> 2  shrink_node(node0->zone1->node, allocation_classzone_idx /* to limit */)
> 3  shrink_node(node1->zone0->node, allocation_classzone_idx /* to limit */)
> 4  shrink_node(node1->zone1->node, allocation_classzone_idx /* to limit */)
> 

For =n in the direct reclaim case, there is a check to see if the pgdat
has changed when reclaiming in zonelist order: 1 will call shrink_node, 2
will skip, 3 will call shrink_node, 4 will skip.

For =z, shrink_node will be called multiple times but it's also not the
default case. If it is a problem then direct reclaim would need to use a
bitmask of nodes shrunk during a zonelist traversal.
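
That would be straightforward; a hypothetical sketch (this is not in the
patchset) using the existing nodemask API:

	nodemask_t shrunk_nodes = NODE_MASK_NONE;
	struct zoneref *z;
	struct zone *zone;

	for_each_zone_zonelist_nodemask(zone, z, zonelist,
					sc->reclaim_idx, sc->nodemask) {
		/* Shrink each node at most once per zonelist traversal. */
		if (node_isset(zone_to_nid(zone), shrunk_nodes))
			continue;

		node_set(zone_to_nid(zone), shrunk_nodes);
		shrink_node(zone->zone_pgdat, sc);
	}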

> It's possible I missed something in the code; perhaps I misunderstand
> how shrink_node is invoked through the zonelist.
> 

Look for this bit in shrink_zones():

                        /*
                         * Shrink each node in the zonelist once. If the
                         * zonelist is ordered by zone (not the default)
                         * then a node may be shrunk multiple times but in that
                         * case the user prefers lower zones being preserved.
                         */
                        if (zone->zone_pgdat == last_pgdat)
                                continue;
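
In context, stripped of the cpuset, writeback and compaction-readiness
handling, the surrounding loop is roughly:

	static void shrink_zones(struct zonelist *zonelist,
				 struct scan_control *sc)
	{
		pg_data_t *last_pgdat = NULL;
		struct zoneref *z;
		struct zone *zone;

		for_each_zone_zonelist_nodemask(zone, z, zonelist,
						sc->reclaim_idx, sc->nodemask) {
			/* Shrink each node in the zonelist once (see above). */
			if (zone->zone_pgdat == last_pgdat)
				continue;

			last_pgdat = zone->zone_pgdat;
			shrink_node(zone->zone_pgdat, sc);
		}
	}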

> > The zonelist ordering is still required to satisfy address-limited allocation
> > requests. If it wasn't, free pages could be managed on a per-node basis.
> 
> But you pass the node, not the zone, to the shrink_node function; the
> whole point I'm making is that the "address-limiting" is provided by
> the classzone_idx alone with your change, not by the zonelist anymore.
> 
> static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> 
> There's no zone pointer in "scan_control". There's the reclaim_idx
> (aka classzone_idx; it would have been clearer to call it
> classzone_idx).
> 

There is no need for a zone pointer as the reclaim_idx is sufficient. In
the node-ordered case, the node will be shrunk once based on the
restrictions of reclaim_idx.
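
For reference, these are the fields that matter here, excerpted from
struct scan_control as it looks with the series applied (the remaining
fields are omitted):

	struct scan_control {
		/* How many pages shrink_list() should reclaim */
		unsigned long nr_to_reclaim;

		/* This context's GFP mask */
		gfp_t gfp_mask;

		/* Allocation order */
		int order;

		/*
		 * Nodemask of nodes allowed by the caller. If NULL, all
		 * nodes are scanned.
		 */
		nodemask_t *nodemask;

		/* The highest zone to isolate pages for reclaim from */
		enum zone_type reclaim_idx;

		/* ... remaining fields omitted ... */
	};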

> > Compacting across zones was/is a problem regardless of how the LRU is
> > managed.
> 
> It is a separate problem to make it work, but wanting to make
> compaction work node-wide is very much a side effect of shrink_node
> not going serially into zones but going node-wide at all times. Hence
> it doesn't make sense anymore to only compact a single zone before
> invoking shrink_node again.

Calling shrink_node will increase the chance of a single zone successfully
compacting. It's not guaranteed to succeed but then again, it never was.

-- 
Mel Gorman
SUSE Labs
