* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
[not found] <02ed01d1c47a$49fbfbc0$ddf3f340$@alibaba-inc.com>
@ 2016-06-12 7:33 ` Hillf Danton
2016-06-14 14:47 ` Mel Gorman
0 siblings, 1 reply; 11+ messages in thread
From: Hillf Danton @ 2016-06-12 7:33 UTC
To: 'Mel Gorman'; +Cc: linux-kernel, linux-mm
> @@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
> sc.may_writepage = 1;
>
> /*
> - * Now scan the zone in the dma->highmem direction, stopping
> - * at the last zone which needs scanning.
> - *
> - * We do this because the page allocator works in the opposite
> - * direction. This prevents the page allocator from allocating
> - * pages behind kswapd's direction of progress, which would
> - * cause too much scanning of the lower zones.
> + * Continue scanning in the highmem->dma direction stopping at
> + * the last zone which needs scanning. This may reclaim lowmem
> + * pages that are not necessary for zone balancing but it
> + * preserves LRU ordering. It is assumed that the bulk of
> + * allocation requests can use arbitrary zones with the
> + * possible exception of big highmem:lowmem configurations.
> */
> - for (i = 0; i <= end_zone; i++) {
> + for (i = end_zone; i >= end_zone; i--) {
s/i >= end_zone;/i >= 0;/ ?
> struct zone *zone = pgdat->node_zones + i;
>
> if (!populated_zone(zone))
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-12 7:33 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Hillf Danton
@ 2016-06-14 14:47 ` Mel Gorman
0 siblings, 0 replies; 11+ messages in thread
From: Mel Gorman @ 2016-06-14 14:47 UTC
To: Hillf Danton; +Cc: linux-kernel, linux-mm
On Sun, Jun 12, 2016 at 03:33:25PM +0800, Hillf Danton wrote:
> > @@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
> > sc.may_writepage = 1;
> >
> > /*
> > - * Now scan the zone in the dma->highmem direction, stopping
> > - * at the last zone which needs scanning.
> > - *
> > - * We do this because the page allocator works in the opposite
> > - * direction. This prevents the page allocator from allocating
> > - * pages behind kswapd's direction of progress, which would
> > - * cause too much scanning of the lower zones.
> > + * Continue scanning in the highmem->dma direction stopping at
> > + * the last zone which needs scanning. This may reclaim lowmem
> > + * pages that are not necessary for zone balancing but it
> > + * preserves LRU ordering. It is assumed that the bulk of
> > + * allocation requests can use arbitrary zones with the
> > + * possible exception of big highmem:lowmem configurations.
> > */
> > - for (i = 0; i <= end_zone; i++) {
> > + for (i = end_zone; i >= end_zone; i--) {
>
> s/i >= end_zone;/i >= 0;/ ?
>
Yes, although it's eliminated by "mm, vmscan: Make kswapd reclaim in
terms of nodes".
--
Mel Gorman
SUSE Labs
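For reference, the v7 posting elsewhere in this archive already carries
the bound Hillf suggests; a sketch of the corrected balance_pgdat()
loop, scanning top-down from end_zone:

	for (i = end_zone; i >= 0; i--) {
		struct zone *zone = pgdat->node_zones + i;

		if (!populated_zone(zone))
			continue;

		sc.nr_scanned = 0;
		sc.reclaim_idx = i;
		...
	}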
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-23 11:07 ` Mel Gorman
@ 2016-06-23 11:13 ` Michal Hocko
0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2016-06-23 11:13 UTC
To: Mel Gorman
Cc: Vlastimil Babka, Andrew Morton, Linux-MM, Rik van Riel,
Johannes Weiner, LKML
On Thu 23-06-16 12:07:28, Mel Gorman wrote:
> On Wed, Jun 22, 2016 at 06:00:12PM +0200, Vlastimil Babka wrote:
> > >>- enum zone_type classzone_idx;
> > >>-
> > >> if (!populated_zone(zone))
> > >> continue;
> > >>
> > >>- classzone_idx = requested_highidx;
> > >>+ /*
> > >>+ * Note that reclaim_idx does not change as it is the highest
> > >>+ * zone reclaimed from which for empty zones is a no-op but
> > >>+ * classzone_idx is used by shrink_node to test if the slabs
> > >>+ * should be shrunk on a given node.
> > >>+ */
> > >> while (!populated_zone(zone->zone_pgdat->node_zones +
> > >>- classzone_idx))
> > >>+ classzone_idx)) {
> > >> classzone_idx--;
> > >>+ continue;
> >
> > Oh and Michal's comment on Patch 20 made me realize that my objection to v6
> > about possible underflow of sc->reclaim_idx and classzone_idx seems to still
> > apply here for classzone_idx?
>
> Potentially. The relevant code now looks like this:
>
> classzone_idx = sc->reclaim_idx;
> while (!populated_zone(zone->zone_pgdat->node_zones +
> classzone_idx))
> classzone_idx--;
Yes, that makes much more sense to me.
--
Michal Hocko
SUSE Labs
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-22 16:00 ` Vlastimil Babka
@ 2016-06-23 11:07 ` Mel Gorman
2016-06-23 11:13 ` Michal Hocko
0 siblings, 1 reply; 11+ messages in thread
From: Mel Gorman @ 2016-06-23 11:07 UTC
To: Vlastimil Babka
Cc: Andrew Morton, Linux-MM, Rik van Riel, Johannes Weiner, LKML,
Michal Hocko
On Wed, Jun 22, 2016 at 06:00:12PM +0200, Vlastimil Babka wrote:
> >>- enum zone_type classzone_idx;
> >>-
> >> if (!populated_zone(zone))
> >> continue;
> >>
> >>- classzone_idx = requested_highidx;
> >>+ /*
> >>+ * Note that reclaim_idx does not change as it is the highest
> >>+ * zone reclaimed from which for empty zones is a no-op but
> >>+ * classzone_idx is used by shrink_node to test if the slabs
> >>+ * should be shrunk on a given node.
> >>+ */
> >> while (!populated_zone(zone->zone_pgdat->node_zones +
> >>- classzone_idx))
> >>+ classzone_idx)) {
> >> classzone_idx--;
> >>+ continue;
>
> Oh and Michal's comment on Patch 20 made me realize that my objection to v6
> about possible underflow of sc->reclaim_idx and classzone_idx seems to still
> apply here for classzone_idx?
Potentially. The relevant code now looks like this:
classzone_idx = sc->reclaim_idx;
while (!populated_zone(zone->zone_pgdat->node_zones +
classzone_idx))
classzone_idx--;
--
Mel Gorman
SUSE Labs
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-22 14:04 ` Vlastimil Babka
2016-06-22 16:00 ` Vlastimil Babka
@ 2016-06-23 10:58 ` Mel Gorman
1 sibling, 0 replies; 11+ messages in thread
From: Mel Gorman @ 2016-06-23 10:58 UTC
To: Vlastimil Babka
Cc: Andrew Morton, Linux-MM, Rik van Riel, Johannes Weiner, LKML
On Wed, Jun 22, 2016 at 04:04:34PM +0200, Vlastimil Babka wrote:
> >-static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> >+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
> >+ enum zone_type classzone_idx)
> > {
> > struct zoneref *z;
> > struct zone *zone;
> > unsigned long nr_soft_reclaimed;
> > unsigned long nr_soft_scanned;
> > gfp_t orig_mask;
> >- enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
> >
> > /*
> > * If the number of buffer_heads in the machine exceeds the maximum
> >@@ -2560,15 +2579,20 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> >
> > for_each_zone_zonelist_nodemask(zone, z, zonelist,
> > gfp_zone(sc->gfp_mask), sc->nodemask) {
>
> Using sc->reclaim_idx could be faster/nicer here than gfp_zone()?
Yes, then reclaim_idx and classzone_idx need to be updated if
buffer_heads_over_limit in the check above, but that is better anyway.
> Although after "mm, vmscan: Update classzone_idx if buffer_heads_over_limit"
> there would need to be a variable for the highmem adjusted value - maybe
> reuse "requested_highidx"? Not important though.
>
I think it's ok in the buffer_heads_over_limit case to reclaim
from more zones than requested. It may require another pass through
do_try_to_free_pages if a low zone was not reclaimed and required by the
caller, but that's ok and expected if there are too many buffer_heads.
> >- enum zone_type classzone_idx;
> >-
> > if (!populated_zone(zone))
> > continue;
> >
> >- classzone_idx = requested_highidx;
> >+ /*
> >+ * Note that reclaim_idx does not change as it is the highest
> >+ * zone reclaimed from which for empty zones is a no-op but
> >+ * classzone_idx is used by shrink_node to test if the slabs
> >+ * should be shrunk on a given node.
> >+ */
> > while (!populated_zone(zone->zone_pgdat->node_zones +
> >- classzone_idx))
> >+ classzone_idx)) {
> > classzone_idx--;
> >+ continue;
> >+ }
> >
> > /*
> > * Take care memory controller reclaiming has small influence
> >@@ -2594,8 +2618,8 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> > */
> > if (IS_ENABLED(CONFIG_COMPACTION) &&
> > sc->order > PAGE_ALLOC_COSTLY_ORDER &&
> >- zonelist_zone_idx(z) <= requested_highidx &&
> >- compaction_ready(zone, sc->order, requested_highidx)) {
> >+ zonelist_zone_idx(z) <= classzone_idx &&
> >+ compaction_ready(zone, sc->order, classzone_idx)) {
> > sc->compaction_ready = true;
> > continue;
> > }
> >@@ -2615,7 +2639,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> > /* need some check for avoid more shrink_zone() */
> > }
> >
> >- shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
> >+ shrink_node(zone->zone_pgdat, sc, classzone_idx);
> > }
> >
> > /*
> >@@ -2647,6 +2671,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> > int initial_priority = sc->priority;
> > unsigned long total_scanned = 0;
> > unsigned long writeback_threshold;
> >+ enum zone_type classzone_idx = sc->reclaim_idx;
>
> Hmm, try_to_free_mem_cgroup_pages() seems to call this with sc->reclaim_idx
> not explicitly initialized (e.g. 0). And shrink_all_memory() as well. I
> probably didn't check them in v6 and pointed out only try_to_free_pages()
> (which is now OK), sorry.
>
That gets fixed in "mm, memcg: move memcg limit enforcement from zones
to nodes" but I can move the hunk to this patch to make bisection a
little easier.
> > retry:
> > delayacct_freepages_start();
> >
> >@@ -2657,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> > vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
> > sc->priority);
> > sc->nr_scanned = 0;
> >- shrink_zones(zonelist, sc);
> >+ shrink_zones(zonelist, sc, classzone_idx);
>
> Looks like classzone_idx here is only used here to pass to shrink_zones()
> unchanged, which means it can just use it directly without a new param?
>
Yes
--
Mel Gorman
SUSE Labs
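Taken together, the points Mel concedes above suggest a shrink_zones()
of roughly the following shape — a sketch of the agreed direction, not
the code as posted in any version of the series:

	static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
	{
		struct zoneref *z;
		struct zone *zone;

		/*
		 * Widen sc->reclaim_idx instead of recomputing gfp_zone()
		 * when buffer_heads_over_limit forces highmem reclaim;
		 * reclaiming from more zones than requested is acceptable
		 * here, as Mel notes above.
		 */
		if (buffer_heads_over_limit) {
			sc->gfp_mask |= __GFP_HIGHMEM;
			sc->reclaim_idx = gfp_zone(sc->gfp_mask);
		}

		for_each_zone_zonelist_nodemask(zone, z, zonelist,
						sc->reclaim_idx, sc->nodemask) {
			...
		}
	}

with do_try_to_free_pages() calling shrink_zones(zonelist, sc) directly,
since the extra classzone_idx parameter only forwarded sc->reclaim_idx
unchanged.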
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-22 14:04 ` Vlastimil Babka
@ 2016-06-22 16:00 ` Vlastimil Babka
2016-06-23 11:07 ` Mel Gorman
2016-06-23 10:58 ` Mel Gorman
1 sibling, 1 reply; 11+ messages in thread
From: Vlastimil Babka @ 2016-06-22 16:00 UTC
To: Mel Gorman, Andrew Morton, Linux-MM
Cc: Rik van Riel, Johannes Weiner, LKML, Michal Hocko
On 06/22/2016 04:04 PM, Vlastimil Babka wrote:
> On 06/21/2016 04:15 PM, Mel Gorman wrote:
>> This patch makes reclaim decisions on a per-node basis. A reclaimer knows
>> what zone is required by the allocation request and skips pages from
>> higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
>> request of some description. On 64-bit, ZONE_DMA32 requests will cause
>> some problems but 32-bit devices on 64-bit platforms are increasingly
>> rare. Historically it would have been a major problem on 32-bit with big
>> Highmem:Lowmem ratios but such configurations are also now rare and even
>> where they exist, they are not encouraged. If it really becomes a problem,
>> it'll manifest as very low reclaim efficiencies.
>>
>> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>
> [...]
>
>> @@ -2540,14 +2559,14 @@ static inline bool compaction_ready(struct zone *zone, int order, int classzone_
>> * If a zone is deemed to be full of pinned pages then just give it a light
>> * scan then give up on it.
>> */
>> -static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>> +static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
>> + enum zone_type classzone_idx)
>> {
>> struct zoneref *z;
>> struct zone *zone;
>> unsigned long nr_soft_reclaimed;
>> unsigned long nr_soft_scanned;
>> gfp_t orig_mask;
>> - enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
>>
>> /*
>> * If the number of buffer_heads in the machine exceeds the maximum
>> @@ -2560,15 +2579,20 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>>
>> for_each_zone_zonelist_nodemask(zone, z, zonelist,
>> gfp_zone(sc->gfp_mask), sc->nodemask) {
>
> Using sc->reclaim_idx could be faster/nicer here than gfp_zone()?
> Although after "mm, vmscan: Update classzone_idx if buffer_heads_over_limit"
> there would need to be a variable for the highmem adjusted value - maybe reuse
> "requested_highidx"? Not important though.
>
>> - enum zone_type classzone_idx;
>> -
>> if (!populated_zone(zone))
>> continue;
>>
>> - classzone_idx = requested_highidx;
>> + /*
>> + * Note that reclaim_idx does not change as it is the highest
>> + * zone reclaimed from which for empty zones is a no-op but
>> + * classzone_idx is used by shrink_node to test if the slabs
>> + * should be shrunk on a given node.
>> + */
>> while (!populated_zone(zone->zone_pgdat->node_zones +
>> - classzone_idx))
>> + classzone_idx)) {
>> classzone_idx--;
>> + continue;
Oh and Michal's comment on Patch 20 made me realize that my objection to v6
about possible underflow of sc->reclaim_idx and classzone_idx seems to still
apply here for classzone_idx? Updated example: Normal zone allocation. A small
node 0 without Normal zone will get us classzone_idx == dma32. Node 1 next in
zonelist won't have dma/dma32 zones so we won't see node_zones + classzone_idx
populated, and the while loop will lead to underflow of classzone_idx.
I may be missing something, but I don't really see another way around it than
resetting classzone_idx to sc->reclaim_idx before the while loop.
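Mel's 2016-06-23 reply (above in this archive) addresses the underflow
by resetting classzone_idx from sc->reclaim_idx for each zone before
walking down to a populated one, exactly as Vlastimil suggests:

	/*
	 * Reset per zone so a small preceding node cannot drag the
	 * index below the current node's populated zones.
	 */
	classzone_idx = sc->reclaim_idx;
	while (!populated_zone(zone->zone_pgdat->node_zones +
					classzone_idx))
		classzone_idx--;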
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-21 14:15 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Mel Gorman
@ 2016-06-22 14:04 ` Vlastimil Babka
2016-06-22 16:00 ` Vlastimil Babka
2016-06-23 10:58 ` Mel Gorman
0 siblings, 2 replies; 11+ messages in thread
From: Vlastimil Babka @ 2016-06-22 14:04 UTC
To: Mel Gorman, Andrew Morton, Linux-MM; +Cc: Rik van Riel, Johannes Weiner, LKML
On 06/21/2016 04:15 PM, Mel Gorman wrote:
> This patch makes reclaim decisions on a per-node basis. A reclaimer knows
> what zone is required by the allocation request and skips pages from
> higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
> request of some description. On 64-bit, ZONE_DMA32 requests will cause
> some problems but 32-bit devices on 64-bit platforms are increasingly
> rare. Historically it would have been a major problem on 32-bit with big
> Highmem:Lowmem ratios but such configurations are also now rare and even
> where they exist, they are not encouraged. If it really becomes a problem,
> it'll manifest as very low reclaim efficiencies.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
[...]
> @@ -2540,14 +2559,14 @@ static inline bool compaction_ready(struct zone *zone, int order, int classzone_
> * If a zone is deemed to be full of pinned pages then just give it a light
> * scan then give up on it.
> */
> -static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> +static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
> + enum zone_type classzone_idx)
> {
> struct zoneref *z;
> struct zone *zone;
> unsigned long nr_soft_reclaimed;
> unsigned long nr_soft_scanned;
> gfp_t orig_mask;
> - enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
>
> /*
> * If the number of buffer_heads in the machine exceeds the maximum
> @@ -2560,15 +2579,20 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>
> for_each_zone_zonelist_nodemask(zone, z, zonelist,
> gfp_zone(sc->gfp_mask), sc->nodemask) {
Using sc->reclaim_idx could be faster/nicer here than gfp_zone()?
Although after "mm, vmscan: Update classzone_idx if buffer_heads_over_limit"
there would need to be a variable for the highmem adjusted value - maybe reuse
"requested_highidx"? Not important though.
> - enum zone_type classzone_idx;
> -
> if (!populated_zone(zone))
> continue;
>
> - classzone_idx = requested_highidx;
> + /*
> + * Note that reclaim_idx does not change as it is the highest
> + * zone reclaimed from which for empty zones is a no-op but
> + * classzone_idx is used by shrink_node to test if the slabs
> + * should be shrunk on a given node.
> + */
> while (!populated_zone(zone->zone_pgdat->node_zones +
> - classzone_idx))
> + classzone_idx)) {
> classzone_idx--;
> + continue;
> + }
>
> /*
> * Take care memory controller reclaiming has small influence
> @@ -2594,8 +2618,8 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> */
> if (IS_ENABLED(CONFIG_COMPACTION) &&
> sc->order > PAGE_ALLOC_COSTLY_ORDER &&
> - zonelist_zone_idx(z) <= requested_highidx &&
> - compaction_ready(zone, sc->order, requested_highidx)) {
> + zonelist_zone_idx(z) <= classzone_idx &&
> + compaction_ready(zone, sc->order, classzone_idx)) {
> sc->compaction_ready = true;
> continue;
> }
> @@ -2615,7 +2639,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> /* need some check for avoid more shrink_zone() */
> }
>
> - shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
> + shrink_node(zone->zone_pgdat, sc, classzone_idx);
> }
>
> /*
> @@ -2647,6 +2671,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> int initial_priority = sc->priority;
> unsigned long total_scanned = 0;
> unsigned long writeback_threshold;
> + enum zone_type classzone_idx = sc->reclaim_idx;
Hmm, try_to_free_mem_cgroup_pages() seems to call this with sc->reclaim_idx not
explicitly initialized (e.g. 0). And shrink_all_memory() as well. I probably
didn't check them in v6 and pointed out only try_to_free_pages() (which is now
OK), sorry.
> retry:
> delayacct_freepages_start();
>
> @@ -2657,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
> sc->priority);
> sc->nr_scanned = 0;
> - shrink_zones(zonelist, sc);
> + shrink_zones(zonelist, sc, classzone_idx);
Looks like classzone_idx here is only used here to pass to shrink_zones()
unchanged, which means it can just use it directly without a new param?
* [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-21 14:15 [PATCH 00/27] Move LRU page reclaim from zones to nodes v7 Mel Gorman
@ 2016-06-21 14:15 ` Mel Gorman
2016-06-22 14:04 ` Vlastimil Babka
0 siblings, 1 reply; 11+ messages in thread
From: Mel Gorman @ 2016-06-21 14:15 UTC
To: Andrew Morton, Linux-MM
Cc: Rik van Riel, Vlastimil Babka, Johannes Weiner, LKML, Mel Gorman
This patch makes reclaim decisions on a per-node basis. A reclaimer knows
what zone is required by the allocation request and skips pages from
higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
request of some description. On 64-bit, ZONE_DMA32 requests will cause
some problems but 32-bit devices on 64-bit platforms are increasingly
rare. Historically it would have been a major problem on 32-bit with big
Highmem:Lowmem ratios but such configurations are also now rare and even
where they exist, they are not encouraged. If it really becomes a problem,
it'll manifest as very low reclaim efficiencies.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/vmscan.c | 78 +++++++++++++++++++++++++++++++++++++++++--------------------
1 file changed, 53 insertions(+), 25 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 39cd6375f54e..7d5bad437809 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -84,6 +84,9 @@ struct scan_control {
/* Scan (total_size >> priority) pages at once */
int priority;
+ /* The highest zone to isolate pages for reclaim from */
+ enum zone_type reclaim_idx;
+
unsigned int may_writepage:1;
/* Can mapped pages be reclaimed? */
@@ -1386,6 +1389,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
unsigned long nr_taken = 0;
unsigned long nr_zone_taken[MAX_NR_ZONES] = { 0 };
unsigned long scan, nr_pages;
+ LIST_HEAD(pages_skipped);
for (scan = 0; scan < nr_to_scan && nr_taken < nr_to_scan &&
!list_empty(src); scan++) {
@@ -1396,6 +1400,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
VM_BUG_ON_PAGE(!PageLRU(page), page);
+ if (page_zonenum(page) > sc->reclaim_idx) {
+ list_move(&page->lru, &pages_skipped);
+ continue;
+ }
+
switch (__isolate_lru_page(page, mode)) {
case 0:
nr_pages = hpage_nr_pages(page);
@@ -1414,6 +1423,15 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
}
}
+ /*
+ * Splice any skipped pages to the start of the LRU list. Note that
+ * this disrupts the LRU order when reclaiming for lower zones but
+ * we cannot splice to the tail. If we did then the SWAP_CLUSTER_MAX
+ * scanning would soon rescan the same pages to skip and put the
+ * system at risk of premature OOM.
+ */
+ if (!list_empty(&pages_skipped))
+ list_splice(&pages_skipped, src);
*nr_scanned = scan;
trace_mm_vmscan_lru_isolate(sc->order, nr_to_scan, scan,
nr_taken, mode, is_file_lru(lru));
@@ -1583,7 +1601,7 @@ static int current_may_throttle(void)
}
/*
- * shrink_inactive_list() is a helper for shrink_zone(). It returns the number
+ * shrink_inactive_list() is a helper for shrink_node(). It returns the number
* of reclaimed pages
*/
static noinline_for_stack unsigned long
@@ -2395,12 +2413,13 @@ static inline bool should_continue_reclaim(struct zone *zone,
}
}
-static bool shrink_zone(struct zone *zone, struct scan_control *sc,
- bool is_classzone)
+static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc,
+ enum zone_type classzone_idx)
{
struct reclaim_state *reclaim_state = current->reclaim_state;
unsigned long nr_reclaimed, nr_scanned;
bool reclaimable = false;
+ struct zone *zone = &pgdat->node_zones[classzone_idx];
do {
struct mem_cgroup *root = sc->target_mem_cgroup;
@@ -2432,7 +2451,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
shrink_zone_memcg(zone, memcg, sc, &lru_pages);
zone_lru_pages += lru_pages;
- if (memcg && is_classzone)
+ if (!global_reclaim(sc) && sc->reclaim_idx == classzone_idx)
shrink_slab(sc->gfp_mask, zone_to_nid(zone),
memcg, sc->nr_scanned - scanned,
lru_pages);
@@ -2463,7 +2482,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
* Shrink the slab caches in the same proportion that
* the eligible LRU pages were scanned.
*/
- if (global_reclaim(sc) && is_classzone)
+ if (global_reclaim(sc) && sc->reclaim_idx == classzone_idx)
shrink_slab(sc->gfp_mask, zone_to_nid(zone), NULL,
sc->nr_scanned - nr_scanned,
zone_lru_pages);
@@ -2540,14 +2559,14 @@ static inline bool compaction_ready(struct zone *zone, int order, int classzone_
* If a zone is deemed to be full of pinned pages then just give it a light
* scan then give up on it.
*/
-static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
+ enum zone_type classzone_idx)
{
struct zoneref *z;
struct zone *zone;
unsigned long nr_soft_reclaimed;
unsigned long nr_soft_scanned;
gfp_t orig_mask;
- enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
/*
* If the number of buffer_heads in the machine exceeds the maximum
@@ -2560,15 +2579,20 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
for_each_zone_zonelist_nodemask(zone, z, zonelist,
gfp_zone(sc->gfp_mask), sc->nodemask) {
- enum zone_type classzone_idx;
-
if (!populated_zone(zone))
continue;
- classzone_idx = requested_highidx;
+ /*
+ * Note that reclaim_idx does not change as it is the highest
+ * zone reclaimed from which for empty zones is a no-op but
+ * classzone_idx is used by shrink_node to test if the slabs
+ * should be shrunk on a given node.
+ */
while (!populated_zone(zone->zone_pgdat->node_zones +
- classzone_idx))
+ classzone_idx)) {
classzone_idx--;
+ continue;
+ }
/*
* Take care memory controller reclaiming has small influence
@@ -2594,8 +2618,8 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
*/
if (IS_ENABLED(CONFIG_COMPACTION) &&
sc->order > PAGE_ALLOC_COSTLY_ORDER &&
- zonelist_zone_idx(z) <= requested_highidx &&
- compaction_ready(zone, sc->order, requested_highidx)) {
+ zonelist_zone_idx(z) <= classzone_idx &&
+ compaction_ready(zone, sc->order, classzone_idx)) {
sc->compaction_ready = true;
continue;
}
@@ -2615,7 +2639,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
/* need some check for avoid more shrink_zone() */
}
- shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
+ shrink_node(zone->zone_pgdat, sc, classzone_idx);
}
/*
@@ -2647,6 +2671,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
int initial_priority = sc->priority;
unsigned long total_scanned = 0;
unsigned long writeback_threshold;
+ enum zone_type classzone_idx = sc->reclaim_idx;
retry:
delayacct_freepages_start();
@@ -2657,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
sc->priority);
sc->nr_scanned = 0;
- shrink_zones(zonelist, sc);
+ shrink_zones(zonelist, sc, classzone_idx);
total_scanned += sc->nr_scanned;
if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2841,6 +2866,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
struct scan_control sc = {
.nr_to_reclaim = SWAP_CLUSTER_MAX,
.gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
+ .reclaim_idx = gfp_zone(gfp_mask),
.order = order,
.nodemask = nodemask,
.priority = DEF_PRIORITY,
@@ -3112,7 +3138,7 @@ static bool kswapd_shrink_zone(struct zone *zone,
balance_gap, classzone_idx))
return true;
- shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
+ shrink_node(zone->zone_pgdat, sc, classzone_idx);
/* TODO: ANOMALY */
clear_bit(PGDAT_WRITEBACK, &pgdat->flags);
@@ -3161,6 +3187,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
unsigned long nr_soft_scanned;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
+ .reclaim_idx = MAX_NR_ZONES - 1,
.order = order,
.priority = DEF_PRIORITY,
.may_writepage = !laptop_mode,
@@ -3231,15 +3258,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
sc.may_writepage = 1;
/*
- * Now scan the zone in the dma->highmem direction, stopping
- * at the last zone which needs scanning.
- *
- * We do this because the page allocator works in the opposite
- * direction. This prevents the page allocator from allocating
- * pages behind kswapd's direction of progress, which would
- * cause too much scanning of the lower zones.
+ * Continue scanning in the highmem->dma direction stopping at
+ * the last zone which needs scanning. This may reclaim lowmem
+ * pages that are not necessary for zone balancing but it
+ * preserves LRU ordering. It is assumed that the bulk of
+ * allocation requests can use arbitrary zones with the
+ * possible exception of big highmem:lowmem configurations.
*/
- for (i = 0; i <= end_zone; i++) {
+ for (i = end_zone; i >= 0; i--) {
struct zone *zone = pgdat->node_zones + i;
if (!populated_zone(zone))
@@ -3250,6 +3276,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
continue;
sc.nr_scanned = 0;
+ sc.reclaim_idx = i;
nr_soft_scanned = 0;
/*
@@ -3698,6 +3725,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
.may_unmap = !!(zone_reclaim_mode & RECLAIM_UNMAP),
.may_swap = 1,
+ .reclaim_idx = zone_idx(zone),
};
cond_resched();
@@ -3717,7 +3745,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
* priorities until we have enough memory freed.
*/
do {
- shrink_zone(zone, &sc, true);
+ shrink_node(zone->zone_pgdat, &sc, zone_idx(zone));
} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
}
--
2.6.4
* Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-09 18:04 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Mel Gorman
@ 2016-06-15 12:52 ` Vlastimil Babka
0 siblings, 0 replies; 11+ messages in thread
From: Vlastimil Babka @ 2016-06-15 12:52 UTC
To: Mel Gorman, Andrew Morton, Linux-MM; +Cc: Rik van Riel, Johannes Weiner, LKML
On 06/09/2016 08:04 PM, Mel Gorman wrote:
> This patch makes reclaim decisions on a per-node basis. A reclaimer knows
> what zone is required by the allocation request and skips pages from
> higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
> request of some description. On 64-bit, ZONE_DMA32 requests will cause
> some problems but 32-bit devices on 64-bit platforms are increasingly
> rare. Historically it would have been a major problem on 32-bit with big
> Highmem:Lowmem ratios but such configurations are also now rare and even
> where they exist, they are not encouraged. If it really becomes a problem,
> it'll manifest as very low reclaim efficiencies.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> mm/vmscan.c | 72 ++++++++++++++++++++++++++++++++++++++++---------------------
> 1 file changed, 47 insertions(+), 25 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f87a5a0f8793..ab1b28e7e20a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -84,6 +84,9 @@ struct scan_control {
> /* Scan (total_size >> priority) pages at once */
> int priority;
>
> + /* The highest zone to isolate pages for reclaim from */
> + enum zone_type reclaim_idx;
> +
> unsigned int may_writepage:1;
>
> /* Can mapped pages be reclaimed? */
> @@ -1369,6 +1372,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
> struct list_head *src = &lruvec->lists[lru];
> unsigned long nr_taken = 0;
> unsigned long scan;
> + LIST_HEAD(pages_skipped);
>
> for (scan = 0; scan < nr_to_scan && nr_taken < nr_to_scan &&
> !list_empty(src); scan++) {
> @@ -1379,6 +1383,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>
> VM_BUG_ON_PAGE(!PageLRU(page), page);
>
> + if (page_zonenum(page) > sc->reclaim_idx) {
> + list_move(&page->lru, &pages_skipped);
> + continue;
> + }
> +
> switch (__isolate_lru_page(page, mode)) {
> case 0:
> nr_taken += hpage_nr_pages(page);
> @@ -1395,6 +1404,15 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
> }
> }
>
> + /*
> + * Splice any skipped pages to the start of the LRU list. Note that
> + * this disrupts the LRU order when reclaiming for lower zones but
> + * we cannot splice to the tail. If we did then the SWAP_CLUSTER_MAX
> + * scanning would soon rescan the same pages to skip and put the
> + * system at risk of premature OOM.
> + */
> + if (!list_empty(&pages_skipped))
> + list_splice(&pages_skipped, src);
Hmm, that's unfortunate. But probably better than reclaiming the pages
in the name of LRU order, even though it wouldn't help the allocation at
hand.
[...]
> @@ -2516,14 +2535,14 @@ static inline bool compaction_ready(struct zone *zone, int order, int classzone_
> * If a zone is deemed to be full of pinned pages then just give it a light
> * scan then give up on it.
> */
> -static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> +static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
> + enum zone_type classzone_idx)
> {
> struct zoneref *z;
> struct zone *zone;
> unsigned long nr_soft_reclaimed;
> unsigned long nr_soft_scanned;
> gfp_t orig_mask;
> - enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
>
> /*
> * If the number of buffer_heads in the machine exceeds the maximum
> @@ -2536,15 +2555,15 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>
> for_each_zone_zonelist_nodemask(zone, z, zonelist,
> gfp_zone(sc->gfp_mask), sc->nodemask) {
> - enum zone_type classzone_idx;
> -
> if (!populated_zone(zone))
> continue;
>
> - classzone_idx = requested_highidx;
> while (!populated_zone(zone->zone_pgdat->node_zones +
> - classzone_idx))
> + classzone_idx)) {
> + sc->reclaim_idx--;
> classzone_idx--;
> + continue;
Isn't it wrong to do this across the whole zonelist, which will contain
multiple nodes? Example: a small node 0 without Normal zone will get us
sc->reclaim_idx == classzone_idx == dma32. Node 1 won't have dma/dma32
zones, so we won't see node_zones + classzone_idx populated, and the while loop will
lead to underflow?
And sc->reclaim_idx seems to be uninitialized when called via
try_to_free_pages() -> do_try_to_free_pages() -> shrink_zones() ?
Which means it's zero and we underflow immediately?
> @@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
> sc.may_writepage = 1;
>
> /*
> - * Now scan the zone in the dma->highmem direction, stopping
> - * at the last zone which needs scanning.
> - *
> - * We do this because the page allocator works in the opposite
> - * direction. This prevents the page allocator from allocating
> - * pages behind kswapd's direction of progress, which would
> - * cause too much scanning of the lower zones.
> + * Continue scanning in the highmem->dma direction stopping at
> + * the last zone which needs scanning. This may reclaim lowmem
> + * pages that are not necessary for zone balancing but it
> + * preserves LRU ordering. It is assumed that the bulk of
> + * allocation requests can use arbitrary zones with the
> + * possible exception of big highmem:lowmem configurations.
> */
> - for (i = 0; i <= end_zone; i++) {
> + for (i = end_zone; i >= end_zone; i--) {
i >= 0 ?
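The uninitialized sc->reclaim_idx that Vlastimil flags is handled for
the direct reclaim path in v7 (see the posting elsewhere in this
archive), where try_to_free_pages() sets the field in its scan_control
initializer; per Mel's reply above, the memcg path is fixed by a later
patch in the series:

	struct scan_control sc = {
		.nr_to_reclaim = SWAP_CLUSTER_MAX,
		.gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
		.reclaim_idx = gfp_zone(gfp_mask),
		...
	};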
* [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-06-09 18:04 [PATCH 00/27] Move LRU page reclaim from zones to nodes v6 Mel Gorman
@ 2016-06-09 18:04 ` Mel Gorman
2016-06-15 12:52 ` Vlastimil Babka
0 siblings, 1 reply; 11+ messages in thread
From: Mel Gorman @ 2016-06-09 18:04 UTC
To: Andrew Morton, Linux-MM
Cc: Rik van Riel, Vlastimil Babka, Johannes Weiner, LKML, Mel Gorman
This patch makes reclaim decisions on a per-node basis. A reclaimer knows
what zone is required by the allocation request and skips pages from
higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
request of some description. On 64-bit, ZONE_DMA32 requests will cause
some problems but 32-bit devices on 64-bit platforms are increasingly
rare. Historically it would have been a major problem on 32-bit with big
Highmem:Lowmem ratios but such configurations are also now rare and even
where they exist, they are not encouraged. If it really becomes a problem,
it'll manifest as very low reclaim efficiencies.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/vmscan.c | 72 ++++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 47 insertions(+), 25 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f87a5a0f8793..ab1b28e7e20a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -84,6 +84,9 @@ struct scan_control {
/* Scan (total_size >> priority) pages at once */
int priority;
+ /* The highest zone to isolate pages for reclaim from */
+ enum zone_type reclaim_idx;
+
unsigned int may_writepage:1;
/* Can mapped pages be reclaimed? */
@@ -1369,6 +1372,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
struct list_head *src = &lruvec->lists[lru];
unsigned long nr_taken = 0;
unsigned long scan;
+ LIST_HEAD(pages_skipped);
for (scan = 0; scan < nr_to_scan && nr_taken < nr_to_scan &&
!list_empty(src); scan++) {
@@ -1379,6 +1383,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
VM_BUG_ON_PAGE(!PageLRU(page), page);
+ if (page_zonenum(page) > sc->reclaim_idx) {
+ list_move(&page->lru, &pages_skipped);
+ continue;
+ }
+
switch (__isolate_lru_page(page, mode)) {
case 0:
nr_taken += hpage_nr_pages(page);
@@ -1395,6 +1404,15 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
}
}
+ /*
+ * Splice any skipped pages to the start of the LRU list. Note that
+ * this disrupts the LRU order when reclaiming for lower zones but
+ * we cannot splice to the tail. If we did then the SWAP_CLUSTER_MAX
+ * scanning would soon rescan the same pages to skip and put the
+ * system at risk of premature OOM.
+ */
+ if (!list_empty(&pages_skipped))
+ list_splice(&pages_skipped, src);
*nr_scanned = scan;
trace_mm_vmscan_lru_isolate(sc->order, nr_to_scan, scan,
nr_taken, mode, is_file_lru(lru));
@@ -1557,7 +1575,7 @@ static int current_may_throttle(void)
}
/*
- * shrink_inactive_list() is a helper for shrink_zone(). It returns the number
+ * shrink_inactive_list() is a helper for shrink_node(). It returns the number
* of reclaimed pages
*/
static noinline_for_stack unsigned long
@@ -2371,12 +2389,13 @@ static inline bool should_continue_reclaim(struct zone *zone,
}
}
-static bool shrink_zone(struct zone *zone, struct scan_control *sc,
- bool is_classzone)
+static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc,
+ enum zone_type classzone_idx)
{
struct reclaim_state *reclaim_state = current->reclaim_state;
unsigned long nr_reclaimed, nr_scanned;
bool reclaimable = false;
+ struct zone *zone = &pgdat->node_zones[classzone_idx];
do {
struct mem_cgroup *root = sc->target_mem_cgroup;
@@ -2408,7 +2427,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
shrink_zone_memcg(zone, memcg, sc, &lru_pages);
zone_lru_pages += lru_pages;
- if (memcg && is_classzone)
+ if (!global_reclaim(sc) && sc->reclaim_idx == classzone_idx)
shrink_slab(sc->gfp_mask, zone_to_nid(zone),
memcg, sc->nr_scanned - scanned,
lru_pages);
@@ -2439,7 +2458,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
* Shrink the slab caches in the same proportion that
* the eligible LRU pages were scanned.
*/
- if (global_reclaim(sc) && is_classzone)
+ if (global_reclaim(sc) && sc->reclaim_idx == classzone_idx)
shrink_slab(sc->gfp_mask, zone_to_nid(zone), NULL,
sc->nr_scanned - nr_scanned,
zone_lru_pages);
@@ -2516,14 +2535,14 @@ static inline bool compaction_ready(struct zone *zone, int order, int classzone_
* If a zone is deemed to be full of pinned pages then just give it a light
* scan then give up on it.
*/
-static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
+ enum zone_type classzone_idx)
{
struct zoneref *z;
struct zone *zone;
unsigned long nr_soft_reclaimed;
unsigned long nr_soft_scanned;
gfp_t orig_mask;
- enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
/*
* If the number of buffer_heads in the machine exceeds the maximum
@@ -2536,15 +2555,15 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
for_each_zone_zonelist_nodemask(zone, z, zonelist,
gfp_zone(sc->gfp_mask), sc->nodemask) {
- enum zone_type classzone_idx;
-
if (!populated_zone(zone))
continue;
- classzone_idx = requested_highidx;
while (!populated_zone(zone->zone_pgdat->node_zones +
- classzone_idx))
+ classzone_idx)) {
+ sc->reclaim_idx--;
classzone_idx--;
+ continue;
+ }
/*
* Take care memory controller reclaiming has small influence
@@ -2570,8 +2589,8 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
*/
if (IS_ENABLED(CONFIG_COMPACTION) &&
sc->order > PAGE_ALLOC_COSTLY_ORDER &&
- zonelist_zone_idx(z) <= requested_highidx &&
- compaction_ready(zone, sc->order, requested_highidx)) {
+ zonelist_zone_idx(z) <= classzone_idx &&
+ compaction_ready(zone, sc->order, classzone_idx)) {
sc->compaction_ready = true;
continue;
}
@@ -2591,7 +2610,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
/* need some check for avoid more shrink_zone() */
}
- shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
+ shrink_node(zone->zone_pgdat, sc, classzone_idx);
}
/*
@@ -2623,6 +2642,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
int initial_priority = sc->priority;
unsigned long total_scanned = 0;
unsigned long writeback_threshold;
+ enum zone_type classzone_idx = gfp_zone(sc->gfp_mask);
retry:
delayacct_freepages_start();
@@ -2633,7 +2653,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
sc->priority);
sc->nr_scanned = 0;
- shrink_zones(zonelist, sc);
+ shrink_zones(zonelist, sc, classzone_idx);
total_scanned += sc->nr_scanned;
if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -3088,7 +3108,7 @@ static bool kswapd_shrink_zone(struct zone *zone,
balance_gap, classzone_idx))
return true;
- shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
+ shrink_node(zone->zone_pgdat, sc, classzone_idx);
/* TODO: ANOMALY */
clear_bit(PGDAT_WRITEBACK, &pgdat->flags);
@@ -3137,6 +3157,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
unsigned long nr_soft_scanned;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
+ .reclaim_idx = MAX_NR_ZONES - 1,
.order = order,
.priority = DEF_PRIORITY,
.may_writepage = !laptop_mode,
@@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
sc.may_writepage = 1;
/*
- * Now scan the zone in the dma->highmem direction, stopping
- * at the last zone which needs scanning.
- *
- * We do this because the page allocator works in the opposite
- * direction. This prevents the page allocator from allocating
- * pages behind kswapd's direction of progress, which would
- * cause too much scanning of the lower zones.
+ * Continue scanning in the highmem->dma direction stopping at
+ * the last zone which needs scanning. This may reclaim lowmem
+ * pages that are not necessary for zone balancing but it
+ * preserves LRU ordering. It is assumed that the bulk of
+ * allocation requests can use arbitrary zones with the
+ * possible exception of big highmem:lowmem configurations.
*/
- for (i = 0; i <= end_zone; i++) {
+ for (i = end_zone; i >= end_zone; i--) {
struct zone *zone = pgdat->node_zones + i;
if (!populated_zone(zone))
@@ -3226,6 +3246,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
continue;
sc.nr_scanned = 0;
+ sc.reclaim_idx = i;
nr_soft_scanned = 0;
/*
@@ -3674,6 +3695,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
.may_unmap = !!(zone_reclaim_mode & RECLAIM_UNMAP),
.may_swap = 1,
+ .reclaim_idx = zone_idx(zone),
};
cond_resched();
@@ -3693,7 +3715,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
* priorities until we have enough memory freed.
*/
do {
- shrink_zone(zone, &sc, true);
+ shrink_node(zone->zone_pgdat, &sc, zone_idx(zone));
} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
}
--
2.6.4
* [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
2016-04-15 9:13 [PATCH 00/27] Move LRU page reclaim from zones to nodes v5 Mel Gorman
@ 2016-04-15 9:13 ` Mel Gorman
0 siblings, 0 replies; 11+ messages in thread
From: Mel Gorman @ 2016-04-15 9:13 UTC
To: Andrew Morton, Linux-MM
Cc: Rik van Riel, Vlastimil Babka, Johannes Weiner,
Jesper Dangaard Brouer, LKML, Mel Gorman
This patch makes reclaim decisions on a per-node basis. A reclaimer knows
what zone is required by the allocation request and skips pages from
higher zones. In many cases this will be ok because it's a GFP_HIGHMEM
request of some description. On 64-bit, ZONE_DMA32 requests will cause
some problems but 32-bit devices on 64-bit platforms are increasingly
rare. Historically it would have been a major problem on 32-bit with big
Highmem:Lowmem ratios but such configurations are also now rare and even
where they exist, they are not encouraged. If it really becomes a problem,
it'll manifest as very low reclaim efficiencies.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/vmscan.c | 77 ++++++++++++++++++++++++++++++++++++++-----------------------
1 file changed, 48 insertions(+), 29 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 75acb89c9df5..0f8dc3488f9d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -84,6 +84,9 @@ struct scan_control {
/* Scan (total_size >> priority) pages at once */
int priority;
+ /* The highest zone to isolate pages for reclaim from */
+ enum zone_type reclaim_idx;
+
unsigned int may_writepage:1;
/* Can mapped pages be reclaimed? */
@@ -1369,6 +1372,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
struct list_head *src = &lruvec->lists[lru];
unsigned long nr_taken = 0;
unsigned long scan;
+ LIST_HEAD(pages_skipped);
for (scan = 0; scan < nr_to_scan && nr_taken < nr_to_scan &&
!list_empty(src); scan++) {
@@ -1380,6 +1384,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
VM_BUG_ON_PAGE(!PageLRU(page), page);
+ if (page_zonenum(page) > sc->reclaim_idx) {
+ list_move(&page->lru, &pages_skipped);
+ continue;
+ }
+
switch (__isolate_lru_page(page, mode)) {
case 0:
nr_pages = hpage_nr_pages(page);
@@ -1398,6 +1407,15 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
}
}
+ /*
+ * Splice any skipped pages to the start of the LRU list. Note that
+ * this disrupts the LRU order when reclaiming for lower zones but
+ * we cannot splice to the tail. If we did then the SWAP_CLUSTER_MAX
+ * scanning would soon rescan the same pages to skip and put the
+ * system at risk of premature OOM.
+ */
+ if (!list_empty(&pages_skipped))
+ list_splice(&pages_skipped, src);
*nr_scanned = scan;
trace_mm_vmscan_lru_isolate(sc->order, nr_to_scan, scan,
nr_taken, mode, is_file_lru(lru));
@@ -1560,7 +1578,7 @@ static int current_may_throttle(void)
}
/*
- * shrink_inactive_list() is a helper for shrink_zone(). It returns the number
+ * shrink_inactive_list() is a helper for shrink_node(). It returns the number
* of reclaimed pages
*/
static noinline_for_stack unsigned long
@@ -2394,12 +2412,13 @@ static inline bool should_continue_reclaim(struct zone *zone,
}
}
-static bool shrink_zone(struct zone *zone, struct scan_control *sc,
- bool is_classzone)
+static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc,
+ enum zone_type classzone_idx)
{
struct reclaim_state *reclaim_state = current->reclaim_state;
unsigned long nr_reclaimed, nr_scanned;
bool reclaimable = false;
+ struct zone *zone = &pgdat->node_zones[classzone_idx];
do {
struct mem_cgroup *root = sc->target_mem_cgroup;
@@ -2431,7 +2450,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
shrink_zone_memcg(zone, memcg, sc, &lru_pages);
zone_lru_pages += lru_pages;
- if (memcg && is_classzone)
+ if (!global_reclaim(sc) && sc->reclaim_idx == classzone_idx)
shrink_slab(sc->gfp_mask, zone_to_nid(zone),
memcg, sc->nr_scanned - scanned,
lru_pages);
@@ -2462,7 +2481,7 @@ static bool shrink_zone(struct zone *zone, struct scan_control *sc,
* Shrink the slab caches in the same proportion that
* the eligible LRU pages were scanned.
*/
- if (global_reclaim(sc) && is_classzone)
+ if (global_reclaim(sc) && sc->reclaim_idx == classzone_idx)
shrink_slab(sc->gfp_mask, zone_to_nid(zone), NULL,
sc->nr_scanned - nr_scanned,
zone_lru_pages);
@@ -2541,14 +2560,14 @@ static inline bool compaction_ready(struct zone *zone, int order)
*
* Returns true if a zone was reclaimable.
*/
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
+ enum zone_type classzone_idx)
{
struct zoneref *z;
struct zone *zone;
unsigned long nr_soft_reclaimed;
unsigned long nr_soft_scanned;
gfp_t orig_mask;
- enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
bool reclaimable = false;
/*
@@ -2561,16 +2580,12 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
sc->gfp_mask |= __GFP_HIGHMEM;
for_each_zone_zonelist_nodemask(zone, z, zonelist,
- requested_highidx, sc->nodemask) {
- enum zone_type classzone_idx;
-
- if (!populated_zone(zone))
- continue;
-
- classzone_idx = requested_highidx;
- while (!populated_zone(zone->zone_pgdat->node_zones +
- classzone_idx))
+ classzone_idx, sc->nodemask) {
+ if (!populated_zone(zone)) {
+ sc->reclaim_idx--;
classzone_idx--;
+ continue;
+ }
/*
* Take care memory controller reclaiming has small influence
@@ -2596,7 +2611,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
*/
if (IS_ENABLED(CONFIG_COMPACTION) &&
sc->order > PAGE_ALLOC_COSTLY_ORDER &&
- zonelist_zone_idx(z) <= requested_highidx &&
+ zonelist_zone_idx(z) <= classzone_idx &&
compaction_ready(zone, sc->order)) {
sc->compaction_ready = true;
continue;
@@ -2619,7 +2634,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
/* need some check for avoid more shrink_zone() */
}
- if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
+ if (shrink_node(zone->zone_pgdat, sc, classzone_idx))
reclaimable = true;
if (global_reclaim(sc) &&
@@ -2659,6 +2674,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
unsigned long total_scanned = 0;
unsigned long writeback_threshold;
bool zones_reclaimable;
+ enum zone_type classzone_idx = gfp_zone(sc->gfp_mask);
retry:
delayacct_freepages_start();
@@ -2669,7 +2685,8 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
sc->priority);
sc->nr_scanned = 0;
- zones_reclaimable = shrink_zones(zonelist, sc);
+ sc->reclaim_idx = classzone_idx;
+ zones_reclaimable = shrink_zones(zonelist, sc, classzone_idx);
total_scanned += sc->nr_scanned;
if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -3128,7 +3145,7 @@ static bool kswapd_shrink_zone(struct zone *zone,
balance_gap, classzone_idx))
return true;
- shrink_zone(zone, sc, zone_idx(zone) == classzone_idx);
+ shrink_node(zone->zone_pgdat, sc, classzone_idx);
/* TODO: ANOMALY */
clear_bit(PGDAT_WRITEBACK, &pgdat->flags);
@@ -3177,6 +3194,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
unsigned long nr_soft_scanned;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
+ .reclaim_idx = MAX_NR_ZONES - 1,
.order = order,
.priority = DEF_PRIORITY,
.may_writepage = !laptop_mode,
@@ -3247,15 +3265,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
sc.may_writepage = 1;
/*
- * Now scan the zone in the dma->highmem direction, stopping
- * at the last zone which needs scanning.
- *
- * We do this because the page allocator works in the opposite
- * direction. This prevents the page allocator from allocating
- * pages behind kswapd's direction of progress, which would
- * cause too much scanning of the lower zones.
+ * Continue scanning in the highmem->dma direction stopping at
+ * the last zone which needs scanning. This may reclaim lowmem
+ * pages that are not necessary for zone balancing but it
+ * preserves LRU ordering. It is assumed that the bulk of
+ * allocation requests can use arbitrary zones with the
+ * possible exception of big highmem:lowmem configurations.
*/
- for (i = 0; i <= end_zone; i++) {
+ for (i = end_zone; i >= end_zone; i--) {
struct zone *zone = pgdat->node_zones + i;
if (!populated_zone(zone))
@@ -3266,6 +3283,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
continue;
sc.nr_scanned = 0;
+ sc.reclaim_idx = i;
nr_soft_scanned = 0;
/*
@@ -3714,6 +3732,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
.may_unmap = !!(zone_reclaim_mode & RECLAIM_UNMAP),
.may_swap = 1,
+ .reclaim_idx = zone_idx(zone),
};
cond_resched();
@@ -3733,7 +3752,7 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
* priorities until we have enough memory freed.
*/
do {
- shrink_zone(zone, &sc, true);
+ shrink_node(zone->zone_pgdat, &sc, zone_idx(zone));
} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
}
--
2.6.4
Thread overview: 11+ messages
[not found] <02ed01d1c47a$49fbfbc0$ddf3f340$@alibaba-inc.com>
2016-06-12 7:33 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Hillf Danton
2016-06-14 14:47 ` Mel Gorman
2016-06-21 14:15 [PATCH 00/27] Move LRU page reclaim from zones to nodes v7 Mel Gorman
2016-06-21 14:15 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Mel Gorman
2016-06-22 14:04 ` Vlastimil Babka
2016-06-22 16:00 ` Vlastimil Babka
2016-06-23 11:07 ` Mel Gorman
2016-06-23 11:13 ` Michal Hocko
2016-06-23 10:58 ` Mel Gorman
-- strict thread matches above, loose matches on Subject: below --
2016-06-09 18:04 [PATCH 00/27] Move LRU page reclaim from zones to nodes v6 Mel Gorman
2016-06-09 18:04 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Mel Gorman
2016-06-15 12:52 ` Vlastimil Babka
2016-04-15 9:13 [PATCH 00/27] Move LRU page reclaim from zones to nodes v5 Mel Gorman
2016-04-15 9:13 ` [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis Mel Gorman