* [patch 0/3] mm: vmscan: followup fixes to cleanups in -mm
From: Johannes Weiner @ 2014-07-14 13:20 UTC
To: Andrew Morton
Cc: Mel Gorman, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
Hi Andrew,
here is a follow-up to feedback on patches you already have in -mm.
This series is not linear: the first two patches are fixlets for the
patches named in their subjects, and the third one can be placed after
"mm: vmscan: move swappiness out of scan_control".
Thanks!
* [patch 1/3] mm: vmscan: rework compaction-ready signaling in direct reclaim fix
From: Johannes Weiner @ 2014-07-14 13:20 UTC
To: Andrew Morton
Cc: Mel Gorman, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
As per Mel, replace the out: label with breaks from the loop.
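For illustration, a minimal standalone sketch (not the kernel code) of
why the breaks are equivalent: the out: label sat immediately after the
loop, so "goto out" and "break" land in the same place.

#include <stdio.h>

int main(void)
{
	int priority = 12;
	int compaction_ready = 0;

	do {
		if (priority == 10)	/* stand-in for the nr_reclaimed check */
			break;		/* was: goto out */
		if (compaction_ready)
			break;		/* was: goto out */
	} while (--priority >= 0);
	/* was: out: -- execution resumes here either way */
	printf("left the loop at priority %d\n", priority);
	return 0;
}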
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/vmscan.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 35747a75bf08..6f43df4a5253 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2496,10 +2496,10 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
total_scanned += sc->nr_scanned;
if (sc->nr_reclaimed >= sc->nr_to_reclaim)
- goto out;
+ break;
if (sc->compaction_ready)
- goto out;
+ break;
/*
* If we're getting trouble reclaiming, start doing
@@ -2523,7 +2523,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
}
} while (--sc->priority >= 0);
-out:
delayacct_freepages_end();
if (sc->nr_reclaimed)
--
2.0.0
* [patch 2/3] mm: vmscan: remove all_unreclaimable() fix
From: Johannes Weiner @ 2014-07-14 13:20 UTC
To: Andrew Morton
Cc: Mel Gorman, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
As per Mel, use bool for reclaimability throughout and simplify the
reclaimability tracking in shrink_zones().
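A condensed standalone sketch (not the kernel code) of the pattern: the
accumulated page count was only ever tested against zero, so a bool that
latches on any progress carries the same information.

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	const unsigned long reclaimed_per_pass[] = { 0, 0, 3, 0 };
	bool reclaimable = false;
	unsigned int i;

	/* was: zone_reclaimed += delta; ... if (zone_reclaimed) */
	for (i = 0; i < 4; i++)
		if (reclaimed_per_pass[i])	/* any progress at all */
			reclaimable = true;

	printf("reclaimable: %s\n", reclaimable ? "yes" : "no");
	return 0;
}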
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/vmscan.c | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6dac1310e5e4..74a9e0ae09b0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2244,10 +2244,10 @@ static inline bool should_continue_reclaim(struct zone *zone,
}
}
-static unsigned long shrink_zone(struct zone *zone, struct scan_control *sc)
+static bool shrink_zone(struct zone *zone, struct scan_control *sc)
{
unsigned long nr_reclaimed, nr_scanned;
- unsigned long zone_reclaimed = 0;
+ bool reclaimable = false;
do {
struct mem_cgroup *root = sc->target_mem_cgroup;
@@ -2291,12 +2291,13 @@ static unsigned long shrink_zone(struct zone *zone, struct scan_control *sc)
sc->nr_scanned - nr_scanned,
sc->nr_reclaimed - nr_reclaimed);
- zone_reclaimed += sc->nr_reclaimed - nr_reclaimed;
+ if (sc->nr_reclaimed - nr_reclaimed)
+ reclaimable = true;
} while (should_continue_reclaim(zone, sc->nr_reclaimed - nr_reclaimed,
sc->nr_scanned - nr_scanned, sc));
- return zone_reclaimed;
+ return reclaimable;
}
/* Returns true if compaction should go ahead for a high-order request */
@@ -2346,7 +2347,7 @@ static inline bool compaction_ready(struct zone *zone, int order)
* If a zone is deemed to be full of pinned pages then just give it a light
* scan then give up on it.
*
- * Returns whether the zones overall are reclaimable or not.
+ * Returns true if a zone was reclaimable.
*/
static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
{
@@ -2361,7 +2362,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
.gfp_mask = sc->gfp_mask,
};
enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
- bool all_unreclaimable = true;
+ bool reclaimable = false;
/*
* If the number of buffer_heads in the machine exceeds the maximum
@@ -2376,8 +2377,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
for_each_zone_zonelist_nodemask(zone, z, zonelist,
gfp_zone(sc->gfp_mask), sc->nodemask) {
- unsigned long zone_reclaimed = 0;
-
if (!populated_zone(zone))
continue;
/*
@@ -2424,15 +2423,17 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
&nr_soft_scanned);
sc->nr_reclaimed += nr_soft_reclaimed;
sc->nr_scanned += nr_soft_scanned;
- zone_reclaimed += nr_soft_reclaimed;
+ if (nr_soft_reclaimed)
+ reclaimable = true;
/* need some check for avoid more shrink_zone() */
}
- zone_reclaimed += shrink_zone(zone, sc);
+ if (shrink_zone(zone, sc))
+ reclaimable = true;
- if (zone_reclaimed ||
- (global_reclaim(sc) && zone_reclaimable(zone)))
- all_unreclaimable = false;
+ if (global_reclaim(sc) &&
+ !reclaimable && zone_reclaimable(zone))
+ reclaimable = true;
}
/*
@@ -2455,7 +2456,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
*/
sc->gfp_mask = orig_mask;
- return !all_unreclaimable;
+ return reclaimable;
}
/*
--
2.0.0
* [patch 3/3] mm: vmscan: clean up struct scan_control
From: Johannes Weiner @ 2014-07-14 13:20 UTC
To: Andrew Morton
Cc: Mel Gorman, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
Reorder the members by input and output, then turn the individual
integers for may_writepage, may_unmap, may_swap, compaction_ready,
hibernation_mode into flags that fit into a single integer.
Stack delta: +72/-296 (-224)

function                       old    new  delta
kswapd                         104    176    +72
try_to_free_pages               80     56    -24
try_to_free_mem_cgroup_pages    80     56    -24
shrink_all_memory               88     64    -24
reclaim_clean_pages_from_list  168    144    -24
mem_cgroup_shrink_node_zone    104     80    -24
__zone_reclaim                 176    152    -24
balance_pgdat                  152      -   -152

   text    data     bss     dec     hex filename
  38151    5641      16   43808    ab20 mm/vmscan.o.old
  38047    5641      16   43704    aab8 mm/vmscan.o
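The MAY_* names below are the ones the patch defines; the rest of this
standalone sketch is scaffolding to illustrate the set/test idiom that
replaces the per-member integers.

#include <stdio.h>

#define MAY_WRITEPAGE	0x1
#define MAY_UNMAP	0x2
#define MAY_SWAP	0x4

int main(void)
{
	unsigned int flags = MAY_UNMAP | MAY_SWAP;	/* initializer */
	int laptop_mode = 0;

	if (!laptop_mode)
		flags |= MAY_WRITEPAGE;			/* set a flag */

	if (!(flags & MAY_UNMAP))			/* test a flag */
		printf("mapped pages are skipped\n");
	else
		printf("mapped pages may be reclaimed\n");
	return 0;
}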
Suggested-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/vmscan.c | 158 ++++++++++++++++++++++++++++++------------------------------
1 file changed, 78 insertions(+), 80 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c28b8981e56a..73d8e69ff3eb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -58,36 +58,28 @@
#define CREATE_TRACE_POINTS
#include <trace/events/vmscan.h>
-struct scan_control {
- /* Incremented by the number of inactive pages that were scanned */
- unsigned long nr_scanned;
-
- /* Number of pages freed so far during a call to shrink_zones() */
- unsigned long nr_reclaimed;
-
- /* One of the zones is ready for compaction */
- int compaction_ready;
+/* Scan control flags */
+#define MAY_WRITEPAGE 0x1
+#define MAY_UNMAP 0x2
+#define MAY_SWAP 0x4
+#define MAY_SKIP_CONGESTION 0x8
+#define COMPACTION_READY 0x10
+struct scan_control {
/* How many pages shrink_list() should reclaim */
unsigned long nr_to_reclaim;
- unsigned long hibernation_mode;
-
/* This context's GFP mask */
gfp_t gfp_mask;
- int may_writepage;
-
- /* Can mapped pages be reclaimed? */
- int may_unmap;
-
- /* Can pages be swapped as part of reclaim? */
- int may_swap;
-
+ /* Allocation order */
int order;
- /* Scan (total_size >> priority) pages at once */
- int priority;
+ /*
+ * Nodemask of nodes allowed by the caller. If NULL, all nodes
+ * are scanned.
+ */
+ nodemask_t *nodemask;
/*
* The memory cgroup that hit its limit and as a result is the
@@ -95,11 +87,17 @@ struct scan_control {
*/
struct mem_cgroup *target_mem_cgroup;
- /*
- * Nodemask of nodes allowed by the caller. If NULL, all nodes
- * are scanned.
- */
- nodemask_t *nodemask;
+ /* Scan (total_size >> priority) pages at once */
+ int priority;
+
+ /* Scan control flags; see above */
+ unsigned int flags;
+
+ /* Incremented by the number of inactive pages that were scanned */
+ unsigned long nr_scanned;
+
+ /* Number of pages freed so far during a call to shrink_zones() */
+ unsigned long nr_reclaimed;
};
#define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
@@ -840,7 +838,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
if (unlikely(!page_evictable(page)))
goto cull_mlocked;
- if (!sc->may_unmap && page_mapped(page))
+ if (!(sc->flags & MAY_UNMAP) && page_mapped(page))
goto keep_locked;
/* Double the slab pressure for mapped and swapcache pages */
@@ -1014,7 +1012,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
goto keep_locked;
if (!may_enter_fs)
goto keep_locked;
- if (!sc->may_writepage)
+ if (!(sc->flags & MAY_WRITEPAGE))
goto keep_locked;
/* Page is dirty, try to write it out here */
@@ -1146,7 +1144,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
.priority = DEF_PRIORITY,
- .may_unmap = 1,
+ .flags = MAY_UNMAP,
};
unsigned long ret, dummy1, dummy2, dummy3, dummy4, dummy5;
struct page *page, *next;
@@ -1489,9 +1487,9 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
lru_add_drain();
- if (!sc->may_unmap)
+ if (!(sc->flags & MAY_UNMAP))
isolate_mode |= ISOLATE_UNMAPPED;
- if (!sc->may_writepage)
+ if (!(sc->flags & MAY_WRITEPAGE))
isolate_mode |= ISOLATE_CLEAN;
spin_lock_irq(&zone->lru_lock);
@@ -1593,7 +1591,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
* is congested. Allow kswapd to continue until it starts encountering
* unqueued dirty pages or cycling through the LRU too quickly.
*/
- if (!sc->hibernation_mode && !current_is_kswapd() &&
+ if (!(sc->flags & MAY_SKIP_CONGESTION) && !current_is_kswapd() &&
current_may_throttle())
wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
@@ -1683,9 +1681,9 @@ static void shrink_active_list(unsigned long nr_to_scan,
lru_add_drain();
- if (!sc->may_unmap)
+ if (!(sc->flags & MAY_UNMAP))
isolate_mode |= ISOLATE_UNMAPPED;
- if (!sc->may_writepage)
+ if (!(sc->flags & MAY_WRITEPAGE))
isolate_mode |= ISOLATE_CLEAN;
spin_lock_irq(&zone->lru_lock);
@@ -1897,7 +1895,7 @@ static void get_scan_count(struct lruvec *lruvec, int swappiness,
force_scan = true;
/* If we have no swap space, do not bother scanning anon pages. */
- if (!sc->may_swap || (get_nr_swap_pages() <= 0)) {
+ if (!(sc->flags & MAY_SWAP) || (get_nr_swap_pages() <= 0)) {
scan_balance = SCAN_FILE;
goto out;
}
@@ -2406,7 +2404,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
sc->order > PAGE_ALLOC_COSTLY_ORDER &&
zonelist_zone_idx(z) <= requested_highidx &&
compaction_ready(zone, sc->order)) {
- sc->compaction_ready = true;
+ sc->flags |= COMPACTION_READY;
continue;
}
@@ -2496,7 +2494,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
if (sc->nr_reclaimed >= sc->nr_to_reclaim)
break;
- if (sc->compaction_ready)
+ if (sc->flags & COMPACTION_READY)
break;
/*
@@ -2504,7 +2502,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
* writepage even in laptop mode.
*/
if (sc->priority < DEF_PRIORITY - 2)
- sc->may_writepage = 1;
+ sc->flags |= MAY_WRITEPAGE;
/*
* Try to write back as many pages as we just scanned. This
@@ -2517,7 +2515,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
if (total_scanned > writeback_threshold) {
wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
WB_REASON_TRY_TO_FREE_PAGES);
- sc->may_writepage = 1;
+ sc->flags |= MAY_WRITEPAGE;
}
} while (--sc->priority >= 0);
@@ -2527,7 +2525,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
return sc->nr_reclaimed;
/* Aborted reclaim to try compaction? don't OOM, then */
- if (sc->compaction_ready)
+ if (sc->flags & COMPACTION_READY)
return 1;
/* Any of the zones still reclaimable? Don't OOM. */
@@ -2668,17 +2666,17 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
{
unsigned long nr_reclaimed;
struct scan_control sc = {
- .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
- .may_writepage = !laptop_mode,
.nr_to_reclaim = SWAP_CLUSTER_MAX,
- .may_unmap = 1,
- .may_swap = 1,
+ .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
.order = order,
- .priority = DEF_PRIORITY,
- .target_mem_cgroup = NULL,
.nodemask = nodemask,
+ .priority = DEF_PRIORITY,
+ .flags = MAY_UNMAP | MAY_SWAP,
};
+ if (!laptop_mode)
+ sc.flags |= MAY_WRITEPAGE;
+
/*
* Do not enter reclaim if fatal signal was delivered while throttled.
* 1 is returned so that the page allocator does not OOM kill at this
@@ -2688,7 +2686,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
return 1;
trace_mm_vmscan_direct_reclaim_begin(order,
- sc.may_writepage,
+ sc.flags & MAY_WRITEPAGE,
gfp_mask);
nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
@@ -2706,23 +2704,22 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
unsigned long *nr_scanned)
{
struct scan_control sc = {
- .nr_scanned = 0,
.nr_to_reclaim = SWAP_CLUSTER_MAX,
- .may_writepage = !laptop_mode,
- .may_unmap = 1,
- .may_swap = !noswap,
- .order = 0,
- .priority = 0,
+ .gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
+ (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
.target_mem_cgroup = memcg,
+ .flags = MAY_UNMAP,
};
struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
int swappiness = mem_cgroup_swappiness(memcg);
- sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
- (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
+ if (!laptop_mode)
+ sc.flags |= MAY_WRITEPAGE;
+ if (!noswap)
+ sc.flags |= MAY_SWAP;
trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.order,
- sc.may_writepage,
+ sc.flags & MAY_WRITEPAGE,
sc.gfp_mask);
/*
@@ -2748,18 +2745,19 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
unsigned long nr_reclaimed;
int nid;
struct scan_control sc = {
- .may_writepage = !laptop_mode,
- .may_unmap = 1,
- .may_swap = !noswap,
.nr_to_reclaim = SWAP_CLUSTER_MAX,
- .order = 0,
- .priority = DEF_PRIORITY,
- .target_mem_cgroup = memcg,
- .nodemask = NULL, /* we don't care the placement */
.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
- (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
+ (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
+ .target_mem_cgroup = memcg,
+ .priority = DEF_PRIORITY,
+ .flags = MAY_UNMAP,
};
+ if (!laptop_mode)
+ sc.flags |= MAY_WRITEPAGE;
+ if (!noswap)
+ sc.flags |= MAY_SWAP;
+
/*
* Unlike direct reclaim via alloc_pages(), memcg's reclaim doesn't
* take care of from where we get pages. So the node where we start the
@@ -2770,7 +2768,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
zonelist = NODE_DATA(nid)->node_zonelists;
trace_mm_vmscan_memcg_reclaim_begin(0,
- sc.may_writepage,
+ sc.flags & MAY_WRITEPAGE,
sc.gfp_mask);
nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
@@ -3015,15 +3013,15 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
unsigned long nr_soft_scanned;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
- .priority = DEF_PRIORITY,
- .may_unmap = 1,
- .may_swap = 1,
- .may_writepage = !laptop_mode,
.order = order,
- .target_mem_cgroup = NULL,
+ .priority = DEF_PRIORITY,
+ .flags = MAY_UNMAP | MAY_SWAP,
};
count_vm_event(PAGEOUTRUN);
+ if (!laptop_mode)
+ sc.flags |= MAY_WRITEPAGE;
+
do {
unsigned long lru_pages = 0;
unsigned long nr_attempted = 0;
@@ -3104,7 +3102,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
* even in laptop mode.
*/
if (sc.priority < DEF_PRIORITY - 2)
- sc.may_writepage = 1;
+ sc.flags |= MAY_WRITEPAGE;
/*
* Now scan the zone in the dma->highmem direction, stopping
@@ -3401,14 +3399,11 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
{
struct reclaim_state reclaim_state;
struct scan_control sc = {
- .gfp_mask = GFP_HIGHUSER_MOVABLE,
- .may_swap = 1,
- .may_unmap = 1,
- .may_writepage = 1,
.nr_to_reclaim = nr_to_reclaim,
- .hibernation_mode = 1,
- .order = 0,
+ .gfp_mask = GFP_HIGHUSER_MOVABLE,
.priority = DEF_PRIORITY,
+ .flags = MAY_WRITEPAGE | MAY_UNMAP | MAY_SWAP |
+ MAY_SKIP_CONGESTION,
};
struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
struct task_struct *p = current;
@@ -3588,19 +3583,22 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
struct task_struct *p = current;
struct reclaim_state reclaim_state;
struct scan_control sc = {
- .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
- .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
- .may_swap = 1,
.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
.gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
.order = order,
.priority = ZONE_RECLAIM_PRIORITY,
+ .flags = MAY_SWAP,
};
struct shrink_control shrink = {
.gfp_mask = sc.gfp_mask,
};
unsigned long nr_slab_pages0, nr_slab_pages1;
+ if (zone_reclaim_mode & RECLAIM_WRITE)
+ sc.flags |= MAY_WRITEPAGE;
+ if (zone_reclaim_mode & RECLAIM_SWAP)
+ sc.flags |= MAY_UNMAP;
+
cond_resched();
/*
* We need to be able to allocate from the reserves for RECLAIM_SWAP
--
2.0.0
* Re: [patch 1/3] mm: vmscan: rework compaction-ready signaling in direct reclaim fix
From: Rik van Riel @ 2014-07-14 14:09 UTC
To: Johannes Weiner, Andrew Morton
Cc: Mel Gorman, Michal Hocko, Minchan Kim, linux-mm, linux-kernel
On 07/14/2014 09:20 AM, Johannes Weiner wrote:
> As per Mel, replace out label with breaks from the loop.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed
* Re: [patch 2/3] mm: vmscan: remove all_unreclaimable() fix
From: Rik van Riel @ 2014-07-14 14:10 UTC
To: Johannes Weiner, Andrew Morton
Cc: Mel Gorman, Michal Hocko, Minchan Kim, linux-mm, linux-kernel
On 07/14/2014 09:20 AM, Johannes Weiner wrote:
> As per Mel, use bool for reclaimability throughout and simplify
> the reclaimability tracking in shrink_zones().
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed
* Re: [patch 3/3] mm: vmscan: clean up struct scan_control
From: Hugh Dickins @ 2014-07-14 19:46 UTC
To: Johannes Weiner
Cc: Andrew Morton, Mel Gorman, Michal Hocko, Minchan Kim,
Rik van Riel, linux-mm, linux-kernel
On Mon, 14 Jul 2014, Johannes Weiner wrote:
> Reorder the members by input and output, then turn the individual
> integers for may_writepage, may_unmap, may_swap, compaction_ready,
> hibernation_mode into flags that fit into a single integer.
>
> Stack delta: +72/-296 (-224)
>
> function                       old    new  delta
> kswapd                         104    176    +72
> try_to_free_pages               80     56    -24
> try_to_free_mem_cgroup_pages    80     56    -24
> shrink_all_memory               88     64    -24
> reclaim_clean_pages_from_list  168    144    -24
> mem_cgroup_shrink_node_zone    104     80    -24
> __zone_reclaim                 176    152    -24
> balance_pgdat                  152      -   -152
>
>    text    data     bss     dec     hex filename
>   38151    5641      16   43808    ab20 mm/vmscan.o.old
>   38047    5641      16   43704    aab8 mm/vmscan.o
>
> Suggested-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
> mm/vmscan.c | 158 ++++++++++++++++++++++++++++++------------------------------
> 1 file changed, 78 insertions(+), 80 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c28b8981e56a..73d8e69ff3eb 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -58,36 +58,28 @@
> #define CREATE_TRACE_POINTS
> #include <trace/events/vmscan.h>
>
> -struct scan_control {
> - /* Incremented by the number of inactive pages that were scanned */
> - unsigned long nr_scanned;
> -
> - /* Number of pages freed so far during a call to shrink_zones() */
> - unsigned long nr_reclaimed;
> -
> - /* One of the zones is ready for compaction */
> - int compaction_ready;
> +/* Scan control flags */
> +#define MAY_WRITEPAGE 0x1
> +#define MAY_UNMAP 0x2
> +#define MAY_SWAP 0x4
> +#define MAY_SKIP_CONGESTION 0x8
> +#define COMPACTION_READY 0x10
>
> +struct scan_control {
> /* How many pages shrink_list() should reclaim */
> unsigned long nr_to_reclaim;
>
> - unsigned long hibernation_mode;
> -
> /* This context's GFP mask */
> gfp_t gfp_mask;
>
> - int may_writepage;
> -
> - /* Can mapped pages be reclaimed? */
> - int may_unmap;
> -
> - /* Can pages be swapped as part of reclaim? */
> - int may_swap;
> -
> + /* Allocation order */
> int order;
>
> - /* Scan (total_size >> priority) pages at once */
> - int priority;
> + /*
> + * Nodemask of nodes allowed by the caller. If NULL, all nodes
> + * are scanned.
> + */
> + nodemask_t *nodemask;
>
> /*
> * The memory cgroup that hit its limit and as a result is the
> @@ -95,11 +87,17 @@ struct scan_control {
> */
> struct mem_cgroup *target_mem_cgroup;
>
> - /*
> - * Nodemask of nodes allowed by the caller. If NULL, all nodes
> - * are scanned.
> - */
> - nodemask_t *nodemask;
> + /* Scan (total_size >> priority) pages at once */
> + int priority;
> +
> + /* Scan control flags; see above */
> + unsigned int flags;
This seems to result in a fair amount of unnecessary churn: why not
just put may_writepage etc. into an unsigned int bitfield? Then you
get the saving without changing all the rest of the code.
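A sketch of that bitfield alternative (illustrative only, not a tested
patch): callers keep writing sc->may_writepage = 1 and testing
!sc->may_unmap, while the five members still collapse into one word.

typedef unsigned int gfp_t;	/* stand-in so the sketch is self-contained */

struct scan_control {
	unsigned long nr_to_reclaim;	/* how many pages to reclaim */
	gfp_t gfp_mask;			/* this context's GFP mask */
	int order;			/* allocation order */
	int priority;			/* scan granularity */
	unsigned int may_writepage:1;
	unsigned int may_unmap:1;
	unsigned int may_swap:1;
	unsigned int hibernation_mode:1;
	unsigned int compaction_ready:1;
	unsigned long nr_scanned;	/* pages scanned so far */
	unsigned long nr_reclaimed;	/* pages freed so far */
};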
Hugh
> [remainder of the patch quoted unchanged]
> - sc->may_writepage = 1;
> + sc->flags |= MAY_WRITEPAGE;
> }
> } while (--sc->priority >= 0);
>
> @@ -2527,7 +2525,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> return sc->nr_reclaimed;
>
> /* Aborted reclaim to try compaction? don't OOM, then */
> - if (sc->compaction_ready)
> + if (sc->flags & COMPACTION_READY)
> return 1;
>
> /* Any of the zones still reclaimable? Don't OOM. */
> @@ -2668,17 +2666,17 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> {
> unsigned long nr_reclaimed;
> struct scan_control sc = {
> - .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
> - .may_writepage = !laptop_mode,
> .nr_to_reclaim = SWAP_CLUSTER_MAX,
> - .may_unmap = 1,
> - .may_swap = 1,
> + .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
> .order = order,
> - .priority = DEF_PRIORITY,
> - .target_mem_cgroup = NULL,
> .nodemask = nodemask,
> + .priority = DEF_PRIORITY,
> + .flags = MAY_UNMAP | MAY_SWAP,
> };
>
> + if (!laptop_mode)
> + sc.flags |= MAY_WRITEPAGE;
> +
> /*
> * Do not enter reclaim if fatal signal was delivered while throttled.
> * 1 is returned so that the page allocator does not OOM kill at this
> @@ -2688,7 +2686,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> return 1;
>
> trace_mm_vmscan_direct_reclaim_begin(order,
> - sc.may_writepage,
> + sc.flags & MAY_WRITEPAGE,
> gfp_mask);
>
> nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
> @@ -2706,23 +2704,22 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
> unsigned long *nr_scanned)
> {
> struct scan_control sc = {
> - .nr_scanned = 0,
> .nr_to_reclaim = SWAP_CLUSTER_MAX,
> - .may_writepage = !laptop_mode,
> - .may_unmap = 1,
> - .may_swap = !noswap,
> - .order = 0,
> - .priority = 0,
> + .gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
> + (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
> .target_mem_cgroup = memcg,
> + .flags = MAY_UNMAP,
> };
> struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
> int swappiness = mem_cgroup_swappiness(memcg);
>
> - sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
> - (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
> + if (!laptop_mode)
> + sc.flags |= MAY_WRITEPAGE;
> + if (!noswap)
> + sc.flags |= MAY_SWAP;
>
> trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.order,
> - sc.may_writepage,
> + sc.flags & MAY_WRITEPAGE,
> sc.gfp_mask);
>
> /*
> @@ -2748,18 +2745,19 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> unsigned long nr_reclaimed;
> int nid;
> struct scan_control sc = {
> - .may_writepage = !laptop_mode,
> - .may_unmap = 1,
> - .may_swap = !noswap,
> .nr_to_reclaim = SWAP_CLUSTER_MAX,
> - .order = 0,
> - .priority = DEF_PRIORITY,
> - .target_mem_cgroup = memcg,
> - .nodemask = NULL, /* we don't care the placement */
> .gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
> - (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
> + (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
> + .target_mem_cgroup = memcg,
> + .priority = DEF_PRIORITY,
> + .flags = MAY_UNMAP,
> };
>
> + if (!laptop_mode)
> + sc.flags |= MAY_WRITEPAGE;
> + if (!noswap)
> + sc.flags |= MAY_SWAP;
> +
> /*
> * Unlike direct reclaim via alloc_pages(), memcg's reclaim doesn't
> * take care of from where we get pages. So the node where we start the
> @@ -2770,7 +2768,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> zonelist = NODE_DATA(nid)->node_zonelists;
>
> trace_mm_vmscan_memcg_reclaim_begin(0,
> - sc.may_writepage,
> + sc.flags & MAY_WRITEPAGE,
> sc.gfp_mask);
>
> nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
> @@ -3015,15 +3013,15 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> unsigned long nr_soft_scanned;
> struct scan_control sc = {
> .gfp_mask = GFP_KERNEL,
> - .priority = DEF_PRIORITY,
> - .may_unmap = 1,
> - .may_swap = 1,
> - .may_writepage = !laptop_mode,
> .order = order,
> - .target_mem_cgroup = NULL,
> + .priority = DEF_PRIORITY,
> + .flags = MAY_UNMAP | MAY_SWAP,
> };
> count_vm_event(PAGEOUTRUN);
>
> + if (!laptop_mode)
> + sc.flags |= MAY_WRITEPAGE;
> +
> do {
> unsigned long lru_pages = 0;
> unsigned long nr_attempted = 0;
> @@ -3104,7 +3102,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> * even in laptop mode.
> */
> if (sc.priority < DEF_PRIORITY - 2)
> - sc.may_writepage = 1;
> + sc.flags |= MAY_WRITEPAGE;
>
> /*
> * Now scan the zone in the dma->highmem direction, stopping
> @@ -3401,14 +3399,11 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
> {
> struct reclaim_state reclaim_state;
> struct scan_control sc = {
> - .gfp_mask = GFP_HIGHUSER_MOVABLE,
> - .may_swap = 1,
> - .may_unmap = 1,
> - .may_writepage = 1,
> .nr_to_reclaim = nr_to_reclaim,
> - .hibernation_mode = 1,
> - .order = 0,
> + .gfp_mask = GFP_HIGHUSER_MOVABLE,
> .priority = DEF_PRIORITY,
> + .flags = MAY_WRITEPAGE | MAY_UNMAP | MAY_SWAP |
> + MAY_SKIP_CONGESTION,
> };
> struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> struct task_struct *p = current;
> @@ -3588,19 +3583,22 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> struct task_struct *p = current;
> struct reclaim_state reclaim_state;
> struct scan_control sc = {
> - .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
> - .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> - .may_swap = 1,
> .nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
> .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
> .order = order,
> .priority = ZONE_RECLAIM_PRIORITY,
> + .flags = MAY_SWAP,
> };
> struct shrink_control shrink = {
> .gfp_mask = sc.gfp_mask,
> };
> unsigned long nr_slab_pages0, nr_slab_pages1;
>
> + if (zone_reclaim_mode & RECLAIM_WRITE)
> + sc.flags |= MAY_WRITEPAGE;
> + if (zone_reclaim_mode & RECLAIM_SWAP)
> + sc.flags |= MAY_UNMAP;
> +
> cond_resched();
> /*
> * We need to be able to allocate from the reserves for RECLAIM_SWAP
> --
> 2.0.0
^ permalink raw reply [flat|nested] 30+ messages in thread
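To make the suggestion concrete, a minimal sketch (illustrative only, not
code from this thread; page_mapped() below is a stand-in for the kernel
helper): one-bit bitfield members shrink the struct just as a flags word
does, while every existing test and assignment keeps compiling unchanged.

#include <stdbool.h>
#include <stdio.h>

struct scan_control {
        unsigned int may_writepage:1;
        unsigned int may_unmap:1;       /* packs into the same word */
        unsigned int may_swap:1;
};

static bool page_mapped(void)           /* stand-in for the real test */
{
        return true;
}

int main(void)
{
        struct scan_control sc = { .may_swap = 1 };

        /* the call sites keep their original shape: */
        if (!sc.may_unmap && page_mapped())
                puts("keep_locked");
        sc.may_writepage = 1;   /* not: sc.flags |= MAY_WRITEPAGE */
        return 0;
}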
* Re: [patch 3/3] mm: vmscan: clean up struct scan_control
2014-07-14 13:20 ` Johannes Weiner
@ 2014-07-14 19:56 ` Andrew Morton
-1 siblings, 0 replies; 30+ messages in thread
From: Andrew Morton @ 2014-07-14 19:56 UTC (permalink / raw)
To: Johannes Weiner
Cc: Mel Gorman, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
On Mon, 14 Jul 2014 09:20:49 -0400 Johannes Weiner <hannes@cmpxchg.org> wrote:
> Reorder the members by input and output, then turn the individual
> integers for may_writepage, may_unmap, may_swap, compaction_ready,
> hibernation_mode into flags that fit into a single integer.
bitfields would be a pretty good fit here. I usually don't like them
because of locking concerns with the RMWs, but scan_control is never
accessed from another thread.
> - if (!sc->may_unmap && page_mapped(page))
> + if (!(sc->flags & MAY_UNMAP) && page_mapped(page))
Then edits such as this are unneeded.
^ permalink raw reply [flat|nested] 30+ messages in thread
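The RMW caveat spelled out: a store to one bitfield member compiles to a
read-modify-write of the whole containing word, so unsynchronized writers
to *different* members of the same word can lose an update. A hypothetical
sketch (names invented, not kernel code):

struct flags {
        unsigned int a:1;
        unsigned int b:1;       /* shares a machine word with 'a' */
};

/*
 * CPU 1 doing f->a = 1 compiles to roughly:
 *         tmp = word; tmp |= 0x1; word = tmp;
 * CPU 2 doing f->b = 1 performs the same read-modify-write on the
 * same word, so one of the two stores can be lost unless a lock
 * covers both members.  A scan_control lives on one task's stack
 * and is never shared, which is why the concern does not apply.
 */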
* Re: [patch 2/3] mm: vmscan: remove all_unreclaimable() fix
2014-07-14 13:20 ` Johannes Weiner
@ 2014-07-16 9:40 ` Michal Hocko
-1 siblings, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2014-07-16 9:40 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, Mel Gorman, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
On Mon 14-07-14 09:20:48, Johannes Weiner wrote:
> As per Mel, use bool for reclaimability throughout and simplify the
> reclaimability tracking in shrink_zones().
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Looks good to me, and it fits better with my low/min limit patches,
which I will hopefully post soon.
> ---
> mm/vmscan.c | 29 +++++++++++++++--------------
> 1 file changed, 15 insertions(+), 14 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6dac1310e5e4..74a9e0ae09b0 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2244,10 +2244,10 @@ static inline bool should_continue_reclaim(struct zone *zone,
> }
> }
>
> -static unsigned long shrink_zone(struct zone *zone, struct scan_control *sc)
> +static bool shrink_zone(struct zone *zone, struct scan_control *sc)
> {
> unsigned long nr_reclaimed, nr_scanned;
> - unsigned long zone_reclaimed = 0;
> + bool reclaimable = false;
>
> do {
> struct mem_cgroup *root = sc->target_mem_cgroup;
> @@ -2291,12 +2291,13 @@ static unsigned long shrink_zone(struct zone *zone, struct scan_control *sc)
> sc->nr_scanned - nr_scanned,
> sc->nr_reclaimed - nr_reclaimed);
>
> - zone_reclaimed += sc->nr_reclaimed - nr_reclaimed;
> + if (sc->nr_reclaimed - nr_reclaimed)
> + reclaimable = true;
>
> } while (should_continue_reclaim(zone, sc->nr_reclaimed - nr_reclaimed,
> sc->nr_scanned - nr_scanned, sc));
>
> - return zone_reclaimed;
> + return reclaimable;
> }
>
> /* Returns true if compaction should go ahead for a high-order request */
> @@ -2346,7 +2347,7 @@ static inline bool compaction_ready(struct zone *zone, int order)
> * If a zone is deemed to be full of pinned pages then just give it a light
> * scan then give up on it.
> *
> - * Returns whether the zones overall are reclaimable or not.
> + * Returns true if a zone was reclaimable.
> */
> static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> {
> @@ -2361,7 +2362,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> .gfp_mask = sc->gfp_mask,
> };
> enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
> - bool all_unreclaimable = true;
> + bool reclaimable = false;
>
> /*
> * If the number of buffer_heads in the machine exceeds the maximum
> @@ -2376,8 +2377,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>
> for_each_zone_zonelist_nodemask(zone, z, zonelist,
> gfp_zone(sc->gfp_mask), sc->nodemask) {
> - unsigned long zone_reclaimed = 0;
> -
> if (!populated_zone(zone))
> continue;
> /*
> @@ -2424,15 +2423,17 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> &nr_soft_scanned);
> sc->nr_reclaimed += nr_soft_reclaimed;
> sc->nr_scanned += nr_soft_scanned;
> - zone_reclaimed += nr_soft_reclaimed;
> + if (nr_soft_reclaimed)
> + reclaimable = true;
> /* need some check for avoid more shrink_zone() */
> }
>
> - zone_reclaimed += shrink_zone(zone, sc);
> + if (shrink_zone(zone, sc))
> + reclaimable = true;
>
> - if (zone_reclaimed ||
> - (global_reclaim(sc) && zone_reclaimable(zone)))
> - all_unreclaimable = false;
> + if (global_reclaim(sc) &&
> + !reclaimable && zone_reclaimable(zone))
> + reclaimable = true;
> }
>
> /*
> @@ -2455,7 +2456,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> */
> sc->gfp_mask = orig_mask;
>
> - return !all_unreclaimable;
> + return reclaimable;
> }
>
> /*
> --
> 2.0.0
>
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 30+ messages in thread
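The shape of this fix, reduced to a standalone sketch (not the kernel code
itself): per-zone progress is OR-ed into a single bool instead of being
summed into a counter that was only ever compared against zero.

#include <stdbool.h>

/* was: unsigned long zone_reclaimed = 0; ...; return zone_reclaimed; */
static bool any_progress(const unsigned long *freed, int nzones)
{
        bool reclaimable = false;
        int i;

        for (i = 0; i < nzones; i++)
                if (freed[i])           /* any progress at all counts */
                        reclaimable = true;
        return reclaimable;
}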
* Re: [patch 3/3] mm: vmscan: clean up struct scan_control
2014-07-14 19:46 ` Hugh Dickins
@ 2014-07-17 13:26 ` Johannes Weiner
-1 siblings, 0 replies; 30+ messages in thread
From: Johannes Weiner @ 2014-07-17 13:26 UTC (permalink / raw)
To: Hugh Dickins
Cc: Andrew Morton, Mel Gorman, Michal Hocko, Minchan Kim,
Rik van Riel, linux-mm, linux-kernel
On Mon, Jul 14, 2014 at 12:46:21PM -0700, Hugh Dickins wrote:
> On Mon, 14 Jul 2014, Johannes Weiner wrote:
>
> > Reorder the members by input and output, then turn the individual
> > integers for may_writepage, may_unmap, may_swap, compaction_ready,
> > hibernation_mode into flags that fit into a single integer.
> >
> > Stack delta: +72/-296 -224
> >
> >                                old    new  delta
> > kswapd                         104    176    +72
> > try_to_free_pages               80     56    -24
> > try_to_free_mem_cgroup_pages    80     56    -24
> > shrink_all_memory               88     64    -24
> > reclaim_clean_pages_from_list  168    144    -24
> > mem_cgroup_shrink_node_zone    104     80    -24
> > __zone_reclaim                 176    152    -24
> > balance_pgdat                  152      -   -152
> >
> >    text    data     bss     dec     hex filename
> >   38151    5641      16   43808    ab20 mm/vmscan.o.old
> >   38047    5641      16   43704    aab8 mm/vmscan.o
> >
> > Suggested-by: Mel Gorman <mgorman@suse.de>
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > ---
> > mm/vmscan.c | 158 ++++++++++++++++++++++++++++++------------------------------
> > 1 file changed, 78 insertions(+), 80 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index c28b8981e56a..73d8e69ff3eb 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -58,36 +58,28 @@
> > #define CREATE_TRACE_POINTS
> > #include <trace/events/vmscan.h>
> >
> > -struct scan_control {
> > - /* Incremented by the number of inactive pages that were scanned */
> > - unsigned long nr_scanned;
> > -
> > - /* Number of pages freed so far during a call to shrink_zones() */
> > - unsigned long nr_reclaimed;
> > -
> > - /* One of the zones is ready for compaction */
> > - int compaction_ready;
> > +/* Scan control flags */
> > +#define MAY_WRITEPAGE 0x1
> > +#define MAY_UNMAP 0x2
> > +#define MAY_SWAP 0x4
> > +#define MAY_SKIP_CONGESTION 0x8
> > +#define COMPACTION_READY 0x10
> >
> > +struct scan_control {
> > /* How many pages shrink_list() should reclaim */
> > unsigned long nr_to_reclaim;
> >
> > - unsigned long hibernation_mode;
> > -
> > /* This context's GFP mask */
> > gfp_t gfp_mask;
> >
> > - int may_writepage;
> > -
> > - /* Can mapped pages be reclaimed? */
> > - int may_unmap;
> > -
> > - /* Can pages be swapped as part of reclaim? */
> > - int may_swap;
> > -
> > + /* Allocation order */
> > int order;
> >
> > - /* Scan (total_size >> priority) pages at once */
> > - int priority;
> > + /*
> > + * Nodemask of nodes allowed by the caller. If NULL, all nodes
> > + * are scanned.
> > + */
> > + nodemask_t *nodemask;
> >
> > /*
> > * The memory cgroup that hit its limit and as a result is the
> > @@ -95,11 +87,17 @@ struct scan_control {
> > */
> > struct mem_cgroup *target_mem_cgroup;
> >
> > - /*
> > - * Nodemask of nodes allowed by the caller. If NULL, all nodes
> > - * are scanned.
> > - */
> > - nodemask_t *nodemask;
> > + /* Scan (total_size >> priority) pages at once */
> > + int priority;
> > +
> > + /* Scan control flags; see above */
> > + unsigned int flags;
>
> This seems to result in a fair amount of unnecessary churn:
> why not just put may_writepage etc. into an unsigned int bitfield?
> Then you get the saving without changing all the rest of the code.
Good point, I didn't even think of that. Thanks!
Andrew, could you please replace this patch with the following?
---
From bbe8c1645c77297a96ecd5d64d659ddcd6984d03 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Mon, 14 Jul 2014 08:51:54 -0400
Subject: [patch] mm: vmscan: clean up struct scan_control
Reorder the members by input and output, then turn the individual
integers for may_writepage, may_unmap, may_swap, compaction_ready,
hibernation_mode into bit fields to save stack space:
+72/-296 -224

                               old    new  delta
kswapd                         104    176    +72
try_to_free_pages               80     56    -24
try_to_free_mem_cgroup_pages    80     56    -24
shrink_all_memory               88     64    -24
reclaim_clean_pages_from_list  168    144    -24
mem_cgroup_shrink_node_zone    104     80    -24
__zone_reclaim                 176    152    -24
balance_pgdat                  152      -   -152
Suggested-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/vmscan.c | 99 ++++++++++++++++++++++++++++---------------------------------
1 file changed, 46 insertions(+), 53 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c28b8981e56a..81dd858b9d17 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -59,35 +59,20 @@
#include <trace/events/vmscan.h>
struct scan_control {
- /* Incremented by the number of inactive pages that were scanned */
- unsigned long nr_scanned;
-
- /* Number of pages freed so far during a call to shrink_zones() */
- unsigned long nr_reclaimed;
-
- /* One of the zones is ready for compaction */
- int compaction_ready;
-
/* How many pages shrink_list() should reclaim */
unsigned long nr_to_reclaim;
- unsigned long hibernation_mode;
-
/* This context's GFP mask */
gfp_t gfp_mask;
- int may_writepage;
-
- /* Can mapped pages be reclaimed? */
- int may_unmap;
-
- /* Can pages be swapped as part of reclaim? */
- int may_swap;
-
+ /* Allocation order */
int order;
- /* Scan (total_size >> priority) pages at once */
- int priority;
+ /*
+ * Nodemask of nodes allowed by the caller. If NULL, all nodes
+ * are scanned.
+ */
+ nodemask_t *nodemask;
/*
* The memory cgroup that hit its limit and as a result is the
@@ -95,11 +80,27 @@ struct scan_control {
*/
struct mem_cgroup *target_mem_cgroup;
- /*
- * Nodemask of nodes allowed by the caller. If NULL, all nodes
- * are scanned.
- */
- nodemask_t *nodemask;
+ /* Scan (total_size >> priority) pages at once */
+ int priority;
+
+ unsigned int may_writepage:1;
+
+ /* Can mapped pages be reclaimed? */
+ unsigned int may_unmap:1;
+
+ /* Can pages be swapped as part of reclaim? */
+ unsigned int may_swap:1;
+
+ unsigned int hibernation_mode:1;
+
+ /* One of the zones is ready for compaction */
+ unsigned int compaction_ready:1;
+
+ /* Incremented by the number of inactive pages that were scanned */
+ unsigned long nr_scanned;
+
+ /* Number of pages freed so far during a call to shrink_zones() */
+ unsigned long nr_reclaimed;
};
#define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
@@ -2668,15 +2669,14 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
{
unsigned long nr_reclaimed;
struct scan_control sc = {
+ .nr_to_reclaim = SWAP_CLUSTER_MAX,
.gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
+ .order = order,
+ .nodemask = nodemask,
+ .priority = DEF_PRIORITY,
.may_writepage = !laptop_mode,
- .nr_to_reclaim = SWAP_CLUSTER_MAX,
.may_unmap = 1,
.may_swap = 1,
- .order = order,
- .priority = DEF_PRIORITY,
- .target_mem_cgroup = NULL,
- .nodemask = nodemask,
};
/*
@@ -2706,14 +2706,11 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
unsigned long *nr_scanned)
{
struct scan_control sc = {
- .nr_scanned = 0,
.nr_to_reclaim = SWAP_CLUSTER_MAX,
+ .target_mem_cgroup = memcg,
.may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = !noswap,
- .order = 0,
- .priority = 0,
- .target_mem_cgroup = memcg,
};
struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
int swappiness = mem_cgroup_swappiness(memcg);
@@ -2748,16 +2745,14 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
unsigned long nr_reclaimed;
int nid;
struct scan_control sc = {
- .may_writepage = !laptop_mode,
- .may_unmap = 1,
- .may_swap = !noswap,
.nr_to_reclaim = SWAP_CLUSTER_MAX,
- .order = 0,
- .priority = DEF_PRIORITY,
- .target_mem_cgroup = memcg,
- .nodemask = NULL, /* we don't care the placement */
.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
+ .target_mem_cgroup = memcg,
+ .priority = DEF_PRIORITY,
+ .may_writepage = !laptop_mode,
+ .may_unmap = 1,
+ .may_swap = !noswap,
};
/*
@@ -3015,12 +3010,11 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
unsigned long nr_soft_scanned;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
+ .order = order,
.priority = DEF_PRIORITY,
+ .may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = 1,
- .may_writepage = !laptop_mode,
- .order = order,
- .target_mem_cgroup = NULL,
};
count_vm_event(PAGEOUTRUN);
@@ -3401,14 +3395,13 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
{
struct reclaim_state reclaim_state;
struct scan_control sc = {
+ .nr_to_reclaim = nr_to_reclaim,
.gfp_mask = GFP_HIGHUSER_MOVABLE,
- .may_swap = 1,
- .may_unmap = 1,
+ .priority = DEF_PRIORITY,
.may_writepage = 1,
- .nr_to_reclaim = nr_to_reclaim,
+ .may_unmap = 1,
+ .may_swap = 1,
.hibernation_mode = 1,
- .order = 0,
- .priority = DEF_PRIORITY,
};
struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
struct task_struct *p = current;
@@ -3588,13 +3581,13 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
struct task_struct *p = current;
struct reclaim_state reclaim_state;
struct scan_control sc = {
- .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
- .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
- .may_swap = 1,
.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
.gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
.order = order,
.priority = ZONE_RECLAIM_PRIORITY,
+ .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
+ .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
+ .may_swap = 1,
};
struct shrink_control shrink = {
.gfp_mask = sc.gfp_mask,
--
2.0.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
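Where the stack savings in the changelog come from: the five one-bit
fields are declared adjacently, so they pack into a single unsigned int
where the old layout spent a full int (or unsigned long) on each. A
standalone demonstration (sizes assume a typical LP64 ABI):

#include <stdio.h>

struct before {                 /* old scan_control members */
        int may_writepage, may_unmap, may_swap, compaction_ready;
        unsigned long hibernation_mode;
};

struct after {                  /* new layout from the patch */
        unsigned int may_writepage:1;
        unsigned int may_unmap:1;
        unsigned int may_swap:1;
        unsigned int hibernation_mode:1;
        unsigned int compaction_ready:1;
};

int main(void)
{
        /* typically prints "24 4" */
        printf("%zu %zu\n", sizeof(struct before), sizeof(struct after));
        return 0;
}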
* Re: [patch 3/3] mm: vmscan: clean up struct scan_control
2014-07-17 13:26 ` Johannes Weiner
@ 2014-07-17 13:57 ` Michal Hocko
-1 siblings, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2014-07-17 13:57 UTC (permalink / raw)
To: Johannes Weiner
Cc: Hugh Dickins, Andrew Morton, Mel Gorman, Minchan Kim,
Rik van Riel, linux-mm, linux-kernel
On Thu 17-07-14 09:26:04, Johannes Weiner wrote:
> From bbe8c1645c77297a96ecd5d64d659ddcd6984d03 Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <hannes@cmpxchg.org>
> Date: Mon, 14 Jul 2014 08:51:54 -0400
> Subject: [patch] mm: vmscan: clean up struct scan_control
>
> Reorder the members by input and output, then turn the individual
> integers for may_writepage, may_unmap, may_swap, compaction_ready,
> hibernation_mode into bit fields to save stack space:
>
> +72/-296 -224
>
>                                old    new  delta
> kswapd                         104    176    +72
> try_to_free_pages               80     56    -24
> try_to_free_mem_cgroup_pages    80     56    -24
> shrink_all_memory               88     64    -24
> reclaim_clean_pages_from_list  168    144    -24
> mem_cgroup_shrink_node_zone    104     80    -24
> __zone_reclaim                 176    152    -24
> balance_pgdat                  152      -   -152
>
> Suggested-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Looks nice to me.
Acked-by: Michal Hocko <mhocko@suse.cz>
> ---
> mm/vmscan.c | 99 ++++++++++++++++++++++++++++---------------------------------
> 1 file changed, 46 insertions(+), 53 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c28b8981e56a..81dd858b9d17 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -59,35 +59,20 @@
> #include <trace/events/vmscan.h>
>
> struct scan_control {
> - /* Incremented by the number of inactive pages that were scanned */
> - unsigned long nr_scanned;
> -
> - /* Number of pages freed so far during a call to shrink_zones() */
> - unsigned long nr_reclaimed;
> -
> - /* One of the zones is ready for compaction */
> - int compaction_ready;
> -
> /* How many pages shrink_list() should reclaim */
> unsigned long nr_to_reclaim;
>
> - unsigned long hibernation_mode;
> -
> /* This context's GFP mask */
> gfp_t gfp_mask;
>
> - int may_writepage;
> -
> - /* Can mapped pages be reclaimed? */
> - int may_unmap;
> -
> - /* Can pages be swapped as part of reclaim? */
> - int may_swap;
> -
> + /* Allocation order */
> int order;
>
> - /* Scan (total_size >> priority) pages at once */
> - int priority;
> + /*
> + * Nodemask of nodes allowed by the caller. If NULL, all nodes
> + * are scanned.
> + */
> + nodemask_t *nodemask;
>
> /*
> * The memory cgroup that hit its limit and as a result is the
> @@ -95,11 +80,27 @@ struct scan_control {
> */
> struct mem_cgroup *target_mem_cgroup;
>
> - /*
> - * Nodemask of nodes allowed by the caller. If NULL, all nodes
> - * are scanned.
> - */
> - nodemask_t *nodemask;
> + /* Scan (total_size >> priority) pages at once */
> + int priority;
> +
> + unsigned int may_writepage:1;
> +
> + /* Can mapped pages be reclaimed? */
> + unsigned int may_unmap:1;
> +
> + /* Can pages be swapped as part of reclaim? */
> + unsigned int may_swap:1;
> +
> + unsigned int hibernation_mode:1;
> +
> + /* One of the zones is ready for compaction */
> + unsigned int compaction_ready:1;
> +
> + /* Incremented by the number of inactive pages that were scanned */
> + unsigned long nr_scanned;
> +
> + /* Number of pages freed so far during a call to shrink_zones() */
> + unsigned long nr_reclaimed;
> };
>
> #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
> @@ -2668,15 +2669,14 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
> {
> unsigned long nr_reclaimed;
> struct scan_control sc = {
> + .nr_to_reclaim = SWAP_CLUSTER_MAX,
> .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
> + .order = order,
> + .nodemask = nodemask,
> + .priority = DEF_PRIORITY,
> .may_writepage = !laptop_mode,
> - .nr_to_reclaim = SWAP_CLUSTER_MAX,
> .may_unmap = 1,
> .may_swap = 1,
> - .order = order,
> - .priority = DEF_PRIORITY,
> - .target_mem_cgroup = NULL,
> - .nodemask = nodemask,
> };
>
> /*
> @@ -2706,14 +2706,11 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
> unsigned long *nr_scanned)
> {
> struct scan_control sc = {
> - .nr_scanned = 0,
> .nr_to_reclaim = SWAP_CLUSTER_MAX,
> + .target_mem_cgroup = memcg,
> .may_writepage = !laptop_mode,
> .may_unmap = 1,
> .may_swap = !noswap,
> - .order = 0,
> - .priority = 0,
> - .target_mem_cgroup = memcg,
> };
> struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
> int swappiness = mem_cgroup_swappiness(memcg);
> @@ -2748,16 +2745,14 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
> unsigned long nr_reclaimed;
> int nid;
> struct scan_control sc = {
> - .may_writepage = !laptop_mode,
> - .may_unmap = 1,
> - .may_swap = !noswap,
> .nr_to_reclaim = SWAP_CLUSTER_MAX,
> - .order = 0,
> - .priority = DEF_PRIORITY,
> - .target_mem_cgroup = memcg,
> - .nodemask = NULL, /* we don't care the placement */
> .gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
> (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK),
> + .target_mem_cgroup = memcg,
> + .priority = DEF_PRIORITY,
> + .may_writepage = !laptop_mode,
> + .may_unmap = 1,
> + .may_swap = !noswap,
> };
>
> /*
> @@ -3015,12 +3010,11 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
> unsigned long nr_soft_scanned;
> struct scan_control sc = {
> .gfp_mask = GFP_KERNEL,
> + .order = order,
> .priority = DEF_PRIORITY,
> + .may_writepage = !laptop_mode,
> .may_unmap = 1,
> .may_swap = 1,
> - .may_writepage = !laptop_mode,
> - .order = order,
> - .target_mem_cgroup = NULL,
> };
> count_vm_event(PAGEOUTRUN);
>
> @@ -3401,14 +3395,13 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
> {
> struct reclaim_state reclaim_state;
> struct scan_control sc = {
> + .nr_to_reclaim = nr_to_reclaim,
> .gfp_mask = GFP_HIGHUSER_MOVABLE,
> - .may_swap = 1,
> - .may_unmap = 1,
> + .priority = DEF_PRIORITY,
> .may_writepage = 1,
> - .nr_to_reclaim = nr_to_reclaim,
> + .may_unmap = 1,
> + .may_swap = 1,
> .hibernation_mode = 1,
> - .order = 0,
> - .priority = DEF_PRIORITY,
> };
> struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
> struct task_struct *p = current;
> @@ -3588,13 +3581,13 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> struct task_struct *p = current;
> struct reclaim_state reclaim_state;
> struct scan_control sc = {
> - .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
> - .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> - .may_swap = 1,
> .nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
> .gfp_mask = (gfp_mask = memalloc_noio_flags(gfp_mask)),
> .order = order,
> .priority = ZONE_RECLAIM_PRIORITY,
> + .may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
> + .may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
> + .may_swap = 1,
> };
> struct shrink_control shrink = {
> .gfp_mask = sc.gfp_mask,
> --
> 2.0.0
>
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [patch 3/3] mm: vmscan: clean up struct scan_control
2014-07-17 13:57 ` Michal Hocko
@ 2014-07-17 23:00 ` Hugh Dickins
-1 siblings, 0 replies; 30+ messages in thread
From: Hugh Dickins @ 2014-07-17 23:00 UTC (permalink / raw)
To: Johannes Weiner
Cc: Michal Hocko, Hugh Dickins, Andrew Morton, Mel Gorman,
Minchan Kim, Rik van Riel, linux-mm, linux-kernel
On Thu, 17 Jul 2014, Michal Hocko wrote:
> On Thu 17-07-14 09:26:04, Johannes Weiner wrote:
> > From bbe8c1645c77297a96ecd5d64d659ddcd6984d03 Mon Sep 17 00:00:00 2001
> > From: Johannes Weiner <hannes@cmpxchg.org>
> > Date: Mon, 14 Jul 2014 08:51:54 -0400
> > Subject: [patch] mm: vmscan: clean up struct scan_control
> >
> > Reorder the members by input and output, then turn the individual
> > integers for may_writepage, may_unmap, may_swap, compaction_ready,
> > hibernation_mode into bit fields to save stack space:
> >
> > +72/-296 -224
> >
> >                                old    new  delta
> > kswapd                         104    176    +72
> > try_to_free_pages               80     56    -24
> > try_to_free_mem_cgroup_pages    80     56    -24
> > shrink_all_memory               88     64    -24
> > reclaim_clean_pages_from_list  168    144    -24
> > mem_cgroup_shrink_node_zone    104     80    -24
> > __zone_reclaim                 176    152    -24
> > balance_pgdat                  152      -   -152
> >
> > Suggested-by: Mel Gorman <mgorman@suse.de>
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Looks nice to me.
> Acked-by: Michal Hocko <mhocko@suse.cz>
Yes, looks nice to me too; and I agree that it was worthwhile to make
those initialization orders consistent, and drop the 0 initializations.
Acked-by: Hugh Dickins <hughd@google.com>
^ permalink raw reply [flat|nested] 30+ messages in thread
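Dropping the explicit zeroes is safe because C99 designated initializers
zero-initialize every member that is not named; a small self-contained
sketch (simplified struct, not the kernel definition):

#include <assert.h>
#include <stddef.h>

struct scan_control {
        unsigned long nr_to_reclaim;
        int order;
        void *target_mem_cgroup;
        int priority;
};

int main(void)
{
        struct scan_control sc = {
                .nr_to_reclaim = 32,
                .priority = 12,
                /* .order and .target_mem_cgroup are implicitly
                 * zero/NULL, so spelling out .order = 0 or
                 * .target_mem_cgroup = NULL added nothing */
        };

        assert(sc.order == 0 && sc.target_mem_cgroup == NULL);
        return 0;
}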
* Re: [patch 1/3] mm: vmscan: rework compaction-ready signaling in direct reclaim fix
2014-07-14 13:20 ` Johannes Weiner
@ 2014-07-18 12:50 ` Mel Gorman
-1 siblings, 0 replies; 30+ messages in thread
From: Mel Gorman @ 2014-07-18 12:50 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
On Mon, Jul 14, 2014 at 09:20:47AM -0400, Johannes Weiner wrote:
> As per Mel, replace out label with breaks from the loop.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@suse.de>
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [patch 2/3] mm: vmscan: remove all_unreclaimable() fix
2014-07-14 13:20 ` Johannes Weiner
@ 2014-07-18 12:51 ` Mel Gorman
-1 siblings, 0 replies; 30+ messages in thread
From: Mel Gorman @ 2014-07-18 12:51 UTC (permalink / raw)
To: Johannes Weiner
Cc: Andrew Morton, Michal Hocko, Minchan Kim, Rik van Riel, linux-mm,
linux-kernel
On Mon, Jul 14, 2014 at 09:20:48AM -0400, Johannes Weiner wrote:
> As per Mel, use bool for reclaimability throughout and simplify the
> reclaimability tracking in shrink_zones().
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@suse.de>
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [patch 3/3] mm: vmscan: clean up struct scan_control
2014-07-17 13:26 ` Johannes Weiner
@ 2014-07-18 12:53 ` Mel Gorman
-1 siblings, 0 replies; 30+ messages in thread
From: Mel Gorman @ 2014-07-18 12:53 UTC (permalink / raw)
To: Johannes Weiner
Cc: Hugh Dickins, Andrew Morton, Michal Hocko, Minchan Kim,
Rik van Riel, linux-mm, linux-kernel
On Thu, Jul 17, 2014 at 09:26:04AM -0400, Johannes Weiner wrote:
> <SNIP>
>
> Andrew, could you please replace this patch with the following?
>
> ---
> From bbe8c1645c77297a96ecd5d64d659ddcd6984d03 Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <hannes@cmpxchg.org>
> Date: Mon, 14 Jul 2014 08:51:54 -0400
> Subject: [patch] mm: vmscan: clean up struct scan_control
>
> Reorder the members by input and output, then turn the individual
> integers for may_writepage, may_unmap, may_swap, compaction_ready,
> hibernation_mode into bit fields to save stack space:
>
> +72/-296 -224
>
>                                old    new  delta
> kswapd                         104    176    +72
> try_to_free_pages               80     56    -24
> try_to_free_mem_cgroup_pages    80     56    -24
> shrink_all_memory               88     64    -24
> reclaim_clean_pages_from_list  168    144    -24
> mem_cgroup_shrink_node_zone    104     80    -24
> __zone_reclaim                 176    152    -24
> balance_pgdat                  152      -   -152
>
> Suggested-by: Mel Gorman <mgorman@suse.de>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@suse.de>
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 30+ messages in thread
end of thread, other threads:[~2014-07-18 12:54 UTC | newest]

Thread overview: 30+ messages
2014-07-14 13:20 [patch 0/3] mm: vmscan: followup fixes to cleanups in -mm Johannes Weiner
2014-07-14 13:20 ` [patch 1/3] mm: vmscan: rework compaction-ready signaling in direct reclaim fix Johannes Weiner
2014-07-14 14:09   ` Rik van Riel
2014-07-18 12:50   ` Mel Gorman
2014-07-14 13:20 ` [patch 2/3] mm: vmscan: remove all_unreclaimable() fix Johannes Weiner
2014-07-14 14:10   ` Rik van Riel
2014-07-16  9:40   ` Michal Hocko
2014-07-18 12:51   ` Mel Gorman
2014-07-14 13:20 ` [patch 3/3] mm: vmscan: clean up struct scan_control Johannes Weiner
2014-07-14 19:46   ` Hugh Dickins
2014-07-17 13:26     ` Johannes Weiner
2014-07-17 13:57       ` Michal Hocko
2014-07-17 23:00         ` Hugh Dickins
2014-07-18 12:53       ` Mel Gorman
2014-07-14 19:56   ` Andrew Morton