From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Mel Gorman <mgorman@techsingularity.net>
Subject: [PATCH 05/28] mm, page_alloc: Inline the fast path of the zonelist iterator
Date: Fri, 15 Apr 2016 09:58:57 +0100	[thread overview]
Message-ID: <1460710760-32601-6-git-send-email-mgorman@techsingularity.net> (raw)
In-Reply-To: <1460710760-32601-1-git-send-email-mgorman@techsingularity.net>

The page allocator iterates through a zonelist for zones that match
the addressing limitations and nodemask of the caller, but many allocations
are not restricted at all. Despite this, there is always function call
overhead, which builds up.
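
For context, zonelist walks go through this iterator on every step, so the
call cost is paid once per zone examined. A rough sketch of the consuming
macro, paraphrased from include/linux/mmzone.h around this kernel version
(the exact form may differ slightly):

	/*
	 * Every step calls next_zones_zonelist(); this patch turns the
	 * common case into an inline check instead of a function call.
	 */
	#define for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, nodemask) \
		for (z = first_zones_zonelist(zlist, highidx, nodemask, &zone);	\
			zone;							\
			z = next_zones_zonelist(++z, highidx, nodemask),	\
				zone = zonelist_zone(z))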

This patch inlines the optimistic basic case and only calls the
iterator function for the complex case. A complication was that
cpuset_current_mems_allowed is used in the fast path as the allowed nodemask
even though all nodes are allowed on most systems. The patch handles this
by only considering cpuset_current_mems_allowed if a cpuset exists. As well
as being faster in the fast path, this removes some junk from the slow path.

The performance difference on a page allocator microbenchmark is:

                                           4.6.0-rc2                  4.6.0-rc2
                                    statinline-v1r20              optiter-v1r20
Min      alloc-odr0-1               412.00 (  0.00%)           382.00 (  7.28%)
Min      alloc-odr0-2               301.00 (  0.00%)           282.00 (  6.31%)
Min      alloc-odr0-4               247.00 (  0.00%)           233.00 (  5.67%)
Min      alloc-odr0-8               215.00 (  0.00%)           203.00 (  5.58%)
Min      alloc-odr0-16              199.00 (  0.00%)           188.00 (  5.53%)
Min      alloc-odr0-32              191.00 (  0.00%)           182.00 (  4.71%)
Min      alloc-odr0-64              187.00 (  0.00%)           177.00 (  5.35%)
Min      alloc-odr0-128             185.00 (  0.00%)           175.00 (  5.41%)
Min      alloc-odr0-256             193.00 (  0.00%)           184.00 (  4.66%)
Min      alloc-odr0-512             207.00 (  0.00%)           197.00 (  4.83%)
Min      alloc-odr0-1024            213.00 (  0.00%)           203.00 (  4.69%)
Min      alloc-odr0-2048            220.00 (  0.00%)           209.00 (  5.00%)
Min      alloc-odr0-4096            226.00 (  0.00%)           214.00 (  5.31%)
Min      alloc-odr0-8192            229.00 (  0.00%)           218.00 (  4.80%)
Min      alloc-odr0-16384           229.00 (  0.00%)           219.00 (  4.37%)

perf indicated that next_zones_zonelist disappeared from the profile and
__next_zones_zonelist did not appear. This is expected as the microbenchmark
hits the inlined fast path every time.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/mmzone.h | 13 +++++++++++--
 mm/mmzone.c            |  2 +-
 mm/page_alloc.c        | 26 +++++++++-----------------
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c60df9257cc7..0c4d5ebb3849 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -922,6 +922,10 @@ static inline int zonelist_node_idx(struct zoneref *zoneref)
 #endif /* CONFIG_NUMA */
 }
 
+struct zoneref *__next_zones_zonelist(struct zoneref *z,
+					enum zone_type highest_zoneidx,
+					nodemask_t *nodes);
+
 /**
  * next_zones_zonelist - Returns the next zone at or below highest_zoneidx within the allowed nodemask using a cursor within a zonelist as a starting point
  * @z - The cursor used as a starting point for the search
@@ -934,9 +938,14 @@ static inline int zonelist_node_idx(struct zoneref *zoneref)
  * being examined. It should be advanced by one before calling
  * next_zones_zonelist again.
  */
-struct zoneref *next_zones_zonelist(struct zoneref *z,
+static __always_inline struct zoneref *next_zones_zonelist(struct zoneref *z,
 					enum zone_type highest_zoneidx,
-					nodemask_t *nodes);
+					nodemask_t *nodes)
+{
+	if (likely(!nodes && zonelist_zone_idx(z) <= highest_zoneidx))
+		return z;
+	return __next_zones_zonelist(z, highest_zoneidx, nodes);
+}
 
 /**
  * first_zones_zonelist - Returns the first zone at or below highest_zoneidx within the allowed nodemask in a zonelist
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 52687fb4de6f..5652be858e5e 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -52,7 +52,7 @@ static inline int zref_in_nodemask(struct zoneref *zref, nodemask_t *nodes)
 }
 
 /* Returns the next zone at or below highest_zoneidx in a zonelist */
-struct zoneref *next_zones_zonelist(struct zoneref *z,
+struct zoneref *__next_zones_zonelist(struct zoneref *z,
 					enum zone_type highest_zoneidx,
 					nodemask_t *nodes)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b56c2b2911a2..e9acc0b0f787 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3193,17 +3193,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 */
 	alloc_flags = gfp_to_alloc_flags(gfp_mask);
 
-	/*
-	 * Find the true preferred zone if the allocation is unconstrained by
-	 * cpusets.
-	 */
-	if (!(alloc_flags & ALLOC_CPUSET) && !ac->nodemask) {
-		struct zoneref *preferred_zoneref;
-		preferred_zoneref = first_zones_zonelist(ac->zonelist,
-				ac->high_zoneidx, NULL, &ac->preferred_zone);
-		ac->classzone_idx = zonelist_zone_idx(preferred_zoneref);
-	}
-
 	/* This is the last chance, in general, before the goto nopage. */
 	page = get_page_from_freelist(gfp_mask, order,
 				alloc_flags & ~ALLOC_NO_WATERMARKS, ac);
@@ -3359,14 +3348,21 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	struct zoneref *preferred_zoneref;
 	struct page *page = NULL;
 	unsigned int cpuset_mems_cookie;
-	int alloc_flags = ALLOC_WMARK_LOW|ALLOC_CPUSET|ALLOC_FAIR;
+	int alloc_flags = ALLOC_WMARK_LOW|ALLOC_FAIR;
 	gfp_t alloc_mask; /* The gfp_t that was actually used for allocation */
 	struct alloc_context ac = {
 		.high_zoneidx = gfp_zone(gfp_mask),
+		.zonelist = zonelist,
 		.nodemask = nodemask,
 		.migratetype = gfpflags_to_migratetype(gfp_mask),
 	};
 
+	if (cpusets_enabled()) {
+		alloc_flags |= ALLOC_CPUSET;
+		if (!ac.nodemask)
+			ac.nodemask = &cpuset_current_mems_allowed;
+	}
+
 	gfp_mask &= gfp_allowed_mask;
 
 	lockdep_trace_alloc(gfp_mask);
@@ -3390,16 +3386,12 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 retry_cpuset:
 	cpuset_mems_cookie = read_mems_allowed_begin();
 
-	/* We set it here, as __alloc_pages_slowpath might have changed it */
-	ac.zonelist = zonelist;
-
 	/* Dirty zone balancing only done in the fast path */
 	ac.spread_dirty_pages = (gfp_mask & __GFP_WRITE);
 
 	/* The preferred zone is used for statistics later */
 	preferred_zoneref = first_zones_zonelist(ac.zonelist, ac.high_zoneidx,
-				ac.nodemask ? : &cpuset_current_mems_allowed,
-				&ac.preferred_zone);
+				ac.nodemask, &ac.preferred_zone);
 	if (!ac.preferred_zone)
 		goto out;
 	ac.classzone_idx = zonelist_zone_idx(preferred_zoneref);
-- 
2.6.4
