linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races
@ 2017-01-20 10:38 Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 1/4] mm, page_alloc: fix check for NULL preferred_zone Vlastimil Babka
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Vlastimil Babka @ 2017-01-20 10:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Michal Hocko, Hillf Danton, linux-kernel, linux-mm,
	Vlastimil Babka

Changes since v1:
- add/remove comments per Michal Hocko and Hillf Danton
- move no_zone: label in patch 3 so we don't miss part of ac initialization

This is v2 of my attempt to fix the recent report based on the LTP cpuset
stress test [1]. The intention is for this to go to stable 4.9 LTSS, as
triggering repeated OOMs is not nice. That's why the patches try not to be
too intrusive.

Unfortunately, while investigating I found that modifying the testcase to use
per-VMA policies instead of per-task policies brings the OOMs back, but that
seems to be a much older and harder-to-fix problem. I have posted an RFC [2],
but I believe that fixing the recent regressions has a higher priority.

Longer-term we might try to think about how to fix the cpuset mess in a
better and less error-prone way. I was, for example, very surprised to learn
that cpuset updates change not only task->mems_allowed, but also the nodemask
of mempolicies. Until now I expected the nodemask parameter to
alloc_pages_nodemask() to be stable. I wonder why we then treat cpusets
specially in get_page_from_freelist() and distinguish HARDWALL etc., when
there is an unconditional intersection between mempolicy and cpuset. I would
expect the nodemask adjustment to save overhead in g_p_f(), but that clearly
doesn't happen in the current form. So we have both crazy complexity and
overhead, AFAICS.
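
For context, the whole series leans on the mems_allowed seqcount protocol.
Below is a minimal sketch of the intended usage pattern;
read_mems_allowed_begin() and read_mems_allowed_retry() are the real cpuset
helpers, while the surrounding function and try_allocate() are hypothetical,
for illustration only:

	/* Illustrative only; not the real __alloc_pages_nodemask(). */
	static struct page *alloc_with_cpuset_retry(gfp_t gfp_mask,
			unsigned int order, struct alloc_context *ac)
	{
		struct page *page;
		unsigned int cookie;

	retry:
		/* Snapshot the mems_allowed seqcount before using nodemasks. */
		cookie = read_mems_allowed_begin();

		page = try_allocate(gfp_mask, order, ac); /* hypothetical */
		if (page)
			return page;

		/*
		 * A failure observed while a cpuset update was in flight may
		 * be spurious: the nodemask we used could have been stale or
		 * transiently empty, so retry instead of declaring OOM.
		 */
		if (read_mems_allowed_retry(cookie))
			goto retry;

		return NULL;	/* genuine failure */
	}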

[1] https://lkml.kernel.org/r/CAFpQJXUq-JuEP=QPidy4p_=FN0rkH5Z-kfB4qBvsf6jMS87Edg@mail.gmail.com
[2] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz

Vlastimil Babka (4):
  mm, page_alloc: fix check for NULL preferred_zone
  mm, page_alloc: fix fast-path race with cpuset update or removal
  mm, page_alloc: move cpuset seqcount checking to slowpath
  mm, page_alloc: fix premature OOM when racing with cpuset mems update

 include/linux/mmzone.h |  6 ++++-
 mm/page_alloc.c        | 68 ++++++++++++++++++++++++++++++++++----------------
 2 files changed, 52 insertions(+), 22 deletions(-)

-- 
2.11.0


* [PATCH v2 1/4] mm, page_alloc: fix check for NULL preferred_zone
  2017-01-20 10:38 [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Vlastimil Babka
@ 2017-01-20 10:38 ` Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 2/4] mm, page_alloc: fix fast-path race with cpuset update or removal Vlastimil Babka
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2017-01-20 10:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Michal Hocko, Hillf Danton, linux-kernel, linux-mm,
	Vlastimil Babka, stable

Since commit c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone
in a zonelist twice") we have a wrong check for NULL preferred_zone, which
can theoretically happen due to concurrent cpuset modification. We check the
zoneref pointer, which is never NULL, when we should be checking the zone
pointer. Also document this in the first_zones_zonelist() comment, per
Michal Hocko.
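
For illustration, a caller-side fragment showing the corrected check (the
helper is real, the surrounding context is hypothetical):

	struct zoneref *z;

	z = first_zones_zonelist(zonelist, highest_zoneidx, nodemask);
	/*
	 * z itself is never NULL: when nothing qualifies it points at a
	 * terminating entry whose ->zone is NULL, so test the zone pointer.
	 */
	if (!z->zone)
		return NULL;	/* no eligible zone found */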

Fixes: c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/mmzone.h | 6 +++++-
 mm/page_alloc.c        | 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 36d9896fbc1e..f4aac87adcc3 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -972,12 +972,16 @@ static __always_inline struct zoneref *next_zones_zonelist(struct zoneref *z,
  * @zonelist - The zonelist to search for a suitable zone
  * @highest_zoneidx - The zone index of the highest zone to return
  * @nodes - An optional nodemask to filter the zonelist with
- * @zone - The first suitable zone found is returned via this parameter
+ * @return - Zoneref pointer for the first suitable zone found (see below)
  *
  * This function returns the first zone at or below a given zone index that is
  * within the allowed nodemask. The zoneref returned is a cursor that can be
  * used to iterate the zonelist with next_zones_zonelist by advancing it by
  * one before calling.
+ *
+ * When no eligible zone is found, zoneref->zone is NULL (zoneref itself is
+ * never NULL). This may happen either genuinely, or due to a concurrent
+ * nodemask update caused by a cpuset modification.
  */
 static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
 					enum zone_type highest_zoneidx,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d604d2596b7b..0d771f3fb835 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3784,7 +3784,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	 */
 	ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
 					ac.high_zoneidx, ac.nodemask);
-	if (!ac.preferred_zoneref) {
+	if (!ac.preferred_zoneref->zone) {
 		page = NULL;
 		goto no_zone;
 	}
-- 
2.11.0


* [PATCH v2 2/4] mm, page_alloc: fix fast-path race with cpuset update or removal
  2017-01-20 10:38 [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 1/4] mm, page_alloc: fix check for NULL preferred_zone Vlastimil Babka
@ 2017-01-20 10:38 ` Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 3/4] mm, page_alloc: move cpuset seqcount checking to slowpath Vlastimil Babka
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2017-01-20 10:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Michal Hocko, Hillf Danton, linux-kernel, linux-mm,
	Vlastimil Babka, stable

Ganapatrao Kulkarni reported that the LTP test cpuset01, in stress mode,
triggers the OOM killer within a few seconds, despite lots of free memory.
The test repeatedly faults in memory in one process in a cpuset, while
another process keeps changing the cpuset's allowed nodes between 0 and 1.

One possible cause is that in the fast path we find the preferred zoneref
according to the current mems_allowed, so it can point to the middle of the
zonelist, skipping e.g. the zones of node 1 completely. If mems_allowed is
then updated to contain only node 1, we never reach those zones during the
iteration, and trigger OOM before checking the cpuset_mems_cookie.
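
Schematically, the fast path does the equivalent of the following (a
simplified sketch, not a verbatim excerpt; first_zones_zonelist(),
next_zones_zonelist() and zonelist_zone() are the real helpers):

	struct zoneref *z;
	struct zone *zone;

	/* The cursor is computed once, against the nodemask seen at entry. */
	ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
					ac.high_zoneidx, ac.nodemask);

	/* get_page_from_freelist() then only scans forward from the cursor: */
	for (z = ac.preferred_zoneref, zone = zonelist_zone(z); zone;
	     z = next_zones_zonelist(++z, ac.high_zoneidx, ac.nodemask),
	     zone = zonelist_zone(z)) {
		/*
		 * If mems_allowed has meanwhile become {1} and node 1's zones
		 * precede the cursor, no zone here ever qualifies; the loop
		 * ends empty-handed and the failure looks like genuine
		 * memory exhaustion.
		 */
	}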

This patch fixes this particular case by redoing the preferred zoneref
search if we switch back to the original nodemask. The condition is also
slightly changed so that we don't miss the case where the last non-root
cpuset is removed.

Note that this is not a full fix, and more patches will follow.

Reported-by: Ganapatrao Kulkarni <gpkulkarni@gmail.com>
Fixes: 682a3385e773 ("mm, page_alloc: inline the fast path of the zonelist iterator")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d771f3fb835..3ca0c15deca4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3804,9 +3804,17 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	/*
 	 * Restore the original nodemask if it was potentially replaced with
 	 * &cpuset_current_mems_allowed to optimize the fast-path attempt.
+	 * Also recalculate the starting point for the zonelist iterator or
+	 * we could end up iterating over non-eligible zones endlessly.
 	 */
-	if (cpusets_enabled())
+	if (unlikely(ac.nodemask != nodemask)) {
 		ac.nodemask = nodemask;
+		ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
+						ac.high_zoneidx, ac.nodemask);
+		if (!ac.preferred_zoneref->zone)
+			goto no_zone;
+	}
+
 	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
 
 no_zone:
-- 
2.11.0


* [PATCH v2 3/4] mm, page_alloc: move cpuset seqcount checking to slowpath
  2017-01-20 10:38 [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 1/4] mm, page_alloc: fix check for NULL preferred_zone Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 2/4] mm, page_alloc: fix fast-path race with cpuset update or removal Vlastimil Babka
@ 2017-01-20 10:38 ` Vlastimil Babka
  2017-01-20 10:38 ` [PATCH v2 4/4] mm, page_alloc: fix premature OOM when racing with cpuset mems update Vlastimil Babka
  2017-01-21 12:22 ` [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Hillf Danton
  4 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2017-01-20 10:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Michal Hocko, Hillf Danton, linux-kernel, linux-mm,
	Vlastimil Babka, stable

This is a preparation for the following patch, to make its review simpler.
While the primary motivation is a bug fix, it also simplifies the fast path,
although the moved code is only enabled when cpusets are in use.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 47 ++++++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3ca0c15deca4..fd3b9839a355 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3523,12 +3523,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	struct page *page = NULL;
 	unsigned int alloc_flags;
 	unsigned long did_some_progress;
-	enum compact_priority compact_priority = DEF_COMPACT_PRIORITY;
+	enum compact_priority compact_priority;
 	enum compact_result compact_result;
-	int compaction_retries = 0;
-	int no_progress_loops = 0;
+	int compaction_retries;
+	int no_progress_loops;
 	unsigned long alloc_start = jiffies;
 	unsigned int stall_timeout = 10 * HZ;
+	unsigned int cpuset_mems_cookie;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3549,6 +3550,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 				(__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
 		gfp_mask &= ~__GFP_ATOMIC;
 
+retry_cpuset:
+	compaction_retries = 0;
+	no_progress_loops = 0;
+	compact_priority = DEF_COMPACT_PRIORITY;
+	cpuset_mems_cookie = read_mems_allowed_begin();
+
 	/*
 	 * The fast path uses conservative alloc_flags to succeed only until
 	 * kswapd needs to be woken up, and to avoid the cost of setting up
@@ -3720,6 +3727,15 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	}
 
 nopage:
+	/*
+	 * When updating a task's mems_allowed, it is possible to race with
+	 * parallel threads in such a way that an allocation can fail while
+	 * the mask is being updated. If a page allocation is about to fail,
+	 * check if the cpuset changed during allocation and if so, retry.
+	 */
+	if (read_mems_allowed_retry(cpuset_mems_cookie))
+		goto retry_cpuset;
+
 	warn_alloc(gfp_mask,
 			"page allocation failure: order:%u", order);
 got_pg:
@@ -3734,7 +3750,6 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 			struct zonelist *zonelist, nodemask_t *nodemask)
 {
 	struct page *page;
-	unsigned int cpuset_mems_cookie;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
 	gfp_t alloc_mask = gfp_mask; /* The gfp_t that was actually used for allocation */
 	struct alloc_context ac = {
@@ -3771,9 +3786,6 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	if (IS_ENABLED(CONFIG_CMA) && ac.migratetype == MIGRATE_MOVABLE)
 		alloc_flags |= ALLOC_CMA;
 
-retry_cpuset:
-	cpuset_mems_cookie = read_mems_allowed_begin();
-
 	/* Dirty zone balancing only done in the fast path */
 	ac.spread_dirty_pages = (gfp_mask & __GFP_WRITE);
 
@@ -3786,6 +3798,11 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 					ac.high_zoneidx, ac.nodemask);
 	if (!ac.preferred_zoneref->zone) {
 		page = NULL;
+		/*
+		 * This might be due to a race with a cpuset_current_mems_allowed
+		 * update, so make sure we retry with the original nodemask in
+		 * the slow path.
+		 */
 		goto no_zone;
 	}
 
@@ -3794,6 +3811,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	if (likely(page))
 		goto out;
 
+no_zone:
 	/*
 	 * Runtime PM, block IO and its error handling path can deadlock
 	 * because I/O on the device might not complete.
@@ -3811,24 +3829,11 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 		ac.nodemask = nodemask;
 		ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
 						ac.high_zoneidx, ac.nodemask);
-		if (!ac.preferred_zoneref->zone)
-			goto no_zone;
+		/* If we have NULL preferred zone, slowpath will handle that */
 	}
 
 	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
 
-no_zone:
-	/*
-	 * When updating a task's mems_allowed, it is possible to race with
-	 * parallel threads in such a way that an allocation can fail while
-	 * the mask is being updated. If a page allocation is about to fail,
-	 * check if the cpuset changed during allocation and if so, retry.
-	 */
-	if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie))) {
-		alloc_mask = gfp_mask;
-		goto retry_cpuset;
-	}
-
 out:
 	if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
 	    unlikely(memcg_kmem_charge(page, gfp_mask, order) != 0)) {
-- 
2.11.0


* [PATCH v2 4/4] mm, page_alloc: fix premature OOM when racing with cpuset mems update
  2017-01-20 10:38 [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Vlastimil Babka
                   ` (2 preceding siblings ...)
  2017-01-20 10:38 ` [PATCH v2 3/4] mm, page_alloc: move cpuset seqcount checking to slowpath Vlastimil Babka
@ 2017-01-20 10:38 ` Vlastimil Babka
  2017-01-21 12:22 ` [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Hillf Danton
  4 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2017-01-20 10:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Michal Hocko, Hillf Danton, linux-kernel, linux-mm,
	Vlastimil Babka, stable

Ganapatrao Kulkarni reported that the LTP test cpuset01, in stress mode,
triggers the OOM killer within a few seconds, despite lots of free memory.
The test repeatedly faults in memory in one process in a cpuset, while
another process keeps changing the cpuset's allowed nodes between 0 and 1.

The problem comes from insufficient protection against cpuset changes, which
can cause get_page_from_freelist() to consider all zones as non-eligible due
to the nodemask and/or current->mems_allowed. This was masked in the past by
sufficient retries, but since commit 682a3385e773 ("mm, page_alloc: inline
the fast path of the zonelist iterator") we fix the preferred_zoneref once
and don't iterate over the whole zonelist in further attempts. Thus the only
eligible zones might be placed in the zonelist before our starting point, and
we will always miss them.

A previous patch fixed this problem for current->mems_allowed. However,
cpuset changes also update the task's mempolicy nodemask. The fix has two
parts: we have to repeat the preferred_zoneref search when we detect a cpuset
update via the seqcount, and we have to check the seqcount before considering
OOM.
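
Condensed, the slowpath after this patch has the following shape (a heavily
abbreviated sketch, not a verbatim excerpt of the diff below):

	retry_cpuset:
		/* Reset per-attempt state: no_progress_loops, etc. */
		cpuset_mems_cookie = read_mems_allowed_begin();

		/*
		 * Part 1: recompute the starting point; the fast path may
		 * have used a different, or since-updated, nodemask.
		 */
		ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
						ac->high_zoneidx, ac->nodemask);
		if (!ac->preferred_zoneref->zone)
			goto nopage;

		/* ... reclaim and compaction attempts ... */

		/* Part 2: never declare OOM based on a stale nodemask. */
		if (read_mems_allowed_retry(cpuset_mems_cookie))
			goto retry_cpuset;
		page = __alloc_pages_may_oom(gfp_mask, order, ac,
						&did_some_progress);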

Reported-by: Ganapatrao Kulkarni <gpkulkarni@gmail.com>
Fixes: c33d6c06f60f ("mm, page_alloc: avoid looking up the first zone in a zonelist twice")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd3b9839a355..1c331ff6fdc4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3555,6 +3555,17 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	no_progress_loops = 0;
 	compact_priority = DEF_COMPACT_PRIORITY;
 	cpuset_mems_cookie = read_mems_allowed_begin();
+	/*
+	 * We need to recalculate the starting point for the zonelist iterator
+	 * because we might have used a different nodemask in the fast path, or
+	 * there was a cpuset modification and we are retrying - otherwise we
+	 * could end up iterating over non-eligible zones endlessly.
+	 */
+	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
+					ac->high_zoneidx, ac->nodemask);
+	if (!ac->preferred_zoneref->zone)
+		goto nopage;
+
 
 	/*
 	 * The fast path uses conservative alloc_flags to succeed only until
@@ -3715,6 +3726,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 				&compaction_retries))
 		goto retry;
 
+	/*
+	 * It's possible we raced with a cpuset update, so the OOM would be
+	 * premature (see below the nopage: label for full explanation).
+	 */
+	if (read_mems_allowed_retry(cpuset_mems_cookie))
+		goto retry_cpuset;
+
 	/* Reclaim has failed us, start killing things */
 	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
 	if (page)
@@ -3728,10 +3746,11 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 nopage:
 	/*
-	 * When updating a task's mems_allowed, it is possible to race with
-	 * parallel threads in such a way that an allocation can fail while
-	 * the mask is being updated. If a page allocation is about to fail,
-	 * check if the cpuset changed during allocation and if so, retry.
+	 * When updating a task's mems_allowed or mempolicy nodemask, it is
+	 * possible to race with parallel threads in such a way that our
+	 * allocation can fail while the mask is being updated. If we are about
+	 * to fail, check if the cpuset changed during allocation and if so,
+	 * retry.
 	 */
 	if (read_mems_allowed_retry(cpuset_mems_cookie))
 		goto retry_cpuset;
@@ -3822,15 +3841,9 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	/*
 	 * Restore the original nodemask if it was potentially replaced with
 	 * &cpuset_current_mems_allowed to optimize the fast-path attempt.
-	 * Also recalculate the starting point for the zonelist iterator or
-	 * we could end up iterating over non-eligible zones endlessly.
 	 */
-	if (unlikely(ac.nodemask != nodemask)) {
+	if (unlikely(ac.nodemask != nodemask))
 		ac.nodemask = nodemask;
-		ac.preferred_zoneref = first_zones_zonelist(ac.zonelist,
-						ac.high_zoneidx, ac.nodemask);
-		/* If we have NULL preferred zone, slowpath will handle that */
-	}
 
 	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
 
-- 
2.11.0


* Re: [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races
  2017-01-20 10:38 [PATCH v2 0/4] fix premature OOM regression in 4.7+ due to cpuset races Vlastimil Babka
                   ` (3 preceding siblings ...)
  2017-01-20 10:38 ` [PATCH v2 4/4] mm, page_alloc: fix premature OOM when racing with cpuset mems update Vlastimil Babka
@ 2017-01-21 12:22 ` Hillf Danton
  4 siblings, 0 replies; 6+ messages in thread
From: Hillf Danton @ 2017-01-21 12:22 UTC (permalink / raw)
  To: 'Vlastimil Babka', 'Andrew Morton'
  Cc: 'Mel Gorman', 'Michal Hocko', linux-kernel, linux-mm

On Friday, January 20, 2017 6:39 PM Vlastimil Babka wrote: 
> 
> Changes since v1:
> - add/remove comments per Michal Hocko and Hillf Danton
> - move no_zone: label in patch 3 so we don't miss part of ac initialization
> 
> This is v2 of my attempt to fix the recent report based on the LTP cpuset
> stress test [1]. The intention is for this to go to stable 4.9 LTSS, as
> triggering repeated OOMs is not nice. That's why the patches try not to be
> too intrusive.
> 
> Unfortunately, while investigating I found that modifying the testcase to
> use per-VMA policies instead of per-task policies brings the OOMs back, but
> that seems to be a much older and harder-to-fix problem. I have posted an
> RFC [2], but I believe that fixing the recent regressions has a higher
> priority.
> 
> Longer-term we might try to think about how to fix the cpuset mess in a
> better and less error-prone way. I was, for example, very surprised to
> learn that cpuset updates change not only task->mems_allowed, but also the
> nodemask of mempolicies. Until now I expected the nodemask parameter to
> alloc_pages_nodemask() to be stable. I wonder why we then treat cpusets
> specially in get_page_from_freelist() and distinguish HARDWALL etc., when
> there is an unconditional intersection between mempolicy and cpuset. I
> would expect the nodemask adjustment to save overhead in g_p_f(), but that
> clearly doesn't happen in the current form. So we have both crazy
> complexity and overhead, AFAICS.
> 
> [1] https://lkml.kernel.org/r/CAFpQJXUq-JuEP=QPidy4p_=FN0rkH5Z-kfB4qBvsf6jMS87Edg@mail.gmail.com
> [2] https://lkml.kernel.org/r/7c459f26-13a6-a817-e508-b65b903a8378@suse.cz
> 
> Vlastimil Babka (4):
>   mm, page_alloc: fix check for NULL preferred_zone
>   mm, page_alloc: fix fast-path race with cpuset update or removal
>   mm, page_alloc: move cpuset seqcount checking to slowpath
>   mm, page_alloc: fix premature OOM when racing with cpuset mems update
> 
>  include/linux/mmzone.h |  6 ++++-
>  mm/page_alloc.c        | 68 ++++++++++++++++++++++++++++++++++----------------
>  2 files changed, 52 insertions(+), 22 deletions(-)
> 
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

