From: Mel Gorman <mgorman@suse.de>
To: Stable <stable@vger.kernel.org>
Cc: Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 32/34] cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask
Date: Mon, 23 Jul 2012 14:38:45 +0100	[thread overview]
Message-ID: <1343050727-3045-33-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1343050727-3045-1-git-send-email-mgorman@suse.de>

From: David Rientjes <rientjes@google.com>

commit b246272ecc5ac68c743b15c9e41a2275f7ce70e2 upstream.

Stable note: Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
	expensive and severely impacted page allocator performance. This is
	part of a series of patches that reduce page allocator overhead.

Kernels where MAX_NUMNODES > BITS_PER_LONG may temporarily see an empty
nodemask in a tsk's mempolicy if its previous nodemask is remapped onto a
new set of allowed cpuset nodes where the two nodemasks, as a result of
the remap, are now disjoint.
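
For illustration, here is a minimal userspace sketch of the torn-read
window (illustrative only, not the kernel's actual code path; the file
name and node values are made up):

  /* torn_read.c: a two-word mask updated word by word, as happens
   * when MAX_NUMNODES > BITS_PER_LONG and stores are not atomic
   * across the whole mask. */
  #include <stdio.h>

  int main(void)
  {
          unsigned long mask[2]    = { 0x1UL, 0x0UL }; /* old: node 0  */
          unsigned long newbits[2] = { 0x0UL, 0x1UL }; /* new: node 64 */

          mask[0] = newbits[0];   /* clears node 0; mask is now empty */
          /* a reader sampling here sees no allowed nodes at all */
          printf("mid-update: {%lx,%lx}\n", mask[0], mask[1]);

          mask[1] = newbits[1];   /* sets node 64; mask is valid again */
          printf("final:      {%lx,%lx}\n", mask[0], mask[1]);
          return 0;
  }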

c0ff7453bb5c ("cpuset,mm: fix no node to alloc memory when changing
cpuset's mems") adds get_mems_allowed() to prevent the set of allowed
nodes from changing for a thread.  This causes any update to a set of
allowed nodes to stall until put_mems_allowed() is called.
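
For context, the read-side primitives of this era were, roughly, a
per-task counter plus a memory barrier (a paraphrase from memory, not a
verbatim copy; see include/linux/cpuset.h of the target kernel for the
authoritative definitions).  The counter is what the write side below
polls with ACCESS_ONCE():

  /* Rough paraphrase of the period's read-side primitives. */
  static inline void get_mems_allowed(void)
  {
          current->mems_allowed_change_disable++;
          smp_mb();       /* order the bump before reading mems_allowed */
  }

  static inline void put_mems_allowed(void)
  {
          smp_mb();       /* order the reads before dropping the count */
          current->mems_allowed_change_disable--;
  }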

This stall is unnecessary, however, if at least one node remains unchanged
in the update to the set of allowed nodes.  This was addressed by
89e8a244b97e ("cpusets: avoid looping when storing to mems_allowed if one
node remains set"), but it's still possible that an empty nodemask may be
read from a mempolicy because the old nodemask may be remapped to the new
nodemask during rebind.  To prevent this, only avoid the stall if there is
no mempolicy for the thread being changed.

This is a temporary solution until all reads from mempolicy nodemasks
can be guaranteed not to be empty without the get_mems_allowed()
synchronization.

The patch also moves the check for nodemask intersection inside
task_lock() so that tsk->mems_allowed cannot change.  This ensures that
nothing can set this tsk's mems_allowed out from under us and also
protects tsk->mempolicy.
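
The window being closed looks roughly like this (hypothetical
interleaving for illustration):

  updater A                             updater B
  ---------                             ---------
  disjoint = !nodes_intersects(
                  *newmems,
                  tsk->mems_allowed);
                                        task_lock(tsk);
                                        /* changes tsk->mems_allowed */
                                        task_unlock(tsk);
  task_lock(tsk);
  /* "disjoint" is stale: it was computed against a mems_allowed
     that no longer exists */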

Reported-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Paul Menage <paul@paulmenage.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 kernel/cpuset.c |   29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index a995893..28d0bbd 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -123,6 +123,19 @@ static inline struct cpuset *task_cs(struct task_struct *task)
 			    struct cpuset, css);
 }
 
+#ifdef CONFIG_NUMA
+static inline bool task_has_mempolicy(struct task_struct *task)
+{
+	return task->mempolicy;
+}
+#else
+static inline bool task_has_mempolicy(struct task_struct *task)
+{
+	return false;
+}
+#endif
+
+
 /* bits in struct cpuset flags field */
 typedef enum {
 	CS_CPU_EXCLUSIVE,
@@ -949,7 +962,7 @@ static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
 static void cpuset_change_task_nodemask(struct task_struct *tsk,
 					nodemask_t *newmems)
 {
-	bool masks_disjoint = !nodes_intersects(*newmems, tsk->mems_allowed);
+	bool need_loop;
 
 repeat:
 	/*
@@ -962,6 +975,14 @@ repeat:
 		return;
 
 	task_lock(tsk);
+	/*
+	 * Determine if a loop is necessary if another thread is doing
+	 * get_mems_allowed().  If at least one node remains unchanged and
+	 * tsk does not have a mempolicy, then an empty nodemask will not be
+	 * possible when mems_allowed is larger than a word.
+	 */
+	need_loop = task_has_mempolicy(tsk) ||
+			!nodes_intersects(*newmems, tsk->mems_allowed);
 	nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
 	mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1);
 
@@ -981,11 +1002,9 @@ repeat:
 
 	/*
 	 * Allocation of memory is very fast, we needn't sleep when waiting
-	 * for the read-side.  No wait is necessary, however, if at least one
-	 * node remains unchanged.
+	 * for the read-side.
 	 */
-	while (masks_disjoint &&
-			ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
+	while (need_loop && ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
 		task_unlock(tsk);
 		if (!task_curr(tsk))
 			yield();
-- 
1.7.9.2



