All of lore.kernel.org
* Batch locking for rmap fork/exit processing v2
@ 2011-05-13 23:46 Andi Kleen
  2011-05-13 23:46 ` [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2 Andi Kleen
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Andi Kleen @ 2011-05-13 23:46 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm

Update with review feedback addressed.

Commit 012f18004da33ba67 in 2.6.36 caused a significant performance regression in
fork/exit-intensive workloads with a lot of sharing. The problem is that
fork/exit now contend heavily on the lock of the root anon_vma.

This patchkit attempts to lower the contention a bit by batching the lock
acquisitions. Right now the lock is taken for every shared vma individually;
with batching it is only reacquired when a different root lock is actually needed.

When multiple processes do this in parallel, they will now
spend much less time bouncing the lock cache line around. In addition
there should also be lower overhead in the uncontended case, because
lock operations themselves are relatively expensive (not measured).

This doesn't completely fix the regression on a 4S system, but cuts 
it down somewhat. One particular workload suffering from this gets
about 5% faster.

This is essentially a micro-optimization that mitigates the problem
a bit rather than eliminating it.

A better fix would be to switch back to the more local locking that 2.6.35 had,
but I assume that would bring back the old deadlocks? As an alternative I was
also thinking of adding some deadlock avoidance.

-Andi

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
  2011-05-13 23:46 Batch locking for rmap fork/exit processing v2 Andi Kleen
@ 2011-05-13 23:46 ` Andi Kleen
  2011-05-18 20:25   ` Andrew Morton
  2011-05-13 23:46 ` [PATCH 2/4] VM/RMAP: Batch anon vma chain root locking in fork Andi Kleen
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2011-05-13 23:46 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, Andi Kleen, Andrea Arcangeli

From: Andi Kleen <ak@linux.intel.com>

In fork and exit it's quite common to take the same rmap chain locks
again and again when a whole address space with a lot of sharing is
processed. Also, since the locking has changed to always lock the
root anon_vma, this lock can be very contended.

This patch adds a simple wrapper to batch these lock acquisitions
and only reacquire the lock when a different one is needed. The main
advantage is that when multiple processes are doing this in
parallel they will avoid a lot of communication overhead
on the lock cache line.

v2: Address review feedback. Drop lockbreak. Rename init function.
Move out of line. Add CONFIG_SMP ifdefs.
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 include/linux/rmap.h |   34 ++++++++++++++++++++++++++++++++++
 mm/rmap.c            |   12 ++++++++++++
 2 files changed, 46 insertions(+), 0 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 830e65d..44f5bb2 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -113,6 +113,40 @@ static inline void anon_vma_unlock(struct anon_vma *anon_vma)
 	spin_unlock(&anon_vma->root->lock);
 }
 
+/* 
+ * Batched locking for anon VMA chains to avoid too much cache line 
+ * bouncing.
+ */
+
+struct anon_vma_lock_state {
+	struct anon_vma *root_anon_vma;
+};
+
+static inline void anon_vma_lock_batch_init(struct anon_vma_lock_state *avs)
+{
+	avs->root_anon_vma = NULL;
+}
+
+extern void __anon_vma_lock_batch(struct anon_vma *anon_vma,
+				  struct anon_vma_lock_state *state);
+
+static inline void anon_vma_lock_batch(struct anon_vma *anon_vma,
+				       struct anon_vma_lock_state *state)
+{
+#ifdef CONFIG_SMP
+	if (state->root_anon_vma != anon_vma->root)
+		__anon_vma_lock_batch(anon_vma, state);
+#endif
+}
+
+static inline void anon_vma_unlock_batch(struct anon_vma_lock_state *avs)
+{
+#ifdef CONFIG_SMP
+	if (avs->root_anon_vma)
+		spin_unlock(&avs->root_anon_vma->lock);
+#endif
+}
+
 /*
  * anon_vma helper functions.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 8da044a..5a2cd65 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1624,3 +1624,15 @@ void hugepage_add_new_anon_rmap(struct page *page,
 	__hugepage_set_anon_rmap(page, vma, address, 1);
 }
 #endif /* CONFIG_HUGETLB_PAGE */
+
+/* 
+ * Batched rmap chain locking
+ */
+void __anon_vma_lock_batch(struct anon_vma *anon_vma,
+			   struct anon_vma_lock_state *state)
+{
+	if (state->root_anon_vma)
+		spin_unlock(&state->root_anon_vma->lock);
+	state->root_anon_vma = anon_vma->root;
+	spin_lock(&state->root_anon_vma->lock);
+}
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/4] VM/RMAP: Batch anon vma chain root locking in fork
  2011-05-13 23:46 Batch locking for rmap fork/exit processing v2 Andi Kleen
  2011-05-13 23:46 ` [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2 Andi Kleen
@ 2011-05-13 23:46 ` Andi Kleen
  2011-05-13 23:46 ` [PATCH 3/4] VM/RMAP: Batch anon_vma_unlink in exit Andi Kleen
  2011-05-13 23:46 ` [PATCH 4/4] VM/RMAP: Move avc freeing outside the lock Andi Kleen
  3 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2011-05-13 23:46 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, Andi Kleen, Andrea Arcangeli

From: Andi Kleen <ak@linux.intel.com>

We found that the change to take the anon vma chain's root lock leads
to excessive lock contention in a fork-intensive workload on a 4S
system.

Use the new batch lock infrastructure to optimize the fork()
path, where it is very common to repeatedly acquire the same lock.

This patch does not really lower the contention, but batches
the lock acquisition/release to lower the cache line bouncing
overhead when multiple forks run at the same time. Essentially
each user gets more work done inside a single locking region.

Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 mm/rmap.c |   69 +++++++++++++++++++++++++++++++++++++++++++-----------------
 1 files changed, 49 insertions(+), 20 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 5a2cd65..4e93770 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -177,44 +177,72 @@ int anon_vma_prepare(struct vm_area_struct *vma)
 	return -ENOMEM;
 }
 
+/* Caller must call anon_vma_unlock_batch. */
 static void anon_vma_chain_link(struct vm_area_struct *vma,
 				struct anon_vma_chain *avc,
-				struct anon_vma *anon_vma)
+				struct anon_vma *anon_vma,
+				struct anon_vma_lock_state *avs)
 {
 	avc->vma = vma;
 	avc->anon_vma = anon_vma;
 	list_add(&avc->same_vma, &vma->anon_vma_chain);
 
-	anon_vma_lock(anon_vma);
+	anon_vma_lock_batch(anon_vma, avs);
 	/*
 	 * It's critical to add new vmas to the tail of the anon_vma,
 	 * see comment in huge_memory.c:__split_huge_page().
 	 */
 	list_add_tail(&avc->same_anon_vma, &anon_vma->head);
-	anon_vma_unlock(anon_vma);
+	/* unlock in caller */
 }
 
+
 /*
  * Attach the anon_vmas from src to dst.
  * Returns 0 on success, -ENOMEM on failure.
+ * Caller must call anon_vma_unlock_batch.
  */
-int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
+static int anon_vma_clone_batch(struct vm_area_struct *dst, 
+				struct vm_area_struct *src,
+				struct anon_vma_lock_state *avs)
 {
-	struct anon_vma_chain *avc, *pavc;
-
+	struct anon_vma_chain *avc, *pavc, *avc_next;
+	LIST_HEAD(head);
+	
+	/* First allocate with sleeping */
 	list_for_each_entry_reverse(pavc, &src->anon_vma_chain, same_vma) {
 		avc = anon_vma_chain_alloc();
 		if (!avc)
 			goto enomem_failure;
-		anon_vma_chain_link(dst, avc, pavc->anon_vma);
+		list_add_tail(&avc->same_anon_vma, &head);
+	}	
+
+	/* Now take locks and link in */
+	anon_vma_lock_batch_init(avs);
+	avc = list_first_entry(&head, struct anon_vma_chain, same_anon_vma);
+	list_for_each_entry_reverse(pavc, &src->anon_vma_chain, same_vma) {
+		avc_next = list_entry(avc->same_anon_vma.next, 
+				      struct anon_vma_chain,
+				      same_anon_vma);
+		anon_vma_chain_link(dst, avc, pavc->anon_vma, avs);
+		avc = avc_next;
 	}
 	return 0;
 
  enomem_failure:
-	unlink_anon_vmas(dst);
+	list_for_each_entry (avc, &head, same_anon_vma)
+		anon_vma_chain_free(avc);
 	return -ENOMEM;
 }
 
+int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
+{
+	struct anon_vma_lock_state avs;
+	int n = anon_vma_clone_batch(dst, src, &avs);
+	anon_vma_unlock_batch(&avs);
+	return n;
+}
+
 /*
  * Attach vma to its own anon_vma, as well as to the anon_vmas that
  * the corresponding VMA in the parent process is attached to.
@@ -224,27 +252,27 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
 {
 	struct anon_vma_chain *avc;
 	struct anon_vma *anon_vma;
+	struct anon_vma_lock_state avs;
 
 	/* Don't bother if the parent process has no anon_vma here. */
 	if (!pvma->anon_vma)
 		return 0;
 
-	/*
-	 * First, attach the new VMA to the parent VMA's anon_vmas,
-	 * so rmap can find non-COWed pages in child processes.
-	 */
-	if (anon_vma_clone(vma, pvma))
-		return -ENOMEM;
-
-	/* Then add our own anon_vma. */
 	anon_vma = anon_vma_alloc();
 	if (!anon_vma)
-		goto out_error;
+		return -ENOMEM;
 	avc = anon_vma_chain_alloc();
 	if (!avc)
 		goto out_error_free_anon_vma;
 
 	/*
+	 * First, attach the new VMA to the parent VMA's anon_vmas,
+	 * so rmap can find non-COWed pages in child processes.
+	 */
+	if (anon_vma_clone_batch(vma, pvma, &avs))
+		goto out_error_free_vma_chain;
+
+	/*
 	 * The root anon_vma's spinlock is the lock actually used when we
 	 * lock any of the anon_vmas in this anon_vma tree.
 	 */
@@ -257,14 +285,15 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
 	get_anon_vma(anon_vma->root);
 	/* Mark this anon_vma as the one where our new (COWed) pages go. */
 	vma->anon_vma = anon_vma;
-	anon_vma_chain_link(vma, avc, anon_vma);
+	anon_vma_chain_link(vma, avc, anon_vma, &avs);
+	anon_vma_unlock_batch(&avs);
 
 	return 0;
 
+ out_error_free_vma_chain:
+	anon_vma_chain_free(avc);
  out_error_free_anon_vma:
 	put_anon_vma(anon_vma);
- out_error:
-	unlink_anon_vmas(vma);
 	return -ENOMEM;
 }
 
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/4] VM/RMAP: Batch anon_vma_unlink in exit
  2011-05-13 23:46 Batch locking for rmap fork/exit processing v2 Andi Kleen
  2011-05-13 23:46 ` [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2 Andi Kleen
  2011-05-13 23:46 ` [PATCH 2/4] VM/RMAP: Batch anon vma chain root locking in fork Andi Kleen
@ 2011-05-13 23:46 ` Andi Kleen
  2011-05-13 23:46 ` [PATCH 4/4] VM/RMAP: Move avc freeing outside the lock Andi Kleen
  3 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2011-05-13 23:46 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Apply the rmap chain lock batching to anon_vma_unlink() too.
This speeds up exit() on process chains with many processes,
when there is a lot of sharing.

Unfortunately this doesn't fix all lock contention: file vmas
have a mapping lock that is also a problem, and existing
anon_vmas still contend. But it's better than before.

Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 mm/rmap.c |   14 +++++++++-----
 1 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 4e93770..d4d3680 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -297,7 +297,9 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
 	return -ENOMEM;
 }
 
-static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain)
+/* Caller must call anon_vma_unlock_batch */
+static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain,
+			    struct anon_vma_lock_state *avs)
 {
 	struct anon_vma *anon_vma = anon_vma_chain->anon_vma;
 	int empty;
@@ -306,12 +308,11 @@ static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain)
 	if (!anon_vma)
 		return;
 
-	anon_vma_lock(anon_vma);
+	anon_vma_lock_batch(anon_vma, avs);
 	list_del(&anon_vma_chain->same_anon_vma);
 
 	/* We must garbage collect the anon_vma if it's empty */
 	empty = list_empty(&anon_vma->head);
-	anon_vma_unlock(anon_vma);
 
 	if (empty)
 		put_anon_vma(anon_vma);
@@ -320,16 +321,19 @@ static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain)
 void unlink_anon_vmas(struct vm_area_struct *vma)
 {
 	struct anon_vma_chain *avc, *next;
-
+	struct anon_vma_lock_state avs;
+	
 	/*
 	 * Unlink each anon_vma chained to the VMA.  This list is ordered
 	 * from newest to oldest, ensuring the root anon_vma gets freed last.
 	 */
+	anon_vma_lock_batch_init(&avs);
 	list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
-		anon_vma_unlink(avc);
+		anon_vma_unlink(avc, &avs);
 		list_del(&avc->same_vma);
 		anon_vma_chain_free(avc);
 	}
+	anon_vma_unlock_batch(&avs);
 }
 
 static void anon_vma_ctor(void *data)
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 4/4] VM/RMAP: Move avc freeing outside the lock
  2011-05-13 23:46 Batch locking for rmap fork/exit processing v2 Andi Kleen
                   ` (2 preceding siblings ...)
  2011-05-13 23:46 ` [PATCH 3/4] VM/RMAP: Batch anon_vma_unlink in exit Andi Kleen
@ 2011-05-13 23:46 ` Andi Kleen
  3 siblings, 0 replies; 10+ messages in thread
From: Andi Kleen @ 2011-05-13 23:46 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Now that the avc locking is batched, move the freeing of AVCs
outside the lock. This lowers lock contention somewhat further on
a fork/exit-intensive workload.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 mm/rmap.c |   24 ++++++++++++++----------
 1 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index d4d3680..57f8ba9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -302,7 +302,6 @@ static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain,
 			    struct anon_vma_lock_state *avs)
 {
 	struct anon_vma *anon_vma = anon_vma_chain->anon_vma;
-	int empty;
 
 	/* If anon_vma_fork fails, we can get an empty anon_vma_chain. */
 	if (!anon_vma)
@@ -310,19 +309,14 @@ static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain,
 
 	anon_vma_lock_batch(anon_vma, avs);
 	list_del(&anon_vma_chain->same_anon_vma);
-
-	/* We must garbage collect the anon_vma if it's empty */
-	empty = list_empty(&anon_vma->head);
-
-	if (empty)
-		put_anon_vma(anon_vma);
 }
 
 void unlink_anon_vmas(struct vm_area_struct *vma)
 {
 	struct anon_vma_chain *avc, *next;
 	struct anon_vma_lock_state avs;
-	
+	LIST_HEAD(avmas);
+
 	/*
 	 * Unlink each anon_vma chained to the VMA.  This list is ordered
 	 * from newest to oldest, ensuring the root anon_vma gets freed last.
@@ -330,10 +324,20 @@ void unlink_anon_vmas(struct vm_area_struct *vma)
 	anon_vma_lock_batch_init(&avs);
 	list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
 		anon_vma_unlink(avc, &avs);
-		list_del(&avc->same_vma);
-		anon_vma_chain_free(avc);
+		list_move(&avc->same_vma, &avmas);
 	}
 	anon_vma_unlock_batch(&avs);
+
+	/* Now free them outside the lock */
+	list_for_each_entry_safe(avc, next, &avmas, same_vma) {
+		/* 
+		 * list_empty check can be done lockless because
+	 * once it is empty no one will re-add.
+		 */
+		if (list_empty(&avc->anon_vma->head))
+			put_anon_vma(avc->anon_vma);
+		anon_vma_chain_free(avc);		
+	}
 }
 
 static void anon_vma_ctor(void *data)
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
  2011-05-13 23:46 ` [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2 Andi Kleen
@ 2011-05-18 20:25   ` Andrew Morton
  2011-05-18 21:14     ` Andi Kleen
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2011-05-18 20:25 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-mm, Andi Kleen, Andrea Arcangeli

On Fri, 13 May 2011 16:46:21 -0700
Andi Kleen <andi@firstfloor.org> wrote:

> In fork and exit it's quite common to take same rmap chain locks
> again and again when the whole address space is processed  for a
> address space that has a lot of sharing. Also since the locking
> has changed to always lock the root anon_vma this can be very
> contended.
> 
> This patch adds a simple wrapper to batch these lock acquisitions
> and only reaquire the lock when another is needed. The main
> advantage is that when multiple processes are doing this in
> parallel they will avoid a lot of communication overhead
> on the lock cache line.
> 
> v2: Address review feedback. Drop lockbreak. Rename init function.

Doesn't compile:

include/linux/rmap.h: In function 'anon_vma_unlock_batch':
include/linux/rmap.h:146: error: 'struct anon_vma' has no member named 'lock'
mm/rmap.c: In function '__anon_vma_lock_batch':
mm/rmap.c:1737: error: 'struct anon_vma' has no member named 'lock'
mm/rmap.c:1739: error: 'struct anon_vma' has no member named 'lock'

I think I reported this against the v1 patches.


Please fix up the checkpatch errors?


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
  2011-05-18 20:25   ` Andrew Morton
@ 2011-05-18 21:14     ` Andi Kleen
  2011-05-18 21:20       ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2011-05-18 21:14 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Andi Kleen, linux-mm, Andi Kleen, Andrea Arcangeli

On Wed, May 18, 2011 at 01:25:47PM -0700, Andrew Morton wrote:
> On Fri, 13 May 2011 16:46:21 -0700
> Andi Kleen <andi@firstfloor.org> wrote:
> 
> > In fork and exit it's quite common to take same rmap chain locks
> > again and again when the whole address space is processed  for a
> > address space that has a lot of sharing. Also since the locking
> > has changed to always lock the root anon_vma this can be very
> > contended.
> > 
> > This patch adds a simple wrapper to batch these lock acquisitions
> > and only reaquire the lock when another is needed. The main
> > advantage is that when multiple processes are doing this in
> > parallel they will avoid a lot of communication overhead
> > on the lock cache line.
> > 
> > v2: Address review feedback. Drop lockbreak. Rename init function.
> 
> Doesn't compile:
> 
> include/linux/rmap.h: In function 'anon_vma_unlock_batch':
> include/linux/rmap.h:146: error: 'struct anon_vma' has no member named 'lock'
> mm/rmap.c: In function '__anon_vma_lock_batch':
> mm/rmap.c:1737: error: 'struct anon_vma' has no member named 'lock'
> mm/rmap.c:1739: error: 'struct anon_vma' has no member named 'lock'
> 
> I think I reported this against the v1 patches.

Hmm is that against -mm? Which tree exactly?

Both in Linus' latest and in -next I have

include/linux/rmap.h:

struct anon_vma {
        struct anon_vma *root;  /* Root of this anon_vma tree */
        spinlock_t lock;        /* Serialize access to vma list */

So it should compile. And it compiles here of course.

To be honest I forgot where the -mm tree is, so I can't check. 
It's not in its old place on kernel.org anymore?

I'll look into checkpatch.

-Andi


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
  2011-05-18 21:14     ` Andi Kleen
@ 2011-05-18 21:20       ` Andrew Morton
  2011-05-18 21:35         ` Andi Kleen
  0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2011-05-18 21:20 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-mm, Andi Kleen, Andrea Arcangeli

On Wed, 18 May 2011 23:14:43 +0200
Andi Kleen <andi@firstfloor.org> wrote:

> On Wed, May 18, 2011 at 01:25:47PM -0700, Andrew Morton wrote:
> > On Fri, 13 May 2011 16:46:21 -0700
> > Andi Kleen <andi@firstfloor.org> wrote:
> > 
> > > In fork and exit it's quite common to take same rmap chain locks
> > > again and again when the whole address space is processed  for a
> > > address space that has a lot of sharing. Also since the locking
> > > has changed to always lock the root anon_vma this can be very
> > > contended.
> > > 
> > > This patch adds a simple wrapper to batch these lock acquisitions
> > > and only reaquire the lock when another is needed. The main
> > > advantage is that when multiple processes are doing this in
> > > parallel they will avoid a lot of communication overhead
> > > on the lock cache line.
> > > 
> > > v2: Address review feedback. Drop lockbreak. Rename init function.
> > 
> > Doesn't compile:
> > 
> > include/linux/rmap.h: In function 'anon_vma_unlock_batch':
> > include/linux/rmap.h:146: error: 'struct anon_vma' has no member named 'lock'
> > mm/rmap.c: In function '__anon_vma_lock_batch':
> > mm/rmap.c:1737: error: 'struct anon_vma' has no member named 'lock'
> > mm/rmap.c:1739: error: 'struct anon_vma' has no member named 'lock'
> > 
> > I think I reported this against the v1 patches.
> 
> Hmm is that against -mm? Which tree exactly?
> 
> Both in Linus' latest and in -next I have
> 
> include/linux/rmap.h:
> 
> struct anon_vma {
>         struct anon_vma *root;  /* Root of this anon_vma tree */
>         spinlock_t lock;        /* Serialize access to vma list */
> 
> So it should compile. And it compiles here of course.

ah, whoop, sorry.  It's getting turned into a mutex, by
mm-convert-anon_vma-lock-to-a-mutex.patch

> To be honest I forgot where the -mm tree is, so I can't check. 
> It's not in its old place on kernel.org anymore?

http://userweb.kernel.org/~akpm/mmotm/

If you dive into the `series' file you'll see that this patch is part of this
series:

mm-mmu_gather-rework.patch
mm-mmu_gather-rework-fix.patch
powerpc-mmu_gather-rework.patch
sparc-mmu_gather-rework.patch
s390-mmu_gather-rework.patch
arm-mmu_gather-rework.patch
sh-mmu_gather-rework.patch
ia64-mmu_gather-rework.patch
um-mmu_gather-rework.patch
mm-now-that-all-old-mmu_gather-code-is-gone-remove-the-storage.patch
mm-powerpc-move-the-rcu-page-table-freeing-into-generic-code.patch
mm-extended-batches-for-generic-mmu_gather.patch
mm-extended-batches-for-generic-mmu_gather-fix.patch
lockdep-mutex-provide-mutex_lock_nest_lock.patch
mm-remove-i_mmap_lock-lockbreak.patch
mm-convert-i_mmap_lock-to-a-mutex.patch
mm-revert-page_lock_anon_vma-lock-annotation.patch
mm-improve-page_lock_anon_vma-comment.patch
mm-use-refcounts-for-page_lock_anon_vma.patch
mm-convert-anon_vma-lock-to-a-mutex.patch
mm-optimize-page_lock_anon_vma-fast-path.patch
mm-uninline-large-generic-tlbh-functions.patch


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
  2011-05-18 21:20       ` Andrew Morton
@ 2011-05-18 21:35         ` Andi Kleen
  2011-05-18 21:39           ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: Andi Kleen @ 2011-05-18 21:35 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Andi Kleen, linux-mm, Andi Kleen, Andrea Arcangeli

> ah, whoop, sorry.  It's getting turned into a mutex, by
> mm-convert-anon_vma-lock-to-a-mutex.patch

Do you plan to merge that next cycle? If yes I can rebase on top
of that. Otherwise the other way round?

> > To be honest I forgot where the -mm tree is, so I can't check. 
> > It's not in its old place on kernel.org anymore?
> 
> http://userweb.kernel.org/~akpm/mmotm/
> 
> If you dive into the `series' file you'll see that this patch is part of this
> series:

Thanks.

-Andi


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
  2011-05-18 21:35         ` Andi Kleen
@ 2011-05-18 21:39           ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2011-05-18 21:39 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-mm, Andi Kleen, Andrea Arcangeli

On Wed, 18 May 2011 23:35:56 +0200
Andi Kleen <andi@firstfloor.org> wrote:

> > ah, whoop, sorry.  It's getting turned into a mutex, by
> > mm-convert-anon_vma-lock-to-a-mutex.patch
> 
> Do you plan to merge that next cycle?

I expect so, yes.

> If yes I can rebase on top
> of that. Otherwise the other way round?

Well, I'm rather in shutdown-for-2.6.39 mode now anyway.  So I'd
suggest that you wait and redo the patches on 2.6.40-rc1.


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2011-05-18 21:39 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-05-13 23:46 Batch locking for rmap fork/exit processing v2 Andi Kleen
2011-05-13 23:46 ` [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2 Andi Kleen
2011-05-18 20:25   ` Andrew Morton
2011-05-18 21:14     ` Andi Kleen
2011-05-18 21:20       ` Andrew Morton
2011-05-18 21:35         ` Andi Kleen
2011-05-18 21:39           ` Andrew Morton
2011-05-13 23:46 ` [PATCH 2/4] VM/RMAP: Batch anon vma chain root locking in fork Andi Kleen
2011-05-13 23:46 ` [PATCH 3/4] VM/RMAP: Batch anon_vma_unlink in exit Andi Kleen
2011-05-13 23:46 ` [PATCH 4/4] VM/RMAP: Move avc freeing outside the lock Andi Kleen
