* [PATCH 0/2] mm: fixlets
@ 2013-05-31 10:53 ` Vineet Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2013-05-31 10:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-arch, linux-kernel, Max Filippov, Vineet Gupta

Hi Andrew,

Max Filippov reported a generic MM issue with PTE/TLB coherency
@ http://www.spinics.net/lists/linux-arch/msg21736.html

While the fix for that issue is still being discussed, here is a bunch of
mm fixlets we found along the way.

In fact, 1/2 looks like stable material, as the original code was flushing
the wrong range from the TLB wherever it was used.

Please consider applying.

Thx,
-Vineet


Vineet Gupta (2):
  mm: Fix the TLB range flushed when __tlb_remove_page() runs out of
    slots
  mm: tlb_fast_mode check missing in tlb_finish_mmu()

 mm/memory.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

-- 
1.7.10.4


* [PATCH 1/2] mm: Fix the TLB range flushed when __tlb_remove_page() runs out of slots
@ 2013-05-31 10:53   ` Vineet Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2013-05-31 10:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-arch, linux-kernel, Max Filippov, Vineet Gupta,
	Mel Gorman, Hugh Dickins, Rik van Riel, David Rientjes,
	Peter Zijlstra, Catalin Marinas, Alex Shi

zap_pte_range() loops from @addr to @end. If it runs out of batching slots
partway through, TLB entries need to be flushed for @start to @interim,
NOT @interim to @end.

Since the ARC port doesn't use page free batching, I can't test this myself,
but it seems like the right thing to do.
Observed this while working on a fix for the issue discussed in this thread:
	http://www.spinics.net/lists/linux-arch/msg21736.html
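
A concrete example (hypothetical addresses, assuming 4K pages): suppose we
are zapping 0x1000..0x9000 and the batch fills once everything up to 0x5000
has been unmapped. The old code set the flush range to 0x5000..0x9000 -
pages whose PTEs were still live - while the freshly unmapped 0x1000..0x5000
span stayed stale in the TLB. With @range_start tracking the restart point,
each flush covers exactly what was unmapped since the previous flush; in
pseudo-C:

	tlb->start = range_start;	/* first address unmapped this round */
	tlb->end   = addr;		/* one past the last unmapped page */
	tlb_flush_mmu(tlb);
	if (addr != end) {
		range_start = addr;	/* resume tracking from here */
		goto again;
	}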

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Alex Shi <alex.shi@intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/memory.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 6dc1882..d9d5fd9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1110,6 +1110,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
+	unsigned long range_start = addr;
 
 again:
 	init_rss_vec(rss);
@@ -1215,12 +1216,14 @@ again:
 		force_flush = 0;
 
 #ifdef HAVE_GENERIC_MMU_GATHER
-		tlb->start = addr;
-		tlb->end = end;
+		tlb->start = range_start;
+		tlb->end = addr;
 #endif
 		tlb_flush_mmu(tlb);
-		if (addr != end)
+		if (addr != end) {
+			range_start = addr;
 			goto again;
+		}
 	}
 
 	return addr;
-- 
1.7.10.4


* [PATCH 2/2] mm: tlb_fast_mode check missing in tlb_finish_mmu()
@ 2013-05-31 10:53   ` Vineet Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Vineet Gupta @ 2013-05-31 10:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-arch, linux-kernel, Max Filippov, Vineet Gupta,
	Mel Gorman, Hugh Dickins, Rik van Riel, David Rientjes,
	Peter Zijlstra

This removes some unused generated code for the tlb_fast_mode() == true case.
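
For reference, the generic definition at the time (include/asm-generic/tlb.h;
quoted from memory, so treat the exact shape as approximate) shows why the
early return is safe - in fast mode pages are freed immediately, no batch
pages are ever chained onto the gather, and the freeing loop in
tlb_finish_mmu() is dead code:

	static inline int tlb_fast_mode(struct mmu_gather *tlb)
	{
	#ifdef CONFIG_SMP
		return tlb->fast_mode;
	#else
		/*
		 * UP: pages are freed as we go, nothing is batched,
		 * so tlb->local.next stays NULL and the loop in
		 * tlb_finish_mmu() never iterates.
		 */
		return 1;
	#endif
	}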

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
---
 mm/memory.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index d9d5fd9..569ffe1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -269,6 +269,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e
 	/* keep the page table cache within bounds */
 	check_pgt_cache();
 
+	if (tlb_fast_mode(tlb))
+		return;
+
 	for (batch = tlb->local.next; batch; batch = next) {
 		next = batch->next;
 		free_pages((unsigned long)batch, 0);
-- 
1.7.10.4


end of thread, other threads:[~2013-05-31 10:57 UTC | newest]

Thread overview: 9+ messages
-- links below jump to the message on this page --
2013-05-31 10:53 [PATCH 0/2] mm: fixlets Vineet Gupta
2013-05-31 10:53 ` [PATCH 1/2] mm: Fix the TLB range flushed when __tlb_remove_page() runs out of slots Vineet Gupta
2013-05-31 10:53 ` [PATCH 2/2] mm: tlb_fast_mode check missing in tlb_finish_mmu() Vineet Gupta
