* [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
@ 2019-09-20 19:50 ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

This series introduces a counting-based method to monitor lockless pagetable
walks, so THP splitting/collapsing can be avoided while such a walk is in
progress.

Today this relies on local_irq_{save,restore} plus serializing against every
CPU in the process' cpumask, which can be slow when a lot of cpus are used by
the process. With a per-mm counter of running lockless pagetable walks, that
serialization can be skipped when no walk is in progress (patch 11). The
counter does not replace the current irq-based method; it works as a
complement to avoid unnecessary waiting.
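
As an illustration of the intent only (a rough sketch assuming the counter
added in patch 01; the real change lands in patch 11 and may differ), the
serializing side can then skip the costly cross-CPU call when no lockless
walk is running:

	/* Sketch only -- not the actual patch 11 code */
	void serialize_against_pte_lookup(struct mm_struct *mm)
	{
		smp_mb();
		/* No lockless pgtable walk running: nothing to wait for */
		if (!running_lockless_pgtbl_walk(mm))
			return;
		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
	}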

Leonardo Bras (11):
  powerpc/mm: Adds counting method to monitor lockless pgtable walks
  asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
    walks
  mm/gup: Applies counting method to monitor gup_pgd_range
  powerpc/mce_power: Applies counting method to monitor lockless pgtbl
    walks
  powerpc/perf: Applies counting method to monitor lockless pgtbl walks
  powerpc/mm/book3s64/hash: Applies counting method to monitor lockless
    pgtbl walks
  powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl
    walks
  powerpc/kvm/book3s_hv: Applies counting method to monitor lockless
    pgtbl walks
  powerpc/kvm/book3s_64: Applies counting method to monitor lockless
    pgtbl walks
  powerpc/book3s_64: Enables counting method to monitor lockless pgtbl
    walk
  powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing

 arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
 arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
 arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
 arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
 arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
 arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
 arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
 arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
 arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
 arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
 arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
 arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
 arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
 arch/powerpc/perf/callchain.c                |  5 ++++-
 include/asm-generic/pgtable.h                |  9 +++++++++
 mm/gup.c                                     |  4 ++++
 16 files changed, 108 insertions(+), 8 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 72+ messages in thread

* [PATCH v2 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

It's necessary to monitor lockless pagetable walks, in order to avoid doing
THP splitting/collapsing during them.

Some methods rely on local_irq_{save,restore}, but that can be slow in
cases where a lot of cpus are used by the process.

In order to speed up some cases, I propose a refcount-based approach that
counts the number of lockless pagetable walks happening in the process.

This method does not exclude the current irq-oriented method. It works as a
complement to skip unnecessary waiting.

start_lockless_pgtbl_walk(mm)
	Insert before starting any lockless pgtable walk
end_lockless_pgtbl_walk(mm)
	Insert after the end of any lockless pgtable walk
	(Mostly after the ptep is last used)
running_lockless_pgtbl_walk(mm)
	Returns the number of lockless pgtable walks running
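
For illustration only (this mirrors the pattern used by the later patches in
the series, e.g. addr_to_pfn() in patch 04), a lockless walk would be bounded
like this:

	start_lockless_pgtbl_walk(mm);	/* announce a walk is in flight */
	local_irq_save(flags);		/* existing irq-based protection stays */
	ptep = find_current_mm_pte(mm->pgd, addr, NULL, NULL);
	if (ptep)
		pte = READ_ONCE(*ptep);	/* last use of the ptep */
	local_irq_restore(flags);
	end_lockless_pgtbl_walk(mm);	/* walk done, drop the count */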

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/mmu.h |  3 +++
 arch/powerpc/mm/book3s64/mmu_context.c   |  1 +
 arch/powerpc/mm/book3s64/pgtable.c       | 17 +++++++++++++++++
 3 files changed, 21 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 23b83d3593e2..13b006e7dde4 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -116,6 +116,9 @@ typedef struct {
 	/* Number of users of the external (Nest) MMU */
 	atomic_t copros;
 
+	/* Number of running instances of lockless pagetable walk */
+	atomic_t lockless_pgtbl_walk_count;
+
 	struct hash_mm_context *hash_context;
 
 	unsigned long vdso_base;
diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index 2d0cb5ba9a47..3dd01c0ca5be 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -200,6 +200,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 #endif
 	atomic_set(&mm->context.active_cpus, 0);
 	atomic_set(&mm->context.copros, 0);
+	atomic_set(&mm->context.lockless_pgtbl_walk_count, 0);
 
 	return 0;
 }
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 7d0e0d0d22c4..13239b17a22c 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -98,6 +98,23 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
 	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
 }
 
+void start_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
+}
+EXPORT_SYMBOL(start_lockless_pgtbl_walk);
+
+void end_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
+}
+EXPORT_SYMBOL(end_lockless_pgtbl_walk);
+
+int running_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+	return atomic_read(&mm->context.lockless_pgtbl_walk_count);
+}
+
 /*
  * We use this to invalidate a pmdp entry before switching from a
  * hugepte to regular pmd entry.
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

There is a need to monitor lockless pagetable walks, in order to avoid
doing THP splitting/collapsing during them.

Some methods rely on local_irq_{save,restore}, but that can be slow in
cases where a lot of cpus are used by the process.

In order to speed up these cases, I propose a refcount-based approach that
counts the number of lockless pagetable walks happening in the process.

Given that there are lockless pagetable walks in generic code, it's
necessary to create dummy functions for archs that won't use the approach.
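
A minimal sketch of how an arch opts out of these dummies (the exact header
and placement are an assumption here; the powerpc enablement comes later in
the series):

	/* in the arch's pgtable header -- illustrative only */
	#define __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
	void start_lockless_pgtbl_walk(struct mm_struct *mm);
	void end_lockless_pgtbl_walk(struct mm_struct *mm);
	int running_lockless_pgtbl_walk(struct mm_struct *mm);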

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 include/asm-generic/pgtable.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 75d9d68a6de7..6eb4fabb5595 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1172,6 +1172,15 @@ static inline bool arch_has_pfn_modify_check(void)
 #endif
 #endif
 
+#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
+static inline void start_lockless_pgtbl_walk(struct mm_struct *mm) { }
+static inline void end_lockless_pgtbl_walk(struct mm_struct *mm) { }
+static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+	return 0;
+}
+#endif
+
 /*
  * On some architectures it depends on the mm if the p4d/pud or pmd
  * layer of the page table hierarchy is folded or not.
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

As described, gup_pgd_range() is a lockless pagetable walk. So, in order to
monitor against THP split/collapse with the counting method, it's necessary
to bound it with {start,end}_lockless_pgtbl_walk().

Dummy functions are provided, so this does not add any overhead on archs
that don't use this method.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 mm/gup.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..675e4be27082 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2404,6 +2404,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
 {
 	unsigned long addr, len, end;
+	struct mm_struct *mm;
 	int nr = 0, ret = 0;
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM)))
@@ -2421,9 +2422,12 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
 	    gup_fast_permitted(start, end)) {
+		mm = current->mm;
+		start_lockless_pgtbl_walk(mm);
 		local_irq_disable();
 		gup_pgd_range(addr, end, gup_flags, pages, &nr);
 		local_irq_enable();
+		end_lockless_pgtbl_walk(mm);
 		ret = nr;
 	}
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 04/11] powerpc/mce_power: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Applies the counting-based method for monitoring lockless pgtable walks on
addr_to_pfn().

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/kernel/mce_power.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
index a814d2dfb5b0..0f2f87da4cd1 100644
--- a/arch/powerpc/kernel/mce_power.c
+++ b/arch/powerpc/kernel/mce_power.c
@@ -27,6 +27,7 @@ unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
 {
 	pte_t *ptep;
 	unsigned long flags;
+	unsigned long pfn;
 	struct mm_struct *mm;
 
 	if (user_mode(regs))
@@ -34,15 +35,21 @@ unsigned long addr_to_pfn(struct pt_regs *regs, unsigned long addr)
 	else
 		mm = &init_mm;
 
+	start_lockless_pgtbl_walk(mm);
 	local_irq_save(flags);
 	if (mm == current->mm)
 		ptep = find_current_mm_pte(mm->pgd, addr, NULL, NULL);
 	else
 		ptep = find_init_mm_pte(addr, NULL);
-	local_irq_restore(flags);
+
 	if (!ptep || pte_special(*ptep))
-		return ULONG_MAX;
-	return pte_pfn(*ptep);
+		pfn = ULONG_MAX;
+	else
+		pfn = pte_pfn(*ptep);
+
+	local_irq_restore(flags);
+	end_lockless_pgtbl_walk(mm);
+	return pfn;
 }
 
 /* flush SLBs and reload */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 05/11] powerpc/perf: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Applies the counting-based method for monitoring lockless pgtable walks on
read_user_stack_slow.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/perf/callchain.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index c84bbd4298a0..9d76194a2a8f 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -113,16 +113,18 @@ static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 	int ret = -EFAULT;
 	pgd_t *pgdir;
 	pte_t *ptep, pte;
+	struct mm_struct *mm = current->mm;
 	unsigned shift;
 	unsigned long addr = (unsigned long) ptr;
 	unsigned long offset;
 	unsigned long pfn, flags;
 	void *kaddr;
 
-	pgdir = current->mm->pgd;
+	pgdir = mm->pgd;
 	if (!pgdir)
 		return -EFAULT;
 
+	start_lockless_pgtbl_walk(mm);
 	local_irq_save(flags);
 	ptep = find_current_mm_pte(pgdir, addr, NULL, &shift);
 	if (!ptep)
@@ -146,6 +148,7 @@ static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 	ret = 0;
 err_out:
 	local_irq_restore(flags);
+	end_lockless_pgtbl_walk(mm);
 	return ret;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 06/11] powerpc/mm/book3s64/hash: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Applies the counting-based method for monitoring all hash-related functions
that do lockless pagetable walks.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/hash_tlb.c   | 2 ++
 arch/powerpc/mm/book3s64/hash_utils.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/arch/powerpc/mm/book3s64/hash_tlb.c b/arch/powerpc/mm/book3s64/hash_tlb.c
index 4a70d8dd39cd..5e5213c3f7c4 100644
--- a/arch/powerpc/mm/book3s64/hash_tlb.c
+++ b/arch/powerpc/mm/book3s64/hash_tlb.c
@@ -209,6 +209,7 @@ void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
 	 * to being hashed). This is not the most performance oriented
 	 * way to do things but is fine for our needs here.
 	 */
+	start_lockless_pgtbl_walk(mm);
 	local_irq_save(flags);
 	arch_enter_lazy_mmu_mode();
 	for (; start < end; start += PAGE_SIZE) {
@@ -230,6 +231,7 @@ void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
 	}
 	arch_leave_lazy_mmu_mode();
 	local_irq_restore(flags);
+	end_lockless_pgtbl_walk(mm);
 }
 
 void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index b8ad14bb1170..299946cedc3a 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1322,6 +1322,7 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 #endif /* CONFIG_PPC_64K_PAGES */
 
 	/* Get PTE and page size from page tables */
+	start_lockless_pgtbl_walk(mm);
 	ptep = find_linux_pte(pgdir, ea, &is_thp, &hugeshift);
 	if (ptep == NULL || !pte_present(*ptep)) {
 		DBG_LOW(" no PTE !\n");
@@ -1438,6 +1439,7 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 	DBG_LOW(" -> rc=%d\n", rc);
 
 bail:
+	end_lockless_pgtbl_walk(mm);
 	exception_exit(prev_state);
 	return rc;
 }
@@ -1547,10 +1549,12 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
 	vsid = get_user_vsid(&mm->context, ea, ssize);
 	if (!vsid)
 		return;
+
 	/*
 	 * Hash doesn't like irqs. Walking linux page table with irq disabled
 	 * saves us from holding multiple locks.
 	 */
+	start_lockless_pgtbl_walk(mm);
 	local_irq_save(flags);
 
 	/*
@@ -1597,6 +1601,7 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
 				   pte_val(*ptep));
 out_exit:
 	local_irq_restore(flags);
+	end_lockless_pgtbl_walk(mm);
 }
 
 #ifdef CONFIG_PPC_MEM_KEYS
@@ -1613,11 +1618,13 @@ u16 get_mm_addr_key(struct mm_struct *mm, unsigned long address)
 	if (!mm || !mm->pgd)
 		return 0;
 
+	start_lockless_pgtbl_walk(mm);
 	local_irq_save(flags);
 	ptep = find_linux_pte(mm->pgd, address, NULL, NULL);
 	if (ptep)
 		pkey = pte_to_pkey_bits(pte_val(READ_ONCE(*ptep)));
 	local_irq_restore(flags);
+	end_lockless_pgtbl_walk(mm);
 
 	return pkey;
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 07/11] powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Applies the counting-based method for monitoring lockless pgtable walks on
kvmppc_e500_shadow_map().

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/kvm/e500_mmu_host.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 321db0fdb9db..a1b8bfe20bc8 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -473,6 +473,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	 * We are holding kvm->mmu_lock so a notifier invalidate
 	 * can't run hence pfn won't change.
 	 */
+	start_lockless_pgtbl_walk(kvm->mm);
 	local_irq_save(flags);
 	ptep = find_linux_pte(pgdir, hva, NULL, NULL);
 	if (ptep) {
@@ -484,12 +485,15 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 			local_irq_restore(flags);
 		} else {
 			local_irq_restore(flags);
+			end_lockless_pgtbl_walk(kvm->mm);
 			pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
 					   __func__, (long)gfn, pfn);
 			ret = -EINVAL;
 			goto out;
 		}
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
+
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
 
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 08/11] powerpc/kvm/book3s_hv: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Applies the counting-based method for monitoring all book3s_hv related
functions that do lockless pagetable walks.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_hv_nested.c | 8 ++++++++
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 9 ++++++++-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 735e0ac6f5b2..ed68e57af3a3 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -804,6 +804,7 @@ static void kvmhv_update_nest_rmap_rc(struct kvm *kvm, u64 n_rmap,
 		return;
 
 	/* Find the pte */
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
 	/*
 	 * If the pte is present and the pfn is still the same, update the pte.
@@ -815,6 +816,7 @@ static void kvmhv_update_nest_rmap_rc(struct kvm *kvm, u64 n_rmap,
 		__radix_pte_update(ptep, clr, set);
 		kvmppc_radix_tlbie_page(kvm, gpa, shift, lpid);
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 }
 
 /*
@@ -854,10 +856,12 @@ static void kvmhv_remove_nest_rmap(struct kvm *kvm, u64 n_rmap,
 		return;
 
 	/* Find and invalidate the pte */
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
 	/* Don't spuriously invalidate ptes if the pfn has changed */
 	if (ptep && pte_present(*ptep) && ((pte_val(*ptep) & mask) == hpa))
 		kvmppc_unmap_pte(kvm, ptep, gpa, shift, NULL, gp->shadow_lpid);
+	end_lockless_pgtbl_walk(kvm->mm);
 }
 
 static void kvmhv_remove_nest_rmap_list(struct kvm *kvm, unsigned long *rmapp,
@@ -921,6 +925,7 @@ static bool kvmhv_invalidate_shadow_pte(struct kvm_vcpu *vcpu,
 	int shift;
 
 	spin_lock(&kvm->mmu_lock);
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
 	if (!shift)
 		shift = PAGE_SHIFT;
@@ -928,6 +933,7 @@ static bool kvmhv_invalidate_shadow_pte(struct kvm_vcpu *vcpu,
 		kvmppc_unmap_pte(kvm, ptep, gpa, shift, NULL, gp->shadow_lpid);
 		ret = true;
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 	spin_unlock(&kvm->mmu_lock);
 
 	if (shift_ret)
@@ -1362,11 +1368,13 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
 	/* See if can find translation in our partition scoped tables for L1 */
 	pte = __pte(0);
 	spin_lock(&kvm->mmu_lock);
+	start_lockless_pgtbl_walk(kvm->mm);
 	pte_p = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (!shift)
 		shift = PAGE_SHIFT;
 	if (pte_p)
 		pte = *pte_p;
+	end_lockless_pgtbl_walk(kvm->mm);
 	spin_unlock(&kvm->mmu_lock);
 
 	if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 63e0ce91e29d..53ca67492211 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -258,6 +258,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 	 * If called in real mode we have MSR_EE = 0. Otherwise
 	 * we disable irq above.
 	 */
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(pgdir, hva, NULL, &hpage_shift);
 	if (ptep) {
 		pte_t pte;
@@ -311,6 +312,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		ptel &= ~(HPTE_R_W|HPTE_R_I|HPTE_R_G);
 		ptel |= HPTE_R_M;
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 
 	/* Find and lock the HPTEG slot to use */
  do_insert:
@@ -886,10 +888,15 @@ static int kvmppc_get_hpa(struct kvm_vcpu *vcpu, unsigned long gpa,
 	hva = __gfn_to_hva_memslot(memslot, gfn);
 
 	/* Try to find the host pte for that virtual address */
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
-	if (!ptep)
+	if (!ptep) {
+		end_lockless_pgtbl_walk(kvm->mm);
 		return H_TOO_HARD;
+	}
 	pte = kvmppc_read_update_linux_pte(ptep, writing);
+	end_lockless_pgtbl_walk(kvm->mm);
+
 	if (!pte_present(pte))
 		return H_TOO_HARD;
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 09/11] powerpc/kvm/book3s_64: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Applies the counting-based method for monitoring all book3s_64-related
functions that do lockless pagetable walks.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
It may be necessary to merge an older patch first:
powerpc: kvm: Reduce calls to get current->mm by storing the value locally
Link: https://lore.kernel.org/linuxppc-dev/20190919222748.20761-1-leonardo@linux.ibm.com/

 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  2 ++
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 20 ++++++++++++++++++--
 arch/powerpc/kvm/book3s_64_vio_hv.c    |  4 ++++
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index f2b9aea43216..a938733c3f7b 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -622,6 +622,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 * We need to protect against page table destruction
 			 * hugepage split and collapse.
 			 */
+			start_lockless_pgtbl_walk(mm);
 			local_irq_save(flags);
 			ptep = find_current_mm_pte(mm->pgd, hva, NULL, NULL);
 			if (ptep) {
@@ -630,6 +631,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 					write_ok = 1;
 			}
 			local_irq_restore(flags);
+			end_lockless_pgtbl_walk(mm);
 		}
 	}
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 2d415c36a61d..d46f8258d8d6 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -741,6 +741,7 @@ bool kvmppc_hv_handle_set_rc(struct kvm *kvm, pgd_t *pgtable, bool writing,
 	unsigned long pgflags;
 	unsigned int shift;
 	pte_t *ptep;
+	bool ret = false;
 
 	/*
 	 * Need to set an R or C bit in the 2nd-level tables;
@@ -755,12 +756,14 @@ bool kvmppc_hv_handle_set_rc(struct kvm *kvm, pgd_t *pgtable, bool writing,
 	 * We can do this without disabling irq because the Linux MM
 	 * subsystem doesn't do THP splits and collapses on this tree.
 	 */
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && (!writing || pte_write(*ptep))) {
 		kvmppc_radix_update_pte(kvm, ptep, 0, pgflags, gpa, shift);
-		return true;
+		ret = true;
 	}
-	return false;
+	end_lockless_pgtbl_walk(kvm->mm);
+	return ret;
 }
 
 int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
@@ -813,6 +816,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	 * Read the PTE from the process' radix tree and use that
 	 * so we get the shift and attribute bits.
 	 */
+	start_lockless_pgtbl_walk(kvm->mm);
 	local_irq_disable();
 	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
 	/*
@@ -821,12 +825,14 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	 */
 	if (!ptep) {
 		local_irq_enable();
+		end_lockless_pgtbl_walk(kvm->mm);
 		if (page)
 			put_page(page);
 		return RESUME_GUEST;
 	}
 	pte = *ptep;
 	local_irq_enable();
+	end_lockless_pgtbl_walk(kvm->mm);
 
 	/* If we're logging dirty pages, always map single pages */
 	large_enable = !(memslot->flags & KVM_MEM_LOG_DIRTY_PAGES);
@@ -972,10 +978,12 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned long gpa = gfn << PAGE_SHIFT;
 	unsigned int shift;
 
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep))
 		kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
 				 kvm->arch.lpid);
+	end_lockless_pgtbl_walk(kvm->mm);
 	return 0;				
 }
 
@@ -989,6 +997,7 @@ int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	int ref = 0;
 	unsigned long old, *rmapp;
 
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep)) {
 		old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_ACCESSED, 0,
@@ -1001,6 +1010,7 @@ int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 					       1UL << shift);
 		ref = 1;
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 	return ref;
 }
 
@@ -1013,9 +1023,11 @@ int kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned int shift;
 	int ref = 0;
 
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep))
 		ref = 1;
+	end_lockless_pgtbl_walk(kvm->mm);
 	return ref;
 }
 
@@ -1030,6 +1042,7 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
 	int ret = 0;
 	unsigned long old, *rmapp;
 
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
 		ret = 1;
@@ -1046,6 +1059,7 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
 					       1UL << shift);
 		spin_unlock(&kvm->mmu_lock);
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 	return ret;
 }
 
@@ -1084,6 +1098,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 
 	gpa = memslot->base_gfn << PAGE_SHIFT;
 	spin_lock(&kvm->mmu_lock);
+	start_lockless_pgtbl_walk(kvm->mm);
 	for (n = memslot->npages; n; --n) {
 		ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 		if (ptep && pte_present(*ptep))
@@ -1091,6 +1106,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 					 kvm->arch.lpid);
 		gpa += PAGE_SIZE;
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 	spin_unlock(&kvm->mmu_lock);
 }
 
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index b4f20f13b860..d7ea44f28993 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -431,6 +431,7 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 static long kvmppc_rm_ua_to_hpa(struct kvm_vcpu *vcpu,
 		unsigned long ua, unsigned long *phpa)
 {
+	struct kvm *kvm = vcpu->kvm;
 	pte_t *ptep, pte;
 	unsigned shift = 0;
 
@@ -443,10 +444,13 @@ static long kvmppc_rm_ua_to_hpa(struct kvm_vcpu *vcpu,
 	 * to exit which will agains result in the below page table walk
 	 * to finish.
 	 */
+
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(vcpu->arch.pgdir, ua, NULL, &shift);
 	if (!ptep || !pte_present(*ptep))
 		return -ENXIO;
 	pte = *ptep;
+	end_lockless_pgtbl_walk(kvm->mm);
 
 	if (!shift)
 		shift = PAGE_SHIFT;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

 	int ret = 0;
 	unsigned long old, *rmapp;
 
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
 		ret = 1;
@@ -1046,6 +1059,7 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
 					       1UL << shift);
 		spin_unlock(&kvm->mmu_lock);
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 	return ret;
 }
 
@@ -1084,6 +1098,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 
 	gpa = memslot->base_gfn << PAGE_SHIFT;
 	spin_lock(&kvm->mmu_lock);
+	start_lockless_pgtbl_walk(kvm->mm);
 	for (n = memslot->npages; n; --n) {
 		ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 		if (ptep && pte_present(*ptep))
@@ -1091,6 +1106,7 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 					 kvm->arch.lpid);
 		gpa += PAGE_SIZE;
 	}
+	end_lockless_pgtbl_walk(kvm->mm);
 	spin_unlock(&kvm->mmu_lock);
 }
 
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
index b4f20f13b860..d7ea44f28993 100644
--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -431,6 +431,7 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 static long kvmppc_rm_ua_to_hpa(struct kvm_vcpu *vcpu,
 		unsigned long ua, unsigned long *phpa)
 {
+	struct kvm *kvm = vcpu->kvm;
 	pte_t *ptep, pte;
 	unsigned shift = 0;
 
@@ -443,10 +444,13 @@ static long kvmppc_rm_ua_to_hpa(struct kvm_vcpu *vcpu,
 	 * to exit which will agains result in the below page table walk
 	 * to finish.
 	 */
+
+	start_lockless_pgtbl_walk(kvm->mm);
 	ptep = __find_linux_pte(vcpu->arch.pgdir, ua, NULL, &shift);
 	if (!ptep || !pte_present(*ptep))
 		return -ENXIO;
 	pte = *ptep;
+	end_lockless_pgtbl_walk(kvm->mm);
 
 	if (!shift)
 		shift = PAGE_SHIFT;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 10/11] powerpc/book3s_64: Enables counting method to monitor lockless pgtbl walk
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Enables the count-based monitoring method for lockless pagetable walks on
PowerPC book3s_64.

Other architectures/platforms fall back to the generic dummy functions.
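
(For reference, a minimal sketch of what such a generic fallback could
look like; this is only an illustration assuming the
__HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER guard declared below, and the
actual generic definitions are the ones added by patch 02 of this series:)

#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* dummy: no counting on architectures without the counter */
}

static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* dummy: no counting on architectures without the counter */
}

static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* dummy: never reports a lockless walk as running */
	return 0;
}
#endif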

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 06a6a5344770..7a8813dfaee9 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1391,5 +1391,10 @@ static inline bool pgd_is_leaf(pgd_t pgd)
 	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
 }
 
+#define __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
+void start_lockless_pgtbl_walk(struct mm_struct *mm);
+void end_lockless_pgtbl_walk(struct mm_struct *mm);
+int running_lockless_pgtbl_walk(struct mm_struct *mm);
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 10/11] powerpc/book3s_64: Enables counting method to monitor lockless pgtbl walk
@ 2019-09-20 19:50   ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Keith Busch, Richard Fontana, Paul Mackerras, Ira Weiny,
	Dan Williams, Aneesh Kumar K.V, YueHaibing, Mike Rapoport,
	Jason Gunthorpe, Ganesh Goudar, Mahesh Salgaonkar, Leonardo Bras,
	Arnd Bergmann, John Hubbard, Nicholas Piggin, Thomas Gleixner,
	Allison Randal, Greg Kroah-Hartman, Andrew Morton

Enables the count-based monitoring method for lockless pagetable walks on
PowerPC book3s_64.

Other architectures/platforms fall back to the generic dummy functions.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 06a6a5344770..7a8813dfaee9 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1391,5 +1391,10 @@ static inline bool pgd_is_leaf(pgd_t pgd)
 	return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
 }
 
+#define __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
+void start_lockless_pgtbl_walk(struct mm_struct *mm);
+void end_lockless_pgtbl_walk(struct mm_struct *mm);
+int running_lockless_pgtbl_walk(struct mm_struct *mm);
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:50   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Leonardo Bras, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Arnd Bergmann, Aneesh Kumar K.V,
	Christophe Leroy, Andrew Morton, Dan Williams, Nicholas Piggin,
	Mahesh Salgaonkar, Thomas Gleixner, Richard Fontana,
	Ganesh Goudar, Allison Randal, Greg Kroah-Hartman, Mike Rapoport,
	YueHaibing, Ira Weiny, Jason Gunthorpe, John Hubbard,
	Keith Busch

Skips the slow part of serialize_against_pte_lookup() if there is no
lockless pagetable walk currently running.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 13239b17a22c..41ca30269fa3 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -95,7 +95,8 @@ static void do_nothing(void *unused)
 void serialize_against_pte_lookup(struct mm_struct *mm)
 {
 	smp_mb();
-	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
+	if (running_lockless_pgtbl_walk(mm))
+		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
 }
 
 void start_lockless_pgtbl_walk(struct mm_struct *mm)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
@ 2019-09-20 19:50   ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:50 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Keith Busch, Richard Fontana, Paul Mackerras, Ira Weiny,
	Dan Williams, Aneesh Kumar K.V, YueHaibing, Mike Rapoport,
	Jason Gunthorpe, Ganesh Goudar, Mahesh Salgaonkar, Leonardo Bras,
	Arnd Bergmann, John Hubbard, Nicholas Piggin, Thomas Gleixner,
	Allison Randal, Greg Kroah-Hartman, Andrew Morton

Skips the slow part of serialize_against_pte_lookup() if there is no
lockless pagetable walk currently running.

Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 13239b17a22c..41ca30269fa3 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -95,7 +95,8 @@ static void do_nothing(void *unused)
 void serialize_against_pte_lookup(struct mm_struct *mm)
 {
 	smp_mb();
-	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
+	if (running_lockless_pgtbl_walk(mm))
+		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
 }
 
 void start_lockless_pgtbl_walk(struct mm_struct *mm)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 19:56   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:56 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, John Hubbard, Keith Busch

[-- Attachment #1: Type: text/plain, Size: 211 bytes --]

On Fri, 2019-09-20 at 16:50 -0300, Leonardo Bras wrote:
> *** BLURB HERE ***

Sorry, something went terribly wrong with my cover letter.
I will try to find it and send it here, or rewrite it.

Best regards,

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
@ 2019-09-20 19:56   ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 19:56 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, John Hubbard, Aneesh Kumar K.V, Ganesh Goudar,
	Andrew Morton, Ira Weiny, Dan Williams, Allison Randal

[-- Attachment #1: Type: text/plain, Size: 211 bytes --]

On Fri, 2019-09-20 at 16:50 -0300, Leonardo Bras wrote:
> *** BLURB HERE ***

Sorry, something went terribly wrong with my cover letter.
I will try to find it and send it here, or rewrite it.

Best regards,

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-20 19:50   ` Leonardo Bras
  (?)
@ 2019-09-20 20:11   ` John Hubbard
  2019-09-20 20:28       ` Leonardo Bras
  -1 siblings, 1 reply; 72+ messages in thread
From: John Hubbard @ 2019-09-20 20:11 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> Skips slow part of serialize_against_pte_lookup if there is no running
> lockless pagetable walk.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 13239b17a22c..41ca30269fa3 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>  void serialize_against_pte_lookup(struct mm_struct *mm)
>  {
>  	smp_mb();
> -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
> +	if (running_lockless_pgtbl_walk(mm))
> +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);

Hi,

If you do this, then you are left without any synchronization. So it will
have race conditions: a page table walk could begin right after the above
check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
will continue on right away, under the false assumption that it has let
all the current page table walks complete.

The current code uses either interrupts or RCU to synchronize, and in
either case, you end up scheduling something on each CPU. If you remove
that entirely, I don't see anything left. ("Pure" atomic counting is not
a synchronization technique all by itself.)

thanks,
-- 
John Hubbard
NVIDIA

>  }
>  
>  void start_lockless_pgtbl_walk(struct mm_struct *mm)
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
  2019-09-20 19:50 ` Leonardo Bras
@ 2019-09-20 20:12   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 20:12 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, John Hubbard, Keith Busch

[-- Attachment #1: Type: text/plain, Size: 3862 bytes --]

If a process (qemu) with a lot of CPUs (128) tries to munmap() a large
chunk of memory (496GB) mapped with THP, it takes an average of 275
seconds, which can cause a lot of problems for the workload (in the qemu
case, the guest will be locked up for that whole time).

Trying to find the source of this problem, I found that most of this time
is spent in serialize_against_pte_lookup(). This function takes a lot of
time in smp_call_function_many() if there is more than a couple of CPUs
running the user process. Since it has to happen for every mapped THP, it
takes a very long time for large amounts of memory.

According to the docs, serialize_against_pte_lookup() is needed in order
to prevent the pmd_t to pte_t casting inside find_current_mm_pte(), or any
other lockless pagetable walk, from happening concurrently with THP
splitting/collapsing.

It does so by calling do_nothing() on each CPU in mm->cpu_bitmap[], which
can only run on a given CPU after that CPU re-enables interrupts.
Since interrupts are (usually) disabled during a lockless pagetable walk,
and serialize_against_pte_lookup() only returns after do_nothing() has run
everywhere, the walk is protected.

From what I could understand, if there is no lockless pagetable walk
running, there is no need to call serialize_against_pte_lookup() at all.

So, to avoid that cost, I propose a counter that keeps track of how many
lockless pagetable walks (find_current_mm_pte() and friends) are currently
running, and if there are none, the smp_call_function_many() is simply
skipped.

The related functions are:
start_lockless_pgtbl_walk(mm)
	Insert before starting any lockless pgtable walk
end_lockless_pgtbl_walk(mm)
	Insert after the end of any lockless pgtable walk
	(Mostly after the ptep is last used)
running_lockless_pgtbl_walk(mm)
	Returns the number of lockless pgtable walks running


On my workload (qemu), I could see munmap()'s time drop from 275
seconds to 418 ms.
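
As an illustration, here is a minimal sketch of what these helpers could
look like on powerpc, assuming an atomic_t counter kept in the book3s_64
mm context (the field name below is only for this example; the real
implementation is in patch 01 of this series and may differ):

void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
}

void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
}

int running_lockless_pgtbl_walk(struct mm_struct *mm)
{
	return atomic_read(&mm->context.lockless_pgtbl_walk_count);
}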

> Leonardo Bras (11):
>   powerpc/mm: Adds counting method to monitor lockless pgtable walks
>   asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
>     walks
>   mm/gup: Applies counting method to monitor gup_pgd_range
>   powerpc/mce_power: Applies counting method to monitor lockless pgtbl
>     walks
>   powerpc/perf: Applies counting method to monitor lockless pgtbl walks
>   powerpc/mm/book3s64/hash: Applies counting method to monitor lockless
>     pgtbl walks
>   powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl
>     walks
>   powerpc/kvm/book3s_hv: Applies counting method to monitor lockless
>     pgtbl walks
>   powerpc/kvm/book3s_64: Applies counting method to monitor lockless
>     pgtbl walks
>   powerpc/book3s_64: Enables counting method to monitor lockless pgtbl
>     walk
>   powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
> 
>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>  arch/powerpc/perf/callchain.c                |  5 ++++-
>  include/asm-generic/pgtable.h                |  9 +++++++++
>  mm/gup.c                                     |  4 ++++
>  16 files changed, 108 insertions(+), 8 deletions(-)
> 

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
@ 2019-09-20 20:12   ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 20:12 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, John Hubbard, Aneesh Kumar K.V, Ganesh Goudar,
	Andrew Morton, Ira Weiny, Dan Williams, Allison Randal

[-- Attachment #1: Type: text/plain, Size: 3862 bytes --]

If a process (qemu) with a lot of CPUs (128) tries to munmap() a large
chunk of memory (496GB) mapped with THP, it takes an average of 275
seconds, which can cause a lot of problems for the workload (in the qemu
case, the guest will be locked up for that whole time).

Trying to find the source of this problem, I found that most of this time
is spent in serialize_against_pte_lookup(). This function takes a lot of
time in smp_call_function_many() if there is more than a couple of CPUs
running the user process. Since it has to happen for every mapped THP, it
takes a very long time for large amounts of memory.

According to the docs, serialize_against_pte_lookup() is needed in order
to prevent the pmd_t to pte_t casting inside find_current_mm_pte(), or any
other lockless pagetable walk, from happening concurrently with THP
splitting/collapsing.

It does so by calling do_nothing() on each CPU in mm->cpu_bitmap[], which
can only run on a given CPU after that CPU re-enables interrupts.
Since interrupts are (usually) disabled during a lockless pagetable walk,
and serialize_against_pte_lookup() only returns after do_nothing() has run
everywhere, the walk is protected.

From what I could understand, if there is no lockless pagetable walk
running, there is no need to call serialize_against_pte_lookup() at all.

So, to avoid that cost, I propose a counter that keeps track of how many
lockless pagetable walks (find_current_mm_pte() and friends) are currently
running, and if there are none, the smp_call_function_many() is simply
skipped.

The related functions are:
start_lockless_pgtbl_walk(mm)
	Insert before starting any lockless pgtable walk
end_lockless_pgtbl_walk(mm)
	Insert after the end of any lockless pgtable walk
	(Mostly after the ptep is last used)
running_lockless_pgtbl_walk(mm)
	Returns the number of lockless pgtable walks running


On my workload (qemu), I could see munmap()'s time drop from 275
seconds to 418 ms.

> Leonardo Bras (11):
>   powerpc/mm: Adds counting method to monitor lockless pgtable walks
>   asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
>     walks
>   mm/gup: Applies counting method to monitor gup_pgd_range
>   powerpc/mce_power: Applies counting method to monitor lockless pgtbl
>     walks
>   powerpc/perf: Applies counting method to monitor lockless pgtbl walks
>   powerpc/mm/book3s64/hash: Applies counting method to monitor lockless
>     pgtbl walks
>   powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl
>     walks
>   powerpc/kvm/book3s_hv: Applies counting method to monitor lockless
>     pgtbl walks
>   powerpc/kvm/book3s_64: Applies counting method to monitor lockless
>     pgtbl walks
>   powerpc/book3s_64: Enables counting method to monitor lockless pgtbl
>     walk
>   powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
> 
>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>  arch/powerpc/perf/callchain.c                |  5 ++++-
>  include/asm-generic/pgtable.h                |  9 +++++++++
>  mm/gup.c                                     |  4 ++++
>  16 files changed, 108 insertions(+), 8 deletions(-)
> 

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-20 20:11   ` John Hubbard
@ 2019-09-20 20:28       ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 20:28 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

[-- Attachment #1: Type: text/plain, Size: 2179 bytes --]

On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
> On 9/20/19 12:50 PM, Leonardo Bras wrote:
> > Skips slow part of serialize_against_pte_lookup if there is no running
> > lockless pagetable walk.
> > 
> > Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> > ---
> >  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> > index 13239b17a22c..41ca30269fa3 100644
> > --- a/arch/powerpc/mm/book3s64/pgtable.c
> > +++ b/arch/powerpc/mm/book3s64/pgtable.c
> > @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
> >  void serialize_against_pte_lookup(struct mm_struct *mm)
> >  {
> >  	smp_mb();
> > -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
> > +	if (running_lockless_pgtbl_walk(mm))
> > +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
> 
> Hi,
> 
> If you do this, then you are left without any synchronization. So it will
> have race conditions: a page table walk could begin right after the above
> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
> will continue on right away, under the false assumption that it has let
> all the current page table walks complete.
> 
> The current code uses either interrupts or RCU to synchronize, and in
> either case, you end up scheduling something on each CPU. If you remove
> that entirely, I don't see anything left. ("Pure" atomic counting is not
> a synchronization technique all by itself.)
> 
> thanks,

Hello John,
Thanks for the fast feedback.

See, before calling serialize_against_pte_lookup(), there is always an
update or clear of the pmd. So, if a page table walk begins right after
the check returns "false", there is no problem, since it will use the
updated pmd.

Think about serialize_against_pte_lookup() on a process with a bunch of
CPUs: after you have waited on the last CPU (the wait part), there is no
guarantee that the first one is not already starting a new lockless
pagetable walk.

The same mechanism protects both approaches.

Does it make sense?

Best regards,
Leonardo Bras



[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
@ 2019-09-20 20:28       ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-20 20:28 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

[-- Attachment #1: Type: text/plain, Size: 2179 bytes --]

On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
> On 9/20/19 12:50 PM, Leonardo Bras wrote:
> > Skips slow part of serialize_against_pte_lookup if there is no running
> > lockless pagetable walk.
> > 
> > Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> > ---
> >  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> > index 13239b17a22c..41ca30269fa3 100644
> > --- a/arch/powerpc/mm/book3s64/pgtable.c
> > +++ b/arch/powerpc/mm/book3s64/pgtable.c
> > @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
> >  void serialize_against_pte_lookup(struct mm_struct *mm)
> >  {
> >  	smp_mb();
> > -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
> > +	if (running_lockless_pgtbl_walk(mm))
> > +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
> 
> Hi,
> 
> If you do this, then you are left without any synchronization. So it will
> have race conditions: a page table walk could begin right after the above
> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
> will continue on right away, under the false assumption that it has let
> all the current page table walks complete.
> 
> The current code uses either interrupts or RCU to synchronize, and in
> either case, you end up scheduling something on each CPU. If you remove
> that entirely, I don't see anything left. ("Pure" atomic counting is not
> a synchronization technique all by itself.)
> 
> thanks,

Hello John,
Thanks for the fast feedback.

See, before calling serialize_against_pte_lookup(), there is always an
update or clear of the pmd. So, if a page table walk begins right after
the check returns "false", there is no problem, since it will use the
updated pmd.

Think about serialize_against_pte_lookup() on a process with a bunch of
CPUs: after you have waited on the last CPU (the wait part), there is no
guarantee that the first one is not already starting a new lockless
pagetable walk.

The same mechanism protects both approaches.

Does it make sense?

Best regards,
Leonardo Bras



[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-20 20:28       ` Leonardo Bras
  (?)
@ 2019-09-20 21:15       ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-20 21:15 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skips slow part of serialize_against_pte_lookup if there is no running
>>> lockless pagetable walk.
>>>
>>> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
>>> ---
>>>  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>> index 13239b17a22c..41ca30269fa3 100644
>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>>>  void serialize_against_pte_lookup(struct mm_struct *mm)
>>>  {
>>>  	smp_mb();
>>> -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>> +	if (running_lockless_pgtbl_walk(mm))
>>> +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>
>> Hi,
>>
>> If you do this, then you are left without any synchronization. So it will
>> have race conditions: a page table walk could begin right after the above
>> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
>> will continue on right away, under the false assumption that it has let
>> all the current page table walks complete.
>>
>> The current code uses either interrupts or RCU to synchronize, and in
>> either case, you end up scheduling something on each CPU. If you remove
>> that entirely, I don't see anything left. ("Pure" atomic counting is not
>> a synchronization technique all by itself.)

(Actually, correction: the protection is really for deleting the page tables, 
so maybe that doesn't apply directly here.)

>>
> 
> Hello John,
> Thanks for the fast feedback.
> 
> See, before calling serialize_against_pte_lookup(), there is always an
> update or clear on the pmd. So, if a page table walk begin right after
> the check returns "false", there is no problem, since it will use the
> updated pmd.
> 
> Think about serialize, on a process with a bunch of cpus. After you
> check the last processor (wait part), there is no guarantee that the
> first one is not starting a lockless pagetable walk.
> 

Then why even try to count whether a page table walk is happening at all, if
a) it can't actually be done accurately with just counting, and
b) you are claiming that it's safe even if the count is wrong?

In other words, by your reasoning (which I'm not saying is wrong--I don't
fully get this ppc model yet), you can probably just drop all of the counting
code from this patchset.

btw, once this is sorted out, I think all of those large copy-pasted comment
blocks before each call to serialize_against_pte_lookup() should
be updated to reflect what the actual synchronization mechanism is.
This is pretty hard to figure out. :)

thanks,
-- 
John Hubbard
NVIDIA

> The same mechanism protect both methods.
> 
> Does it make sense?
> 
> Best regards,
> Leonardo Bras
> 
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
  2019-09-20 20:12   ` Leonardo Bras
@ 2019-09-20 21:24     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-20 21:24 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

On 9/20/19 1:12 PM, Leonardo Bras wrote:
> If a process (qemu) with a lot of CPUs (128) try to munmap() a large
> chunk of memory (496GB) mapped with THP, it takes an average of 275
> seconds, which can cause a lot of problems to the load (in qemu case,
> the guest will lock for this time).
> 
> Trying to find the source of this bug, I found out most of this time is
> spent on serialize_against_pte_lookup(). This function will take a lot
> of time in smp_call_function_many() if there is more than a couple CPUs
> running the user process. Since it has to happen to all THP mapped, it
> will take a very long time for large amounts of memory.
> 
> By the docs, serialize_against_pte_lookup() is needed in order to avoid
> pmd_t to pte_t casting inside find_current_mm_pte(), or any lockless
> pagetable walk, to happen concurrently with THP splitting/collapsing.
> 
> It does so by calling a do_nothing() on each CPU in mm->cpu_bitmap[],
> after interrupts are re-enabled. 
> Since, interrupts are (usually) disabled during lockless pagetable
> walk, and serialize_against_pte_lookup will only return after
> interrupts are enabled, it is protected.
> 
> So, by what I could understand, if there is no lockless pagetable walk
> running, there is no need to call serialize_against_pte_lookup().
> 
> So, to avoid the cost of running serialize_against_pte_lookup(), I
> propose a counter that keeps track of how many find_current_mm_pte()
> are currently running, and if there is none, just skip 
> smp_call_function_many().

Just noticed that this really should also include linux-mm, maybe
it's best to repost the patchset with them included?

In particular, there is likely to be some feedback about adding more
calls, in addition to local_irq_disable/enable, around the gup_fast() path,
separately from my questions about the synchronization cases in ppc.

thanks,
-- 
John Hubbard
NVIDIA

> 
> The related functions are:
> start_lockless_pgtbl_walk(mm)
> 	Insert before starting any lockless pgtable walk
> end_lockless_pgtbl_walk(mm)
> 	Insert after the end of any lockless pgtable walk
> 	(Mostly after the ptep is last used)
> running_lockless_pgtbl_walk(mm)
> 	Returns the number of lockless pgtable walks running
> 
> 
> On my workload (qemu), I could see munmap's time reduction from 275
> seconds to 418ms.
> 
>> Leonardo Bras (11):
>>   powerpc/mm: Adds counting method to monitor lockless pgtable walks
>>   asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
>>     walks
>>   mm/gup: Applies counting method to monitor gup_pgd_range
>>   powerpc/mce_power: Applies counting method to monitor lockless pgtbl
>>     walks
>>   powerpc/perf: Applies counting method to monitor lockless pgtbl walks
>>   powerpc/mm/book3s64/hash: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl
>>     walks
>>   powerpc/kvm/book3s_hv: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/kvm/book3s_64: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/book3s_64: Enables counting method to monitor lockless pgtbl
>>     walk
>>   powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
>>
>>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>>  arch/powerpc/perf/callchain.c                |  5 ++++-
>>  include/asm-generic/pgtable.h                |  9 +++++++++
>>  mm/gup.c                                     |  4 ++++
>>  16 files changed, 108 insertions(+), 8 deletions(-)
>>


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
@ 2019-09-20 21:24     ` John Hubbard
  0 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-20 21:24 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana, Linux-MM,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/20/19 1:12 PM, Leonardo Bras wrote:
> If a process (qemu) with a lot of CPUs (128) try to munmap() a large
> chunk of memory (496GB) mapped with THP, it takes an average of 275
> seconds, which can cause a lot of problems to the load (in qemu case,
> the guest will lock for this time).
> 
> Trying to find the source of this bug, I found out most of this time is
> spent on serialize_against_pte_lookup(). This function will take a lot
> of time in smp_call_function_many() if there is more than a couple CPUs
> running the user process. Since it has to happen to all THP mapped, it
> will take a very long time for large amounts of memory.
> 
> By the docs, serialize_against_pte_lookup() is needed in order to avoid
> pmd_t to pte_t casting inside find_current_mm_pte(), or any lockless
> pagetable walk, to happen concurrently with THP splitting/collapsing.
> 
> It does so by calling a do_nothing() on each CPU in mm->cpu_bitmap[],
> after interrupts are re-enabled. 
> Since, interrupts are (usually) disabled during lockless pagetable
> walk, and serialize_against_pte_lookup will only return after
> interrupts are enabled, it is protected.
> 
> So, by what I could understand, if there is no lockless pagetable walk
> running, there is no need to call serialize_against_pte_lookup().
> 
> So, to avoid the cost of running serialize_against_pte_lookup(), I
> propose a counter that keeps track of how many find_current_mm_pte()
> are currently running, and if there is none, just skip 
> smp_call_function_many().

Just noticed that this really should also include linux-mm, maybe
it's best to repost the patchset with them included?

In particular, there is likely to be some feedback about adding more
calls, in addition to local_irq_disable/enable, around the gup_fast() path,
separately from my questions about the synchronization cases in ppc.

thanks,
-- 
John Hubbard
NVIDIA

> 
> The related functions are:
> start_lockless_pgtbl_walk(mm)
> 	Insert before starting any lockless pgtable walk
> end_lockless_pgtbl_walk(mm)
> 	Insert after the end of any lockless pgtable walk
> 	(Mostly after the ptep is last used)
> running_lockless_pgtbl_walk(mm)
> 	Returns the number of lockless pgtable walks running
> 
> 
> On my workload (qemu), I could see munmap's time reduction from 275
> seconds to 418ms.
> 
>> Leonardo Bras (11):
>>   powerpc/mm: Adds counting method to monitor lockless pgtable walks
>>   asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
>>     walks
>>   mm/gup: Applies counting method to monitor gup_pgd_range
>>   powerpc/mce_power: Applies counting method to monitor lockless pgtbl
>>     walks
>>   powerpc/perf: Applies counting method to monitor lockless pgtbl walks
>>   powerpc/mm/book3s64/hash: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/kvm/e500: Applies counting method to monitor lockless pgtbl
>>     walks
>>   powerpc/kvm/book3s_hv: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/kvm/book3s_64: Applies counting method to monitor lockless
>>     pgtbl walks
>>   powerpc/book3s_64: Enables counting method to monitor lockless pgtbl
>>     walk
>>   powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
>>
>>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>>  arch/powerpc/perf/callchain.c                |  5 ++++-
>>  include/asm-generic/pgtable.h                |  9 +++++++++
>>  mm/gup.c                                     |  4 ++++
>>  16 files changed, 108 insertions(+), 8 deletions(-)
>>

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-20 20:28       ` Leonardo Bras
@ 2019-09-21  0:48         ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-21  0:48 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skips slow part of serialize_against_pte_lookup if there is no running
>>> lockless pagetable walk.
>>>
>>> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
>>> ---
>>>  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>> index 13239b17a22c..41ca30269fa3 100644
>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>>>  void serialize_against_pte_lookup(struct mm_struct *mm)
>>>  {
>>>  	smp_mb();
>>> -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>> +	if (running_lockless_pgtbl_walk(mm))
>>> +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>
>> Hi,
>>
>> If you do this, then you are left without any synchronization. So it will
>> have race conditions: a page table walk could begin right after the above
>> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
>> will continue on right away, under the false assumption that it has let
>> all the current page table walks complete.
>>
>> The current code uses either interrupts or RCU to synchronize, and in
>> either case, you end up scheduling something on each CPU. If you remove
>> that entirely, I don't see anything left. ("Pure" atomic counting is not
>> a synchronization technique all by itself.)
>>
>> thanks,
> 
> Hello John,
> Thanks for the fast feedback.
> 
> See, before calling serialize_against_pte_lookup(), there is always an
> update or clear on the pmd. So, if a page table walk begin right after
> the check returns "false", there is no problem, since it will use the
> updated pmd.
> 
> Think about serialize, on a process with a bunch of cpus. After you
> check the last processor (wait part), there is no guarantee that the
> first one is not starting a lockless pagetable walk.
> 
> The same mechanism protect both methods.
> 
> Does it make sense?
> 

Yes, after working through this with Mark Hairgrove, I think I finally 
realize that the new code will allow existing gup_fast() readers to drain,
before proceeding. So that technically works (ignoring issues such as 
whether it's desirable to use this approach, vs. for example batching
the THP updates, etc), I agree.

(And please ignore my other response that asked if the counting was 
helping at all--I see that it does.)

However, Mark pointed out a pre-existing question, which neither of us
could figure out: are the irq disable/enable calls effective, given that
they are (apparently) not memory barriers? 

Given this claim from Documentation/memory-barriers.txt:

INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.

...and given both the preexisting code, and the code you've added:

mm/gup.c:

	atomic_inc(readers); /* new code */
	local_irq_disable();
	gup_pgd_range();
		...read page tables
	local_irq_enable();
	atomic_dec(readers); /* new code */
 
...if the page table reads are allowed to speculatively happen *outside*
of the irq enable/disable calls (which could happen if there are no run-time
memory barriers in the above code), then nothing works anymore. 

So it seems that full memory barriers (not just compiler barriers) are required.
If the irq enable/disable somehow provides that, then your new code just goes
along for the ride and Just Works. (You don't have any memory barriers in
start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
barriers provided by the atomic inc/dec.)
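
Just to illustrate what I mean (a sketch only, not something the series
currently does, and the counter field name is hypothetical), the
non-value-returning atomics could be upgraded to full barriers around the
walk:

static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
	/* order the increment before the page table reads */
	smp_mb__after_atomic();
}

static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* order the page table reads before the decrement */
	smp_mb__before_atomic();
	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
}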

So it's really a pre-existing question about the correctness of the gup_fast()
irq disabling approach.

+CC linux-mm


thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
@ 2019-09-21  0:48         ` John Hubbard
  0 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-21  0:48 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skips slow part of serialize_against_pte_lookup if there is no running
>>> lockless pagetable walk.
>>>
>>> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
>>> ---
>>>  arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>>> index 13239b17a22c..41ca30269fa3 100644
>>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>>> @@ -95,7 +95,8 @@ static void do_nothing(void *unused)
>>>  void serialize_against_pte_lookup(struct mm_struct *mm)
>>>  {
>>>  	smp_mb();
>>> -	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>> +	if (running_lockless_pgtbl_walk(mm))
>>> +		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>>
>> Hi,
>>
>> If you do this, then you are left without any synchronization. So it will
>> have race conditions: a page table walk could begin right after the above
>> check returns "false", and then code such as hash__pmdp_huge_get_and_clear()
>> will continue on right away, under the false assumption that it has let
>> all the current page table walks complete.
>>
>> The current code uses either interrupts or RCU to synchronize, and in
>> either case, you end up scheduling something on each CPU. If you remove
>> that entirely, I don't see anything left. ("Pure" atomic counting is not
>> a synchronization technique all by itself.)
>>
>> thanks,
> 
> Hello John,
> Thanks for the fast feedback.
> 
> See, before calling serialize_against_pte_lookup(), there is always an
> update or clear on the pmd. So, if a page table walk begin right after
> the check returns "false", there is no problem, since it will use the
> updated pmd.
> 
> Think about serialize, on a process with a bunch of cpus. After you
> check the last processor (wait part), there is no guarantee that the
> first one is not starting a lockless pagetable walk.
> 
> The same mechanism protect both methods.
> 
> Does it make sense?
> 

Yes, after working through this with Mark Hairgrove, I think I finally 
realize that the new code will allow existing gup_fast() readers to drain,
before proceeding. So that technically works (ignoring issues such as 
whether it's desirable to use this approach, vs. for example batching
the THP updates, etc), I agree.

(And please ignore my other response that asked if the counting was 
helping at all--I see that it does.)

However, Mark pointed out a pre-existing question, which neither of us
could figure out: are the irq disable/enable calls effective, given that
they are (apparently) not memory barriers? 

Given this claim from Documentation/memory-barriers.txt:

INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.

...and given both the preexisting code, and the code you've added:

mm/gup.c:

	atomic_inc(readers); /* new code */
	local_irq_disable();
	gup_pgd_range();
		...read page tables
	local_irq_enable();
	atomic_dec(readers); /* new code */
 
...if the page table reads are allowed to speculatively happen *outside*
of the irq enable/disable calls (which could happen if there are no run-time
memory barriers in the above code), then nothing works anymore. 

So it seems that full memory barriers (not just compiler barriers) are required.
If the irq enable/disable somehow provides that, then your new code just goes
along for the ride and Just Works. (You don't have any memory barriers in
start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
barriers provided by the atomic inc/dec.)

So it's really a pre-existing question about the correctness of the gup_fast()
irq disabling approach.

+CC linux-mm


thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-21  0:48         ` John Hubbard
@ 2019-09-23 17:25           ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 17:25 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

[-- Attachment #1: Type: text/plain, Size: 1538 bytes --]

On Fri, 2019-09-20 at 17:48 -0700, John Hubbard wrote:
> 
[...]
> So it seems that full memory barriers (not just compiler barriers) are required.
> If the irq enable/disable somehow provides that, then your new code just goes
> along for the ride and Just Works. (You don't have any memory barriers in
> start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
> barriers provided by the atomic inc/dec.)
> 
> So it's really a pre-existing question about the correctness of the gup_fast()
> irq disabling approach.

I am not experienced in other archs, and I am still pretty new to
Power, but from what I could understand, this behavior is best
explained by serialize_against_pte_lookup().

What happens here is that, before doing a THP split/collapse, the code
updates the pmd and then calls serialize_against_pte_lookup(), in order
to avoid invalid output from a concurrent lockless pagetable walk.

serialize_against_pte_lookup() basically runs do_nothing() on every CPU
related to the process, and waits for it to return.

Running do_nothing() depends on interrupts being enabled, so disabling
them before gup_pgd_range() and re-enabling them afterwards makes the THP
split/collapse wait for gup_pgd_range() to complete on every CPU before
continuing. (This is where the synchronization happens.)

(As mentioned before, every gup_pgd_range() that starts after that point
uses the updated pmd, so there is no problem.)
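
To make the pattern concrete, here is a condensed sketch of the walker
side, based on the hunks earlier in this thread (the function name is
made up for the example):

static bool example_lockless_walk(struct mm_struct *mm, unsigned long ea)
{
	unsigned long flags;
	pte_t *ptep, pte = __pte(0);

	start_lockless_pgtbl_walk(mm);
	local_irq_save(flags);
	ptep = find_current_mm_pte(mm->pgd, ea, NULL, NULL);
	if (ptep)
		pte = *ptep;	/* consume the pte while irqs are off */
	local_irq_restore(flags);
	end_lockless_pgtbl_walk(mm);

	return pte_present(pte);
}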

I am sure other archs may have a similar mechanism using
local_irq_{disable,enable}.

Did it answer your questions?

Best regards,

Leonardo Bras

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 17:25           ` Leonardo Bras
@ 2019-09-23 18:14             ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 18:14 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/23/19 10:25 AM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 17:48 -0700, John Hubbard wrote:
>>
> [...]
>> So it seems that full memory barriers (not just compiler barriers) are required.
>> If the irq enable/disable somehow provides that, then your new code just goes
>> along for the ride and Just Works. (You don't have any memory barriers in
>> start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk(), just the compiler
>> barriers provided by the atomic inc/dec.)
>>
>> So it's really a pre-existing question about the correctness of the gup_fast()
>> irq disabling approach.
> 
> I am not experienced in other archs, and I am still pretty new to
> Power, but from what I could understand, this behavior is better
> explained by serialize_against_pte_lookup.
> 
> What happens here is that, before doing a THP split/collapse, the
> code updates the pmd and then calls serialize_against_pte_lookup,
> in order to avoid invalid output from a lockless pagetable walk.
> 
> Serialize basically runs a do_nothing on every cpu related to the
> process, and waits for all of them to return.
> 
> Running it depends on interrupts being enabled, so disabling them
> before gup_pgd_range() and re-enabling them after the end makes the
> THP split/collapse wait for gup_pgd_range() to complete on every cpu
> before continuing. (This is where the locking happens.)
> 

That part is all fine, but there are no run-time memory barriers in the 
atomic_inc() and atomic_dec() additions, which means that this is not
safe, because memory operations on CPU 1 can be reordered. It's safe
as shown *if* there are memory barriers to keep the order as shown:

CPU 0                            CPU 1
------                         --------------
                               atomic_inc(val) (no run-time memory barrier!)
pmd_clear(pte)
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() (also not a mem barrier)

                               READ(pte)
                               if(pte)
                                  walk page tables

                               local_irq_enable() (still not a barrier)
                               atomic_dec(val)

free(pte)

thanks,
-- 
John Hubbard
NVIDIA

> (As said before, every gup_pgd_range() that occurs after it uses an
> updated pmd, so there is no problem.)
> 
> I suppose other archs may have a similar mechanism using
> local_irq_{disable,enable}.
> 
> Did it answer your questions?
> 
> Best regards,
> 
> Leonardo Bras
> 


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 18:14             ` John Hubbard
@ 2019-09-23 19:40               ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 19:40 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Arnd Bergmann, Richard Fontana, Greg Kroah-Hartman, YueHaibing,
	Nicholas Piggin, Mike Rapoport, Keith Busch, Jason Gunthorpe,
	Paul Mackerras, Aneesh Kumar K.V, Allison Randal,
	Mahesh Salgaonkar, Ganesh Goudar, Thomas Gleixner, Ira Weiny,
	Andrew Morton, Dan Williams

[-- Attachment #1: Type: text/plain, Size: 1990 bytes --]

On Mon, 2019-09-23 at 11:14 -0700, John Hubbard wrote:
> On 9/23/19 10:25 AM, Leonardo Bras wrote:
> [...]
> That part is all fine, but there are no run-time memory barriers in the 
> atomic_inc() and atomic_dec() additions, which means that this is not
> safe, because memory operations on CPU 1 can be reordered. It's safe
> as shown *if* there are memory barriers to keep the order as shown:
> 
> CPU 0                            CPU 1
> ------                         --------------
>                                atomic_inc(val) (no run-time memory barrier!)
> pmd_clear(pte)
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() (also not a mem barrier)
> 
>                                READ(pte)
>                                if(pte)
>                                   walk page tables
> 
>                                local_irq_enable() (still not a barrier)
>                                atomic_dec(val)
> 
> free(pte)
> 
> thanks,

This is serialize:

void serialize_against_pte_lookup(struct mm_struct *mm)
{
	smp_mb();
	if (running_lockless_pgtbl_walk(mm))
		smp_call_function_many(mm_cpumask(mm), do_nothing,
				       NULL, 1);
}

That would mean:

CPU 0                            CPU 1
------                         --------------
                               atomic_inc(val) 
pmd_clear(pte)
smp_mb()
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() 

                               READ(pte)
                               if(pte)
                                  walk page tables

                               local_irq_enable() (still not a barrier)
                               atomic_dec(val)

By https://www.kernel.org/doc/Documentation/memory-barriers.txt :
'If you need all the CPUs to see a given store at the same time, use
smp_mb().'

Is it not enough? 
Do you suggest adding 'smp_mb()' after atomic_{inc,dec} ?

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 19:40               ` Leonardo Bras
@ 2019-09-23 19:58                 ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 19:58 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Arnd Bergmann, Richard Fontana, Greg Kroah-Hartman, YueHaibing,
	Nicholas Piggin, Mike Rapoport, Keith Busch, Jason Gunthorpe,
	Paul Mackerras, Aneesh Kumar K.V, Allison Randal,
	Mahesh Salgaonkar, Ganesh Goudar, Thomas Gleixner, Ira Weiny,
	Andrew Morton, Dan Williams

On 9/23/19 12:40 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 11:14 -0700, John Hubbard wrote:
>> On 9/23/19 10:25 AM, Leonardo Bras wrote:
>> [...]
>> That part is all fine, but there are no run-time memory barriers in the 
>> atomic_inc() and atomic_dec() additions, which means that this is not
>> safe, because memory operations on CPU 1 can be reordered. It's safe
>> as shown *if* there are memory barriers to keep the order as shown:
>>
>> CPU 0                            CPU 1
>> ------                         --------------
>>                                atomic_inc(val) (no run-time memory barrier!)
>> pmd_clear(pte)
>> if (val)
>>     run_on_all_cpus(): IPI
>>                                local_irq_disable() (also not a mem barrier)
>>
>>                                READ(pte)
>>                                if(pte)
>>                                   walk page tables
>>
>>                                local_irq_enable() (still not a barrier)
>>                                atomic_dec(val)
>>
>> free(pte)
>>
>> thanks,
> 
> This is serialize:
> 
> void serialize_against_pte_lookup(struct mm_struct *mm)
> {
> 	smp_mb();
> 	if (running_lockless_pgtbl_walk(mm))
> 		smp_call_function_many(mm_cpumask(mm), do_nothing,
> 				       NULL, 1);
> }
> 
> That would mean:
> 
> CPU 0                            CPU 1
> ------                         --------------
>                                atomic_inc(val) 
> pmd_clear(pte)
> smp_mb()
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() 
> 
>                                READ(pte)
>                                if(pte)
>                                   walk page tables
> 
>                                local_irq_enable() (still not a barrier)
>                                atomic_dec(val)
> 
> By https://www.kernel.org/doc/Documentation/memory-barriers.txt :
> 'If you need all the CPUs to see a given store at the same time, use
> smp_mb().'
> 
> Is it not enough? 

Nope. CPU 1 memory accesses could be re-ordered, as I said above:


CPU 0                            CPU 1
------                         --------------
                               READ(pte) (re-ordered at run time)
                               atomic_inc(val) (no run-time memory barrier!)
                           
pmd_clear(pte)
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() (also not a mem barrier)

                               if(pte)
                                  walk page tables
...

> Do you suggest adding 'smp_mb()' after atomic_{inc,dec} ?
> 

Yes (approximately: I'd have to look closer to see which barrier call is really
required). Unless there is something else that is providing the barrier, which
is why I called this a pre-existing question: it seems like the interrupt
interlock in the current gup_fast() might not have what it needs.

In other words, if your code needs a barrier, then the pre-existing gup_fast()
code probably does, too.
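
Purely as illustration, one shape such a fix could take is folding the
barriers into the helpers themselves (a sketch, not the posted code;
whether smp_mb() or the __{before,after}_atomic variants are the right
strength is the part that would still need checking):

void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
	/* make the inc visible before any page table read on this cpu */
	smp_mb__after_atomic();
}

void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
	/* order the last page table read before the dec */
	smp_mb__before_atomic();
	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
}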

thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 19:58                 ` John Hubbard
@ 2019-09-23 20:23                   ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:23 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Dan Williams, Arnd Bergmann, Jason Gunthorpe, Greg Kroah-Hartman,
	Mahesh Salgaonkar, YueHaibing, Nicholas Piggin, Mike Rapoport,
	Keith Busch, Richard Fontana, Paul Mackerras, Aneesh Kumar K.V,
	Ganesh Goudar, Thomas Gleixner, Ira Weiny, Andrew Morton,
	Allison Randal

[-- Attachment #1: Type: text/plain, Size: 1026 bytes --]

On Mon, 2019-09-23 at 12:58 -0700, John Hubbard wrote:
> 
> CPU 0                            CPU 1
> ------                         --------------
>                                READ(pte) (re-ordered at run time)
>                                atomic_inc(val) (no run-time memory barrier!)
>                            
> pmd_clear(pte)
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() (also not a mem barrier)
> 
>                                if(pte)
>                                   walk page tables

Let me see if I can understand,
On most patches, it would be:

CPU 0                            CPU 1
------				--------------
				ptep = __find_linux_pte  
				(re-ordered at run time)
				atomic_inc(val) 
pmd_clear(pte)
smp_mb()
if (val)
    run_on_all_cpus(): IPI
                               local_irq_disable() 

                               if(ptep)
                                  pte = *ptep;

Is that what you meant?



[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
  2019-09-23 20:23                   ` Leonardo Bras
@ 2019-09-23 20:26                     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:26 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Dan Williams, Arnd Bergmann, Jason Gunthorpe, Greg Kroah-Hartman,
	Mahesh Salgaonkar, YueHaibing, Nicholas Piggin, Mike Rapoport,
	Keith Busch, Richard Fontana, Paul Mackerras, Aneesh Kumar K.V,
	Ganesh Goudar, Thomas Gleixner, Ira Weiny, Andrew Morton,
	Allison Randal

On 9/23/19 1:23 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 12:58 -0700, John Hubbard wrote:
>>
>> CPU 0                            CPU 1
>> ------                         --------------
>>                                READ(pte) (re-ordered at run time)
>>                                atomic_inc(val) (no run-time memory barrier!)
>>                            
>> pmd_clear(pte)
>> if (val)
>>     run_on_all_cpus(): IPI
>>                                local_irq_disable() (also not a mem barrier)
>>
>>                                if(pte)
>>                                   walk page tables
> 
> Let me see if I can understand,
> On most patches, it would be:
> 
> CPU 0                            CPU 1
> ------				--------------
> 				ptep = __find_linux_pte  
> 				(re-ordered at run time)
> 				atomic_inc(val) 
> pmd_clear(pte)
> smp_mb()
> if (val)
>     run_on_all_cpus(): IPI
>                                local_irq_disable() 
> 
>                                if(ptep)
>                                   pte = *ptep;
> 
> Is that what you meant?
> 
> 

Yes.

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
  2019-09-20 19:50   ` Leonardo Bras
@ 2019-09-23 20:27     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:27 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> As described, gup_pgd_range is a lockless pagetable walk. So, in order to
> monitor against THP split/collapse with the counting method, it's necessary
> to bound it with {start,end}_lockless_pgtbl_walk.
> 
> There are dummy functions, so it is not going to add any overhead on archs
> that don't use this method.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  mm/gup.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 98f13ab37bac..675e4be27082 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2404,6 +2404,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  			unsigned int gup_flags, struct page **pages)
>  {
>  	unsigned long addr, len, end;
> +	struct mm_struct *mm;
>  	int nr = 0, ret = 0;
>  
>  	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM)))
> @@ -2421,9 +2422,12 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
>  
>  	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
>  	    gup_fast_permitted(start, end)) {
> +		mm = current->mm;
> +		start_lockless_pgtbl_walk(mm);
>  		local_irq_disable();

I'd also like a second opinion from the "core" -mm maintainers, but it seems like
there is now too much code around the gup_pgd_range() call. Especially since there
are two places where it's called--did you forget the other one in 
__get_user_pages_fast(), btw??

Maybe the irq handling and atomic counting should be moved into start/finish
calls, like this:

     start_gup_fast_walk()
     gup_pgd_range()
     finish_gup_fast_walk()
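
(Purely illustrative -- these helpers don't exist anywhere yet; this is just
one way the suggestion could look, combining the counter with the irq
handling the patch currently open-codes:)

static void start_gup_fast_walk(struct mm_struct *mm)
{
	start_lockless_pgtbl_walk(mm);
	local_irq_disable();
}

static void finish_gup_fast_walk(struct mm_struct *mm)
{
	local_irq_enable();
	end_lockless_pgtbl_walk(mm);
}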

>  		gup_pgd_range(addr, end, gup_flags, pages, &nr);
>  		local_irq_enable();
> +		end_lockless_pgtbl_walk(mm);
>  		ret = nr;
>  	}
>  
> 



thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
  2019-09-20 19:50   ` Leonardo Bras
@ 2019-09-23 20:39     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:39 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> There is a need to monitor lockless pagetable walks, in order to avoid
> doing THP splitting/collapsing during them.
> 
> Some methods rely on local_irq_{save,restore}, but that can be slow in
> cases where a lot of cpus are used for the process.
> 
> In order to speed up these cases, I propose a refcount-based approach that
> counts the number of lockless pagetable walks happening on the process.
> 
> Given that there are lockless pagetable walks in generic code, it's
> necessary to create dummy functions for archs that won't use the approach.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  include/asm-generic/pgtable.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 75d9d68a6de7..6eb4fabb5595 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -1172,6 +1172,15 @@ static inline bool arch_has_pfn_modify_check(void)
>  #endif
>  #endif
>  
> +#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
> +static inline void start_lockless_pgtbl_walk(struct mm_struct *mm) { }
> +static inline void end_lockless_pgtbl_walk(struct mm_struct *mm) { }
> +static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	return 0;
> +}
> +#endif
> +

Please remember to include linux-mm if there is a v2.

Nit: seems like it would be nicer to just put it all in one place, and use
positive logic, and also I think people normally don't compress the empty
functions quite that much. So like this:

#ifdef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
void start_lockless_pgtbl_walk(struct mm_struct *mm); 
void end_lockless_pgtbl_walk(struct mm_struct *mm); 
int running_lockless_pgtbl_walk(struct mm_struct *mm); 

#else
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
}
static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
}
static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
{
	return 0;
}
#endif

thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks
  2019-09-20 19:50   ` Leonardo Bras
@ 2019-09-23 20:42     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:42 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> It's necessary to monitor lockless pagetable walks, in order to avoid doing
> THP splitting/collapsing during them.
> 
> Some methods rely on local_irq_{save,restore}, but that can be slow in
> cases where a lot of cpus are used for the process.
> 
> In order to speed up some cases, I propose a refcount-based approach that
> counts the number of lockless pagetable walks happening on the process.
> 
> This method does not exclude the current irq-oriented method. It works as a
> complement to skip unnecessary waiting.
> 
> start_lockless_pgtbl_walk(mm)
> 	Insert before starting any lockless pgtable walk
> end_lockless_pgtbl_walk(mm)
> 	Insert after the end of any lockless pgtable walk
> 	(Mostly after the ptep is last used)
> running_lockless_pgtbl_walk(mm)
> 	Returns the number of lockless pgtable walks running
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/mmu.h |  3 +++
>  arch/powerpc/mm/book3s64/mmu_context.c   |  1 +
>  arch/powerpc/mm/book3s64/pgtable.c       | 17 +++++++++++++++++
>  3 files changed, 21 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 23b83d3593e2..13b006e7dde4 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -116,6 +116,9 @@ typedef struct {
>  	/* Number of users of the external (Nest) MMU */
>  	atomic_t copros;
>  
> +	/* Number of running instances of lockless pagetable walk*/
> +	atomic_t lockless_pgtbl_walk_count;
> +
>  	struct hash_mm_context *hash_context;
>  
>  	unsigned long vdso_base;
> diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
> index 2d0cb5ba9a47..3dd01c0ca5be 100644
> --- a/arch/powerpc/mm/book3s64/mmu_context.c
> +++ b/arch/powerpc/mm/book3s64/mmu_context.c
> @@ -200,6 +200,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>  #endif
>  	atomic_set(&mm->context.active_cpus, 0);
>  	atomic_set(&mm->context.copros, 0);
> +	atomic_set(&mm->context.lockless_pgtbl_walk_count, 0);
>  
>  	return 0;
>  }
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 7d0e0d0d22c4..13239b17a22c 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -98,6 +98,23 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
>  	smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
>  }
>  

Somewhere, there should be a short comment that explains how the following functions
are meant to be used. And it should include the interaction with irqs, so maybe
if you end up adding that combined wrapper function that does both, that's 
where the documentation would go. If not, then here is probably where it goes.
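
Something along these lines, for instance (wording invented here, only to
show the kind of comment meant):

/*
 * start_lockless_pgtbl_walk / end_lockless_pgtbl_walk bracket a lockless
 * page table walk: they raise/lower a per-mm counter that
 * serialize_against_pte_lookup() checks before deciding whether it needs
 * to IPI the mm's CPUs.  Callers must still follow the usual rule for
 * such walks (interrupts disabled, or real mode with MSR_EE = 0), so
 * that the IPI, when it is sent, actually waits for the walk to finish.
 */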


> +void start_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	atomic_inc(&mm->context.lockless_pgtbl_walk_count);
> +}
> +EXPORT_SYMBOL(start_lockless_pgtbl_walk);
> +
> +void end_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	atomic_dec(&mm->context.lockless_pgtbl_walk_count);
> +}
> +EXPORT_SYMBOL(end_lockless_pgtbl_walk);
> +
> +int running_lockless_pgtbl_walk(struct mm_struct *mm)
> +{
> +	return atomic_read(&mm->context.lockless_pgtbl_walk_count);
> +}
> +


thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 08/11] powerpc/kvm/book3s_hv: Applies counting method to monitor lockless pgtbl walks
  2019-09-20 19:50   ` Leonardo Bras
@ 2019-09-23 20:47     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:47 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 12:50 PM, Leonardo Bras wrote:
> Applies the counting-based method for monitoring all book3s_hv related
> functions that do lockless pagetable walks.
> 
> Signed-off-by: Leonardo Bras <leonardo@linux.ibm.com>
> ---
>  arch/powerpc/kvm/book3s_hv_nested.c | 8 ++++++++
>  arch/powerpc/kvm/book3s_hv_rm_mmu.c | 9 ++++++++-
>  2 files changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
> index 735e0ac6f5b2..ed68e57af3a3 100644
> --- a/arch/powerpc/kvm/book3s_hv_nested.c
> +++ b/arch/powerpc/kvm/book3s_hv_nested.c
> @@ -804,6 +804,7 @@ static void kvmhv_update_nest_rmap_rc(struct kvm *kvm, u64 n_rmap,
>  		return;
>  
>  	/* Find the pte */
> +	start_lockless_pgtbl_walk(kvm->mm);

Here, it appears that there is no disabling of interrupts, so now I'm not
sure anymore if this is going to work properly. Definitely needs some documentation
somewhere as to why this is OK, since the previous discussions don't apply
if the interrupts are left on.


thanks,
-- 
John Hubbard
NVIDIA

>  	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
>  	/*
>  	 * If the pte is present and the pfn is still the same, update the pte.
> @@ -815,6 +816,7 @@ static void kvmhv_update_nest_rmap_rc(struct kvm *kvm, u64 n_rmap,
>  		__radix_pte_update(ptep, clr, set);
>  		kvmppc_radix_tlbie_page(kvm, gpa, shift, lpid);
>  	}
> +	end_lockless_pgtbl_walk(kvm->mm);
>  }
>  
>  /*
> @@ -854,10 +856,12 @@ static void kvmhv_remove_nest_rmap(struct kvm *kvm, u64 n_rmap,
>  		return;
>  
>  	/* Find and invalidate the pte */
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
>  	/* Don't spuriously invalidate ptes if the pfn has changed */
>  	if (ptep && pte_present(*ptep) && ((pte_val(*ptep) & mask) == hpa))
>  		kvmppc_unmap_pte(kvm, ptep, gpa, shift, NULL, gp->shadow_lpid);
> +	end_lockless_pgtbl_walk(kvm->mm);
>  }
>  
>  static void kvmhv_remove_nest_rmap_list(struct kvm *kvm, unsigned long *rmapp,
> @@ -921,6 +925,7 @@ static bool kvmhv_invalidate_shadow_pte(struct kvm_vcpu *vcpu,
>  	int shift;
>  
>  	spin_lock(&kvm->mmu_lock);
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(gp->shadow_pgtable, gpa, NULL, &shift);
>  	if (!shift)
>  		shift = PAGE_SHIFT;
> @@ -928,6 +933,7 @@ static bool kvmhv_invalidate_shadow_pte(struct kvm_vcpu *vcpu,
>  		kvmppc_unmap_pte(kvm, ptep, gpa, shift, NULL, gp->shadow_lpid);
>  		ret = true;
>  	}
> +	end_lockless_pgtbl_walk(kvm->mm);
>  	spin_unlock(&kvm->mmu_lock);
>  
>  	if (shift_ret)
> @@ -1362,11 +1368,13 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
>  	/* See if can find translation in our partition scoped tables for L1 */
>  	pte = __pte(0);
>  	spin_lock(&kvm->mmu_lock);
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	pte_p = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
>  	if (!shift)
>  		shift = PAGE_SHIFT;
>  	if (pte_p)
>  		pte = *pte_p;
> +	end_lockless_pgtbl_walk(kvm->mm);
>  	spin_unlock(&kvm->mmu_lock);
>  
>  	if (!pte_present(pte) || (writing && !(pte_val(pte) & _PAGE_WRITE))) {
> diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> index 63e0ce91e29d..53ca67492211 100644
> --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> @@ -258,6 +258,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
>  	 * If called in real mode we have MSR_EE = 0. Otherwise
>  	 * we disable irq above.
>  	 */
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(pgdir, hva, NULL, &hpage_shift);
>  	if (ptep) {
>  		pte_t pte;
> @@ -311,6 +312,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
>  		ptel &= ~(HPTE_R_W|HPTE_R_I|HPTE_R_G);
>  		ptel |= HPTE_R_M;
>  	}
> +	end_lockless_pgtbl_walk(kvm->mm);
>  
>  	/* Find and lock the HPTEG slot to use */
>   do_insert:
> @@ -886,10 +888,15 @@ static int kvmppc_get_hpa(struct kvm_vcpu *vcpu, unsigned long gpa,
>  	hva = __gfn_to_hva_memslot(memslot, gfn);
>  
>  	/* Try to find the host pte for that virtual address */
> +	start_lockless_pgtbl_walk(kvm->mm);
>  	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
> -	if (!ptep)
> +	if (!ptep) {
> +		end_lockless_pgtbl_walk(kvm->mm);
>  		return H_TOO_HARD;
> +	}
>  	pte = kvmppc_read_update_linux_pte(ptep, writing);
> +	end_lockless_pgtbl_walk(kvm->mm);
> +
>  	if (!pte_present(pte))
>  		return H_TOO_HARD;
>  
> 


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
  2019-09-23 20:39     ` John Hubbard
@ 2019-09-23 20:48       ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:48 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

[-- Attachment #1: Type: text/plain, Size: 1132 bytes --]

Thanks for the feedback,

On Mon, 2019-09-23 at 13:39 -0700, John Hubbard wrote:
> Please remember to include linux-mm if there is a v2.

Sure, I will include it on v3.

> Nit: seems like it would be nicer to just put it all in one place, and use
> positive logic, and also I think people normally don't compress the empty
> functions quite that much. So like this:

I did this by following the convention used in the rest of this file.
As you can see, all the other features use the pattern of
#ifndef SOMETHING
dummy/generic functions
#endif

Declaring the functions then becomes the arch's responsibility.
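
In other words, the negative-logic form would look something like this
(just a sketch, borrowing the macro name from your example below):

#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
{
}

static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
{
}

static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
{
	return 0;
}
#endif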

> #ifdef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
> void start_lockless_pgtbl_walk(struct mm_struct *mm); 
> void end_lockless_pgtbl_walk(struct mm_struct *mm); 
> int running_lockless_pgtbl_walk(struct mm_struct *mm); 
> 
> #else
> static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
> {
> }
> static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
> {
> }
> static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
> {
>         return 0;
> }
> #endif
> 
> thanks,

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks
  2019-09-23 20:42     ` John Hubbard
@ 2019-09-23 20:50       ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:50 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

[-- Attachment #1: Type: text/plain, Size: 463 bytes --]

On Mon, 2019-09-23 at 13:42 -0700, John Hubbard wrote:
> Somewhere, there should be a short comment that explains how the following functions
> are meant to be used. And it should include the interaction with irqs, so maybe
> if you end up adding that combined wrapper function that does both, that's 
> where the documentation would go. If not, then here is probably where it goes.

Thanks for the feedback.
I will make sure to add comments here for v3.
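
Something along these lines, maybe (only a draft, wording to be refined
for v3):

/*
 * Draft: start_lockless_pgtbl_walk() / end_lockless_pgtbl_walk() must
 * bracket any page table walk that runs without taking page table locks.
 * The per-mm counter lets code that changes the page tables (e.g. before
 * splitting or freeing them) notice that a lockless walker may still be
 * running; running_lockless_pgtbl_walk() reports whether one is active.
 * Callers are still expected to disable interrupts (or run in real mode)
 * around the walk, as the existing lockless walk sites do.
 */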

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
  2019-09-20 20:12   ` Leonardo Bras
@ 2019-09-23 20:51     ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:51 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

On 9/20/19 1:12 PM, Leonardo Bras wrote:
...
>>  arch/powerpc/include/asm/book3s/64/mmu.h     |  3 +++
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  5 +++++
>>  arch/powerpc/kernel/mce_power.c              | 13 ++++++++++---
>>  arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 ++
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c       | 20 ++++++++++++++++++--
>>  arch/powerpc/kvm/book3s_64_vio_hv.c          |  4 ++++
>>  arch/powerpc/kvm/book3s_hv_nested.c          |  8 ++++++++
>>  arch/powerpc/kvm/book3s_hv_rm_mmu.c          |  9 ++++++++-
>>  arch/powerpc/kvm/e500_mmu_host.c             |  4 ++++
>>  arch/powerpc/mm/book3s64/hash_tlb.c          |  2 ++
>>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>>  arch/powerpc/mm/book3s64/mmu_context.c       |  1 +
>>  arch/powerpc/mm/book3s64/pgtable.c           | 20 +++++++++++++++++++-
>>  arch/powerpc/perf/callchain.c                |  5 ++++-
>>  include/asm-generic/pgtable.h                |  9 +++++++++
>>  mm/gup.c                                     |  4 ++++
>>  16 files changed, 108 insertions(+), 8 deletions(-)
>>

Also, which tree do these patches apply to, please? 

thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable walks
  2019-09-23 20:48       ` Leonardo Bras
@ 2019-09-23 20:53         ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 20:53 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch, Linux-MM

On 9/23/19 1:48 PM, Leonardo Bras wrote:
> Thanks for the feedback,
> 
> On Mon, 2019-09-23 at 13:39 -0700, John Hubbard wrote:
>> Please remember to include linux-mm if there is a v2.
> 
> Sure, I will include it on v3.
> 
>> Nit: seems like it would be nicer to just put it all in one place, and use
>> positive logic, and also I think people normally don't compress the empty
>> functions quite that much. So like this:
> 
> I did this by following the convention used in the rest of this file.
> As you can see, all the other features use the pattern of
> #ifndef SOMETHING
> dummy/generic functions
> #endif
> 
> Declaring the functions then becomes the arch's responsibility.

I noticed that, but...well, I guess you're right.  Follow the way
it is. :)

thanks,
-- 
John Hubbard
NVIDIA

> 
>> #ifdef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
>> void start_lockless_pgtbl_walk(struct mm_struct *mm); 
>> void end_lockless_pgtbl_walk(struct mm_struct *mm); 
>> int running_lockless_pgtbl_walk(struct mm_struct *mm); 
>>
>> #else
>> static inline void start_lockless_pgtbl_walk(struct mm_struct *mm)
>> {
>> }
>> static inline void end_lockless_pgtbl_walk(struct mm_struct *mm)
>> {
>> }
>> static inline int running_lockless_pgtbl_walk(struct mm_struct *mm)
>> {
>>         return 0;
>> }
>> #endif
>>
>> thanks,


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks
  2019-09-23 20:51     ` John Hubbard
@ 2019-09-23 20:58       ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 20:58 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Arnd Bergmann, Aneesh Kumar K.V, Christophe Leroy, Andrew Morton,
	Dan Williams, Nicholas Piggin, Mahesh Salgaonkar,
	Thomas Gleixner, Richard Fontana, Ganesh Goudar, Allison Randal,
	Greg Kroah-Hartman, Mike Rapoport, YueHaibing, Ira Weiny,
	Jason Gunthorpe, Keith Busch

[-- Attachment #1: Type: text/plain, Size: 474 bytes --]

On Mon, 2019-09-23 at 13:51 -0700, John Hubbard wrote:
> Also, which tree do these patches apply to, please? 
> 
> thanks,

They should apply on top of v5.3 + one patch: 
https://patchwork.ozlabs.org/patch/1164925/

I was working on top of this patch because I thought it would be
merged quickly. But since I got no feedback on it, it was not merged, and
the present patchset no longer applies cleanly. :(

But I will rebase v3 on top of plain v5.3.

Thanks,
Leonardo Bras




[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
  2019-09-23 20:27     ` John Hubbard
@ 2019-09-23 21:01       ` Leonardo Bras
  -1 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-23 21:01 UTC (permalink / raw)
  To: John Hubbard, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

[-- Attachment #1: Type: text/plain, Size: 837 bytes --]

On Mon, 2019-09-23 at 13:27 -0700, John Hubbard wrote:
> I'd also like a second opinion from the "core" -mm maintainers, but it seems like
> there is now too much code around the gup_pgd_range() call. Especially since there
> are two places where it's called--did you forget the other one in 
> __get_user_pages_fast(), btw??
> 
Oh, sorry, I missed this one. I will add it in v3.
(Also I will make sure to include linux-mm on v3.)
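
For __get_user_pages_fast() it should be just the same two calls around
gup_pgd_range(); a rough sketch against the v5.3 code (context lines are
approximate, not an exact hunk):

	if (gup_fast_permitted(start, end)) {
		local_irq_save(flags);
		start_lockless_pgtbl_walk(current->mm);
		gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
		end_lockless_pgtbl_walk(current->mm);
		local_irq_restore(flags);
	}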

> Maybe the irq handling and atomic counting should be moved into start/finish
> calls, like this:
> 
>      start_gup_fast_walk()
>      gup_pgd_range()
>      finish_gup_fast_walk()

There are cases where the interrupt disable/enable is not done right around
the lockless pagetable walk; it may be done by functions higher up the
call stack. That's why I opted to do only the atomic operation here.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range
  2019-09-23 21:01       ` Leonardo Bras
@ 2019-09-23 21:09         ` John Hubbard
  -1 siblings, 0 replies; 72+ messages in thread
From: John Hubbard @ 2019-09-23 21:09 UTC (permalink / raw)
  To: Leonardo Bras, linuxppc-dev, linux-kernel, Linux-MM
  Cc: Jason Gunthorpe, Thomas Gleixner, Arnd Bergmann,
	Greg Kroah-Hartman, YueHaibing, Keith Busch, Nicholas Piggin,
	Mike Rapoport, Mahesh Salgaonkar, Richard Fontana,
	Paul Mackerras, Aneesh Kumar K.V, Ganesh Goudar, Andrew Morton,
	Ira Weiny, Dan Williams, Allison Randal

On 9/23/19 2:01 PM, Leonardo Bras wrote:
> On Mon, 2019-09-23 at 13:27 -0700, John Hubbard wrote:
>> I'd also like a second opinion from the "core" -mm maintainers, but it seems like
>> there is now too much code around the gup_pgd_range() call. Especially since there
>> are two places where it's called--did you forget the other one in 
>> __get_user_pages_fast(), btw??
>>
> Oh, sorry, I missed this one. I will add it in v3.
> (Also I will make sure to include linux-mm on v3.)
> 
>> Maybe the irq handling and atomic counting should be moved into start/finish
>> calls, like this:
>>
>>      start_gup_fast_walk()
>>      gup_pgd_range()
>>      finish_gup_fast_walk()
> 
> There are cases where the interrupt disable/enable is not done right around
> the lockless pagetable walk; it may be done by functions higher up the
> call stack. That's why I opted to do only the atomic operation here.
> 

That doesn't prevent you from writing the above as shown, though, for mm/gup.c.

(Also, let's see on the other thread if it is even valid to be indicating
a lockless walk, without also disabling interrupts.)

thanks,
-- 
John Hubbard
NVIDIA


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing
@ 2019-09-24 21:23 ` Leonardo Bras
  0 siblings, 0 replies; 72+ messages in thread
From: Leonardo Bras @ 2019-09-24 21:23 UTC (permalink / raw)
  To: jhubbard, linuxppc-dev, linux-kernel, linux-mm
  Cc: dan.j.williams, arnd, jgg, gregkh, mahesh, yuehaibing, npiggin,
	rppt, keith.busch, rfontana, paulus, aneesh.kumar, ganeshgr,
	tglx, ira.weiny, akpm, allison

[-- Attachment #1: Type: text/plain, Size: 507 bytes --]

John Hubbard <jhubbard@nvidia.com> writes:

>> Is that what you meant?
>
> Yes.
>

I am still trying to understand this issue.

I am also analyzing some cases where interrupt disable is not done
before the lockless pagetable walk (patch 3 discussion).

But given that I forgot to add the mm mailing list before, I think it would
be wiser to send a v3 and gather feedback while I keep trying to
understand how it works, and whether it needs an additional memory barrier here.
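
For reference, a simplified sketch of the change in question (the actual
v2 hunk differs in details); the doubt is whether the ordering between the
pte update and the counter read is sufficient:

void serialize_against_pte_lookup(struct mm_struct *mm)
{
	smp_mb();	/* order the pte/pmd change against the counter read */
	if (running_lockless_pgtbl_walk(mm))
		smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
}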

Thanks!

Leonardo Bras


[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 72+ messages in thread

end of thread, other threads:[~2019-09-24 21:27 UTC | newest]

Thread overview: 72+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-20 19:50 [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks Leonardo Bras
2019-09-20 19:50 ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-23 20:42   ` John Hubbard
2019-09-23 20:42     ` John Hubbard
2019-09-23 20:50     ` Leonardo Bras
2019-09-23 20:50       ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 02/11] asm-generic/pgtable: Adds dummy functions " Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-23 20:39   ` John Hubbard
2019-09-23 20:39     ` John Hubbard
2019-09-23 20:48     ` Leonardo Bras
2019-09-23 20:48       ` Leonardo Bras
2019-09-23 20:53       ` John Hubbard
2019-09-23 20:53         ` John Hubbard
2019-09-20 19:50 ` [PATCH v2 03/11] mm/gup: Applies counting method to monitor gup_pgd_range Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-23 20:27   ` John Hubbard
2019-09-23 20:27     ` John Hubbard
2019-09-23 21:01     ` Leonardo Bras
2019-09-23 21:01       ` Leonardo Bras
2019-09-23 21:09       ` John Hubbard
2019-09-23 21:09         ` John Hubbard
2019-09-20 19:50 ` [PATCH v2 04/11] powerpc/mce_power: Applies counting method to monitor lockless pgtbl walks Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 05/11] powerpc/perf: " Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 06/11] powerpc/mm/book3s64/hash: " Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 07/11] powerpc/kvm/e500: " Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 08/11] powerpc/kvm/book3s_hv: " Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-23 20:47   ` John Hubbard
2019-09-23 20:47     ` John Hubbard
2019-09-20 19:50 ` [PATCH v2 09/11] powerpc/kvm/book3s_64: " Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 10/11] powerpc/book3s_64: Enables counting method to monitor lockless pgtbl walk Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 19:50 ` [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing Leonardo Bras
2019-09-20 19:50   ` Leonardo Bras
2019-09-20 20:11   ` John Hubbard
2019-09-20 20:28     ` Leonardo Bras
2019-09-20 20:28       ` Leonardo Bras
2019-09-20 21:15       ` John Hubbard
2019-09-21  0:48       ` John Hubbard
2019-09-21  0:48         ` John Hubbard
2019-09-23 17:25         ` Leonardo Bras
2019-09-23 17:25           ` Leonardo Bras
2019-09-23 18:14           ` John Hubbard
2019-09-23 18:14             ` John Hubbard
2019-09-23 19:40             ` Leonardo Bras
2019-09-23 19:40               ` Leonardo Bras
2019-09-23 19:58               ` John Hubbard
2019-09-23 19:58                 ` John Hubbard
2019-09-23 20:23                 ` Leonardo Bras
2019-09-23 20:23                   ` Leonardo Bras
2019-09-23 20:26                   ` John Hubbard
2019-09-23 20:26                     ` John Hubbard
2019-09-20 19:56 ` [PATCH v2 00/11] Introduces new count-based method for monitoring lockless pagetable walks Leonardo Bras
2019-09-20 19:56   ` Leonardo Bras
2019-09-20 20:12 ` Leonardo Bras
2019-09-20 20:12   ` Leonardo Bras
2019-09-20 21:24   ` John Hubbard
2019-09-20 21:24     ` John Hubbard
2019-09-23 20:51   ` John Hubbard
2019-09-23 20:51     ` John Hubbard
2019-09-23 20:58     ` Leonardo Bras
2019-09-23 20:58       ` Leonardo Bras
2019-09-24 21:23 [PATCH v2 11/11] powerpc/mm/book3s64/pgtable: Uses counting method to skip serializing Leonardo Bras
2019-09-24 21:23 ` Leonardo Bras

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.