linux-mm.kvack.org archive mirror
* [PATCH 00/12] mm: free retracted page table by RCU
@ 2023-05-29  6:11 Hugh Dickins
  2023-05-29  6:14 ` [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s Hugh Dickins
                   ` (11 more replies)
  0 siblings, 12 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Here is the third series of patches to mm (and a few architectures), based
on v6.4-rc3 with the preceding two series applied: in which khugepaged
takes advantage of pte_offset_map[_lock]() allowing for pmd transitions.
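
For readers coming to this fresh, a minimal sketch of the caller pattern
those two series established (illustrative only, not lifted from any
particular call site): pte_offset_map_lock() may now return NULL when the
pmd has changed underneath, and callers handle that instead of assuming a
stable page table.

	pte_t *pte;
	spinlock_t *ptl;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return;		/* pmd transitioned: retry or give up */
	/* ... work on the pte entries, under ptl ... */
	pte_unmap_unlock(pte, ptl);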

This follows on from the "arch: allow pte_offset_map[_lock]() to fail"
https://lore.kernel.org/linux-mm/77a5d8c-406b-7068-4f17-23b7ac53bc83@google.com/
series of 23 posted on 2023-05-09,
and the "mm: allow pte_offset_map[_lock]() to fail"
https://lore.kernel.org/linux-mm/68a97fbe-5c1e-7ac6-72c-7b9c6290b370@google.com/
series of 31 posted on 2023-05-21.

Those two series were "independent": neither depends on the other for
build or correctness, but both are needed before this third one can
safely make its effective changes.  I'll send v2 of those two series in
a couple of days, incorporating Acks and Reviewed-bys and the minor fixes.

What is it all about?  Some mmap_lock avoidance, i.e. latency reduction.
Initially just for the case of collapsing shmem or file pages to THPs:
the usefulness of MADV_COLLAPSE on shmem is limited by that
mmap_write_lock it currently requires.

Likely to be relied upon later in other contexts e.g. freeing of
empty page tables (but that's not work I'm doing).  mmap_write_lock
avoidance when collapsing to anon THPs?  Perhaps, but again that's not
work I've done: a quick attempt was not as easy as the shmem/file case.

These changes (though of course not these exact patches) have been in
Google's data centre kernel for three years now: we do rely upon them.

Based on the preceding two series over v6.4-rc3, but good over
v6.4-rc[1-4], current mm-everything or current linux-next.

01/12 mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
02/12 mm/pgtable: add PAE safety to __pte_offset_map()
03/12 arm: adjust_pte() use pte_offset_map_nolock()
04/12 powerpc: assert_pte_locked() use pte_offset_map_nolock()
05/12 powerpc: add pte_free_defer() for pgtables sharing page
06/12 sparc: add pte_free_defer() for pgtables sharing page
07/12 s390: add pte_free_defer(), with use of mmdrop_async()
08/12 mm/pgtable: add pte_free_defer() for pgtable as page
09/12 mm/khugepaged: retract_page_tables() without mmap or vma lock
10/12 mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
11/12 mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps()
12/12 mm: delete mmap_write_trylock() and vma_try_start_write()

 arch/arm/mm/fault-armv.c            |   3 +-
 arch/powerpc/include/asm/pgalloc.h  |   4 +
 arch/powerpc/mm/pgtable-frag.c      |  18 ++
 arch/powerpc/mm/pgtable.c           |  16 +-
 arch/s390/include/asm/pgalloc.h     |   4 +
 arch/s390/mm/pgalloc.c              |  34 +++
 arch/sparc/include/asm/pgalloc_64.h |   4 +
 arch/sparc/mm/init_64.c             |  16 ++
 include/linux/mm.h                  |  17 --
 include/linux/mm_types.h            |   2 +-
 include/linux/mmap_lock.h           |  10 -
 include/linux/pgtable.h             |   6 +-
 include/linux/sched/mm.h            |   1 +
 kernel/fork.c                       |   2 +-
 mm/khugepaged.c                     | 425 ++++++++----------------------
 mm/pgtable-generic.c                |  44 +++-
 16 files changed, 253 insertions(+), 353 deletions(-)

Hugh



* [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
@ 2023-05-29  6:14 ` Hugh Dickins
  2023-05-31 17:06   ` Jann Horn
  2023-05-29  6:16 ` [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map() Hugh Dickins
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Before putting them to use (several commits later), add rcu_read_lock()
to pte_offset_map(), and rcu_read_unlock() to pte_unmap().  Make this a
separate commit, since it risks exposing imbalances: prior commits have
fixed all the known imbalances, but we may find some have been missed.
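
As an illustration of the balance now required (a sketch only, not taken
from a real call site; some_condition() and the return values are
placeholders): every successful pte_offset_map() now enters an RCU
read-side critical section, so each must be paired with exactly one
pte_unmap(), including on early-exit paths.

	pte = pte_offset_map(pmd, addr);	/* rcu_read_lock() on success */
	if (!pte)
		return 0;			/* not a page table: nothing mapped */
	if (some_condition(pte)) {
		pte_unmap(pte);			/* early exit still unmaps */
		return 0;
	}
	/* ... read pte entries; do not sleep while mapped ... */
	pte_unmap(pte);				/* rcu_read_unlock() */
	return 1;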

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/pgtable.h | 4 ++--
 mm/pgtable-generic.c    | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a1326e61d7ee..8b0fc7fdc46f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -99,7 +99,7 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
 	((pte_t *)kmap_local_page(pmd_page(*(pmd))) + pte_index((address)))
 #define pte_unmap(pte)	do {	\
 	kunmap_local((pte));	\
-	/* rcu_read_unlock() to be added later */	\
+	rcu_read_unlock();	\
 } while (0)
 #else
 static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
@@ -108,7 +108,7 @@ static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
 }
 static inline void pte_unmap(pte_t *pte)
 {
-	/* rcu_read_unlock() to be added later */
+	rcu_read_unlock();
 }
 #endif
 
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index c7ab18a5fb77..674671835631 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
 	pmd_t pmdval;
 
-	/* rcu_read_lock() to be added later */
+	rcu_read_lock();
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmdvalp)
 		*pmdvalp = pmdval;
@@ -250,7 +250,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 	}
 	return __pte_map(&pmdval, addr);
 nomap:
-	/* rcu_read_unlock() to be added later */
+	rcu_read_unlock();
 	return NULL;
 }
 
-- 
2.35.3




* [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
  2023-05-29  6:14 ` [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s Hugh Dickins
@ 2023-05-29  6:16 ` Hugh Dickins
  2023-05-29 13:56   ` Matthew Wilcox
  2023-05-29  6:17 ` [PATCH 03/12] arm: adjust_pte() use pte_offset_map_nolock() Hugh Dickins
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

There is a faint risk that __pte_offset_map(), on a 32-bit architecture
with a 64-bit pmd_t e.g. x86-32 with CONFIG_X86_PAE=y, would succeed on
a pmdval assembled from a pmd_low and a pmd_high which never belonged
together: their combination not pointing to a page table at all, perhaps
not even a valid pfn.  pmdp_get_lockless() is not enough to prevent that.

Guard against that (on such configs) by local_irq_save() blocking TLB
flush between present updates, as linux/pgtable.h suggests.  It's only
needed around the pmdp_get_lockless() in __pte_offset_map(): a race when
__pte_offset_map_lock() repeats the pmdp_get_lockless() after getting
the lock would just send it back to __pte_offset_map() again.

CONFIG_GUP_GET_PXX_LOW_HIGH is enabled when required by mips, sh and x86.
It is not enabled by arm-32 CONFIG_ARM_LPAE: my understanding is that
Will Deacon's 2020 enhancements to READ_ONCE() are sufficient for arm.
It is not enabled by arc, but its pmd_t is 32-bit even when pte_t is 64-bit.

Limit the IRQ disablement to CONFIG_HIGHPTE?  Perhaps, but that would
need a little more work: retry if pmd_low looks good for a page table,
but pmd_high is non-zero from a THP (and that might be making
x86-specific assumptions).
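
To spell out the hazard (a schematic sketch only: the field names are
illustrative, not the real pmd_t layout on any architecture):

	pmd_low  = READ_ONCE(pmdp->pmd_low);	/* half of the old entry */
		/* ... a racing transition rewrites *pmdp here ... */
	pmd_high = READ_ONCE(pmdp->pmd_high);	/* half of the new entry */
	/*
	 * The combined 64-bit value never existed in the page table: it
	 * may not point to a page table at all, perhaps not even to a
	 * valid pfn.  With interrupts off, the TLB flush which such a
	 * transition requires cannot complete between the two reads.
	 */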

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/pgtable-generic.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 674671835631..d28b63386cef 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -232,12 +232,32 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 #endif
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+#if defined(CONFIG_GUP_GET_PXX_LOW_HIGH) && \
+	(defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RCU))
+/*
+ * See the comment above ptep_get_lockless() in include/linux/pgtable.h:
+ * the barriers in pmdp_get_lockless() cannot guarantee that the value in
+ * pmd_high actually belongs with the value in pmd_low; but holding interrupts
+ * off blocks the TLB flush between present updates, which guarantees that a
+ * successful __pte_offset_map() points to a page from matched halves.
+ */
+#define config_might_irq_save(flags)	local_irq_save(flags)
+#define config_might_irq_restore(flags)	local_irq_restore(flags)
+#else
+#define config_might_irq_save(flags)
+#define config_might_irq_restore(flags)
+#endif
+
 pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
+	unsigned long __maybe_unused flags;
 	pmd_t pmdval;
 
 	rcu_read_lock();
+	config_might_irq_save(flags);
 	pmdval = pmdp_get_lockless(pmd);
+	config_might_irq_restore(flags);
+
 	if (pmdvalp)
 		*pmdvalp = pmdval;
 	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
-- 
2.35.3




* [PATCH 03/12] arm: adjust_pte() use pte_offset_map_nolock()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
  2023-05-29  6:14 ` [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s Hugh Dickins
  2023-05-29  6:16 ` [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map() Hugh Dickins
@ 2023-05-29  6:17 ` Hugh Dickins
  2023-05-29  6:18 ` [PATCH 04/12] powerpc: assert_pte_locked() " Hugh Dickins
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:17 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Instead of pte_lockptr(), use the recently added pte_offset_map_nolock()
in adjust_pte(): because it gives the not-locked ptl for precisely that
pte, which the caller can then safely lock; whereas pte_lockptr() is not
so tightly coupled, because it dereferences the pmd pointer again.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 arch/arm/mm/fault-armv.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index ca5302b0b7ee..7cb125497976 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -117,11 +117,10 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
 	 * must use the nested version.  This also means we need to
 	 * open-code the spin-locking.
 	 */
-	pte = pte_offset_map(pmd, address);
+	pte = pte_offset_map_nolock(vma->vm_mm, pmd, address, &ptl);
 	if (!pte)
 		return 0;
 
-	ptl = pte_lockptr(vma->vm_mm, pmd);
 	do_pte_lock(ptl);
 
 	ret = do_adjust_pte(vma, address, pfn, pte);
-- 
2.35.3




* [PATCH 04/12] powerpc: assert_pte_locked() use pte_offset_map_nolock()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (2 preceding siblings ...)
  2023-05-29  6:17 ` [PATCH 03/12] arm: adjust_pte() use pte_offset_map_nolock() Hugh Dickins
@ 2023-05-29  6:18 ` Hugh Dickins
  2023-05-29  6:20 ` [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page Hugh Dickins
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Instead of pte_lockptr(), use the recently added pte_offset_map_nolock()
in assert_pte_locked().  BUG if pte_offset_map_nolock() fails: this is
stricter than the previous implementation, which skipped when pmd_none()
(with a comment on khugepaged collapse transitions); but wouldn't we want
to know if an assert_pte_locked() caller can be racing such transitions?

This mod might cause new crashes: which either expose my ignorance, or
indicate issues to be fixed, or limit the usage of assert_pte_locked().

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 arch/powerpc/mm/pgtable.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index cb2dcdb18f8e..16b061af86d7 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -311,6 +311,8 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd;
+	pte_t *pte;
+	spinlock_t *ptl;
 
 	if (mm == &init_mm)
 		return;
@@ -321,16 +323,10 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 	pud = pud_offset(p4d, addr);
 	BUG_ON(pud_none(*pud));
 	pmd = pmd_offset(pud, addr);
-	/*
-	 * khugepaged to collapse normal pages to hugepage, first set
-	 * pmd to none to force page fault/gup to take mmap_lock. After
-	 * pmd is set to none, we do a pte_clear which does this assertion
-	 * so if we find pmd none, return.
-	 */
-	if (pmd_none(*pmd))
-		return;
-	BUG_ON(!pmd_present(*pmd));
-	assert_spin_locked(pte_lockptr(mm, pmd));
+	pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
+	BUG_ON(!pte);
+	assert_spin_locked(ptl);
+	pte_unmap(pte);
 }
 #endif /* CONFIG_DEBUG_VM */
 
-- 
2.35.3




* [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (3 preceding siblings ...)
  2023-05-29  6:18 ` [PATCH 04/12] powerpc: assert_pte_locked() " Hugh Dickins
@ 2023-05-29  6:20 ` Hugh Dickins
  2023-05-29 14:02   ` Matthew Wilcox
  2023-05-29  6:21 ` [PATCH 06/12] sparc: " Hugh Dickins
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Add powerpc-specific pte_free_defer(), to call pte_free() via call_rcu().
pte_free_defer() will be called inside khugepaged's retract_page_tables()
loop, where allocating extra memory cannot be relied upon.  This precedes
the generic version to avoid build breakage from incompatible pgtable_t.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 arch/powerpc/include/asm/pgalloc.h |  4 ++++
 arch/powerpc/mm/pgtable-frag.c     | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index 3360cad78ace..3a971e2a8c73 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -45,6 +45,10 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 	pte_fragment_free((unsigned long *)ptepage, 0);
 }
 
+/* arch use pte_free_defer() implementation in arch/powerpc/mm/pgtable-frag.c */
+#define pte_free_defer pte_free_defer
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+
 /*
  * Functions that deal with pagetables that could be at any level of
  * the table need to be passed an "index_size" so they know how to
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
index 20652daa1d7e..3a3dac77faf2 100644
--- a/arch/powerpc/mm/pgtable-frag.c
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -120,3 +120,21 @@ void pte_fragment_free(unsigned long *table, int kernel)
 		__free_page(page);
 	}
 }
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void pte_free_now(struct rcu_head *head)
+{
+	struct page *page;
+
+	page = container_of(head, struct page, rcu_head);
+	pte_fragment_free((unsigned long *)page_to_virt(page), 0);
+}
+
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+{
+	struct page *page;
+
+	page = virt_to_page(pgtable);
+	call_rcu(&page->rcu_head, pte_free_now);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-- 
2.35.3




* [PATCH 06/12] sparc: add pte_free_defer() for pgtables sharing page
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (4 preceding siblings ...)
  2023-05-29  6:20 ` [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page Hugh Dickins
@ 2023-05-29  6:21 ` Hugh Dickins
  2023-05-29  6:22 ` [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async() Hugh Dickins
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Add sparc-specific pte_free_defer(), to call pte_free() via call_rcu().
pte_free_defer() will be called inside khugepaged's retract_page_tables()
loop, where allocating extra memory cannot be relied upon.  This precedes
the generic version to avoid build breakage from incompatible pgtable_t.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 arch/sparc/include/asm/pgalloc_64.h |  4 ++++
 arch/sparc/mm/init_64.c             | 16 ++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/arch/sparc/include/asm/pgalloc_64.h b/arch/sparc/include/asm/pgalloc_64.h
index 7b5561d17ab1..caa7632be4c2 100644
--- a/arch/sparc/include/asm/pgalloc_64.h
+++ b/arch/sparc/include/asm/pgalloc_64.h
@@ -65,6 +65,10 @@ pgtable_t pte_alloc_one(struct mm_struct *mm);
 void pte_free_kernel(struct mm_struct *mm, pte_t *pte);
 void pte_free(struct mm_struct *mm, pgtable_t ptepage);
 
+/* arch use pte_free_defer() implementation in arch/sparc/mm/init_64.c */
+#define pte_free_defer pte_free_defer
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+
 #define pmd_populate_kernel(MM, PMD, PTE)	pmd_set(MM, PMD, PTE)
 #define pmd_populate(MM, PMD, PTE)		pmd_set(MM, PMD, PTE)
 
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04f9db0c3111..b7c6aa085ef6 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2930,6 +2930,22 @@ void pgtable_free(void *table, bool is_page)
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void pte_free_now(struct rcu_head *head)
+{
+	struct page *page;
+
+	page = container_of(head, struct page, rcu_head);
+	__pte_free((pgtable_t)page_to_virt(page));
+}
+
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+{
+	struct page *page;
+
+	page = virt_to_page(pgtable);
+	call_rcu(&page->rcu_head, pte_free_now);
+}
+
 void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 			  pmd_t *pmd)
 {
-- 
2.35.3




* [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (5 preceding siblings ...)
  2023-05-29  6:21 ` [PATCH 06/12] sparc: " Hugh Dickins
@ 2023-05-29  6:22 ` Hugh Dickins
       [not found]   ` <175ebec8-761-c3f-2d98-6c3bd87161c8@google.com>
  2023-05-29  6:23 ` [PATCH 08/12] mm/pgtable: add pte_free_defer() for pgtable as page Hugh Dickins
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Add s390-specific pte_free_defer(), to call pte_free() via call_rcu().
pte_free_defer() will be called inside khugepaged's retract_page_tables()
loop, where allocating extra memory cannot be relied upon.  This precedes
the generic version to avoid build breakage from incompatible pgtable_t.

This version is more complicated than the others, because page_table_free()
needs to know which fragment is being freed, and which mm to link it to.

page_table_free()'s fragment handling is clever, but I could too easily
break it: what's done here in pte_free_defer() and pte_free_now() might
be better integrated with page_table_free()'s cleverness, but not by me!

By the time that page_table_free() gets called via RCU, it's conceivable
that mm would already have been freed: so mmgrab() in pte_free_defer(),
and mmdrop() in pte_free_now().  But RCU callback context is not a good
context to call mmdrop() from, so make mmdrop_async() public and use that.
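
For concreteness, a worked example of the encoding used below (assuming,
as on s390, 256 pte entries of 8 bytes, so 2K per fragment and two
fragments per 4K page; mm_struct alignment leaves the low 3 bits free):

	/* encode: pgtable at offset 0x800 in its page, i.e. fragment 1 */
	mm_bit  = ((unsigned long)pgtable & ~PAGE_MASK) /
			(PTRS_PER_PTE * sizeof(pte_t));		/* == 1 */
	mm_bit += (unsigned long)mm;
	page->pt_mm = (struct mm_struct *)mm_bit;

	/* decode, in the RCU callback */
	mm    = (struct mm_struct *)(mm_bit & ~7);
	table = (unsigned long *)page_to_virt(page) + PTRS_PER_PTE * (mm_bit & 7);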

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 arch/s390/include/asm/pgalloc.h |  4 ++++
 arch/s390/mm/pgalloc.c          | 34 +++++++++++++++++++++++++++++++++
 include/linux/mm_types.h        |  2 +-
 include/linux/sched/mm.h        |  1 +
 kernel/fork.c                   |  2 +-
 5 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 17eb618f1348..89a9d5ef94f8 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -143,6 +143,10 @@ static inline void pmd_populate(struct mm_struct *mm,
 #define pte_free_kernel(mm, pte) page_table_free(mm, (unsigned long *) pte)
 #define pte_free(mm, pte) page_table_free(mm, (unsigned long *) pte)
 
+/* arch use pte_free_defer() implementation in arch/s390/mm/pgalloc.c */
+#define pte_free_defer pte_free_defer
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+
 void vmem_map_init(void);
 void *vmem_crst_alloc(unsigned long val);
 pte_t *vmem_pte_alloc(void);
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 66ab68db9842..0129de9addfd 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -346,6 +346,40 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 	__free_page(page);
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static void pte_free_now(struct rcu_head *head)
+{
+	struct page *page;
+	unsigned long mm_bit;
+	struct mm_struct *mm;
+	unsigned long *table;
+
+	page = container_of(head, struct page, rcu_head);
+	table = (unsigned long *)page_to_virt(page);
+	mm_bit = (unsigned long)page->pt_mm;
+	/* 4K page has only two 2K fragments, but alignment allows eight */
+	mm = (struct mm_struct *)(mm_bit & ~7);
+	table += PTRS_PER_PTE * (mm_bit & 7);
+	page_table_free(mm, table);
+	mmdrop_async(mm);
+}
+
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+{
+	struct page *page;
+	unsigned long mm_bit;
+
+	mmgrab(mm);
+	page = virt_to_page(pgtable);
+	/* Which 2K page table fragment of a 4K page? */
+	mm_bit = ((unsigned long)pgtable & ~PAGE_MASK) /
+			(PTRS_PER_PTE * sizeof(pte_t));
+	mm_bit += (unsigned long)mm;
+	page->pt_mm = (struct mm_struct *)mm_bit;
+	call_rcu(&page->rcu_head, pte_free_now);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..1667a1bdb8a8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -146,7 +146,7 @@ struct page {
 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
 			unsigned long _pt_pad_2;	/* mapping */
 			union {
-				struct mm_struct *pt_mm; /* x86 pgds only */
+				struct mm_struct *pt_mm; /* x86 pgd, s390 */
 				atomic_t pt_frag_refcount; /* powerpc */
 			};
 #if ALLOC_SPLIT_PTLOCKS
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 8d89c8c4fac1..a9043d1a0d55 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -41,6 +41,7 @@ static inline void smp_mb__after_mmgrab(void)
 	smp_mb__after_atomic();
 }
 
+extern void mmdrop_async(struct mm_struct *mm);
 extern void __mmdrop(struct mm_struct *mm);
 
 static inline void mmdrop(struct mm_struct *mm)
diff --git a/kernel/fork.c b/kernel/fork.c
index ed4e01daccaa..fa4486b65c56 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -942,7 +942,7 @@ static void mmdrop_async_fn(struct work_struct *work)
 	__mmdrop(mm);
 }
 
-static void mmdrop_async(struct mm_struct *mm)
+void mmdrop_async(struct mm_struct *mm)
 {
 	if (unlikely(atomic_dec_and_test(&mm->mm_count))) {
 		INIT_WORK(&mm->async_put_work, mmdrop_async_fn);
-- 
2.35.3




* [PATCH 08/12] mm/pgtable: add pte_free_defer() for pgtable as page
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (6 preceding siblings ...)
  2023-05-29  6:22 ` [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async() Hugh Dickins
@ 2023-05-29  6:23 ` Hugh Dickins
  2023-05-29  6:25 ` [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock Hugh Dickins
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Add the generic pte_free_defer(), to call pte_free() via call_rcu().
pte_free_defer() will be called inside khugepaged's retract_page_tables()
loop, where allocating extra memory cannot be relied upon.  This version
suits all those architectures which use an unfragmented page for one page
table (none of whose pte_free()s use the mm arg which was passed to it).

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/pgtable.h |  2 ++
 mm/pgtable-generic.c    | 20 ++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8b0fc7fdc46f..62a8732d92f0 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -112,6 +112,8 @@ static inline void pte_unmap(pte_t *pte)
 }
 #endif
 
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable);
+
 /* Find an entry in the second-level page table.. */
 #ifndef pmd_offset
 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index d28b63386cef..471697dcb244 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -13,6 +13,7 @@
 #include <linux/swap.h>
 #include <linux/swapops.h>
 #include <linux/mm_inline.h>
+#include <asm/pgalloc.h>
 #include <asm/tlb.h>
 
 /*
@@ -230,6 +231,25 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 	return pmd;
 }
 #endif
+
+/* arch define pte_free_defer in asm/pgalloc.h for its own implementation */
+#ifndef pte_free_defer
+static void pte_free_now(struct rcu_head *head)
+{
+	struct page *page;
+
+	page = container_of(head, struct page, rcu_head);
+	pte_free(NULL /* mm not passed and not used */, (pgtable_t)page);
+}
+
+void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
+{
+	struct page *page;
+
+	page = pgtable;
+	call_rcu(&page->rcu_head, pte_free_now);
+}
+#endif /* pte_free_defer */
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #if defined(CONFIG_GUP_GET_PXX_LOW_HIGH) && \
-- 
2.35.3




* [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (7 preceding siblings ...)
  2023-05-29  6:23 ` [PATCH 08/12] mm/pgtable: add pte_free_defer() for pgtable as page Hugh Dickins
@ 2023-05-29  6:25 ` Hugh Dickins
  2023-05-29 23:26   ` Peter Xu
  2023-05-31 15:34   ` Jann Horn
  2023-05-29  6:26 ` [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock() Hugh Dickins
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Simplify shmem and file THP collapse's retract_page_tables(), and relax
its locking: to improve its success rate and to lessen impact on others.

Instead of its MADV_COLLAPSE case doing set_huge_pmd() at target_addr of
target_mm, leave that part of the work to madvise_collapse() calling
collapse_pte_mapped_thp() afterwards: just adjust collapse_file()'s
result code to arrange for that.  That spares retract_page_tables() four
arguments; and since it will be successful in retracting all of the page
tables expected of it, no need to track and return a result code itself.

It needs i_mmap_lock_read(mapping) for traversing the vma interval tree,
but it does not need i_mmap_lock_write() for that: page_vma_mapped_walk()
allows for pte_offset_map_lock() etc to fail, and uses pmd_lock() for
THPs.  retract_page_tables() just needs to use those same spinlocks to
exclude it briefly, while transitioning pmd from page table to none: so
restore its use of pmd_lock() inside of which pte lock is nested.

Users of pte_offset_map_lock() etc all now allow for them to fail:
so retract_page_tables() now has no use for mmap_write_trylock() or
vma_try_start_write().  In common with rmap and page_vma_mapped_walk(),
it does not even need the mmap_read_lock().

But those users do expect the page table to remain a good page table,
until they unlock and rcu_read_unlock(): so the page table cannot be
freed immediately, but rather by the recently added pte_free_defer().

retract_page_tables() can be enhanced to replace_page_tables(), which
inserts the final huge pmd without mmap lock: going through an invalid
state instead of pmd_none() followed by fault.  But that does raise some
questions, and requires a more complicated pte_free_defer() for powerpc
(when its arch_needs_pgtable_deposit() for shmem and file THPs).  Leave
that enhancement to a later release.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/khugepaged.c | 169 +++++++++++++++++-------------------------------
 1 file changed, 60 insertions(+), 109 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1083f0e38a07..4fd408154692 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1617,9 +1617,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		break;
 	case SCAN_PMD_NONE:
 		/*
-		 * In MADV_COLLAPSE path, possible race with khugepaged where
-		 * all pte entries have been removed and pmd cleared.  If so,
-		 * skip all the pte checks and just update the pmd mapping.
+		 * All pte entries have been removed and pmd cleared.
+		 * Skip all the pte checks and just update the pmd mapping.
 		 */
 		goto maybe_install_pmd;
 	default:
@@ -1748,123 +1747,73 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
 	mmap_write_unlock(mm);
 }
 
-static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
-			       struct mm_struct *target_mm,
-			       unsigned long target_addr, struct page *hpage,
-			       struct collapse_control *cc)
+static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 {
 	struct vm_area_struct *vma;
-	int target_result = SCAN_FAIL;
 
-	i_mmap_lock_write(mapping);
+	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
-		int result = SCAN_FAIL;
-		struct mm_struct *mm = NULL;
-		unsigned long addr = 0;
-		pmd_t *pmd;
-		bool is_target = false;
+		struct mm_struct *mm;
+		unsigned long addr;
+		pmd_t *pmd, pgt_pmd;
+		spinlock_t *pml;
+		spinlock_t *ptl;
 
 		/*
 		 * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
-		 * got written to. These VMAs are likely not worth investing
-		 * mmap_write_lock(mm) as PMD-mapping is likely to be split
-		 * later.
+		 * got written to. These VMAs are likely not worth removing
+		 * page tables from, as PMD-mapping is likely to be split later.
 		 *
-		 * Note that vma->anon_vma check is racy: it can be set up after
-		 * the check but before we took mmap_lock by the fault path.
-		 * But page lock would prevent establishing any new ptes of the
-		 * page, so we are safe.
-		 *
-		 * An alternative would be drop the check, but check that page
-		 * table is clear before calling pmdp_collapse_flush() under
-		 * ptl. It has higher chance to recover THP for the VMA, but
-		 * has higher cost too. It would also probably require locking
-		 * the anon_vma.
+		 * Note that vma->anon_vma check is racy: it can be set after
+		 * the check, but page locks (with XA_RETRY_ENTRYs in holes)
+		 * prevented establishing new ptes of the page. So we are safe
+		 * to remove page table below, without even checking it's empty.
 		 */
-		if (READ_ONCE(vma->anon_vma)) {
-			result = SCAN_PAGE_ANON;
-			goto next;
-		}
+		if (READ_ONCE(vma->anon_vma))
+			continue;
+
 		addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		if (addr & ~HPAGE_PMD_MASK ||
-		    vma->vm_end < addr + HPAGE_PMD_SIZE) {
-			result = SCAN_VMA_CHECK;
-			goto next;
-		}
-		mm = vma->vm_mm;
-		is_target = mm == target_mm && addr == target_addr;
-		result = find_pmd_or_thp_or_none(mm, addr, &pmd);
-		if (result != SCAN_SUCCEED)
-			goto next;
-		/*
-		 * We need exclusive mmap_lock to retract page table.
-		 *
-		 * We use trylock due to lock inversion: we need to acquire
-		 * mmap_lock while holding page lock. Fault path does it in
-		 * reverse order. Trylock is a way to avoid deadlock.
-		 *
-		 * Also, it's not MADV_COLLAPSE's job to collapse other
-		 * mappings - let khugepaged take care of them later.
-		 */
-		result = SCAN_PTE_MAPPED_HUGEPAGE;
-		if ((cc->is_khugepaged || is_target) &&
-		    mmap_write_trylock(mm)) {
-			/* trylock for the same lock inversion as above */
-			if (!vma_try_start_write(vma))
-				goto unlock_next;
-
-			/*
-			 * Re-check whether we have an ->anon_vma, because
-			 * collapse_and_free_pmd() requires that either no
-			 * ->anon_vma exists or the anon_vma is locked.
-			 * We already checked ->anon_vma above, but that check
-			 * is racy because ->anon_vma can be populated under the
-			 * mmap lock in read mode.
-			 */
-			if (vma->anon_vma) {
-				result = SCAN_PAGE_ANON;
-				goto unlock_next;
-			}
-			/*
-			 * When a vma is registered with uffd-wp, we can't
-			 * recycle the pmd pgtable because there can be pte
-			 * markers installed.  Skip it only, so the rest mm/vma
-			 * can still have the same file mapped hugely, however
-			 * it'll always mapped in small page size for uffd-wp
-			 * registered ranges.
-			 */
-			if (hpage_collapse_test_exit(mm)) {
-				result = SCAN_ANY_PROCESS;
-				goto unlock_next;
-			}
-			if (userfaultfd_wp(vma)) {
-				result = SCAN_PTE_UFFD_WP;
-				goto unlock_next;
-			}
-			collapse_and_free_pmd(mm, vma, addr, pmd);
-			if (!cc->is_khugepaged && is_target)
-				result = set_huge_pmd(vma, addr, pmd, hpage);
-			else
-				result = SCAN_SUCCEED;
-
-unlock_next:
-			mmap_write_unlock(mm);
-			goto next;
-		}
-		/*
-		 * Calling context will handle target mm/addr. Otherwise, let
-		 * khugepaged try again later.
-		 */
-		if (!is_target) {
-			khugepaged_add_pte_mapped_thp(mm, addr);
+		    vma->vm_end < addr + HPAGE_PMD_SIZE)
 			continue;
-		}
-next:
-		if (is_target)
-			target_result = result;
+
+		mm = vma->vm_mm;
+		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
+			continue;
+
+		if (hpage_collapse_test_exit(mm))
+			continue;
+		/*
+		 * When a vma is registered with uffd-wp, we cannot recycle
+		 * the page table because there may be pte markers installed.
+		 * Other vmas can still have the same file mapped hugely, but
+		 * skip this one: it will always be mapped in small page size
+		 * for uffd-wp registered ranges.
+		 *
+		 * What if VM_UFFD_WP is set a moment after this check?  No
+		 * problem, huge page lock is still held, stopping new mappings
+		 * of page which might then get replaced by pte markers: only
+		 * existing markers need to be protected here.  (We could check
+		 * after getting ptl below, but this comment distracting there!)
+		 */
+		if (userfaultfd_wp(vma))
+			continue;
+
+		/* Huge page lock is still held, so page table must be empty */
+		pml = pmd_lock(mm, pmd);
+		ptl = pte_lockptr(mm, pmd);
+		if (ptl != pml)
+			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+		pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
+		if (ptl != pml)
+			spin_unlock(ptl);
+		spin_unlock(pml);
+
+		mm_dec_nr_ptes(mm);
+		page_table_check_pte_clear_range(mm, addr, pgt_pmd);
+		pte_free_defer(mm, pmd_pgtable(pgt_pmd));
 	}
-	i_mmap_unlock_write(mapping);
-	return target_result;
+	i_mmap_unlock_read(mapping);
 }
 
 /**
@@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 
 	/*
 	 * Remove pte page tables, so we can re-fault the page as huge.
+	 * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
 	 */
-	result = retract_page_tables(mapping, start, mm, addr, hpage,
-				     cc);
+	retract_page_tables(mapping, start);
+	if (cc && !cc->is_khugepaged)
+		result = SCAN_PTE_MAPPED_HUGEPAGE;
 	unlock_page(hpage);
 
 	/*
-- 
2.35.3




* [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (8 preceding siblings ...)
  2023-05-29  6:25 ` [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock Hugh Dickins
@ 2023-05-29  6:26 ` Hugh Dickins
  2023-05-31 17:25   ` Jann Horn
  2023-05-29  6:28 ` [PATCH 11/12] mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps() Hugh Dickins
  2023-05-29  6:30 ` [PATCH 12/12] mm: delete mmap_write_trylock() and vma_try_start_write() Hugh Dickins
  11 siblings, 1 reply; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:26 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
It does need mmap_read_lock(), but it does not need mmap_write_lock(),
nor vma_start_write() nor i_mmap lock nor anon_vma lock.  All racing
paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.
Follow the pattern in retract_page_tables(); and using pte_free_defer()
removes the need for tlb_remove_table_sync_one() here.

Confirm the preliminary find_pmd_or_thp_or_none() once page lock has been
acquired and the page looks suitable: from then on its state is stable.

However, collapse_pte_mapped_thp() was doing something others don't:
freeing a page table still containing "valid" entries.  i_mmap lock did
stop a racing truncate from double-freeing those pages, but we prefer
collapse_pte_mapped_thp() to clear the entries as usual.  Their TLB
flush can wait until the pmdp_collapse_flush() which follows, but the
mmu_notifier_invalidate_range_start() has to be done earlier.

Some cleanup while rearranging: rename "count" to "nr_ptes";
and "step 2" does not need to duplicate the checks in "step 1".

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/khugepaged.c | 131 +++++++++++++++---------------------------------
 1 file changed, 41 insertions(+), 90 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4fd408154692..2999500abdd5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1485,7 +1485,7 @@ static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
 	return ret;
 }
 
-/* hpage must be locked, and mmap_lock must be held in write */
+/* hpage must be locked, and mmap_lock must be held */
 static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 			pmd_t *pmdp, struct page *hpage)
 {
@@ -1497,7 +1497,7 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 	};
 
 	VM_BUG_ON(!PageTransHuge(hpage));
-	mmap_assert_write_locked(vma->vm_mm);
+	mmap_assert_locked(vma->vm_mm);
 
 	if (do_set_pmd(&vmf, hpage))
 		return SCAN_FAIL;
@@ -1506,48 +1506,6 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 	return SCAN_SUCCEED;
 }
 
-/*
- * A note about locking:
- * Trying to take the page table spinlocks would be useless here because those
- * are only used to synchronize:
- *
- *  - modifying terminal entries (ones that point to a data page, not to another
- *    page table)
- *  - installing *new* non-terminal entries
- *
- * Instead, we need roughly the same kind of protection as free_pgtables() or
- * mm_take_all_locks() (but only for a single VMA):
- * The mmap lock together with this VMA's rmap locks covers all paths towards
- * the page table entries we're messing with here, except for hardware page
- * table walks and lockless_pages_from_mm().
- */
-static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
-				  unsigned long addr, pmd_t *pmdp)
-{
-	pmd_t pmd;
-	struct mmu_notifier_range range;
-
-	mmap_assert_write_locked(mm);
-	if (vma->vm_file)
-		lockdep_assert_held_write(&vma->vm_file->f_mapping->i_mmap_rwsem);
-	/*
-	 * All anon_vmas attached to the VMA have the same root and are
-	 * therefore locked by the same lock.
-	 */
-	if (vma->anon_vma)
-		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
-
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
-				addr + HPAGE_PMD_SIZE);
-	mmu_notifier_invalidate_range_start(&range);
-	pmd = pmdp_collapse_flush(vma, addr, pmdp);
-	tlb_remove_table_sync_one();
-	mmu_notifier_invalidate_range_end(&range);
-	mm_dec_nr_ptes(mm);
-	page_table_check_pte_clear_range(mm, addr, pmd);
-	pte_free(mm, pmd_pgtable(pmd));
-}
-
 /**
  * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at
  * address haddr.
@@ -1563,16 +1521,17 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 			    bool install_pmd)
 {
+	struct mmu_notifier_range range;
 	unsigned long haddr = addr & HPAGE_PMD_MASK;
 	struct vm_area_struct *vma = vma_lookup(mm, haddr);
 	struct page *hpage;
 	pte_t *start_pte, *pte;
-	pmd_t *pmd;
-	spinlock_t *ptl;
-	int count = 0, result = SCAN_FAIL;
+	pmd_t *pmd, pgt_pmd;
+	spinlock_t *pml, *ptl;
+	int nr_ptes = 0, result = SCAN_FAIL;
 	int i;
 
-	mmap_assert_write_locked(mm);
+	mmap_assert_locked(mm);
 
 	/* Fast check before locking page if already PMD-mapped */
 	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
@@ -1612,6 +1571,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
+	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
 	switch (result) {
 	case SCAN_SUCCEED:
 		break;
@@ -1625,27 +1585,14 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto drop_hpage;
 	}
 
-	/* Lock the vma before taking i_mmap and page table locks */
-	vma_start_write(vma);
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+				haddr, haddr + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
 
-	/*
-	 * We need to lock the mapping so that from here on, only GUP-fast and
-	 * hardware page walks can access the parts of the page tables that
-	 * we're operating on.
-	 * See collapse_and_free_pmd().
-	 */
-	i_mmap_lock_write(vma->vm_file->f_mapping);
-
-	/*
-	 * This spinlock should be unnecessary: Nobody else should be accessing
-	 * the page tables under spinlock protection here, only
-	 * lockless_pages_from_mm() and the hardware page walker can access page
-	 * tables while all the high-level locks are held in write mode.
-	 */
 	result = SCAN_FAIL;
 	start_pte = pte_offset_map_lock(mm, pmd, haddr, &ptl);
-	if (!start_pte)
-		goto drop_immap;
+	if (!start_pte)		/* mmap_lock + page lock should prevent this */
+		goto abort;
 
 	/* step 1: check all mapped PTEs are to the right huge page */
 	for (i = 0, addr = haddr, pte = start_pte;
@@ -1671,40 +1618,44 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		 */
 		if (hpage + i != page)
 			goto abort;
-		count++;
+		nr_ptes++;
 	}
 
-	/* step 2: adjust rmap */
+	/* step 2: clear page table and adjust rmap */
 	for (i = 0, addr = haddr, pte = start_pte;
 	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
-		struct page *page;
-
 		if (pte_none(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (WARN_ON_ONCE(page && is_zone_device_page(page)))
-			goto abort;
-		page_remove_rmap(page, vma, false);
+
+		/* Must clear entry, or a racing truncate may re-remove it */
+		pte_clear(mm, addr, pte);
+		page_remove_rmap(hpage + i, vma, false);
 	}
 
 	pte_unmap_unlock(start_pte, ptl);
 
 	/* step 3: set proper refcount and mm_counters. */
-	if (count) {
-		page_ref_sub(hpage, count);
-		add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -count);
+	if (nr_ptes) {
+		page_ref_sub(hpage, nr_ptes);
+		add_mm_counter(vma->vm_mm, mm_counter_file(hpage), -nr_ptes);
 	}
 
-	/* step 4: remove pte entries */
-	/* we make no change to anon, but protect concurrent anon page lookup */
-	if (vma->anon_vma)
-		anon_vma_lock_write(vma->anon_vma);
+	/* step 4: remove page table */
 
-	collapse_and_free_pmd(mm, vma, haddr, pmd);
+	/* Huge page lock is still held, so page table must remain empty */
+	pml = pmd_lock(mm, pmd);
+	if (ptl != pml)
+		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+	pgt_pmd = pmdp_collapse_flush(vma, haddr, pmd);
+	if (ptl != pml)
+		spin_unlock(ptl);
+	spin_unlock(pml);
 
-	if (vma->anon_vma)
-		anon_vma_unlock_write(vma->anon_vma);
-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+	mmu_notifier_invalidate_range_end(&range);
+
+	mm_dec_nr_ptes(mm);
+	page_table_check_pte_clear_range(mm, haddr, pgt_pmd);
+	pte_free_defer(mm, pmd_pgtable(pgt_pmd));
 
 maybe_install_pmd:
 	/* step 5: install pmd entry */
@@ -1718,9 +1669,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	return result;
 
 abort:
-	pte_unmap_unlock(start_pte, ptl);
-drop_immap:
-	i_mmap_unlock_write(vma->vm_file->f_mapping);
+	if (start_pte)
+		pte_unmap_unlock(start_pte, ptl);
+	mmu_notifier_invalidate_range_end(&range);
 	goto drop_hpage;
 }
 
@@ -2842,9 +2793,9 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		case SCAN_PTE_MAPPED_HUGEPAGE:
 			BUG_ON(mmap_locked);
 			BUG_ON(*prev);
-			mmap_write_lock(mm);
+			mmap_read_lock(mm);
 			result = collapse_pte_mapped_thp(mm, addr, true);
-			mmap_write_unlock(mm);
+			mmap_locked = true;
 			goto handle_result;
 		/* Whitelisted set of results where continuing OK */
 		case SCAN_PMD_NULL:
-- 
2.35.3




* [PATCH 11/12] mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (9 preceding siblings ...)
  2023-05-29  6:26 ` [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock() Hugh Dickins
@ 2023-05-29  6:28 ` Hugh Dickins
  2023-05-29  6:30 ` [PATCH 12/12] mm: delete mmap_write_trylock() and vma_try_start_write() Hugh Dickins
  11 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

Now that retract_page_tables() can retract page tables reliably, without
depending on trylocks, delete all the apparatus for khugepaged to try
again later: khugepaged_collapse_pte_mapped_thps() etc; and free up the
per-mm memory which was set aside for that in the khugepaged_mm_slot.

But one part of that is worth keeping: when hpage_collapse_scan_file()
found SCAN_PTE_MAPPED_HUGEPAGE, that address was noted in the mm_slot
to be tried for retraction later: catching, for example, page tables
where a reversible mprotect() of a portion had required splitting the
pmd, but which can now be recollapsed.  Call collapse_pte_mapped_thp()
directly in this case (why was it deferred before?  I assume an issue
with needing mmap_lock for write, but now it's only needed for read).

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/khugepaged.c | 125 +++++++-----------------------------------------
 1 file changed, 16 insertions(+), 109 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2999500abdd5..301c0e54a2ef 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -92,8 +92,6 @@ static __read_mostly DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
 
 static struct kmem_cache *mm_slot_cache __read_mostly;
 
-#define MAX_PTE_MAPPED_THP 8
-
 struct collapse_control {
 	bool is_khugepaged;
 
@@ -107,15 +105,9 @@ struct collapse_control {
 /**
  * struct khugepaged_mm_slot - khugepaged information per mm that is being scanned
  * @slot: hash lookup from mm to mm_slot
- * @nr_pte_mapped_thp: number of pte mapped THP
- * @pte_mapped_thp: address array corresponding pte mapped THP
  */
 struct khugepaged_mm_slot {
 	struct mm_slot slot;
-
-	/* pte-mapped THP in this mm */
-	int nr_pte_mapped_thp;
-	unsigned long pte_mapped_thp[MAX_PTE_MAPPED_THP];
 };
 
 /**
@@ -1441,50 +1433,6 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
 }
 
 #ifdef CONFIG_SHMEM
-/*
- * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
- * khugepaged should try to collapse the page table.
- *
- * Note that following race exists:
- * (1) khugepaged calls khugepaged_collapse_pte_mapped_thps() for mm_struct A,
- *     emptying the A's ->pte_mapped_thp[] array.
- * (2) MADV_COLLAPSE collapses some file extent with target mm_struct B, and
- *     retract_page_tables() finds a VMA in mm_struct A mapping the same extent
- *     (at virtual address X) and adds an entry (for X) into mm_struct A's
- *     ->pte-mapped_thp[] array.
- * (3) khugepaged calls khugepaged_collapse_scan_file() for mm_struct A at X,
- *     sees a pte-mapped THP (SCAN_PTE_MAPPED_HUGEPAGE) and adds an entry
- *     (for X) into mm_struct A's ->pte-mapped_thp[] array.
- * Thus, it's possible the same address is added multiple times for the same
- * mm_struct.  Should this happen, we'll simply attempt
- * collapse_pte_mapped_thp() multiple times for the same address, under the same
- * exclusive mmap_lock, and assuming the first call is successful, subsequent
- * attempts will return quickly (without grabbing any additional locks) when
- * a huge pmd is found in find_pmd_or_thp_or_none().  Since this is a cheap
- * check, and since this is a rare occurrence, the cost of preventing this
- * "multiple-add" is thought to be more expensive than just handling it, should
- * it occur.
- */
-static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
-					  unsigned long addr)
-{
-	struct khugepaged_mm_slot *mm_slot;
-	struct mm_slot *slot;
-	bool ret = false;
-
-	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
-
-	spin_lock(&khugepaged_mm_lock);
-	slot = mm_slot_lookup(mm_slots_hash, mm);
-	mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
-	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP)) {
-		mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
-		ret = true;
-	}
-	spin_unlock(&khugepaged_mm_lock);
-	return ret;
-}
-
 /* hpage must be locked, and mmap_lock must be held */
 static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 			pmd_t *pmdp, struct page *hpage)
@@ -1675,29 +1623,6 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	goto drop_hpage;
 }
 
-static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_slot)
-{
-	struct mm_slot *slot = &mm_slot->slot;
-	struct mm_struct *mm = slot->mm;
-	int i;
-
-	if (likely(mm_slot->nr_pte_mapped_thp == 0))
-		return;
-
-	if (!mmap_write_trylock(mm))
-		return;
-
-	if (unlikely(hpage_collapse_test_exit(mm)))
-		goto out;
-
-	for (i = 0; i < mm_slot->nr_pte_mapped_thp; i++)
-		collapse_pte_mapped_thp(mm, mm_slot->pte_mapped_thp[i], false);
-
-out:
-	mm_slot->nr_pte_mapped_thp = 0;
-	mmap_write_unlock(mm);
-}
-
 static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 {
 	struct vm_area_struct *vma;
@@ -2326,16 +2251,6 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 {
 	BUILD_BUG();
 }
-
-static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_slot)
-{
-}
-
-static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
-					  unsigned long addr)
-{
-	return false;
-}
 #endif
 
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
@@ -2365,7 +2280,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 		khugepaged_scan.mm_slot = mm_slot;
 	}
 	spin_unlock(&khugepaged_mm_lock);
-	khugepaged_collapse_pte_mapped_thps(mm_slot);
 
 	mm = slot->mm;
 	/*
@@ -2418,36 +2332,29 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 						khugepaged_scan.address);
 
 				mmap_read_unlock(mm);
-				*result = hpage_collapse_scan_file(mm,
-								   khugepaged_scan.address,
-								   file, pgoff, cc);
 				mmap_locked = false;
+				*result = hpage_collapse_scan_file(mm,
+					khugepaged_scan.address, file, pgoff, cc);
+				if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
+					mmap_read_lock(mm);
+					mmap_locked = true;
+					if (hpage_collapse_test_exit(mm)) {
+						fput(file);
+						goto breakouterloop;
+					}
+					*result = collapse_pte_mapped_thp(mm,
+						khugepaged_scan.address, false);
+					if (*result == SCAN_PMD_MAPPED)
+						*result = SCAN_SUCCEED;
+				}
 				fput(file);
 			} else {
 				*result = hpage_collapse_scan_pmd(mm, vma,
-								  khugepaged_scan.address,
-								  &mmap_locked,
-								  cc);
+					khugepaged_scan.address, &mmap_locked, cc);
 			}
-			switch (*result) {
-			case SCAN_PTE_MAPPED_HUGEPAGE: {
-				pmd_t *pmd;
 
-				*result = find_pmd_or_thp_or_none(mm,
-								  khugepaged_scan.address,
-								  &pmd);
-				if (*result != SCAN_SUCCEED)
-					break;
-				if (!khugepaged_add_pte_mapped_thp(mm,
-								   khugepaged_scan.address))
-					break;
-			} fallthrough;
-			case SCAN_SUCCEED:
+			if (*result == SCAN_SUCCEED)
 				++khugepaged_pages_collapsed;
-				break;
-			default:
-				break;
-			}
 
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 12/12] mm: delete mmap_write_trylock() and vma_try_start_write()
  2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
                   ` (10 preceding siblings ...)
  2023-05-29  6:28 ` [PATCH 11/12] mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps() Hugh Dickins
@ 2023-05-29  6:30 ` Hugh Dickins
  11 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29  6:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

mmap_write_trylock() and vma_try_start_write() were added just for
khugepaged, but now it has no use for them: delete.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/mm.h        | 17 -----------------
 include/linux/mmap_lock.h | 10 ----------
 2 files changed, 27 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3c2e56980853..9b24f8fbf899 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -690,21 +690,6 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	up_write(&vma->vm_lock->lock);
 }
 
-static inline bool vma_try_start_write(struct vm_area_struct *vma)
-{
-	int mm_lock_seq;
-
-	if (__is_vma_write_locked(vma, &mm_lock_seq))
-		return true;
-
-	if (!down_write_trylock(&vma->vm_lock->lock))
-		return false;
-
-	vma->vm_lock_seq = mm_lock_seq;
-	up_write(&vma->vm_lock->lock);
-	return true;
-}
-
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
@@ -730,8 +715,6 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
-static inline bool vma_try_start_write(struct vm_area_struct *vma)
-		{ return true; }
 static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma,
 				     bool detached) {}
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index aab8f1b28d26..d1191f02c7fa 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -112,16 +112,6 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)
 	return ret;
 }
 
-static inline bool mmap_write_trylock(struct mm_struct *mm)
-{
-	bool ret;
-
-	__mmap_lock_trace_start_locking(mm, true);
-	ret = down_write_trylock(&mm->mmap_lock) != 0;
-	__mmap_lock_trace_acquire_returned(mm, true, ret);
-	return ret;
-}
-
 static inline void mmap_write_unlock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_released(mm, true);
-- 
2.35.3



^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map()
  2023-05-29  6:16 ` [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map() Hugh Dickins
@ 2023-05-29 13:56   ` Matthew Wilcox
  0 siblings, 0 replies; 27+ messages in thread
From: Matthew Wilcox @ 2023-05-29 13:56 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

On Sun, May 28, 2023 at 11:16:16PM -0700, Hugh Dickins wrote:
> +#if defined(CONFIG_GUP_GET_PXX_LOW_HIGH) && \
> +	(defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RCU))
> +/*
> + * See the comment above ptep_get_lockless() in include/linux/pgtable.h:
> + * the barriers in pmdp_get_lockless() cannot guarantee that the value in
> + * pmd_high actually belongs with the value in pmd_low; but holding interrupts
> + * off blocks the TLB flush between present updates, which guarantees that a
> + * successful __pte_offset_map() points to a page from matched halves.
> + */
> +#define config_might_irq_save(flags)	local_irq_save(flags)
> +#define config_might_irq_restore(flags)	local_irq_restore(flags)
> +#else
> +#define config_might_irq_save(flags)
> +#define config_might_irq_restore(flags)

I don't like the name.  It should indicate that it's PMD-related, so
pmd_read_start(flags) / pmd_read_end(flags)?



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page
  2023-05-29  6:20 ` [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page Hugh Dickins
@ 2023-05-29 14:02   ` Matthew Wilcox
  2023-05-29 14:36     ` Hugh Dickins
       [not found]     ` <ZHn6n5eVTsr4Wl8x@ziepe.ca>
  0 siblings, 2 replies; 27+ messages in thread
From: Matthew Wilcox @ 2023-05-29 14:02 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

On Sun, May 28, 2023 at 11:20:21PM -0700, Hugh Dickins wrote:
> +void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
> +{
> +	struct page *page;
> +
> +	page = virt_to_page(pgtable);
> +	call_rcu(&page->rcu_head, pte_free_now);
> +}

This can't be safe (on ppc).  IIRC you might have up to 16x4k page
tables sharing one 64kB page.  So if you have two page tables from the
same page being defer-freed simultaneously, you'll reuse the rcu_head
and I cannot imagine things go well from that point.
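
To make the collision concrete, an illustrative sketch of the hazard (not
code from the patch; table_a and table_b stand for two hypothetical 4k
fragments carved from the same 64kB page):

	/* both virt_to_page() calls resolve to the same struct page */
	pte_free_defer(mm, table_a);	/* queues page->rcu_head */
	pte_free_defer(mm, table_b);	/* reuses the same rcu_head while the
					 * first call_rcu() is still pending */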

I have no idea how to solve this problem.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page
  2023-05-29 14:02   ` Matthew Wilcox
@ 2023-05-29 14:36     ` Hugh Dickins
       [not found]     ` <ZHn6n5eVTsr4Wl8x@ziepe.ca>
  1 sibling, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-29 14:36 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Hugh Dickins, Andrew Morton, Mike Kravetz, Mike Rapoport,
	Kirill A. Shutemov, David Hildenbrand, Suren Baghdasaryan,
	Qi Zheng, Yang Shi, Mel Gorman, Peter Xu, Peter Zijlstra,
	Will Deacon, Yu Zhao, Alistair Popple, Ralph Campbell, Ira Weiny,
	Steven Price, SeongJae Park, Naoya Horiguchi, Christophe Leroy,
	Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

On Mon, 29 May 2023, Matthew Wilcox wrote:
> On Sun, May 28, 2023 at 11:20:21PM -0700, Hugh Dickins wrote:
> > +void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
> > +{
> > +	struct page *page;
> > +
> > +	page = virt_to_page(pgtable);
> > +	call_rcu(&page->rcu_head, pte_free_now);
> > +}
> 
> This can't be safe (on ppc).  IIRC you might have up to 16x4k page
> tables sharing one 64kB page.  So if you have two page tables from the
> same page being defer-freed simultaneously, you'll reuse the rcu_head
> and I cannot imagine things go well from that point.

Oh yes, of course, thanks for catching that so quickly.
So my s390 and sparc implementations will be equally broken.

> 
> I have no idea how to solve this problem.

I do: I'll have to go back to the more complicated implementation we
actually ran with on powerpc - I was thinking those complications just
related to deposit/withdraw matters, forgetting the one-rcu_head issue.

It uses large (0x10000) increments of the page refcount, avoiding
call_rcu() when already active.
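
Roughly along these lines - just a sketch of the idea, not the actual
powerpc implementation, and PTE_FREE_DEFERRED is a made-up name here:

	#define PTE_FREE_DEFERRED	0x10000

	void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
	{
		struct page *page = virt_to_page(pgtable);

		/*
		 * Bias the refcount for each deferred fragment: only the
		 * first deferral on this page queues the shared rcu_head;
		 * the RCU callback later drops the accumulated bias.
		 */
		if (page_ref_add_return(page, PTE_FREE_DEFERRED) <
		    2 * PTE_FREE_DEFERRED)
			call_rcu(&page->rcu_head, pte_free_now);
	}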

It's not a complication I had wanted to explain or test for now,
but we shall have to.  Should apply equally well to sparc, but s390
more of a problem, since s390 already has its own refcount cleverness.

Thanks, I must dash, out much of the day.

Hugh


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock
  2023-05-29  6:25 ` [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock Hugh Dickins
@ 2023-05-29 23:26   ` Peter Xu
  2023-05-31  0:38     ` Hugh Dickins
  2023-05-31 15:34   ` Jann Horn
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Xu @ 2023-05-29 23:26 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	Matthew Wilcox, David Hildenbrand, Suren Baghdasaryan, Qi Zheng,
	Yang Shi, Mel Gorman, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

On Sun, May 28, 2023 at 11:25:15PM -0700, Hugh Dickins wrote:
> Simplify shmem and file THP collapse's retract_page_tables(), and relax
> its locking: to improve its success rate and to lessen impact on others.
> 
> Instead of its MADV_COLLAPSE case doing set_huge_pmd() at target_addr of
> target_mm, leave that part of the work to madvise_collapse() calling
> collapse_pte_mapped_thp() afterwards: just adjust collapse_file()'s
> result code to arrange for that.  That spares retract_page_tables() four
> arguments; and since it will be successful in retracting all of the page
> tables expected of it, no need to track and return a result code itself.
> 
> It needs i_mmap_lock_read(mapping) for traversing the vma interval tree,
> but it does not need i_mmap_lock_write() for that: page_vma_mapped_walk()
> allows for pte_offset_map_lock() etc to fail, and uses pmd_lock() for
> THPs.  retract_page_tables() just needs to use those same spinlocks to
> exclude it briefly, while transitioning pmd from page table to none: so
> restore its use of pmd_lock() inside of which pte lock is nested.
> 
> Users of pte_offset_map_lock() etc all now allow for them to fail:
> so retract_page_tables() now has no use for mmap_write_trylock() or
> vma_try_start_write().  In common with rmap and page_vma_mapped_walk(),
> it does not even need the mmap_read_lock().
> 
> But those users do expect the page table to remain a good page table,
> until they unlock and rcu_read_unlock(): so the page table cannot be
> freed immediately, but rather by the recently added pte_free_defer().
> 
> retract_page_tables() can be enhanced to replace_page_tables(), which
> inserts the final huge pmd without mmap lock: going through an invalid
> state instead of pmd_none() followed by fault.  But that does raise some
> questions, and requires a more complicated pte_free_defer() for powerpc
> (when its arch_needs_pgtable_deposit() for shmem and file THPs).  Leave
> that enhancement to a later release.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>  mm/khugepaged.c | 169 +++++++++++++++++-------------------------------
>  1 file changed, 60 insertions(+), 109 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1083f0e38a07..4fd408154692 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1617,9 +1617,8 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  		break;
>  	case SCAN_PMD_NONE:
>  		/*
> -		 * In MADV_COLLAPSE path, possible race with khugepaged where
> -		 * all pte entries have been removed and pmd cleared.  If so,
> -		 * skip all the pte checks and just update the pmd mapping.
> +		 * All pte entries have been removed and pmd cleared.
> +		 * Skip all the pte checks and just update the pmd mapping.
>  		 */
>  		goto maybe_install_pmd;
>  	default:
> @@ -1748,123 +1747,73 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
>  	mmap_write_unlock(mm);
>  }
>  
> -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> -			       struct mm_struct *target_mm,
> -			       unsigned long target_addr, struct page *hpage,
> -			       struct collapse_control *cc)
> +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  {
>  	struct vm_area_struct *vma;
> -	int target_result = SCAN_FAIL;
>  
> -	i_mmap_lock_write(mapping);
> +	i_mmap_lock_read(mapping);
>  	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> -		int result = SCAN_FAIL;
> -		struct mm_struct *mm = NULL;
> -		unsigned long addr = 0;
> -		pmd_t *pmd;
> -		bool is_target = false;
> +		struct mm_struct *mm;
> +		unsigned long addr;
> +		pmd_t *pmd, pgt_pmd;
> +		spinlock_t *pml;
> +		spinlock_t *ptl;
>  
>  		/*
>  		 * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> -		 * got written to. These VMAs are likely not worth investing
> -		 * mmap_write_lock(mm) as PMD-mapping is likely to be split
> -		 * later.
> +		 * got written to. These VMAs are likely not worth removing
> +		 * page tables from, as PMD-mapping is likely to be split later.
>  		 *
> -		 * Note that vma->anon_vma check is racy: it can be set up after
> -		 * the check but before we took mmap_lock by the fault path.
> -		 * But page lock would prevent establishing any new ptes of the
> -		 * page, so we are safe.
> -		 *
> -		 * An alternative would be drop the check, but check that page
> -		 * table is clear before calling pmdp_collapse_flush() under
> -		 * ptl. It has higher chance to recover THP for the VMA, but
> -		 * has higher cost too. It would also probably require locking
> -		 * the anon_vma.
> +		 * Note that vma->anon_vma check is racy: it can be set after
> +		 * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> +		 * prevented establishing new ptes of the page. So we are safe
> +		 * to remove page table below, without even checking it's empty.
>  		 */
> -		if (READ_ONCE(vma->anon_vma)) {
> -			result = SCAN_PAGE_ANON;
> -			goto next;
> -		}
> +		if (READ_ONCE(vma->anon_vma))
> +			continue;

Not directly related to current patch, but I just realized there seems to
have similar issue as what ab0c3f1251b4 wanted to fix.

IIUC any shmem vma that used to have uprobe/bp installed will have anon_vma
set here, then does it mean that any vma used to get debugged will never be
able to merge into a thp (with either madvise or khugepaged)?

I think it'll only make a difference when the page cache is not huge yet
when bp was uninstalled, but then it becomes a thp candidate somehow.  Even
if so, I think the anon_vma should still be there.

Did I miss something, or maybe that's not even a problem?

> +
>  		addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>  		if (addr & ~HPAGE_PMD_MASK ||
> -		    vma->vm_end < addr + HPAGE_PMD_SIZE) {
> -			result = SCAN_VMA_CHECK;
> -			goto next;
> -		}
> -		mm = vma->vm_mm;
> -		is_target = mm == target_mm && addr == target_addr;
> -		result = find_pmd_or_thp_or_none(mm, addr, &pmd);
> -		if (result != SCAN_SUCCEED)
> -			goto next;
> -		/*
> -		 * We need exclusive mmap_lock to retract page table.
> -		 *
> -		 * We use trylock due to lock inversion: we need to acquire
> -		 * mmap_lock while holding page lock. Fault path does it in
> -		 * reverse order. Trylock is a way to avoid deadlock.
> -		 *
> -		 * Also, it's not MADV_COLLAPSE's job to collapse other
> -		 * mappings - let khugepaged take care of them later.
> -		 */
> -		result = SCAN_PTE_MAPPED_HUGEPAGE;
> -		if ((cc->is_khugepaged || is_target) &&
> -		    mmap_write_trylock(mm)) {
> -			/* trylock for the same lock inversion as above */
> -			if (!vma_try_start_write(vma))
> -				goto unlock_next;
> -
> -			/*
> -			 * Re-check whether we have an ->anon_vma, because
> -			 * collapse_and_free_pmd() requires that either no
> -			 * ->anon_vma exists or the anon_vma is locked.
> -			 * We already checked ->anon_vma above, but that check
> -			 * is racy because ->anon_vma can be populated under the
> -			 * mmap lock in read mode.
> -			 */
> -			if (vma->anon_vma) {
> -				result = SCAN_PAGE_ANON;
> -				goto unlock_next;
> -			}
> -			/*
> -			 * When a vma is registered with uffd-wp, we can't
> -			 * recycle the pmd pgtable because there can be pte
> -			 * markers installed.  Skip it only, so the rest mm/vma
> -			 * can still have the same file mapped hugely, however
> -			 * it'll always mapped in small page size for uffd-wp
> -			 * registered ranges.
> -			 */
> -			if (hpage_collapse_test_exit(mm)) {
> -				result = SCAN_ANY_PROCESS;
> -				goto unlock_next;
> -			}
> -			if (userfaultfd_wp(vma)) {
> -				result = SCAN_PTE_UFFD_WP;
> -				goto unlock_next;
> -			}
> -			collapse_and_free_pmd(mm, vma, addr, pmd);
> -			if (!cc->is_khugepaged && is_target)
> -				result = set_huge_pmd(vma, addr, pmd, hpage);
> -			else
> -				result = SCAN_SUCCEED;
> -
> -unlock_next:
> -			mmap_write_unlock(mm);
> -			goto next;
> -		}
> -		/*
> -		 * Calling context will handle target mm/addr. Otherwise, let
> -		 * khugepaged try again later.
> -		 */
> -		if (!is_target) {
> -			khugepaged_add_pte_mapped_thp(mm, addr);
> +		    vma->vm_end < addr + HPAGE_PMD_SIZE)
>  			continue;
> -		}
> -next:
> -		if (is_target)
> -			target_result = result;
> +
> +		mm = vma->vm_mm;
> +		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> +			continue;
> +
> +		if (hpage_collapse_test_exit(mm))
> +			continue;
> +		/*
> +		 * When a vma is registered with uffd-wp, we cannot recycle
> +		 * the page table because there may be pte markers installed.
> +		 * Other vmas can still have the same file mapped hugely, but
> +		 * skip this one: it will always be mapped in small page size
> +		 * for uffd-wp registered ranges.
> +		 *
> +		 * What if VM_UFFD_WP is set a moment after this check?  No
> +		 * problem, huge page lock is still held, stopping new mappings
> +		 * of page which might then get replaced by pte markers: only
> +		 * existing markers need to be protected here.  (We could check
> +		 * after getting ptl below, but this comment distracting there!)
> +		 */
> +		if (userfaultfd_wp(vma))
> +			continue;

IIUC here with the new code we only hold (1) hpage lock, and (2)
i_mmap_lock read.  Then could it be possible that, right after checking this
and finding !UFFD_WP, someone quickly (1) registers uffd-wp on this vma, then
(2) does UFFDIO_WRITEPROTECT to install some pte markers, before the pgtable
locks below are taken?

The thing is installation of pte markers may not need either of the locks
iiuc..

Would taking mmap read lock help in this case?

Thanks,

> +
> +		/* Huge page lock is still held, so page table must be empty */
> +		pml = pmd_lock(mm, pmd);
> +		ptl = pte_lockptr(mm, pmd);
> +		if (ptl != pml)
> +			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +		pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
> +		if (ptl != pml)
> +			spin_unlock(ptl);
> +		spin_unlock(pml);
> +
> +		mm_dec_nr_ptes(mm);
> +		page_table_check_pte_clear_range(mm, addr, pgt_pmd);
> +		pte_free_defer(mm, pmd_pgtable(pgt_pmd));
>  	}
> -	i_mmap_unlock_write(mapping);
> -	return target_result;
> +	i_mmap_unlock_read(mapping);
>  }
>  
>  /**
> @@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  
>  	/*
>  	 * Remove pte page tables, so we can re-fault the page as huge.
> +	 * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
>  	 */
> -	result = retract_page_tables(mapping, start, mm, addr, hpage,
> -				     cc);
> +	retract_page_tables(mapping, start);
> +	if (cc && !cc->is_khugepaged)
> +		result = SCAN_PTE_MAPPED_HUGEPAGE;
>  	unlock_page(hpage);
>  
>  	/*
> -- 
> 2.35.3
> 

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock
  2023-05-29 23:26   ` Peter Xu
@ 2023-05-31  0:38     ` Hugh Dickins
  0 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-05-31  0:38 UTC (permalink / raw)
  To: Peter Xu
  Cc: Hugh Dickins, Andrew Morton, Mike Kravetz, Mike Rapoport,
	Kirill A. Shutemov, Matthew Wilcox, David Hildenbrand,
	Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman,
	Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
	Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park,
	Naoya Horiguchi, Christophe Leroy, Zack Rusin, Jason Gunthorpe,
	Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
	Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
	Russell King, David S. Miller, Michael Ellerman,
	Aneesh Kumar K.V, Heiko Carstens, Christian Borntraeger,
	Claudio Imbrenda, Alexander Gordeev, Jann Horn, linux-arm-kernel,
	sparclinux, linuxppc-dev, linux-s390, linux-kernel, linux-mm

Thanks for looking, Peter: I was well aware of you dropping several hints
that you wanted to see what's intended before passing judgment on earlier
series, and I preferred to get on with showing this series rather than go
into detail in responses to you there - thanks for your patience :)

On Mon, 29 May 2023, Peter Xu wrote:
> On Sun, May 28, 2023 at 11:25:15PM -0700, Hugh Dickins wrote:
...
> > @@ -1748,123 +1747,73 @@ static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_sl
> >  	mmap_write_unlock(mm);
> >  }
> >  
> > -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> > -			       struct mm_struct *target_mm,
> > -			       unsigned long target_addr, struct page *hpage,
> > -			       struct collapse_control *cc)
> > +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> >  {
> >  	struct vm_area_struct *vma;
> > -	int target_result = SCAN_FAIL;
> >  
> > -	i_mmap_lock_write(mapping);
> > +	i_mmap_lock_read(mapping);
> >  	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> > -		int result = SCAN_FAIL;
> > -		struct mm_struct *mm = NULL;
> > -		unsigned long addr = 0;
> > -		pmd_t *pmd;
> > -		bool is_target = false;
> > +		struct mm_struct *mm;
> > +		unsigned long addr;
> > +		pmd_t *pmd, pgt_pmd;
> > +		spinlock_t *pml;
> > +		spinlock_t *ptl;
> >  
> >  		/*
> >  		 * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> > -		 * got written to. These VMAs are likely not worth investing
> > -		 * mmap_write_lock(mm) as PMD-mapping is likely to be split
> > -		 * later.
> > +		 * got written to. These VMAs are likely not worth removing
> > +		 * page tables from, as PMD-mapping is likely to be split later.
> >  		 *
> > -		 * Note that vma->anon_vma check is racy: it can be set up after
> > -		 * the check but before we took mmap_lock by the fault path.
> > -		 * But page lock would prevent establishing any new ptes of the
> > -		 * page, so we are safe.
> > -		 *
> > -		 * An alternative would be drop the check, but check that page
> > -		 * table is clear before calling pmdp_collapse_flush() under
> > -		 * ptl. It has higher chance to recover THP for the VMA, but
> > -		 * has higher cost too. It would also probably require locking
> > -		 * the anon_vma.
> > +		 * Note that vma->anon_vma check is racy: it can be set after
> > +		 * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> > +		 * prevented establishing new ptes of the page. So we are safe
> > +		 * to remove page table below, without even checking it's empty.
> >  		 */
> > -		if (READ_ONCE(vma->anon_vma)) {
> > -			result = SCAN_PAGE_ANON;
> > -			goto next;
> > -		}
> > +		if (READ_ONCE(vma->anon_vma))
> > +			continue;
> 
> Not directly related to current patch, but I just realized there seems to
> have similar issue as what ab0c3f1251b4 wanted to fix.
> 
> IIUC any shmem vma that used to have uprobe/bp installed will have anon_vma
> set here, then does it mean that any vma used to get debugged will never be
> able to merge into a thp (with either madvise or khugepaged)?
> 
> I think it'll only make a difference when the page cache is not huge yet
> when bp was uninstalled, but then it becomes a thp candidate somehow.  Even
> if so, I think the anon_vma should still be there.
> 
> Did I miss something, or maybe that's not even a problem?

Finding vma->anon_vma set would discourage retract_page_tables() from
doing its business with that previously uprobed area; but it does not stop
collapse_pte_mapped_thp() (which uprobes unregister calls directly) from
dealing with it, and MADV_COLLAPSE works on anon_vma'ed areas too.  It's
just a heuristic in retract_page_tables(), when it chooses to skip the
anon_vma'ed areas as often not worth bothering with.

As to vma merges: I haven't actually checked since the maple tree and other
rewrites of vma merging, but previously one vma with anon_vma set could be
merged with adjacent vma before or after without anon_vma set - the
anon_vma comparison is not just equality of anon_vma, but allows NULL too -
so the anon_vma will still be there, but extends to cover the wider extent.
Right, I find is_mergeable_anon_vma() still following that rule.
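
Roughly, the rule amounts to this (a simplified sketch: the real
is_mergeable_anon_vma() in mm/mmap.c adds an anon_vma_chain check):

	/* NULL anon_vma on either side is compatible with anything */
	if (!anon_vma1 || !anon_vma2)
		return true;
	return anon_vma1 == anon_vma2;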

(And once vmas are merged, so that the whole of the huge page falls within
a single vma, khugepaged can consider it, and do collapse_pte_mapped_thp()
on it - before or after 11/12 I think.)

As to whether it would even be a problem: generally no, the vma is supposed
just to be an internal representation, and so long as the code resists
proliferating them unnecessarily, occasional failures to merge should not
matter.  The one place that forever sticks in my mind as mattering (perhaps
there are others I'm unaware of, but I'd call them bugs) is mremap(): which
is sufficiently awkward and bug-prone already, that nobody ever had the
courage to make it independent of vma boundaries; but ideally, it's
mremap() that we should fix.

But I may have written three answers, yet still missed your point.

...
> > +
> > +		mm = vma->vm_mm;
> > +		if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> > +			continue;
> > +
> > +		if (hpage_collapse_test_exit(mm))
> > +			continue;
> > +		/*
> > +		 * When a vma is registered with uffd-wp, we cannot recycle
> > +		 * the page table because there may be pte markers installed.
> > +		 * Other vmas can still have the same file mapped hugely, but
> > +		 * skip this one: it will always be mapped in small page size
> > +		 * for uffd-wp registered ranges.
> > +		 *
> > +		 * What if VM_UFFD_WP is set a moment after this check?  No
> > +		 * problem, huge page lock is still held, stopping new mappings
> > +		 * of page which might then get replaced by pte markers: only
> > +		 * existing markers need to be protected here.  (We could check
> > +		 * after getting ptl below, but this comment distracting there!)
> > +		 */
> > +		if (userfaultfd_wp(vma))
> > +			continue;
> 
> IIUC here with the new code we only hold (1) hpage lock, and (2)
> i_mmap_lock read.  Then could it be possible that, right after checking this
> and finding !UFFD_WP, someone quickly (1) registers uffd-wp on this vma, then
> (2) does UFFDIO_WRITEPROTECT to install some pte markers, before the pgtable
> locks below are taken?
> 
> The thing is installation of pte markers may not need either of the locks
> iiuc..
> 
> Would taking mmap read lock help in this case?

Isn't my comment above it a good enough answer?  If I misunderstand the
uffd-wp pte marker ("If"? certainly I don't understand it well enough,
but I may or may not be too wrong about it here), and actually it can
spring up in places where the page has not even been mapped yet, then
I'd *much* rather just move that check down into the pte_locked area,
than involve mmap read lock (which, though easier to acquire than its
write lock, would I think take us back to square 1 in terms of needing
trylock); but I did prefer not to have a big uffd-wp comment distracting
from the code flow there.

I expect now, that if I follow up UFFDIO_WRITEPROTECT, I shall indeed
find it inserting pte markers where the page has not even been mapped
yet.  A "Yes" from you will save me looking, but probably I shall have
to move that check down (oh well, the comment will be smaller there).

Thanks,
Hugh


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock
  2023-05-29  6:25 ` [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock Hugh Dickins
  2023-05-29 23:26   ` Peter Xu
@ 2023-05-31 15:34   ` Jann Horn
  1 sibling, 0 replies; 27+ messages in thread
From: Jann Horn @ 2023-05-31 15:34 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	Matthew Wilcox, David Hildenbrand, Suren Baghdasaryan, Qi Zheng,
	Yang Shi, Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon,
	Yu Zhao, Alistair Popple, Ralph Campbell, Ira Weiny,
	Steven Price, SeongJae Park, Naoya Horiguchi, Christophe Leroy,
	Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	linux-arm-kernel, sparclinux, linuxppc-dev, linux-s390,
	linux-kernel, linux-mm

On Mon, May 29, 2023 at 8:25 AM Hugh Dickins <hughd@google.com> wrote:
> -static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff,
> -                              struct mm_struct *target_mm,
> -                              unsigned long target_addr, struct page *hpage,
> -                              struct collapse_control *cc)
> +static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  {
>         struct vm_area_struct *vma;
> -       int target_result = SCAN_FAIL;
>
> -       i_mmap_lock_write(mapping);
> +       i_mmap_lock_read(mapping);
>         vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> -               int result = SCAN_FAIL;
> -               struct mm_struct *mm = NULL;
> -               unsigned long addr = 0;
> -               pmd_t *pmd;
> -               bool is_target = false;
> +               struct mm_struct *mm;
> +               unsigned long addr;
> +               pmd_t *pmd, pgt_pmd;
> +               spinlock_t *pml;
> +               spinlock_t *ptl;
>
>                 /*
>                  * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> -                * got written to. These VMAs are likely not worth investing
> -                * mmap_write_lock(mm) as PMD-mapping is likely to be split
> -                * later.
> +                * got written to. These VMAs are likely not worth removing
> +                * page tables from, as PMD-mapping is likely to be split later.
>                  *
> -                * Note that vma->anon_vma check is racy: it can be set up after
> -                * the check but before we took mmap_lock by the fault path.
> -                * But page lock would prevent establishing any new ptes of the
> -                * page, so we are safe.
> -                *
> -                * An alternative would be drop the check, but check that page
> -                * table is clear before calling pmdp_collapse_flush() under
> -                * ptl. It has higher chance to recover THP for the VMA, but
> -                * has higher cost too. It would also probably require locking
> -                * the anon_vma.
> +                * Note that vma->anon_vma check is racy: it can be set after
> +                * the check, but page locks (with XA_RETRY_ENTRYs in holes)
> +                * prevented establishing new ptes of the page. So we are safe
> +                * to remove page table below, without even checking it's empty.

This "we are safe to remove page table below, without even checking
it's empty" assumes that the only way to create new anonymous PTEs is
to use existing file PTEs, right? What about private shmem VMAs that
are registered with userfaultfd as VM_UFFD_MISSING? I think for those,
the UFFDIO_COPY ioctl lets you directly insert anonymous PTEs without
looking at the mapping and its pages (except for checking that the
insertion point is before end-of-file), protected only by mmap_lock
(shared) and pte_offset_map_lock().


>                  */
> -               if (READ_ONCE(vma->anon_vma)) {
> -                       result = SCAN_PAGE_ANON;
> -                       goto next;
> -               }
> +               if (READ_ONCE(vma->anon_vma))
> +                       continue;
> +
>                 addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>                 if (addr & ~HPAGE_PMD_MASK ||
> -                   vma->vm_end < addr + HPAGE_PMD_SIZE) {
> -                       result = SCAN_VMA_CHECK;
> -                       goto next;
> -               }
> -               mm = vma->vm_mm;
> -               is_target = mm == target_mm && addr == target_addr;
> -               result = find_pmd_or_thp_or_none(mm, addr, &pmd);
> -               if (result != SCAN_SUCCEED)
> -                       goto next;
> -               /*
> -                * We need exclusive mmap_lock to retract page table.
> -                *
> -                * We use trylock due to lock inversion: we need to acquire
> -                * mmap_lock while holding page lock. Fault path does it in
> -                * reverse order. Trylock is a way to avoid deadlock.
> -                *
> -                * Also, it's not MADV_COLLAPSE's job to collapse other
> -                * mappings - let khugepaged take care of them later.
> -                */
> -               result = SCAN_PTE_MAPPED_HUGEPAGE;
> -               if ((cc->is_khugepaged || is_target) &&
> -                   mmap_write_trylock(mm)) {
> -                       /* trylock for the same lock inversion as above */
> -                       if (!vma_try_start_write(vma))
> -                               goto unlock_next;
> -
> -                       /*
> -                        * Re-check whether we have an ->anon_vma, because
> -                        * collapse_and_free_pmd() requires that either no
> -                        * ->anon_vma exists or the anon_vma is locked.
> -                        * We already checked ->anon_vma above, but that check
> -                        * is racy because ->anon_vma can be populated under the
> -                        * mmap lock in read mode.
> -                        */
> -                       if (vma->anon_vma) {
> -                               result = SCAN_PAGE_ANON;
> -                               goto unlock_next;
> -                       }
> -                       /*
> -                        * When a vma is registered with uffd-wp, we can't
> -                        * recycle the pmd pgtable because there can be pte
> -                        * markers installed.  Skip it only, so the rest mm/vma
> -                        * can still have the same file mapped hugely, however
> -                        * it'll always mapped in small page size for uffd-wp
> -                        * registered ranges.
> -                        */
> -                       if (hpage_collapse_test_exit(mm)) {
> -                               result = SCAN_ANY_PROCESS;
> -                               goto unlock_next;
> -                       }
> -                       if (userfaultfd_wp(vma)) {
> -                               result = SCAN_PTE_UFFD_WP;
> -                               goto unlock_next;
> -                       }
> -                       collapse_and_free_pmd(mm, vma, addr, pmd);

The old code called collapse_and_free_pmd(), which involves MMU
notifier invocation...

> -                       if (!cc->is_khugepaged && is_target)
> -                               result = set_huge_pmd(vma, addr, pmd, hpage);
> -                       else
> -                               result = SCAN_SUCCEED;
> -
> -unlock_next:
> -                       mmap_write_unlock(mm);
> -                       goto next;
> -               }
> -               /*
> -                * Calling context will handle target mm/addr. Otherwise, let
> -                * khugepaged try again later.
> -                */
> -               if (!is_target) {
> -                       khugepaged_add_pte_mapped_thp(mm, addr);
> +                   vma->vm_end < addr + HPAGE_PMD_SIZE)
>                         continue;
> -               }
> -next:
> -               if (is_target)
> -                       target_result = result;
> +
> +               mm = vma->vm_mm;
> +               if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> +                       continue;
> +
> +               if (hpage_collapse_test_exit(mm))
> +                       continue;
> +               /*
> +                * When a vma is registered with uffd-wp, we cannot recycle
> +                * the page table because there may be pte markers installed.
> +                * Other vmas can still have the same file mapped hugely, but
> +                * skip this one: it will always be mapped in small page size
> +                * for uffd-wp registered ranges.
> +                *
> +                * What if VM_UFFD_WP is set a moment after this check?  No
> +                * problem, huge page lock is still held, stopping new mappings
> +                * of page which might then get replaced by pte markers: only
> +                * existing markers need to be protected here.  (We could check
> +                * after getting ptl below, but this comment distracting there!)
> +                */
> +               if (userfaultfd_wp(vma))
> +                       continue;
> +
> +               /* Huge page lock is still held, so page table must be empty */
> +               pml = pmd_lock(mm, pmd);
> +               ptl = pte_lockptr(mm, pmd);
> +               if (ptl != pml)
> +                       spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +               pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);

... while the new code only does pmdp_collapse_flush(), which clears
the pmd entry and does a TLB flush, but AFAICS doesn't use MMU
notifiers. My understanding is that that's problematic - maybe (?) it
is sort of okay with regards to classic MMU notifier users like KVM,
but it's probably wrong for IOMMUv2 users, where an IOMMU directly
consumes the normal page tables?

(FWIW, last I looked, there also seemed to be some other issues with
MMU notifier usage wrt IOMMUv2, see the thread
<https://lore.kernel.org/linux-mm/Yzbaf9HW1%2FreKqR8@nvidia.com/>.)


> +               if (ptl != pml)
> +                       spin_unlock(ptl);
> +               spin_unlock(pml);
> +
> +               mm_dec_nr_ptes(mm);
> +               page_table_check_pte_clear_range(mm, addr, pgt_pmd);
> +               pte_free_defer(mm, pmd_pgtable(pgt_pmd));
>         }
> -       i_mmap_unlock_write(mapping);
> -       return target_result;
> +       i_mmap_unlock_read(mapping);
>  }
>
>  /**
> @@ -2261,9 +2210,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>
>         /*
>          * Remove pte page tables, so we can re-fault the page as huge.
> +        * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
>          */
> -       result = retract_page_tables(mapping, start, mm, addr, hpage,
> -                                    cc);
> +       retract_page_tables(mapping, start);
> +       if (cc && !cc->is_khugepaged)
> +               result = SCAN_PTE_MAPPED_HUGEPAGE;
>         unlock_page(hpage);
>
>         /*
> --
> 2.35.3
>


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s
  2023-05-29  6:14 ` [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s Hugh Dickins
@ 2023-05-31 17:06   ` Jann Horn
  0 siblings, 0 replies; 27+ messages in thread
From: Jann Horn @ 2023-05-31 17:06 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	Matthew Wilcox, David Hildenbrand, Suren Baghdasaryan, Qi Zheng,
	Yang Shi, Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon,
	Yu Zhao, Alistair Popple, Ralph Campbell, Ira Weiny,
	Steven Price, SeongJae Park, Naoya Horiguchi, Christophe Leroy,
	Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	linux-arm-kernel, sparclinux, linuxppc-dev, linux-s390,
	linux-kernel, linux-mm

On Mon, May 29, 2023 at 8:15 AM Hugh Dickins <hughd@google.com> wrote:
> Before putting them to use (several commits later), add rcu_read_lock()
> to pte_offset_map(), and rcu_read_unlock() to pte_unmap().  Make this a
> separate commit, since it risks exposing imbalances: prior commits have
> fixed all the known imbalances, but we may find some have been missed.
[...]
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index c7ab18a5fb77..674671835631 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>  {
>         pmd_t pmdval;
>
> -       /* rcu_read_lock() to be added later */
> +       rcu_read_lock();
>         pmdval = pmdp_get_lockless(pmd);
>         if (pmdvalp)
>                 *pmdvalp = pmdval;

It might be a good idea to document that this series assumes that the
first argument to __pte_offset_map() is a pointer into a second-level
page table (and not a local copy of the entry) unless the containing
VMA is known to not be THP-eligible or the page table is detached from
the page table hierarchy or something like that. Currently a bunch of
places pass references to local copies of the entry, and while I think
all of these are fine, it would probably be good to at least document
why these are allowed to do it while other places aren't.

$ vgrep 'pte_offset_map(&'
Index File                  Line Content
    0 arch/sparc/mm/tlb.c    151 pte = pte_offset_map(&pmd, vaddr);
    1 kernel/events/core.c  7501 ptep = pte_offset_map(&pmd, addr);
    2 mm/gup.c              2460 ptem = ptep = pte_offset_map(&pmd, addr);
    3 mm/huge_memory.c      2057 pte = pte_offset_map(&_pmd, haddr);
    4 mm/huge_memory.c      2214 pte = pte_offset_map(&_pmd, haddr);
    5 mm/page_table_check.c  240 pte_t *ptep = pte_offset_map(&pmd, addr);


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
  2023-05-29  6:26 ` [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock() Hugh Dickins
@ 2023-05-31 17:25   ` Jann Horn
  0 siblings, 0 replies; 27+ messages in thread
From: Jann Horn @ 2023-05-31 17:25 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	Matthew Wilcox, David Hildenbrand, Suren Baghdasaryan, Qi Zheng,
	Yang Shi, Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon,
	Yu Zhao, Alistair Popple, Ralph Campbell, Ira Weiny,
	Steven Price, SeongJae Park, Naoya Horiguchi, Christophe Leroy,
	Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	linux-arm-kernel, sparclinux, linuxppc-dev, linux-s390,
	linux-kernel, linux-mm

On Mon, May 29, 2023 at 8:26 AM Hugh Dickins <hughd@google.com> wrote:
> Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
> It does need mmap_read_lock(), but it does not need mmap_write_lock(),
> nor vma_start_write() nor i_mmap lock nor anon_vma lock.  All racing
> paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.

I think there's a weirdness in the existing code, and this change
probably turns that into a UAF bug.

collapse_pte_mapped_thp() can be called on an address that might not
be associated with a VMA anymore, and after this change, the page
tables for that address might be in the middle of page table teardown
in munmap(), right? The existing mmap_write_lock() guards against
concurrent munmap() (so in the old code we are guaranteed to either
see a normal VMA or not see the page tables anymore), but
mmap_read_lock() only guards against the part of munmap() up to the
mmap_write_downgrade() in do_vmi_align_munmap(), and unmap_region()
(including free_pgtables()) happens after that.

So we can now enter collapse_pte_mapped_thp() and race with concurrent
free_pgtables() such that a PUD disappears under us while we're
walking it or something like that:


int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
          bool install_pmd)
{
  struct mmu_notifier_range range;
  unsigned long haddr = addr & HPAGE_PMD_MASK;
  struct vm_area_struct *vma = vma_lookup(mm, haddr); // <<< returns NULL
  struct page *hpage;
  pte_t *start_pte, *pte;
  pmd_t *pmd, pgt_pmd;
  spinlock_t *pml, *ptl;
  int nr_ptes = 0, result = SCAN_FAIL;
  int i;

  mmap_assert_locked(mm);

  /* Fast check before locking page if already PMD-mapped */
  result = find_pmd_or_thp_or_none(mm, haddr, &pmd); // <<< PUD UAF in here
  if (result == SCAN_PMD_MAPPED)
    return result;

  if (!vma || !vma->vm_file || // <<< bailout happens too late
      !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
    return SCAN_VMA_CHECK;


I guess the right fix here is to make sure that at least the basic VMA
revalidation stuff (making sure there still is a VMA covering this
range) happens before find_pmd_or_thp_or_none()? Like:


diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 301c0e54a2ef..5db365587556 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1481,15 +1481,15 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,

         mmap_assert_locked(mm);

+        if (!vma || !vma->vm_file ||
+            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
+                return SCAN_VMA_CHECK;
+
         /* Fast check before locking page if already PMD-mapped */
         result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
         if (result == SCAN_PMD_MAPPED)
                 return result;

-        if (!vma || !vma->vm_file ||
-            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
-                return SCAN_VMA_CHECK;
-
         /*
          * If we are here, we've succeeded in replacing all the native pages
          * in the page cache with a single hugepage. If a mm were to fault-in


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async()
       [not found]   ` <175ebec8-761-c3f-2d98-6c3bd87161c8@google.com>
@ 2023-06-06 19:40     ` Gerald Schaefer
  2023-06-08  3:35       ` Hugh Dickins
       [not found]     ` <ZH99cLKeALvUCIH8@ziepe.ca>
  1 sibling, 1 reply; 27+ messages in thread
From: Gerald Schaefer @ 2023-06-06 19:40 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Vasily Gorbik, Andrew Morton, Mike Kravetz, Mike Rapoport,
	Kirill A. Shutemov, Matthew Wilcox, David Hildenbrand,
	Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman, Peter Xu,
	Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
	Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park,
	Naoya Horiguchi, Christophe Leroy, Zack Rusin, Jason Gunthorpe,
	Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
	Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
	Russell King, David S. Miller, Michael Ellerman,
	Aneesh Kumar K.V, Heiko Carstens, Christian Borntraeger,
	Claudio Imbrenda, Alexander Gordeev, Jann Horn, linux-arm-kernel,
	sparclinux, linuxppc-dev, linux-s390, linux-kernel, linux-mm

On Mon, 5 Jun 2023 22:11:52 -0700 (PDT)
Hugh Dickins <hughd@google.com> wrote:

> On Sun, 28 May 2023, Hugh Dickins wrote:
> 
> > Add s390-specific pte_free_defer(), to call pte_free() via call_rcu().
> > pte_free_defer() will be called inside khugepaged's retract_page_tables()
> > loop, where allocating extra memory cannot be relied upon.  This precedes
> > the generic version to avoid build breakage from incompatible pgtable_t.
> > 
> > This version is more complicated than others: because page_table_free()
> > needs to know which fragment is being freed, and which mm to link it to.
> > 
> > page_table_free()'s fragment handling is clever, but I could too easily
> > break it: what's done here in pte_free_defer() and pte_free_now() might
> > be better integrated with page_table_free()'s cleverness, but not by me!
> > 
> > By the time that page_table_free() gets called via RCU, it's conceivable
> > that mm would already have been freed: so mmgrab() in pte_free_defer()
> > and mmdrop() in pte_free_now().  No, that is not a good context to call
> > mmdrop() from, so make mmdrop_async() public and use that.  
> 
> But Matthew Wilcox quickly pointed out that sharing one page->rcu_head
> between multiple page tables is tricky: something I knew but had lost
> sight of.  So the powerpc and s390 patches were broken: powerpc fairly
> easily fixed, but s390 more painful.
> 
> In https://lore.kernel.org/linux-s390/20230601155751.7c949ca4@thinkpad-T15/
> On Thu, 1 Jun 2023 15:57:51 +0200
> Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote:
> > 
> > Yes, we have 2 pagetables in one 4K page, which could result in same
> > rcu_head reuse. It might be possible to use the cleverness from our
> > page_table_free() function, e.g. to only do the call_rcu() once, for
> > the case where both 2K pagetable fragments become unused, similar to
> > how we decide when to actually call __free_page().
> > 
> > However, it might be much worse, and page->rcu_head from a pagetable
> > page cannot be used at all for s390, because we also use page->lru
> > to keep our list of free 2K pagetable fragments. I always get confused
> > by struct page unions, so not completely sure, but it seems to me that
> > page->rcu_head would overlay with page->lru, right?  
> 
> Sigh, yes, page->rcu_head overlays page->lru.  But (please correct me if
> I'm wrong) I think that s390 could use exactly the same technique for
> its list of free 2K pagetable fragments as it uses for its list of THP
> "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
> the first two longs of the page table itself for threading the list.

Nice idea, I think that could actually work, since we only need the empty
2K halves on the list. So it should be possible to store the list_head
inside those.
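
For illustration, a userspace toy of that idea (not the s390 patch: the
list helpers below are simplified stand-ins for <linux/list.h>, and the
"page" is just an aligned allocation) - threading the free list through
the first two longs of each free 2K fragment:

	#include <assert.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Simplified stand-ins for the kernel's list_head helpers. */
	struct list_head { struct list_head *next, *prev; };

	static void list_init(struct list_head *h) { h->next = h->prev = h; }

	static void list_add(struct list_head *new, struct list_head *head)
	{
		new->next = head->next;
		new->prev = head;
		head->next->prev = new;
		head->next = new;
	}

	static void list_del(struct list_head *entry)
	{
		entry->prev->next = entry->next;
		entry->next->prev = entry->prev;
	}

	#define FRAG_SIZE 2048	/* one 2K page table fragment */

	int main(void)
	{
		unsigned char *page = aligned_alloc(4096, 4096);
		struct list_head free_list;
		struct list_head *frag0 = (struct list_head *)page;
		struct list_head *frag1 = (struct list_head *)(page + FRAG_SIZE);

		list_init(&free_list);
		/* Both fragments free: the list lives in their first two longs. */
		list_add(frag0, &free_list);
		list_add(frag1, &free_list);

		/* "Allocate" the first free fragment: unlink and wipe the links
		 * (the kernel would write _PAGE_INVALID entries, not zeroes). */
		struct list_head *got = free_list.next;
		list_del(got);
		memset(got, 0, sizeof(*got));

		assert(got == frag1);		/* most recently added */
		assert(free_list.next == frag0 && free_list.prev == frag0);
		printf("free list threaded through the fragments themselves\n");
		free(page);
		return 0;
	}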

> 
> And while it could use third and fourth longs instead, I don't see any
> need for that: a deposited pagetable has been allocated, so would not
> be on the list of free fragments.

Correct, that should not interfere.

> 
> Below is one of the grossest patches I've ever posted: gross because
> it's a rushed attempt to see whether that is viable, while it would take
> me longer to understand all the s390 cleverness there (even though the
> PP AA commentary above page_table_alloc() is excellent).

Sounds fair, this is also some of the grossest code we have, which is also
why Alexander added the comment. I guess we could use even more comments
inside the code, as it still confuses me more than it should.

Considering that, you did remarkably well. Your patch seems to work fine,
at least it survived some LTP mm tests. I will also add it to our CI runs,
to give it some more testing, and will report tomorrow if it breaks anything.
See also below for some patch comments.

> 
> I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint.
> And cmma_init_nodat()? Ah, that's __init so I guess disjoint.

cmma_init_nodat() should be disjoint, not only because it is __init,
but also because it explicitly skips pagetable pages, so it should
never touch page->lru of those.

Not very familiar with the gmap code, it does look disjoint, and we should
also use complete 4K pages for pagetables instead of 2K fragments there,
but Christian or Claudio should also have a look.

> 
> Gerald, s390 folk: would it be possible for you to give this
> a try, suggest corrections and improvements, and then I can make it
> a separate patch of the series; and work on avoiding concurrent use
> of the rcu_head by pagetable fragment buddies (ideally fit in with
> the scheme already there, maybe DD bits to go along with the PP AA).

It feels like it could be possible to not only avoid the double
rcu_head, but also avoid passing over the mm via page->pt_mm.
I.e. have pte_free_defer(), which has the mm, do all the checks and
list updates that page_table_free() does, for which we need the mm.
Then just skip the pgtable_pte_page_dtor() + __free_page() at the end,
and do call_rcu(pte_free_now) instead. The pte_free_now() could then
just do _dtor/__free_page similar to the generic version.

I must admit that I still have no good overview of the "big picture"
here, and especially if this approach would still fit in. Probably not,
as the to-be-freed pagetables would still be accessible, but not really
valid, if we added them back to the list, with list_heads inside them.
So maybe call_rcu() has to be done always, and not only for the case
where the whole 4K page becomes free, then we probably cannot do w/o
passing over the mm for proper list handling.

Ah, and they could also be re-used, once they are back on the list,
which will probably not go well. Is that what you meant with DD bits,
i.e. mark such fragments to prevent re-use? Smells a bit like the
"pending purge"

> 
> Why am I even asking you to move away from page->lru: why don't I
> thread s390's pte_free_defer() pagetables like THP's deposit does?
> I cannot, because the deferred pagetables have to remain accessible
> as valid pagetables, until the RCU grace period has elapsed - unless
> all the list pointers would appear as pte_none(), which I doubt.

Yes, only empty and invalid PTEs will appear as pte_none(), i.e. entries
that contain only 0x400.

Ok, I guess that also explains why the approach mentioned above,
to avoid passing over the mm and do the list handling already in
pte_free_defer(), will not be so easy or possible at all.

> 
> (That may limit our possibilities with the deposited pagetables in
> future: I can imagine them too wanting to remain accessible as valid
> pagetables.  But that's not needed by this series, and s390 only uses
> deposit/withdraw for anon THP; and some are hoping that we might be
> able to move away from deposit/withdraw altogther - though powerpc's
> special use will make that more difficult.)
> 
> Thanks!
> Hugh
> 
> --- 6.4-rc5/arch/s390/mm/pgalloc.c
> +++ linux/arch/s390/mm/pgalloc.c
> @@ -232,6 +232,7 @@ void page_table_free_pgste(struct page *
>   */
>  unsigned long *page_table_alloc(struct mm_struct *mm)
>  {
> +	struct list_head *listed;
>  	unsigned long *table;
>  	struct page *page;
>  	unsigned int mask, bit;
> @@ -241,8 +242,8 @@ unsigned long *page_table_alloc(struct m
>  		table = NULL;
>  		spin_lock_bh(&mm->context.lock);
>  		if (!list_empty(&mm->context.pgtable_list)) {
> -			page = list_first_entry(&mm->context.pgtable_list,
> -						struct page, lru);
> +			listed = mm->context.pgtable_list.next;
> +			page = virt_to_page(listed);
>  			mask = atomic_read(&page->_refcount) >> 24;
>  			/*
>  			 * The pending removal bits must also be checked.
> @@ -259,9 +260,12 @@ unsigned long *page_table_alloc(struct m
>  				bit = mask & 1;		/* =1 -> second 2K */
>  				if (bit)
>  					table += PTRS_PER_PTE;
> +				BUG_ON(table != (unsigned long *)listed);
>  				atomic_xor_bits(&page->_refcount,
>  							0x01U << (bit + 24));
> -				list_del(&page->lru);
> +				list_del(listed);
> +				set_pte((pte_t *)&table[0], __pte(_PAGE_INVALID));
> +				set_pte((pte_t *)&table[1], __pte(_PAGE_INVALID));
>  			}
>  		}
>  		spin_unlock_bh(&mm->context.lock);
> @@ -288,8 +292,9 @@ unsigned long *page_table_alloc(struct m
>  		/* Return the first 2K fragment of the page */
>  		atomic_xor_bits(&page->_refcount, 0x01U << 24);
>  		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
> +		listed = (struct list head *)(table + PTRS_PER_PTE);

Missing "_" in "struct list head"

>  		spin_lock_bh(&mm->context.lock);
> -		list_add(&page->lru, &mm->context.pgtable_list);
> +		list_add(listed, &mm->context.pgtable_list);
>  		spin_unlock_bh(&mm->context.lock);
>  	}
>  	return table;
> @@ -310,6 +315,7 @@ static void page_table_release_check(str
>  
>  void page_table_free(struct mm_struct *mm, unsigned long *table)
>  {
> +	struct list_head *listed;
>  	unsigned int mask, bit, half;
>  	struct page *page;

Not sure if "reverse X-mas" is still part of any style guidelines,
but I still am a big fan of that :-). Although the other code in that
file is also not consistently using it ...

>  
> @@ -325,10 +331,24 @@ void page_table_free(struct mm_struct *m
>  		 */
>  		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
>  		mask >>= 24;
> -		if (mask & 0x03U)
> -			list_add(&page->lru, &mm->context.pgtable_list);
> -		else
> -			list_del(&page->lru);
> +		if (mask & 0x03U) {
> +			listed = (struct list_head *)table;
> +			list_add(listed, &mm->context.pgtable_list);
> +		} else {
> +			/*
> +			 * Get address of the other page table sharing the page.
> +			 * There are sure to be MUCH better ways to do all this!
> +			 * But I'm rushing, while trying to keep to the obvious.
> +			 */
> +			listed = (struct list_head *)(table + PTRS_PER_PTE);
> +			if (virt_to_page(listed) != page) {
> +				/* sizeof(*listed) is twice sizeof(*table) */
> +				listed -= PTRS_PER_PTE;
> +			}

Bitwise XOR with 0x800 should do the trick here, i.e. give you the address
of the other 2K half, like this:

			listed = (struct list_head *)((unsigned long) table ^ 0x800UL);
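
Just to spell out the arithmetic with a tiny userspace check (addresses
made up, nothing s390-specific): XOR with 0x800 maps either 2K half of a
4K-aligned page onto its buddy.

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		/* A made-up 4K-aligned page holding two 2K fragments. */
		unsigned long page  = 0x12345000UL;
		unsigned long lower = page;		/* first 2K half  */
		unsigned long upper = page + 0x800UL;	/* second 2K half */

		assert((lower ^ 0x800UL) == upper);
		assert((upper ^ 0x800UL) == lower);
		printf("buddy of %#lx is %#lx\n", lower, lower ^ 0x800UL);
		return 0;
	}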

> +			list_del(listed);
> +			set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID));
> +			set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID));
> +		}
>  		spin_unlock_bh(&mm->context.lock);
>  		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
>  		mask >>= 24;
> @@ -349,6 +369,7 @@ void page_table_free(struct mm_struct *m
>  void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
>  			 unsigned long vmaddr)
>  {
> +	struct list_head *listed;
>  	struct mm_struct *mm;
>  	struct page *page;
>  	unsigned int bit, mask;
> @@ -370,10 +391,24 @@ void page_table_free_rcu(struct mmu_gath
>  	 */
>  	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
>  	mask >>= 24;
> -	if (mask & 0x03U)
> -		list_add_tail(&page->lru, &mm->context.pgtable_list);
> -	else
> -		list_del(&page->lru);
> +	if (mask & 0x03U) {
> +		listed = (struct list_head *)table;
> +		list_add_tail(listed, &mm->context.pgtable_list);
> +	} else {
> +		/*
> +		 * Get address of the other page table sharing the page.
> +		 * There are sure to be MUCH better ways to do all this!
> +		 * But I'm rushing, and trying to keep to the obvious.
> +		 */
> +		listed = (struct list_head *)(table + PTRS_PER_PTE);
> +		if (virt_to_page(listed) != page) {
> +			/* sizeof(*listed) is twice sizeof(*table) */
> +			listed -= PTRS_PER_PTE;
> +		}

Same as above.

> +		list_del(listed);
> +		set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID));
> +		set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID));
> +	}
>  	spin_unlock_bh(&mm->context.lock);
>  	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
>  	tlb_remove_table(tlb, table);

Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page
       [not found]             ` <ZH+EMp9RuEVOjVNb@ziepe.ca>
@ 2023-06-07  3:49               ` Hugh Dickins
  0 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-06-07  3:49 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Peter Xu, Hugh Dickins, Matthew Wilcox, Andrew Morton,
	Mike Kravetz, Mike Rapoport, Kirill A. Shutemov,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
	Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
	Russell King, David S. Miller, Michael Ellerman,
	Aneesh Kumar K.V, Heiko Carstens, Christian Borntraeger,
	Claudio Imbrenda, Alexander Gordeev, Jann Horn, linux-arm-kernel,
	sparclinux, linuxppc-dev, linux-s390, linux-kernel, linux-mm

On Tue, 6 Jun 2023, Jason Gunthorpe wrote:
> On Tue, Jun 06, 2023 at 03:03:31PM -0400, Peter Xu wrote:
> > On Tue, Jun 06, 2023 at 03:23:30PM -0300, Jason Gunthorpe wrote:
> > > On Mon, Jun 05, 2023 at 08:40:01PM -0700, Hugh Dickins wrote:
> > > 
> > > > diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
> > > > index 20652daa1d7e..e4f58c5fc2ac 100644
> > > > --- a/arch/powerpc/mm/pgtable-frag.c
> > > > +++ b/arch/powerpc/mm/pgtable-frag.c
> > > > @@ -120,3 +120,54 @@ void pte_fragment_free(unsigned long *table, int kernel)
> > > >  		__free_page(page);
> > > >  	}
> > > >  }
> > > > +
> > > > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > > > +#define PTE_FREE_DEFERRED 0x10000 /* beyond any PTE_FRAG_NR */
> > > > +
> > > > +static void pte_free_now(struct rcu_head *head)
> > > > +{
> > > > +	struct page *page;
> > > > +	int refcount;
> > > > +
> > > > +	page = container_of(head, struct page, rcu_head);
> > > > +	refcount = atomic_sub_return(PTE_FREE_DEFERRED - 1,
> > > > +				     &page->pt_frag_refcount);
> > > > +	if (refcount < PTE_FREE_DEFERRED) {
> > > > +		pte_fragment_free((unsigned long *)page_address(page), 0);
> > > > +		return;
> > > > +	}
> > > 
> > > From what I can tell power doesn't recycle the sub fragment into any
> > > kind of free list. It just waits for the last fragment to be unused
> > > and then frees the whole page.

Yes, it's relatively simple in that way: not as sophisticated as s390.

> > > 
> > > So why not simply go into pte_fragment_free() and do the call_rcu directly:
> > > 
> > > 	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
> > > 	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
> > > 		if (!kernel)
> > > 			pgtable_pte_page_dtor(page);
> > > 		call_rcu(&page->rcu_head, free_page_rcu)
> > 
> > We need to be careful about the lock being freed in pgtable_pte_page_dtor():
> > in Hugh's series IIUC we need the spinlock to stay around for the rcu section
> > alongside the page itself.  So even then we'll also need to rcu call
> > pgtable_pte_page_dtor() when needed.

Thanks, Peter, yes that's right.

> 
> Er yes, I botched that, the dtor and the free_page should be in the
> rcu callback function

But it was just a botched detail, and won't have answered Jason's doubt.

I had three (or perhaps it amounts to two) reasons for doing it this way:
none of which may seem good enough reasons to you.  Certainly I'd agree
that the way it's done seems... arcane.

One, as I've indicated before, I don't actually dare to go all
the way into RCU freeing of all page tables for powerpc (or any other):
I should think it's a good idea that everyone wants in the end, but I'm
limited by my time and competence - and dread of losing my way in the
mmu_gather TLB #ifdef maze.  It's work for someone else not me.

(Could pte_free_defer() do as you suggest, without changing pte_fragment_free()
itself?  No, that doesn't work out: if defer does, say, the decrement of
pt_frag_refcount from 2 to 1, and pte_fragment_free() then does the decrement
from 1 to 0, the page is freed without deferral.)
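
A toy userspace model of that ordering problem (only the refcount
transitions described above, not the actual powerpc code or constants):
if the deferred path dropped its reference up front, the other fragment's
ordinary free could release the page before any grace period; keeping the
deferred reference until the RCU callback runs avoids that.

	#include <assert.h>
	#include <stdio.h>

	static int refcount;	/* fragments currently holding the page */
	static int page_freed;

	static void fragment_free(void)		/* ordinary, immediate free */
	{
		if (--refcount == 0)
			page_freed = 1;
	}

	int main(void)
	{
		/* Buggy ordering: defer drops its reference immediately. */
		refcount = 2; page_freed = 0;
		--refcount;		/* naive decrement in the defer path */
		fragment_free();	/* other fragment freed normally */
		assert(page_freed);	/* page gone before any grace period */

		/* Safe ordering: defer keeps its reference until the callback. */
		refcount = 2; page_freed = 0;
		fragment_free();	/* other fragment freed normally */
		assert(!page_freed);	/* deferred reference still pins the page */
		fragment_free();	/* RCU callback drops the last reference */
		assert(page_freed);
		printf("page released only after the deferred free ran\n");
		return 0;
	}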

Two, this was the code I'd worked out before, and was used in production,
so I had confidence in it - it was just my mistake that I'd forgotten the
single rcu_head issue, and thought I could avoid it in the initial posting.
powerpc has changed around since then, but apparently not in any way that
affects this.  And it's too easy to agree in review that something can be
simpler, without bringing back to mind why the complications are there.

Three (just an explanation of why the old code was like this), powerpc
relies on THP's page table deposit+withdraw protocol, even for shmem/
file THPs.  I've skirted that issue in this series, by sticking with
retract_page_tables(), not attempting to insert huge pmd immediately.
But if huge pmd is inserted to replace ptetable pmd, then ptetable must
be deposited: pte_free_defer() as written protects the deposited ptetable
from then being freed without deferral (rather like in the example above).

But does not protect it from being withdrawn and reused within that
grace period.  Jann has grave doubts whether that can ever be allowed
(or perhaps I should grant him certainty, and examples that it cannot).
I did convince myself, back in the day, that it was safe here: but I'll
have to put in a lot more thought to re-justify it now, and on the way
may instead be completely persuaded by Jann.

Not very good reasons: good enough, or can you supply a better patch?

Thanks,
Hugh


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async()
       [not found]     ` <ZH99cLKeALvUCIH8@ziepe.ca>
@ 2023-06-08  2:46       ` Hugh Dickins
  0 siblings, 0 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-06-08  2:46 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Hugh Dickins, Gerald Schaefer, Vasily Gorbik, Andrew Morton,
	Mike Kravetz, Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
	Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
	Russell King, David S. Miller, Michael Ellerman,
	Aneesh Kumar K.V, Heiko Carstens, Christian Borntraeger,
	Claudio Imbrenda, Alexander Gordeev, Jann Horn, linux-arm-kernel,
	sparclinux, linuxppc-dev, linux-s390, linux-kernel, linux-mm

On Tue, 6 Jun 2023, Jason Gunthorpe wrote:
> On Mon, Jun 05, 2023 at 10:11:52PM -0700, Hugh Dickins wrote:
> 
> > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
> > the first two longs of the page table itself for threading the list.
> 
> It is not RCU anymore if it writes to the page table itself before the
> grace period, so this change seems to break the RCU behavior of
> page_table_free_rcu().. The rcu sync is inside tlb_remove_table()
> called after the stores.

Yes indeed, thanks for pointing that out.

> 
> Maybe something like an xarray on the mm to hold the frags?

I think we can manage without that:
I'll say slightly more in reply to Gerald.

Hugh


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async()
  2023-06-06 19:40     ` Gerald Schaefer
@ 2023-06-08  3:35       ` Hugh Dickins
  2023-06-08 13:58         ` Jason Gunthorpe
  2023-06-08 15:47         ` Gerald Schaefer
  0 siblings, 2 replies; 27+ messages in thread
From: Hugh Dickins @ 2023-06-08  3:35 UTC (permalink / raw)
  To: Gerald Schaefer
  Cc: Hugh Dickins, Vasily Gorbik, Andrew Morton, Mike Kravetz,
	Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual,
	Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig,
	Song Liu, Thomas Hellstrom, Russell King, David S. Miller,
	Michael Ellerman, Aneesh Kumar K.V, Heiko Carstens,
	Christian Borntraeger, Claudio Imbrenda, Alexander Gordeev,
	Jann Horn, linux-arm-kernel, sparclinux, linuxppc-dev,
	linux-s390, linux-kernel, linux-mm

On Tue, 6 Jun 2023, Gerald Schaefer wrote:
> On Mon, 5 Jun 2023 22:11:52 -0700 (PDT)
> Hugh Dickins <hughd@google.com> wrote:
> > On Thu, 1 Jun 2023 15:57:51 +0200
> > Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote:
> > > 
> > > Yes, we have 2 pagetables in one 4K page, which could result in same
> > > rcu_head reuse. It might be possible to use the cleverness from our
> > > page_table_free() function, e.g. to only do the call_rcu() once, for
> > > the case where both 2K pagetable fragments become unused, similar to
> > > how we decide when to actually call __free_page().
> > > 
> > > However, it might be much worse, and page->rcu_head from a pagetable
> > > page cannot be used at all for s390, because we also use page->lru
> > > to keep our list of free 2K pagetable fragments. I always get confused
> > > by struct page unions, so not completely sure, but it seems to me that
> > > page->rcu_head would overlay with page->lru, right?  
> > 
> > Sigh, yes, page->rcu_head overlays page->lru.  But (please correct me if
> > I'm wrong) I think that s390 could use exactly the same technique for
> > its list of free 2K pagetable fragments as it uses for its list of THP
> > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
> > the first two longs of the page table itself for threading the list.
> 
> Nice idea, I think that could actually work, since we only need the empty
> 2K halves on the list. So it should be possible to store the list_head
> inside those.

Jason quickly pointed out the flaw in my thinking there.

> 
> > 
> > And while it could use third and fourth longs instead, I don't see any
> > need for that: a deposited pagetable has been allocated, so would not
> > be on the list of free fragments.
> 
> Correct, that should not interfere.
> 
> > 
> > Below is one of the grossest patches I've ever posted: gross because
> > it's a rushed attempt to see whether that is viable, while it would take
> > me longer to understand all the s390 cleverness there (even though the
> > PP AA commentary above page_table_alloc() is excellent).
> 
> Sounds fair, this is also some of the grossest code we have, which is also
> why Alexander added the comment. I guess we could use even more comments
> inside the code, as it still confuses me more than it should.
> 
> Considering that, you did remarkably well. Your patch seems to work fine,
> at least it survived some LTP mm tests. I will also add it to our CI runs,
> to give it some more testing, and will report tomorrow if it breaks anything.
> See also below for some patch comments.

Many thanks for your effort on this patch.  I don't expect the testing
of it to catch Jason's point, that I'm corrupting the page table while
it's on its way through RCU to being freed, but he's right nonetheless.

I'll integrate your fixes below into what I have here, but probably
just archive it as something to refer to later in case it might play
a part; but probably it will not - sorry for wasting your time.

> 
> > 
> > I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint.
> > And cmma_init_nodat()? Ah, that's __init so I guess disjoint.
> 
> cmma_init_nodat() should be disjoint, not only because it is __init,
> but also because it explicitly skips pagetable pages, so it should
> never touch page->lru of those.
> 
> Not very familiar with the gmap code, it does look disjoint, and we should
> also use complete 4K pages for pagetables instead of 2K fragments there,
> but Christian or Claudio should also have a look.
> 
> > 
> > Gerald, s390 folk: would it be possible for you to give this
> > a try, suggest corrections and improvements, and then I can make it
> > a separate patch of the series; and work on avoiding concurrent use
> > of the rcu_head by pagetable fragment buddies (ideally fit in with
> > the scheme already there, maybe DD bits to go along with the PP AA).
> 
> It feels like it could be possible to not only avoid the double
> rcu_head, but also avoid passing over the mm via page->pt_mm.
> I.e. have pte_free_defer(), which has the mm, do all the checks and
> list updates that page_table_free() does, for which we need the mm.
> Then just skip the pgtable_pte_page_dtor() + __free_page() at the end,
> and do call_rcu(pte_free_now) instead. The pte_free_now() could then
> just do _dtor/__free_page similar to the generic version.

I'm not sure: I missed your suggestion there when I first skimmed
through, and today have spent more time getting deeper into how it's
done at present.  I am now feeling more confident of a way forward,
a nicely integrated way forward, than I was yesterday.
Though getting it right may not be so easy.

When Jason pointed out the existing RCU, I initially hoped that it might
already provide the necessary framework: but sadly not, because the
unbatched case (used when additional memory is not available) does not
use RCU at all, but instead the tlb_remove_table_sync_one() IRQ hack.
If I used that, it would cripple the s390 implementation unacceptably.

> 
> I must admit that I still have no good overview of the "big picture"
> here, and especially if this approach would still fit in. Probably not,
> as the to-be-freed pagetables would still be accessible, but not really
> valid, if we added them back to the list, with list_heads inside them.
> So maybe call_rcu() has to be done always, and not only for the case
> where the whole 4K page becomes free, then we probably cannot do w/o
> passing over the mm for proper list handling.

My current thinking (but may be proved wrong) is along the lines of:
why does something on its way to being freed need to be on any list
other than the rcu_head list?  I expect the current answer is that the
other half is allocated, so the page won't be freed; but I hope that
we can put it back on that list once we're through with the rcu_head.

But the less I say now, the less I shall make a fool of myself:
I need to get deeper in.

> 
> Ah, and they could also be re-used, once they are back on the list,
> which will probably not go well. Is that what you meant with DD bits,
> i.e. mark such fragments to prevent re-use? Smells a bit like the
> "pending purge"

Yes, we may not need those DD defer bits at all: the pte_free_defer()
pagetables should fit very well with "pending purge" as it is.  They
will go down an unbatched route, but should be obeying the same rules.

> 
> > 
> > Why am I even asking you to move away from page->lru: why don't I
> > thread s390's pte_free_defer() pagetables like THP's deposit does?
> > I cannot, because the deferred pagetables have to remain accessible
> > as valid pagetables, until the RCU grace period has elapsed - unless
> > all the list pointers would appear as pte_none(), which I doubt.
> 
> Yes, only empty and invalid PTEs will appear as pte_none(), i.e. entries
> that contain only 0x400.
> 
> Ok, I guess that also explains why the approach mentioned above,
> to avoid passing over the mm and do the list handling already in
> pte_free_defer(), will not be so easy or possible at all.
> 
> > 
> > (That may limit our possibilities with the deposited pagetables in
> > future: I can imagine them too wanting to remain accessible as valid
> > pagetables.  But that's not needed by this series, and s390 only uses
> > deposit/withdraw for anon THP; and some are hoping that we might be
> > able to move away from deposit/withdraw altogether - though powerpc's
> > special use will make that more difficult.)
> > 
> > Thanks!
> > Hugh
> > 
> > --- 6.4-rc5/arch/s390/mm/pgalloc.c
> > +++ linux/arch/s390/mm/pgalloc.c
> > @@ -232,6 +232,7 @@ void page_table_free_pgste(struct page *
> >   */
> >  unsigned long *page_table_alloc(struct mm_struct *mm)
> >  {
> > +	struct list_head *listed;
> >  	unsigned long *table;
> >  	struct page *page;
> >  	unsigned int mask, bit;
> > @@ -241,8 +242,8 @@ unsigned long *page_table_alloc(struct m
> >  		table = NULL;
> >  		spin_lock_bh(&mm->context.lock);
> >  		if (!list_empty(&mm->context.pgtable_list)) {
> > -			page = list_first_entry(&mm->context.pgtable_list,
> > -						struct page, lru);
> > +			listed = mm->context.pgtable_list.next;
> > +			page = virt_to_page(listed);
> >  			mask = atomic_read(&page->_refcount) >> 24;
> >  			/*
> >  			 * The pending removal bits must also be checked.
> > @@ -259,9 +260,12 @@ unsigned long *page_table_alloc(struct m
> >  				bit = mask & 1;		/* =1 -> second 2K */
> >  				if (bit)
> >  					table += PTRS_PER_PTE;
> > +				BUG_ON(table != (unsigned long *)listed);
> >  				atomic_xor_bits(&page->_refcount,
> >  							0x01U << (bit + 24));
> > -				list_del(&page->lru);
> > +				list_del(listed);
> > +				set_pte((pte_t *)&table[0], __pte(_PAGE_INVALID));
> > +				set_pte((pte_t *)&table[1], __pte(_PAGE_INVALID));
> >  			}
> >  		}
> >  		spin_unlock_bh(&mm->context.lock);
> > @@ -288,8 +292,9 @@ unsigned long *page_table_alloc(struct m
> >  		/* Return the first 2K fragment of the page */
> >  		atomic_xor_bits(&page->_refcount, 0x01U << 24);
> >  		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
> > +		listed = (struct list head *)(table + PTRS_PER_PTE);
> 
> Missing "_" in "struct list head"
> 
> >  		spin_lock_bh(&mm->context.lock);
> > -		list_add(&page->lru, &mm->context.pgtable_list);
> > +		list_add(listed, &mm->context.pgtable_list);
> >  		spin_unlock_bh(&mm->context.lock);
> >  	}
> >  	return table;
> > @@ -310,6 +315,7 @@ static void page_table_release_check(str
> >  
> >  void page_table_free(struct mm_struct *mm, unsigned long *table)
> >  {
> > +	struct list_head *listed;
> >  	unsigned int mask, bit, half;
> >  	struct page *page;
> 
> Not sure if "reverse X-mas" is still part of any style guidelines,
> but I still am a big fan of that :-). Although the other code in that
> file is also not consistently using it ...
> 
> >  
> > @@ -325,10 +331,24 @@ void page_table_free(struct mm_struct *m
> >  		 */
> >  		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
> >  		mask >>= 24;
> > -		if (mask & 0x03U)
> > -			list_add(&page->lru, &mm->context.pgtable_list);
> > -		else
> > -			list_del(&page->lru);
> > +		if (mask & 0x03U) {
> > +			listed = (struct list_head *)table;
> > +			list_add(listed, &mm->context.pgtable_list);
> > +		} else {
> > +			/*
> > +			 * Get address of the other page table sharing the page.
> > +			 * There are sure to be MUCH better ways to do all this!
> > +			 * But I'm rushing, while trying to keep to the obvious.
> > +			 */
> > +			listed = (struct list_head *)(table + PTRS_PER_PTE);
> > +			if (virt_to_page(listed) != page) {
> > +				/* sizeof(*listed) is twice sizeof(*table) */
> > +				listed -= PTRS_PER_PTE;
> > +			}
> 
> Bitwise XOR with 0x800 should do the trick here, i.e. give you the address
> of the other 2K half, like this:
> 
> 			listed = (struct list_head *)((unsigned long) table ^ 0x800UL);
> 
> > +			list_del(listed);
> > +			set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID));
> > +			set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID));
> > +		}
> >  		spin_unlock_bh(&mm->context.lock);
> >  		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
> >  		mask >>= 24;
> > @@ -349,6 +369,7 @@ void page_table_free(struct mm_struct *m
> >  void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
> >  			 unsigned long vmaddr)
> >  {
> > +	struct list_head *listed;
> >  	struct mm_struct *mm;
> >  	struct page *page;
> >  	unsigned int bit, mask;
> > @@ -370,10 +391,24 @@ void page_table_free_rcu(struct mmu_gath
> >  	 */
> >  	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
> >  	mask >>= 24;
> > -	if (mask & 0x03U)
> > -		list_add_tail(&page->lru, &mm->context.pgtable_list);
> > -	else
> > -		list_del(&page->lru);
> > +	if (mask & 0x03U) {
> > +		listed = (struct list_head *)table;
> > +		list_add_tail(listed, &mm->context.pgtable_list);
> > +	} else {
> > +		/*
> > +		 * Get address of the other page table sharing the page.
> > +		 * There are sure to be MUCH better ways to do all this!
> > +		 * But I'm rushing, and trying to keep to the obvious.
> > +		 */
> > +		listed = (struct list_head *)(table + PTRS_PER_PTE);
> > +		if (virt_to_page(listed) != page) {
> > +			/* sizeof(*listed) is twice sizeof(*table) */
> > +			listed -= PTRS_PER_PTE;
> > +		}
> 
> Same as above.
> 
> > +		list_del(listed);
> > +		set_pte((pte_t *)&listed->next, __pte(_PAGE_INVALID));
> > +		set_pte((pte_t *)&listed->prev, __pte(_PAGE_INVALID));
> > +	}
> >  	spin_unlock_bh(&mm->context.lock);
> >  	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
> >  	tlb_remove_table(tlb, table);
> 
> Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>

Thanks a lot, Gerald, sorry that it now looks like wasted effort.

I'm feeling confident enough of getting into s390 PP-AA-world now, that
I think my top priority should be posting a v2 of the two preliminary
series: get those out before focusing back on s390 mm/pgalloc.c.

Is it too early to wish you a happy reverse Xmas?

Hugh


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async()
  2023-06-08  3:35       ` Hugh Dickins
@ 2023-06-08 13:58         ` Jason Gunthorpe
  2023-06-08 15:47         ` Gerald Schaefer
  1 sibling, 0 replies; 27+ messages in thread
From: Jason Gunthorpe @ 2023-06-08 13:58 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Gerald Schaefer, Vasily Gorbik, Andrew Morton, Mike Kravetz,
	Mike Rapoport, Kirill A. Shutemov, Matthew Wilcox,
	David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi,
	Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao,
	Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price,
	SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin,
	Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
	Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
	Russell King, David S. Miller, Michael Ellerman,
	Aneesh Kumar K.V, Heiko Carstens, Christian Borntraeger,
	Claudio Imbrenda, Alexander Gordeev, Jann Horn, linux-arm-kernel,
	sparclinux, linuxppc-dev, linux-s390, linux-kernel, linux-mm

On Wed, Jun 07, 2023 at 08:35:05PM -0700, Hugh Dickins wrote:

> My current thinking (but may be proved wrong) is along the lines of:
> why does something on its way to being freed need to be on any list
> other than the rcu_head list?  I expect the current answer is that the
> other half is allocated, so the page won't be freed; but I hope that
> we can put it back on that list once we're through with the rcu_head.

I was having the same thought. It is pretty tricky, but if this were
made into some core helper then PPC and S390 could both use it, and PPC
would get a nice upgrade: the S390-style frag re-use instead of
leaking frags.

Broadly we have three states:

 all frags free
 at least one frag free
 all frags used

'all frags free' should be returned to the allocator
'at least one frag free' should have the struct page on the mm_struct's list
'all frags used' should be on no list.

So if we go from 
  all frags used -> at least one frag free
Then we put it on the RCU, then the RCU puts it on the mm_struct list

If we go from 
   at least one frag free -> all frags free
Then we take it off the mm_struct list, put it on the RCU, and RCU
frees it.

Your trick to put the list_head for the mm_struct list into the frag
memory looks like the right direction. So in the 'at least one frag free'
state, a single already-RCU-freed frag holds the list head pointer. Thus
we never use the LRU and the rcu_head is always available.

The struct page itself can contain the actual free frag bitmask.

I think if we split up the memory used for pt_frag_refcount we can get
enough bits to keep track of everything. With only 2-4 frags we should
be OK.

So we track this data in the struct page:
  - Current RCU free TODO bitmask - if non-zero then an RCU is already
    triggered
  - Next RCU TODO bitmask - if an RCU is already triggered then we
    accumulate more free'd frags here
  - Current Free Bits - Only updated by the RCU callback

?

We'd also need to store the mm_struct pointer in the struct page for
the RCU to be able to add/remove from the mm_struct list.

I'm not sure how much of the work can be done with atomics and how
much would need to rely on spinlock inside the mm_struct.
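
A userspace toy of just that bookkeeping (names invented; no real RCU,
locking or allocation here): a per-page "free" bitmask updated only by
the RCU callback, plus a "pending" bitmask for fragments queued but not
yet through their grace period, is enough to drive the three states.

	#include <assert.h>
	#include <stdio.h>

	#define NFRAGS	2
	#define ALL	((1u << NFRAGS) - 1)

	struct pt_page {
		unsigned int free;	/* frags whose grace period has elapsed */
		unsigned int pending;	/* frags queued for RCU, callback pending */
		int on_mm_list;		/* the "at least one frag free" state */
	};

	static void frag_free(struct pt_page *p, int frag)
	{
		p->pending |= 1u << frag;	/* defer: would trigger call_rcu() */
	}

	static void rcu_callback(struct pt_page *p)
	{
		p->free |= p->pending;		/* pending frags are now really free */
		p->pending = 0;
		if (p->free == ALL)
			p->on_mm_list = 0;	/* whole page back to the allocator */
		else if (p->free)
			p->on_mm_list = 1;	/* partially free: on the mm's list */
	}

	int main(void)
	{
		struct pt_page page = { 0, 0, 0 };	/* all frags used: no list */

		frag_free(&page, 0);
		assert(!page.free && !page.on_mm_list);	/* unusable until RCU */
		rcu_callback(&page);
		assert(page.free == 0x1 && page.on_mm_list);

		frag_free(&page, 1);
		rcu_callback(&page);
		assert(page.free == ALL && !page.on_mm_list);
		printf("all fragments free: return the page to the allocator\n");
		return 0;
	}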

It feels feasible and not so bad. :)

Figure it out and test it on S390 then make power use the same common
code, and we get full RCU page table freeing using a reliable rcu_head
on both of these previously troublesome architectures :) Yay

Jason


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async()
  2023-06-08  3:35       ` Hugh Dickins
  2023-06-08 13:58         ` Jason Gunthorpe
@ 2023-06-08 15:47         ` Gerald Schaefer
  1 sibling, 0 replies; 27+ messages in thread
From: Gerald Schaefer @ 2023-06-08 15:47 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Vasily Gorbik, Andrew Morton, Mike Kravetz, Mike Rapoport,
	Kirill A. Shutemov, Matthew Wilcox, David Hildenbrand,
	Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman, Peter Xu,
	Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple,
	Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park,
	Naoya Horiguchi, Christophe Leroy, Zack Rusin, Jason Gunthorpe,
	Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin,
	Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom,
	Russell King, David S. Miller, Michael Ellerman,
	Aneesh Kumar K.V, Heiko Carstens, Christian Borntraeger,
	Claudio Imbrenda, Alexander Gordeev, Jann Horn, linux-arm-kernel,
	sparclinux, linuxppc-dev, linux-s390, linux-kernel, linux-mm

On Wed, 7 Jun 2023 20:35:05 -0700 (PDT)
Hugh Dickins <hughd@google.com> wrote:

> On Tue, 6 Jun 2023, Gerald Schaefer wrote:
> > On Mon, 5 Jun 2023 22:11:52 -0700 (PDT)
> > Hugh Dickins <hughd@google.com> wrote:  
> > > On Thu, 1 Jun 2023 15:57:51 +0200
> > > Gerald Schaefer <gerald.schaefer@linux.ibm.com> wrote:  
> > > > 
> > > > Yes, we have 2 pagetables in one 4K page, which could result in same
> > > > rcu_head reuse. It might be possible to use the cleverness from our
> > > > page_table_free() function, e.g. to only do the call_rcu() once, for
> > > > the case where both 2K pagetable fragments become unused, similar to
> > > > how we decide when to actually call __free_page().
> > > > 
> > > > However, it might be much worse, and page->rcu_head from a pagetable
> > > > page cannot be used at all for s390, because we also use page->lru
> > > > to keep our list of free 2K pagetable fragments. I always get confused
> > > > by struct page unions, so not completely sure, but it seems to me that
> > > > page->rcu_head would overlay with page->lru, right?    
> > > 
> > > Sigh, yes, page->rcu_head overlays page->lru.  But (please correct me if
> > > I'm wrong) I think that s390 could use exactly the same technique for
> > > its list of free 2K pagetable fragments as it uses for its list of THP
> > > "deposited" pagetable fragments, over in arch/s390/mm/pgtable.c: use
> > > the first two longs of the page table itself for threading the list.  
> > 
> > Nice idea, I think that could actually work, since we only need the empty
> > 2K halves on the list. So it should be possible to store the list_head
> > inside those.  
> 
> Jason quickly pointed out the flaw in my thinking there.

Yes, while I had the right concerns about "the to-be-freed pagetables would
still be accessible, but not really valid, if we added them back to the list,
with list_heads inside them", when suggesting the approach w/o passing over
the mm, I missed that we would have the very same issue already with the
existing page_table_free_rcu().

Thankfully Jason was watching out!

> 
> >   
> > > 
> > > And while it could use third and fourth longs instead, I don't see any
> > > need for that: a deposited pagetable has been allocated, so would not
> > > be on the list of free fragments.  
> > 
> > Correct, that should not interfere.
> >   
> > > 
> > > Below is one of the grossest patches I've ever posted: gross because
> > > it's a rushed attempt to see whether that is viable, while it would take
> > > me longer to understand all the s390 cleverness there (even though the
> > > PP AA commentary above page_table_alloc() is excellent).  
> > 
> > Sounds fair, this is also some of the grossest code we have, which is also
> > why Alexander added the comment. I guess we could use even more comments
> > inside the code, as it still confuses me more than it should.
> > 
> > Considering that, you did remarkably well. Your patch seems to work fine,
> > at least it survived some LTP mm tests. I will also add it to our CI runs,
> > to give it some more testing, and will report tomorrow if it breaks anything.
> > See also below for some patch comments.  
> 
> Many thanks for your effort on this patch.  I don't expect the testing
> of it to catch Jason's point, that I'm corrupting the page table while
> it's on its way through RCU to being freed, but he's right nonetheless.

Right, tests ran fine, but we would have introduced subtle issues with
racing gup_fast, I guess.

> 
> I'll integrate your fixes below into what I have here, but probably
> just archive it as something to refer to later in case it might play
> a part; but probably it will not - sorry for wasting your time.

No worries, looking at that s390 code can never be amiss. It seems I need
a regular refresher; at least I'm sure I understood it better in the
past.

And who knows, with Jason's recent thoughts, that "list_head inside
pagetable" idea might not be dead yet.

> 
> >   
> > > 
> > > I'm hoping the use of page->lru in arch/s390/mm/gmap.c is disjoint.
> > > And cmma_init_nodat()? Ah, that's __init so I guess disjoint.  
> > 
> > cmma_init_nodat() should be disjoint, not only because it is __init,
> > but also because it explicitly skips pagetable pages, so it should
> > never touch page->lru of those.
> > 
> > Not very familiar with the gmap code, it does look disjoint, and we should
> > also use complete 4K pages for pagetables instead of 2K fragments there,
> > but Christian or Claudio should also have a look.
> >   
> > > 
> > > Gerald, s390 folk: would it be possible for you to give this
> > > a try, suggest corrections and improvements, and then I can make it
> > > a separate patch of the series; and work on avoiding concurrent use
> > > of the rcu_head by pagetable fragment buddies (ideally fit in with
> > > the scheme already there, maybe DD bits to go along with the PP AA).  
> > 
> > It feels like it could be possible to not only avoid the double
> > rcu_head, but also avoid passing over the mm via page->pt_mm.
> > I.e. have pte_free_defer(), which has the mm, do all the checks and
> > list updates that page_table_free() does, for which we need the mm.
> > Then just skip the pgtable_pte_page_dtor() + __free_page() at the end,
> > and do call_rcu(pte_free_now) instead. The pte_free_now() could then
> > just do _dtor/__free_page similar to the generic version.  
> 
> I'm not sure: I missed your suggestion there when I first skimmed
> through, and today have spent more time getting deeper into how it's
> done at present.  I am now feeling more confident of a way forward,
> a nicely integrated way forward, than I was yesterday.
> Though getting it right may not be so easy.

I think my "feeling" was a déjà vu of the existing logic that we use for
page_table_free_rcu() -> __tlb_remove_table(), where we also have no mm
any more at the end, and use the PP bits magic to find out if the page
can be freed, or if we still have fragments left.

Of course, in that case, we also would not need the mm any more for
list handling, as the to-be-freed fragments were already put back
on the list, but with PP bits set, to prevent re-use. And clearing
those would then make the fragment usable from the list again.

I guess that would also be the major difference here, i.e. your RCU
call-back would need to be able to add fragments back to the list,
after having removed them earlier to make room for page->rcu_head,
but with Jason's thoughts that does not seem so impossible after all.

I do not yet understand if the list_head would then necessarily need
to be inside the pagetable, because page->rcu_head/lru still cannot be
used (again). But you already have a patch for that, so either way
might be possible.

> 
> When Jason pointed out the existing RCU, I initially hoped that it might
> already provide the necessary framework: but sadly not, because the
> unbatched case (used when additional memory is not available) does not
> use RCU at all, but instead the tlb_remove_table_sync_one() IRQ hack.
> If I used that, it would cripple the s390 implementation unacceptably.
> 
> > 
> > I must admit that I still have no good overview of the "big picture"
> > here, and especially if this approach would still fit in. Probably not,
> > as the to-be-freed pagetables would still be accessible, but not really
> > valid, if we added them back to the list, with list_heads inside them.
> > So maybe call_rcu() has to be done always, and not only for the case
> > where the whole 4K page becomes free, then we probably cannot do w/o
> > passing over the mm for proper list handling.  
> 
> My current thinking (but may be proved wrong) is along the lines of:
> why does something on its way to being freed need to be on any list
> other than the rcu_head list?  I expect the current answer is that the
> other half is allocated, so the page won't be freed; but I hope that
> we can put it back on that list once we're through with the rcu_head.

Yes, that looks promising. Such a fragment would not necessarily need
to be on the list, because while it is on its way, i.e. before the
RCU call-back has finished, it cannot be re-used anyway.

page_table_alloc() could currently find such a fragment on the list, but
only to see the PP bits set, so it will not use it. Only after
__tlb_remove_table() in the RCU call-back resets the bits would it
become usable again.

In your case, that could correspond to adding it back to the list.
That could even be an improvement, because page_table_alloc() would
not be bothered by such unusable fragments.

[...]
> 
> Is it too early to wish you a happy reverse Xmas?

Nice idea, we should make June 24th the reverse Xmas Remembrance Day :-)


^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2023-06-08 15:49 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-29  6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
2023-05-29  6:14 ` [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s Hugh Dickins
2023-05-31 17:06   ` Jann Horn
2023-05-29  6:16 ` [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map() Hugh Dickins
2023-05-29 13:56   ` Matthew Wilcox
2023-05-29  6:17 ` [PATCH 03/12] arm: adjust_pte() use pte_offset_map_nolock() Hugh Dickins
2023-05-29  6:18 ` [PATCH 04/12] powerpc: assert_pte_locked() " Hugh Dickins
2023-05-29  6:20 ` [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page Hugh Dickins
2023-05-29 14:02   ` Matthew Wilcox
2023-05-29 14:36     ` Hugh Dickins
     [not found]     ` <ZHn6n5eVTsr4Wl8x@ziepe.ca>
     [not found]       ` <4df4909f-f5dd-6f94-9792-8f2949f542b3@google.com>
     [not found]         ` <ZH95oobIqN0WO5MK@ziepe.ca>
     [not found]           ` <ZH+DAxLhIYpTlIFc@x1n>
     [not found]             ` <ZH+EMp9RuEVOjVNb@ziepe.ca>
2023-06-07  3:49               ` Hugh Dickins
2023-05-29  6:21 ` [PATCH 06/12] sparc: " Hugh Dickins
2023-05-29  6:22 ` [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async() Hugh Dickins
     [not found]   ` <175ebec8-761-c3f-2d98-6c3bd87161c8@google.com>
2023-06-06 19:40     ` Gerald Schaefer
2023-06-08  3:35       ` Hugh Dickins
2023-06-08 13:58         ` Jason Gunthorpe
2023-06-08 15:47         ` Gerald Schaefer
     [not found]     ` <ZH99cLKeALvUCIH8@ziepe.ca>
2023-06-08  2:46       ` Hugh Dickins
2023-05-29  6:23 ` [PATCH 08/12] mm/pgtable: add pte_free_defer() for pgtable as page Hugh Dickins
2023-05-29  6:25 ` [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock Hugh Dickins
2023-05-29 23:26   ` Peter Xu
2023-05-31  0:38     ` Hugh Dickins
2023-05-31 15:34   ` Jann Horn
2023-05-29  6:26 ` [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock() Hugh Dickins
2023-05-31 17:25   ` Jann Horn
2023-05-29  6:28 ` [PATCH 11/12] mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps() Hugh Dickins
2023-05-29  6:30 ` [PATCH 12/12] mm: delete mmap_write_trylock() and vma_try_start_write() Hugh Dickins
