* [PATCH v2 00/13] mm/gup: Unify hugetlb, part 2
@ 2024-01-03  9:14 ` peterx
  0 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

v2:
- Collect acks
- Patch 9:
  - Use READ_ONCE() to fetch pud entry [James]

rfc: https://lore.kernel.org/r/20231116012908.392077-1-peterx@redhat.com
v1:  https://lore.kernel.org/r/20231219075538.414708-1-peterx@redhat.com

This is v2 of the series, based on the latest mm-unstable (856325d361df).

The series removes the hugetlb slow gup path, following the previous
refactoring [1], so that slow gup now uses the exact same path to process
all kinds of memory, including hugetlb.

In the long term, we may want to remove most, if not all, call sites of
huge_pte_offset().  Ideally, that API could be dropped from the arch
hugetlb API entirely.  This series is one small step towards merging
hugetlb-specific code into the generic mm paths; from that point of view,
it removes one of the many remaining references to huge_pte_offset().

One goal of this route is that we can reconsider merging hugetlb features
like High Granularity Mapping (HGM).  It was not accepted in the past
because it would add lots of hugetlb-specific code and make the mm code
even harder to maintain.  With a merged code base, features like HGM can
hopefully share code with THP, whether legacy (PMD-level and above) or
modern (contiguous PTEs).

To make this work, the generic slow gup code needs to at least understand
hugepd, as fast-gup already does.  Fortunately, besides the normal huge
PxD entries, that seems to be the only major thing slow GUP must be taught
before it can share the common path.  Non-gup paths can be more
challenging, but that's a question for later.

There is one major difference for slow-gup in cont_pte / cont_pmd
handling, currently supported on three architectures (aarch64, riscv,
ppc).  Before the series, slow gup was able to recognize e.g. cont_pte
entries with the help of huge_pte_offset() when an hstate was around.
That path is now gone, but things keep working by looking up pgtable
entries one by one.

It's not ideal, but hopefully this change does not yet affect any major
workloads.  There is more information in the commit message of the last
patch.  If this turns out to be a concern, we can consider teaching slow
gup to recognize cont pte/pmd entries, which should recover the lost
performance.  I doubt that is necessary for now, so I kept it as simple as
it can be.
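
As a rough sketch (illustrative only, not code from this series), the
difference for a 64KB cont_pte mapping on a 4K base page size boils down
to one hugetlb-aware lookup versus sixteen individual pte lookups:

#include <linux/mm.h>
#include <linux/hugetlb.h>

/* Before: one hugetlb-aware lookup covers the whole 64KB range. */
static void lookup_before(struct mm_struct *mm, unsigned long addr,
			  unsigned long sz)
{
	pte_t *ptep = huge_pte_offset(mm, addr & ~(sz - 1), sz);

	if (ptep) {
		/* One entry describes all 16 base pages. */
	}
}

/* After: the generic walker visits each pte individually. */
static void lookup_after(pmd_t *pmd, unsigned long addr, unsigned long end)
{
	for (; addr < end; addr += PAGE_SIZE) {
		pte_t *ptep = pte_offset_map(pmd, addr);

		if (ptep) {
			/* Process one base page. */
			pte_unmap(ptep);
		}
	}
}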

Tests done
==========

V1 of this series went through the normal GUP smoke tests over different
memory types on several archs (using VM instances): x86_64, aarch64 and
ppc64le.  For aarch64, it was tested over 64KB cont_pte huge pages.  For
ppc64le, it was tested over 16MB hugepd entries (Power8 hash MMU with a 4K
base page size).

Patch layout
=============

Patch 1-7:    Preparation work and cleanups in the relevant code paths
Patch 8-12:   Teach slow gup about all kinds of huge entries (pXd, hugepd)
Patch 13:     Drop hugetlb_follow_page_mask()

More information can be found in the commit message of each patch.  Any
comments are welcome.  Thanks.

[1] https://lore.kernel.org/all/20230628215310.73782-1-peterx@redhat.com

Peter Xu (13):
  mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
  mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
  mm: Provide generic pmd_thp_or_huge()
  mm: Make HPAGE_PXD_* macros even if !THP
  mm: Introduce vma_pgtable_walk_{begin|end}()
  mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  mm/gup: Refactor record_subpages() to find 1st small page
  mm/gup: Handle hugetlb for no_page_table()
  mm/gup: Cache *pudp in follow_pud_mask()
  mm/gup: Handle huge pud for follow_pud_mask()
  mm/gup: Handle huge pmd for follow_pmd_mask()
  mm/gup: Handle hugepd for follow_page()
  mm/gup: Handle hugetlb in the generic follow_page_mask code

 include/linux/huge_mm.h |  25 +--
 include/linux/hugetlb.h |  16 +-
 include/linux/mm.h      |   3 +
 include/linux/pgtable.h |   4 +
 mm/Kconfig              |   3 +
 mm/gup.c                | 362 ++++++++++++++++++++++++++++++++--------
 mm/huge_memory.c        | 133 +--------------
 mm/hugetlb.c            |  75 +--------
 mm/internal.h           |   7 +-
 mm/memory.c             |  12 ++
 10 files changed, 342 insertions(+), 298 deletions(-)

-- 
2.41.0


* [PATCH v2 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
  2024-01-03  9:14 ` peterx
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Introduce a config option that will be selected whenever huge leaves can
appear in page tables (THP or hugetlbfs).  It is useful for marking any
code that can process either hugetlb or THP pages at any level higher than
the pte level.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index cb9d470f0bf7..9350ba180d52 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -904,6 +904,9 @@ config READ_ONLY_THP_FOR_FS
 
 endif # TRANSPARENT_HUGEPAGE
 
+config PGTABLE_HAS_HUGE_LEAVES
+	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #
-- 
2.41.0
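
As a hedged illustration (not part of the patch), this is the kind of
guard the new option enables: code handling huge leaves from either THP or
hugetlbfs can be compiled out with a single option instead of separate
THP- and hugetlb-specific guards.  The helper name below is hypothetical:

#include <linux/pgtable.h>

#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
static bool example_pmd_is_huge_leaf(pmd_t pmd)
{
	/* Covers both THP and hugetlb PMD leaves. */
	return pmd_leaf(pmd);
}
#else
static bool example_pmd_is_huge_leaf(pmd_t pmd)
{
	return false;
}
#endif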


* [PATCH v2 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
  2024-01-03  9:14 ` peterx
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

It will be used outside hugetlb.c soon.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h | 9 +++++++++
 mm/hugetlb.c            | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c1ee640d87b1..e8eddd51fc17 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma,
+				 unsigned long address);
 
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
@@ -1221,6 +1224,12 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline bool hugetlbfs_pagecache_present(
+    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0d262784ce60..bfb52bb8b943 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6017,8 +6017,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 /*
  * Return whether there is a pagecache page to back given address within VMA.
  */
-static bool hugetlbfs_pagecache_present(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma, unsigned long address)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = linear_page_index(vma, address);
-- 
2.41.0
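
A hedged sketch (not from this series) of how a caller outside hugetlb.c
might use the helper; since the !CONFIG_HUGETLB_PAGE stub returns false,
the call site needs no extra ifdefs.  The caller below is hypothetical:

#include <linux/hugetlb.h>

static bool example_backed_by_pagecache(struct vm_area_struct *vma,
					unsigned long address)
{
	if (!is_vm_hugetlb_page(vma))
		return false;

	return hugetlbfs_pagecache_present(hstate_vma(vma), vma, address);
}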


* [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
  2024-01-03  9:14 ` peterx
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

ARM defines pmd_thp_or_huge() to detect either a THP or a huge PMD.  It
can be a helpful helper if we want to merge more THP and hugetlb code
paths.  Make it a generic default implementation, which only exists with
CONFIG_MMU.  An arch can override it by defining its own version.

For example, ARM's pgtable-2level.h defines it to always return false.

Keep the macro declared with all configs; it should be optimized to false
anyway when !THP && !HUGETLB.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/pgtable.h | 4 ++++
 mm/gup.c                | 3 +--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 466cf477551a..2b42e95a4e3a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
 #endif /* pmd_write */
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+#ifndef pmd_thp_or_huge
+#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+#endif
+
 #ifndef pud_write
 static inline int pud_write(pud_t pud)
 {
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..eebae70d2465 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 		if (!pmd_present(pmd))
 			return 0;
 
-		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
-			     pmd_devmap(pmd))) {
+		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
 			/* See gup_pte_range() */
 			if (pmd_protnone(pmd))
 				return 0;
-- 
2.41.0
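
For illustration only (a sketch based on the ARM example mentioned in the
commit message), an architecture without huge PMDs can pre-empt the
generic fallback with its own definition, which the #ifndef guard in
<linux/pgtable.h> then honours:

/* Hypothetical arch header snippet: no huge PMDs on this configuration. */
#define pmd_thp_or_huge(pmd)	(0)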


* [PATCH v2 04/13] mm: Make HPAGE_PXD_* macros even if !THP
  2024-01-03  9:14 ` peterx
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

These macros can be helpful when we plan to merge hugetlb code into the
generic code.  Move them out of the THP-only section and define them even
if !THP.

We already define HPAGE_PMD_NR for other reasons even when !THP;
reorganize these macros accordingly.

Reviewed-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..96bd4b5d027e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;
 
-#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
-#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
-
 /*
  * Mask of all large folio orders supported for anonymous THP; all orders up to
  * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
@@ -87,14 +84,19 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
 	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define HPAGE_PMD_SHIFT PMD_SHIFT
 #define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
 #define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
+#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
+#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
 
 #define HPAGE_PUD_SHIFT PUD_SHIFT
 #define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
 #define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))
+#define HPAGE_PUD_ORDER (HPAGE_PUD_SHIFT-PAGE_SHIFT)
+#define HPAGE_PUD_NR (1<<HPAGE_PUD_ORDER)
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 extern unsigned long transparent_hugepage_flags;
 extern unsigned long huge_anon_orders_always;
@@ -377,13 +379,6 @@ static inline bool thp_migration_supported(void)
 }
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
-#define HPAGE_PMD_SHIFT ({ BUILD_BUG(); 0; })
-#define HPAGE_PMD_MASK ({ BUILD_BUG(); 0; })
-#define HPAGE_PMD_SIZE ({ BUILD_BUG(); 0; })
-
-#define HPAGE_PUD_SHIFT ({ BUILD_BUG(); 0; })
-#define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; })
-#define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
 
 static inline bool folio_test_pmd_mappable(struct folio *folio)
 {
-- 
2.41.0
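
As a hedged illustration (not from the patch), generic code can now use
these sizes without caring whether THP is configured; the helper below is
hypothetical:

#include <linux/huge_mm.h>

/* Compiles with or without CONFIG_TRANSPARENT_HUGEPAGE. */
static unsigned long example_huge_leaf_size(bool pud_level)
{
	return pud_level ? HPAGE_PUD_SIZE : HPAGE_PMD_SIZE;
}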


* [PATCH v2 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Introduce per-vma begin()/end() helpers for pgtable walks.  This is
preparation work to merge hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk, and will
become necessary once the pgtable walker code supports hugetlb pages.
They are a hook point for any type of VMA, but for now only hugetlb uses
them to stabilize the pgtable pages against going away (due to possible
pmd unsharing).
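
A minimal usage sketch (illustration only; walk_pgtable_range() is a
made-up stand-in for whatever pgtable walker ends up calling these
helpers):

	static void walk_one_vma(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end)
	{
		/* For hugetlb VMAs this takes the vma lock for read */
		vma_pgtable_walk_begin(vma);
		walk_pgtable_range(vma, start, end);	/* hypothetical walker */
		/* For hugetlb VMAs this drops the vma lock again */
		vma_pgtable_walk_end(vma);
	}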

Reviewed-by: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h |  3 +++
 mm/memory.c        | 12 ++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 896c0079f64f..6836da00671a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4181,4 +4181,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..89f3caac2ec8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6279,3 +6279,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

The hugepd format for GUP is only used on PowerPC with hugetlbfs.  There
is some kernel usage of hugepd elsewhere (see hugepd_populate_kernel()
for PPC_8XX), but those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
file-backed mappings") added a check to fail gup-fast if there's a
potential risk of violating GUP over writeback file systems.  That should
never apply to hugepd.  Considering that hugepd is an old (and even
software-only) format, there's no plan to extend hugepd to other
file-typed memories that are prone to the same issue.

Drop that check, not only because it'll never be true for hugepd per any
known plan, but also because it paves the way for reusing the function
outside fast-gup.

To make sure we'll still remember this issue just in case hugepd is ever
extended to support non-hugetlbfs memories, add a rich comment above
gup_huge_pd() explaining the issue with proper references.

Cc: Christoph Hellwig <hch@infradead.org>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index eebae70d2465..fa93e14b7fca 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2820,11 +2820,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		return 0;
 	}
 
-	if (!folio_fast_pin_allowed(folio, flags)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
 	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
@@ -2835,6 +2830,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	return 1;
 }
 
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 07/13] mm/gup: Refactor record_subpages() to find 1st small page
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

All the fast-gup functions operate on a tail page and always need to do
the page mask calculation before feeding it into record_subpages().

Merge that logic into record_subpages(), so that it does the nth_page()
calculation itself.
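
As an illustration (numbers assumed here: 4K base pages, a 2M PMD
mapping, and a walk starting 0x5000 bytes into it), record_subpages()
now does

	start_page = nth_page(pmd_page(orig), (addr & (PMD_SIZE - 1)) >> PAGE_SHIFT);
	/* (0x5000 >> PAGE_SHIFT) == 5, i.e. start at the 6th small page */

internally, instead of every caller doing that nth_page() math before
the call.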

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index fa93e14b7fca..3813aad79c4a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;
 
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);
 
 	return nr;
 }
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table()
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

no_page_table() is not yet used for hugetlb code paths.  Prepare it for
that.

The major difference here is that hugetlb will return -EFAULT as long as
the page cache does not exist, even for VM_SHARED.  See
hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3813aad79c4a..b8a80e2bfe08 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-		unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-			(vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4d = p4d_offset(pgdp, address);
 	if (p4d_none(*p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_huge(*p4d));
 	if (unlikely(p4d_bad(*p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
 }
@@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table()
@ 2024-01-03  9:14   ` peterx
  0 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

no_page_table() is not yet used for hugetlb code paths. Make it prepared.

The major difference here is hugetlb will return -EFAULT as long as page
cache does not exist, even if VM_SHARED.  See hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3813aad79c4a..b8a80e2bfe08 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-		unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-			(vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4d = p4d_offset(pgdp, address);
 	if (p4d_none(*p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_huge(*p4d));
 	if (unlikely(p4d_bad(*p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
 }
@@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
-- 
2.41.0


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table()
@ 2024-01-03  9:14   ` peterx
  0 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

no_page_table() is not yet used for hugetlb code paths. Make it prepared.

The major difference here is hugetlb will return -EFAULT as long as page
cache does not exist, even if VM_SHARED.  See hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3813aad79c4a..b8a80e2bfe08 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-		unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-			(vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4d = p4d_offset(pgdp, address);
 	if (p4d_none(*p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_huge(*p4d));
 	if (unlikely(p4d_bad(*p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
 }
@@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
-- 
2.41.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 09/13] mm/gup: Cache *pudp in follow_pud_mask()
  2024-01-03  9:14 ` peterx
  (?)
  (?)
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Introduce "pud_t pud" in the function, so the code won't dereference *pudp
multiple time.  Not only because that looks less straightforward, but also
because if the dereference really happened, it's not clear whether there
can be race to see different *pudp values if it's being modified at the
same time.
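
(Illustration only, not part of the patch: a minimal sketch contrasting the
two patterns.  Without the local copy, every pud_*() check performs its own
load of *pudp, so a concurrent update could be observed half-way through;
with the snapshot, all checks test the same value.)

	/* racy: each check reloads *pudp and may observe a different value */
	if (pud_none(*pudp))				/* load #1 */
		return no_page_table(vma, flags, address);
	if (pud_devmap(*pudp)) {			/* load #2, possibly newer */
		/* ... */
	}

	/* with this patch: one READ_ONCE() snapshot, all checks agree */
	pud_t pud = READ_ONCE(*pudp);

	if (pud_none(pud))
		return no_page_table(vma, flags, address);
	if (pud_devmap(pud)) {
		/* ... */
	}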

Acked-by: James Houghton <jthoughton@google.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index b8a80e2bfe08..63845b3ec44f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
 {
-	pud_t *pud;
+	pud_t *pudp, pud;
 	spinlock_t *ptl;
 	struct page *page;
 	struct mm_struct *mm = vma->vm_mm;
 
-	pud = pud_offset(p4dp, address);
-	if (pud_none(*pud))
+	pudp = pud_offset(p4dp, address);
+	pud = READ_ONCE(*pudp);
+	if (pud_none(pud))
 		return no_page_table(vma, flags, address);
-	if (pud_devmap(*pud)) {
-		ptl = pud_lock(mm, pud);
-		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
+	if (pud_devmap(pud)) {
+		ptl = pud_lock(mm, pudp);
+		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
 		return no_page_table(vma, flags, address);
 	}
-	if (unlikely(pud_bad(*pud)))
+	if (unlikely(pud_bad(pud)))
 		return no_page_table(vma, flags, address);
 
-	return follow_pmd_mask(vma, address, pud, flags, ctx);
+	return follow_pmd_mask(vma, address, pudp, flags, ctx);
 }
 
 static struct page *follow_p4d_mask(struct vm_area_struct *vma,
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask()
  2024-01-03  9:14 ` peterx
  (?)
  (?)
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Teach follow_pud_mask() to be able to handle normal PUD pages like hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb.  Move it out of TRANSPARENT_HUGEPAGE_PUD and
huge_memory.c (which relies on CONFIG_THP).

In the new follow_huge_pud(), take care of possible CoR for hugetlb if
necessary.  touch_pud() needs to be moved out of huge_memory.c to be
accessible from gup.c even if !THP.

While at it, optimize the non-present check by adding a pud_present() early
check before taking the pgtable lock, failing the follow_page() early if
the PUD is not present: that is required by both devmap and hugetlb.  Use
pud_huge() to also cover the pud_devmap() case.

One more trivial thing to mention: introduce "pud_t pud" in the code paths
along the way, so the code doesn't dereference *pudp multiple times.  Not
only because that reads less straightforwardly, but also because if the
dereference really happened, it's not clear whether there can be a race
that sees different *pudp values while the entry is being modified at the
same time.

Set ctx->page_mask properly for a PUD entry.  As a side effect, this patch
should also allow devmap GUP on PUD to jump over the whole PUD range, but
that has not been verified yet.  Hugetlb can already do so prior to this
patch.
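
(Illustration only, not from this patch: roughly how the GUP slow path
consumes ctx->page_mask, paraphrased from __get_user_pages().  With
page_mask set to HPAGE_PUD_NR - 1, a single follow_page_mask() call lets
the loop advance by up to a whole PUD worth of pages.)

	/* after follow_page_mask() returned a page for "start" */
	page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
	if (page_increm > nr_pages)
		page_increm = nr_pages;
	/* ... record page_increm subpages of the returned page ... */
	i += page_increm;
	start += page_increm * PAGE_SIZE;
	nr_pages -= page_increm;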

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h |  8 -----
 mm/gup.c                | 70 +++++++++++++++++++++++++++++++++++++++--
 mm/huge_memory.c        | 47 ++-------------------------
 mm/internal.h           |  2 ++
 4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 96bd4b5d027e..3b73d20d537e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -345,8 +345,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -502,12 +500,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-	unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	return NULL;
-}
-
 static inline bool thp_migration_supported(void)
 {
 	return false;
diff --git a/mm/gup.c b/mm/gup.c
index 63845b3ec44f..760406180222 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 	return NULL;
 }
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
+	pud_t pud = *pudp;
+	unsigned long pfn = pud_pfn(pud);
+	int ret;
+
+	assert_spin_locked(pud_lockptr(mm, pudp));
+
+	if ((flags & FOLL_WRITE) && !pud_write(pud))
+		return NULL;
+
+	if (!pud_present(pud))
+		return NULL;
+
+	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	if (pud_devmap(pud)) {
+		/*
+		 * device mapped pages can only be returned if the caller
+		 * will manage the page reference count.
+		 *
+		 * At least one of FOLL_GET | FOLL_PIN must be set, so
+		 * assert that here:
+		 */
+		if (!(flags & (FOLL_GET | FOLL_PIN)))
+			return ERR_PTR(-EEXIST);
+
+		if (flags & FOLL_TOUCH)
+			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+		if (!ctx->pgmap)
+			return ERR_PTR(-EFAULT);
+	}
+#endif	/* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+	page = pfn_to_page(pfn);
+
+	if (!pud_devmap(pud) && !pud_write(pud) &&
+	    gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		page = ERR_PTR(ret);
+	else
+		ctx->page_mask = HPAGE_PUD_NR - 1;
+
+	return page;
+}
+#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pudp = pud_offset(p4dp, address);
 	pud = READ_ONCE(*pudp);
-	if (pud_none(pud))
+	if (pud_none(pud) || !pud_present(pud))
 		return no_page_table(vma, flags, address);
-	if (pud_devmap(pud)) {
+	if (pud_huge(pud)) {
 		ptl = pud_lock(mm, pudp);
-		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+		page = follow_huge_pud(vma, address, pudp, flags, ctx);
 		spin_unlock(ptl);
 		if (page)
 			return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94ef5c02b459..9993d2b18809 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1373,8 +1373,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-		      pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write)
 {
 	pud_t _pud;
 
@@ -1386,49 +1386,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 		update_mmu_cache_pud(vma, addr, pud);
 }
 
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pud_pfn(*pud);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pud_lockptr(mm, pud));
-
-	if (flags & FOLL_WRITE && !pud_write(*pud))
-		return NULL;
-
-	if (pud_present(*pud) && pud_devmap(*pud))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 *
-	 * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		page = ERR_PTR(ret);
-
-	return page;
-}
-
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
 		  struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5821b7a14b62 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1007,6 +1007,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
 /*
  * mm/huge_memory.c
  */
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 				   unsigned long addr, pmd_t *pmd,
 				   unsigned int flags);
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
  2024-01-03  9:14 ` peterx
  (?)
  (?)
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Replace pmd_trans_huge() with pmd_thp_or_huge() to also cover pmd_huge() as
long as enabled.

FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not yet huge.

Since now follow_trans_huge_pmd() can process hugetlb pages, renaming it
into follow_huge_pmd() to match what it does.  Move it into gup.c so not
depend on CONFIG_THP.

When at it, move the ctx->page_mask setup into follow_huge_pmd(), only set
it when the page is valid.  It was not a bug to set it before even if GUP
failed (page==NULL), because follow_page_mask() callers always ignores
page_mask if so.  But doing so makes the code cleaner.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++---
 mm/huge_memory.c |  86 +------------------------------------
 mm/internal.h    |   5 +--
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 760406180222..d96429b6fc55 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 
 	return page;
 }
+
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+					struct vm_area_struct *vma,
+					unsigned int flags)
+{
+	/* If the pmd is writable, we can write to the page. */
+	if (pmd_write(pmd))
+		return true;
+
+	/* Maybe FOLL_FORCE is set to override it? */
+	if (!(flags & FOLL_FORCE))
+		return false;
+
+	/* But FOLL_FORCE has no effect on shared mappings */
+	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+		return false;
+
+	/* ... or read-only private ones */
+	if (!(vma->vm_flags & VM_MAYWRITE))
+		return false;
+
+	/* ... or already writable ones that just need to take a write fault */
+	if (vma->vm_flags & VM_WRITE)
+		return false;
+
+	/*
+	 * See can_change_pte_writable(): we broke COW and could map the page
+	 * writable if we have an exclusive anonymous page ...
+	 */
+	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+		return false;
+
+	/* ... and a write-fault isn't required for other reasons. */
+	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+		return false;
+	return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmdval = *pmd;
+	struct page *page;
+	int ret;
+
+	assert_spin_locked(pmd_lockptr(mm, pmd));
+
+	page = pmd_page(pmdval);
+	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+	if ((flags & FOLL_WRITE) &&
+	    !can_follow_write_pmd(pmdval, page, vma, flags))
+		return NULL;
+
+	/* Avoid dumping huge zero page */
+	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+		return ERR_PTR(-EFAULT);
+
+	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+		return NULL;
+
+	if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+			!PageAnonExclusive(page), page);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+	ctx->page_mask = HPAGE_PMD_NR - 1;
+	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+	return page;
+}
+
 #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
 				    unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 			return page;
 		return no_page_table(vma, flags, address);
 	}
-	if (likely(!pmd_trans_huge(pmdval)))
+	if (likely(!pmd_thp_or_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
 		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_present(*pmd))) {
+	pmdval = *pmd;
+	if (unlikely(!pmd_present(pmdval))) {
 		spin_unlock(ptl);
 		return no_page_table(vma, flags, address);
 	}
-	if (unlikely(!pmd_trans_huge(*pmd))) {
+	if (unlikely(!pmd_thp_or_huge(pmdval))) {
 		spin_unlock(ptl);
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	if (flags & FOLL_SPLIT_PMD) {
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
 		spin_unlock(ptl);
 		split_huge_pmd(vma, pmd, address);
 		/* If pmd was left empty, stuff a page table in there quickly */
 		return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
 			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	page = follow_trans_huge_pmd(vma, address, pmd, flags);
+	page = follow_huge_pmd(vma, address, pmd, flags, ctx);
 	spin_unlock(ptl);
-	ctx->page_mask = HPAGE_PMD_NR - 1;
 	return page;
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9993d2b18809..317cb445c442 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1216,8 +1216,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-		      pmd_t *pmd, bool write)
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write)
 {
 	pmd_t _pmd;
 
@@ -1572,88 +1572,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
 	return pmd_dirty(pmd);
 }
 
-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
-static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
-					struct vm_area_struct *vma,
-					unsigned int flags)
-{
-	/* If the pmd is writable, we can write to the page. */
-	if (pmd_write(pmd))
-		return true;
-
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
-		return false;
-
-	/* ... and a write-fault isn't required for other reasons. */
-	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
-		return false;
-	return !userfaultfd_huge_pmd_wp(vma, pmd);
-}
-
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr,
-				   pmd_t *pmd,
-				   unsigned int flags)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	page = pmd_page(*pmd);
-	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
-	if ((flags & FOLL_WRITE) &&
-	    !can_follow_write_pmd(*pmd, page, vma, flags))
-		return NULL;
-
-	/* Avoid dumping huge zero page */
-	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
-		return ERR_PTR(-EFAULT);
-
-	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
-		return NULL;
-
-	if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
-		return ERR_PTR(-EMLINK);
-
-	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-			!PageAnonExclusive(page), page);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		return ERR_PTR(ret);
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
-	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-
-	return page;
-}
-
 /* NUMA hinting page fault entry point for trans huge pmds */
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 5821b7a14b62..99994b41a220 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1009,9 +1009,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
  */
 void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	       pud_t *pud, bool write);
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr, pmd_t *pmd,
-				   unsigned int flags);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write);
 
 /*
  * mm/mmap.c
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 12/13] mm/gup: Handle hugepd for follow_page()
  2024-01-03  9:14 ` peterx
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Hugepd is so far only used on PowerPC, on 4K page size kernels where the
hash MMU is used.  follow_page_mask() used to leverage hugetlb APIs to
access hugepd entries.  Teach follow_page_mask() itself how to handle
hugepd entries.

With the previous refactoring of fast-gup's gup_huge_pd(), most of that
code can be reused directly.  Some of it is not strictly needed for
follow_page(): for example, gup_hugepte() tries to detect pgtable entry
changes, which can never happen with slow gup (the pgtable lock is held),
but the extra check is harmless.

Since follow_page() only ever fetches one page, setting the end to
"address + PAGE_SIZE" should suffice.  We still do the pgtable walk only
once per hugetlb page, by setting ctx->page_mask properly.
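
(Not part of the patch, just an illustration of why that is enough: the
slow-gup caller consumes ctx->page_mask to account for every base page of
the huge mapping from a single walk.  A minimal user-space sketch of that
arithmetic, assuming a 2MB hugetlb page on 4K base pages; the variable
names only approximate the kernel ones:)

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	int main(void)
	{
		/* page_mask for a 2MB hugetlb page: 512 base pages. */
		unsigned long page_mask = 512 - 1;
		/* Pretend GUP starts 5 base pages into the huge page. */
		unsigned long start = 0x200000UL + 5 * PAGE_SIZE;

		/* How many base pages remain in this entry from "start",
		 * i.e. how far the caller can advance after one walk. */
		unsigned long page_increm =
			1 + (~(start >> PAGE_SHIFT) & page_mask);

		printf("subpages accounted from one walk: %lu\n",
		       page_increm);	/* prints 507 */
		return 0;
	}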

One thing worth mentioning is that on Power8 hash MMUs, the pgtable
_bad() helper at some levels will report is_hugepd() entries as bad.  I
think this at least applies to the PUD level on Power8 with 4K page size,
meaning that feeding a hugepd entry to pud_bad() reports a false
positive.  Let's leave that alone for now, because it is arch-specific
and I am a bit reluctant to touch it.  It is not a problem for this patch
as long as hugepd entries are detected before any bad pgtable entry
checks.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 69 insertions(+), 9 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d96429b6fc55..245214b64108 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,11 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx);
+
 static inline void sanity_check_pinned_pages(struct page **pages,
 					     unsigned long npages)
 {
@@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+				     address, PMD_SHIFT, flags, ctx);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +929,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	pud = READ_ONCE(*pudp);
 	if (pud_none(pud) || !pud_present(pud))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+		return follow_hugepd(vma, __hugepd(pud_val(pud)),
+				     address, PUD_SHIFT, flags, ctx);
 	if (pud_huge(pud)) {
 		ptl = pud_lock(mm, pudp);
 		page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -940,13 +951,17 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
 {
-	p4d_t *p4d;
+	p4d_t *p4d, p4dval;
 
 	p4d = p4d_offset(pgdp, address);
-	if (p4d_none(*p4d))
-		return no_page_table(vma, flags, address);
-	BUILD_BUG_ON(p4d_huge(*p4d));
-	if (unlikely(p4d_bad(*p4d)))
+	p4dval = *p4d;
+	BUILD_BUG_ON(p4d_huge(p4dval));
+
+	if (unlikely(is_hugepd(__hugepd(p4d_val(p4dval)))))
+		return follow_hugepd(vma, __hugepd(p4d_val(p4dval)),
+				     address, P4D_SHIFT, flags, ctx);
+
+	if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
 		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
@@ -980,7 +995,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 			      unsigned long address, unsigned int flags,
 			      struct follow_page_context *ctx)
 {
-	pgd_t *pgd;
+	pgd_t *pgd, pgdval;
 	struct mm_struct *mm = vma->vm_mm;
 
 	ctx->page_mask = 0;
@@ -995,11 +1010,17 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 						&ctx->page_mask);
 
 	pgd = pgd_offset(mm, address);
+	pgdval = *pgd;
 
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval)))))
+		page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)),
+				     address, PGDIR_SHIFT, flags, ctx);
+	else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		page = no_page_table(vma, flags, address);
+	else
+		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
-	return follow_p4d_mask(vma, address, pgd, flags, ctx);
+	return page;
 }
 
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -3026,6 +3047,37 @@ static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 
 	return 1;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
 #else
 static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
@@ -3033,6 +3085,14 @@ static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 {
 	return 0;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif /* CONFIG_ARCH_HAS_HUGEPD */
 
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* [PATCH v2 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code
  2024-01-03  9:14 ` peterx
@ 2024-01-03  9:14   ` peterx
  -1 siblings, 0 replies; 143+ messages in thread
From: peterx @ 2024-01-03  9:14 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	peterx, linux-riscv, Andrew Morton, Aneesh Kumar K . V,
	Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport,
	John Hubbard, Vlastimil Babka, Michael Ellerman,
	Christophe Leroy, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

From: Peter Xu <peterx@redhat.com>

Now that follow_page() is ready to handle hugetlb pages in whatever form,
on all architectures, switch to the generic code path.

Time to retire hugetlb_follow_page_mask(), following the previous
retirement of follow_hugetlb_page() in 4849807114b8.

There may be a slight difference in how the loops run when processing slow
GUP over a large hugetlb range on archs that support cont_pte/cont_pmd:
with the patch applied, each loop of __get_user_pages() will resolve one
pgtable entry, rather than relying on the size of the hugetlb hstate, which
may cover multiple entries in one loop.

A quick performance test on an aarch64 VM on an M1 chip shows a 15%
degradation over a tight loop of slow gup after the path switch.  That
shouldn't be a problem, because slow gup should not be a hot path for GUP
in general: when the page is present, fast gup will already succeed, while
when the page is indeed missing and requires a follow-up page fault, the
slow gup degradation will likely be buried in the fault path anyway.  It
also explains why slow gup for THP used to be very slow before 57edfcfd3419
("mm/gup: accelerate thp gup even for "pages != NULL"") landed; the latter
was not part of a performance analysis but a side benefit.  If the
performance ever becomes a concern, we can consider handling CONT_PTE in
follow_page().

Until that is justified as necessary, keep everything clean and simple.
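
To make the page_mask mechanism concrete, here is a minimal userspace
sketch (not kernel code; PAGE_SHIFT, the 2MB entry size and the addresses
are illustrative assumptions) of how a caller can use the page mask
reported for a huge entry to advance over all base pages covered by that
entry in one iteration, which is roughly what __get_user_pages() does with
ctx->page_mask:

  #include <stdio.h>

  #define PAGE_SHIFT  12
  #define PAGE_SIZE   (1UL << PAGE_SHIFT)

  int main(void)
  {
          /* Pretend the walk landed in the middle of a 2MB huge mapping. */
          unsigned long start = 0x200000UL + 5 * PAGE_SIZE;
          unsigned long nr_pages = 1024;          /* pages still requested */
          unsigned int page_mask = (1U << 9) - 1; /* 2MB entry = 512 base pages */

          /*
           * Base pages covered by the current huge entry starting at
           * 'start': here 512 - 5 = 507, so one iteration accounts for
           * all of them instead of stepping PAGE_SIZE at a time.
           */
          unsigned long page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);

          if (page_increm > nr_pages)
                  page_increm = nr_pages;

          printf("advance by %lu base pages in one iteration\n", page_increm);
          return 0;
  }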

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h |  7 ----
 mm/gup.c                | 15 +++------
 mm/hugetlb.c            | 71 -----------------------------------------
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e8eddd51fc17..cdbb53407722 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -332,13 +332,6 @@ static inline void hugetlb_zap_end(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(
-    struct vm_area_struct *vma, unsigned long address, unsigned int flags,
-    unsigned int *page_mask)
-{
-	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 					  struct mm_struct *src,
 					  struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 245214b64108..4f8a3dc287c9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -997,18 +997,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 {
 	pgd_t *pgd, pgdval;
 	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
 
-	ctx->page_mask = 0;
-
-	/*
-	 * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
-	 * special hugetlb page table walking code.  This eliminates the
-	 * need to check for hugetlb entries in the general walking code.
-	 */
-	if (is_vm_hugetlb_page(vma))
-		return hugetlb_follow_page_mask(vma, address, flags,
-						&ctx->page_mask);
+	vma_pgtable_walk_begin(vma);
 
+	ctx->page_mask = 0;
 	pgd = pgd_offset(mm, address);
 	pgdval = *pgd;
 
@@ -1020,6 +1013,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	else
 		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
+	vma_pgtable_walk_end(vma);
+
 	return page;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bfb52bb8b943..e13b4e038c2c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6782,77 +6782,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				      unsigned long address, unsigned int flags,
-				      unsigned int *page_mask)
-{
-	struct hstate *h = hstate_vma(vma);
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long haddr = address & huge_page_mask(h);
-	struct page *page = NULL;
-	spinlock_t *ptl;
-	pte_t *pte, entry;
-	int ret;
-
-	hugetlb_vma_lock_read(vma);
-	pte = hugetlb_walk(vma, haddr, huge_page_size(h));
-	if (!pte)
-		goto out_unlock;
-
-	ptl = huge_pte_lock(h, mm, pte);
-	entry = huge_ptep_get(pte);
-	if (pte_present(entry)) {
-		page = pte_page(entry);
-
-		if (!huge_pte_write(entry)) {
-			if (flags & FOLL_WRITE) {
-				page = NULL;
-				goto out;
-			}
-
-			if (gup_must_unshare(vma, flags, page)) {
-				/* Tell the caller to do unsharing */
-				page = ERR_PTR(-EMLINK);
-				goto out;
-			}
-		}
-
-		page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
-		/*
-		 * Note that page may be a sub-page, and with vmemmap
-		 * optimizations the page struct may be read only.
-		 * try_grab_page() will increase the ref count on the
-		 * head page, so this will be OK.
-		 *
-		 * try_grab_page() should always be able to get the page here,
-		 * because we hold the ptl lock and have verified pte_present().
-		 */
-		ret = try_grab_page(page, flags);
-
-		if (WARN_ON_ONCE(ret)) {
-			page = ERR_PTR(ret);
-			goto out;
-		}
-
-		*page_mask = (1U << huge_page_order(h)) - 1;
-	}
-out:
-	spin_unlock(ptl);
-out_unlock:
-	hugetlb_vma_unlock_read(vma);
-
-	/*
-	 * Fixup retval for dump requests: if pagecache doesn't exist,
-	 * don't try to allocate a new page but just skip it.
-	 */
-	if (!page && (flags & FOLL_DUMP) &&
-	    !hugetlbfs_pagecache_present(h, vma, address))
-		page = ERR_PTR(-EFAULT);
-
-	return page;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 00/13] mm/gup: Unify hugetlb, part 2
  2024-01-03  9:14 ` peterx
@ 2024-01-03 11:14   ` Christophe Leroy
  -1 siblings, 0 replies; 143+ messages in thread
From: Christophe Leroy @ 2024-01-03 11:14 UTC (permalink / raw)
  To: peterx, linux-mm, linux-kernel
  Cc: James Houghton, David Hildenbrand, Kirill A . Shutemov, Yang Shi,
	linux-riscv, Andrew Morton, Aneesh Kumar K . V, Rik van Riel,
	Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard,
	Vlastimil Babka, Michael Ellerman, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox



On 03/01/2024 at 10:14, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> 
> Test Done
> =========
> 
> This v1 went through the normal GUP smoke tests over different memory
> types on archs (using VM instances): x86_64, aarch64, ppc64le.  For
> aarch64, tested over 64KB cont_pte huge pages.  For ppc64le, tested over
> 16MB hugepd entries (Power8 hash MMU on 4K base page size).
> 

Can you tell me how you test?

I'm willing to test this series on powerpc 8xx (PPC32).

Christophe

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 00/13] mm/gup: Unify hugetlb, part 2
  2024-01-03 11:14   ` Christophe Leroy
@ 2024-01-08  7:27     ` Peter Xu
  -1 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-01-08  7:27 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Jason Gunthorpe,
	Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox

Hi, Christophe,

On Wed, Jan 03, 2024 at 11:14:54AM +0000, Christophe Leroy wrote:
> > Test Done
> > =========
> > 
> > This v1 went through the normal GUP smoke tests over different memory
> > types on archs (using VM instances): x86_64, aarch64, ppc64le.  For
> > aarch64, tested over 64KB cont_pte huge pages.  For ppc64le, tested over
> > 16MB hugepd entries (Power8 hash MMU on 4K base page size).
> > 
> 
> Can you tell how you test ?
> 
> I'm willing to test this series on powerpc 8xx (PPC32).

My apologies, for some reason I totally overlooked this email..

I only tested using run_vmtests.sh, with:

  $ bash ./run_vmtests.sh -t gup_test -a

It should cover most of the GUP tests using the gup_test program.  I think
the ones that matter here are "-H" combined with either "-U" or "-b".

For ppc8xx, even though the kernel mapping uses hugepd, I don't expect
anything to change before/after this series, because the code I touched
(slow gup only) only affects user pages, so it shouldn't change anything
for kernel mappings.  That said, please feel free to smoke-test whatever
type of kernel hugepd mapping you like; I trust you're the expert on how to
trigger those paths.

While I have your attention: when working on this series I talked to David
Gibson and learned that hugepd is actually a pure software construct.
IIUC that means no PPC hardware really understands the hugepd format at
all; it is only a "this is a huge page" hint for Linux.

Considering that it _seems_ to play a role similar to cont_pXX here: do you
think hugepd could be implemented similarly to cont_pXX, or somehow share
the code?

For example, if hugepd is recognized only by the Linux kernel itself, maybe
some special pgtable hint could be attached to the cont_* entries, marking
whether each one is a "real cont_*" entry or a "hugepd" entry?  IIUC this
could be quite flexible, because hugepd only works with the hash MMU, so no
hardware will ever walk that radix table.  But I may be overlooking
important things here.

It would definitely be great if hugepd could be merged into some existing
form like a generic pgtable (IMHO cont_* is such a case: to software it is
the same as having no cont_* entries, while hardware can accelerate with
TLB hits over larger ranges).  But I may be asking a very silly question
here too, as I may be overlooking very important things.
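
To make that hint-bit idea slightly more concrete, here is a tiny userspace
sketch (purely illustrative; every name below is hypothetical and nothing
here is existing kernel API) of tagging an otherwise ordinary cont_*-style
entry with a software-only "hugepd flavour" bit:

  #include <stdio.h>
  #include <stdint.h>

  #define ENTRY_PRESENT    (1ULL << 0)
  #define ENTRY_CONT       (1ULL << 1)   /* hardware "contiguous" hint */
  #define ENTRY_SW_HUGEPD  (1ULL << 62)  /* software-only: hugepd flavour */

  typedef struct { uint64_t val; } entry_t;

  static int entry_is_cont(entry_t e)   { return !!(e.val & ENTRY_CONT); }
  static int entry_is_hugepd(entry_t e) { return !!(e.val & ENTRY_SW_HUGEPD); }

  int main(void)
  {
          entry_t real_cont = { ENTRY_PRESENT | ENTRY_CONT };
          entry_t sw_hugepd = { ENTRY_PRESENT | ENTRY_CONT | ENTRY_SW_HUGEPD };

          /*
           * A generic walker could treat both through the same cont_* path
           * and only check the flavour where it matters (e.g. hash MMU).
           */
          printf("real cont: cont=%d hugepd=%d\n",
                 entry_is_cont(real_cont), entry_is_hugepd(real_cont));
          printf("sw hugepd: cont=%d hugepd=%d\n",
                 entry_is_cont(sw_hugepd), entry_is_hugepd(sw_hugepd));
          return 0;
  }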

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
  2024-01-03  9:14   ` peterx
@ 2024-01-15 17:37     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 17:37 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:11PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Introduce a config option that will be selected as long as huge leaves are
> involved in pgtable (thp or hugetlbfs).  It would be useful to mark any
> code with this new config that can process either hugetlb or thp pages in
> any level that is higher than pte level.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/Kconfig | 3 +++
>  1 file changed, 3 insertions(+)

So you mean anything that supports page table entries > PAGE_SIZE?

Makes sense to me, though maybe add a comment in the Kconfig?

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
  2024-01-03  9:14   ` peterx
@ 2024-01-15 17:55     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 17:55 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> can be a helpful helper if we want to merge more THP and hugetlb code
> paths.  Make it a generic default implementation, only exist when
> CONFIG_MMU.  Arch can overwrite it by defining its own version.
> 
> For example, ARM's pgtable-2level.h defines it to always return false.
> 
> Keep the macro declared with all config, it should be optimized to a false
> anyway if !THP && !HUGETLB.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/pgtable.h | 4 ++++
>  mm/gup.c                | 3 +--
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 466cf477551a..2b42e95a4e3a 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
>  #endif /* pmd_write */
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> +#ifndef pmd_thp_or_huge
> +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> +#endif

Why not just use pmd_leaf()?

This GUP case seems to me to be exactly what pmd_leaf() should really
do and be used for.

eg x86 does:

#define pmd_leaf	pmd_large
static inline int pmd_large(pmd_t pte)
	return pmd_flags(pte) & _PAGE_PSE;

static inline int pmd_trans_huge(pmd_t pmd)
	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;

int pmd_huge(pmd_t pmd)
        return !pmd_none(pmd) &&
                (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;

I spot-checked a couple of arches and it looks like it holds up.

Further, it looks to me like this site in GUP is the only core code
caller.

So, I'd suggest a small series to go arch by arch and convert each arch
to use pmd_huge() == pmd_leaf().  Then retire pmd_huge() as a public
API.

> diff --git a/mm/gup.c b/mm/gup.c
> index df83182ec72d..eebae70d2465 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
>  		if (!pmd_present(pmd))
>  			return 0;
>  
> -		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> -			     pmd_devmap(pmd))) {
> +		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
>  			/* See gup_pte_range() */
>  			if (pmd_protnone(pmd))
>  				return 0;

And the devmap thing here doesn't make any sense either. The arch
should ensure that pmd_devmap() implies pmd_leaf(). Since devmap is a
purely SW construct it almost certainly does already anyhow.
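
If the conversion goes that way, the check quoted above could collapse into
something like the following rough, untested sketch (assuming every arch
provides a pmd_leaf() that covers both the THP and the hugetlb huge-PMD
cases):

		if (unlikely(pmd_leaf(pmd) || pmd_devmap(pmd))) {
			/* See gup_pte_range() */
			if (pmd_protnone(pmd))
				return 0;
			/* ... the rest of the huge-PMD handling stays as-is ... */
		}

and pmd_devmap() could then be dropped from the condition as well, once
every arch guarantees that pmd_devmap() implies pmd_leaf().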

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-01-15 17:55     ` Jason Gunthorpe
  0 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 17:55 UTC (permalink / raw)
  To: peterx
  Cc: James Houghton, David Hildenbrand, Yang Shi, Andrew Jones,
	linux-mm, Matthew Wilcox, linux-riscv, Andrea Arcangeli,
	Christoph Hellwig, Aneesh Kumar K . V, Vlastimil Babka,
	Axel Rasmussen, Rik van Riel, John Hubbard, Kirill A . Shutemov,
	linux-arm-kernel, Lorenzo Stoakes, Muchun Song, linux-kernel,
	Andrew Morton, linuxppc-dev, Mike Rapoport, Mike Kravetz

On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> can be a helpful helper if we want to merge more THP and hugetlb code
> paths.  Make it a generic default implementation, only exist when
> CONFIG_MMU.  Arch can overwrite it by defining its own version.
> 
> For example, ARM's pgtable-2level.h defines it to always return false.
> 
> Keep the macro declared with all config, it should be optimized to a false
> anyway if !THP && !HUGETLB.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/pgtable.h | 4 ++++
>  mm/gup.c                | 3 +--
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 466cf477551a..2b42e95a4e3a 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
>  #endif /* pmd_write */
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> +#ifndef pmd_thp_or_huge
> +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> +#endif

Why not just use pmd_leaf() ?

This GUP case seems to me exactly like what pmd_leaf() should really
do and be used for..

eg x86 does:

#define pmd_leaf	pmd_large
static inline int pmd_large(pmd_t pte)
	return pmd_flags(pte) & _PAGE_PSE;

static inline int pmd_trans_huge(pmd_t pmd)
	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;

int pmd_huge(pmd_t pmd)
        return !pmd_none(pmd) &&
                (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;

I spot checked a couple arches and it looks like it holds up.

Further, it looks to me like this site in GUP is the only core code
caller..

So, I'd suggest a small series to go arch by arch and convert the arch
to use pmd_huge() == pmd_leaf(). Then retire pmd_huge() as a public
API.

> diff --git a/mm/gup.c b/mm/gup.c
> index df83182ec72d..eebae70d2465 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
>  		if (!pmd_present(pmd))
>  			return 0;
>  
> -		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> -			     pmd_devmap(pmd))) {
> +		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
>  			/* See gup_pte_range() */
>  			if (pmd_protnone(pmd))
>  				return 0;

And the devmap thing here doesn't make any sense either. The arch
should ensure that pmd_devmap() implies pmd_leaf(). Since devmap is a
purely SW construct it almost certainly does already anyhow.

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 04/13] mm: Make HPAGE_PXD_* macros even if !THP
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 17:59     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 17:59 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:14PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> These macros can be helpful when we plan to merge hugetlb code into generic
> code.  Move them out and define them even if !THP.
> 
> We actually already defined HPAGE_PMD_NR for other reasons even if !THP.
> Reorganize these macros.
> 
> Reviewed-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  include/linux/huge_mm.h | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
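
For context, the macros being moved are the usual huge-page size helpers,
roughly along these lines (illustrative; see the patch itself for the exact
definitions and grouping):

	#define HPAGE_PMD_SHIFT	PMD_SHIFT
	#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
	#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

	#define HPAGE_PUD_SHIFT	PUD_SHIFT
	#define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
	#define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))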

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 18:37     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:37 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> The hugepd format for GUP is only used in PowerPC with hugetlbfs.  There are
> some kernel usages of hugepd (see hugepd_populate_kernel() for
> PPC_8XX); however, those pages are not candidates for GUP.
> 
> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
> file-backed mappings") added a check to fail gup-fast if there's potential
> risk of violating GUP over writeback file systems.  That should never apply
> to hugepd.  Considering that hugepd is an old format (and even
> software-only), there's no plan to extend hugepd into other file-backed
> memory types that are prone to the same issue.

I didn't dig into the ppc stuff too deeply, but this looks to me like
it is the same thing as ARM's contig bits?

ie a chunk of PMD/etc entries are all managed together as though they
are a virtual larger entry and we use the hugepte_addr_end() stuff to
iterate over each sub entry.

But WHY is GUP doing this or caring about this? GUP should have no
problem handling the super-size entry (eg 8M on nohash) as a single
thing. It seems we only lack an API to get this out of the arch code?

It seems to me we should see ARM and PPC agree on what the API is for
this and then get rid of hugepd by making both use the same page table
walker API. Is that too hopeful?
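
As a purely hypothetical sketch of what such a shared API could look like
(names invented here for illustration; nothing like this exists in the tree),
both arm64 contig mappings and ppc hugepd would report one oversized leaf to
a generic walker instead of making GUP iterate the sub-entries:

	/* hypothetical: describes one large leaf covering [addr, addr + size) */
	struct pgtable_large_leaf {
		unsigned long	pfn;
		unsigned long	size;	/* e.g. CONT_PTE_SIZE, or 8M on 8xx */
		bool		writable;
	};

	/* hypothetical arch hook: true if @ptep is part of a larger leaf */
	bool arch_pgtable_large_leaf(pte_t *ptep, unsigned long addr,
				     struct pgtable_large_leaf *leaf);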

> Drop that check, not only because it'll never be true for hugepd per any
> known plan, but also because it paves the way for reusing the function
> outside fast-gup.

I didn't see any other caller of this function in this series? When
does this re-use happen??

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 07/13] mm/gup: Refactor record_subpages() to find 1st small page
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 18:38     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:38 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:17PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> All the fast-gup functions take a tail page to operate on, and always need
> to do page mask calculations before feeding that into record_subpages().
> 
> Merge that logic into record_subpages(), so that it will do the nth_page()
> calculation.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/gup.c | 25 ++++++++++++++-----------
>  1 file changed, 14 insertions(+), 11 deletions(-)
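
For reference, the refactor described above ends up with record_subpages()
deriving the first small page itself, roughly like this (a sketch; the exact
signature and hunk are in the patch):

	static int record_subpages(struct page *page, unsigned long sz,
				   unsigned long addr, unsigned long end,
				   struct page **pages)
	{
		struct page *start_page;
		int nr;

		/* locate the first small page covered by [addr, end) */
		start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
		for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
			pages[nr] = nth_page(start_page, nr);

		return nr;
	}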

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table()
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 18:39     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:39 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:18PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> no_page_table() is not yet used for hugetlb code paths.  Prepare it for that.
> 
> The major difference here is hugetlb will return -EFAULT as long as page
> cache does not exist, even if VM_SHARED.  See hugetlb_follow_page_mask().
> 
> Pass "address" into no_page_table() too, as hugetlb will need it.
> 
> Reviewed-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/gup.c | 44 ++++++++++++++++++++++++++------------------
>  1 file changed, 26 insertions(+), 18 deletions(-)

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 09/13] mm/gup: Cache *pudp in follow_pud_mask()
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 18:41     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:41 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:19PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Introduce "pud_t pud" in the function, so the code won't dereference *pudp
> multiple time.  Not only because that looks less straightforward, but also
> because if the dereference really happened, it's not clear whether there
> can be race to see different *pudp values if it's being modified at the
> same time.
> 
> Acked-by: James Houghton <jthoughton@google.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/gup.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)

I think we have several more cases like this, and I certainly agree
code should not access a READ_ONCE variable more than once :(
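
For reference, the pattern being endorsed is simply to snapshot the entry
once and only ever test the local copy afterwards, roughly:

	pudp = pud_offset(p4dp, address);
	pud = READ_ONCE(*pudp);		/* read the entry exactly once */
	if (pud_none(pud))
		return no_page_table(vma, flags, address);
	/* from here on only 'pud' is tested; *pudp is never re-read */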

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask()
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 18:49     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:49 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:20PM +0800, peterx@redhat.com wrote:
> diff --git a/mm/gup.c b/mm/gup.c
> index 63845b3ec44f..760406180222 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
>  	return NULL;
>  }
>  
> +#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
> +static struct page *follow_huge_pud(struct vm_area_struct *vma,
> +				    unsigned long addr, pud_t *pudp,
> +				    int flags, struct follow_page_context *ctx)
> +{
> +	struct mm_struct *mm = vma->vm_mm;
> +	struct page *page;
> +	pud_t pud = *pudp;
> +	unsigned long pfn = pud_pfn(pud);
> +	int ret;
> +
> +	assert_spin_locked(pud_lockptr(mm, pudp));
> +
> +	if ((flags & FOLL_WRITE) && !pud_write(pud))
> +		return NULL;
> +
> +	if (!pud_present(pud))
> +		return NULL;
> +
> +	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
> +
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +	if (pud_devmap(pud)) {

Can this use IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) ?
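
i.e. something along these lines (a sketch of the suggestion only; it assumes
the helpers called inside the block are declared in all configs so the dead
branch still compiles before being optimized away):

	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
	    pud_devmap(pud)) {
		if (!(flags & (FOLL_GET | FOLL_PIN)))
			return ERR_PTR(-EEXIST);
		...
	}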

> +		/*
> +		 * device mapped pages can only be returned if the caller
> +		 * will manage the page reference count.
> +		 *
> +		 * At least one of FOLL_GET | FOLL_PIN must be set, so
> +		 * assert that here:
> +		 */
> +		if (!(flags & (FOLL_GET | FOLL_PIN)))
> +			return ERR_PTR(-EEXIST);
> +
> +		if (flags & FOLL_TOUCH)
> +			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
> +
> +		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
> +		if (!ctx->pgmap)
> +			return ERR_PTR(-EFAULT);
> +	}
> +#endif	/* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
> +	page = pfn_to_page(pfn);
> +
> +	if (!pud_devmap(pud) && !pud_write(pud) &&
> +	    gup_must_unshare(vma, flags, page))
> +		return ERR_PTR(-EMLINK);
> +
> +	ret = try_grab_page(page, flags);
> +	if (ret)
> +		page = ERR_PTR(ret);
> +	else
> +		ctx->page_mask = HPAGE_PUD_NR - 1;
> +
> +	return page;
> +}
> +#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> +static struct page *follow_huge_pud(struct vm_area_struct *vma,
> +				    unsigned long addr, pud_t *pudp,
> +				    int flags, struct follow_page_context *ctx)
> +{
> +	return NULL;
> +}
> +#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> +
>  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
>  		pte_t *pte, unsigned int flags)
>  {
> @@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
>  
>  	pudp = pud_offset(p4dp, address);
>  	pud = READ_ONCE(*pudp);
> -	if (pud_none(pud))
> +	if (pud_none(pud) || !pud_present(pud))
>  		return no_page_table(vma, flags, address);

Isn't 'pud_none() || !pud_present()' redundant? A none pud is
non-present, by definition?

> -	if (pud_devmap(pud)) {
> +	if (pud_huge(pud)) {
>  		ptl = pud_lock(mm, pudp);
> -		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
> +		page = follow_huge_pud(vma, address, pudp, flags, ctx);
>  		spin_unlock(ptl);
>  		if (page)
>  			return page;

Otherwise it looks OK to me

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
  2024-01-03  9:14   ` peterx
  (?)
  (?)
@ 2024-01-15 18:51     ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:51 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:21PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Replace pmd_trans_huge() with pmd_thp_or_huge() to also cover pmd_huge() as
> long as it is enabled.
> 
> FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not yet huge.
> 
> Since follow_trans_huge_pmd() can now process hugetlb pages, rename it to
> follow_huge_pmd() to match what it does.  Move it into gup.c so it does
> not depend on CONFIG_THP.
> 
> While at it, move the ctx->page_mask setup into follow_huge_pmd(), and only
> set it when the page is valid.  It was not a bug to set it before even if GUP
> failed (page==NULL), because follow_page_mask() callers always ignore
> page_mask if so.  But doing so makes the code cleaner.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++---
>  mm/huge_memory.c |  86 +------------------------------------
>  mm/internal.h    |   5 +--
>  3 files changed, 105 insertions(+), 93 deletions(-)
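
The page_mask ordering described above ends up mirroring the PUD variant
earlier in the series, roughly (sketch, not the exact hunk):

	ret = try_grab_page(page, flags);
	if (ret)
		page = ERR_PTR(ret);
	else
		ctx->page_mask = HPAGE_PMD_NR - 1;	/* only set on success */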

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
@ 2024-01-15 18:51     ` Jason Gunthorpe
  0 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-15 18:51 UTC (permalink / raw)
  To: peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Jan 03, 2024 at 05:14:21PM +0800, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> Replace pmd_trans_huge() with pmd_thp_or_huge() to also cover pmd_huge() as
> long as enabled.
> 
> FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not yet huge.
> 
> Since now follow_trans_huge_pmd() can process hugetlb pages, renaming it
> into follow_huge_pmd() to match what it does.  Move it into gup.c so not
> depend on CONFIG_THP.
> 
> When at it, move the ctx->page_mask setup into follow_huge_pmd(), only set
> it when the page is valid.  It was not a bug to set it before even if GUP
> failed (page==NULL), because follow_page_mask() callers always ignores
> page_mask if so.  But doing so makes the code cleaner.
> 
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++---
>  mm/huge_memory.c |  86 +------------------------------------
>  mm/internal.h    |   5 +--
>  3 files changed, 105 insertions(+), 93 deletions(-)

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-15 18:37     ` Jason Gunthorpe
  (?)
  (?)
@ 2024-01-16  6:30       ` Christophe Leroy
  -1 siblings, 0 replies; 143+ messages in thread
From: Christophe Leroy @ 2024-01-16  6:30 UTC (permalink / raw)
  To: Jason Gunthorpe, peterx
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Andrew Jones, linuxppc-dev, Mike Kravetz,
	Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox



Le 15/01/2024 à 19:37, Jason Gunthorpe a écrit :
> On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
>> From: Peter Xu <peterx@redhat.com>
>>
>> Hugepd format for GUP is only used in PowerPC with hugetlbfs.  There are
>> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
>> PPC_8XX), however those pages are not candidates for GUP.
>>
>> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
>> file-backed mappings") added a check to fail gup-fast if there's potential
>> risk of violating GUP over writeback file systems.  That should never apply
>> to hugepd.  Considering that hugepd is an old format (and even
>> software-only), there's no plan to extend hugepd into other file typed
>> memories that is prone to the same issue.
> 
> I didn't dig into the ppc stuff too deeply, but this looks to me like
> it is the same thing as ARM's contig bits?
> 
> ie a chunk of PMD/etc entries are all managed together as though they
> are a virtual larger entry and we use the hugepte_addr_end() stuff to
> iterate over each sub entry.

As far as I understand ARM's contig stuff, hugepd on powerpc is 
something different.

hugepd is a page directory dedicated to huge pages, where you have huge
pages listed instead of regular pages. For instance, on powerpc 32 with
each PGD entry covering 4Mbytes, a regular page table has 1024 PTEs. A
hugepd for 512k is a page table with 8 entries.

And for 8Mbytes entries, the hugepd is a page table with only one entry.
And 2 consecutive PGD entries will point to the same hugepd to cover the
entire 8Mbytes.
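
A tiny illustrative helper (not actual powerpc code) showing where those
entry counts come from under the 4M-per-PGD-entry layout described above:

	/*
	 * Illustrative only: number of entries in a hugepd backing one
	 * 4M PGD slot, for a given huge page shift.
	 */
	static inline unsigned int hugepd_entries(unsigned int page_shift)
	{
		return 1U << (22 - page_shift);	/* 22 == log2(4M) */
	}
	/*
	 * hugepd_entries(19) == 8 for 512k pages.  An 8M page (shift 23)
	 * is bigger than one PGD slot, so the hugepd keeps a single entry
	 * and two consecutive PGD entries point at it.
	 */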

> 
> But WHY is GUP doing this or caring about this? GUP should have no
> problem handling the super-size entry (eg 8M on nohash) as a single
> thing. It seems we only lack an API to get this out of the arch code?
> 
> It seems to me we should see ARM and PPC agree on what the API is for
> this and then get rid of hugepd by making both use the same page table
> walker API. Is that too hopeful?

Can't see the similarity between ARM contig PTE and PPC huge page 
directories.

> 
>> Drop that check, not only because it'll never be true for hugepd per any
>> known plan, but also it paves way for reusing the function outside
>> fast-gup.
> 
> I didn't see any other caller of this function in this series? When
> does this re-use happen??
> 
> Jason


Christophe

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-16  6:30       ` Christophe Leroy
  (?)
  (?)
@ 2024-01-16 12:31         ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-16 12:31 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: peterx, linux-mm, linux-kernel, James Houghton,
	David Hildenbrand, Kirill A . Shutemov, Yang Shi, linux-riscv,
	Andrew Morton, Aneesh Kumar K . V, Rik van Riel,
	Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard,
	Vlastimil Babka, Michael Ellerman, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Tue, Jan 16, 2024 at 06:30:39AM +0000, Christophe Leroy wrote:
> 
> 
> Le 15/01/2024 à 19:37, Jason Gunthorpe a écrit :
> > On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
> >> From: Peter Xu <peterx@redhat.com>
> >>
> >> Hugepd format for GUP is only used in PowerPC with hugetlbfs.  There are
> >> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
> >> PPC_8XX), however those pages are not candidates for GUP.
> >>
> >> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
> >> file-backed mappings") added a check to fail gup-fast if there's potential
> >> risk of violating GUP over writeback file systems.  That should never apply
> >> to hugepd.  Considering that hugepd is an old format (and even
> >> software-only), there's no plan to extend hugepd into other file typed
> >> memories that is prone to the same issue.
> > 
> > I didn't dig into the ppc stuff too deeply, but this looks to me like
> > it is the same thing as ARM's contig bits?
> > 
> > ie a chunk of PMD/etc entries are all managed together as though they
> > are a virtual larger entry and we use the hugepte_addr_end() stuff to
> > iterate over each sub entry.
> 
> As far as I understand ARM's contig stuff, hugepd on powerpc is 
> something different.
> 
> hugepd is a page directory dedicated to huge pages, where you have huge 
> pages listed instead of regular pages. For instance, on powerpc 32 with 
> each PGD entries covering 4Mbytes, a regular page table has 1024 PTEs. A 
> hugepd for 512k is a page table with 8 entries.
> 
> And for 8Mbytes entries, the hugepd is a page table with only one entry. 
> And 2 consecutive PGS entries will point to the same hugepd to cover the 
> entire 8Mbytes.

That still sounds a lot like the ARM thing - except ARM replicates the
entry; you also said PPC replicates the entry like ARM to get to the
8M?

I guess the difference is in how the table memory is laid out? ARM
marks the size in the same entry that has the physical address, so the
entries are self-describing and then replicated. It kind of sounds
like PPC is marking the size in the prior level and then reconfiguring
the layout of the lower level? Otherwise it surely must do the same
replication to make a radix index work...

If yes, I guess that is the main problem: the mm APIs don't have a way
today to convey data from the pgd level to understand how to parse the
pmd level?

> > It seems to me we should see ARM and PPC agree on what the API is for
> > this and then get rid of hugepd by making both use the same page table
> > walker API. Is that too hopeful?
> 
> Can't see the similarity between ARM contig PTE and PPC huge page 
> directories.

Well, they are both variable-sized entries.

So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
that would return enough information for both.
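
A rough sketch (illustrative only; the helper names follow the suggestion
above and are not presented as an established API) of how a generic
walker could consume them:

	/*
	 * Illustrative sketch: report how much address space the mapping
	 * at this position covers, so a generic walker can step over any
	 * replicated contig/hugepd style sub-entries.
	 */
	static unsigned long leaf_size_at(pmd_t *pmdp, pte_t *ptep)
	{
		pmd_t pmd = READ_ONCE(*pmdp);

		if (pmd_leaf(pmd))
			return pmd_leaf_size(pmd);	/* one big leaf */
		/* contig/hugepd style entries would report > PAGE_SIZE */
		return pte_leaf_size(ptep_get(ptep));
	}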

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-16 12:31         ` Jason Gunthorpe
  (?)
@ 2024-01-16 18:32           ` Christophe Leroy
  -1 siblings, 0 replies; 143+ messages in thread
From: Christophe Leroy @ 2024-01-16 18:32 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: peterx, linux-mm, linux-kernel, James Houghton,
	David Hildenbrand, Kirill A . Shutemov, Yang Shi, linux-riscv,
	Andrew Morton, Aneesh Kumar K . V, Rik van Riel,
	Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard,
	Vlastimil Babka, Michael Ellerman, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

[-- Attachment #1: Type: text/plain, Size: 3737 bytes --]



Le 16/01/2024 à 13:31, Jason Gunthorpe a écrit :
> On Tue, Jan 16, 2024 at 06:30:39AM +0000, Christophe Leroy wrote:
>>
>>
>> Le 15/01/2024 à 19:37, Jason Gunthorpe a écrit :
>>> On Wed, Jan 03, 2024 at 05:14:16PM +0800, peterx@redhat.com wrote:
>>>> From: Peter Xu <peterx@redhat.com>
>>>>
>>>> Hugepd format for GUP is only used in PowerPC with hugetlbfs.  There are
>>>> some kernel usage of hugepd (can refer to hugepd_populate_kernel() for
>>>> PPC_8XX), however those pages are not candidates for GUP.
>>>>
>>>> Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
>>>> file-backed mappings") added a check to fail gup-fast if there's potential
>>>> risk of violating GUP over writeback file systems.  That should never apply
>>>> to hugepd.  Considering that hugepd is an old format (and even
>>>> software-only), there's no plan to extend hugepd into other file typed
>>>> memories that is prone to the same issue.
>>>
>>> I didn't dig into the ppc stuff too deeply, but this looks to me like
>>> it is the same thing as ARM's contig bits?
>>>
>>> ie a chunk of PMD/etc entries are all managed together as though they
>>> are a virtual larger entry and we use the hugepte_addr_end() stuff to
>>> iterate over each sub entry.
>>
>> As far as I understand ARM's contig stuff, hugepd on powerpc is
>> something different.
>>
>> hugepd is a page directory dedicated to huge pages, where you have huge
>> pages listed instead of regular pages. For instance, on powerpc 32 with
>> each PGD entries covering 4Mbytes, a regular page table has 1024 PTEs. A
>> hugepd for 512k is a page table with 8 entries.
>>
>> And for 8Mbytes entries, the hugepd is a page table with only one entry.
>> And 2 consecutive PGS entries will point to the same hugepd to cover the
>> entire 8Mbytes.
> 
> That still sounds alot like the ARM thing - except ARM replicates the
> entry, you also said PPC relicates the entry like ARM to get to the
> 8M?

Is it like ARM? Not sure. The PTE is not in the PGD; it must be in an
L2 directory, even for 8M.

You can see in the attached picture what the hardware expects.

> 
> I guess the difference is in how the table memory is layed out? ARM
> marks the size in the same entry that has the physical address so the
> entries are self describing and then replicated. It kind of sounds
> like PPC is marking the size in prior level and then reconfiguring the
> layout of the lower level? Otherwise it surely must do the same
> replication to make a radix index work..

Yes, that's how it works on powerpc. For 8xx we used to do that for
both 8M and 512k pages. Now for 512k pages we do it kind of like ARM
(which means replicating the entry 128 times), as that's needed to
allow mixing different page sizes for a given PGD entry.

But for 8M pages that would mean replicating the entry 2048 times.
That's a bit too much, isn't it?
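
(For reference, that replication count is just the huge page size divided
by the 4k PTE slot size: 512k / 4k = 128 copies, and 8M / 4k = 2048
copies.)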

> 
> If yes, I guess that is the main problem, the mm APIs don't have way
> today to convey data from the pgd level to understand how to parse the
> pmd level?
> 
>>> It seems to me we should see ARM and PPC agree on what the API is for
>>> this and then get rid of hugepd by making both use the same page table
>>> walker API. Is that too hopeful?
>>
>> Can't see the similarity between ARM contig PTE and PPC huge page
>> directories.
> 
> Well, they are both variable sized entries.
> 
> So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
> that would return enough information for both.

pmd_leaf()? Unless I'm missing something, I can't do leaf at PMD (PGD)
level. It must be a two-level process even for pages bigger than a PMD
entry.

Christophe

[-- Attachment #2: MPC8xx_page_tables.png --]
[-- Type: image/png, Size: 126859 bytes --]

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-16 18:32           ` Christophe Leroy
  (?)
  (?)
@ 2024-01-17 13:22             ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-17 13:22 UTC (permalink / raw)
  To: Christophe Leroy, Ryan Roberts
  Cc: peterx, linux-mm, linux-kernel, James Houghton,
	David Hildenbrand, Kirill A . Shutemov, Yang Shi, linux-riscv,
	Andrew Morton, Aneesh Kumar K . V, Rik van Riel,
	Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard,
	Vlastimil Babka, Michael Ellerman, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Tue, Jan 16, 2024 at 06:32:32PM +0000, Christophe Leroy wrote:
> >> hugepd is a page directory dedicated to huge pages, where you have huge
> >> pages listed instead of regular pages. For instance, on powerpc 32 with
> >> each PGD entries covering 4Mbytes, a regular page table has 1024 PTEs. A
> >> hugepd for 512k is a page table with 8 entries.
> >>
> >> And for 8Mbytes entries, the hugepd is a page table with only one entry.
> >> And 2 consecutive PGS entries will point to the same hugepd to cover the
> >> entire 8Mbytes.
> > 
> > That still sounds alot like the ARM thing - except ARM replicates the
> > entry, you also said PPC relicates the entry like ARM to get to the
> > 8M?
> 
> Is it like ARM ? Not sure. The PTE is not in the PGD it must be in a L2 
> directory, even for 8M.

Your diagram looks almost exactly like ARM to me.

The key thing is that the address for the L2 Table is *always* formed as:

   L2 Table Base << 12 + L2 Index << 2 + 00
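
For example (illustrative numbers only): with an L2 Table Base of 0x1234
and an L2 Index of 5, the descriptor is fetched from 0x1234000 + 0x14.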

Then the L2 Descriptor must contain bits indicating the page
size. The L2 Descriptor is replicated to every 4k entry that the page
size covers.

The only difference I see is the 8M case which has a page size greater
than a single L1 entry.

> Yes that's how it works on powerpc. For 8xx we used to do that for both 
> 8M and 512k pages. Now for 512k pages we do kind of like ARM (which 
> means replicating the entry 128 times) as that's needed to allow mixing 
> different page sizes for a given PGD entry.

Right, you want to have granular page sizes or it becomes unusable in
the general case.
 
> But for 8M pages that would mean replicating the entry 2048 times. 
> That's a bit too much isn't it ?

Indeed, de-duplicating the L2 Table is a neat optimization.

> > So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
> > that would return enough information for both.
> 
> pmd_leaf() ? Unless I'm missing something I can't do leaf at PMD (PGD) 
> level. It must be a two-level process even for pages bigger than a PMD 
> entry.

Right, this is the normal THP/hugetlb situation on x86/etc. It
wouldn't apply here since it seems the HW doesn't have a bit in the L1
descriptor to indicate leaf.

Instead for PPC this hugepd stuff should start to follow Ryan's
generic work for ARM contig:

https://lore.kernel.org/all/20231218105100.172635-1-ryan.roberts@arm.com/

Specifically the arch implementation:

https://lore.kernel.org/linux-mm/20231218105100.172635-15-ryan.roberts@arm.com/

Ie the arch should ultimately wire up the replication and variable
page size bits within its implementation of set_ptes(). set_ptes()
gets a contiguous run of addresses and should install it with maximum
use of the variable page sizes. The core code will start to call
set_ptes() in more cases as Ryan gets further along with his project.

For the purposes of GUP, where we are today and where we are going,
it would be much better to not have a special PPC specific "hugepd"
parser. Just process each of the 4k replicates one by one like ARM is
starting with.

The arch would still have to return the correct page address from
pte_phys(), which I think Ryan is doing by having the replicates encode
the full 4k-based address in each entry. The HW will ignore those low
bits and pte_phys() then works properly. This would work for PPC as
well, excluding the 8M optimization.

Going forward I'd expect to see some pte_page_size() that returns the
size bits and GUP can have logic to skip reading replicates.
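
A rough sketch of what that skipping could look like (illustrative only;
pte_page_size() here is the imagined helper, not an existing API):

	/*
	 * Illustrative only: walk [addr, end) one mapping at a time,
	 * skipping the replicated 4k slots within each large mapping.
	 */
	static void walk_skipping_replicates(pte_t *ptep, unsigned long addr,
					     unsigned long end)
	{
		do {
			unsigned long sz = pte_page_size(ptep_get(ptep));

			/* ... pin/record the folio for this mapping ... */

			ptep += sz >> PAGE_SHIFT; /* skip replicated slots */
			addr += sz;
		} while (addr < end);
	}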

The advantage of all this is that it stops making the feature special
and the work Ryan is doing to generically push larger folios into
set_ptes will become usable on these PPC platforms as well. And we can
kill the PPC specific hugepd.

Jason

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
@ 2024-01-17 13:22             ` Jason Gunthorpe
  0 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-01-17 13:22 UTC (permalink / raw)
  To: Christophe Leroy, Ryan Roberts
  Cc: peterx, linux-mm, linux-kernel, James Houghton,
	David Hildenbrand, Kirill A . Shutemov, Yang Shi, linux-riscv,
	Andrew Morton, Aneesh Kumar K . V, Rik van Riel,
	Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard,
	Vlastimil Babka, Michael Ellerman, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Tue, Jan 16, 2024 at 06:32:32PM +0000, Christophe Leroy wrote:
> >> hugepd is a page directory dedicated to huge pages, where you have huge
> >> pages listed instead of regular pages. For instance, on powerpc 32 with
> >> each PGD entries covering 4Mbytes, a regular page table has 1024 PTEs. A
> >> hugepd for 512k is a page table with 8 entries.
> >>
> >> And for 8Mbytes entries, the hugepd is a page table with only one entry.
> >> And 2 consecutive PGS entries will point to the same hugepd to cover the
> >> entire 8Mbytes.
> > 
> > That still sounds alot like the ARM thing - except ARM replicates the
> > entry, you also said PPC relicates the entry like ARM to get to the
> > 8M?
> 
> Is it like ARM ? Not sure. The PTE is not in the PGD it must be in a L2 
> directory, even for 8M.

Your diagram looks almost exactly like ARM to me.

The key thing is that the address for the L2 Table is *always* formed as:

   L2 Table Base << 12 + L2 Index << 2 + 00

Then the L2 Descriptor must contains bits indicating the page
size. The L2 Descriptor is replicated to every 4k entry that the page
size covers.

The only difference I see is the 8M case which has a page size greater
than a single L1 entry.

> Yes that's how it works on powerpc. For 8xx we used to do that for both 
> 8M and 512k pages. Now for 512k pages we do kind of like ARM (which 
> means replicating the entry 128 times) as that's needed to allow mixing 
> different page sizes for a given PGD entry.

Right, you want to have granular page sizes or it becomes unusable in
the general case
 
> But for 8M pages that would mean replicating the entry 2048 times. 
> That's a bit too much isn't it ?

Indeed, de-duplicating the L2 Table is a neat optimization.

> > So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
> > that would return enough information for both.
> 
> pmd_leaf() ? Unless I'm missing something I can't do leaf at PMD (PGD) 
> level. It must be a two-level process even for pages bigger than a PMD 
> entry.

Right, this is the normal THP/hugetlb situation on x86/etc. It
wouldn't apply here since it seems the HW doesn't have a bit in the L1
descriptor to indicate leaf.

Instead for PPC this hugepd stuff should start to follow Ryan's
generic work for ARM contig:

https://lore.kernel.org/all/20231218105100.172635-1-ryan.roberts@arm.com/

Specifically the arch implementation:

https://lore.kernel.org/linux-mm/20231218105100.172635-15-ryan.roberts@arm.com/

Ie the arch should ultimately wire up the replication and variable
page size bits within its implementation of set_ptes(). set_ptes()s
gets a contiguous run of address and should install it with maximum
use of the variable page sizes. The core code will start to call
set_ptes() in more cases as Ryan gets along his project.

For the purposes of GUP, where are are today and where we are going,
it would be much better to not have a special PPC specific "hugepd"
parser. Just process each of the 4k replicates one by one like ARM is
starting with.

The arch would still have to return the correct page address from
pte_phys() which I think Ryan is doing by having the replicates encode
the full 4k based address in each entry. The HW will ignore those low
bits and pte_phys() then works properly. This would work for PPC as
well, excluding the 8M optimization.

Going forward I'd expect to see some pte_page_size() that returns the
size bits and GUP can have logic to skip reading replicates.

The advantage of all this is that it stops making the feature special
and the work Ryan is doing to generically push larger folios into
set_ptes will become usable on these PPC platforms as well. And we can
kill the PPC specific hugepd.

Jason

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-17 13:22             ` Jason Gunthorpe
  (?)
  (?)
@ 2024-01-18 15:15               ` Ryan Roberts
  -1 siblings, 0 replies; 143+ messages in thread
From: Ryan Roberts @ 2024-01-18 15:15 UTC (permalink / raw)
  To: Jason Gunthorpe, Christophe Leroy
  Cc: peterx, linux-mm, linux-kernel, James Houghton,
	David Hildenbrand, Kirill A . Shutemov, Yang Shi, linux-riscv,
	Andrew Morton, Aneesh Kumar K . V, Rik van Riel,
	Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard,
	Vlastimil Babka, Michael Ellerman, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On 17/01/2024 13:22, Jason Gunthorpe wrote:
> On Tue, Jan 16, 2024 at 06:32:32PM +0000, Christophe Leroy wrote:
>>>> hugepd is a page directory dedicated to huge pages, where you have huge
>>>> pages listed instead of regular pages. For instance, on powerpc 32 with
>>>> each PGD entries covering 4Mbytes, a regular page table has 1024 PTEs. A
>>>> hugepd for 512k is a page table with 8 entries.
>>>>
>>>> And for 8Mbytes entries, the hugepd is a page table with only one entry.
>>>> And 2 consecutive PGS entries will point to the same hugepd to cover the
>>>> entire 8Mbytes.
>>>
>>> That still sounds alot like the ARM thing - except ARM replicates the
>>> entry, you also said PPC relicates the entry like ARM to get to the
>>> 8M?
>>
>> Is it like ARM ? Not sure. The PTE is not in the PGD it must be in a L2 
>> directory, even for 8M.
> 
> Your diagram looks almost exactly like ARM to me.
> 
> The key thing is that the address for the L2 Table is *always* formed as:
> 
>    L2 Table Base << 12 + L2 Index << 2 + 00
> 
> Then the L2 Descriptor must contains bits indicating the page
> size. The L2 Descriptor is replicated to every 4k entry that the page
> size covers.
> 
> The only difference I see is the 8M case which has a page size greater
> than a single L1 entry.
> 
>> Yes that's how it works on powerpc. For 8xx we used to do that for both 
>> 8M and 512k pages. Now for 512k pages we do kind of like ARM (which 
>> means replicating the entry 128 times) as that's needed to allow mixing 
>> different page sizes for a given PGD entry.
> 
> Right, you want to have granular page sizes or it becomes unusable in
> the general case
>  
>> But for 8M pages that would mean replicating the entry 2048 times. 
>> That's a bit too much isn't it ?
> 
> Indeed, de-duplicating the L2 Table is a neat optimization.
> 
>>> So if you imagine a pmd_leaf(), pmd_leaf_size() and a pte_leaf_size()
>>> that would return enough information for both.
>>
>> pmd_leaf() ? Unless I'm missing something I can't do leaf at PMD (PGD) 
>> level. It must be a two-level process even for pages bigger than a PMD 
>> entry.
> 
> Right, this is the normal THP/hugetlb situation on x86/etc. It
> wouldn't apply here since it seems the HW doesn't have a bit in the L1
> descriptor to indicate leaf.
> 
> Instead for PPC this hugepd stuff should start to follow Ryan's
> generic work for ARM contig:
> 
> https://lore.kernel.org/all/20231218105100.172635-1-ryan.roberts@arm.com/
> 
> Specifically the arch implementation:
> 
> https://lore.kernel.org/linux-mm/20231218105100.172635-15-ryan.roberts@arm.com/
> 
> Ie the arch should ultimately wire up the replication and variable
> page size bits within its implementation of set_ptes(). set_ptes()s
> gets a contiguous run of address and should install it with maximum
> use of the variable page sizes. The core code will start to call
> set_ptes() in more cases as Ryan gets along his project.

Note that it's not just set_ptes() that you want to batch; there are other calls
that can benefit too. See patches 2 and 3 in the series you linked. (although
I'm working with DavidH on this and the details are going to change a little).

> 
> For the purposes of GUP, where are are today and where we are going,
> it would be much better to not have a special PPC specific "hugepd"
> parser. Just process each of the 4k replicates one by one like ARM is
> starting with.
> 
> The arch would still have to return the correct page address from
> pte_phys() which I think Ryan is doing by having the replicates encode
> the full 4k based address in each entry.

Yes; although it's actually also a requirement of the arm architecture. Since the
contig bit is just a hint that the HW may or may not take any notice of, the
page tables have to be correct for the case where the HW just reads them as base
pages. Fixing up the bottom bits should be trivial using the PTE pointer, if
needed for ppc.

> The HW will ignore those low
> bits and pte_phys() then works properly. This would work for PPC as
> well, excluding the 8M optimization.
> 
> Going forward I'd expect to see some pte_page_size() that returns the
> size bits and GUP can have logic to skip reading replicates.

Yes; pte_batch_remaining() in patch 2 is an attempt at this. But as I said the
details will likely change a little.

> 
> The advantage of all this is that it stops making the feature special
> and the work Ryan is doing to generically push larger folios into
> set_ptes will become usable on these PPC platforms as well. And we can
> kill the PPC specific hugepd.
> 
> Jason


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
  2024-01-15 17:37     ` Jason Gunthorpe
  (?)
  (?)
@ 2024-01-22  8:25       ` Peter Xu
  -1 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-01-22  8:25 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Mon, Jan 15, 2024 at 01:37:37PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:11PM +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> > 
> > Introduce a config option that will be selected as long as huge leaves are
> > involved in pgtable (thp or hugetlbfs).  It would be useful to mark any
> > code with this new config that can process either hugetlb or thp pages in
> > any level that is higher than pte level.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  mm/Kconfig | 3 +++
> >  1 file changed, 3 insertions(+)
> 
> So you mean anything that supports page table entires > PAGE_SIZE ?

Yes.

> 
> Makes sense to me, though maybe add a comment in the kconfig?

Sure, I'll add some.

> 
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Thanks for your reviews and also positive comments in previous versions,
Jason.  I appreciate that.

I'm just pretty occupied with other tasks recently, so I don't yet have time
to revisit this series or the other comments.  I'll do so and reply to the
comments / discussions together afterwards.

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
  2024-01-15 17:55     ` Jason Gunthorpe
  (?)
  (?)
@ 2024-02-21  9:37       ` Peter Xu
  -1 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-21  9:37 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> > 
> > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > can be a helpful helper if we want to merge more THP and hugetlb code
> > paths.  Make it a generic default implementation, only exist when
> > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > 
> > For example, ARM's pgtable-2level.h defines it to always return false.
> > 
> > Keep the macro declared with all config, it should be optimized to a false
> > anyway if !THP && !HUGETLB.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  include/linux/pgtable.h | 4 ++++
> >  mm/gup.c                | 3 +--
> >  2 files changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 466cf477551a..2b42e95a4e3a 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> >  #endif /* pmd_write */
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >  
> > +#ifndef pmd_thp_or_huge
> > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > +#endif
> 
> Why not just use pmd_leaf() ?
> 
> This GUP case seems to me exactly like what pmd_leaf() should really
> do and be used for..

I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
the challenge is the risk of breaking other users with small changes in the
details, where the evil resides.

> 
> eg x86 does:
> 
> #define pmd_leaf	pmd_large
> static inline int pmd_large(pmd_t pte)
> 	return pmd_flags(pte) & _PAGE_PSE;
> 
> static inline int pmd_trans_huge(pmd_t pmd)
> 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> 
> int pmd_huge(pmd_t pmd)
>         return !pmd_none(pmd) &&
>                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;

For example, here I don't think it's strictly pmd_leaf()?  pmd_huge()
will return true if PRESENT=0 && PSE=0 (as long as the none pte is ruled out
first), while pmd_leaf() will return false; I think that came from
cbef8478bee5.  I'm not sure whether that is the best solution; e.g., at
first glance it seems better to me to process swap entries separately
(including both migration and poisoned entries).
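
To spell that out (illustration only, just restating the x86 helpers quoted
above for a non-none entry with both _PAGE_PRESENT and _PAGE_PSE clear,
e.g. a migration entry):

	pmd_leaf(pmd);	/* _PAGE_PSE clear -> false */
	pmd_huge(pmd);	/* !pmd_none(), and (val & (_PAGE_PRESENT|_PAGE_PSE))
			   is 0, which is != _PAGE_PRESENT -> true */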

Sparc has similar things there, and in that case I'm not sure whether a
direct replacement is always safe.

Besides that, there are also other cases where such a direct replacement is
not clearly safe until further investigated.  E.g., arm-3level has:

#define pmd_leaf(pmd)		pmd_sect(pmd)
#define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_SECT)
#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)

While pmd_huge() there relies on PMD_TABLE_BIT:

int pmd_huge(pmd_t pmd)
{
	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}

#define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)

These are just the trivial details that I wanted to avoid touching in this
series, so as to resolve the hugetlb issue separately from the others.

The new pmd_thp_or_huge() is not ideal, but it easily isolates all these
trivial details / evils out of the picture, so that we can tackle them one
by one.  It is strictly an OR of huge||thp, so it's hopefully safe and
shouldn't break anything yet in that regard.

> 
> I spot checked a couple arches and it looks like it holds up.
> 
> Further, it looks to me like this site in GUP is the only core code
> caller..
> 
> So, I'd suggest a small series to go arch by arch and convert the arch
> to use pmd_huge() == pmd_leaf(). Then retire pmd_huge() as a public
> API.
> 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index df83182ec72d..eebae70d2465 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
> >  		if (!pmd_present(pmd))
> >  			return 0;
> >  
> > -		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> > -			     pmd_devmap(pmd))) {
> > +		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
> >  			/* See gup_pte_range() */
> >  			if (pmd_protnone(pmd))
> >  				return 0;
> 
> And the devmap thing here doesn't make any sense either. The arch
> should ensure that pmd_devmap() implies pmd_leaf(). Since devmap is a
> purely SW construct it almost certainly does already anyhow.

Yep, but only if pmd_leaf() is safe to be put here.  A pmd devmap should
always imply pmd_leaf() indeed.

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-02-21  9:37       ` Peter Xu
  0 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-21  9:37 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> > 
> > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > can be a helpful helper if we want to merge more THP and hugetlb code
> > paths.  Make it a generic default implementation, only exist when
> > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > 
> > For example, ARM's pgtable-2level.h defines it to always return false.
> > 
> > Keep the macro declared with all config, it should be optimized to a false
> > anyway if !THP && !HUGETLB.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  include/linux/pgtable.h | 4 ++++
> >  mm/gup.c                | 3 +--
> >  2 files changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 466cf477551a..2b42e95a4e3a 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> >  #endif /* pmd_write */
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >  
> > +#ifndef pmd_thp_or_huge
> > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > +#endif
> 
> Why not just use pmd_leaf() ?
> 
> This GUP case seems to me exactly like what pmd_leaf() should really
> do and be used for..

I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
the challenge is about the risk of breaking others on small changes in the
details where evil resides.

> 
> eg x86 does:
> 
> #define pmd_leaf	pmd_large
> static inline int pmd_large(pmd_t pte)
> 	return pmd_flags(pte) & _PAGE_PSE;
> 
> static inline int pmd_trans_huge(pmd_t pmd)
> 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> 
> int pmd_huge(pmd_t pmd)
>         return !pmd_none(pmd) &&
>                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;

For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
first), while pmd_leaf() will return false; I think that came from
cbef8478bee5.  I'm not sure whether that is the best solution, e.g., from a
1st glance it seems better to me to process swap entries separately
(including both migration and poisoned entries)..

Sparc has similar things there, which in that case I'm not sure whether a
direct replace is always safe.

Besides that, there're also other cases where it's not clear of such direct
replacement, not until further investigated.  E.g., arm-3level has:

#define pmd_leaf(pmd)		pmd_sect(pmd)
#define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_SECT)
#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)

While pmd_huge() there relies on PMD_TABLE_BIT ()

int pmd_huge(pmd_t pmd)
{
	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}

#define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)

These are just the trivial details that I wanted to avoid to touch in this
series, so as to resolve the hugetlb issue separately from others.

The new pmd_huge_or_thp() is not ideal, but that easily isolates all these
trivial details / evils out of the picture, so that we can tackle them one
by one.  It is strictly an OR or huge||thp, so it's hopefully safe to not
break anything yet from that regard.

> 
> I spot checked a couple arches and it looks like it holds up.
> 
> Further, it looks to me like this site in GUP is the only core code
> caller..
> 
> So, I'd suggest a small series to go arch by arch and convert the arch
> to use pmd_huge() == pmd_leaf(). Then retire pmd_huge() as a public
> API.
> 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index df83182ec72d..eebae70d2465 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
> >  		if (!pmd_present(pmd))
> >  			return 0;
> >  
> > -		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> > -			     pmd_devmap(pmd))) {
> > +		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
> >  			/* See gup_pte_range() */
> >  			if (pmd_protnone(pmd))
> >  				return 0;
> 
> And the devmap thing here doesn't make any sense either. The arch
> should ensure that pmd_devmap() implies pmd_leaf(). Since devmap is a
> purely SW construct it almost certainly does already anyhow.

Yep, but only if pmd_leaf() is safe to be put here. A pmd devmap should
always imply as a pmd_leaf() indeed.

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-02-21  9:37       ` Peter Xu
  0 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-21  9:37 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> > 
> > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > can be a helpful helper if we want to merge more THP and hugetlb code
> > paths.  Make it a generic default implementation, only exist when
> > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > 
> > For example, ARM's pgtable-2level.h defines it to always return false.
> > 
> > Keep the macro declared with all config, it should be optimized to a false
> > anyway if !THP && !HUGETLB.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  include/linux/pgtable.h | 4 ++++
> >  mm/gup.c                | 3 +--
> >  2 files changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 466cf477551a..2b42e95a4e3a 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> >  #endif /* pmd_write */
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >  
> > +#ifndef pmd_thp_or_huge
> > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > +#endif
> 
> Why not just use pmd_leaf() ?
> 
> This GUP case seems to me exactly like what pmd_leaf() should really
> do and be used for..

I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
the challenge is about the risk of breaking others on small changes in the
details where evil resides.

> 
> eg x86 does:
> 
> #define pmd_leaf	pmd_large
> static inline int pmd_large(pmd_t pte)
> 	return pmd_flags(pte) & _PAGE_PSE;
> 
> static inline int pmd_trans_huge(pmd_t pmd)
> 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> 
> int pmd_huge(pmd_t pmd)
>         return !pmd_none(pmd) &&
>                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;

For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
first), while pmd_leaf() will return false; I think that came from
cbef8478bee5.  I'm not sure whether that is the best solution, e.g., from a
1st glance it seems better to me to process swap entries separately
(including both migration and poisoned entries)..

Sparc has similar things there, which in that case I'm not sure whether a
direct replace is always safe.

Besides that, there're also other cases where it's not clear of such direct
replacement, not until further investigated.  E.g., arm-3level has:

#define pmd_leaf(pmd)		pmd_sect(pmd)
#define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_SECT)
#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)

While pmd_huge() there relies on PMD_TABLE_BIT ()

int pmd_huge(pmd_t pmd)
{
	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}

#define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)

These are just the trivial details that I wanted to avoid to touch in this
series, so as to resolve the hugetlb issue separately from others.

The new pmd_huge_or_thp() is not ideal, but that easily isolates all these
trivial details / evils out of the picture, so that we can tackle them one
by one.  It is strictly an OR or huge||thp, so it's hopefully safe to not
break anything yet from that regard.

> 
> I spot checked a couple arches and it looks like it holds up.
> 
> Further, it looks to me like this site in GUP is the only core code
> caller..
> 
> So, I'd suggest a small series to go arch by arch and convert the arch
> to use pmd_huge() == pmd_leaf(). Then retire pmd_huge() as a public
> API.
> 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index df83182ec72d..eebae70d2465 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
> >  		if (!pmd_present(pmd))
> >  			return 0;
> >  
> > -		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> > -			     pmd_devmap(pmd))) {
> > +		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
> >  			/* See gup_pte_range() */
> >  			if (pmd_protnone(pmd))
> >  				return 0;
> 
> And the devmap thing here doesn't make any sense either. The arch
> should ensure that pmd_devmap() implies pmd_leaf(). Since devmap is a
> purely SW construct it almost certainly does already anyhow.

Yep, but only if pmd_leaf() is safe to be put here. A pmd devmap should
always imply as a pmd_leaf() indeed.

Thanks,

-- 
Peter Xu


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-02-21  9:37       ` Peter Xu
  0 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-21  9:37 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: James Houghton, David Hildenbrand, Yang Shi, Andrew Jones,
	linux-mm, Matthew Wilcox, linux-riscv, Andrea Arcangeli,
	Christoph Hellwig, Aneesh Kumar K . V, Vlastimil Babka,
	Axel Rasmussen, Rik van Riel, John Hubbard, Kirill A . Shutemov,
	linux-arm-kernel, Lorenzo Stoakes, Muchun Song, linux-kernel,
	Andrew Morton, linuxppc-dev, Mike Rapoport, Mike Kravetz

On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> > 
> > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > can be a helpful helper if we want to merge more THP and hugetlb code
> > paths.  Make it a generic default implementation, only exist when
> > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > 
> > For example, ARM's pgtable-2level.h defines it to always return false.
> > 
> > Keep the macro declared with all config, it should be optimized to a false
> > anyway if !THP && !HUGETLB.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  include/linux/pgtable.h | 4 ++++
> >  mm/gup.c                | 3 +--
> >  2 files changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 466cf477551a..2b42e95a4e3a 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> >  #endif /* pmd_write */
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >  
> > +#ifndef pmd_thp_or_huge
> > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > +#endif
> 
> Why not just use pmd_leaf() ?
> 
> This GUP case seems to me exactly like what pmd_leaf() should really
> do and be used for..

I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
the challenge is about the risk of breaking others on small changes in the
details where evil resides.

> 
> eg x86 does:
> 
> #define pmd_leaf	pmd_large
> static inline int pmd_large(pmd_t pte)
> 	return pmd_flags(pte) & _PAGE_PSE;
> 
> static inline int pmd_trans_huge(pmd_t pmd)
> 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> 
> int pmd_huge(pmd_t pmd)
>         return !pmd_none(pmd) &&
>                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;

For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
first), while pmd_leaf() will return false; I think that came from
cbef8478bee5.  I'm not sure whether that is the best solution, e.g., from a
1st glance it seems better to me to process swap entries separately
(including both migration and poisoned entries)..

Sparc has similar things there, which in that case I'm not sure whether a
direct replace is always safe.

Besides that, there're also other cases where it's not clear of such direct
replacement, not until further investigated.  E.g., arm-3level has:

#define pmd_leaf(pmd)		pmd_sect(pmd)
#define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_SECT)
#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)

While pmd_huge() there relies on PMD_TABLE_BIT ()

int pmd_huge(pmd_t pmd)
{
	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}

#define PMD_TABLE_BIT		(_AT(pmdval_t, 1) << 1)

These are just the trivial details that I wanted to avoid to touch in this
series, so as to resolve the hugetlb issue separately from others.

The new pmd_huge_or_thp() is not ideal, but that easily isolates all these
trivial details / evils out of the picture, so that we can tackle them one
by one.  It is strictly an OR or huge||thp, so it's hopefully safe to not
break anything yet from that regard.

> 
> I spot checked a couple arches and it looks like it holds up.
> 
> Further, it looks to me like this site in GUP is the only core code
> caller..
> 
> So, I'd suggest a small series to go arch by arch and convert the arch
> to use pmd_huge() == pmd_leaf(). Then retire pmd_huge() as a public
> API.
> 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index df83182ec72d..eebae70d2465 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
> >  		if (!pmd_present(pmd))
> >  			return 0;
> >  
> > -		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
> > -			     pmd_devmap(pmd))) {
> > +		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
> >  			/* See gup_pte_range() */
> >  			if (pmd_protnone(pmd))
> >  				return 0;
> 
> And the devmap thing here doesn't make any sense either. The arch
> should ensure that pmd_devmap() implies pmd_leaf(). Since devmap is a
> purely SW construct it almost certainly does already anyhow.

Yep, but only if pmd_leaf() is safe to be used here.  A pmd devmap entry
should indeed always imply pmd_leaf().

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask()
  2024-01-15 18:49     ` Jason Gunthorpe
@ 2024-02-21 11:49       ` Peter Xu
  -1 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-21 11:49 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Mon, Jan 15, 2024 at 02:49:00PM -0400, Jason Gunthorpe wrote:
> On Wed, Jan 03, 2024 at 05:14:20PM +0800, peterx@redhat.com wrote:
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 63845b3ec44f..760406180222 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
> >  	return NULL;
> >  }
> >  
> > +#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
> > +static struct page *follow_huge_pud(struct vm_area_struct *vma,
> > +				    unsigned long addr, pud_t *pudp,
> > +				    int flags, struct follow_page_context *ctx)
> > +{
> > +	struct mm_struct *mm = vma->vm_mm;
> > +	struct page *page;
> > +	pud_t pud = *pudp;
> > +	unsigned long pfn = pud_pfn(pud);
> > +	int ret;
> > +
> > +	assert_spin_locked(pud_lockptr(mm, pudp));
> > +
> > +	if ((flags & FOLL_WRITE) && !pud_write(pud))
> > +		return NULL;
> > +
> > +	if (!pud_present(pud))
> > +		return NULL;
> > +
> > +	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
> > +
> > +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> > +	if (pud_devmap(pud)) {
> 
> Can this use IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) ?

Sure.
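
Something along these lines, perhaps (untested sketch; it assumes
pud_devmap() and touch_pud() still have declarations/stubs visible when
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD is off, otherwise the ifdef
needs to stay):

	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
	    pud_devmap(pud)) {
		/* dev pages need the caller to manage the refcount */
		if (!(flags & (FOLL_GET | FOLL_PIN)))
			return ERR_PTR(-EEXIST);

		if (flags & FOLL_TOUCH)
			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);

		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
		if (!ctx->pgmap)
			return ERR_PTR(-EFAULT);
	}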

> 
> > +		/*
> > +		 * device mapped pages can only be returned if the caller
> > +		 * will manage the page reference count.
> > +		 *
> > +		 * At least one of FOLL_GET | FOLL_PIN must be set, so
> > +		 * assert that here:
> > +		 */
> > +		if (!(flags & (FOLL_GET | FOLL_PIN)))
> > +			return ERR_PTR(-EEXIST);
> > +
> > +		if (flags & FOLL_TOUCH)
> > +			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
> > +
> > +		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
> > +		if (!ctx->pgmap)
> > +			return ERR_PTR(-EFAULT);
> > +	}
> > +#endif	/* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
> > +	page = pfn_to_page(pfn);
> > +
> > +	if (!pud_devmap(pud) && !pud_write(pud) &&
> > +	    gup_must_unshare(vma, flags, page))
> > +		return ERR_PTR(-EMLINK);
> > +
> > +	ret = try_grab_page(page, flags);
> > +	if (ret)
> > +		page = ERR_PTR(ret);
> > +	else
> > +		ctx->page_mask = HPAGE_PUD_NR - 1;
> > +
> > +	return page;
> > +}
> > +#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> > +static struct page *follow_huge_pud(struct vm_area_struct *vma,
> > +				    unsigned long addr, pud_t *pudp,
> > +				    int flags, struct follow_page_context *ctx)
> > +{
> > +	return NULL;
> > +}
> > +#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> > +
> >  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
> >  		pte_t *pte, unsigned int flags)
> >  {
> > @@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
> >  
> >  	pudp = pud_offset(p4dp, address);
> >  	pud = READ_ONCE(*pudp);
> > -	if (pud_none(pud))
> > +	if (pud_none(pud) || !pud_present(pud))
> >  		return no_page_table(vma, flags, address);
> 
> Isn't 'pud_none() || !pud_present()' redundent? A none pud is
> non-present, by definition?

Hmm yes, seems redundant.  Let me drop it.
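
I.e. the check should collapse to the present test alone; a sketch of what
the fixup would look like:

	pud = READ_ONCE(*pudp);
	if (!pud_present(pud))
		return no_page_table(vma, flags, address);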

> 
> > -	if (pud_devmap(pud)) {
> > +	if (pud_huge(pud)) {
> >  		ptl = pud_lock(mm, pudp);
> > -		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
> > +		page = follow_huge_pud(vma, address, pudp, flags, ctx);
> >  		spin_unlock(ptl);
> >  		if (page)
> >  			return page;
> 
> Otherwise it looks OK to me
> 
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Thanks!

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  2024-01-15 18:37     ` Jason Gunthorpe
@ 2024-02-21 11:55       ` Peter Xu
  -1 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-21 11:55 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Mon, Jan 15, 2024 at 02:37:48PM -0400, Jason Gunthorpe wrote:
> > Drop that check, not only because it'll never be true for hugepd per any
> > known plan, but also it paves way for reusing the function outside
> > fast-gup.
> 
> I didn't see any other caller of this function in this series? When
> does this re-use happen??

It's reused in patch 12 ("mm/gup: Handle hugepd for follow_page()").

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
  2024-02-21  9:37       ` Peter Xu
@ 2024-02-21 12:57         ` Jason Gunthorpe
  -1 siblings, 0 replies; 143+ messages in thread
From: Jason Gunthorpe @ 2024-02-21 12:57 UTC (permalink / raw)
  To: Peter Xu
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Feb 21, 2024 at 05:37:37PM +0800, Peter Xu wrote:
> On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> > On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > > From: Peter Xu <peterx@redhat.com>
> > > 
> > > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > > can be a helpful helper if we want to merge more THP and hugetlb code
> > > paths.  Make it a generic default implementation, only exist when
> > > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > > 
> > > For example, ARM's pgtable-2level.h defines it to always return false.
> > > 
> > > Keep the macro declared with all config, it should be optimized to a false
> > > anyway if !THP && !HUGETLB.
> > > 
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > ---
> > >  include/linux/pgtable.h | 4 ++++
> > >  mm/gup.c                | 3 +--
> > >  2 files changed, 5 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > > index 466cf477551a..2b42e95a4e3a 100644
> > > --- a/include/linux/pgtable.h
> > > +++ b/include/linux/pgtable.h
> > > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> > >  #endif /* pmd_write */
> > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > >  
> > > +#ifndef pmd_thp_or_huge
> > > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > > +#endif
> > 
> > Why not just use pmd_leaf() ?
> > 
> > This GUP case seems to me exactly like what pmd_leaf() should really
> > do and be used for..
> 
> I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
> the challenge is about the risk of breaking others on small changes in the
> details where evil resides.

These APIs are super confusing, which is why I brought it up.. Adding
even more subtly different variations is not helping.

I think pmd_leaf means the entry is present and refers to a physical
page not another radix level.

> > eg x86 does:
> > 
> > #define pmd_leaf	pmd_large
> > static inline int pmd_large(pmd_t pte)
> > 	return pmd_flags(pte) & _PAGE_PSE;
> > 
> > static inline int pmd_trans_huge(pmd_t pmd)
> > 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> > 
> > int pmd_huge(pmd_t pmd)
> >         return !pmd_none(pmd) &&
> >                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> 
> For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
> will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
> first), while pmd_leaf() will return false; I think that came from
> cbef8478bee5. 

Yikes, but do you even want to handle non-present entries in GUP
world? Isn't everything gated by !present in the first place?

> Besides that, there're also other cases where it's not clear of such direct
> replacement, not until further investigated.  E.g., arm-3level has:
> 
> #define pmd_leaf(pmd)		pmd_sect(pmd)
> #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
> 						 PMD_TYPE_SECT)
> #define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
> 
> While pmd_huge() there relies on PMD_TABLE_BIT ()

I looked at that, it looked OK..

#define PMD_TYPE_MASK               (_AT(pmdval_t, 3) << 0)
#define PMD_TABLE_BIT               (_AT(pmdval_t, 1) << 1)

It is the same stuff, just a little confusingly written
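
FWIW, a quick user-space model of the two checks (the first two defines
below are the quoted values; PMD_TYPE_TABLE and the high address bits are
assumptions for illustration):

#include <stdbool.h>
#include <stdio.h>

#define PMD_TYPE_MASK	(3UL << 0)
#define PMD_TYPE_SECT	(1UL << 0)
#define PMD_TYPE_TABLE	(3UL << 0)	/* assumed */
#define PMD_TABLE_BIT	(1UL << 1)

static bool model_pmd_leaf(unsigned long val)	/* pmd_sect() */
{
	return (val & PMD_TYPE_MASK) == PMD_TYPE_SECT;
}

static bool model_pmd_huge(unsigned long val)
{
	return val && !(val & PMD_TABLE_BIT);
}

int main(void)
{
	unsigned long sect  = 0x40000000UL | PMD_TYPE_SECT;	/* section map */
	unsigned long table = 0x80001000UL | PMD_TYPE_TABLE;	/* next level */

	printf("sect:  huge=%d leaf=%d\n",
	       model_pmd_huge(sect), model_pmd_leaf(sect));
	printf("table: huge=%d leaf=%d\n",
	       model_pmd_huge(table), model_pmd_leaf(table));
	/* Both classify the two valid entry types identically. */
	return 0;
}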

Jason


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
  2024-02-21 12:57         ` Jason Gunthorpe
@ 2024-02-22  8:04           ` Peter Xu
  -1 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-22  8:04 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Feb 21, 2024 at 08:57:53AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 21, 2024 at 05:37:37PM +0800, Peter Xu wrote:
> > On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> > > On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > > > From: Peter Xu <peterx@redhat.com>
> > > > 
> > > > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > > > can be a helpful helper if we want to merge more THP and hugetlb code
> > > > paths.  Make it a generic default implementation, only exist when
> > > > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > > > 
> > > > For example, ARM's pgtable-2level.h defines it to always return false.
> > > > 
> > > > Keep the macro declared with all config, it should be optimized to a false
> > > > anyway if !THP && !HUGETLB.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > > ---
> > > >  include/linux/pgtable.h | 4 ++++
> > > >  mm/gup.c                | 3 +--
> > > >  2 files changed, 5 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > > > index 466cf477551a..2b42e95a4e3a 100644
> > > > --- a/include/linux/pgtable.h
> > > > +++ b/include/linux/pgtable.h
> > > > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> > > >  #endif /* pmd_write */
> > > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > > >  
> > > > +#ifndef pmd_thp_or_huge
> > > > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > > > +#endif
> > > 
> > > Why not just use pmd_leaf() ?
> > > 
> > > This GUP case seems to me exactly like what pmd_leaf() should really
> > > do and be used for..
> > 
> > I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
> > the challenge is about the risk of breaking others on small changes in the
> > details where evil resides.
> 
> These APIs are super confusing, which is why I brought it up.. Adding
> even more subtly different variations is not helping.
> 
> I think pmd_leaf means the entry is present and refers to a physical
> page not another radix level.
> 
> > > eg x86 does:
> > > 
> > > #define pmd_leaf	pmd_large
> > > static inline int pmd_large(pmd_t pte)
> > > 	return pmd_flags(pte) & _PAGE_PSE;
> > > 
> > > static inline int pmd_trans_huge(pmd_t pmd)
> > > 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> > > 
> > > int pmd_huge(pmd_t pmd)
> > >         return !pmd_none(pmd) &&
> > >                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> > 
> > For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
> > will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
> > first), while pmd_leaf() will return false; I think that came from
> > cbef8478bee5. 
> 
> Yikes, but do you even want to handle non-present entries in GUP
> world? Isn't everything gated by !present in the first place?

I am equally confused, indeed.

> 
> > Besides that, there're also other cases where it's not clear of such direct
> > replacement, not until further investigated.  E.g., arm-3level has:
> > 
> > #define pmd_leaf(pmd)		pmd_sect(pmd)
> > #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
> > 						 PMD_TYPE_SECT)
> > #define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
> > 
> > While pmd_huge() there relies on PMD_TABLE_BIT ()
> 
> I looked at that, it looked OK..
> 
> #define PMD_TYPE_MASK               (_AT(pmdval_t, 3) << 0)
> #define PMD_TABLE_BIT               (_AT(pmdval_t, 1) << 1)
> 
> It is the same stuff, just a little confusingly written

True, my eyes decided to skip all the shifts. :-(  OK then, let me see
whether I can take a stab at the pXd_huge() mess.

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-02-22  8:04           ` Peter Xu
  0 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-22  8:04 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: James Houghton, David Hildenbrand, Yang Shi, Andrew Jones,
	linux-mm, Matthew Wilcox, linux-riscv, Andrea Arcangeli,
	Christoph Hellwig, Aneesh Kumar K . V, Vlastimil Babka,
	Axel Rasmussen, Rik van Riel, John Hubbard, Kirill A . Shutemov,
	linux-arm-kernel, Lorenzo Stoakes, Muchun Song, linux-kernel,
	Andrew Morton, linuxppc-dev, Mike Rapoport, Mike Kravetz

On Wed, Feb 21, 2024 at 08:57:53AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 21, 2024 at 05:37:37PM +0800, Peter Xu wrote:
> > On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> > > On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > > > From: Peter Xu <peterx@redhat.com>
> > > > 
> > > > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > > > can be a helpful helper if we want to merge more THP and hugetlb code
> > > > paths.  Make it a generic default implementation, only exist when
> > > > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > > > 
> > > > For example, ARM's pgtable-2level.h defines it to always return false.
> > > > 
> > > > Keep the macro declared with all config, it should be optimized to a false
> > > > anyway if !THP && !HUGETLB.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > > ---
> > > >  include/linux/pgtable.h | 4 ++++
> > > >  mm/gup.c                | 3 +--
> > > >  2 files changed, 5 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > > > index 466cf477551a..2b42e95a4e3a 100644
> > > > --- a/include/linux/pgtable.h
> > > > +++ b/include/linux/pgtable.h
> > > > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> > > >  #endif /* pmd_write */
> > > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > > >  
> > > > +#ifndef pmd_thp_or_huge
> > > > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > > > +#endif
> > > 
> > > Why not just use pmd_leaf() ?
> > > 
> > > This GUP case seems to me exactly like what pmd_leaf() should really
> > > do and be used for..
> > 
> > I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
> > the challenge is about the risk of breaking others on small changes in the
> > details where evil resides.
> 
> These APIs are super confusing, which is why I brought it up.. Adding
> even more subtly different variations is not helping.
> 
> I think pmd_leaf means the entry is present and refers to a physical
> page not another radix level.
> 
> > > eg x86 does:
> > > 
> > > #define pmd_leaf	pmd_large
> > > static inline int pmd_large(pmd_t pte)
> > > 	return pmd_flags(pte) & _PAGE_PSE;
> > > 
> > > static inline int pmd_trans_huge(pmd_t pmd)
> > > 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> > > 
> > > int pmd_huge(pmd_t pmd)
> > >         return !pmd_none(pmd) &&
> > >                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> > 
> > For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
> > will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
> > first), while pmd_leaf() will return false; I think that came from
> > cbef8478bee5. 
> 
> Yikes, but do you even want to handle non-present entries in GUP
> world? Isn't everything gated by !present in the first place?

I am as confused indeed.

> 
> > Besides that, there're also other cases where it's not clear of such direct
> > replacement, not until further investigated.  E.g., arm-3level has:
> > 
> > #define pmd_leaf(pmd)		pmd_sect(pmd)
> > #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
> > 						 PMD_TYPE_SECT)
> > #define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
> > 
> > While pmd_huge() there relies on PMD_TABLE_BIT ()
> 
> I looked at tht, it looked OK.. 
> 
> #define PMD_TYPE_MASK               (_AT(pmdval_t, 3) << 0)
> #define PMD_TABLE_BIT               (_AT(pmdval_t, 1) << 1)
> 
> It is the same stuff, just a little confusingly written

True, my eyes decided to skip all the shifts. :-( Ok then, let me see
whether I can give it a stab on the pXd_huge() mess.

Thanks,

-- 
Peter Xu


^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-02-22  8:04           ` Peter Xu
  0 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-22  8:04 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Feb 21, 2024 at 08:57:53AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 21, 2024 at 05:37:37PM +0800, Peter Xu wrote:
> > On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> > > On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > > > From: Peter Xu <peterx@redhat.com>
> > > > 
> > > > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > > > can be a helpful helper if we want to merge more THP and hugetlb code
> > > > paths.  Make it a generic default implementation, only exist when
> > > > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > > > 
> > > > For example, ARM's pgtable-2level.h defines it to always return false.
> > > > 
> > > > Keep the macro declared with all config, it should be optimized to a false
> > > > anyway if !THP && !HUGETLB.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > > ---
> > > >  include/linux/pgtable.h | 4 ++++
> > > >  mm/gup.c                | 3 +--
> > > >  2 files changed, 5 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > > > index 466cf477551a..2b42e95a4e3a 100644
> > > > --- a/include/linux/pgtable.h
> > > > +++ b/include/linux/pgtable.h
> > > > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> > > >  #endif /* pmd_write */
> > > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > > >  
> > > > +#ifndef pmd_thp_or_huge
> > > > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > > > +#endif
> > > 
> > > Why not just use pmd_leaf() ?
> > > 
> > > This GUP case seems to me exactly like what pmd_leaf() should really
> > > do and be used for..
> > 
> > I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
> > the challenge is about the risk of breaking others on small changes in the
> > details where evil resides.
> 
> These APIs are super confusing, which is why I brought it up.. Adding
> even more subtly different variations is not helping.
> 
> I think pmd_leaf means the entry is present and refers to a physical
> page not another radix level.
> 
> > > eg x86 does:
> > > 
> > > #define pmd_leaf	pmd_large
> > > static inline int pmd_large(pmd_t pte)
> > > 	return pmd_flags(pte) & _PAGE_PSE;
> > > 
> > > static inline int pmd_trans_huge(pmd_t pmd)
> > > 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> > > 
> > > int pmd_huge(pmd_t pmd)
> > >         return !pmd_none(pmd) &&
> > >                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> > 
> > For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
> > will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
> > first), while pmd_leaf() will return false; I think that came from
> > cbef8478bee5. 
> 
> Yikes, but do you even want to handle non-present entries in GUP
> world? Isn't everything gated by !present in the first place?

I am as confused indeed.

> 
> > Besides that, there're also other cases where it's not clear of such direct
> > replacement, not until further investigated.  E.g., arm-3level has:
> > 
> > #define pmd_leaf(pmd)		pmd_sect(pmd)
> > #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
> > 						 PMD_TYPE_SECT)
> > #define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
> > 
> > While pmd_huge() there relies on PMD_TABLE_BIT ()
> 
> I looked at tht, it looked OK.. 
> 
> #define PMD_TYPE_MASK               (_AT(pmdval_t, 3) << 0)
> #define PMD_TABLE_BIT               (_AT(pmdval_t, 1) << 1)
> 
> It is the same stuff, just a little confusingly written

True, my eyes decided to skip all the shifts. :-( Ok then, let me see
whether I can give it a stab on the pXd_huge() mess.

Thanks,

-- 
Peter Xu


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
@ 2024-02-22  8:04           ` Peter Xu
  0 siblings, 0 replies; 143+ messages in thread
From: Peter Xu @ 2024-02-22  8:04 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-mm, linux-kernel, James Houghton, David Hildenbrand,
	Kirill A . Shutemov, Yang Shi, linux-riscv, Andrew Morton,
	Aneesh Kumar K . V, Rik van Riel, Andrea Arcangeli,
	Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka,
	Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev,
	Mike Kravetz, Muchun Song, linux-arm-kernel, Christoph Hellwig,
	Lorenzo Stoakes, Matthew Wilcox

On Wed, Feb 21, 2024 at 08:57:53AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 21, 2024 at 05:37:37PM +0800, Peter Xu wrote:
> > On Mon, Jan 15, 2024 at 01:55:51PM -0400, Jason Gunthorpe wrote:
> > > On Wed, Jan 03, 2024 at 05:14:13PM +0800, peterx@redhat.com wrote:
> > > > From: Peter Xu <peterx@redhat.com>
> > > > 
> > > > ARM defines pmd_thp_or_huge(), detecting either a THP or a huge PMD.  It
> > > > can be a helpful helper if we want to merge more THP and hugetlb code
> > > > paths.  Make it a generic default implementation, only exist when
> > > > CONFIG_MMU.  Arch can overwrite it by defining its own version.
> > > > 
> > > > For example, ARM's pgtable-2level.h defines it to always return false.
> > > > 
> > > > Keep the macro declared with all config, it should be optimized to a false
> > > > anyway if !THP && !HUGETLB.
> > > > 
> > > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > > > ---
> > > >  include/linux/pgtable.h | 4 ++++
> > > >  mm/gup.c                | 3 +--
> > > >  2 files changed, 5 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > > > index 466cf477551a..2b42e95a4e3a 100644
> > > > --- a/include/linux/pgtable.h
> > > > +++ b/include/linux/pgtable.h
> > > > @@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
> > > >  #endif /* pmd_write */
> > > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > > >  
> > > > +#ifndef pmd_thp_or_huge
> > > > +#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
> > > > +#endif
> > > 
> > > Why not just use pmd_leaf() ?
> > > 
> > > This GUP case seems to me exactly like what pmd_leaf() should really
> > > do and be used for..
> > 
> > I think I mostly agree with you, and these APIs are indeed confusing.  IMHO
> > the challenge is about the risk of breaking others on small changes in the
> > details where evil resides.
> 
> These APIs are super confusing, which is why I brought it up.. Adding
> even more subtly different variations is not helping.
> 
> I think pmd_leaf means the entry is present and refers to a physical
> page not another radix level.
> 
> > > eg x86 does:
> > > 
> > > #define pmd_leaf	pmd_large
> > > static inline int pmd_large(pmd_t pte)
> > > 	return pmd_flags(pte) & _PAGE_PSE;
> > > 
> > > static inline int pmd_trans_huge(pmd_t pmd)
> > > 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
> > > 
> > > int pmd_huge(pmd_t pmd)
> > >         return !pmd_none(pmd) &&
> > >                 (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> > 
> > For example, here I don't think it's strictly pmd_leaf()? As pmd_huge()
> > will return true if PRESENT=0 && PSE=0 (as long as none pte ruled out
> > first), while pmd_leaf() will return false; I think that came from
> > cbef8478bee5. 
> 
> Yikes, but do you even want to handle non-present entries in GUP
> world? Isn't everything gated by !present in the first place?

I am as confused indeed.

> 
> > Besides that, there're also other cases where it's not clear of such direct
> > replacement, not until further investigated.  E.g., arm-3level has:
> > 
> > #define pmd_leaf(pmd)		pmd_sect(pmd)
> > #define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
> > 						 PMD_TYPE_SECT)
> > #define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
> > 
> > While pmd_huge() there relies on PMD_TABLE_BIT ()
> 
> I looked at tht, it looked OK.. 
> 
> #define PMD_TYPE_MASK               (_AT(pmdval_t, 3) << 0)
> #define PMD_TABLE_BIT               (_AT(pmdval_t, 1) << 1)
> 
> It is the same stuff, just a little confusingly written

True, my eyes decided to skip all the shifts. :-( OK then, let me see
whether I can take a stab at the pXd_huge() mess.
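
For completeness, the arm-3level constants quoted above do line up:
PMD_TYPE_MASK covers bits 1:0 and PMD_TYPE_SECT is 0b01, so
"(val & PMD_TYPE_MASK) == PMD_TYPE_SECT" is the same test as "bit 0 set and
PMD_TABLE_BIT (bit 1) clear".  The actual pmd_huge() there is not quoted in
this thread, so the sketch below only checks that bit-level equivalence:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PMD_TYPE_MASK	(UINT64_C(3) << 0)
#define PMD_TYPE_SECT	(UINT64_C(1) << 0)
#define PMD_TABLE_BIT	(UINT64_C(1) << 1)

/* pmd_sect()/pmd_leaf() style check */
static bool sect_by_type(uint64_t v)
{
	return (v & PMD_TYPE_MASK) == PMD_TYPE_SECT;
}

/* "bit 0 set, table bit clear" spelled out */
static bool sect_by_bits(uint64_t v)
{
	return (v & UINT64_C(1)) && !(v & PMD_TABLE_BIT);
}

int main(void)
{
	for (uint64_t v = 0; v < 8; v++)
		assert(sect_by_type(v) == sect_by_bits(v));
	return 0;
}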

Thanks,

-- 
Peter Xu





Thread overview: 143+ messages
2024-01-03  9:14 [PATCH v2 00/13] mm/gup: Unify hugetlb, part 2 peterx
2024-01-03  9:14 ` [PATCH v2 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES peterx
2024-01-15 17:37   ` Jason Gunthorpe
2024-01-22  8:25     ` Peter Xu
2024-01-03  9:14 ` [PATCH v2 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static peterx
2024-01-03  9:14 ` [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge() peterx
2024-01-15 17:55   ` Jason Gunthorpe
2024-02-21  9:37     ` Peter Xu
2024-02-21 12:57       ` Jason Gunthorpe
2024-02-22  8:04         ` Peter Xu
2024-01-03  9:14 ` [PATCH v2 04/13] mm: Make HPAGE_PXD_* macros even if !THP peterx
2024-01-15 17:59   ` Jason Gunthorpe
2024-01-03  9:14 ` [PATCH v2 05/13] mm: Introduce vma_pgtable_walk_{begin|end}() peterx
2024-01-03  9:14 ` [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing peterx
2024-01-15 18:37   ` Jason Gunthorpe
2024-01-16  6:30     ` Christophe Leroy
2024-01-16 12:31       ` Jason Gunthorpe
2024-01-16 18:32         ` Christophe Leroy
2024-01-17 13:22           ` Jason Gunthorpe
2024-01-18 15:15             ` Ryan Roberts
2024-02-21 11:55     ` Peter Xu
2024-01-03  9:14 ` [PATCH v2 07/13] mm/gup: Refactor record_subpages() to find 1st small page peterx
2024-01-15 18:38   ` Jason Gunthorpe
2024-01-03  9:14 ` [PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table() peterx
2024-01-15 18:39   ` Jason Gunthorpe
2024-01-03  9:14 ` [PATCH v2 09/13] mm/gup: Cache *pudp in follow_pud_mask() peterx
2024-01-15 18:41   ` Jason Gunthorpe
2024-01-03  9:14 ` [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask() peterx
2024-01-15 18:49   ` Jason Gunthorpe
2024-02-21 11:49     ` Peter Xu
2024-01-03  9:14 ` [PATCH v2 11/13] mm/gup: Handle huge pmd for follow_pmd_mask() peterx
2024-01-15 18:51   ` Jason Gunthorpe
2024-01-03  9:14 ` [PATCH v2 12/13] mm/gup: Handle hugepd for follow_page() peterx
2024-01-03  9:14 ` [PATCH v2 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code peterx
2024-01-03 11:14 ` [PATCH v2 00/13] mm/gup: Unify hugetlb, part 2 Christophe Leroy
2024-01-08  7:27   ` Peter Xu
