linux-mm.kvack.org archive mirror
* [PATCH] mm/arch: Provide pud_pfn() fallback
@ 2024-03-23 15:16 peterx
  2024-03-23 17:39 ` Peter Xu
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: peterx @ 2024-03-23 15:16 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: peterx, SeongJae Park, Andrew Morton, Jason Gunthorpe,
	Mike Rapoport, Matthew Wilcox, kernel test robot

From: Peter Xu <peterx@redhat.com>

The comment in the code explains the reasons.  We took a different approach
compared to pmd_pfn() by providing a fallback function.

Another option would be to provide lower-level config options (comparable to
HUGETLB_PAGE or THP) to identify which layers an arch can support for such
huge mappings.  However, that could be overkill.

Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202403231529.HRev1zcD-lkp@intel.com/
Signed-off-by: Peter Xu <peterx@redhat.com>
---

Andrew,

If we care about per-commit build errors (and if it is ever feasible to
reorder), we can move this patch to be before the patch "mm/gup: handle
huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
commit.

Thanks,
---
 arch/riscv/include/asm/pgtable.h    |  1 +
 arch/s390/include/asm/pgtable.h     |  1 +
 arch/sparc/include/asm/pgtable_64.h |  1 +
 arch/x86/include/asm/pgtable.h      |  1 +
 include/linux/pgtable.h             | 10 ++++++++++
 5 files changed, 14 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 20242402fc11..0ca28cc8e3fa 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 
 #define __pud_to_phys(pud)  (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 1a71cb19c089..6cbbe473f680 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
 	return (unsigned long)__va(pud_val(pud) & origin_mask);
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4d1bafaba942..26efc9bb644a 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
 	return pte_val(pte) & _PAGE_PMD_HUGE;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	pte_t pte = __pte(pud_val(pud));
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cefc7a84f7a4..273f7557218c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	phys_addr_t pfn = pud_val(pud);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2a1c044ae467..deae9e50f1a8 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
 #define pte_leaf_size(x) PAGE_SIZE
 #endif
 
+/*
+ * We always define pmd_pfn for all archs as it's used in lots of generic
+ * code.  Now it happens too for pud_pfn (and can happen for larger
+ * mappings too in the future; we're not there yet).  Instead of defining
+ * it for all archs (like pmd_pfn), provide a fallback.
+ */
+#ifndef pud_pfn
+#define pud_pfn(x) ({ BUILD_BUG(); 0; })
+#endif
+
 /*
  * Some architectures have MMUs that are configurable or selectable at boot
  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
-- 
2.44.0



^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH] mm/arch: Provide pud_pfn() fallback
  2024-03-23 15:16 [PATCH] mm/arch: Provide pud_pfn() fallback peterx
@ 2024-03-23 17:39 ` Peter Xu
  2024-03-25 13:05 ` Jason Gunthorpe
  2024-03-26 20:27 ` Andrew Morton
  2 siblings, 0 replies; 5+ messages in thread
From: Peter Xu @ 2024-03-23 17:39 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: SeongJae Park, Andrew Morton, Jason Gunthorpe, Mike Rapoport,
	Matthew Wilcox, kernel test robot

On Sat, Mar 23, 2024 at 11:16:43AM -0400, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> The comment in the code explains the reasons.  We took a different approach
> compared to pmd_pfn() by providing a fallback function.
> 
> Another option would be to provide lower-level config options (comparable to
> HUGETLB_PAGE or THP) to identify which layers an arch can support for such
> huge mappings.  However, that could be overkill.
> 
> Cc: Mike Rapoport (IBM) <rppt@kernel.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202403231529.HRev1zcD-lkp@intel.com/

Also:

Closes: https://lore.kernel.org/oe-kbuild-all/202403240112.kHKVSfCL-lkp@intel.com/

> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> 
> Andrew,
> 
> If we care about per-commit build errors (and if it is ever feasible to
> reorder), we can move this patch to be before the patch "mm/gup: handle
> huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
> commit.
> 
> Thanks,
> ---
>  arch/riscv/include/asm/pgtable.h    |  1 +
>  arch/s390/include/asm/pgtable.h     |  1 +
>  arch/sparc/include/asm/pgtable_64.h |  1 +
>  arch/x86/include/asm/pgtable.h      |  1 +
>  include/linux/pgtable.h             | 10 ++++++++++
>  5 files changed, 14 insertions(+)
> 
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 20242402fc11..0ca28cc8e3fa 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>  
>  #define __pud_to_phys(pud)  (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
>  
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 1a71cb19c089..6cbbe473f680 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
>  	return (unsigned long)__va(pud_val(pud) & origin_mask);
>  }
>  
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 4d1bafaba942..26efc9bb644a 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
>  	return pte_val(pte) & _PAGE_PMD_HUGE;
>  }
>  
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	pte_t pte = __pte(pud_val(pud));
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index cefc7a84f7a4..273f7557218c 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>  	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
>  }
>  
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	phys_addr_t pfn = pud_val(pud);
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 2a1c044ae467..deae9e50f1a8 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
>  #define pte_leaf_size(x) PAGE_SIZE
>  #endif
>  
> +/*
> + * We always define pmd_pfn for all archs as it's used in lots of generic
> + * code.  Now it happens too for pud_pfn (and can happen for larger
> + * mappings too in the future; we're not there yet).  Instead of defining
> + * it for all archs (like pmd_pfn), provide a fallback.
> + */
> +#ifndef pud_pfn
> +#define pud_pfn(x) ({ BUILD_BUG(); 0; })
> +#endif
> +
>  /*
>   * Some architectures have MMUs that are configurable or selectable at boot
>   * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
> -- 
> 2.44.0
> 

-- 
Peter Xu




* Re: [PATCH] mm/arch: Provide pud_pfn() fallback
  2024-03-23 15:16 [PATCH] mm/arch: Provide pud_pfn() fallback peterx
  2024-03-23 17:39 ` Peter Xu
@ 2024-03-25 13:05 ` Jason Gunthorpe
  2024-03-26 20:27 ` Andrew Morton
  2 siblings, 0 replies; 5+ messages in thread
From: Jason Gunthorpe @ 2024-03-25 13:05 UTC (permalink / raw)
  To: peterx
  Cc: linux-kernel, linux-mm, SeongJae Park, Andrew Morton,
	Mike Rapoport, Matthew Wilcox, kernel test robot

On Sat, Mar 23, 2024 at 11:16:43AM -0400, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
> 
> The comment in the code explains the reasons.  We took a different approach
> compared to pmd_pfn() by providing a fallback function.
> 
> Another option would be to provide lower-level config options (comparable to
> HUGETLB_PAGE or THP) to identify which layers an arch can support for such
> huge mappings.  However, that could be overkill.
> 
> Cc: Mike Rapoport (IBM) <rppt@kernel.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202403231529.HRev1zcD-lkp@intel.com/
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason



* Re: [PATCH] mm/arch: Provide pud_pfn() fallback
  2024-03-23 15:16 [PATCH] mm/arch: Provide pud_pfn() fallback peterx
  2024-03-23 17:39 ` Peter Xu
  2024-03-25 13:05 ` Jason Gunthorpe
@ 2024-03-26 20:27 ` Andrew Morton
  2024-03-26 20:43   ` Peter Xu
  2 siblings, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2024-03-26 20:27 UTC (permalink / raw)
  To: peterx
  Cc: linux-kernel, linux-mm, SeongJae Park, Jason Gunthorpe,
	Mike Rapoport, Matthew Wilcox, kernel test robot

On Sat, 23 Mar 2024 11:16:43 -0400 peterx@redhat.com wrote:

> From: Peter Xu <peterx@redhat.com>
> 
> The comment in the code explains the reasons.  We took a different approach
> compared to pmd_pfn() by providing a fallback function.
> 
> Another option would be to provide lower-level config options (comparable to
> HUGETLB_PAGE or THP) to identify which layers an arch can support for such
> huge mappings.  However, that could be overkill.
> 
> ...
>
> If we care about per-commit build errors (and if it is ever feasible to
> reorder), we can move this patch to be before the patch "mm/gup: handle
> huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
> commit.

I temporarily disabled that whole series a few days ago because of
multiple build issues, IIRC.

Let's make that permanent.  Please redo the whole series against
mm-unstable and resend?




* Re: [PATCH] mm/arch: Provide pud_pfn() fallback
  2024-03-26 20:27 ` Andrew Morton
@ 2024-03-26 20:43   ` Peter Xu
  0 siblings, 0 replies; 5+ messages in thread
From: Peter Xu @ 2024-03-26 20:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, SeongJae Park, Jason Gunthorpe,
	Mike Rapoport, Matthew Wilcox, kernel test robot

On Tue, Mar 26, 2024 at 01:27:26PM -0700, Andrew Morton wrote:
> On Sat, 23 Mar 2024 11:16:43 -0400 peterx@redhat.com wrote:
> 
> > From: Peter Xu <peterx@redhat.com>
> > 
> > The comment in the code explains the reasons.  We took a different approach
> > compared to pmd_pfn() by providing a fallback function.
> > 
> > Another option would be to provide lower-level config options (comparable to
> > HUGETLB_PAGE or THP) to identify which layers an arch can support for such
> > huge mappings.  However, that could be overkill.
> > 
> > ...
> >
> > If we care about per-commit build errors (and if it is ever feasible to
> > reorder), we can move this patch to be before the patch "mm/gup: handle
> > huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
> > commit.
> 
> I temporarily disabled that whole series a few days ago.  Because of
> multiple build issues, iirc.
> 
> Let's make that permanent.  Please redo the whole series against
> mm-unstable and resend?

Yes, that's the plan.  Feel free to ignore this one, as it is not used until
that GUP rework series; I'll include it in the whole set to be reposted.

I'm currently doing the build tests; I just finished writing the harness for
testing the matrix.  It'll take a bit of time to run through the tests I
specified (I tried to cover a few more archs/configs), and I'll repost with
all patches included (fixups squashed) when the tests finish.

[side note: I think I can reproduce the other, unreported issue on
 arm+alldefconfig; that'll get covered too]

Thanks,

-- 
Peter Xu



