linux-mm.kvack.org archive mirror
* Re: [PATCH v12 07/22] riscv: mm: Add p?d_leaf() definitions
       [not found] ` <20191018101248.33727-8-steven.price@arm.com>
@ 2019-10-18 15:57   ` Christoph Hellwig
  0 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2019-10-18 15:57 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-mm, Mark Rutland, Peter Zijlstra, Catalin Marinas,
	Dave Hansen, H. Peter Anvin, linux-riscv, Will Deacon, Liang,
	Kan, Alexandre Ghiti, x86, Ingo Molnar, Palmer Dabbelt,
	Albert Ou, Arnd Bergmann, Jérôme Glisse,
	Borislav Petkov, Andy Lutomirski, Paul Walmsley, Thomas Gleixner,
	linux-arm-kernel, Ard Biesheuvel, linux-kernel, James Morse,
	Andrew Morton

> +	return pud_present(pud)
> +		&& (pud_val(pud) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC));
> +}

The operators always need to go before the line break, not after it,
per the Linux coding style.  There are a few more spots like this, so please
audit the whole series for it.


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH v12 12/22] mm: pagewalk: Allow walking without vma
       [not found] ` <20191018101248.33727-13-steven.price@arm.com>
@ 2019-10-24 13:05   ` Zong Li
  0 siblings, 0 replies; 5+ messages in thread
From: Zong Li @ 2019-10-24 13:05 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-mm, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
	James Morse, Jérôme Glisse, Peter Zijlstra,
	Thomas Gleixner, Will Deacon, x86, H. Peter Anvin,
	linux-arm-kernel, Linux Kernel Mailing List, Mark Rutland, Liang,
	Kan, Andrew Morton

On Sat, 19 Oct 2019 at 16:12, Steven Price <steven.price@arm.com> wrote:

>
> Since 48684a65b4e3: "mm: pagewalk: fix misbehavior of walk_page_range
> for vma(VM_PFNMAP)", walk_page_range() will report any kernel area as
> a hole, because it lacks a vma.
>
> This means each arch has re-implemented page table walking when needed,
> for example in the per-arch ptdump walker.
>
> Remove the requirement to have a vma except when trying to split huge
> pages.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
>  mm/pagewalk.c | 25 +++++++++++++++++--------
>  1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index fc4d98a3a5a0..4139e9163aee 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -38,7 +38,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>         do {
>  again:
>                 next = pmd_addr_end(addr, end);
> -               if (pmd_none(*pmd) || !walk->vma) {
> +               if (pmd_none(*pmd)) {
>                         if (ops->pte_hole)
>                                 err = ops->pte_hole(addr, next, walk);
>                         if (err)
> @@ -61,9 +61,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>                 if (!ops->pte_entry)
>                         continue;
>
> -               split_huge_pmd(walk->vma, pmd, addr);
> -               if (pmd_trans_unstable(pmd))
> -                       goto again;
> +               if (walk->vma) {
> +                       split_huge_pmd(walk->vma, pmd, addr);
> +                       if (pmd_trans_unstable(pmd))
> +                               goto again;
> +               } else if (pmd_leaf(*pmd)) {
> +                       continue;
> +               }
> +
>                 err = walk_pte_range(pmd, addr, next, walk);
>                 if (err)
>                         break;
> @@ -84,7 +89,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>         do {
>   again:
>                 next = pud_addr_end(addr, end);
> -               if (pud_none(*pud) || !walk->vma) {
> +               if (pud_none(*pud)) {
>                         if (ops->pte_hole)
>                                 err = ops->pte_hole(addr, next, walk);
>                         if (err)
> @@ -98,9 +103,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>                                 break;
>                 }
>
> -               split_huge_pud(walk->vma, pud, addr);
> -               if (pud_none(*pud))
> -                       goto again;
> +               if (walk->vma) {
> +                       split_huge_pud(walk->vma, pud, addr);
> +                       if (pud_none(*pud))
> +                               goto again;
> +               } else if (pud_leaf(*pud)) {
> +                       continue;
> +               }
>
>                 if (ops->pmd_entry || ops->pte_entry)
>                         err = walk_pmd_range(pud, addr, next, walk);
> --
> 2.20.1
>

It looks good to me.

Tested-by: Zong Li <zong.li@sifive.com>



* Re: [PATCH v12 13/22] mm: pagewalk: Add test_p?d callbacks
       [not found] ` <20191018101248.33727-14-steven.price@arm.com>
@ 2019-10-24 13:06   ` Zong Li
  0 siblings, 0 replies; 5+ messages in thread
From: Zong Li @ 2019-10-24 13:06 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-mm, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
	James Morse, Jérôme Glisse, Peter Zijlstra,
	Thomas Gleixner, Will Deacon, x86, H. Peter Anvin,
	linux-arm-kernel, Linux Kernel Mailing List, Mark Rutland, Liang,
	Kan, Andrew Morton

On Sat, 19 Oct 2019 at 16:12, Steven Price <steven.price@arm.com> wrote:
>
> It is useful to be able to skip parts of the page table tree even when
> walking without VMAs. Add test_p?d callbacks similar to test_walk but
> which are called just before a table at that level is walked. If the
> callback returns non-zero then the entire table is skipped.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
>  include/linux/pagewalk.h | 11 +++++++++++
>  mm/pagewalk.c            | 24 ++++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
>
> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
> index 12004b097eae..df424197a25a 100644
> --- a/include/linux/pagewalk.h
> +++ b/include/linux/pagewalk.h
> @@ -24,6 +24,11 @@ struct mm_walk;
>   *                     "do page table walk over the current vma", returning
>   *                     a negative value means "abort current page table walk
>   *                     right now" and returning 1 means "skip the current vma"
> + * @test_pmd:          similar to test_walk(), but called for every pmd.
> + * @test_pud:          similar to test_walk(), but called for every pud.
> + * @test_p4d:          similar to test_walk(), but called for every p4d.
> + *                     Returning 0 means walk this part of the page tables,
> + *                     returning 1 means to skip this range.
>   *
>   * p?d_entry callbacks are called even if those levels are folded on a
>   * particular architecture/configuration.
> @@ -46,6 +51,12 @@ struct mm_walk_ops {
>                              struct mm_walk *walk);
>         int (*test_walk)(unsigned long addr, unsigned long next,
>                         struct mm_walk *walk);
> +       int (*test_pmd)(unsigned long addr, unsigned long next,
> +                       pmd_t *pmd_start, struct mm_walk *walk);
> +       int (*test_pud)(unsigned long addr, unsigned long next,
> +                       pud_t *pud_start, struct mm_walk *walk);
> +       int (*test_p4d)(unsigned long addr, unsigned long next,
> +                       p4d_t *p4d_start, struct mm_walk *walk);
>  };
>
>  /**
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 4139e9163aee..43acffefd43f 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -34,6 +34,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>         const struct mm_walk_ops *ops = walk->ops;
>         int err = 0;
>
> +       if (ops->test_pmd) {
> +               err = ops->test_pmd(addr, end, pmd_offset(pud, 0UL), walk);
> +               if (err < 0)
> +                       return err;
> +               if (err > 0)
> +                       return 0;
> +       }
> +
>         pmd = pmd_offset(pud, addr);
>         do {
>  again:
> @@ -85,6 +93,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>         const struct mm_walk_ops *ops = walk->ops;
>         int err = 0;
>
> +       if (ops->test_pud) {
> +               err = ops->test_pud(addr, end, pud_offset(p4d, 0UL), walk);
> +               if (err < 0)
> +                       return err;
> +               if (err > 0)
> +                       return 0;
> +       }
> +
>         pud = pud_offset(p4d, addr);
>         do {
>   again:
> @@ -128,6 +144,14 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
>         const struct mm_walk_ops *ops = walk->ops;
>         int err = 0;
>
> +       if (ops->test_p4d) {
> +               err = ops->test_p4d(addr, end, p4d_offset(pgd, 0UL), walk);
> +               if (err < 0)
> +                       return err;
> +               if (err > 0)
> +                       return 0;
> +       }
> +
>         p4d = p4d_offset(pgd, addr);
>         do {
>                 next = p4d_addr_end(addr, end);
> --
> 2.20.1
>

It looks good to me.

Tested-by: Zong Li <zong.li@sifive.com>



* Re: [PATCH v12 11/22] mm: pagewalk: Add p4d_entry() and pgd_entry()
       [not found] ` <20191018101248.33727-12-steven.price@arm.com>
@ 2019-10-24 13:06   ` Zong Li
  0 siblings, 0 replies; 5+ messages in thread
From: Zong Li @ 2019-10-24 13:06 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-mm, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
	James Morse, Jérôme Glisse, Peter Zijlstra,
	Thomas Gleixner, Will Deacon, x86, H. Peter Anvin,
	linux-arm-kernel, Linux Kernel Mailing List, Mark Rutland, Liang,
	Kan, Andrew Morton

On Sat, 19 Oct 2019 at 16:14, Steven Price <steven.price@arm.com> wrote:
>
> pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
> ("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
> no users. We're about to add users so reintroduce them, along with
> p4d_entry() as we now have 5 levels of tables.
>
> Note that commit a00cc7d9dd93d66a ("mm, x86: add support for
> PUD-sized transparent hugepages") already re-added pud_entry() but with
> different semantics to the other callbacks. Since there have never
> been upstream users of this, revert the semantics back to match the
> other callbacks. This means pud_entry() is called for all entries, not
> just transparent huge pages.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
>  include/linux/pagewalk.h | 19 +++++++++++++------
>  mm/pagewalk.c            | 27 ++++++++++++++++-----------
>  2 files changed, 29 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
> index bddd9759bab9..12004b097eae 100644
> --- a/include/linux/pagewalk.h
> +++ b/include/linux/pagewalk.h
> @@ -8,15 +8,15 @@ struct mm_walk;
>
>  /**
>   * mm_walk_ops - callbacks for walk_page_range
> - * @pud_entry:         if set, called for each non-empty PUD (2nd-level) entry
> - *                     this handler should only handle pud_trans_huge() puds.
> - *                     the pmd_entry or pte_entry callbacks will be used for
> - *                     regular PUDs.
> - * @pmd_entry:         if set, called for each non-empty PMD (3rd-level) entry
> + * @pgd_entry:         if set, called for each non-empty PGD (top-level) entry
> + * @p4d_entry:         if set, called for each non-empty P4D entry
> + * @pud_entry:         if set, called for each non-empty PUD entry
> + * @pmd_entry:         if set, called for each non-empty PMD entry
>   *                     this handler is required to be able to handle
>   *                     pmd_trans_huge() pmds.  They may simply choose to
>   *                     split_huge_page() instead of handling it explicitly.
> - * @pte_entry:         if set, called for each non-empty PTE (4th-level) entry
> + * @pte_entry:         if set, called for each non-empty PTE (lowest-level)
> + *                     entry
>   * @pte_hole:          if set, called for each hole at all levels
>   * @hugetlb_entry:     if set, called for each hugetlb entry
>   * @test_walk:         caller specific callback function to determine whether
> @@ -24,8 +24,15 @@ struct mm_walk;
>   *                     "do page table walk over the current vma", returning
>   *                     a negative value means "abort current page table walk
>   *                     right now" and returning 1 means "skip the current vma"
> + *
> + * p?d_entry callbacks are called even if those levels are folded on a
> + * particular architecture/configuration.
>   */
>  struct mm_walk_ops {
> +       int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
> +                        unsigned long next, struct mm_walk *walk);
> +       int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
> +                        unsigned long next, struct mm_walk *walk);
>         int (*pud_entry)(pud_t *pud, unsigned long addr,
>                          unsigned long next, struct mm_walk *walk);
>         int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index d48c2a986ea3..fc4d98a3a5a0 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -93,15 +93,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>                 }
>
>                 if (ops->pud_entry) {
> -                       spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
> -
> -                       if (ptl) {
> -                               err = ops->pud_entry(pud, addr, next, walk);
> -                               spin_unlock(ptl);
> -                               if (err)
> -                                       break;
> -                               continue;
> -                       }
> +                       err = ops->pud_entry(pud, addr, next, walk);
> +                       if (err)
> +                               break;
>                 }
>
>                 split_huge_pud(walk->vma, pud, addr);
> @@ -135,7 +129,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
>                                 break;
>                         continue;
>                 }
> -               if (ops->pmd_entry || ops->pte_entry)
> +               if (ops->p4d_entry) {
> +                       err = ops->p4d_entry(p4d, addr, next, walk);
> +                       if (err)
> +                               break;
> +               }
> +               if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
>                         err = walk_pud_range(p4d, addr, next, walk);
>                 if (err)
>                         break;
> @@ -162,7 +161,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>                                 break;
>                         continue;
>                 }
> -               if (ops->pmd_entry || ops->pte_entry)
> +               if (ops->pgd_entry) {
> +                       err = ops->pgd_entry(pgd, addr, next, walk);
> +                       if (err)
> +                               break;
> +               }
> +               if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry ||
> +                   ops->pte_entry)
>                         err = walk_p4d_range(pgd, addr, next, walk);
>                 if (err)
>                         break;
> --
> 2.20.1
>

It looks good to me.

Tested-by: Zong Li <zong.li@sifive.com>



* Re: [PATCH v12 14/22] mm: pagewalk: Add 'depth' parameter to pte_hole
       [not found] ` <20191018101248.33727-15-steven.price@arm.com>
@ 2019-10-24 13:07   ` Zong Li
  0 siblings, 0 replies; 5+ messages in thread
From: Zong Li @ 2019-10-24 13:07 UTC (permalink / raw)
  To: Steven Price
  Cc: linux-mm, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
	James Morse, Jérôme Glisse, Peter Zijlstra,
	Thomas Gleixner, Will Deacon, x86, H. Peter Anvin,
	linux-arm-kernel, Linux Kernel Mailing List, Mark Rutland, Liang,
	Kan, Andrew Morton

On Sat, 19 Oct 2019 at 16:13, Steven Price <steven.price@arm.com> wrote:
>
> The pte_hole() callback is called at multiple levels of the page tables.
> Code dumping the kernel page tables needs to know at what depth
> the missing entry is. Add this as an extra parameter to pte_hole().
> When the depth isn't known (e.g. processing a vma) then -1 is passed.
>
> The depth that is reported is the actual level where the entry is
> missing (ignoring any folding that is in place), i.e. any levels where
> PTRS_PER_P?D is set to 1 are ignored.
>
> Note that depth starts at 0 for a PGD so that PUD/PMD/PTE retain their
> natural numbers as levels 2/3/4.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
>  fs/proc/task_mmu.c       |  4 ++--
>  include/linux/pagewalk.h |  7 +++++--
>  mm/hmm.c                 |  8 ++++----
>  mm/migrate.c             |  5 +++--
>  mm/mincore.c             |  1 +
>  mm/pagewalk.c            | 31 +++++++++++++++++++++++++------
>  6 files changed, 40 insertions(+), 16 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 9442631fd4af..3ba9ae83bff5 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -505,7 +505,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>
>  #ifdef CONFIG_SHMEM
>  static int smaps_pte_hole(unsigned long addr, unsigned long end,
> -               struct mm_walk *walk)
> +                         __always_unused int depth, struct mm_walk *walk)
>  {
>         struct mem_size_stats *mss = walk->private;
>
> @@ -1282,7 +1282,7 @@ static int add_to_pagemap(unsigned long addr, pagemap_entry_t *pme,
>  }
>
>  static int pagemap_pte_hole(unsigned long start, unsigned long end,
> -                               struct mm_walk *walk)
> +                           __always_unused int depth, struct mm_walk *walk)
>  {
>         struct pagemapread *pm = walk->private;
>         unsigned long addr = start;
> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
> index df424197a25a..90466d60f87a 100644
> --- a/include/linux/pagewalk.h
> +++ b/include/linux/pagewalk.h
> @@ -17,7 +17,10 @@ struct mm_walk;
>   *                     split_huge_page() instead of handling it explicitly.
>   * @pte_entry:         if set, called for each non-empty PTE (lowest-level)
>   *                     entry
> - * @pte_hole:          if set, called for each hole at all levels
> + * @pte_hole:          if set, called for each hole at all levels,
> + *                     depth is -1 if not known, 0:PGD, 1:P4D, 2:PUD, 3:PMD
> + *                     4:PTE. Any folded depths (where PTRS_PER_P?D is equal
> + *                     to 1) are skipped.
>   * @hugetlb_entry:     if set, called for each hugetlb entry
>   * @test_walk:         caller specific callback function to determine whether
>   *                     we walk over the current vma or not. Returning 0 means
> @@ -45,7 +48,7 @@ struct mm_walk_ops {
>         int (*pte_entry)(pte_t *pte, unsigned long addr,
>                          unsigned long next, struct mm_walk *walk);
>         int (*pte_hole)(unsigned long addr, unsigned long next,
> -                       struct mm_walk *walk);
> +                       int depth, struct mm_walk *walk);
>         int (*hugetlb_entry)(pte_t *pte, unsigned long hmask,
>                              unsigned long addr, unsigned long next,
>                              struct mm_walk *walk);
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 902f5fa6bf93..df3d531c8f2d 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -376,7 +376,7 @@ static void hmm_range_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
>  }
>
>  static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
> -                            struct mm_walk *walk)
> +                            __always_unused int depth, struct mm_walk *walk)
>  {
>         struct hmm_vma_walk *hmm_vma_walk = walk->private;
>         struct hmm_range *range = hmm_vma_walk->range;
> @@ -564,7 +564,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>  again:
>         pmd = READ_ONCE(*pmdp);
>         if (pmd_none(pmd))
> -               return hmm_vma_walk_hole(start, end, walk);
> +               return hmm_vma_walk_hole(start, end, -1, walk);
>
>         if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
>                 bool fault, write_fault;
> @@ -666,7 +666,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
>  again:
>         pud = READ_ONCE(*pudp);
>         if (pud_none(pud))
> -               return hmm_vma_walk_hole(start, end, walk);
> +               return hmm_vma_walk_hole(start, end, -1, walk);
>
>         if (pud_huge(pud) && pud_devmap(pud)) {
>                 unsigned long i, npages, pfn;
> @@ -674,7 +674,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
>                 bool fault, write_fault;
>
>                 if (!pud_present(pud))
> -                       return hmm_vma_walk_hole(start, end, walk);
> +                       return hmm_vma_walk_hole(start, end, -1, walk);
>
>                 i = (addr - range->start) >> PAGE_SHIFT;
>                 npages = (end - addr) >> PAGE_SHIFT;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 4fe45d1428c8..435258df9a36 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2123,6 +2123,7 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
>  #ifdef CONFIG_DEVICE_PRIVATE
>  static int migrate_vma_collect_hole(unsigned long start,
>                                     unsigned long end,
> +                                   __always_unused int depth,
>                                     struct mm_walk *walk)
>  {
>         struct migrate_vma *migrate = walk->private;
> @@ -2167,7 +2168,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>
>  again:
>         if (pmd_none(*pmdp))
> -               return migrate_vma_collect_hole(start, end, walk);
> +               return migrate_vma_collect_hole(start, end, -1, walk);
>
>         if (pmd_trans_huge(*pmdp)) {
>                 struct page *page;
> @@ -2200,7 +2201,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>                                 return migrate_vma_collect_skip(start, end,
>                                                                 walk);
>                         if (pmd_none(*pmdp))
> -                               return migrate_vma_collect_hole(start, end,
> +                               return migrate_vma_collect_hole(start, end, -1,
>                                                                 walk);
>                 }
>         }
> diff --git a/mm/mincore.c b/mm/mincore.c
> index 49b6fa2f6aa1..0e6dd9948f1a 100644
> --- a/mm/mincore.c
> +++ b/mm/mincore.c
> @@ -112,6 +112,7 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
>  }
>
>  static int mincore_unmapped_range(unsigned long addr, unsigned long end,
> +                                  __always_unused int depth,
>                                    struct mm_walk *walk)
>  {
>         walk->private += __mincore_unmapped_range(addr, end,
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 43acffefd43f..b67400dc1def 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -4,6 +4,22 @@
>  #include <linux/sched.h>
>  #include <linux/hugetlb.h>
>
> +/*
> + * We want to know the real level where an entry is located ignoring any
> + * folding of levels which may be happening. For example if p4d is folded then
> + * a missing entry found at level 1 (p4d) is actually at level 0 (pgd).
> + */
> +static int real_depth(int depth)
> +{
> +       if (depth == 3 && PTRS_PER_PMD == 1)
> +               depth = 2;
> +       if (depth == 2 && PTRS_PER_PUD == 1)
> +               depth = 1;
> +       if (depth == 1 && PTRS_PER_P4D == 1)
> +               depth = 0;
> +       return depth;
> +}
> +
>  static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>                           struct mm_walk *walk)
>  {
> @@ -33,6 +49,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>         unsigned long next;
>         const struct mm_walk_ops *ops = walk->ops;
>         int err = 0;
> +       int depth = real_depth(3);
>
>         if (ops->test_pmd) {
>                 err = ops->test_pmd(addr, end, pmd_offset(pud, 0UL), walk);
> @@ -48,7 +65,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>                 next = pmd_addr_end(addr, end);
>                 if (pmd_none(*pmd)) {
>                         if (ops->pte_hole)
> -                               err = ops->pte_hole(addr, next, walk);
> +                               err = ops->pte_hole(addr, next, depth, walk);
>                         if (err)
>                                 break;
>                         continue;
> @@ -92,6 +109,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>         unsigned long next;
>         const struct mm_walk_ops *ops = walk->ops;
>         int err = 0;
> +       int depth = real_depth(2);
>
>         if (ops->test_pud) {
>                 err = ops->test_pud(addr, end, pud_offset(p4d, 0UL), walk);
> @@ -107,7 +125,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>                 next = pud_addr_end(addr, end);
>                 if (pud_none(*pud)) {
>                         if (ops->pte_hole)
> -                               err = ops->pte_hole(addr, next, walk);
> +                               err = ops->pte_hole(addr, next, depth, walk);
>                         if (err)
>                                 break;
>                         continue;
> @@ -143,6 +161,7 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
>         unsigned long next;
>         const struct mm_walk_ops *ops = walk->ops;
>         int err = 0;
> +       int depth = real_depth(1);
>
>         if (ops->test_p4d) {
>                 err = ops->test_p4d(addr, end, p4d_offset(pgd, 0UL), walk);
> @@ -157,7 +176,7 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
>                 next = p4d_addr_end(addr, end);
>                 if (p4d_none_or_clear_bad(p4d)) {
>                         if (ops->pte_hole)
> -                               err = ops->pte_hole(addr, next, walk);
> +                               err = ops->pte_hole(addr, next, depth, walk);
>                         if (err)
>                                 break;
>                         continue;
> @@ -189,7 +208,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>                 next = pgd_addr_end(addr, end);
>                 if (pgd_none_or_clear_bad(pgd)) {
>                         if (ops->pte_hole)
> -                               err = ops->pte_hole(addr, next, walk);
> +                               err = ops->pte_hole(addr, next, 0, walk);
>                         if (err)
>                                 break;
>                         continue;
> @@ -236,7 +255,7 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
>                 if (pte)
>                         err = ops->hugetlb_entry(pte, hmask, addr, next, walk);
>                 else if (ops->pte_hole)
> -                       err = ops->pte_hole(addr, next, walk);
> +                       err = ops->pte_hole(addr, next, -1, walk);
>
>                 if (err)
>                         break;
> @@ -280,7 +299,7 @@ static int walk_page_test(unsigned long start, unsigned long end,
>         if (vma->vm_flags & VM_PFNMAP) {
>                 int err = 1;
>                 if (ops->pte_hole)
> -                       err = ops->pte_hole(start, end, walk);
> +                       err = ops->pte_hole(start, end, -1, walk);
>                 return err ? err : 1;
>         }
>         return 0;
> --
> 2.20.1
>

It looks good to me.

Tested-by: Zong Li <zong.li@sifive.com>


