From: Steven Price <steven.price@arm.com>
To: linux-mm@kvack.org
Cc: "Steven Price" <steven.price@arm.com>,
	"Andy Lutomirski" <luto@kernel.org>,
	"Ard Biesheuvel" <ard.biesheuvel@linaro.org>,
	"Arnd Bergmann" <arnd@arndb.de>,
	"Borislav Petkov" <bp@alien8.de>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Ingo Molnar" <mingo@redhat.com>,
	"James Morse" <james.morse@arm.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Will Deacon" <will.deacon@arm.com>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	"Mark Rutland" <Mark.Rutland@arm.com>,
	"Liang, Kan" <kan.liang@linux.intel.com>
Subject: [PATCH v5 10/19] mm: pagewalk: Add p4d_entry() and pgd_entry()
Date: Thu, 21 Mar 2019 14:19:44 +0000
Message-ID: <20190321141953.31960-11-steven.price@arm.com>
In-Reply-To: <20190321141953.31960-1-steven.price@arm.com>

pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
no users. We're about to add users so reintroduce them, along with
p4d_entry() as we now have 5 levels of tables.

Note that commit a00cc7d9dd93d66a ("mm, x86: add support for PUD-sized
transparent hugepages") already re-added pud_entry() but with different
semantics to the other callbacks. Since there have never been upstream
users of this, revert the semantics back to match the other callbacks.
This means pud_entry() is called for all entries, not just transparent
huge pages.
Signed-off-by: Steven Price <steven.price@arm.com>
---
 include/linux/mm.h |  9 ++++++---
 mm/pagewalk.c      | 27 ++++++++++++++++-----------
 2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 76769749b5a5..2983f2396a72 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1367,10 +1367,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,

 /**
  * mm_walk - callbacks for walk_page_range
+ * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
+ * @p4d_entry: if set, called for each non-empty P4D (1st-level) entry
  * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
- *	       this handler should only handle pud_trans_huge() puds.
- *	       the pmd_entry or pte_entry callbacks will be used for
- *	       regular PUDs.
  * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
  *	       this handler is required to be able to handle
  *	       pmd_trans_huge() pmds.  They may simply choose to
@@ -1390,6 +1389,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
  * (see the comment on walk_page_range() for more details)
  */
 struct mm_walk {
+	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
+			 unsigned long next, struct mm_walk *walk);
+	int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
+			 unsigned long next, struct mm_walk *walk);
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
 			 unsigned long next, struct mm_walk *walk);
 	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c3084ff2569d..98373a9f88b8 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 		}

 		if (walk->pud_entry) {
-			spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
-
-			if (ptl) {
-				err = walk->pud_entry(pud, addr, next, walk);
-				spin_unlock(ptl);
-				if (err)
-					break;
-				continue;
-			}
+			err = walk->pud_entry(pud, addr, next, walk);
+			if (err)
+				break;
 		}

 		split_huge_pud(walk->vma, pud, addr);
@@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pmd_entry || walk->pte_entry)
+		if (walk->p4d_entry) {
+			err = walk->p4d_entry(p4d, addr, next, walk);
+			if (err)
+				break;
+		}
+		if (walk->pud_entry || walk->pmd_entry || walk->pte_entry)
 			err = walk_pud_range(p4d, addr, next, walk);
 		if (err)
 			break;
@@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pmd_entry || walk->pte_entry)
+		if (walk->pgd_entry) {
+			err = walk->pgd_entry(pgd, addr, next, walk);
+			if (err)
+				break;
+		}
+		if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry ||
+		    walk->pte_entry)
 			err = walk_p4d_range(pgd, addr, next, walk);
 		if (err)
 			break;
--
2.20.1