From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>, Hugh Dickins <hughd@google.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Jerome Marchand <jmarchan@redhat.com>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Naoya Horiguchi <nao.horiguchi@gmail.com>
Subject: [PATCH -mm v6 01/13] mm/pagewalk: remove pgd_entry() and pud_entry()
Date: Fri, 1 Aug 2014 15:20:37 -0400
Message-ID: <1406920849-25908-2-git-send-email-n-horiguchi@ah.jp.nec.com>
In-Reply-To: <1406920849-25908-1-git-send-email-n-horiguchi@ah.jp.nec.com>

Currently no user of the page table walker sets ->pgd_entry() or
->pud_entry(), so checking for their existence in each loop just wastes
CPU cycles. Remove them to reduce the overhead.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/mm.h | 6 ------
 mm/pagewalk.c      | 9 ++-------
 2 files changed, 2 insertions(+), 13 deletions(-)

diff --git mmotm-2014-07-30-15-57.orig/include/linux/mm.h mmotm-2014-07-30-15-57/include/linux/mm.h
index 368600628d14..4d5bca99a33d 100644
--- mmotm-2014-07-30-15-57.orig/include/linux/mm.h
+++ mmotm-2014-07-30-15-57/include/linux/mm.h
@@ -1094,8 +1094,6 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 
 /**
  * mm_walk - callbacks for walk_page_range
- * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
- * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
  * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
  *	       this handler is required to be able to handle
  *	       pmd_trans_huge() pmds.  They may simply choose to
@@ -1109,10 +1107,6 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
  * (see walk_page_range for more details)
  */
 struct mm_walk {
-	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
-			 unsigned long next, struct mm_walk *walk);
-	int (*pud_entry)(pud_t *pud, unsigned long addr,
-			 unsigned long next, struct mm_walk *walk);
 	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
 			 unsigned long next, struct mm_walk *walk);
 	int (*pte_entry)(pte_t *pte, unsigned long addr,
diff --git mmotm-2014-07-30-15-57.orig/mm/pagewalk.c mmotm-2014-07-30-15-57/mm/pagewalk.c
index 2beeabf502c5..335690650b12 100644
--- mmotm-2014-07-30-15-57.orig/mm/pagewalk.c
+++ mmotm-2014-07-30-15-57/mm/pagewalk.c
@@ -86,9 +86,7 @@ static int walk_pud_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pud_entry)
-			err = walk->pud_entry(pud, addr, next, walk);
-		if (!err && (walk->pmd_entry || walk->pte_entry))
+		if (walk->pmd_entry || walk->pte_entry)
 			err = walk_pmd_range(pud, addr, next, walk);
 		if (err)
 			break;
@@ -234,10 +232,7 @@ int walk_page_range(unsigned long addr, unsigned long end,
 			pgd++;
 			continue;
 		}
-		if (walk->pgd_entry)
-			err = walk->pgd_entry(pgd, addr, next, walk);
-		if (!err &&
-		    (walk->pud_entry || walk->pmd_entry || walk->pte_entry))
+		if (walk->pmd_entry || walk->pte_entry)
 			err = walk_pud_range(pgd, addr, next, walk);
 		if (err)
 			break;
-- 
1.9.3