From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>, Linus Torvalds <torvalds@linux-foundation.org>, peterx@redhat.com, Andrew Morton <akpm@linux-foundation.org>, Will Deacon <will@kernel.org>, Andrea Arcangeli <aarcange@redhat.com>, David Rientjes <rientjes@google.com>, John Hubbard <jhubbard@nvidia.com>, Michael Ellerman <mpe@ellerman.id.au>, Catalin Marinas <catalin.marinas@arm.com>, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 05/25] mm/arm64: Use general page fault accounting
Date: Tue, 7 Jul 2020 18:50:01 -0400
Message-ID: <20200707225021.200906-6-peterx@redhat.com> (raw)
In-Reply-To: <20200707225021.200906-1-peterx@redhat.com>

Use the general page fault accounting by passing regs into
handle_mm_fault().  It naturally solves the issue of multiple page
fault accounting when a page fault retry happens.  To do this, pass
the pt_regs pointer into __do_page_fault().

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will@kernel.org>
CC: linux-arm-kernel@lists.infradead.org
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/mm/fault.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f885940035ce..a3bd189602df 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -404,7 +404,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 #define VM_FAULT_BADACCESS	0x020000
 
 static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
-			   unsigned int mm_flags, unsigned long vm_flags)
+			   unsigned int mm_flags, unsigned long vm_flags,
+			   struct pt_regs *regs)
 {
 	struct vm_area_struct *vma = find_vma(mm, addr);
 
@@ -428,7 +429,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, regs);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)
@@ -450,7 +451,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 {
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
-	vm_fault_t fault, major = 0;
+	vm_fault_t fault;
 	unsigned long vm_flags = VM_ACCESS_FLAGS;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 
@@ -516,8 +517,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, mm_flags, vm_flags);
-	major |= fault & VM_FAULT_MAJOR;
+	fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
@@ -538,25 +538,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	 * Handle the "normal" (no error) case first.
 	 */
 	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
-			      VM_FAULT_BADACCESS)))) {
-		/*
-		 * Major/minor page fault accounting is only done
-		 * once. If we go through a retry, it is extremely
-		 * likely that the page will be found in page cache at
-		 * that point.
-		 */
-		if (major) {
-			current->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs,
-				      addr);
-		} else {
-			current->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs,
-				      addr);
-		}
-
+			      VM_FAULT_BADACCESS))))
 		return 0;
-	}
 
 	/*
 	 * If we are in kernel mode at this point, we have no context to
-- 
2.26.2