From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Linus Torvalds, Gerald Schaefer, Andrea Arcangeli, Will Deacon, peterx@redhat.com, Michael Ellerman
Subject: [PATCH 01/26] mm: Do page fault accounting in handle_mm_fault
Date: Fri, 26 Jun 2020 18:31:05 -0400
Message-Id: <20200626223130.199227-2-peterx@redhat.com>
In-Reply-To: <20200626223130.199227-1-peterx@redhat.com>
References: <20200626223130.199227-1-peterx@redhat.com>

This is a preparation patch to move page fault accounting into the general
code in handle_mm_fault().  This includes both the per-task maj_flt/min_flt
counters and the major/minor page fault perf events.  To do this, the
pt_regs pointer is passed into handle_mm_fault().

PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
handlers.

So far, every pt_regs pointer passed into handle_mm_fault() is NULL, which
means this patch should have no intended functional change.
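To illustrate the calling convention this series moves towards (a minimal
sketch for reviewers; example_do_page_fault is a made-up name, not taken
from any real architecture):

static vm_fault_t example_do_page_fault(struct pt_regs *regs,
					struct vm_area_struct *vma,
					unsigned long address,
					unsigned int flags)
{
	/* The entry-point count stays in the per-arch handler. */
	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);

	/*
	 * Passing regs lets the common code bump maj_flt/min_flt and the
	 * PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] events once the fault
	 * completes; passing NULL (as every caller still does within this
	 * patch) skips the accounting entirely.
	 */
	return handle_mm_fault(vma, address, flags, regs);
}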
Suggested-by: Linus Torvalds
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c         |  2 +-
 arch/arc/mm/fault.c           |  2 +-
 arch/arm/mm/fault.c           |  2 +-
 arch/arm64/mm/fault.c         |  2 +-
 arch/csky/mm/fault.c          |  3 +-
 arch/hexagon/mm/vm_fault.c    |  2 +-
 arch/ia64/mm/fault.c          |  2 +-
 arch/m68k/mm/fault.c          |  2 +-
 arch/microblaze/mm/fault.c    |  2 +-
 arch/mips/mm/fault.c          |  2 +-
 arch/nds32/mm/fault.c         |  2 +-
 arch/nios2/mm/fault.c         |  2 +-
 arch/openrisc/mm/fault.c      |  2 +-
 arch/parisc/mm/fault.c        |  2 +-
 arch/powerpc/mm/copro_fault.c |  2 +-
 arch/powerpc/mm/fault.c       |  2 +-
 arch/riscv/mm/fault.c         |  2 +-
 arch/s390/mm/fault.c          |  2 +-
 arch/sh/mm/fault.c            |  2 +-
 arch/sparc/mm/fault_32.c      |  4 +--
 arch/sparc/mm/fault_64.c      |  2 +-
 arch/um/kernel/trap.c         |  2 +-
 arch/unicore32/mm/fault.c     |  2 +-
 arch/x86/mm/fault.c           |  2 +-
 arch/xtensa/mm/fault.c        |  2 +-
 drivers/iommu/amd_iommu_v2.c  |  2 +-
 drivers/iommu/intel-svm.c     |  2 +-
 include/linux/mm.h            |  7 ++--
 mm/gup.c                      |  4 +--
 mm/hmm.c                      |  3 +-
 mm/ksm.c                      |  3 +-
 mm/memory.c                   | 62 ++++++++++++++++++++++++++++++++++-
 32 files changed, 101 insertions(+), 35 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index c2d7b6d7bac7..82e72f24486e 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -148,7 +148,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* If for any reason at all we couldn't handle the fault,
 	   make sure we exit gracefully rather than endlessly redo
 	   the fault.  */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 92b339c7adba..34380139e7a2 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -131,7 +131,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 		goto bad_area;
 	}
 
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 2dd5c41cbb8d..0d6be0f4f27c 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -223,7 +223,7 @@ __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 		goto out;
 	}
 
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
 
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c9cedc0432d2..5f6607b951b8 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -422,7 +422,7 @@ static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
 	 */
 	if (!(vma->vm_flags & vm_flags))
 		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags);
+	return handle_mm_fault(vma, addr & PAGE_MASK, mm_flags, NULL);
 }
 
 static bool is_el0_instruction_abort(unsigned int esr)
diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index 4e6dc68f3258..b14f97d3cb15 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -150,7 +150,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0);
+	fault = handle_mm_fault(vma, address, write ? FAULT_FLAG_WRITE : 0,
+				NULL);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		if (fault & VM_FAULT_OOM)
 			goto out_of_memory;
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index 72334b26317a..f04cd0a6d905 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -89,7 +89,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 		break;
 	}
 
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index 30d0c1fca99e..caa93e083c9d 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -139,7 +139,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 	 * sure we exit gracefully rather than endlessly redo the
 	 * fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index 3bfb5c8ac3c7..2db38dfbc00c 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -135,7 +135,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
 
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	pr_debug("handle_mm_fault returns %x\n", fault);
 
 	if (fault_signal_pending(fault, regs))
diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 3248141f8ed5..9abfa5224386 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -215,7 +215,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index f8d62cd83b36..31c2afb8f8a5 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -152,7 +152,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index f331e533edc2..22527129025c 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -207,7 +207,7 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 	 * the fault.
 	 */
 
-	fault = handle_mm_fault(vma, addr, flags);
+	fault = handle_mm_fault(vma, addr, flags, NULL);
 
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index ec9d8a9c426f..88abf297c759 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -131,7 +131,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 8af1cc78c4fb..45aedc572361 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -159,7 +159,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * the fault.
 	 */
 
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index 86e8c848f3d7..c10908ea8803 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -302,7 +302,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 	 * fault.
 	 */
 
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
index beb060b96632..c0478bef1f14 100644
--- a/arch/powerpc/mm/copro_fault.c
+++ b/arch/powerpc/mm/copro_fault.c
@@ -64,7 +64,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
 	}
 
 	ret = 0;
-	*flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0);
+	*flt = handle_mm_fault(vma, ea, is_write ? FAULT_FLAG_WRITE : 0, NULL);
 	if (unlikely(*flt & VM_FAULT_ERROR)) {
 		if (*flt & VM_FAULT_OOM) {
 			ret = -ENOMEM;
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 84af6c8eecf7..992b10c3761c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -563,7 +563,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 #ifdef CONFIG_PPC_MEM_KEYS
 	/*
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index be84e32adc4c..677ee1bb11ac 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -110,7 +110,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, addr, flags);
+	fault = handle_mm_fault(vma, addr, flags, NULL);
 
 	/*
 	 * If we need to retry but a fatal signal is pending, handle the
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index dedc28be27ab..ab6d7eedcfab 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -479,7 +479,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	if (fault_signal_pending(fault, regs)) {
 		fault = VM_FAULT_SIGNAL;
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index 5f23d7907597..a4e670a9c9b3 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -464,7 +464,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (unlikely(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR)))
 		if (mm_fault_error(regs, error_code, address, fault))
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index f6e0e601f857..61524d284706 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -235,7 +235,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
@@ -411,7 +411,7 @@ static void force_user_fault(unsigned long address, int write)
 		if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
 			goto bad_area;
 	}
-	switch (handle_mm_fault(vma, address, flags)) {
+	switch (handle_mm_fault(vma, address, flags, NULL)) {
 	case VM_FAULT_SIGBUS:
 	case VM_FAULT_OOM:
 		goto do_sigbus;
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index c0c0dd471b6b..6b702a0a8155 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -423,7 +423,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 		goto bad_area;
 	}
 
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		goto exit_exception;
diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index 8f18cf56b3dd..32cc8f59322b 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -75,7 +75,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
 	do {
 		vm_fault_t fault;
 
-		fault = handle_mm_fault(vma, address, flags);
+		fault = handle_mm_fault(vma, address, flags, NULL);
 
 		if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
 			goto out_nosemaphore;
diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 3022104aa613..847ff24fcc2a 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -186,7 +186,7 @@ static vm_fault_t __do_pf(struct mm_struct *mm, unsigned long addr,
 	 * If for any reason at all we couldn't handle the fault, make
 	 * sure we exit gracefully rather than endlessly redo the fault.
 	 */
-	fault = handle_mm_fault(vma, addr & PAGE_MASK, flags);
+	fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
 	return fault;
 
 check_stack:
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index a51df516b87b..3e27ed85af06 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1461,7 +1461,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 	major |= fault & VM_FAULT_MAJOR;
 
 	/* Quick path to respond to signals */
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index e7172bd53ced..722ef3c98d60 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -108,7 +108,7 @@ void do_page_fault(struct pt_regs *regs)
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault(vma, address, flags, NULL);
 
 	if (fault_signal_pending(fault, regs))
 		return;
diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
index d6d85debd01b..66042b816943 100644
--- a/drivers/iommu/amd_iommu_v2.c
+++ b/drivers/iommu/amd_iommu_v2.c
@@ -497,7 +497,7 @@ static void do_fault(struct work_struct *work)
 	if (access_error(vma, fault))
 		goto out;
 
-	ret = handle_mm_fault(vma, address, flags);
+	ret = handle_mm_fault(vma, address, flags, NULL);
 out:
 	up_read(&mm->mmap_sem);
 
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index 2998418f0a38..c9cb5e5b6c34 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -629,7 +629,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 			goto invalid;
 
 		ret = handle_mm_fault(vma, address,
-				      req->wr_req ? FAULT_FLAG_WRITE : 0);
+				      req->wr_req ? FAULT_FLAG_WRITE : 0, NULL);
 		if (ret & VM_FAULT_ERROR)
 			goto invalid;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f3fe7371855c..46bee4044ac1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct file_ra_state;
 struct user_struct;
 struct writeback_control;
 struct bdi_writeback;
+struct pt_regs;
 
 void init_mm_internals(void);
 
@@ -1652,7 +1653,8 @@ int invalidate_inode_page(struct page *page);
 
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags);
+			unsigned long address, unsigned int flags,
+			struct pt_regs *regs);
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
@@ -1662,7 +1664,8 @@ void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 #else
 static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-		unsigned long address, unsigned int flags)
+		unsigned long address, unsigned int flags,
+		struct pt_regs *regs)
 {
 	/* should never happen if there's no MMU */
 	BUG();
diff --git a/mm/gup.c b/mm/gup.c
index 87a6a59fe667..1a48c639ea49 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -876,7 +876,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_TRIED;
 	}
 
-	ret = handle_mm_fault(vma, address, fault_flags);
+	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	if (ret & VM_FAULT_ERROR) {
 		int err = vm_fault_to_errno(ret, *flags);
 
@@ -1222,7 +1222,7 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 	    fatal_signal_pending(current))
 		return -EINTR;
 
-	ret = handle_mm_fault(vma, address, fault_flags);
+	ret = handle_mm_fault(vma, address, fault_flags, NULL);
 	major |= ret & VM_FAULT_MAJOR;
 	if (ret & VM_FAULT_ERROR) {
 		int err = vm_fault_to_errno(ret, 0);
diff --git a/mm/hmm.c b/mm/hmm.c
index 280585833adf..5fca59a1f6e9 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -90,7 +90,8 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 	}
 
 	for (; addr < end; addr += PAGE_SIZE)
-		if (handle_mm_fault(vma, addr, fault_flags) & VM_FAULT_ERROR)
+		if (handle_mm_fault(vma, addr, fault_flags, NULL) &
+		    VM_FAULT_ERROR)
 			return -EFAULT;
 	return -EBUSY;
 }
diff --git a/mm/ksm.c b/mm/ksm.c
index 281c00129a2e..2e2b02abcc0f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -480,7 +480,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 			break;
 		if (PageKsm(page))
 			ret = handle_mm_fault(vma, addr,
-					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE);
+					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+					      NULL);
 		else
 			ret = VM_FAULT_WRITE;
 		put_page(page);
diff --git a/mm/memory.c b/mm/memory.c
index f703fe8c8346..4a9b333b079e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -71,6 +71,8 @@
 #include <linux/dax.h>
 #include <linux/oom.h>
 #include <linux/numa.h>
+#include <linux/perf_event.h>
+#include <linux/ptrace.h>
 
 #include <trace/events/kmem.h>
 
@@ -4345,6 +4347,36 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	return handle_pte_fault(&vmf);
 }
 
+/**
+ * mm_account_fault - Do page fault accounting
+ * @regs: the pt_regs struct pointer.  When set to NULL, will skip accounting
+ * @address: faulted address.
+ * @major: whether this is a major fault.
+ *
+ * This will take care of most of the page fault accounting.  It should only
+ * be called when a page fault is completed.  For example, VM_FAULT_RETRY means
+ * the fault needs to be retried again later, so it should not contribute to
+ * the accounting.
+ *
+ * The accounting will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN]
+ * perf counter updates.  Note: the handling of PERF_COUNT_SW_PAGE_FAULTS
+ * should still be in per-arch page fault handlers at the entry of page fault.
+ */
+static inline void mm_account_fault(struct pt_regs *regs,
+				    unsigned long address, bool major)
+{
+	if (!regs)
+		return;
+
+	if (major) {
+		current->maj_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address);
+	} else {
+		current->min_flt++;
+		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address);
+	}
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -4352,7 +4384,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
  * return value.  See filemap_fault() and __lock_page_or_retry().
  */
 vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+		unsigned int flags, struct pt_regs *regs)
 {
 	vm_fault_t ret;
 
@@ -4393,6 +4425,34 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		mem_cgroup_oom_synchronize(false);
 	}
 
+	if (ret & (VM_FAULT_RETRY | VM_FAULT_ERROR))
+		return ret;
+
+	/*
+	 * Do accounting in the common code, to avoid unnecessary
+	 * architecture differences or duplicated code.
+	 *
+	 * We arbitrarily make the rules be:
+	 *
+	 *  - Unsuccessful faults do not count (e.g. when the address wasn't
+	 *    valid).  That includes arch_vma_access_permitted() failing above.
+	 *
+	 *    So this is expressly not a "this many hardware page faults"
+	 *    counter.  Use the hw profiling for that.
+	 *
+	 *  - Incomplete faults do not count (e.g. RETRY).  They will only
+	 *    count once completed.
+	 *
+	 *  - The fault counts as a "major" fault when the final successful
+	 *    fault is VM_FAULT_MAJOR, or if it was a retry (which implies that
+	 *    we couldn't handle it immediately previously).
+	 *
+	 *  - If the fault is done for GUP, regs will be NULL and no accounting
+	 *    will be done.
+	 */
+	mm_account_fault(regs, address, (ret & VM_FAULT_MAJOR) ||
+					(flags & FAULT_FLAG_TRIED));
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_mm_fault);
-- 
2.26.2
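[Editorial note: the per-task maj_flt/min_flt counters that this patch
centralizes are the same values user space already observes through
getrusage(2) as ru_majflt/ru_minflt.  A standalone userspace sanity check
(assumes a Linux/libc environment; not part of the patch):]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

/*
 * Touching freshly allocated memory for the first time should raise the
 * task's minor fault count (task->min_flt in the kernel, ru_minflt here).
 */
int main(void)
{
	struct rusage before, after;
	size_t len = 64 * 1024 * 1024;
	char *buf;

	getrusage(RUSAGE_SELF, &before);

	buf = malloc(len);
	if (!buf)
		return 1;
	memset(buf, 1, len);	/* first touch => minor faults */

	getrusage(RUSAGE_SELF, &after);
	printf("minor faults: %ld -> %ld, major faults: %ld -> %ld\n",
	       before.ru_minflt, after.ru_minflt,
	       before.ru_majflt, after.ru_majflt);
	free(buf);
	return 0;
}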