From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 11 Aug 2020 18:37:54 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux@armlinux.org.uk,
 mm-commits@vger.kernel.org, peterx@redhat.com,
 torvalds@linux-foundation.org, will@kernel.org
Subject: [patch 144/165] mm/arm: use general page fault accounting
Message-ID: <20200812013754.yW91BSF5F%akpm@linux-foundation.org>
In-Reply-To: <20200811182949.e12ae9a472e3b5e27e16ad6c@linux-foundation.org>
X-Mailing-List: mm-commits@vger.kernel.org

From: Peter Xu <peterx@redhat.com>
Subject: mm/arm: use general page fault accounting

Use the general page fault accounting by passing regs into
handle_mm_fault().  This naturally solves the problem of a fault being
accounted multiple times when the page fault is retried.

To do this, we need to pass the pt_regs pointer into __do_page_fault().

Fix the PERF_COUNT_SW_PAGE_FAULTS perf event manually for page fault
retries by moving it before mmap_sem is taken, so the event is counted
once per fault rather than once per attempt.
Link: http://lkml.kernel.org/r/20200707225021.200906-5-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 arch/arm/mm/fault.c |   25 ++++++-------------------
 1 file changed, 6 insertions(+), 19 deletions(-)

--- a/arch/arm/mm/fault.c~mm-arm-use-general-page-fault-accounting
+++ a/arch/arm/mm/fault.c
@@ -202,7 +202,8 @@ static inline bool access_error(unsigned
 
 static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
-		unsigned int flags, struct task_struct *tsk)
+		unsigned int flags, struct task_struct *tsk,
+		struct pt_regs *regs)
 {
 	struct vm_area_struct *vma;
 	vm_fault_t fault;
@@ -224,7 +225,7 @@ good_area:
 		goto out;
 	}
 
-	return handle_mm_fault(vma, addr & PAGE_MASK, flags, NULL);
+	return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
 
 check_stack:
 	/* Don't allow expansion below FIRST_USER_ADDRESS */
@@ -266,6 +267,8 @@ do_page_fault(unsigned long addr, unsign
 	if ((fsr & FSR_WRITE) && !(fsr & FSR_CM))
 		flags |= FAULT_FLAG_WRITE;
 
+	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
+
 	/*
 	 * As per x86, we may deadlock here.  However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -290,7 +293,7 @@ retry:
 #endif
 	}
 
-	fault = __do_page_fault(mm, addr, fsr, flags, tsk);
+	fault = __do_page_fault(mm, addr, fsr, flags, tsk, regs);
 
 	/* If we need to retry but a fatal signal is pending, handle the
 	 * signal first.  We do not need to release the mmap_lock because
@@ -302,23 +305,7 @@ retry:
 		return 0;
 	}
 
-	/*
-	 * Major/minor page fault accounting is only done on the
-	 * initial attempt.  If we go through a retry, it is extremely
-	 * likely that the page will be found in page cache at that point.
-	 */
-
-	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 	if (!(fault & VM_FAULT_ERROR) && flags & FAULT_FLAG_ALLOW_RETRY) {
-		if (fault & VM_FAULT_MAJOR) {
-			tsk->maj_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
-					regs, addr);
-		} else {
-			tsk->min_flt++;
-			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
-					regs, addr);
-		}
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;
_
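
For review context: the generic accounting this patch delegates to lives in
mm/memory.c.  The sketch below is a simplified paraphrase of the
mm_account_fault() helper introduced earlier in this series; it is not part
of this patch, and the exact upstream code may differ in detail:

static inline void mm_account_fault(struct pt_regs *regs,
				    unsigned long address, unsigned int flags,
				    vm_fault_t ret)
{
	bool major;

	/* Failed faults and not-yet-complete (retrying) faults are
	 * skipped; a retried fault is accounted only once, when it
	 * finally completes. */
	if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY))
		return;

	/* A fault counts as major if the handler flagged it
	 * VM_FAULT_MAJOR, or if it needed a retry. */
	major = (ret & VM_FAULT_MAJOR) || (flags & FAULT_FLAG_TRIED);

	if (major)
		current->maj_flt++;
	else
		current->min_flt++;

	/* GUP callers pass regs == NULL: update the per-task counters
	 * only and skip the perf events. */
	if (!regs)
		return;

	if (major)
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
			      regs, address);
	else
		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
			      regs, address);
}

Because the generic code never emits PERF_COUNT_SW_PAGE_FAULTS itself,
moving that event in do_page_fault() above the point where mmap_sem is
taken (and above the retry: label) keeps it at one count per fault, even
when the fault is retried.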