From: "Aneesh Kumar K.V"
To: Laurent Dufour, mpe@ellerman.id.au, benh@kernel.crashing.org,
	paulus@samba.org, bsingharora@gmail.com, npiggin@gmail.com
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier
In-Reply-To: <42011273e3edc4176dc8045b3956dfd218259143.1487090656.git.ldufour@linux.vnet.ibm.com>
References: <42011273e3edc4176dc8045b3956dfd218259143.1487090656.git.ldufour@linux.vnet.ibm.com>
Date: Tue, 21 Mar 2017 14:42:01 +0530
Message-Id: <877f3jouxq.fsf@skywalker.in.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Laurent Dufour writes:

> In do_page_fault() if handle_mm_fault() returns VM_FAULT_RETRY, retry
> the page fault handling before anything else.
>
> This would simplify the handling of the mmap_sem lock in this part of
> the code.
>
> Signed-off-by: Laurent Dufour
> ---
>  arch/powerpc/mm/fault.c | 67 ++++++++++++++++++++++++++++---------------------
>  1 file changed, 38 insertions(+), 29 deletions(-)
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index ee09604bbe12..2a6bc7e6e69a 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -434,6 +434,26 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>  	 * the fault.
>  	 */
>  	fault = handle_mm_fault(vma, address, flags);
> +
> +	/*
> +	 * Handle the retry right now, the mmap_sem has been released in that
> +	 * case.
> +	 */
> +	if (unlikely(fault & VM_FAULT_RETRY)) {
> +		/* We retry only once */
> +		if (flags & FAULT_FLAG_ALLOW_RETRY) {
> +			/*
> +			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
> +			 * of starvation.
> +			 */
> +			flags &= ~FAULT_FLAG_ALLOW_RETRY;
> +			flags |= FAULT_FLAG_TRIED;
> +			if (!fatal_signal_pending(current))
> +				goto retry;
> +		}
> +		/* We will enter mm_fault_error() below */
> +	}
> +
>  	if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
>  		if (fault & VM_FAULT_SIGSEGV)
>  			goto bad_area;
> @@ -445,38 +465,27 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>  	}

We could make it even simpler by also handling, at that earlier point,
the case where VM_FAULT_RETRY is returned without FAULT_FLAG_ALLOW_RETRY
set, but I guess that can be done later? (A small userspace sketch of
the retry flow follows the quoted diff at the bottom of this mail.)

Reviewed-by: Aneesh Kumar K.V

>
>  	/*
> -	 * Major/minor page fault accounting is only done on the
> -	 * initial attempt. If we go through a retry, it is extremely
> -	 * likely that the page will be found in page cache at that point.
> +	 * Major/minor page fault accounting.
>  	 */
> -	if (flags & FAULT_FLAG_ALLOW_RETRY) {
> -		if (fault & VM_FAULT_MAJOR) {
> -			current->maj_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
> -				      regs, address);
> +	if (fault & VM_FAULT_MAJOR) {
> +		current->maj_flt++;
> +		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
> +			      regs, address);
>  #ifdef CONFIG_PPC_SMLPAR
> -			if (firmware_has_feature(FW_FEATURE_CMO)) {
> -				u32 page_ins;
> -
> -				preempt_disable();
> -				page_ins = be32_to_cpu(get_lppaca()->page_ins);
> -				page_ins += 1 << PAGE_FACTOR;
> -				get_lppaca()->page_ins = cpu_to_be32(page_ins);
> -				preempt_enable();
> -			}
> -#endif /* CONFIG_PPC_SMLPAR */
> -		} else {
> -			current->min_flt++;
> -			perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
> -				      regs, address);
> -		}
> -		if (fault & VM_FAULT_RETRY) {
> -			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
> -			 * of starvation. */
> -			flags &= ~FAULT_FLAG_ALLOW_RETRY;
> -			flags |= FAULT_FLAG_TRIED;
> -			goto retry;
> +		if (firmware_has_feature(FW_FEATURE_CMO)) {
> +			u32 page_ins;
> +
> +			preempt_disable();
> +			page_ins = be32_to_cpu(get_lppaca()->page_ins);
> +			page_ins += 1 << PAGE_FACTOR;
> +			get_lppaca()->page_ins = cpu_to_be32(page_ins);
> +			preempt_enable();
>  		}
> +#endif /* CONFIG_PPC_SMLPAR */
> +	} else {
> +		current->min_flt++;
> +		perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
> +			      regs, address);
>  	}
>
>  	up_read(&mm->mmap_sem);
> -- 
> 2.7.4
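
To make the ordering concrete, here is a small userspace model of the
retry flow the patch introduces. It is only an illustrative sketch: the
flag values and fake_handle_mm_fault() are invented stand-ins, and only
the control flow (check VM_FAULT_RETRY right after the fault handler,
retry at most once by clearing FAULT_FLAG_ALLOW_RETRY and setting
FAULT_FLAG_TRIED) mirrors the kernel code; the fatal_signal_pending()
check is left out.

/*
 * Userspace model of the retry flow -- not kernel code.  The flag
 * values and fake_handle_mm_fault() are invented for illustration.
 */
#include <stdio.h>

#define FAULT_FLAG_ALLOW_RETRY	0x01	/* illustrative values only */
#define FAULT_FLAG_TRIED	0x02
#define VM_FAULT_RETRY		0x04
#define VM_FAULT_ERROR		0x08

/* Stand-in for handle_mm_fault(): asks for one retry, then succeeds. */
static unsigned int fake_handle_mm_fault(unsigned int flags)
{
	return (flags & FAULT_FLAG_TRIED) ? 0 : VM_FAULT_RETRY;
}

static int do_fault_model(void)
{
	unsigned int flags = FAULT_FLAG_ALLOW_RETRY;
	unsigned int fault;

retry:
	fault = fake_handle_mm_fault(flags);

	/* Handle the retry before anything else, as in the patch. */
	if (fault & VM_FAULT_RETRY) {
		if (flags & FAULT_FLAG_ALLOW_RETRY) {
			/* Retry only once to avoid any risk of starvation
			 * (the kernel also checks fatal_signal_pending()). */
			flags &= ~FAULT_FLAG_ALLOW_RETRY;
			flags |= FAULT_FLAG_TRIED;
			goto retry;
		}
		/* Otherwise fall through to the error path below. */
	}

	if (fault & (VM_FAULT_RETRY | VM_FAULT_ERROR))
		return -1;

	/* Major/minor page fault accounting would go here. */
	return 0;
}

int main(void)
{
	printf("fault model returned %d\n", do_fault_model());
	return 0;
}

The benefit, as the commit message says, is that once we are past this
block the mmap_sem state is unambiguous: the VM_FAULT_RETRY case, where
the semaphore was dropped, has already been dealt with, so the
accounting and error paths only ever see the final outcome.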