From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Peter Xu, Martin Cracauer, Mike Rapoport, Hugh Dickins, Jerome Glisse,
    "Kirill A . Shutemov", Matthew Wilcox, Pavel Emelyanov, Brian Geffon,
    Maya Gokhale, Denis Plotnikov, Andrea Arcangeli, Johannes Weiner,
    "Dr . David Alan Gilbert", Linus Torvalds, Mike Kravetz,
    Marty McFadden, David Hildenbrand, Bobby Powers, Mel Gorman
Subject: [PATCH RESEND v6 13/16] mm: Allow VM_FAULT_RETRY for multiple times
Date: Thu, 20 Feb 2020 11:02:46 -0500
Message-Id: <20200220160246.9790-1-peterx@redhat.com>
In-Reply-To: <20200220155353.8676-1-peterx@redhat.com>

The idea comes from a discussion between Linus and Andrea [1].

Before this patch we only allowed a page fault to be retried once.  We
achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when calling
handle_mm_fault() the second time.  This was mainly meant to avoid
unexpected starvation of the system caused by looping forever on the
fault of a single page.
However that should hardly happen: after all, every code path that
returns VM_FAULT_RETRY first waits for some condition to change (and
should probably yield the cpu during that wait) before VM_FAULT_RETRY is
actually returned.

This patch removes the restriction by keeping the FAULT_FLAG_ALLOW_RETRY
flag when we receive VM_FAULT_RETRY.  It means that the page fault
handler can now retry the page fault multiple times if necessary,
without needing to generate another page fault event.  Meanwhile we
still keep the FAULT_FLAG_TRIED flag, so the page fault handler can
still identify whether a page fault is the first attempt or not.

We then have these combinations of fault flags (considering only the
ALLOW_RETRY flag and the TRIED flag):

 - ALLOW_RETRY and !TRIED:  the page fault is allowed to retry, and
                            this is the first try

 - ALLOW_RETRY and TRIED:   the page fault is allowed to retry, and
                            this is not the first try

 - !ALLOW_RETRY and !TRIED: the page fault does not allow retrying at all

 - !ALLOW_RETRY and TRIED:  this is forbidden and should never be used

The existing code has multiple places that take special care of the
first condition above by checking (fault_flags & FAULT_FLAG_ALLOW_RETRY).
This patch introduces a simple helper that detects the first attempt of
a page fault by checking both (fault_flags & FAULT_FLAG_ALLOW_RETRY) and
!(fault_flags & FAULT_FLAG_TRIED), because now even the 2nd try will
have ALLOW_RETRY set, and then uses that helper in all the existing
special paths.  One example is __lock_page_or_retry(): we now drop the
mmap_sem only on the first attempt of the page fault and keep it across
follow-up retries, so the old locking behavior is retained.
This is a nice enhancement to the current code [2], and at the same time
supporting material for the future userfaultfd-writeprotect work, since
in that work there will always be an explicit userfault writeprotect
retry for protected pages, and if that cannot resolve the page fault
(e.g., when userfaultfd-writeprotect is used in conjunction with swapped
pages) then we'll possibly need a 3rd retry of the page fault.  It might
also benefit other potential users who have a similar requirement to
userfault write-protection.

GUP code is not touched yet and will be covered in a follow-up patch.

Please read the thread below for more information.

[1] https://lore.kernel.org/lkml/20171102193644.GB22686@redhat.com/
[2] https://lore.kernel.org/lkml/20181230154648.GB9832@redhat.com/

Suggested-by: Linus Torvalds
Suggested-by: Andrea Arcangeli
Signed-off-by: Peter Xu
---
 arch/alpha/mm/fault.c           |  2 +-
 arch/arc/mm/fault.c             |  1 -
 arch/arm/mm/fault.c             |  3 ---
 arch/arm64/mm/fault.c           |  5 -----
 arch/hexagon/mm/vm_fault.c      |  1 -
 arch/ia64/mm/fault.c            |  1 -
 arch/m68k/mm/fault.c            |  3 ---
 arch/microblaze/mm/fault.c      |  1 -
 arch/mips/mm/fault.c            |  1 -
 arch/nds32/mm/fault.c           |  1 -
 arch/nios2/mm/fault.c           |  3 ---
 arch/openrisc/mm/fault.c        |  1 -
 arch/parisc/mm/fault.c          |  4 +---
 arch/powerpc/mm/fault.c         |  6 ------
 arch/riscv/mm/fault.c           |  5 -----
 arch/s390/mm/fault.c            |  5 +----
 arch/sh/mm/fault.c              |  1 -
 arch/sparc/mm/fault_32.c        |  1 -
 arch/sparc/mm/fault_64.c        |  1 -
 arch/um/kernel/trap.c           |  1 -
 arch/unicore32/mm/fault.c       |  4 +---
 arch/x86/mm/fault.c             |  2 --
 arch/xtensa/mm/fault.c          |  1 -
 drivers/gpu/drm/ttm/ttm_bo_vm.c | 12 ++++++++---
 include/linux/mm.h              | 37 +++++++++++++++++++++++++++++++++
 mm/filemap.c                    |  2 +-
 mm/internal.h                   |  6 +++---
 27 files changed, 54 insertions(+), 57 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index fcfa229cc1e7..c2d7b6d7bac7 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -169,7 +169,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
+			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
 			 * have already released it in __lock_page_or_retry
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 643fad774071..92b339c7adba 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -145,7 +145,6 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	 */
 	if (unlikely((fault & VM_FAULT_RETRY) &&
 		     (flags & FAULT_FLAG_ALLOW_RETRY))) {
-		flags &= ~FAULT_FLAG_ALLOW_RETRY;
 		flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 18ef0b143ac2..b598e6978b29 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -319,9 +319,6 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 					regs, addr);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			* of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index cbb29a43aa7f..1027851d469a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -521,12 +521,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	if (fault & VM_FAULT_RETRY) {
-		/*
-		 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of
-		 * starvation.
-		 */
 		if (mm_flags & FAULT_FLAG_ALLOW_RETRY) {
-			mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			mm_flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index d9e15d941bdb..72334b26317a 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -102,7 +102,6 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 			else
 				current->min_flt++;
 			if (fault & VM_FAULT_RETRY) {
-				flags &= ~FAULT_FLAG_ALLOW_RETRY;
 				flags |= FAULT_FLAG_TRIED;
 				goto retry;
 			}
diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index b5aa4e80c762..30d0c1fca99e 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -167,7 +167,6 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index 182799fd9987..f7afb9897966 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -162,9 +162,6 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index 32da02778a63..3248141f8ed5 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -236,7 +236,6 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index f177da67d940..fd64b135fd7b 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -178,7 +178,6 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write,
 			tsk->min_flt++;
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/nds32/mm/fault.c b/arch/nds32/mm/fault.c
index 2810a4e5ab27..0cf0c08c7da2 100644
--- a/arch/nds32/mm/fault.c
+++ b/arch/nds32/mm/fault.c
@@ -246,7 +246,6 @@ void do_page_fault(unsigned long entry, unsigned long addr,
 				      1, regs, addr);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index c38bea4220fb..ec9d8a9c426f 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -157,9 +157,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 30d5c51e9d40..8af1cc78c4fb 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -181,7 +181,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 		else
 			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index 8e88e5c5f26a..86e8c848f3d7 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -328,14 +328,12 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
-
 			/*
 			 * No need to up_read(&mm->mmap_sem) as we would
 			 * have already released it in __lock_page_or_retry
 			 * in mm/filemap.c.
 			 */
-
+			flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
 	}
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index d7e1f8dc7e4c..d15f0f0ee806 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -590,13 +590,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 	 * case.
 	 */
 	if (unlikely(fault & VM_FAULT_RETRY)) {
-		/* We retry only once */
 		if (flags & FAULT_FLAG_ALLOW_RETRY) {
-			/*
-			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation.
-			 */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index a252d9e38561..be84e32adc4c 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -144,11 +144,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 				      1, regs, addr);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			/*
-			 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation.
-			 */
-			flags &= ~(FAULT_FLAG_ALLOW_RETRY);
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 551ac311bd35..aeccdb30899a 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -513,10 +513,7 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 				fault = VM_FAULT_PFAULT;
 				goto out_up;
 			}
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			 * of starvation. */
-			flags &= ~(FAULT_FLAG_ALLOW_RETRY |
-				   FAULT_FLAG_RETRY_NOWAIT);
+			flags &= ~FAULT_FLAG_RETRY_NOWAIT;
 			flags |= FAULT_FLAG_TRIED;
 			down_read(&mm->mmap_sem);
 			goto retry;
diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index d9c8f2d00a54..13ee4d20e622 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -481,7 +481,6 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 				      regs, address);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/*
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index a91b0c2d84f8..f6e0e601f857 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -261,7 +261,6 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 				      1, regs, address);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index 30653418a672..c0c0dd471b6b 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -449,7 +449,6 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 				      1, regs, address);
 		}
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			/* No need to up_read(&mm->mmap_sem) as we would
diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index c59ad37eacda..8f18cf56b3dd 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -97,7 +97,6 @@ int handle_page_fault(unsigned long address, unsigned long ip,
 			else
 				current->min_flt++;
 			if (fault & VM_FAULT_RETRY) {
-				flags &= ~FAULT_FLAG_ALLOW_RETRY;
 				flags |= FAULT_FLAG_TRIED;
 
 				goto retry;
diff --git a/arch/unicore32/mm/fault.c b/arch/unicore32/mm/fault.c
index 34a90453ca18..a9bd08fbe588 100644
--- a/arch/unicore32/mm/fault.c
+++ b/arch/unicore32/mm/fault.c
@@ -259,9 +259,7 @@ static int do_pf(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 		else
 			tsk->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
-			* of starvation. */
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
+			flags |= FAULT_FLAG_TRIED;
 			goto retry;
 		}
 	}
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 7b6f65333355..4ce647bbe546 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1457,8 +1457,6 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
 	if (unlikely((fault & VM_FAULT_RETRY) &&
 		     (flags & FAULT_FLAG_ALLOW_RETRY))) {
-		/* Retry at most once */
-		flags &= ~FAULT_FLAG_ALLOW_RETRY;
 		flags |= FAULT_FLAG_TRIED;
 		goto retry;
 	}
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index 7d196dc951e8..e7172bd53ced 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -128,7 +128,6 @@ void do_page_fault(struct pt_regs *regs)
 		else
 			current->min_flt++;
 		if (fault & VM_FAULT_RETRY) {
-			flags &= ~FAULT_FLAG_ALLOW_RETRY;
 			flags |= FAULT_FLAG_TRIED;
 
 			 /* No need to up_read(&mm->mmap_sem) as we would
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 389128b8c4dd..cb8829ca6c7f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -59,9 +59,10 @@ static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo,
 
 	/*
 	 * If possible, avoid waiting for GPU with mmap_sem
-	 * held.
+	 * held.  We only do this if the fault allows retry and this
+	 * is the first attempt.
 	 */
-	if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
+	if (fault_flag_allow_retry_first(vmf->flags)) {
 		ret = VM_FAULT_RETRY;
 		if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
 			goto out_unlock;
@@ -135,7 +136,12 @@ vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
 	 * for the buffer to become unreserved.
 	 */
 	if (unlikely(!dma_resv_trylock(bo->base.resv))) {
-		if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
+		/*
+		 * If the fault allows retry and this is the first
+		 * fault attempt, we try to release the mmap_sem
+		 * before waiting
+		 */
+		if (fault_flag_allow_retry_first(vmf->flags)) {
 			if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
 				ttm_bo_get(bo);
 				up_read(&vmf->vma->vm_mm->mmap_sem);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ff653f9136dd..51a886d50758 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -391,6 +391,25 @@ extern pgprot_t protection_map[16];
 * @FAULT_FLAG_REMOTE: The fault is not for current task/mm.
 * @FAULT_FLAG_INSTRUCTION: The fault was during an instruction fetch.
 * @FAULT_FLAG_INTERRUPTIBLE: The fault can be interrupted by non-fatal signals.
+ *
+ * About @FAULT_FLAG_ALLOW_RETRY and @FAULT_FLAG_TRIED: we can specify
+ * whether we would allow page faults to retry by specifying these two
+ * fault flags correctly.  Currently there can be three legal combinations:
+ *
+ * (a) ALLOW_RETRY and !TRIED:  this means the page fault allows retry, and
+ *                              this is the first try
+ *
+ * (b) ALLOW_RETRY and TRIED:   this means the page fault allows retry, and
+ *                              we've already tried at least once
+ *
+ * (c) !ALLOW_RETRY and !TRIED: this means the page fault does not allow retry
+ *
+ * The unlisted combination (!ALLOW_RETRY && TRIED) is illegal and should never
+ * be used.  Note that page faults can be allowed to retry for multiple times,
+ * in which case we'll have an initial fault with flags (a) then later on
+ * continuous faults with flags (b).  We should always try to detect pending
+ * signals before a retry to make sure the continuous page faults can still be
+ * interrupted if necessary.
 */
 #define FAULT_FLAG_WRITE			0x01
 #define FAULT_FLAG_MKWRITE			0x02
@@ -411,6 +430,24 @@ extern pgprot_t protection_map[16];
 			 FAULT_FLAG_KILLABLE | \
 			 FAULT_FLAG_INTERRUPTIBLE)
 
+/**
+ * fault_flag_allow_retry_first - check ALLOW_RETRY the first time
+ *
+ * This is mostly used for places where we want to try to avoid taking
+ * the mmap_sem for too long a time when waiting for another condition
+ * to change, in which case we can try to be polite to release the
+ * mmap_sem in the first round to avoid potential starvation of other
+ * processes that would also want the mmap_sem.
+ *
+ * Return: true if the page fault allows retry and this is the first
+ * attempt of the fault handling; false otherwise.
+ */
+static inline bool fault_flag_allow_retry_first(unsigned int flags)
+{
+	return (flags & FAULT_FLAG_ALLOW_RETRY) &&
+	       (!(flags & FAULT_FLAG_TRIED));
+}
+
 #define FAULT_FLAG_TRACE \
 	{ FAULT_FLAG_WRITE,		"WRITE" }, \
 	{ FAULT_FLAG_MKWRITE,		"MKWRITE" }, \
diff --git a/mm/filemap.c b/mm/filemap.c
index 1784478270e1..590ec3a9f5da 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1386,7 +1386,7 @@ EXPORT_SYMBOL_GPL(__lock_page_killable);
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			 unsigned int flags)
 {
-	if (flags & FAULT_FLAG_ALLOW_RETRY) {
+	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_sem is not released
 		 * even though return 0.
diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..5958cfe50a0c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -377,10 +377,10 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 	/*
 	 * FAULT_FLAG_RETRY_NOWAIT means we don't want to wait on page locks or
 	 * anything, so we only pin the file and drop the mmap_sem if only
-	 * FAULT_FLAG_ALLOW_RETRY is set.
+	 * FAULT_FLAG_ALLOW_RETRY is set, while this is the first attempt.
 	 */
-	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
-	    FAULT_FLAG_ALLOW_RETRY) {
+	if (fault_flag_allow_retry_first(flags) &&
+	    !(flags & FAULT_FLAG_RETRY_NOWAIT)) {
 		fpin = get_file(vmf->vma->vm_file);
 		up_read(&vmf->vma->vm_mm->mmap_sem);
 	}
-- 
2.24.1