Date: Sat, 25 Jul 2020 16:58:43 +0100
From: Catalin Marinas
To: Linus Torvalds
Cc: Yang Shi, Andrew Morton, Johannes Weiner, Hillf Danton, Hugh Dickins,
	Josef Bacik, "Kirill A. Shutemov", Linux-MM, mm-commits@vger.kernel.org,
	Will Deacon, Matthew Wilcox, xuyu@linux.alibaba.com
Subject: Re: [patch 01/15] mm/memory.c: avoid access flag update TLB flush for retried page fault
Message-ID: <20200725155841.GA14490@gaia>
References: <20200723211432.b31831a0df3bc2cbdae31b40@linux-foundation.org>
	<20200724041508.QlTbrHnfh%akpm@linux-foundation.org>
	<7de20d4a-f86c-8e1f-b238-65f02b560325@linux.alibaba.com>

On Fri, Jul 24, 2020 at 06:29:43PM -0700, Linus Torvalds wrote:
> On Fri, Jul 24, 2020 at 5:37 PM Yang Shi wrote:
> > A follow-up question about your comment in the previous email "The
> > notion of "this is a retry, so let's do nothing" is fundamentally
> > wrong.", do you mean it is not safe?
>
> I mean it fails my "smell test".
>
> The patch didn't just avoid the TLB flush, it avoided all the other
> "mark it dirty and young" things too. And that made me go "why would
> RETRY be different in this regard"?
I had a similar concern and couldn't convince myself it's entirely safe.
Even if it is safe now, some mm change in the future may break the current
assumptions. The arm64 do_page_fault() sets FAULT_FLAG_TRIED if a previous
handle_mm_fault() returned VM_FAULT_RETRY. A quick grep for VM_FAULT_RETRY
shows a few more instances than the ones Yang listed. Maybe they are all
safe; I just couldn't get my head around it.

> For any architecture that guarantees that a page fault will always
> flush the old TLB entry for this kind of situation, that
> flush_tlb_fix_spurious_fault() thing can be a no-op.
>
> So that's why on x86, we just do
>
>	#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
>
> and have no issues.
>
> Note that it does *not* need to do any cross-CPU flushing or anything
> like that. So it's actually wrong (I think) to have that default
> fallback for
>
>	#define flush_tlb_fix_spurious_fault(vma, address) \
>		flush_tlb_page(vma, address)
>
> because flush_tlb_page() is the serious "do cross CPU etc".
>
> Does the arm64 flush_tlb_page() perhaps do the whole expensive
> cross-CPU thing rather than the much cheaper "just local invalidate"
> version?

I think it makes sense to have a local-only flush_tlb_fix_spurious_fault(),
but with ptep_set_access_flags() updated to still issue the full broadcast
TLBI. In addition, I added a minor optimisation to avoid the TLB flush if
the old pte was not accessible: in a read-access fault case (followed by
mkyoung), the TLB wouldn't have cached a non-accessible pte (not sure it
makes much difference to Yang's case). Anyway, from ARMv8.1 onwards, the
hardware handles the access flag automatically.

I'm not sure the first dsb(nshst) below is of much use in this case. If we
got a spurious fault, the write to the pte happened on a different CPU
(IIUC, we shouldn't return to user with updated ptes without a TLB flush on
the same CPU). Anyway, we can refine this if it solves Yang's performance
regression.
-------------8<-----------------------
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index d493174415db..d1401cbad7d4 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -268,6 +268,20 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long uaddr)
+{
+	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+
+	dsb(nshst);
+	__tlbi(vale1, addr);
+	__tlbi_user(vale1, addr);
+	dsb(nsh);
+}
+
+#define flush_tlb_fix_spurious_fault(vma, address)	\
+	local_flush_tlb_page(vma, address)
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 8afb238ff335..0accee714cc2 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -218,7 +218,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
 	} while (pteval != old_pteval);
 
-	flush_tlb_fix_spurious_fault(vma, address);
+	if (pte_accessible(vma->vm_mm, pte))
+		flush_tlb_page(vma, address);
+
 	return 1;
 }

-- 
Catalin