From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 23 Jul 2020 21:15:08 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, catalin.marinas@arm.com, hannes@cmpxchg.org, hdanton@sina.com, hughd@google.com, josef@toxicpanda.com, kirill.shutemov@linux.intel.com, linux-mm@kvack.org, mm-commits@vger.kernel.org, torvalds@linux-foundation.org, will.deacon@arm.com, willy@infradead.org, xuyu@linux.alibaba.com, yang.shi@linux.alibaba.com
Subject: [patch 01/15] mm/memory.c: avoid access flag update TLB flush for retried page fault
Message-ID: <20200724041508.QlTbrHnfh%akpm@linux-foundation.org>
In-Reply-To: <20200723211432.b31831a0df3bc2cbdae31b40@linux-foundation.org>
X-Mailing-List: mm-commits@vger.kernel.org

From: Yang Shi
Subject: mm/memory.c: avoid access flag update TLB flush for retried page fault

Recently we found a regression when running the will_it_scale/page_fault3 test on ARM64: over 70% down for the multi-process cases and over 20% down for the multi-thread cases.  It turns out the regression is caused by commit 89b15332af7c0312a41e50846819ca6613b58b4c ("mm: drop mmap_sem before calling balance_dirty_pages() in write fault").  The test mmaps a memory-sized file and then writes to the mapping; this dirties all of the memory and triggers dirty-page throttling, and that commit makes the write fault release mmap_sem and retry the page fault.
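For reference, a minimal user-space sketch of that kind of workload follows. This is not the actual will_it_scale/page_fault3 source; the file path, mapping size, and page-size constant are illustrative assumptions only:

	#include <fcntl.h>
	#include <stddef.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 1UL << 30;	/* illustrative: 1 GiB file-backed mapping */
		int fd = open("/tmp/page_fault3.dat", O_RDWR | O_CREAT, 0644);
		char *p;

		if (fd < 0 || ftruncate(fd, len) < 0)
			return 1;

		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;

		/*
		 * Each store write-faults and dirties a shared file page; once
		 * enough pages are dirty, the write fault drops mmap_sem in
		 * balance_dirty_pages() and is retried.
		 */
		for (size_t off = 0; off < len; off += 4096)
			p[off] = 1;

		munmap(p, len);
		close(fd);
		return 0;
	}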
The retried page fault sees the correct PTEs installed by the first attempt, then updates the dirty bit, clears the read-only bit, and flushes the TLBs on ARM.  The regression is caused by this excessive TLB flushing.  It is fine on x86 since x86 doesn't clear the read-only bit, so there is no need to flush the TLB for this case.

A page fault may be retried due to:

1. Waiting for page readahead
2. Waiting for a page to be swapped in
3. Waiting for dirty-page throttling

The first two cases don't have PTEs set up at all, so the retried page fault installs the PTEs and never reaches this code.  But case #3 usually has PTEs installed, so the retried page fault does reach the dirty-bit and read-only-bit update.  It seems unnecessary to modify those bits again for case #3, since they should already have been set by the first page fault attempt.

Of course a parallel page fault may set up the PTEs, but we only need to care about write faults.  If the parallel page fault set up a writable and dirty PTE, the retried fault doesn't need to do anything extra.  If the parallel page fault set up a clean read-only PTE, the retried fault should just call do_wp_page() and return, as the code snippet below shows:

	if (vmf->flags & FAULT_FLAG_WRITE) {
		if (!pte_write(entry))
			return do_wp_page(vmf);
	}

With this fix the test results get back to normal.

Link: http://lkml.kernel.org/r/1594148072-91273-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Yang Shi
Reported-by: Xu Yu
Debugged-by: Xu Yu
Tested-by: Xu Yu
Cc: Johannes Weiner
Cc: Matthew Wilcox (Oracle)
Cc: Kirill A. Shutemov
Cc: Josef Bacik
Cc: Hillf Danton
Cc: Hugh Dickins
Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 mm/memory.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/mm/memory.c~mm-avoid-access-flag-update-tlb-flush-for-retried-page-fault
+++ a/mm/memory.c
@@ -4241,8 +4241,13 @@ static vm_fault_t handle_pte_fault(struc
 	if (vmf->flags & FAULT_FLAG_WRITE) {
 		if (!pte_write(entry))
 			return do_wp_page(vmf);
-		entry = pte_mkdirty(entry);
 	}
+
+	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vmf->flags & FAULT_FLAG_TRIED))
+		entry = pte_mkdirty(entry);
+	else if (vmf->flags & FAULT_FLAG_TRIED)
+		goto unlock;
+
 	entry = pte_mkyoung(entry);
 	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
 				vmf->flags & FAULT_FLAG_WRITE)) {
_
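For readability, this is roughly how the affected part of handle_pte_fault() reads once the hunk above is applied; it is reconstructed from the diff context with explanatory comments added, and the surrounding code is elided:

	if (vmf->flags & FAULT_FLAG_WRITE) {
		if (!pte_write(entry))
			return do_wp_page(vmf);
	}

	/*
	 * Only mark the PTE dirty on the first attempt at a write fault.
	 * A retried fault (FAULT_FLAG_TRIED) already had its dirty and
	 * young bits set by the first attempt, so skip the access flag
	 * update (and the TLB flush it triggers on arm64) entirely.
	 */
	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vmf->flags & FAULT_FLAG_TRIED))
		entry = pte_mkdirty(entry);
	else if (vmf->flags & FAULT_FLAG_TRIED)
		goto unlock;

	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				vmf->flags & FAULT_FLAG_WRITE)) {
		/* ... update and flush as before ... */
	}

The net effect is that a retried write fault, whose PTE was already made dirty and young by the first attempt, no longer goes through ptep_set_access_flags() and therefore avoids the TLB flush that caused the regression on ARM64.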