Date: Mon, 23 Nov 2020 14:22:37 +0000
From: Catalin Marinas
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Yu Zhao,
	Minchan Kim, Peter Zijlstra, Linus Torvalds, Anshuman Khandual,
	linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	stable@vger.kernel.org
Subject: Re: [PATCH 2/6] arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()
Message-ID: <20201123142237.GF17833@gaia>
In-Reply-To: <20201120143557.6715-3-will@kernel.org>
References: <20201120143557.6715-1-will@kernel.org> <20201120143557.6715-3-will@kernel.org>

On Fri, Nov 20, 2020 at 02:35:53PM +0000, Will Deacon wrote:
> With hardware dirty bit management, calling pte_wrprotect() on a writable,
> dirty PTE will lose the dirty state and return a read-only, clean entry.
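(For context: with DBM enabled, a hardware-dirty pte is one with PTE_WRITE
set and PTE_RDONLY clear, so blindly setting PTE_RDONLY throws the dirty
information away. A minimal sketch of the preservation the patch is after,
simplified and not necessarily the exact diff:

	static inline pte_t pte_wrprotect(pte_t pte)
	{
		/*
		 * Latch the hardware dirty state (PTE_WRITE/DBM set,
		 * PTE_RDONLY clear) into the software PTE_DIRTY bit
		 * before making the entry read-only, so pte_dirty()
		 * still reports it afterwards.
		 */
		if (pte_hw_dirty(pte))
			pte = pte_mkdirty(pte);

		pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
		pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
		return pte;
	}

)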
My assumption at the time was that the caller of pte_wrprotect() had
already moved the 'dirty' information to the underlying page, and indeed
most pte_wrprotect() calls also do a pte_mkclean(). However, that doesn't
always seem to be the case (soft-dirty, though we don't support it yet).

I was worried that we may inadvertently set the dirty bit when doing a
pte_wrprotect() on a freshly created pte (one not read back from memory,
for example in __split_huge_pmd_locked()), but I think all our __P* and
__S* attributes start out with PTE_RDONLY set, so pte_hw_dirty() returns
false for them.

A test for mm/debug_vm_pgtable.c, something like:

	for (i = 0; i < ARRAY_SIZE(protection_map); i++) {
		pte = pfn_pte(pfn, protection_map[i]);
		WARN_ON(pte_dirty(pte_wrprotect(pte)));
	}

(I'll leave this to Anshuman ;))

> Move the logic from ptep_set_wrprotect() into pte_wrprotect() to ensure that
> the dirty bit is preserved for writable entries, as this is required for
> soft-dirty bit management if we enable it in the future.
>
> Cc:
> Signed-off-by: Will Deacon

I think this could go back as far as the hardware AF/DBM support (v4.3):

Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits")

If you limit this fix to 4.14, you probably don't need additional commits.
Otherwise, you need at least this one:

3bbf7157ac66 ("arm64: Convert pte handling from inline asm to using (cmp)xchg")

and a slightly more intrusive one:

73e86cb03cf2 ("arm64: Move PTE_RDONLY bit handling out of set_pte_at()")

We also had some earlier attempts at fixing ptep_set_wrprotect():

64c26841b349 ("arm64: Ignore hardware dirty bit updates in ptep_set_wrprotect()")

fixed subsequently by:

8781bcbc5e69 ("arm64: mm: Fix pte_mkclean, pte_mkdirty semantics")

I hope that at some point we'll understand how all this works ;).

For this patch:

Reviewed-by: Catalin Marinas
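P.S. The loop above could live in a small debug_vm_pgtable-style helper
along these lines (a sketch only; the helper name and how pfn is obtained
are made up here, the real hook-up in mm/debug_vm_pgtable.c is left to
Anshuman):

	/*
	 * Wrprotecting a clean pte built from any of the base
	 * protections must not turn it dirty.
	 */
	static void __init pte_wrprotect_dirty_tests(unsigned long pfn)
	{
		unsigned long i;
		pte_t pte;

		for (i = 0; i < ARRAY_SIZE(protection_map); i++) {
			pte = pfn_pte(pfn, protection_map[i]);
			WARN_ON(pte_dirty(pte_wrprotect(pte)));
		}
	}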