Date: Mon, 23 Nov 2020 12:21:16 -0700
From: Yu Zhao
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
	Catalin Marinas, Minchan Kim, Peter Zijlstra, Linus Torvalds,
	Anshuman Khandual, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 4/6] mm: proc: Invalidate TLB after clearing soft-dirty page state
Message-ID: <20201123192116.GA3883038@google.com>
References: <20201120143557.6715-1-will@kernel.org>
 <20201120143557.6715-5-will@kernel.org>
 <20201120202253.GB1303870@google.com>
 <20201121024922.GA1363491@google.com>
In-Reply-To: <20201121024922.GA1363491@google.com>

On Fri, Nov 20, 2020 at 07:49:22PM -0700, Yu Zhao wrote:
> On Fri, Nov 20, 2020 at 01:22:53PM -0700, Yu Zhao wrote:
> > On Fri, Nov 20, 2020 at 02:35:55PM +0000, Will Deacon wrote:
> > > Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double flush"),
> > > TLB invalidation is elided in tlb_finish_mmu() if no entries were batched
> > > via the tlb_remove_*() functions. Consequently, the page-table modifications
> > > performed by clear_refs_write() in response to a write to
> > > /proc/<pid>/clear_refs do not perform TLB invalidation. Although this is
> > > fine when simply aging the ptes, in the case of clearing the "soft-dirty"
> > > state we can end up with entries where pte_write() is false, yet a
> > > writable mapping remains in the TLB.

I double checked my conclusion and I think it holds. But let me
correct some typos and add a summary.

> > I don't think we need a TLB flush in this context, same reason as we
                                ^^^^^ gather
> > don't have one in copy_present_pte(), which uses ptep_set_wrprotect()
> > to write-protect a src PTE.
> >
> > ptep_modify_prot_start/commit() and ptep_set_wrprotect() guarantee
> > that either the dirty bit is set (when a PTE is still writable) or a
> > PF happens (when a PTE has become r/o) when the h/w page table walker
> > races with the kernel modifying a PTE using these two APIs.
>
> After we remove the writable bit, if we end up with a clean PTE, any
> subsequent write will trigger a page fault. We can't have a stale
> writable tlb entry. The architecture-specific APIs guarantee this.
>
> If we end up with a dirty PTE, then yes, there will be a stale
> writable tlb entry. But this won't be a problem because when we
> write-protect a page (not PTE), we always check both pte_dirty()
> and pte_write(), i.e., write_protect_page() and page_mkclean_one().
> When they see this dirty PTE, they will flush. And generally, only
> callers of pte_mkclean() should flush tlb; otherwise we end up with
> one extra flush if callers of pte_mkclean() and pte_wrprotect() both
> flush.
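To make this concrete, here is a minimal sketch of the check-and-flush
pattern I mean, modeled on write_protect_page() and page_mkclean_one().
It is not verbatim kernel code: the helper name is made up, and the
page-table locking and pvmw setup that the real callers do are omitted.

	/* Assumes kernel mm context (linux/mm.h); illustration only. */
	static void wrprotect_and_clean(struct vm_area_struct *vma,
					unsigned long addr, pte_t *ptep)
	{
		pte_t entry = *ptep;

		/*
		 * pte_dirty() may mean a stale writable tlb entry still
		 * exists even though pte_write() is already false, so
		 * checking pte_write() alone is not enough: flush before
		 * clearing either bit.
		 */
		if (pte_write(entry) || pte_dirty(entry)) {
			entry = ptep_clear_flush(vma, addr, ptep);
			entry = pte_wrprotect(pte_mkclean(entry));
			set_pte_at(vma->vm_mm, addr, ptep, entry);
		}
	}
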
> Now let's take a step back and see why we got
> tlb_gather/finish_mmu() here in the first place. Commit b3a81d0841a95
> ("mm: fix KSM data corruption") explains the problem clearly. But
> to fix a problem created by two threads clearing pte_write() and
> pte_dirty() independently, we only need one of them to set
> mm_tlb_flush_pending(). Given only removing the writable bit requires
                                              ^^^^^^^^ dirty
> a tlb flush, that thread should be the one, as I just explained.
> Adding tlb_gather/finish_mmu() is unnecessary in that fix. And there
> is no point in having the original flush_tlb_mm() either, given data
> integrity is already guaranteed.

(i.e., writable tlb entries are flushed when removing the dirty bit.)

> Of course, with it we have more accurate access tracking.
>
> Does a similar problem exist for page_mkclean_one()? Possibly. It
> checks pte_dirty() and pte_write() but not mm_tlb_flush_pending().
> At the moment, madvise_free_pte_range() only supports anonymous
> memory, which doesn't do writeback. But the missing
> mm_tlb_flush_pending() check just seems to be an accident waiting to
> happen. E.g., clean_record_pte() calls pte_mkclean() and does batched
> flushing. I don't know what it's for, but if it's called on file
> VMAs, a similar race involving 4 CPUs can happen. This time CPU 1
> runs clean_record_pte() and CPU 3 runs page_mkclean_one().

To summarize, IMO, we should
1) remove tlb_gather/finish_mmu() here;
2) check mm_tlb_flush_pending() in page_mkclean_one() (sketched below)
   and dax_entry_mkclean().
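For 2), the check could look something like the following. This is a
hypothetical sketch against the pte-mapped path of page_mkclean_one()
(inside its page_vma_mapped_walk() loop, where vma, address, pte and
entry already exist); the pmd path would need the same treatment.

	/*
	 * Hypothetical addition: a clean, read-only PTE is normally
	 * safe to skip, but if another thread has a flush pending,
	 * e.g., it just cleared pte_write()/pte_dirty() and hasn't
	 * flushed yet, a stale writable tlb entry may still exist,
	 * so don't skip in that window.
	 */
	if (!pte_dirty(*pte) && !pte_write(*pte) &&
	    !mm_tlb_flush_pending(vma->vm_mm))
		continue;

	flush_cache_page(vma, address, pte_pfn(*pte));
	entry = ptep_clear_flush(vma, address, pte);
	entry = pte_wrprotect(pte_mkclean(entry));
	set_pte_at(vma->vm_mm, address, pte, entry);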