From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 20 Nov 2020 13:40:05 -0700
From: Yu Zhao
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com,
        Catalin Marinas, Minchan Kim, Peter Zijlstra, Linus Torvalds,
        Anshuman Khandual, linux-mm@kvack.org,
        linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling
Message-ID: <20201120204005.GC1303870@google.com>
References: <20201120143557.6715-1-will@kernel.org>
 <20201120143557.6715-7-will@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201120143557.6715-7-will@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Nov 20, 2020 at 02:35:57PM +0000, Will Deacon wrote:
> clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> updating the page-tables for the current mm. However, since the mm is
> not being freed, this can result in stale TLB entries on architectures
> which elide 'fullmm' invalidation.
> 
> Ensure that TLB invalidation is performed after updating soft-dirty
> entries via clear_refs_write() by using the non-fullmm API to MMU
> gather.
> 
> Signed-off-by: Will Deacon
> ---
>  fs/proc/task_mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index a76d339b5754..316af047f1aa 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1238,7 +1238,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  			count = -EINTR;
>  			goto out_mm;
>  		}
> -		tlb_gather_mmu_fullmm(&tlb, mm);
> +		tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);

Let's assume my reply to patch 4 is wrong, and therefore we still need
tlb_gather/finish_mmu() here. But then wouldn't this change deprive
architectures other than ARM of the opportunity to optimize based on the
fact that it's a full-mm flush? It seems to me that ARM's interpretation
of tlb->fullmm is a special case, not the other way around.
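
For context on the trade-off being raised here, below is a condensed
sketch of how tlb->fullmm is typically consumed at flush time. It is
modelled loosely on the kernel's asm-generic and arm64 mmu_gather code
but simplified for illustration: the names generic_tlb_flush() and
arm64_tlb_flush() are invented for the sketch (in the kernel both are
simply tlb_flush(), selected per architecture), and the bodies are not
verbatim kernel source.

/* Sketch in the style of include/asm-generic/tlb.h and
 * arch/arm64/include/asm/tlb.h -- illustrative only. */

/*
 * Typical architecture (asm-generic style): a fullmm gather still does
 * a real invalidation, it is just collapsed into a single, cheaper
 * whole-mm flush instead of a ranged flush.
 */
static inline void generic_tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || tlb->need_flush_all) {
		flush_tlb_mm(tlb->mm);		/* one flush for the whole mm */
	} else if (tlb->end) {
		struct vm_area_struct vma = { .vm_mm = tlb->mm };

		flush_tlb_range(&vma, tlb->start, tlb->end);
	}
}

/*
 * arm64 (simplified): fullmm is taken to mean "this address space is
 * being torn down", so the invalidation is elided unless page tables
 * were freed -- the ASID will not be reused without a full TLB
 * invalidation anyway. That elision is what makes a fullmm gather in
 * clear_refs_write() unsafe, since there the mm lives on.
 */
static inline void arm64_tlb_flush(struct mmu_gather *tlb)
{
	struct vm_area_struct vma = { .vm_mm = tlb->mm };

	if (tlb->fullmm) {
		/* Only the walk-cache needs shooting down here. */
		if (tlb->freed_tables)
			flush_tlb_mm(tlb->mm);
		return;				/* otherwise nothing is flushed */
	}

	flush_tlb_range(&vma, tlb->start, tlb->end);
}

With the hunk above applied, the gather handed to tlb_finish_mmu() is a
ranged one covering (0, TASK_SIZE), so both variants take the ranged
path; the concern in this reply is that architectures following the
generic pattern then lose the single whole-mm flush they would otherwise
have issued for a fullmm gather.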