Subject: Re: [PATCH 1/2] mm/mprotect: use mmu_gather
From: Nadav Amit
Date: Wed, 13 Oct 2021 08:59:04 -0700
To: Peter Xu
Cc: Andrew Morton, LKML, Linux-MM, Andrea Arcangeli, Andrew Cooper, Andy Lutomirski, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao, Nick Piggin, x86@kernel.org
Message-Id: <09F31D01-E818-4538-A6E9-3E4779FC4B53@gmail.com>
References: <20210925205423.168858-1-namit@vmware.com> <20210925205423.168858-2-namit@vmware.com> <2CED2F72-4D1C-4DBC-AC03-4B246E1673C2@gmail.com>

> On Oct 12, 2021, at 4:20 PM, Peter Xu wrote:
> 
> On Tue, Oct 12, 2021 at 10:31:45AM -0700, Nadav Amit wrote:
>> 
>>> On Oct 12, 2021, at 3:16 AM, Peter Xu wrote:
>>> 
>>> On Sat, Sep 25, 2021 at 01:54:22PM -0700, Nadav Amit wrote:
>>>> @@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
>>>> 	struct mm_struct *mm = vma->vm_mm;
>>>> 	pgd_t *pgd;
>>>> 	unsigned long next;
>>>> -	unsigned long start = addr;
>>>> 	unsigned long pages = 0;
>>>> +	struct mmu_gather tlb;
>>>> 
>>>> 	BUG_ON(addr >= end);
>>>> 	pgd = pgd_offset(mm, addr);
>>>> 	flush_cache_range(vma, addr, end);
>>>> 	inc_tlb_flush_pending(mm);
>>>> +	tlb_gather_mmu(&tlb, mm);
>>>> +	tlb_start_vma(&tlb, vma);
>>> 
>>> Pure question:
>>> 
>>> I actually have no idea why tlb_start_vma() is needed here, as protection range
>>> can be just a single page, but anyway.. I do see that tlb_start_vma() contains
>>> a whole-vma flush_cache_range() when the arch needs it, then does it mean that
>>> besides the inc_tlb_flush_pending() to be dropped, so as to the other call to
>>> flush_cache_range() above?
>> 
>> Good point.
>> 
>> tlb_start_vma() and tlb_end_vma() are required since some archs do not
>> batch TLB flushes across VMAs (e.g., ARM).
> 
> Sorry I didn't follow here - as change_protection() is per-vma anyway, so I
> don't see why it needs to consider vma crossing.
> 
> In all cases, it'll be great if you could add some explanation into commit
> message on why we need tlb_{start|end}_vma(), as I think it could not be
> obvious to all people.

tlb_start_vma() is required when we switch from flush_tlb_range() because
certain properties of the VMA (e.g., executable) are needed on certain
archs. That's the reason flush_tlb_range() requires the VMA that is
invalidated to be provided.

Regardless, there is an interface and that is the way it is used. I am not
inclined to break it, even if it was possible, for unclear performance
benefits.
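For reference, a rough sketch of the intended structure (not the exact
patch -- the hunk above only shows the setup), assuming the current
single-argument tlb_gather_mmu()/tlb_finish_mmu() interface; the walker
signatures below are the unmodified ones, with the gather bookkeeping
only indicated in a comment:

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);	/* start batching flushes for this mm */
	tlb_start_vma(&tlb, vma);	/* arch hook: per-VMA cache/TLB setup where needed */
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		/* the change_*_range() walkers would record the touched range in &tlb */
		pages += change_p4d_range(vma, pgd, addr, next, newprot, cp_flags);
	} while (pgd++, addr = next, addr != end);
	tlb_end_vma(&tlb, vma);		/* per-VMA flush on archs that do not batch across VMAs */
	tlb_finish_mmu(&tlb);		/* flush whatever is still pending */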
As I discussed offline with Andrea and David, switching to the
tlb_gather_mmu() interface has additional advantages beyond batching and
avoiding unnecessary flushes on PTE permission promotion (as done in patch
2). If a single PTE is updated out of a bigger range, flush_tlb_range()
currently flushes the whole range instead of the single page. In addition,
once I fix this patch-set, if you update a THP, you would (at least on x86)
be able to flush a single PTE instead of flushing 512 entries (which would
actually be done using a full TLB flush).

As I mentioned in a different thread, and was not upfront about before, one
of my motivations behind this patch-set is that I need a vectored
UFFDIO_WRITEPROTECTV interface for performance. Nevertheless, I think these
two patches stand by themselves and have independent value.
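To make the single-PTE point concrete, a purely hypothetical sketch (not
taken from this series) of what the per-PTE bookkeeping could look like,
using the generic helper from <asm-generic/tlb.h>:

	/* Hypothetical sketch: record only the page that was actually
	 * modified, so tlb_finish_mmu() can flush one entry instead of
	 * the whole mprotect range. */
	ptent = pte_modify(oldpte, newprot);
	ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
	tlb_flush_pte_range(&tlb, addr, PAGE_SIZE);	/* narrows tlb->start/end */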