From mboxrd@z Thu Jan 1 00:00:00 1970
To: Nadav Amit
Cc: Andrew Morton, LKML, Linux-MM, Peter Xu, Andrea Arcangeli,
 Andrew Cooper, Andy Lutomirski, Dave Hansen, Peter Zijlstra,
 Thomas Gleixner, Will Deacon, Yu Zhao, Nick Piggin, "x86@kernel.org"
References: <20210925205423.168858-1-namit@vmware.com>
 <20210925205423.168858-3-namit@vmware.com>
 <5485fae5-3cd6-9dc3-0579-dc8aab8a3de1@redhat.com>
 <5356D62E-1900-4E92-AF23-AA5625EFFD92@vmware.com>
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH 2/2] mm/mprotect: do not flush on permission promotion
Message-ID: <1952fc7c-fb21-7d0e-661b-afa59b4580e5@redhat.com>
Date: Thu, 7 Oct 2021 19:07:24 +0200
In-Reply-To: <5356D62E-1900-4E92-AF23-AA5625EFFD92@vmware.com>

On 07.10.21 18:16, Nadav Amit wrote:
> 
>> On Oct 7, 2021, at 5:13 AM, David Hildenbrand wrote:
>>
>> On 25.09.21 22:54, Nadav Amit wrote:
>>> From: Nadav Amit
>>>
>>> Currently, using mprotect() to unprotect a memory region or uffd to
>>> unprotect a memory region causes a TLB flush. At least on x86, as
>>> protection is promoted, no TLB flush is needed.
>>>
>>> Add an arch-specific pte_may_need_flush() which tells whether a TLB
>>> flush is needed based on the old PTE and the new one. Implement an
>>> x86 pte_may_need_flush().
>>>
>>> For x86, besides the simple logic that PTE protection promotion or
>>> changes of software bits do not require a flush, also add logic that
>>> considers the dirty-bit. Changes to the access-bit do not trigger a
>>> TLB flush, although architecturally they should, as Linux considers
>>> the access-bit as a hint.
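The changelog above describes the flush decision only in prose. As a
rough, self-contained sketch of that kind of old-vs-new PTE comparison
(PTEs are modelled as plain bitmasks, and pte_may_need_flush_sketch() is
a made-up name; this is not the code from the patch), the logic could
look roughly like this:

/*
 * Hypothetical sketch of a flush-decision helper in the spirit of
 * pte_may_need_flush(). Compiles as a standalone userspace program;
 * it does not use the kernel's pte_t or x86 headers.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Architectural x86 PTE bits (only the ones this sketch cares about). */
#define PTE_PRESENT	(1ULL << 0)
#define PTE_WRITE	(1ULL << 1)
#define PTE_USER	(1ULL << 2)
#define PTE_ACCESSED	(1ULL << 5)
#define PTE_DIRTY	(1ULL << 6)
#define PTE_SOFTW	(0x7ULL << 9)	/* software-available bits 9-11 */
#define PTE_NX		(1ULL << 63)

static bool pte_may_need_flush_sketch(uint64_t oldpte, uint64_t newpte)
{
	/*
	 * Bits whose changes are either handled explicitly below or never
	 * require a flush: software bits are invisible to the TLB, the
	 * access-bit is treated as a hint, and gaining WRITE/USER/DIRTY or
	 * clearing NX is a pure promotion.
	 */
	const uint64_t ignore = PTE_ACCESSED | PTE_DIRTY | PTE_SOFTW |
				PTE_WRITE | PTE_USER | PTE_NX;

	/* A non-present old PTE cannot be cached in any TLB. */
	if (!(oldpte & PTE_PRESENT))
		return false;

	/* Unmapping, or losing a hardware-set dirty bit, must flush. */
	if (!(newpte & PTE_PRESENT))
		return true;
	if ((oldpte & PTE_DIRTY) && !(newpte & PTE_DIRTY))
		return true;

	/* Permission demotion (write/user removed, NX set) must flush. */
	if ((oldpte & PTE_WRITE) && !(newpte & PTE_WRITE))
		return true;
	if ((oldpte & PTE_USER) && !(newpte & PTE_USER))
		return true;
	if (!(oldpte & PTE_NX) && (newpte & PTE_NX))
		return true;

	/* Any other difference (e.g. a new page frame): flush to be safe. */
	return (oldpte & ~ignore) != (newpte & ~ignore);
}

int main(void)
{
	uint64_t ro = PTE_PRESENT | PTE_USER | PTE_ACCESSED | PTE_NX;
	uint64_t rw = ro | PTE_WRITE;

	printf("RO -> RW (promotion): flush needed? %d\n",
	       pte_may_need_flush_sketch(ro, rw));	/* prints 0 */
	printf("RW -> RO (demotion):  flush needed? %d\n",
	       pte_may_need_flush_sketch(rw, ro));	/* prints 1 */
	return 0;
}

The real helper presumably encodes the same idea against pte_t and the
arch headers: only permission demotion or potential loss of hardware-set
state reports that a flush is needed, so pure promotion (e.g. RO -> RW)
can skip it.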
>>
>> Is the added LOC worth the benefit? IOW, do we have some benchmark
>> that really benefits from that?
> 
> So you ask whether the added ~10 LOC (net) are worth the benefit?

I read "3 files changed, 46 insertions(+), 1 deletion(-)" to optimize
something without proof, so I naturally have to ask. So this is just a
"usually we optimize and show numbers as proof" comment.

> 
> Let's start with the cost of this patch.
> 
> If you ask about complexity, I think that it is a rather simple
> patch and documented as needed. Please be more concrete if you
> think otherwise.

It is most certainly added complexity, although documented cleanly.

> 
> If you ask about the runtime overhead, my experience is that
> such code, which mostly does bit operations, has negligible cost.
> The execution time of the mprotect code, and other similar pieces
> of code, is mostly dominated by walking the page-tables & getting
> the pages (which might require cold or random memory accesses),
> acquiring the locks, and of course the TLB flushes that this
> patch tries to eliminate.

I'm absolutely not concerned about runtime overhead :)

> 
> As for the benefit: a TLB flush of a single PTE on x86 has an
> overhead of ~200 cycles. If a TLB shootdown is needed, for instance
> in multithreaded applications, this overhead can grow to a few
> microseconds or even more, depending on the number of sockets,
> whether the workload runs in a VM (and worse if CPUs are
> overcommitted), and so on.
> 
> This overhead is completely unnecessary on many occasions: if you
> run mprotect() to add permissions or, as I noted for my case, do
> something similar using userfaultfd. Note that the potentially
> unnecessary TLB flush/shootdown takes place while you hold the
> mmap-lock for write in the case of mprotect(), thereby potentially
> preventing other threads from making progress during that time.
> 
> On my in-development workload it was a considerable overhead
> (I didn't collect numbers though). Basically, I track dirty
> pages using uffd, and every page-fault that can be easily
> resolved by unprotecting causes a TLB flush/shootdown.

Any numbers would be helpful.

> 
> If you want, I will write a microbenchmark and give you numbers.
> If you are looking for further optimizations (although you did not
> indicate so), such as doing the TLB batching from do_mprotect_key()
> (i.e., batching across VMAs), we can discuss that and apply it on
> top of these patches.

I think this patch itself is sufficient if we can show a benefit. I do
wonder whether existing benchmarks could already show a benefit; I feel
they should if this makes a difference. Excessive mprotect() usage
(protect<>unprotect) isn't something unusual.

-- 
Thanks,

David / dhildenb
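A minimal sketch of the kind of protect<>unprotect microbenchmark
discussed above: it repeatedly demotes and promotes the permissions of
one pre-populated anonymous mapping with mprotect() and reports the
average cost per round trip. The mapping size and iteration count are
arbitrary assumptions, and this is not the microbenchmark Nadav offered
to write.

/* Hypothetical protect<>unprotect microbenchmark, purely illustrative. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	const size_t len = 64UL << 20;	/* 64 MiB mapping, arbitrary */
	const int iterations = 100;	/* arbitrary */
	struct timespec start, end;
	char *buf;
	int i;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Populate the mapping so PTEs actually exist before we mprotect(). */
	memset(buf, 1, len);

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iterations; i++) {
		/* Demote to read-only, then promote back to read-write. */
		if (mprotect(buf, len, PROT_READ) ||
		    mprotect(buf, len, PROT_READ | PROT_WRITE)) {
			perror("mprotect");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("protect<>unprotect: %.3f us per round trip\n",
	       ((end.tv_sec - start.tv_sec) * 1e9 +
		(end.tv_nsec - start.tv_nsec)) / 1e3 / iterations);

	munmap(buf, len);
	return 0;
}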