From: Nadav Amit
To: Andrew Morton
Cc: LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli, Andrew Cooper,
 Andy Lutomirski, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
 Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [PATCH 0/2] mm/mprotect: avoid unnecessary TLB flushes
Date: Sat, 25 Sep 2021 13:54:21 -0700
Message-Id: <20210925205423.168858-1-namit@vmware.com>

From: Nadav Amit

This patch-set is based on a very small subset of an old RFC (see the
link below), and is intended to avoid TLB flushes when they are not
architecturally necessary. Specifically, memory-unprotect through
userfaultfd (i.e., the userfaultfd IOCTL) triggers a TLB flush when in
fact no architectural data, other than a software flag, is updated.
This overhead shows up in my development workload profiles.

Instead of tailoring a solution for this specific scenario, it is
arguably better to use this opportunity to consolidate the interfaces
that are used for TLB batching: avoid the open-coded
[inc|dec]_tlb_flush_pending() and use the tlb_[gather|finish]_mmu()
interface instead (a sketch of this pattern is included at the end of
this message).

Avoiding the TLB flushes is done very conservatively (unlike the RFC):

1. According to the x86 specifications, no flushes are necessary on
   permission promotion and changes to software bits (a simplified
   check is also sketched at the end of this message).
2. Linux does not flush PTEs after the access bit is cleared.

I considered the feedback of Andy Lutomirski and Andrew Cooper on the
RFC regarding avoiding TLB invalidations when RW is cleared for clean
PTEs. Although the bugs they pointed out can easily be addressed, I am
concerned, since I could not find a specification that explicitly
clarifies that this optimization is valid.

--

RFC -> v1:
* Do not skip TLB flushes when clearing RW on clean PTEs
* Do not defer huge PMD flush as it is already done inline

Link: https://lore.kernel.org/lkml/20210131001132.3368247-1-namit@vmware.com/

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org

Nadav Amit (2):
  mm/mprotect: use mmu_gather
  mm/mprotect: do not flush on permission promotion

 arch/x86/include/asm/tlbflush.h | 40 ++++++++++++++++++++++++++
 include/asm-generic/tlb.h       |  4 +++
 mm/mprotect.c                   | 51 +++++++++++++++++++--------------
 3 files changed, 73 insertions(+), 22 deletions(-)

-- 
2.25.1
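
As an illustration of the batching pattern that the first patch moves
to, here is a rough sketch (not the actual diff; sketch_change_protection()
is a made-up name, and the real change is to change_protection() in
mm/mprotect.c):

#include <linux/mm.h>
#include <asm/tlb.h>

/*
 * Illustrative only: batch TLB invalidations with mmu_gather instead of
 * open-coding the pending-flush accounting around the PTE walk.
 */
static void sketch_change_protection(struct vm_area_struct *vma,
                                     unsigned long start, unsigned long end)
{
        struct mmu_gather tlb;

        /*
         * Replaces the open-coded inc_tlb_flush_pending() ...
         * flush_tlb_range() ... dec_tlb_flush_pending() sequence;
         * tlb_gather_mmu() raises the pending count itself.
         */
        tlb_gather_mmu(&tlb, vma->vm_mm);

        /*
         * ... walk the page tables between start and end, updating PTEs;
         * every change that architecturally requires invalidation records
         * its range with tlb_flush_pte_range(&tlb, addr, PAGE_SIZE) ...
         */

        /* Flush whatever was batched (if anything) and drop the pending count. */
        tlb_finish_mmu(&tlb);
}

Since tlb_gather_mmu()/tlb_finish_mmu() already do the pending-flush
accounting internally, the open-coded calls become unnecessary.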
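
And a simplified illustration of the kind of check that the second patch
relies on (the helper name and the exact bit handling are only a sketch;
it deliberately ignores the accessed/dirty bits and other corner cases
that the real code added to arch/x86/include/asm/tlbflush.h has to
handle):

#include <asm/pgtable_types.h>

/*
 * Illustrative only: does changing a PTE from oldval to newval require a
 * TLB flush on x86? Permission promotion and changes confined to
 * software-available bits do not.
 */
static inline bool sketch_pte_needs_flush(pteval_t oldval, pteval_t newval)
{
        /* Bits the hardware page walker ignores. */
        const pteval_t sw_bits = _PAGE_SOFTW1 | _PAGE_SOFTW2 |
                                 _PAGE_SOFTW3 | _PAGE_SOFTW4;
        pteval_t diff = (oldval ^ newval) & ~sw_bits;

        /* A non-present PTE cannot have been cached by the TLB. */
        if (!(oldval & _PAGE_PRESENT))
                return false;

        /* Only software bits changed: nothing architectural to invalidate. */
        if (!diff)
                return false;

        /*
         * Pure permission promotion: only RW and/or NX changed, write
         * permission was not removed and NX was not set.
         */
        if (!(diff & ~(_PAGE_RW | _PAGE_NX)) &&
            !((oldval & _PAGE_RW) && !(newval & _PAGE_RW)) &&
            !(!(oldval & _PAGE_NX) && (newval & _PAGE_NX)))
                return false;

        /* Anything else (permission demotion, A/D changes, PFN change): flush. */
        return true;
}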