Date: Thu, 3 Dec 2020 13:02:34 -0500
From: Peter Xu <peterx@redhat.com>
To: Hugh Dickins
Cc: Andrea Arcangeli, Matthew Wilcox, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Andrew Morton, Mike Rapoport, David Hildenbrand
Subject: Re: [PATCH v2] mm: Don't fault around userfaultfd-registered regions on reads
Message-ID: <20201203180234.GJ108496@xz-x1>
References: <20201130230603.46187-1-peterx@redhat.com>
 <20201201125927.GB11935@casper.infradead.org>
 <20201201223033.GG3277@xz-x1>
 <20201202234117.GD108496@xz-x1>

On Wed, Dec 02, 2020 at 09:36:45PM -0800, Hugh Dickins wrote:
> On Wed, 2 Dec 2020, Peter Xu wrote:
> > On Wed, Dec 02, 2020 at 02:37:33PM -0800, Hugh Dickins wrote:
> > > On Tue, 1 Dec 2020, Andrea Arcangeli wrote:
> > > >
> > > > Any suggestions on how to have the per-vaddr per-mm _PAGE_UFFD_WP bit
> > > > survive the pte invalidates in a way that remains associated to a
> > > > certain vaddr in a single mm (so it can shoot itself in the foot if it
> > > > wants, but it can't interfere with all other mm sharing the shmem
> > > > file) would be welcome...
> > > >
> > >
> > > I think it has to be a new variety of swap-like non_swap_entry() pte,
> > > see include/linux/swapops.h.  Anything else would be more troublesome.
> > >
> > > Search for non_swap_entry and for migration_entry, to find places that
> > > might need to learn about this new variety.
> > >
> > > IIUC you only need a single value, no need to carve out another whole
> > > swp_type: could probably be swp_offset 0 of any swp_type other than 0.
> > >
> > > Note that fork's copy_page_range() does not "copy ptes where a page
> > > fault will fill them correctly", so would in effect put a pte_none
> > > into the child where the parent has this uffd_wp entry.  I don't know
> > > anything about uffd versus fork, whether that would pose a problem.
> >
> > Thanks for the idea, Hugh!
> >
> > I thought about something similar today, but instead of swap entries, I was
> > thinking about constantly filling in a pte with a value of "_PAGE_PROTNONE |
> > _PAGE_UFFD_WP" when e.g. we'd like to zap a page with shmem+uffd-wp.  I feel
> > like the fundamental idea is similar - we can somehow keep the pte with the
> > uffd-wp information even if zapped/swapped-out, so as long as the shmem
> > access will further trap into the fault handler, we can operate on that pte
> > and read that information out, e.g. recover that pte into a normal pte
> > (with the swap/page cache and vma/addr information, we'll be able to) and
> > then retry the fault.
>
> Yes, I think that should work too: I can't predict which way would cause
> less trouble.
>
> We usually tend to keep away from protnone games, because NUMA balancing
> use of protnone is already confusing enough.

Yes, it is tricky.  However it gives me the feeling that numa balancing and
its protnone trick provide a general solution for things like this, in that
they let us trap a per-mm per-pte page access exactly as needed here.
With that, I currently slightly prefer to try the protnone way first, because
using a swp entry could be a bit misleading at first glance - note that when
this happens, we could have two states for this pte:

  1. Page in shmem page cache
  2. Page in shmem swap cache (so the page cache slot holds a value entry)

And actually there's a 3rd state that should never happen as long as
userspace does the unprotect properly, but I guess we'd better also take
care of it:

  3. Page not cached at all (page missing; logically this should not happen,
     because when the page cache is evicted all ptes should have been removed
     too.  However since this pte is not linked to the page in any way, it
     could get lost?  Then we should simply retry the fault after recovering
     the tricky pte into an all-zero pte)

It'll be a bit odd imho to use a swp entry to represent all these states -
for example, we could see that the pte is a swp-like entry while in reality
the shmem page sits right in the page cache..

So I'm thinking whether we could first decouple the pte_protnone() idea from
numa balancing (currently there's a close binding), then let numa balancing
depend on protnone, and let uffd-wp+shmem depend on protnone too.

> But those ptes will be pte_present(), so you must provide a pfn,

Could I ask why?

> and I think if you use the zero_pfn, vm_normal_page() will return false on
> it, and avoid special casing (and reference counting) it in various places.

I'll keep this in mind, thanks.

Meanwhile, this reminded me of another option - besides _PAGE_PROTNONE, can
we use _PAGE_SPECIAL?  At least its naming already tells us it marks a
special page.  We could also leverage the existing pte_special() checks here
and there, then mimic what we do with pte_devmap(), maybe?

-- 
Peter Xu