From: Dan Williams
Date: Wed, 23 Oct 2019 10:09:21 -0700
References: <20191022171239.21487-1-david@redhat.com>
Subject: Re: [PATCH RFC v1 00/12] mm:
Don't mark hotplugged pages PG_reserved (including ZONE_DEVICE)
To: David Hildenbrand
Cc: Linux Kernel Mailing List, Linux MM, Michal Hocko, Andrew Morton, kvm-ppc@vger.kernel.org, linuxppc-dev, KVM list, linux-hyperv@vger.kernel.org, devel@driverdev.osuosl.org, xen-devel, X86 ML, Alexander Duyck, Kees Cook, Alex Williamson, Allison Randal, Andy Lutomirski, "Aneesh Kumar K.V", Anshuman Khandual, Anthony Yznaga, Ben Chan, Benjamin Herrenschmidt, Borislav Petkov, Boris Ostrovsky, Christophe Leroy, Cornelia Huck, Dan Carpenter, Dave Hansen, Fabio Estevam, Greg Kroah-Hartman, Haiyang Zhang, "H. Peter Anvin", Ingo Molnar, "Isaac J. Manjarres", Jeremy Sowden, Jim Mattson, Joerg Roedel, Johannes Weiner, Juergen Gross, KarimAllah Ahmed, Kate Stewart, "K. Y. Srinivasan", Madhumitha Prabakaran, Matt Sickler, Mel Gorman, Michael Ellerman, Mike Rapoport, Nicholas Piggin, Nishka Dasgupta, Oscar Salvador, Paolo Bonzini, Paul Mackerras, Pavel Tatashin, Peter Zijlstra, Qian Cai, Radim Krčmář, Rob Springer, Sasha Levin, Sean Christopherson, Simon Sandström, Stefano Stabellini, Stephen Hemminger, Thomas Gleixner, Todd Poynor, Vandana BN, Vitaly Kuznetsov, Vlastimil Babka, Wanpeng Li, YueHaibing

On Wed, Oct 23, 2019 at 12:26 AM David Hildenbrand wrote:
>
> On 22.10.19 23:54, Dan Williams wrote:
> > Hi David,
> >
> > Thanks for tackling this!
>
> Thanks for having a look :)
>
> [...]
>
> >> I am probably a little bit too careful (but I don't want to break things).
> >> In most places (besides KVM and vfio that are nuts), the
> >> pfn_to_online_page() check could most probably be avoided by a
> >> is_zone_device_page() check. However, I usually get suspicious when I see
> >> a pfn_valid() check (especially after I learned that people mmap parts of
> >> /dev/mem into user space, including memory without memmaps. Also, people
> >> could memmap offline memory blocks this way :/). As long as this does not
> >> hurt performance, I think we should rather do it the clean way.
> >
> > I'm concerned about using is_zone_device_page() in places that are not
> > known to already have a reference to the page. Here's an audit of
> > current usages, and the ones I think need to be cleaned up. The "unsafe"
> > ones do not appear to have any protections against the device page
> > being removed (get_dev_pagemap()). Yes, some of these were added by
> > me. The "unsafe? HMM" ones need HMM eyes because HMM leaks device
> > pages into anonymous memory paths and I'm not up to speed on how it
> > guarantees 'struct page' validity vs device shutdown without using
> > get_dev_pagemap().
> >
> > smaps_pmd_entry(): unsafe
> >
> > put_devmap_managed_page(): safe, page reference is held
> >
> > is_device_private_page(): safe? gpu driver manages private page lifetime
> >
> > is_pci_p2pdma_page(): safe, page reference is held
> >
> > uncharge_page(): unsafe? HMM
> >
> > add_to_kill(): safe, protected by get_dev_pagemap() and dax_lock_page()
> >
> > soft_offline_page(): unsafe
> >
> > remove_migration_pte(): unsafe? HMM
> >
> > move_to_new_page(): unsafe? HMM
> >
> > migrate_vma_pages() and helpers: unsafe? HMM
> >
> > try_to_unmap_one(): unsafe? HMM
> >
> > __put_page(): safe
> >
> > release_pages(): safe
> >
> > I'm hoping all the HMM ones can be converted to
> > is_device_private_page() directly and have that routine grow a nice
> > comment about how it knows it can always safely dereference its @page
> > argument.
> >
> > For the rest I'd like to propose that we add a facility to determine
> > ZONE_DEVICE by pfn rather than page. The most straightforward way I
> > can think of would be to just add another bitmap to mem_section_usage
> > to indicate if a subsection is ZONE_DEVICE or not.
>
> (it's a somewhat unrelated bigger discussion, but we can start discussing
> it in this thread)
>
> I dislike this for three reasons:
>
> a) It does not protect against any races, really, it does not improve things.
> b) We do have the exact same problem with pfn_to_online_page(). As long as
>    we don't hold the memory hotplug lock, memory can get offlined and
>    removed at any time. Racy.

True, we need to solve that problem too. That seems to want something
lighter weight than the hotplug lock that can be held over pfn lookups +
use, rather than requiring a page lookup in paths where it's not clear
that a page reference would prevent unplug.

> c) We mix ZONE-specific stuff into the core. It should be "just another
>    zone".

Not sure I grok this when the RFC is sprinkling zone-specific
is_zone_device_page() throughout the core?

> What I propose instead (already discussed in
> https://lkml.org/lkml/2019/10/10/87):

Sorry I missed this earlier...

> 1. Convert SECTION_IS_ONLINE to SECTION_IS_ACTIVE
> 2. Convert SECTION_IS_ACTIVE to a subsection bitmap
> 3. Introduce pfn_active() that checks against the subsection bitmap
> 4. Once the memmap has been initialized / prepared, set the subsection
>    active (similar to SECTION_IS_ONLINE in the buddy right now)
> 5. Before the memmap gets invalidated, set the subsection inactive
>    (similar to SECTION_IS_ONLINE in the buddy right now)
> 6. pfn_to_online_page() = pfn_active() && zone != ZONE_DEVICE
> 7. pfn_to_device_page() = pfn_active() && zone == ZONE_DEVICE

This does not seem to reduce any complexity because it still requires a
pfn-to-zone lookup at the end of the process. I.e.
converting pfn_to_online_page() to use a new pfn_active() subsection map
plus looking up the zone from pfn_to_page() is more steps than just doing
a direct pfn-to-zone lookup. What am I missing?

> Especially, driver-reserved device memory will not get set active in
> the subsection bitmap.
>
> Now to the race. Taking the memory hotplug lock at random places is ugly.
> I do wonder if we can use RCU:

Ah, yes, exactly what I was thinking above.

> The user of pfn_active()/pfn_to_online_page()/pfn_to_device_page():
>
> /* the memmap is guaranteed to remain active under RCU */
> rcu_read_lock();
> if (pfn_active(random_pfn)) {
>         page = pfn_to_page(random_pfn);
>         ... use the page, stays valid
> }
> rcu_read_unlock();
>
> Memory offlining/memremap code:
>
> set_subsections_inactive(pfn, nr_pages); /* clears the bit atomically */
> synchronize_rcu();
> /* all users saw the bitmap update, we can invalidate the memmap */
> remove_pfn_range_from_zone(zone, pfn, nr_pages);

Looks good to me.

> >> I only gave it a quick test with DIMMs on x86-64, but didn't test the
> >> ZONE_DEVICE part at all (any tips for a nice QEMU setup?). Compile-tested
> >> on x86-64 and PPC.
> >
> > I'll give it a spin, but I don't think the kernel wants to grow more
> > is_zone_device_page() users.
>
> Let's recap. In this RFC, I introduce a total of 4 (!) users only.
> The other parts can rely on pfn_to_online_page() only.
>
> 1. "staging: kpc2000: Prepare transfer_complete_cb() for PG_reserved changes"
>    - Basically never used with ZONE_DEVICE.
>    - We hold a reference!
>    - All it protects is a SetPageDirty(page);
>
> 2. "staging/gasket: Prepare gasket_release_page() for PG_reserved changes"
>    - Same as 1.
>
> 3.
>    "mm/usercopy.c: Prepare check_page_span() for PG_reserved changes"
>    - We come via virt_to_head_page() / virt_to_head_page(), not sure about
>      references (I assume this should be fine as we don't come via random
>      PFNs)
>    - We check that we don't mix Reserved (including device memory) and CMA
>      pages when crossing compound pages.
>
> I think we can drop 1. and 2., resulting in a total of 2 new users in
> the same context. I think that is totally tolerable to finally clean
> this up.

...but more is_zone_device_page() doesn't "finally clean this up". Like
we discussed above, it's the missing locking that's the real cleanup; the
pfn_to_online_page() internals are secondary.

> However, I think we also have to clarify if we need the change in 3. at
> all. It comes from
>
> commit f5509cc18daa7f82bcc553be70df2117c8eedc16
> Author: Kees Cook
> Date:   Tue Jun 7 11:05:33 2016 -0700
>
>     mm: Hardened usercopy
>
>     This is the start of porting PAX_USERCOPY into the mainline kernel.
>     This is the first set of features, controlled by
>     CONFIG_HARDENED_USERCOPY. The work is based on code by PaX Team and
>     Brad Spengler, and an earlier port from Casey Schaufler. Additional
>     non-slab page tests are from Rik van Riel.
>     [...]
>     - otherwise, object must not span page allocations (excepting Reserved
>       and CMA ranges)
>
> Not sure if we really have to care about ZONE_DEVICE at this point.

That check needs to be careful to ignore ZONE_DEVICE pages. There's
nothing wrong with a copy spanning ZONE_DEVICE and typical pages.