Date: Wed, 2 Dec 2020 11:14:26 +0100
From: Christoph Hellwig
To: Ralph Campbell
Cc: Christoph Hellwig, linux-mm@kvack.org, nouveau@lists.freedesktop.org,
	linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jerome Glisse, John Hubbard, Alistair Popple, Jason Gunthorpe,
	Bharata B Rao, Zi Yan, "Kirill A. Shutemov", Yang Shi, Ben Skeggs,
	Shuah Khan, Andrew Morton, Logan Gunthorpe, Dan Williams,
	linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v3 3/6] mm: support THP migration to device private memory
Message-ID: <20201202101426.GC7597@lst.de>
References: <20201106005147.20113-1-rcampbell@nvidia.com>
	<20201106005147.20113-4-rcampbell@nvidia.com>
	<20201106080322.GE31341@lst.de>
	<20201109091415.GC28918@lst.de>

[adding a few of the usual suspects]

On Wed, Nov 11, 2020 at 03:38:42PM -0800, Ralph Campbell wrote:
> There are 4 types of ZONE_DEVICE struct pages:
> MEMORY_DEVICE_PRIVATE, MEMORY_DEVICE_FS_DAX, MEMORY_DEVICE_GENERIC, and
> MEMORY_DEVICE_PCI_P2PDMA.
>
> Currently, memremap_pages() allocates struct pages for a physical
> address range with a page_ref_count(page) of one and increments the
> pgmap->ref per-CPU reference count by the number of pages created,
> since each ZONE_DEVICE struct page has a pointer to the pgmap.
>
> The struct pages are not freed until memunmap_pages() is called, which
> calls put_page(), which calls put_dev_pagemap(), which releases a
> reference to pgmap->ref. memunmap_pages() blocks waiting for the
> pgmap->ref reference count to reach zero. As far as I can tell, the
> put_page() in memunmap_pages() has to be the *last* put_page() (see
> MEMORY_DEVICE_PCI_P2PDMA).
> My RFC [1] breaks this put_page() -> put_dev_pagemap() connection so
> that the struct page reference count can go to zero and back to
> non-zero without changing the pgmap->ref reference count.
>
> Q1: Is that safe? Is there some code that depends on put_page()
> dropping the pgmap->ref reference count as part of memunmap_pages()?
> My testing of [1] seems OK but I'm sure there are lots of cases I
> didn't test.

It should be safe, but the audit you've done is important to make sure
we do not miss anything important.

> MEMORY_DEVICE_PCI_P2PDMA:
> Struct pages are created in pci_p2pdma_add_resource() and represent
> device memory accessible by PCIe BAR address space. Memory is allocated
> with pci_alloc_p2pmem() based on a byte length, but the
> gen_pool_alloc_owner() call will allocate memory in a minimum of
> PAGE_SIZE units.
> Reference counting is +1 per *allocation* on the pgmap->ref reference
> count. Note that this is not +1 per page, which is what put_page()
> expects. So currently, a get_page()/put_page() pair works OK because
> the page reference count only goes 1->2 and 2->1. If it went to zero,
> the pgmap->ref reference count would be incorrect if the allocation
> size was greater than one page.
>
> I see pci_alloc_p2pmem() is called by nvme_alloc_sq_cmds() and
> pci_p2pmem_alloc_sgl() to create a command queue and a struct
> scatterlist *. It looks like sg_page(sg) returns the ZONE_DEVICE
> struct page of the scatterlist. There are a huge number of places
> sg_page() is called, so it is hard to tell whether or not
> get_page()/put_page() is ever called on MEMORY_DEVICE_PCI_P2PDMA pages.
Nothing should call get_page()/put_page() on them, as they are not
treated as refcountable memory. More importantly, nothing is allowed to
keep a reference for longer than the duration of the I/O.

> pci_p2pmem_virt_to_bus() will return the physical address, and I guess
> pfn_to_page(physaddr >> PAGE_SHIFT) could return the struct page.
>
> Since there is a clear allocation/free, pci_alloc_p2pmem() can probably
> be modified to increment/decrement the MEMORY_DEVICE_PCI_P2PDMA struct
> page reference count. Or maybe just leave it at one like it is now.

And yes, doing that is probably a sensible safeguard.

> MEMORY_DEVICE_FS_DAX:
> Struct pages are created in pmem_attach_disk() and
> virtio_fs_setup_dax() with an initial reference count of one.
> The problem I see is that there are 3 states that are important:
> a) memory is free and not allocated to any file
>    (page_ref_count() == 0).
> b) memory is allocated to a file and in the page cache
>    (page_ref_count() == 1).
> c) some gup() or I/O has a reference even after calling
>    unmap_mapping_pages() (page_ref_count() > 1). ext4_break_layouts()
>    basically waits until page_ref_count() == 1, with put_page() calling
>    wake_up_var(&page->_refcount) to wake up ext4_break_layouts().
> The current code doesn't seem to distinguish (a) and (b). If we want to
> use the 0->1 reference count transition to signal (c), then the page
> cache would have to hold entries with a page_ref_count() == 0, which
> doesn't match the general page cache

I think the sensible model here is to grab a reference when the page is
added to the page cache. That is exactly how normal system memory pages
work.