Date: Wed, 2 Dec 2020 11:08:54 +0100
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Ralph Campbell, Christoph Hellwig, linux-mm@kvack.org,
	nouveau@lists.freedesktop.org, linux-kselftest@vger.kernel.org,
	linux-kernel@vger.kernel.org, Jerome Glisse, John Hubbard,
	Alistair Popple, Bharata B Rao, Zi Yan, "Kirill A. Shutemov",
	Yang Shi, Ben Skeggs, Shuah Khan, Andrew Morton, Roger Pau Monne
Subject: Re: [PATCH v3 3/6] mm: support THP migration to device private memory
Message-ID: <20201202100854.GB7597@lst.de>
References: <20201106005147.20113-1-rcampbell@nvidia.com>
	<20201106005147.20113-4-rcampbell@nvidia.com>
	<20201106080322.GE31341@lst.de>
	<20201109091415.GC28918@lst.de>
	<20201120200133.GH917484@nvidia.com>
In-Reply-To: <20201120200133.GH917484@nvidia.com>

On Fri, Nov 20, 2020 at 04:01:33PM -0400, Jason Gunthorpe wrote:
> On Wed, Nov 11, 2020 at 03:38:42PM -0800, Ralph Campbell wrote:
>
> > MEMORY_DEVICE_GENERIC:
> > Struct pages are created in dev_dax_probe() and represent
> > non-volatile memory.
> > The device can be mmap()'ed, which calls dax_mmap(), which sets
> > vma->vm_flags |= VM_HUGEPAGE.
> > A CPU page fault will result in a PTE, PMD, or PUD sized page
> > (but not a compound page) being inserted by vmf_insert_mixed(),
> > which will call either insert_pfn() or insert_page().
> > Neither insert_pfn() nor insert_page() increments the page
> > reference count.
>
> But why was this done? It seems very strange to put a pfn with a
> struct page into a VMA and then deliberately not take the refcount
> for the duration of that pfn being in the VMA.
>
> What prevents memunmap_pages() from progressing while VMAs still
> point at the memory?

Agreed.  Adding Roger, who added MEMORY_DEVICE_GENERIC and is its only
user.

> > I think just leaving the page reference count at one is better than
> > trying to use the mmu_interval_notifier or changing
> > vmf_insert_mixed() and the invalidations of pfn_t_devmap(pfn) to
> > adjust the page reference count.
>
> Why so? The entire point of getting struct pages for this stuff was
> to be able to follow the struct page flow. I never did learn a reason
> why there is devmap stuff all over the place in the page table code...

Exactly.
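
For reference, the PTE-sized fault path Ralph describes above boils
down to roughly the following.  This is a condensed sketch loosely
modeled on __dev_dax_pte_fault() in drivers/dax/device.c around this
cycle, not the exact upstream code: error handling is trimmed, the
function name is made up, dax_pgoff_to_phys() is the static helper
from that file, and PFN_DEV | PFN_MAP stands in for the region's
dax_region->pfn_flags.

static vm_fault_t dev_dax_pte_fault_sketch(struct vm_fault *vmf)
{
	struct dev_dax *dev_dax = vmf->vma->vm_file->private_data;
	phys_addr_t phys;
	pfn_t pfn;

	/* translate the faulting page offset to a physical address */
	phys = dax_pgoff_to_phys(dev_dax, vmf->pgoff, PAGE_SIZE);
	if (phys == -1)
		return VM_FAULT_SIGBUS;

	pfn = phys_to_pfn_t(phys, PFN_DEV | PFN_MAP);

	/*
	 * vmf_insert_mixed() ends up in insert_pfn() (the devmap/pfn
	 * case) or insert_page(); neither takes a reference on the
	 * backing struct page, which is the crux of the question
	 * above: nothing pins the pages while the VMA maps them.
	 */
	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
}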