Date: Wed, 10 Aug 2022 17:25:32 +0800
From: Chao Peng <chao.p.peng@linux.intel.com>
To: David Hildenbrand
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
	linux-kselftest@vger.kernel.org, Paolo Bonzini, Jonathan Corbet,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
	"J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
	Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
	jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
	aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
	Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song
Subject: Re: [PATCH v7 04/14] mm/shmem: Support memfile_notifier
Message-ID: <20220810092532.GD862421@chaop.bj.intel.com>
Reply-To: Chao Peng <chao.p.peng@linux.intel.com>
References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>
	<20220706082016.2603916-5-chao.p.peng@linux.intel.com>

On Fri, Aug 05, 2022 at 03:26:02PM +0200, David Hildenbrand wrote:
> On 06.07.22 10:20, Chao Peng wrote:
> > From: "Kirill A. Shutemov"
> >
> > Implement shmem as a memfile_notifier backing store.
> > Essentially it
> > interacts with the memfile_notifier feature flags for userspace
> > access, page migration, and page reclaim, and implements the
> > necessary memfile_backing_store callbacks.
> >
> > Signed-off-by: Kirill A. Shutemov
> > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > ---
> 
> [...]
> 
> > +#ifdef CONFIG_MEMFILE_NOTIFIER
> > +static struct memfile_node *shmem_lookup_memfile_node(struct file *file)
> > +{
> > +	struct inode *inode = file_inode(file);
> > +
> > +	if (!shmem_mapping(inode->i_mapping))
> > +		return NULL;
> > +
> > +	return &SHMEM_I(inode)->memfile_node;
> > +}
> > +
> > +static int shmem_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn,
> > +			 int *order)
> > +{
> > +	struct page *page;
> > +	int ret;
> > +
> > +	ret = shmem_getpage(file_inode(file), offset, &page, SGP_WRITE);
> > +	if (ret)
> > +		return ret;
> > +
> > +	unlock_page(page);
> > +	*pfn = page_to_pfn_t(page);
> > +	*order = thp_order(compound_head(page));
> > +	return 0;
> > +}
> > +
> > +static void shmem_put_pfn(pfn_t pfn)
> > +{
> > +	struct page *page = pfn_t_to_page(pfn);
> > +
> > +	if (!page)
> > +		return;
> > +
> > +	put_page(page);
> 
> Why do we export shmem_get_pfn/shmem_put_pfn and not simply get_folio()
> and let the caller deal with putting the folio? What's the reason to
> 
> a) Operate on PFNs and not folios
> b) Have these get/put semantics?

We have a design assumption that someday this can even support non-page-based
backing stores. There were some discussions on that:

  https://lkml.org/lkml/2022/3/28/1440

I should add documentation for these two callbacks.

> > +}
> > +
> > +static struct memfile_backing_store shmem_backing_store = {
> > +	.lookup_memfile_node = shmem_lookup_memfile_node,
> > +	.get_pfn = shmem_get_pfn,
> > +	.put_pfn = shmem_put_pfn,
> > +};
> > +#endif /* CONFIG_MEMFILE_NOTIFIER */
> > +
> >  void __init shmem_init(void)
> >  {
> >  	int error;
> > @@ -3956,6 +4059,10 @@ void __init shmem_init(void)
> >  	else
> >  		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
> >  #endif
> > +
> > +#ifdef CONFIG_MEMFILE_NOTIFIER
> > +	memfile_register_backing_store(&shmem_backing_store);
> 
> Can we instead provide a dummy function that does nothing without
> CONFIG_MEMFILE_NOTIFIER?

Sounds good.

Chao

> > +#endif
> >  	return;
> > 
> >  out1:
> 
> -- 
> Thanks,
> 
> David / dhildenb
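
As a sketch of the get/put contract discussed above (example_map_offset()
and its error handling are hypothetical, made up for illustration; only the
three callbacks and struct memfile_backing_store come from the patch), a
consumer would be expected to do something like:

	static int example_map_offset(struct memfile_backing_store *bs,
				      struct file *file, pgoff_t offset)
	{
		pfn_t pfn;
		int order;
		int ret;

		/* Reject files not backed by this store. */
		if (!bs->lookup_memfile_node(file))
			return -EINVAL;

		/* get_pfn() pins whatever backs the pfn... */
		ret = bs->get_pfn(file, offset, &pfn, &order);
		if (ret)
			return ret;

		/*
		 * ...the caller uses pfn/order here, e.g. installs the
		 * range into secondary page tables...
		 */

		/* ...and put_pfn() drops the reference get_pfn() took. */
		bs->put_pfn(pfn);
		return 0;
	}

Returning a pfn with paired get/put semantics, rather than a folio, is what
keeps the interface open to backing stores that have no struct page at all.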
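
And a sketch of the dummy-function suggestion at the end, using the usual
kernel stub pattern (the header placement is illustrative; the hunk above
implies memfile_register_backing_store() returns void):

	/* in <linux/memfile_notifier.h> */
	struct memfile_backing_store;

	#ifdef CONFIG_MEMFILE_NOTIFIER
	void memfile_register_backing_store(struct memfile_backing_store *bs);
	#else
	static inline void
	memfile_register_backing_store(struct memfile_backing_store *bs)
	{
		/* No-op when the notifier infrastructure is compiled out. */
	}
	#endif

With that stub, the call in shmem_init() no longer needs its own #ifdef,
although shmem_backing_store itself must then also be defined (or similarly
stubbed) when CONFIG_MEMFILE_NOTIFIER is off so the call site compiles.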