From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 5 Aug 2022 15:26:02 +0200
Subject: Re: [PATCH v7 04/14] mm/shmem: Support memfile_notifier
From: David Hildenbrand
Organization: Red Hat
To: Chao Peng, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
        linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
        linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
        qemu-devel@nongnu.org, linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson,
        Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
        Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
        "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields",
        Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price,
        "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
        Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
        jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
        aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
        Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song
References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>
        <20220706082016.2603916-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20220706082016.2603916-5-chao.p.peng@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06.07.22 10:20, Chao Peng wrote:
> From: "Kirill A. Shutemov"
>
> Implement shmem as a memfile_notifier backing store.
> Essentially it
> interacts with the memfile_notifier feature flags for userspace
> access/page migration/page reclaiming and implements the necessary
> memfile_backing_store callbacks.
>
> Signed-off-by: Kirill A. Shutemov
> Signed-off-by: Chao Peng
> ---

[...]

> +#ifdef CONFIG_MEMFILE_NOTIFIER
> +static struct memfile_node *shmem_lookup_memfile_node(struct file *file)
> +{
> +	struct inode *inode = file_inode(file);
> +
> +	if (!shmem_mapping(inode->i_mapping))
> +		return NULL;
> +
> +	return &SHMEM_I(inode)->memfile_node;
> +}
> +
> +
> +static int shmem_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn,
> +			 int *order)
> +{
> +	struct page *page;
> +	int ret;
> +
> +	ret = shmem_getpage(file_inode(file), offset, &page, SGP_WRITE);
> +	if (ret)
> +		return ret;
> +
> +	unlock_page(page);
> +	*pfn = page_to_pfn_t(page);
> +	*order = thp_order(compound_head(page));
> +	return 0;
> +}
> +
> +static void shmem_put_pfn(pfn_t pfn)
> +{
> +	struct page *page = pfn_t_to_page(pfn);
> +
> +	if (!page)
> +		return;
> +
> +	put_page(page);

Why do we export shmem_get_pfn/shmem_put_pfn and not simply get_folio()
and let the caller deal with putting the folio? What's the reason to

a) Operate on PFNs and not folios
b) Have these get/put semantics?

> +}
> +
> +static struct memfile_backing_store shmem_backing_store = {
> +	.lookup_memfile_node = shmem_lookup_memfile_node,
> +	.get_pfn = shmem_get_pfn,
> +	.put_pfn = shmem_put_pfn,
> +};
> +#endif /* CONFIG_MEMFILE_NOTIFIER */
> +
>  void __init shmem_init(void)
>  {
>  	int error;
> @@ -3956,6 +4059,10 @@ void __init shmem_init(void)
>  	else
>  		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
>  #endif
> +
> +#ifdef CONFIG_MEMFILE_NOTIFIER
> +	memfile_register_backing_store(&shmem_backing_store);

Can we instead provide a dummy function that does nothing without
CONFIG_MEMFILE_NOTIFIER?

> +#endif
>  	return;
>
>  out1:

-- 
Thanks,

David / dhildenb