Date: Fri, 23 Sep 2022 03:53:19 +0300
From: "Kirill A. Shutemov"
To: Sean Christopherson
Cc: "Wang, Wei W", Chao Peng, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
	Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
	"H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields",
	Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price,
Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , "Lutomirski, Andy" , "Nakajima, Jun" , "Hansen, Dave" , "ak@linux.intel.com" , "david@redhat.com" , "aarcange@redhat.com" , "ddutile@redhat.com" , "dhildenb@redhat.com" , Quentin Perret , Michael Roth , "Hocko, Michal" , Muchun Song Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd Message-ID: <20220923005319.wkzpl36uailh4zbw@box.shutemov.name> References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com> <20220915142913.2213336-2-chao.p.peng@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Sep 22, 2022 at 07:49:18PM +0000, Sean Christopherson wrote: > On Thu, Sep 22, 2022, Wang, Wei W wrote: > > On Thursday, September 15, 2022 10:29 PM, Chao Peng wrote: > > > +int inaccessible_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn, > > > + int *order) > > > > Better to remove "order" from this interface? > > Hard 'no'. > > > Some callers only need to get pfn, and no need to bother with > > defining and inputting something unused. For callers who need the "order", > > can easily get it via thp_order(pfn_to_page(pfn)) on their own. > > That requires (a) assuming the pfn is backed by struct page, and (b) assuming the > struct page is a transparent huge page. That might be true for the current > implementation, but it most certainly will not always be true. > > KVM originally did things like this, where there was dedicated code for THP vs. > HugeTLB, and it was a mess. The goal here is very much to avoid repeating those > mistakes. Have the backing store _tell_ KVM how big the mapping is, don't force > KVM to rediscover the info on its own. I guess we can allow order pointer to be NULL to cover caller that don't need to know the order. Is it useful? -- Kiryl Shutsemau / Kirill A. Shutemov