From: Fuad Tabba
Date: Fri, 26 Aug 2022 16:19:32 +0100
Subject: Re: [PATCH v7 01/14] mm: Add F_SEAL_AUTO_ALLOCATE seal to memfd
To: Chao Peng <chao.p.peng@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
    linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
    linux-kselftest@vger.kernel.org, Paolo Bonzini, Jonathan Corbet,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
    "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
    Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
    Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov", luto@kernel.org,
    jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
    david@redhat.com, aarcange@redhat.com, ddutile@redhat.com,
    dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com,
    Muchun Song
In-Reply-To: <20220706082016.2603916-2-chao.p.peng@linux.intel.com>
References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>
            <20220706082016.2603916-2-chao.p.peng@linux.intel.com>
List-ID: <linux-api.vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

Hi Chao,

On Wed, Jul 6, 2022 at 9:25 AM Chao Peng wrote:
>
> Normally, a write to unallocated space in a file, or to a hole in a
> sparse file, automatically triggers space allocation; for a memfd this
> amounts to memory allocation. The new seal prevents such automatic
> allocation, whether it comes from a direct write() or from a write to a
> previously mmap-ed area. The seal does not affect fallocate(), so an
> explicit fallocate() can still allocate and can be used to reserve
> memory.
>
> This is used to prevent unintentional allocation caused by a stray or
> careless write from userspace; any intentional allocation should use an
> explicit fallocate(). One of the main use cases is avoiding double
> memory allocation for confidential computing, where two memfds back
> guest memory: at any single point in time only one of them is live, and
> we want to prevent memory allocation through the other memfd, which may
> have been mmap-ed previously. More discussion can be found at:
>
> https://lkml.org/lkml/2022/6/14/1255
>
> Suggested-by: Sean Christopherson
> Signed-off-by: Chao Peng
> ---
>  include/uapi/linux/fcntl.h |  1 +
>  mm/memfd.c                 |  3 ++-
>  mm/shmem.c                 | 16 ++++++++++++++--
>  3 files changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/include/uapi/linux/fcntl.h b/include/uapi/linux/fcntl.h
> index 2f86b2ad6d7e..98bdabc8e309 100644
> --- a/include/uapi/linux/fcntl.h
> +++ b/include/uapi/linux/fcntl.h
> @@ -43,6 +43,7 @@
>  #define F_SEAL_GROW          0x0004  /* prevent file from growing */
>  #define F_SEAL_WRITE         0x0008  /* prevent writes */
>  #define F_SEAL_FUTURE_WRITE  0x0010  /* prevent future writes while mapped */
> +#define F_SEAL_AUTO_ALLOCATE 0x0020  /* prevent allocation for writes */

I think this should also be added to tools/include/uapi/linux/fcntl.h.

Cheers,
/fuad
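Also, to check that I understand the intended flow: the VMM applies the
seal right after creating the memfd, and from then on only an explicit
fallocate() can allocate backing memory. A minimal userspace sketch of
that flow (illustrative only: the names, sizes, and the local
F_SEAL_AUTO_ALLOCATE definition are my assumptions, since the new seal
is not in installed headers yet):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef F_SEAL_AUTO_ALLOCATE
#define F_SEAL_AUTO_ALLOCATE 0x0020	/* value taken from this patch */
#endif

int main(void)
{
	const size_t size = 2 * 1024 * 1024;
	int fd = memfd_create("guest-mem", MFD_ALLOW_SEALING);

	if (fd < 0) { perror("memfd_create"); return 1; }
	if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

	/* From here on, stray writes must not allocate memory. */
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_AUTO_ALLOCATE) < 0) {
		perror("F_ADD_SEALS");
		return 1;
	}

	/* Intentional allocation still works: reserve the first half. */
	if (fallocate(fd, 0, 0, size / 2) < 0)
		perror("fallocate");

	/* A write into the reserved half should succeed as before. */
	if (pwrite(fd, "ok", 2, 0) != 2)
		perror("pwrite (reserved)");

	/*
	 * A write into the unreserved half should now fail instead of
	 * silently allocating a page; the exact errno is whatever
	 * shmem_getpage() reports for SGP_NOALLOC hitting a hole.
	 */
	if (pwrite(fd, "ok", 2, size / 2 + 4096) < 0)
		perror("pwrite (hole, expected to fail)");

	close(fd);
	return 0;
}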
>  /* (1U << 31) is reserved for signed error codes */
>
>  /*
> diff --git a/mm/memfd.c b/mm/memfd.c
> index 08f5f8304746..2afd898798e4 100644
> --- a/mm/memfd.c
> +++ b/mm/memfd.c
> @@ -150,7 +150,8 @@ static unsigned int *memfd_file_seals_ptr(struct file *file)
>  		     F_SEAL_SHRINK | \
>  		     F_SEAL_GROW | \
>  		     F_SEAL_WRITE | \
> -		     F_SEAL_FUTURE_WRITE)
> +		     F_SEAL_FUTURE_WRITE | \
> +		     F_SEAL_AUTO_ALLOCATE)
>
>  static int memfd_add_seals(struct file *file, unsigned int seals)
>  {
> diff --git a/mm/shmem.c b/mm/shmem.c
> index a6f565308133..6c8aef15a17d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2051,6 +2051,8 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct inode *inode = file_inode(vma->vm_file);
>  	gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
> +	struct shmem_inode_info *info = SHMEM_I(inode);
> +	enum sgp_type sgp;
>  	int err;
>  	vm_fault_t ret = VM_FAULT_LOCKED;
>
> @@ -2113,7 +2115,12 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  		spin_unlock(&inode->i_lock);
>  	}
>
> -	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
> +	if (unlikely(info->seals & F_SEAL_AUTO_ALLOCATE))
> +		sgp = SGP_NOALLOC;
> +	else
> +		sgp = SGP_CACHE;
> +
> +	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
>  				gfp, vma, vmf, &ret);
>  	if (err)
>  		return vmf_error(err);
> @@ -2459,6 +2466,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
>  	struct inode *inode = mapping->host;
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	pgoff_t index = pos >> PAGE_SHIFT;
> +	enum sgp_type sgp;
>  	int ret = 0;
>
>  	/* i_rwsem is held by caller */
> @@ -2470,7 +2478,11 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
>  			return -EPERM;
>  	}
>
> -	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> +	if (unlikely(info->seals & F_SEAL_AUTO_ALLOCATE))
> +		sgp = SGP_NOALLOC;
> +	else
> +		sgp = SGP_WRITE;
> +	ret = shmem_getpage(inode, index, pagep, sgp);
>
>  	if (ret)
>  		return ret;
> --
> 2.25.1
>
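One more note on the shmem_fault() change above, mostly to confirm my
reading: on a hole, SGP_NOALLOC should make shmem_getpage_gfp() fail
rather than allocate, and since vmf_error() maps any error other than
-ENOMEM to VM_FAULT_SIGBUS, touching a previously mmap-ed but
unallocated page of a sealed memfd should deliver SIGBUS. A hypothetical
continuation of the sketch above (fd and size carried over from there,
with the seal applied and only the first half reserved):

#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative only: catches the SIGBUS expected from the fault path. */
static void sigbus_handler(int sig)
{
	(void)sig;
	static const char msg[] = "got SIGBUS on sealed, unallocated page\n";
	write(STDOUT_FILENO, msg, sizeof(msg) - 1);
	_exit(0);
}

static void probe(int fd, size_t size)
{
	signal(SIGBUS, sigbus_handler);

	char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return; }

	p[0] = 1;		/* reserved half: should succeed */
	p[size / 2] = 1;	/* hole: should SIGBUS, not allocate */

	munmap(p, size);
}

If that matches the intent, it might be worth spelling out the SIGBUS
semantics in the commit message, since userspace will need to be
prepared for it.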