From: Mike Rapoport <rppt@kernel.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andy Lutomirski <luto@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christopher Lameter <cl@linux.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	Elena Reshetova <elena.reshetova@intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	James Bottomley <jejb@linux.ibm.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Mark Rutland <mark.rutland@arm.com>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Roman Gushchin <guro@fb.com>, Shakeel Butt <shakeelb@google.com>,
	Shuah Khan <shuah@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Tycho Andersen <tycho@tycho.ws>, Will Deacon <will@kernel.org>,
	linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
	x86@kernel.org, Hagen Paul Pfeifer <hagen@jauu.net>
Subject: Re: [PATCH v14 05/10] mm: introduce memfd_secret system call to create "secret" memory areas
Date: Wed, 20 Jan 2021 17:05:10 +0200
Message-ID: <20210120150510.GO1106298@kernel.org>
In-Reply-To: <20210119202213.GI2260413@casper.infradead.org>

On Tue, Jan 19, 2021 at 08:22:13PM +0000, Matthew Wilcox wrote:
> On Thu, Dec 03, 2020 at 08:29:44AM +0200, Mike Rapoport wrote:
> > +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > +{
> > +	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> > +	struct inode *inode = file_inode(vmf->vma->vm_file);
> > +	pgoff_t offset = vmf->pgoff;
> > +	vm_fault_t ret = 0;
> > +	unsigned long addr;
> > +	struct page *page;
> > +	int err;
> > +
> > +	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > +		return vmf_error(-EINVAL);
> > +
> > +	page = find_get_page(mapping, offset);
> > +	if (!page) {
> > +
> > +		page = secretmem_alloc_page(vmf->gfp_mask);
> > +		if (!page)
> > +			return vmf_error(-ENOMEM);
> 
> Just use VM_FAULT_OOM directly.
 
Ok.
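
Something like this, then (untested):

	page = secretmem_alloc_page(vmf->gfp_mask);
	if (!page)
		return VM_FAULT_OOM;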

> > +		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> > +		if (unlikely(err))
> > +			goto err_put_page;
> 
> What if the error is EEXIST because somebody else raced with you to add
> a new page to the page cache?

Right, for -EEXIST I need a retry here, thanks.
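
Roughly like this, I think (untested sketch; the retry label and the
error flow here are mine, not necessarily what the next version will
look like):

retry:
	page = find_get_page(mapping, offset);
	if (!page) {
		page = secretmem_alloc_page(vmf->gfp_mask);
		if (!page)
			return VM_FAULT_OOM;

		err = add_to_page_cache(page, mapping, offset,
					vmf->gfp_mask);
		if (unlikely(err)) {
			put_page(page);
			/* lost the race, use the winner's page */
			if (err == -EEXIST)
				goto retry;
			return vmf_error(err);
		}
		...
	}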

> > +		err = set_direct_map_invalid_noflush(page, 1);
> > +		if (err)
> > +			goto err_del_page_cache;
> 
> Does this work correctly if somebody else has a reference to the page
> in the meantime?

Yes, it does. If somebody else won the race, that page was already dropped
from the direct map, so this call is essentially a no-op. And anyway, the
very next patch changes the way pages are removed from the direct map ;-)

> > +		addr = (unsigned long)page_address(page);
> > +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +
> > +		__SetPageUptodate(page);
> 
> Once you've added it to the cache, somebody else can come along and try
> to lock it.  They will set PageWaiter.  Now you call __SetPageUptodate
> and wipe out their PageWaiter bit.  So you won't wake them up when you
> unlock.
> 
> You can call __SetPageUptodate before adding it to the page cache,
> but once it's visible to another thread, you can't do that.

Will fix.
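
I.e. set the uptodate bit before the page becomes visible to other
threads (sketch):

	__SetPageUptodate(page);
	err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
	if (unlikely(err))
		goto err_put_page;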

> > +		ret = VM_FAULT_LOCKED;
> > +	}
> > +
> > +	vmf->page = page;
> 
> You're supposed to return the page locked, so use find_lock_page() instead
> of find_get_page().

Ok.
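
So the lookup side becomes something like (sketch):

	page = find_lock_page(mapping, offset);
	if (!page) {
		/*
		 * ... allocate and insert as above; add_to_page_cache()
		 * leaves the new page locked
		 */
	}

	vmf->page = page;
	return VM_FAULT_LOCKED;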
 
> > +	return ret;
> > +
> > +err_del_page_cache:
> > +	delete_from_page_cache(page);
> > +err_put_page:
> > +	put_page(page);
> > +	return vmf_error(err);
> > +}

-- 
Sincerely yours,
Mike.
