linux-mm.kvack.org archive mirror
From: Alistair Popple <apopple@nvidia.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: <linux-mm@kvack.org>, <nouveau@lists.freedesktop.org>,
	<bskeggs@redhat.com>, <akpm@linux-foundation.org>,
	<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<kvm-ppc@vger.kernel.org>, <dri-devel@lists.freedesktop.org>,
	<jhubbard@nvidia.com>, <rcampbell@nvidia.com>,
	<jglisse@redhat.com>, <jgg@nvidia.com>, <daniel@ffwll.ch>,
	<willy@infradead.org>
Subject: Re: [PATCH v6 5/8] mm: Device exclusive memory access
Date: Mon, 22 Mar 2021 21:27:46 +1100
Message-ID: <6616451.iqfUG9VtI1@nvdebian>
In-Reply-To: <20210315074245.GC4136862@infradead.org>

On Monday, 15 March 2021 6:42:45 PM AEDT Christoph Hellwig wrote:
> > +Not all devices support atomic access to system memory. To support atomic
> > +operations to a shared virtual memory page such a device needs access to that
> > +page which is exclusive of any userspace access from the CPU. The
> > +``make_device_exclusive_range()`` function can be used to make a memory range
> > +inaccessible from userspace.
> 
> s/Not all devices/Some devices/ ?

I will reword this. What I was trying to convey is that some devices have
features which allow atomics to be implemented with SW assistance when the
hardware lacks native atomic access to system memory.
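To make that concrete, here is a rough sketch of how a driver might use
make_device_exclusive_range() when servicing a device atomic fault. The
function name my_driver_make_atomic() and the owner argument are illustrative
only, and the sketch assumes (as in this series) that on success the page is
returned locked with a reference held:

static int my_driver_make_atomic(struct mm_struct *mm, unsigned long addr,
				 void *owner)
{
	unsigned long start = addr & PAGE_MASK;
	struct page *page = NULL;
	int npages;

	/* Make the page covering addr inaccessible from userspace. */
	mmap_read_lock(mm);
	npages = make_device_exclusive_range(mm, start, start + PAGE_SIZE,
					     &page, owner);
	mmap_read_unlock(mm);
	if (npages != 1 || !page)
		return -EBUSY;	/* have the device retry the fault */

	/* ... program the device page tables for atomic access ... */

	unlock_page(page);
	put_page(page);
	return 0;
}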

> >  static inline int mm_has_notifiers(struct mm_struct *mm)
> > @@ -528,7 +534,17 @@ static inline void mmu_notifier_range_init_migrate(
> >  {
> >  	mmu_notifier_range_init(range, MMU_NOTIFY_MIGRATE, flags, vma, mm,
> >  				start, end);
> > -	range->migrate_pgmap_owner = pgmap;
> > +	range->owner = pgmap;
> > +}
> > +
> > +static inline void mmu_notifier_range_init_exclusive(
> > +			struct mmu_notifier_range *range, unsigned int flags,
> > +			struct vm_area_struct *vma, struct mm_struct *mm,
> > +			unsigned long start, unsigned long end, void *owner)
> > +{
> > +	mmu_notifier_range_init(range, MMU_NOTIFY_EXCLUSIVE, flags, vma, mm,
> > +				start, end);
> > +	range->owner = owner;
> 
> Maybe just replace mmu_notifier_range_init_migrate with a
> mmu_notifier_range_init_owner helper that takes the owner but does
> not hard code a type?

Ok. That does result in a function which takes a fair number of arguments, but
I guess that's no worse than multiple functions hard-coding the different
types, and it does result in less code overall.
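For reference, the combined helper could look like the sketch below. This is
written from the hunks quoted above rather than from a final revision, so the
exact parameter order is an assumption:

static inline void mmu_notifier_range_init_owner(
			struct mmu_notifier_range *range,
			enum mmu_notifier_event event, unsigned int flags,
			struct vm_area_struct *vma, struct mm_struct *mm,
			unsigned long start, unsigned long end, void *owner)
{
	/* One initialiser for all event types; nothing is hard coded. */
	mmu_notifier_range_init(range, event, flags, vma, mm, start, end);
	range->owner = owner;
}

Callers would then pass MMU_NOTIFY_MIGRATE or MMU_NOTIFY_EXCLUSIVE explicitly
along with the owner.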

> >  		}
> > +	} else if (is_device_exclusive_entry(entry)) {
> > +		page = pfn_swap_entry_to_page(entry);
> > +
> > +		get_page(page);
> > +		rss[mm_counter(page)]++;
> > +
> > +		if (is_writable_device_exclusive_entry(entry) &&
> > +		    is_cow_mapping(vm_flags)) {
> > +			/*
> > +			 * COW mappings require pages in both
> > +			 * parent and child to be set to read.
> > +			 */
> > +			entry = make_readable_device_exclusive_entry(
> > +							swp_offset(entry));
> > +			pte = swp_entry_to_pte(entry);
> > +			if (pte_swp_soft_dirty(*src_pte))
> > +				pte = pte_swp_mksoft_dirty(pte);
> > +			if (pte_swp_uffd_wp(*src_pte))
> > +				pte = pte_swp_mkuffd_wp(pte);
> > +			set_pte_at(src_mm, addr, src_pte, pte);
> > +		}
> 
> Just cosmetic, but I wonder if should factor this code block into
> a little helper.

In that case there are arguably other bits of this function which should be
refactored into helpers as well. Unless you feel strongly about it I would
like to leave this as is and put together a future series to fix this and a
couple of other areas I've noticed that could do with some refactoring/clean
ups.
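For reference, the extraction might look like the sketch below.
copy_device_exclusive_pte() is a hypothetical name and this is illustrative
only, since as noted I'd rather leave the code inline for now:

static void copy_device_exclusive_pte(struct mm_struct *src_mm, pte_t *src_pte,
		unsigned long addr, swp_entry_t entry, unsigned long vm_flags,
		int *rss)
{
	struct page *page = pfn_swap_entry_to_page(entry);
	pte_t pte;

	get_page(page);
	rss[mm_counter(page)]++;

	if (is_writable_device_exclusive_entry(entry) &&
	    is_cow_mapping(vm_flags)) {
		/*
		 * COW mappings require pages in both parent and child
		 * to be set to read.
		 */
		entry = make_readable_device_exclusive_entry(swp_offset(entry));
		pte = swp_entry_to_pte(entry);
		if (pte_swp_soft_dirty(*src_pte))
			pte = pte_swp_mksoft_dirty(pte);
		if (pte_swp_uffd_wp(*src_pte))
			pte = pte_swp_mkuffd_wp(pte);
		set_pte_at(src_mm, addr, src_pte, pte);
	}
}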

> > +
> > +static bool try_to_protect_one(struct page *page, struct vm_area_struct *vma,
> > +			unsigned long address, void *arg)
> > +{
> > +	struct mm_struct *mm = vma->vm_mm;
> > +	struct page_vma_mapped_walk pvmw = {
> > +		.page = page,
> > +		.vma = vma,
> > +		.address = address,
> > +	};
> > +	struct ttp_args *ttp = (struct ttp_args *) arg;
> 
> This cast should not be needed.
> 
> > +	return ttp.valid && (!page_mapcount(page) ? true : false);
> 
> This can be simplified to:
> 
> 	return ttp.valid && !page_mapcount(page);
> 
> > +	npages = get_user_pages_remote(mm, start, npages,
> > +				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
> > +				       pages, NULL, NULL);
> > +	for (i = 0; i < npages; i++, start += PAGE_SIZE) {
> > +		if (!trylock_page(pages[i])) {
> > +			put_page(pages[i]);
> > +			pages[i] = NULL;
> > +			continue;
> > +		}
> > +
> > +		if (!try_to_protect(pages[i], mm, start, arg)) {
> > +			unlock_page(pages[i]);
> > +			put_page(pages[i]);
> > +			pages[i] = NULL;
> > +		}
> 
> Should the trylock_page go into try_to_protect to simplify the loop
> a little?  Also I wonder if we need make_device_exclusive_range or
> should just open code the get_user_pages_remote + try_to_protect
> loop in the callers, as that might allow them to also deduct other
> information about the found pages.

This function has evolved over time and putting the trylock_page into 
try_to_protect does simplify things nicely. I'm not sure what other 
information a caller could deduce through open coding though, but I guess in 
some circumstances it might be possible for callers to skip 
get_user_pages_remote(), which might be a future improvement.

The main reason it looks like this is simply to keep it fairly similar to how 
hmm_range_fault() and migrate_vma() are used, with an array of pages (or pfns) 
which are filled out from the given address range.
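With the trylock folded in, the loop reduces to something like the sketch
below. This assumes try_to_protect() now takes the page lock itself and drops
it again before returning false, so the caller is left with a single cleanup
path:

	npages = get_user_pages_remote(mm, start, npages,
				       FOLL_GET | FOLL_WRITE | FOLL_SPLIT_PMD,
				       pages, NULL, NULL);
	for (i = 0; i < npages; i++, start += PAGE_SIZE) {
		if (!try_to_protect(pages[i], mm, start, arg)) {
			put_page(pages[i]);
			pages[i] = NULL;
		}
	}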
 
> Otherwise looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> 

Thanks.

Thread overview: 16+ messages
2021-03-12  8:38 [PATCH v6 0/8] Add support for SVM atomics in Nouveau Alistair Popple
2021-03-12  8:38 ` [PATCH v6 1/8] mm: Remove special swap entry functions Alistair Popple
2021-03-15  7:27   ` Christoph Hellwig
2021-03-22  9:20     ` Alistair Popple
2021-03-12  8:38 ` [PATCH v6 2/8] mm/swapops: Rework swap entry manipulation code Alistair Popple
2021-03-12  8:38 ` [PATCH v6 3/8] mm/rmap: Split try_to_munlock from try_to_unmap Alistair Popple
2021-03-15  7:28   ` Christoph Hellwig
2021-03-12  8:38 ` [PATCH v6 4/8] mm/rmap: Split migration into its own function Alistair Popple
2021-03-12  8:38 ` [PATCH v6 5/8] mm: Device exclusive memory access Alistair Popple
2021-03-15  7:42   ` Christoph Hellwig
2021-03-22 10:27     ` Alistair Popple [this message]
2021-03-12  8:38 ` [PATCH v6 6/8] mm: Selftests for exclusive device memory Alistair Popple
2021-03-12  8:38 ` [PATCH v6 7/8] nouveau/svm: Refactor nouveau_range_fault Alistair Popple
2021-03-12  8:38 ` [PATCH v6 8/8] nouveau/svm: Implement atomic SVM access Alistair Popple
2021-03-15  7:51   ` Christoph Hellwig
2021-03-22  9:27     ` Alistair Popple
