KVM Archive on lore.kernel.org
* [PATCH 1/1] vfio/spapr_tce: convert get_user_pages() --> pin_user_pages()
@ 2020-05-23  1:43 John Hubbard
  2020-05-26 19:28 ` Souptick Joarder
  0 siblings, 1 reply; 3+ messages in thread
From: John Hubbard @ 2020-05-23  1:43 UTC (permalink / raw)
  To: LKML; +Cc: Souptick Joarder, John Hubbard, Alex Williamson, Cornelia Huck, kvm

This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages, and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---

Hi,

I've compile-tested this, but am not able to run-time test it, so any
testing help is much appreciated!

thanks,
John Hubbard
NVIDIA

 drivers/vfio/vfio_iommu_spapr_tce.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 16b3adc508db..fe888b5dcc00 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -383,7 +383,7 @@ static void tce_iommu_unuse_page(struct tce_container *container,
 	struct page *page;
 
 	page = pfn_to_page(hpa >> PAGE_SHIFT);
-	put_page(page);
+	unpin_user_page(page);
 }
 
 static int tce_iommu_prereg_ua_to_hpa(struct tce_container *container,
@@ -486,7 +486,7 @@ static int tce_iommu_use_page(unsigned long tce, unsigned long *hpa)
 	struct page *page = NULL;
 	enum dma_data_direction direction = iommu_tce_direction(tce);
 
-	if (get_user_pages_fast(tce & PAGE_MASK, 1,
+	if (pin_user_pages_fast(tce & PAGE_MASK, 1,
 			direction != DMA_TO_DEVICE ? FOLL_WRITE : 0,
 			&page) != 1)
 		return -EFAULT;
-- 
2.26.2



* Re: [PATCH 1/1] vfio/spapr_tce: convert get_user_pages() --> pin_user_pages()
  2020-05-23  1:43 [PATCH 1/1] vfio/spapr_tce: convert get_user_pages() --> pin_user_pages() John Hubbard
@ 2020-05-26 19:28 ` Souptick Joarder
  2020-05-26 19:45   ` John Hubbard
  0 siblings, 1 reply; 3+ messages in thread
From: Souptick Joarder @ 2020-05-26 19:28 UTC (permalink / raw)
  To: John Hubbard; +Cc: LKML, Alex Williamson, Cornelia Huck, kvm

Hi John,

On Sat, May 23, 2020 at 7:13 AM John Hubbard <jhubbard@nvidia.com> wrote:
>
> This code was using get_user_pages*(), in a "Case 2" scenario
> (DMA/RDMA), using the categorization from [1]. That means that it's
> time to convert the get_user_pages*() + put_page() calls to
> pin_user_pages*() + unpin_user_pages() calls.
>
> There is some helpful background in [2]: basically, this is a small
> part of fixing a long-standing disconnect between pinning pages, and
> file systems' use of those pages.
>
> [1] Documentation/core-api/pin_user_pages.rst
>
> [2] "Explicit pinning of user-space pages":
>     https://lwn.net/Articles/807108/
>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: kvm@vger.kernel.org
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
>
> Hi,
>
> I've compile-tested this, but am not able to run-time test it, so any
> testing help is much appreciated!
>
> thanks,
> John Hubbard
> NVIDIA
>
>  drivers/vfio/vfio_iommu_spapr_tce.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index 16b3adc508db..fe888b5dcc00 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -383,7 +383,7 @@ static void tce_iommu_unuse_page(struct tce_container *container,
>         struct page *page;
>
>         page = pfn_to_page(hpa >> PAGE_SHIFT);
> -       put_page(page);
> +       unpin_user_page(page);
>  }
>
>  static int tce_iommu_prereg_ua_to_hpa(struct tce_container *container,
> @@ -486,7 +486,7 @@ static int tce_iommu_use_page(unsigned long tce, unsigned long *hpa)
>         struct page *page = NULL;
>         enum dma_data_direction direction = iommu_tce_direction(tce);
>
> -       if (get_user_pages_fast(tce & PAGE_MASK, 1,
> +       if (pin_user_pages_fast(tce & PAGE_MASK, 1,
>                         direction != DMA_TO_DEVICE ? FOLL_WRITE : 0,
>                         &page) != 1)
>                 return -EFAULT;

There are a few places where nr_pages is passed as 1 to get_user_pages_fast().
With a similar conversion, those will be changed to pin_user_pages_fast().

Does it make sense to add an inline like pin_user_page_fast(), similar to
get_user_page_fast_only() (now merged in linux-next)?


* Re: [PATCH 1/1] vfio/spapr_tce: convert get_user_pages() --> pin_user_pages()
  2020-05-26 19:28 ` Souptick Joarder
@ 2020-05-26 19:45   ` John Hubbard
  0 siblings, 0 replies; 3+ messages in thread
From: John Hubbard @ 2020-05-26 19:45 UTC (permalink / raw)
  To: Souptick Joarder; +Cc: LKML, Alex Williamson, Cornelia Huck, kvm

On 2020-05-26 12:28, Souptick Joarder wrote:
>> @@ -486,7 +486,7 @@ static int tce_iommu_use_page(unsigned long tce, unsigned long *hpa)
>>          struct page *page = NULL;
>>          enum dma_data_direction direction = iommu_tce_direction(tce);
>>
>> -       if (get_user_pages_fast(tce & PAGE_MASK, 1,
>> +       if (pin_user_pages_fast(tce & PAGE_MASK, 1,
>>                          direction != DMA_TO_DEVICE ? FOLL_WRITE : 0,
>>                          &page) != 1)
>>                  return -EFAULT;
> 
> There are a few places where nr_pages is passed as 1 to get_user_pages_fast().
> With a similar conversion, those will be changed to pin_user_pages_fast().
> 
> Does it make sense to add an inline like pin_user_page_fast(), similar to
> get_user_page_fast_only() (now merged in linux-next)?
> 

Perhaps not *just* yet, IMHO. There are only two places so far: here, and
dax_lock_page(). And we don't expect that many places, either, because
pin_user_pages*(), unlike get_user_pages(), is more likely to operate on
a bunch of pages at once. Although, that could change if we look into the
remaining call sites and find more single-page cases that need a gup-to-pup
conversion.

get_user_pages*() has a few more situations (Case 4, in
Documentation/core-api/pin_user_pages.rst: struct page manipulation) in
which it operates on single pages. Those will remain get_user_pages*()
calls, or perhaps change to get_user_page().


thanks,
-- 
John Hubbard
NVIDIA

