* [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
2019-07-22 4:30 [PATCH 0/3] put_user_page: new put_user_page_dirty*() helpers john.hubbard
@ 2019-07-22 4:30 ` john.hubbard
2019-07-22 9:33 ` Christoph Hellwig
2019-07-22 4:30 ` [PATCH 2/3] net/xdp: " john.hubbard
2019-07-22 4:30 ` [PATCH 3/3] gup: new put_user_page_dirty*() helpers john.hubbard
2 siblings, 1 reply; 10+ messages in thread
From: john.hubbard @ 2019-07-22 4:30 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh,
Christoph Hellwig, Daniel Vetter, Dan Williams, Dave Chinner,
David Airlie, David S . Miller, Ilya Dryomov, Jan Kara,
Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML,
John Hubbard
From: John Hubbard <jhubbard@nvidia.com>
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().
This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
drivers/gpu/drm/via/via_dmablit.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/via/via_dmablit.c b/drivers/gpu/drm/via/via_dmablit.c
index 062067438f1d..219827ae114f 100644
--- a/drivers/gpu/drm/via/via_dmablit.c
+++ b/drivers/gpu/drm/via/via_dmablit.c
@@ -189,8 +189,9 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
for (i = 0; i < vsg->num_pages; ++i) {
if (NULL != (page = vsg->pages[i])) {
if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
- SetPageDirty(page);
- put_page(page);
+ put_user_pages_dirty(&page, 1);
+ else
+ put_user_page(page);
}
}
/* fall through */
--
2.22.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
2019-07-22 4:30 ` [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*() john.hubbard
@ 2019-07-22 9:33 ` Christoph Hellwig
2019-07-22 18:53 ` John Hubbard
0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2019-07-22 9:33 UTC (permalink / raw)
To: john.hubbard
Cc: Andrew Morton, Alexander Viro, Björn Töpel,
Boaz Harrosh, Christoph Hellwig, Daniel Vetter, Dan Williams,
Dave Chinner, David Airlie, David S . Miller, Ilya Dryomov,
Jan Kara, Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML,
John Hubbard
On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubbard@gmail.com wrote:
> for (i = 0; i < vsg->num_pages; ++i) {
> if (NULL != (page = vsg->pages[i])) {
> if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
> - SetPageDirty(page);
> - put_page(page);
> + put_user_pages_dirty(&page, 1);
> + else
> + put_user_page(page);
> }
Can't we just pass a dirty argument to put_user_pages? Also do we really
need a separate put_user_page for the single page case?
put_user_pages_dirty?
Also the PageReserved check looks bogus, as I can't see how a reserved
page can end up here. So IMHO the above snippet should really look
something like this:
put_user_pages(vsg->pages, vsg->num_pages,
vsg->direction == DMA_FROM_DEVICE);
in the end.
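The consolidated interface suggested above can be sketched in plain userspace C. The mock `struct page` and the helper body below are illustrative assumptions for the sake of a compilable example, not the actual kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock of struct page: just a refcount and a dirty bit. */
struct page {
	int refcount;
	bool dirty;
};

/*
 * Sketch of a put_user_pages() that takes a dirty argument: the helper
 * walks the array itself and dirties each page before dropping the
 * reference, so the caller no longer needs a per-page loop that picks
 * between put_user_pages_dirty() and put_user_page().
 */
static void put_user_pages(struct page **pages, unsigned long npages,
			   bool make_dirty)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		if (pages[i] == NULL)
			continue;
		if (make_dirty)
			pages[i]->dirty = true;
		pages[i]->refcount--;
	}
}
```

With this shape, the whole loop in via_free_sg_info() collapses to the single call quoted above.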
* Re: [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
2019-07-22 9:33 ` Christoph Hellwig
@ 2019-07-22 18:53 ` John Hubbard
2019-07-22 19:07 ` Matthew Wilcox
0 siblings, 1 reply; 10+ messages in thread
From: John Hubbard @ 2019-07-22 18:53 UTC (permalink / raw)
To: Christoph Hellwig, john.hubbard
Cc: Andrew Morton, Alexander Viro, Björn Töpel,
Boaz Harrosh, Daniel Vetter, Dan Williams, Dave Chinner,
David Airlie, David S . Miller, Ilya Dryomov, Jan Kara,
Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML
On 7/22/19 2:33 AM, Christoph Hellwig wrote:
> On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubbard@gmail.com wrote:
>> for (i = 0; i < vsg->num_pages; ++i) {
>> if (NULL != (page = vsg->pages[i])) {
>> if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
>> - SetPageDirty(page);
>> - put_page(page);
>> + put_user_pages_dirty(&page, 1);
>> + else
>> + put_user_page(page);
>> }
>
> Can't we just pass a dirty argument to put_user_pages? Also do we really
Yes, and in fact that would help a lot more than the single page case,
which is really just cosmetic after all.
> need a separate put_user_page for the single page case?
> put_user_pages_dirty?
Not really. I'm still zeroing in on the ideal API for all these call sites,
and I agree that the approach below is cleaner.
>
> Also the PageReserved check looks bogus, as I can't see how a reserved
> page can end up here. So IMHO the above snippet should really look
> something like this:
>
> put_user_pages(vsg->pages, vsg->num_pages,
> vsg->direction == DMA_FROM_DEVICE);
>
> in the end.
>
Agreed.
thanks,
--
John Hubbard
NVIDIA
* Re: [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
2019-07-22 18:53 ` John Hubbard
@ 2019-07-22 19:07 ` Matthew Wilcox
2019-07-22 19:10 ` John Hubbard
0 siblings, 1 reply; 10+ messages in thread
From: Matthew Wilcox @ 2019-07-22 19:07 UTC (permalink / raw)
To: John Hubbard
Cc: Christoph Hellwig, john.hubbard, Andrew Morton, Alexander Viro,
Björn Töpel, Boaz Harrosh, Daniel Vetter, Dan Williams,
Dave Chinner, David Airlie, David S . Miller, Ilya Dryomov,
Jan Kara, Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Miklos Szeredi, Ming Lei,
Sage Weil, Santosh Shilimkar, Yan Zheng, netdev, dri-devel,
linux-mm, linux-rdma, bpf, LKML
On Mon, Jul 22, 2019 at 11:53:54AM -0700, John Hubbard wrote:
> On 7/22/19 2:33 AM, Christoph Hellwig wrote:
> > On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubbard@gmail.com wrote:
> >> for (i = 0; i < vsg->num_pages; ++i) {
> >> if (NULL != (page = vsg->pages[i])) {
> >> if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
> >> - SetPageDirty(page);
> >> - put_page(page);
> >> + put_user_pages_dirty(&page, 1);
> >> + else
> >> + put_user_page(page);
> >> }
> >
> > Can't we just pass a dirty argument to put_user_pages? Also do we really
>
> Yes, and in fact that would help a lot more than the single page case,
> which is really just cosmetic after all.
>
> > need a separate put_user_page for the single page case?
> > put_user_pages_dirty?
>
> Not really. I'm still zeroing in on the ideal API for all these call sites,
> and I agree that the approach below is cleaner.
so enum { CLEAN = 0, DIRTY = 1, LOCK = 2, DIRTY_LOCK = 3 };
?
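Matthew's enum spelled out, with hypothetical names (the actual identifiers are not settled anywhere in this thread). The point is that DIRTY_LOCK is the bitwise OR of the two flags, so a single flags argument covers all four call patterns:

```c
#include <stdbool.h>

/* Hypothetical flags for a single put_user_pages(..., flags) entry point. */
enum pup_flags {
	PUP_FLAGS_CLEAN      = 0,
	PUP_FLAGS_DIRTY      = 1,
	PUP_FLAGS_LOCK       = 2,
	PUP_FLAGS_DIRTY_LOCK = 3,	/* PUP_FLAGS_DIRTY | PUP_FLAGS_LOCK */
};

/* Decode helpers: which actions does a given flags value request? */
static bool pup_wants_dirty(enum pup_flags flags)
{
	return (flags & PUP_FLAGS_DIRTY) != 0;
}

static bool pup_wants_lock(enum pup_flags flags)
{
	return (flags & PUP_FLAGS_LOCK) != 0;
}
```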
* Re: [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
2019-07-22 19:07 ` Matthew Wilcox
@ 2019-07-22 19:10 ` John Hubbard
0 siblings, 0 replies; 10+ messages in thread
From: John Hubbard @ 2019-07-22 19:10 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Christoph Hellwig, john.hubbard, Andrew Morton, Alexander Viro,
Björn Töpel, Boaz Harrosh, Daniel Vetter, Dan Williams,
Dave Chinner, David Airlie, David S . Miller, Ilya Dryomov,
Jan Kara, Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Miklos Szeredi, Ming Lei,
Sage Weil, Santosh Shilimkar, Yan Zheng, netdev, dri-devel,
linux-mm, linux-rdma, bpf, LKML
On 7/22/19 12:07 PM, Matthew Wilcox wrote:
> On Mon, Jul 22, 2019 at 11:53:54AM -0700, John Hubbard wrote:
>> On 7/22/19 2:33 AM, Christoph Hellwig wrote:
>>> On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubbard@gmail.com wrote:
>>>> for (i = 0; i < vsg->num_pages; ++i) {
>>>> if (NULL != (page = vsg->pages[i])) {
>>>> if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
>>>> - SetPageDirty(page);
>>>> - put_page(page);
>>>> + put_user_pages_dirty(&page, 1);
>>>> + else
>>>> + put_user_page(page);
>>>> }
>>>
>>> Can't we just pass a dirty argument to put_user_pages? Also do we really
>>
>> Yes, and in fact that would help a lot more than the single page case,
>> which is really just cosmetic after all.
>>
>>> need a separate put_user_page for the single page case?
>>> put_user_pages_dirty?
>>
>> Not really. I'm still zeroing in on the ideal API for all these call sites,
>> and I agree that the approach below is cleaner.
>
> so enum { CLEAN = 0, DIRTY = 1, LOCK = 2, DIRTY_LOCK = 3 };
> ?
>
Sure. In fact, I just applied something similar to bio_release_pages()
locally, in order to reconcile Christoph's and Jérôme's approaches
(they each needed to add a bool arg), so I'm all about the enums in the
arg lists. :)
I'm going to post that one shortly, let's see how it goes over. heh.
thanks,
--
John Hubbard
NVIDIA
* [PATCH 2/3] net/xdp: convert put_page() to put_user_page*()
2019-07-22 4:30 [PATCH 0/3] put_user_page: new put_user_page_dirty*() helpers john.hubbard
2019-07-22 4:30 ` [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*() john.hubbard
@ 2019-07-22 4:30 ` john.hubbard
2019-07-22 9:34 ` Christoph Hellwig
2019-07-22 4:30 ` [PATCH 3/3] gup: new put_user_page_dirty*() helpers john.hubbard
2 siblings, 1 reply; 10+ messages in thread
From: john.hubbard @ 2019-07-22 4:30 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh,
Christoph Hellwig, Daniel Vetter, Dan Williams, Dave Chinner,
David Airlie, David S . Miller, Ilya Dryomov, Jan Kara,
Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML,
John Hubbard
From: John Hubbard <jhubbard@nvidia.com>
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().
This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").
Cc: Björn Töpel <bjorn.topel@intel.com>
Cc: Magnus Karlsson <magnus.karlsson@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
net/xdp/xdp_umem.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 83de74ca729a..9cbbb96c2a32 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -171,8 +171,7 @@ static void xdp_umem_unpin_pages(struct xdp_umem *umem)
for (i = 0; i < umem->npgs; i++) {
struct page *page = umem->pgs[i];
- set_page_dirty_lock(page);
- put_page(page);
+ put_user_pages_dirty_lock(&page, 1);
}
kfree(umem->pgs);
--
2.22.0
* Re: [PATCH 2/3] net/xdp: convert put_page() to put_user_page*()
2019-07-22 4:30 ` [PATCH 2/3] net/xdp: " john.hubbard
@ 2019-07-22 9:34 ` Christoph Hellwig
0 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2019-07-22 9:34 UTC (permalink / raw)
To: john.hubbard
Cc: Andrew Morton, Alexander Viro, Björn Töpel,
Boaz Harrosh, Christoph Hellwig, Daniel Vetter, Dan Williams,
Dave Chinner, David Airlie, David S . Miller, Ilya Dryomov,
Jan Kara, Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML,
John Hubbard
> diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
> index 83de74ca729a..9cbbb96c2a32 100644
> --- a/net/xdp/xdp_umem.c
> +++ b/net/xdp/xdp_umem.c
> @@ -171,8 +171,7 @@ static void xdp_umem_unpin_pages(struct xdp_umem *umem)
> for (i = 0; i < umem->npgs; i++) {
> struct page *page = umem->pgs[i];
>
> - set_page_dirty_lock(page);
> - put_page(page);
> + put_user_pages_dirty_lock(&page, 1);
Same here: we really should avoid the need for the loop and
do the looping inside the helper.
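Since put_user_pages_dirty_lock() already takes a page array and a count, the caller's loop is redundant. A userspace sketch of the batched call (the mock `struct page` and set_page_dirty_lock() stand in for the real kernel types, purely for illustration):

```c
#include <stdbool.h>

/* Mock of struct page: a refcount and a dirty bit. */
struct page {
	int refcount;
	bool dirty;
};

/* Mock of set_page_dirty_lock(): just records the dirty state. */
static void set_page_dirty_lock(struct page *page)
{
	page->dirty = true;
}

/*
 * The array-based helper loops internally, so xdp_umem_unpin_pages()
 * can shrink to a single call:
 *
 *	put_user_pages_dirty_lock(umem->pgs, umem->npgs);
 */
static void put_user_pages_dirty_lock(struct page **pages,
				      unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		set_page_dirty_lock(pages[i]);
		pages[i]->refcount--;
	}
}
```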
* [PATCH 3/3] gup: new put_user_page_dirty*() helpers
2019-07-22 4:30 [PATCH 0/3] put_user_page: new put_user_page_dirty*() helpers john.hubbard
2019-07-22 4:30 ` [PATCH 1/3] drivers/gpu/drm/via: convert put_page() to put_user_page*() john.hubbard
2019-07-22 4:30 ` [PATCH 2/3] net/xdp: " john.hubbard
@ 2019-07-22 4:30 ` john.hubbard
2019-07-22 19:05 ` John Hubbard
2 siblings, 1 reply; 10+ messages in thread
From: john.hubbard @ 2019-07-22 4:30 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh,
Christoph Hellwig, Daniel Vetter, Dan Williams, Dave Chinner,
David Airlie, David S . Miller, Ilya Dryomov, Jan Kara,
Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML,
John Hubbard
From: John Hubbard <jhubbard@nvidia.com>
While converting call sites to use put_user_page*() [1], quite a few
places ended up needing a single-page routine to put and dirty a
page.
Provide put_user_page_dirty() and put_user_page_dirty_lock(),
and use them in a few places: net/xdp, drm/via/, drivers/infiniband.
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
drivers/gpu/drm/via/via_dmablit.c | 2 +-
drivers/infiniband/core/umem.c | 2 +-
drivers/infiniband/hw/usnic/usnic_uiom.c | 2 +-
include/linux/mm.h | 10 ++++++++++
net/xdp/xdp_umem.c | 2 +-
5 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/via/via_dmablit.c b/drivers/gpu/drm/via/via_dmablit.c
index 219827ae114f..d30b2d75599f 100644
--- a/drivers/gpu/drm/via/via_dmablit.c
+++ b/drivers/gpu/drm/via/via_dmablit.c
@@ -189,7 +189,7 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
for (i = 0; i < vsg->num_pages; ++i) {
if (NULL != (page = vsg->pages[i])) {
if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
- put_user_pages_dirty(&page, 1);
+ put_user_page_dirty(page);
else
put_user_page(page);
}
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..a7337cc3ca20 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -55,7 +55,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
page = sg_page_iter_page(&sg_iter);
if (umem->writable && dirty)
- put_user_pages_dirty_lock(&page, 1);
+ put_user_page_dirty_lock(page);
else
put_user_page(page);
}
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..d2ded624fb2a 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -76,7 +76,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
page = sg_page(sg);
pa = sg_phys(sg);
if (dirty)
- put_user_pages_dirty_lock(&page, 1);
+ put_user_page_dirty_lock(page);
else
put_user_page(page);
usnic_dbg("pa: %pa\n", &pa);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..c0584c6d9d78 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1061,6 +1061,16 @@ void put_user_pages_dirty(struct page **pages, unsigned long npages);
void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
void put_user_pages(struct page **pages, unsigned long npages);
+static inline void put_user_page_dirty(struct page *page)
+{
+ put_user_pages_dirty(&page, 1);
+}
+
+static inline void put_user_page_dirty_lock(struct page *page)
+{
+ put_user_pages_dirty_lock(&page, 1);
+}
+
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define SECTION_IN_PAGE_FLAGS
#endif
diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 9cbbb96c2a32..1d122e52c6de 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -171,7 +171,7 @@ static void xdp_umem_unpin_pages(struct xdp_umem *umem)
for (i = 0; i < umem->npgs; i++) {
struct page *page = umem->pgs[i];
- put_user_pages_dirty_lock(&page, 1);
+ put_user_page_dirty_lock(page);
}
kfree(umem->pgs);
--
2.22.0
* Re: [PATCH 3/3] gup: new put_user_page_dirty*() helpers
2019-07-22 4:30 ` [PATCH 3/3] gup: new put_user_page_dirty*() helpers john.hubbard
@ 2019-07-22 19:05 ` John Hubbard
0 siblings, 0 replies; 10+ messages in thread
From: John Hubbard @ 2019-07-22 19:05 UTC (permalink / raw)
To: john.hubbard, Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh,
Christoph Hellwig, Daniel Vetter, Dan Williams, Dave Chinner,
David Airlie, David S . Miller, Ilya Dryomov, Jan Kara,
Jason Gunthorpe, Jens Axboe, Jérôme Glisse,
Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar,
Yan Zheng, netdev, dri-devel, linux-mm, linux-rdma, bpf, LKML
On 7/21/19 9:30 PM, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
>
> While converting call sites to use put_user_page*() [1], quite a few
> places ended up needing a single-page routine to put and dirty a
> page.
>
> Provide put_user_page_dirty() and put_user_page_dirty_lock(),
> and use them in a few places: net/xdp, drm/via/, drivers/infiniband.
>
Please disregard this one, I'm going to drop it, as per the
discussion in patch 1.
thanks,
--
John Hubbard
NVIDIA
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Jan Kara <jack@suse.cz>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
> drivers/gpu/drm/via/via_dmablit.c | 2 +-
> drivers/infiniband/core/umem.c | 2 +-
> drivers/infiniband/hw/usnic/usnic_uiom.c | 2 +-
> include/linux/mm.h | 10 ++++++++++
> net/xdp/xdp_umem.c | 2 +-
> 5 files changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/via/via_dmablit.c b/drivers/gpu/drm/via/via_dmablit.c
> index 219827ae114f..d30b2d75599f 100644
> --- a/drivers/gpu/drm/via/via_dmablit.c
> +++ b/drivers/gpu/drm/via/via_dmablit.c
> @@ -189,7 +189,7 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
> for (i = 0; i < vsg->num_pages; ++i) {
> if (NULL != (page = vsg->pages[i])) {
> if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
> - put_user_pages_dirty(&page, 1);
> + put_user_page_dirty(page);
> else
> put_user_page(page);
> }
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index 08da840ed7ee..a7337cc3ca20 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -55,7 +55,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
> for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
> page = sg_page_iter_page(&sg_iter);
> if (umem->writable && dirty)
> - put_user_pages_dirty_lock(&page, 1);
> + put_user_page_dirty_lock(page);
> else
> put_user_page(page);
> }
> diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
> index 0b0237d41613..d2ded624fb2a 100644
> --- a/drivers/infiniband/hw/usnic/usnic_uiom.c
> +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
> @@ -76,7 +76,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
> page = sg_page(sg);
> pa = sg_phys(sg);
> if (dirty)
> - put_user_pages_dirty_lock(&page, 1);
> + put_user_page_dirty_lock(page);
> else
> put_user_page(page);
> usnic_dbg("pa: %pa\n", &pa);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0334ca97c584..c0584c6d9d78 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1061,6 +1061,16 @@ void put_user_pages_dirty(struct page **pages, unsigned long npages);
> void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
> void put_user_pages(struct page **pages, unsigned long npages);
>
> +static inline void put_user_page_dirty(struct page *page)
> +{
> + put_user_pages_dirty(&page, 1);
> +}
> +
> +static inline void put_user_page_dirty_lock(struct page *page)
> +{
> + put_user_pages_dirty_lock(&page, 1);
> +}
> +
> #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> #define SECTION_IN_PAGE_FLAGS
> #endif
> diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
> index 9cbbb96c2a32..1d122e52c6de 100644
> --- a/net/xdp/xdp_umem.c
> +++ b/net/xdp/xdp_umem.c
> @@ -171,7 +171,7 @@ static void xdp_umem_unpin_pages(struct xdp_umem *umem)
> for (i = 0; i < umem->npgs; i++) {
> struct page *page = umem->pgs[i];
>
> - put_user_pages_dirty_lock(&page, 1);
> + put_user_page_dirty_lock(page);
> }
>
> kfree(umem->pgs);
>