From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>,
Dan Williams <dan.j.williams@intel.com>,
Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
Theodore Ts'o <tytso@mit.edu>, John Hubbard <jhubbard@nvidia.com>,
Michal Hocko <mhocko@suse.com>,
Dave Chinner <david@fromorbit.com>,
linux-xfs@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-nvdimm@lists.01.org, linux-ext4@vger.kernel.org,
linux-mm@kvack.org, Ira Weiny <ira.weiny@intel.com>
Subject: [RFC PATCH v2 12/19] mm/gup: Prep put_user_pages() to take a vaddr_pin struct
Date: Fri, 9 Aug 2019 15:58:26 -0700
Message-ID: <20190809225833.6657-13-ira.weiny@intel.com>
In-Reply-To: <20190809225833.6657-1-ira.weiny@intel.com>
From: Ira Weiny <ira.weiny@intel.com>
Once callers start to use vaddr_pin, the put_user_pages() calls will need
access to this data coming in. Prepare put_user_pages() for this data.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/mm.h |  20 +-------
 mm/gup.c           | 122 ++++++++++++++++++++++++++++++++-------------
 2 files changed, 88 insertions(+), 54 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index befe150d17be..9d37cafbef9a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1064,25 +1064,7 @@ static inline void put_page(struct page *page)
__put_page(page);
}
-/**
- * put_user_page() - release a gup-pinned page
- * @page: pointer to page to be released
- *
- * Pages that were pinned via get_user_pages*() must be released via
- * either put_user_page(), or one of the put_user_pages*() routines
- * below. This is so that eventually, pages that are pinned via
- * get_user_pages*() can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special
- * handling.
- *
- * put_user_page() and put_page() are not interchangeable, despite this early
- * implementation that makes them look the same. put_user_page() calls must
- * be perfectly matched up with get_user_page() calls.
- */
-static inline void put_user_page(struct page *page)
-{
- put_page(page);
-}
+void put_user_page(struct page *page);
void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
bool make_dirty);
diff --git a/mm/gup.c b/mm/gup.c
index a7a9d2f5278c..10cfd30ff668 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -24,30 +24,41 @@
#include "internal.h"
-/**
- * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
- * @pages: array of pages to be maybe marked dirty, and definitely released.
- * @npages: number of pages in the @pages array.
- * @make_dirty: whether to mark the pages dirty
- *
- * "gup-pinned page" refers to a page that has had one of the get_user_pages()
- * variants called on that page.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if @make_dirty is true, and if the page was previously
- * listed as clean. In any case, releases all pages using put_user_page(),
- * possibly via put_user_pages(), for the non-dirty case.
- *
- * Please see the put_user_page() documentation for details.
- *
- * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
- * required, then the caller should a) verify that this is really correct,
- * because _lock() is usually required, and b) hand code it:
- * set_page_dirty_lock(), put_user_page().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
- bool make_dirty)
+static void __put_user_page(struct vaddr_pin *vaddr_pin, struct page *page)
+{
+ page = compound_head(page);
+
+ /*
+ * For devmap-managed pages we need to catch the refcount transition
+ * from GUP_PIN_COUNTING_BIAS to 1; when the refcount reaches one the
+ * page is free and we need to inform the device driver through a
+ * callback. See include/linux/memremap.h and HMM for details.
+ */
+ if (put_devmap_managed_page(page))
+ return;
+
+ if (put_page_testzero(page))
+ __put_page(page);
+}
+
+static void __put_user_pages(struct vaddr_pin *vaddr_pin, struct page **pages,
+ unsigned long npages)
+{
+ unsigned long index;
+
+ /*
+ * TODO: this can be optimized for huge pages: if a series of pages is
+ * physically contiguous and part of the same compound page, then a
+ * single operation to the head page should suffice.
+ */
+ for (index = 0; index < npages; index++)
+ __put_user_page(vaddr_pin, pages[index]);
+}
+
+static void __put_user_pages_dirty_lock(struct vaddr_pin *vaddr_pin,
+ struct page **pages,
+ unsigned long npages,
+ bool make_dirty)
{
unsigned long index;
@@ -58,7 +69,7 @@ void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
*/
if (!make_dirty) {
- put_user_pages(pages, npages);
+ __put_user_pages(vaddr_pin, pages, npages);
return;
}
@@ -86,9 +97,58 @@ void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
*/
if (!PageDirty(page))
set_page_dirty_lock(page);
- put_user_page(page);
+ __put_user_page(vaddr_pin, page);
}
}
+
+/**
+ * put_user_page() - release a gup-pinned page
+ * @page: pointer to page to be released
+ *
+ * Pages that were pinned via get_user_pages*() must be released via
+ * either put_user_page(), or one of the put_user_pages*() routines
+ * below. This is so that eventually, pages that are pinned via
+ * get_user_pages*() can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special
+ * handling.
+ *
+ * put_user_page() and put_page() are not interchangeable, despite this early
+ * implementation that makes them look the same. put_user_page() calls must
+ * be perfectly matched up with get_user_page() calls.
+ */
+void put_user_page(struct page *page)
+{
+ __put_user_page(NULL, page);
+}
+EXPORT_SYMBOL(put_user_page);
+
+/**
+ * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * @pages: array of pages to be maybe marked dirty, and definitely released.
+ * @npages: number of pages in the @pages array.
+ * @make_dirty: whether to mark the pages dirty
+ *
+ * "gup-pinned page" refers to a page that has had one of the get_user_pages()
+ * variants called on that page.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if @make_dirty is true, and if the page was previously
+ * listed as clean. In any case, releases all pages using put_user_page(),
+ * possibly via put_user_pages(), for the non-dirty case.
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
+ * required, then the caller should a) verify that this is really correct,
+ * because _lock() is usually required, and b) hand code it:
+ * set_page_dirty_lock(), put_user_page().
+ *
+ */
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+ bool make_dirty)
+{
+ __put_user_pages_dirty_lock(NULL, pages, npages, make_dirty);
+}
EXPORT_SYMBOL(put_user_pages_dirty_lock);
/**
@@ -102,15 +162,7 @@ EXPORT_SYMBOL(put_user_pages_dirty_lock);
*/
void put_user_pages(struct page **pages, unsigned long npages)
{
- unsigned long index;
-
- /*
- * TODO: this can be optimized for huge pages: if a series of pages is
- * physically contiguous and part of the same compound page, then a
- * single operation to the head page should suffice.
- */
- for (index = 0; index < npages; index++)
- put_user_page(pages[index]);
+ __put_user_pages(NULL, pages, npages);
}
EXPORT_SYMBOL(put_user_pages);
--
2.20.1