linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 0/6] RFC v2: mm: gup/dma tracking
@ 2019-02-04  5:21 john.hubbard
  2019-02-04  5:21 ` [PATCH 1/6] mm: introduce put_user_page*(), placeholder versions john.hubbard
                   ` (8 more replies)
  0 siblings, 9 replies; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

From: John Hubbard <jhubbard@nvidia.com>

Hi,

I'm calling this RFC v2, even though with all the discussion it actually
feels like about v7 or so. But now that the dust has settled, it's time
to show a surprisingly small, cleaner approach. Jan and Jerome came up
with a scheme (discussed in more detail in the "track gup-pinned pages"
commit description) that does not require any additional struct page
fields. This approach has the additional advantage of being very
lightweight and therefore fast, because:

    a) it mostly just does atomics,

    b) unlike previous approaches, there is no need to remove pages from,
       and re-add them to, the LRUs, and

    c) it uses the same lock-free algorithms that get_user_pages already
       relies upon.

This RFC shows the following:

1) A patch to get the call site conversion started:

    mm: introduce put_user_page*(), placeholder versions

2) A sample call site conversion:

    infiniband/mm: convert put_page() to put_user_page*()

  ...NOT shown: all of the other 100+ gup call site conversions. Again,
  those are in various states of progress and disrepair, at [1].

3) Tracking, instrumentation, and documentation patches, once all the call
   sites have been converted.

4) A small refactoring patch that I'm also going to submit separately, for
   the page_cache_add_speculative() routine.

This seems to be working pretty well here.  I've converted enough call sites
(there is a git repo [1] with those conversions; it gets rebased frequently,
but it's there if you really want to try some early testing) to run things
such as fio.

Performance: here is an fio run on an NVMe drive, using this for the fio
configuration file:

    [reader]
    direct=1
    ioengine=libaio
    blocksize=4096
    size=1g
    numjobs=1
    rw=read
    iodepth=64

reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.3
Starting 1 process
Jobs: 1 (f=1)
reader: (groupid=0, jobs=1): err= 0: pid=7011: Sun Feb  3 20:36:51 2019
   read: IOPS=190k, BW=741MiB/s (778MB/s)(1024MiB/1381msec)
    slat (nsec): min=2716, max=57255, avg=4048.14, stdev=1084.10
    clat (usec): min=20, max=12485, avg=332.63, stdev=191.77
     lat (usec): min=22, max=12498, avg=336.72, stdev=192.07
    clat percentiles (usec):
     |  1.00th=[  322],  5.00th=[  322], 10.00th=[  322], 20.00th=[  326],
     | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
     | 70.00th=[  326], 80.00th=[  330], 90.00th=[  330], 95.00th=[  330],
     | 99.00th=[  478], 99.50th=[  717], 99.90th=[ 1074], 99.95th=[ 1090],
     | 99.99th=[12256]
   bw (  KiB/s): min=730152, max=776512, per=99.22%, avg=753332.00, stdev=32781.47, samples=2
   iops        : min=182538, max=194128, avg=188333.00, stdev=8195.37, samples=2
  lat (usec)   : 50=0.01%, 100=0.01%, 250=0.07%, 500=99.26%, 750=0.38%
  lat (usec)   : 1000=0.02%
  lat (msec)   : 2=0.24%, 20=0.02%
  cpu          : usr=15.07%, sys=84.13%, ctx=10, majf=0, minf=74
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=741MiB/s (778MB/s), 741MiB/s-741MiB/s (778MB/s-778MB/s), io=1024MiB (1074MB), run=1381-1381msec

Disk stats (read/write):
  nvme0n1: ios=216966/0, merge=0/0, ticks=6112/0, in_queue=704, util=91.34%

A write-up of the larger problem follows (co-written with Jérôme Glisse):

Overview
========

Some kernel components (file systems, device drivers) need to access
memory that is specified via process virtual address. For a long time, the
API to achieve that was get_user_pages ("GUP") and its variations. However,
GUP has critical limitations that have been overlooked; in particular, GUP
does not interact correctly with filesystems in all situations. That means
that file-backed memory + GUP is a recipe for potential problems, some of
which have already occurred in the field.

GUP was first introduced for Direct IO (O_DIRECT), allowing filesystem code
to get the struct page behind a virtual address and to let storage hardware
perform a direct copy to or from that page. This is a short-lived access
pattern, and as such, the window for a concurrent writeback of a GUP'd page
was small enough that there were not (we think) any reported problems.
Also, userspace was expected to understand and accept that Direct IO was
not synchronized with memory-mapped access to that data, nor with any
process address space changes such as munmap(), mremap(), etc.

Over the years, more GUP uses have appeared (virtualization, device
drivers, RDMA) that can keep the pages they get via GUP for a long period
of time (seconds, minutes, hours, days, ...). This long-term pinning makes
an underlying design problem more obvious.

In fact, there are a number of key problems inherent to GUP:

Interactions with file systems
==============================

File systems expect to be able to write back data, both to reclaim pages,
and for data integrity. Allowing other hardware (NICs, GPUs, etc) to gain
write access to the file memory pages means that such hardware can dirty the
pages, without the filesystem being aware. This can, in some cases
(depending on filesystem, filesystem options, block device, block device
options, and other variables), lead to data corruption, and also to kernel
bugs of the form:

    kernel BUG at /build/linux-fQ94TU/linux-4.4.0/fs/ext4/inode.c:1899!
    backtrace:
        ext4_writepage
        __writepage
        write_cache_pages
        ext4_writepages
        do_writepages
        __writeback_single_inode
        writeback_sb_inodes
        __writeback_inodes_wb
        wb_writeback
        wb_workfn
        process_one_work
        worker_thread
        kthread
        ret_from_fork

...which is due to the file system asserting that there are still buffer
heads attached:

        ({                                                      \
                BUG_ON(!PagePrivate(page));                     \
                ((struct buffer_head *)page_private(page));     \
        })

Dave Chinner's description of this is very clear:

    "The fundamental issue is that ->page_mkwrite must be called on every
    write access to a clean file backed page, not just the first one.
    How long the GUP reference lasts is irrelevant, if the page is clean
    and you need to dirty it, you must call ->page_mkwrite before it is
    marked writeable and dirtied. Every. Time."

This is just one symptom of the larger design problem: filesystems do not
actually support get_user_pages() being called on their pages, and letting
hardware write directly to those pages--even though that pattern has been
going on since about 2005 or so.

Long term GUP
=============

Long term GUP is an issue when FOLL_WRITE is specified to GUP (so, a
writeable mapping is created), and the pages are file-backed. That can lead
to filesystem corruption. What happens is that when a file-backed page is
being written back, it is first mapped read-only in all of the CPU page
tables; the file system then assumes that nobody can write to the page, and
that the page content is therefore stable. Unfortunately, the GUP callers
generally do not monitor changes to the CPU page tables; they instead
assume that the following pattern is safe (it's not):

    get_user_pages()

    Hardware can keep a reference to those pages for a very long time,
    and write to them at any time. Because "hardware" here means "devices
    that are not a CPU", this activity occurs without any interaction
    with the kernel's file system code.

    for each page
        set_page_dirty
        put_page()

In fact, the GUP documentation even recommends that pattern.
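
Here is a minimal sketch of that pattern as driver code. The function and
variable names are hypothetical, and the exact get_user_pages_fast()
signature varies across kernel versions; this only illustrates the
sequence described above, not code from any particular driver:

    /* Hypothetical driver code, illustrating the pattern above. */
    static int example_pin_for_dma(unsigned long uaddr, struct page **pages,
                                   int npages)
    {
            int i, pinned;

            /* Third argument requests write access. */
            pinned = get_user_pages_fast(uaddr, npages, 1, pages);
            if (pinned <= 0)
                    return pinned;

            /*
             * Program the device with these pages. The hardware may then
             * write to them for seconds, minutes, or longer, with no
             * further interaction with the kernel's filesystem code.
             * (In a real driver, the release below typically happens
             * much later, when the device is done.)
             */

            for (i = 0; i < pinned; i++) {
                    set_page_dirty_lock(pages[i]);
                    put_page(pages[i]);
            }
            return 0;
    }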

Anyway, the file system assumes that the page is stable (nothing is writing
to the page), and that is a problem: stable page content is necessary for
many filesystem actions during writeback, such as checksum, encryption,
RAID striping, etc. Furthermore, filesystem features like COW (copy on
write) or snapshots also rely on being able to use a new page as the memory
for that range inside the file.

Corruption during write back is clearly possible here. To solve that, one
idea is to identify pages that have active GUP, so that we can use a bounce
page to write stable data to the filesystem. The filesystem would work
on the bounce page, while any of the active GUP users might write to the
original page. This would avoid the stable page violation problem, but note
that it is only part of the overall solution, because other problems
remain.

Other filesystem features that need to replace the page with a new one can
be inhibited for pages that are GUP-pinned. This will, however, alter and
limit some of those filesystem features. The only fix for that would be to
require that GUP users monitor and respond to CPU page table updates. Subsystems
such as ODP and HMM do this, for example. This aspect of the problem is
still under discussion.

Direct IO
=========

Direct IO can cause corruption if userspace does Direct IO that writes to
a range of virtual addresses that are mmap'd to a file.  The pages written
to are file-backed pages that can be under write back while the Direct IO
is taking place.  Here, Direct IO races with write back: it calls
GUP before page_mkclean() has replaced the CPU pte with a read-only entry.
The race window is pretty small, which is probably why years have gone by
before we noticed this problem: Direct IO is generally very quick, and
tends to finish up before the filesystem gets around to doing anything with
the page contents.  However, it's still a real problem.  The solution is
to never let GUP return pages that are under write back, but instead,
force GUP to take a write fault on those pages.  That way, GUP will
properly synchronize with the active write back.  This does not change the
required GUP behavior, it just avoids that race.
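
To restate that solution in code form: conceptually, the gup fast path
would refuse pages that are currently under write back and fall back to
the slow path, which takes the write fault and therefore synchronizes
with the write back. The helper below is hypothetical (it is not part of
the patches as posted); it is only a sketch of the idea described above:

    /*
     * Hypothetical helper, sketching the "back off if under write back"
     * idea: the gup fast path would drop its speculative reference and
     * retry via the slow path when this returns false.
     */
    static bool gup_fast_page_allowed(struct page *page)
    {
            return !PageWriteback(page);
    }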

What this patchset does
=======================

This patchset overloads page->_refcount, in order to track GUP-pinned
pages.

This patchset checks if the page is under write back, and if so, it backs
off and forces a page fault (via the GUP slow path). Before this patchset,
GUP might have returned the struct page because page_mkclean() had not yet
updated the CPU page table. After this patch, GUP no longer races with
page_mkclean(), and thus any user of GUP properly synchronizes with active
write back (this is not only useful for Direct IO, but also for other users
of GUP).

This patchset does not include any of the filesystem changes needed to
fix the issues. That is left as a separate patchset that will build on
this new gup-pin tracking.


Changes from earlier versions
=============================

-- Fixed up kerneldoc issues in put_user_page*() functions, in response
   to Mike Rapoport's review.

-- Use overloaded page->_refcount to track gup-pinned pages. This avoids the
   need for an extra page flag, and also avoids the need for an extra counting
   field.

[1] git@github.com:johnhubbard/linux.git (branch: gup_dma_core)
[2] https://lwn.net/Articles/753027/ "The trouble with get_user_pages()"

Suggested-by: Jan Kara <jack@suse.cz>
Suggested-by: Jérôme Glisse <jglisse@redhat.com>

Cc: Christian Benvenuti <benve@cisco.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Tom Talpey <tom@talpey.com>


John Hubbard (6):
  mm: introduce put_user_page*(), placeholder versions
  infiniband/mm: convert put_page() to put_user_page*()
  mm: page_cache_add_speculative(): refactoring
  mm/gup: track gup-pinned pages
  mm/gup: /proc/vmstat support for get/put user pages
  mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS

 Documentation/vm/get_user_pages.rst         | 197 ++++++++++++++++++++
 Documentation/vm/index.rst                  |   1 +
 MAINTAINERS                                 |  10 +
 drivers/infiniband/core/umem.c              |   7 +-
 drivers/infiniband/core/umem_odp.c          |   2 +-
 drivers/infiniband/hw/hfi1/user_pages.c     |  11 +-
 drivers/infiniband/hw/mthca/mthca_memfree.c |   6 +-
 drivers/infiniband/hw/qib/qib_user_pages.c  |  11 +-
 drivers/infiniband/hw/qib/qib_user_sdma.c   |   6 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c    |   7 +-
 include/linux/mm.h                          |  57 ++++++
 include/linux/mmzone.h                      |   5 +
 include/linux/pagemap.h                     |  36 ++--
 mm/gup.c                                    |  80 ++++++--
 mm/swap.c                                   | 104 +++++++++++
 mm/vmstat.c                                 |   5 +
 16 files changed, 482 insertions(+), 63 deletions(-)
 create mode 100644 Documentation/vm/get_user_pages.rst

-- 
2.20.1


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 1/6] mm: introduce put_user_page*(), placeholder versions
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
@ 2019-02-04  5:21 ` john.hubbard
  2019-02-04  5:21 ` [PATCH 2/6] infiniband/mm: convert put_page() to put_user_page*() john.hubbard
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

From: John Hubbard <jhubbard@nvidia.com>

Introduces put_user_page(), which simply calls put_page().
This provides a way to update all get_user_pages*() callers,
so that they call put_user_page(), instead of put_page().

Also introduces put_user_pages(), and a few dirty/locked variations,
as a replacement for release_pages(), and also as a replacement
for open-coded loops that release multiple pages.
These may be used for subsequent performance improvements,
via batching of pages to be released.

This is the first step of fixing a problem (also described in [1] and
[2]) with interactions between get_user_pages ("gup") and filesystems.

Problem description: let's start with a bug report. Below is what happens
sometimes, under memory pressure, when a driver pins some pages via gup,
and then marks those pages dirty, and releases them. Note that the gup
documentation actually recommends that pattern. The problem is that the
filesystem may do a writeback while the pages are gup-pinned, and then the
filesystem believes that the pages are clean. So, when the driver later
marks the pages as dirty, that conflicts with the filesystem's page
tracking and results in a BUG(), like this one that I experienced:

    kernel BUG at /build/linux-fQ94TU/linux-4.4.0/fs/ext4/inode.c:1899!
    backtrace:
        ext4_writepage
        __writepage
        write_cache_pages
        ext4_writepages
        do_writepages
        __writeback_single_inode
        writeback_sb_inodes
        __writeback_inodes_wb
        wb_writeback
        wb_workfn
        process_one_work
        worker_thread
        kthread
        ret_from_fork

...which is due to the file system asserting that there are still buffer
heads attached:

        ({                                                      \
                BUG_ON(!PagePrivate(page));                     \
                ((struct buffer_head *)page_private(page));     \
        })

Dave Chinner's description of this is very clear:

    "The fundamental issue is that ->page_mkwrite must be called on every
    write access to a clean file backed page, not just the first one.
    How long the GUP reference lasts is irrelevant, if the page is clean
    and you need to dirty it, you must call ->page_mkwrite before it is
    marked writeable and dirtied. Every. Time."

This is just one symptom of the larger design problem: filesystems do not
actually support get_user_pages() being called on their pages, and letting
hardware write directly to those pages--even though that pattern has been
going on since about 2005 or so.

The steps to fix it are:

1) (This patch): provide put_user_page*() routines, intended to be used
   for releasing pages that were pinned via get_user_pages*().

2) Convert all of the call sites for get_user_pages*(), to
   invoke put_user_page*(), instead of put_page(). This involves dozens of
   call sites, and will take some time; a sketch of one such conversion is
   shown below, after this list.

3) After (2) is complete, use get_user_pages*() and put_user_page*() to
   implement tracking of these pages. This tracking will be separate from
   the existing struct page refcounting.

4) Use the tracking and identification of these pages, to implement
   special handling (especially in writeback paths) when the pages are
   backed by a filesystem.
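
To illustrate step (2), here is a sketch of one such conversion, using a
made-up driver helper (the real conversions are in the next patch, for
infiniband). The "before" form is the open-coded loop that many drivers
use today; the "after" form uses the routines added by this patch:

    /* Before: open-coded release loop, using put_page(). */
    static void example_release_before(struct page **pages,
                                       unsigned long npages, bool dirty)
    {
            unsigned long i;

            for (i = 0; i < npages; i++) {
                    if (dirty)
                            set_page_dirty_lock(pages[i]);
                    put_page(pages[i]);
            }
    }

    /* After: the same helper, using the new put_user_page*() calls. */
    static void example_release_after(struct page **pages,
                                      unsigned long npages, bool dirty)
    {
            if (dirty)
                    put_user_pages_dirty_lock(pages, npages);
            else
                    put_user_pages(pages, npages);
    }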

[1] https://lwn.net/Articles/774411/ : "DMA and get_user_pages()"
[2] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>

Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm.h | 24 ++++++++++++++
 mm/swap.c          | 82 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..809b7397d41e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -993,6 +993,30 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/**
+ * put_user_page() - release a gup-pinned page
+ * @page:            pointer to page to be released
+ *
+ * Pages that were pinned via get_user_pages*() must be released via
+ * either put_user_page(), or one of the put_user_pages*() routines
+ * below. This is so that eventually, pages that are pinned via
+ * get_user_pages*() can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special
+ * handling.
+ *
+ * put_user_page() and put_page() are not interchangeable, despite this early
+ * implementation that makes them look the same. put_user_page() calls must
+ * be perfectly matched up with get_user_page() calls.
+ */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+void put_user_pages_dirty(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages(struct page **pages, unsigned long npages);
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
diff --git a/mm/swap.c b/mm/swap.c
index 4929bc1be60e..7c42ca45bb89 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -133,6 +133,88 @@ void put_pages_list(struct list_head *pages)
 }
 EXPORT_SYMBOL(put_pages_list);
 
+typedef int (*set_dirty_func)(struct page *page);
+
+static void __put_user_pages_dirty(struct page **pages,
+				   unsigned long npages,
+				   set_dirty_func sdf)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+
+		if (!PageDirty(page))
+			sdf(page);
+
+		put_user_page(page);
+	}
+}
+
+/**
+ * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
+ * @pages:  array of pages to be marked dirty and released.
+ * @npages: number of pages in the @pages array.
+ *
+ * "gup-pinned page" refers to a page that has had one of the get_user_pages()
+ * variants called on that page.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if it was previously listed as clean. Then, release
+ * the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * set_page_dirty(), which does not lock the page, is used here.
+ * Therefore, it is the caller's responsibility to ensure that this is
+ * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ *
+ */
+void put_user_pages_dirty(struct page **pages, unsigned long npages)
+{
+	__put_user_pages_dirty(pages, npages, set_page_dirty);
+}
+EXPORT_SYMBOL(put_user_pages_dirty);
+
+/**
+ * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
+ * @pages:  array of pages to be marked dirty and released.
+ * @npages: number of pages in the @pages array.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if it was previously listed as clean. Then, release
+ * the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * This is just like put_user_pages_dirty(), except that it invokes
+ * set_page_dirty_lock(), instead of set_page_dirty().
+ *
+ */
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
+{
+	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+}
+EXPORT_SYMBOL(put_user_pages_dirty_lock);
+
+/**
+ * put_user_pages() - release an array of gup-pinned pages.
+ * @pages:  array of pages to be released.
+ * @npages: number of pages in the @pages array.
+ *
+ * For each page in the @pages array, release the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ */
+void put_user_pages(struct page **pages, unsigned long npages)
+{
+	unsigned long index;
+
+	for (index = 0; index < npages; index++)
+		put_user_page(pages[index]);
+}
+EXPORT_SYMBOL(put_user_pages);
+
 /*
  * get_kernel_pages() - pin kernel pages in memory
  * @kiov:	An array of struct kvec structures
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 2/6] infiniband/mm: convert put_page() to put_user_page*()
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
  2019-02-04  5:21 ` [PATCH 1/6] mm: introduce put_user_page*(), placeholder versions john.hubbard
@ 2019-02-04  5:21 ` john.hubbard
  2019-02-04  5:21 ` [PATCH 3/6] mm: page_cache_add_speculative(): refactoring john.hubbard
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard, Jason Gunthorpe

From: John Hubbard <jhubbard@nvidia.com>

For infiniband code that retains pages via get_user_pages*(),
release those pages via the new put_user_page(), or
put_user_pages*(), instead of put_page().

This is a tiny part of the second step of fixing the problem described
in [1]. The steps are:

1) Provide put_user_page*() routines, intended to be used
   for releasing pages that were pinned via get_user_pages*().

2) Convert all of the call sites for get_user_pages*(), to
   invoke put_user_page*(), instead of put_page(). This involves dozens of
   call sites, and will take some time.

3) After (2) is complete, use get_user_pages*() and put_user_page*() to
   implement tracking of these pages. This tracking will be separate from
   the existing struct page refcounting.

4) Use the tracking and identification of these pages, to implement
   special handling (especially in writeback paths) when the pages are
   backed by a filesystem. Again, [1] provides details as to why that is
   desirable.

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Christian Benvenuti <benve@cisco.com>

Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/infiniband/core/umem.c              |  7 ++++---
 drivers/infiniband/core/umem_odp.c          |  2 +-
 drivers/infiniband/hw/hfi1/user_pages.c     | 11 ++++-------
 drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
 drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ++++-------
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  6 +++---
 drivers/infiniband/hw/usnic/usnic_uiom.c    |  7 ++++---
 7 files changed, 23 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index c6144df47ea4..c2898bc7b3b2 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -58,9 +58,10 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
 
 		page = sg_page(sg);
-		if (!PageDirty(page) && umem->writable && dirty)
-			set_page_dirty_lock(page);
-		put_page(page);
+		if (umem->writable && dirty)
+			put_user_pages_dirty_lock(&page, 1);
+		else
+			put_user_page(page);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index acb882f279cb..d32757c1f77e 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -663,7 +663,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 					ret = -EFAULT;
 					break;
 				}
-				put_page(local_page_list[j]);
+				put_user_page(local_page_list[j]);
 				continue;
 			}
 
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index e341e6dcc388..99ccc0483711 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	size_t i;
-
-	for (i = 0; i < npages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, npages);
+	else
+		put_user_pages(p, npages);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		down_write(&mm->mmap_sem);
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index 112d2f38e0de..99108f3dcf01 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 
 	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
 	if (ret < 0) {
-		put_page(pages[0]);
+		put_user_page(pages[0]);
 		goto out;
 	}
 
@@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 				 mthca_uarc_virt(dev, uar, i));
 	if (ret) {
 		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-		put_page(sg_page(&db_tab->page[i].mem));
+		put_user_page(sg_page(&db_tab->page[i].mem));
 		goto out;
 	}
 
@@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
 		if (db_tab->page[i].uvirt) {
 			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
 			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-			put_page(sg_page(&db_tab->page[i].mem));
+			put_user_page(sg_page(&db_tab->page[i].mem));
 		}
 	}
 
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 16543d5e80c3..1a5c64c8695f 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,13 +40,10 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	size_t i;
-
-	for (i = 0; i < num_pages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, num_pages);
+	else
+		put_user_pages(p, num_pages);
 }
 
 /*
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 31c523b2a9f5..a1a1ec4adffc 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -320,7 +320,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 		 * the caller can ignore this page.
 		 */
 		if (put) {
-			put_page(page);
+			put_user_page(page);
 		} else {
 			/* coalesce case */
 			kunmap(page);
@@ -634,7 +634,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 			kunmap(pkt->addr[i].page);
 
 		if (pkt->addr[i].put_page)
-			put_page(pkt->addr[i].page);
+			put_user_page(pkt->addr[i].page);
 		else
 			__free_page(pkt->addr[i].page);
 	} else if (pkt->addr[i].kvaddr) {
@@ -709,7 +709,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_page(pages[i++]);
+		put_user_page(pages[i++]);
 
 done:
 	return ret;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 49275a548751..2ef8d31dc838 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -77,9 +77,10 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (!PageDirty(page) && dirty)
-				set_page_dirty_lock(page);
-			put_page(page);
+			if (dirty)
+				put_user_pages_dirty_lock(&page, 1);
+			else
+				put_user_page(page);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 3/6] mm: page_cache_add_speculative(): refactoring
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
  2019-02-04  5:21 ` [PATCH 1/6] mm: introduce put_user_page*(), placeholder versions john.hubbard
  2019-02-04  5:21 ` [PATCH 2/6] infiniband/mm: convert put_page() to put_user_page*() john.hubbard
@ 2019-02-04  5:21 ` john.hubbard
  2019-02-04  5:21 ` [PATCH 4/6] mm/gup: track gup-pinned pages john.hubbard
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard, Nick Piggin, Dave Kleikamp,
	Benjamin Herrenschmidt

From: John Hubbard <jhubbard@nvidia.com>

This combines the common elements of these routines:

    page_cache_get_speculative()
    page_cache_add_speculative()

This was anticipated by the original author, as shown by the comment
in commit ce0ad7f095258 ("powerpc/mm: Lockless get_user_pages_fast()
for 64-bit (v3)"):

    "Same as above, but add instead of inc (could just be merged)"

An upcoming patch for get_user_pages() tracking will use these routines,
so let's remove the duplication now.

There is no intention to introduce any behavioral change, but there is a
small risk of that, due to slightly differing ways of expressing the
TINY_RCU and related configurations.

Cc: Nick Piggin <npiggin@suse.de>
Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/pagemap.h | 33 +++++++++++----------------------
 1 file changed, 11 insertions(+), 22 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e2d7039af6a3..5c8a9b59cbdc 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -164,8 +164,10 @@ void release_pages(struct page **pages, int nr);
  * will find the page or it will not. Likewise, the old find_get_page could run
  * either before the insertion or afterwards, depending on timing.
  */
-static inline int page_cache_get_speculative(struct page *page)
+static inline int __page_cache_add_speculative(struct page *page, int count)
 {
+	VM_BUG_ON(in_interrupt());
+
 #ifdef CONFIG_TINY_RCU
 # ifdef CONFIG_PREEMPT_COUNT
 	VM_BUG_ON(!in_atomic() && !irqs_disabled());
@@ -180,10 +182,10 @@ static inline int page_cache_get_speculative(struct page *page)
 	 * SMP requires.
 	 */
 	VM_BUG_ON_PAGE(page_count(page) == 0, page);
-	page_ref_inc(page);
+	page_ref_add(page, count);
 
 #else
-	if (unlikely(!get_page_unless_zero(page))) {
+	if (unlikely(!page_ref_add_unless(page, count, 0))) {
 		/*
 		 * Either the page has been freed, or will be freed.
 		 * In either case, retry here and the caller should
@@ -197,27 +199,14 @@ static inline int page_cache_get_speculative(struct page *page)
 	return 1;
 }
 
-/*
- * Same as above, but add instead of inc (could just be merged)
- */
-static inline int page_cache_add_speculative(struct page *page, int count)
+static inline int page_cache_get_speculative(struct page *page)
 {
-	VM_BUG_ON(in_interrupt());
-
-#if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
-	VM_BUG_ON_PAGE(page_count(page) == 0, page);
-	page_ref_add(page, count);
-
-#else
-	if (unlikely(!page_ref_add_unless(page, count, 0)))
-		return 0;
-#endif
-	VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page);
+	return __page_cache_add_speculative(page, 1);
+}
 
-	return 1;
+static inline int page_cache_add_speculative(struct page *page, int count)
+{
+	return __page_cache_add_speculative(page, count);
 }
 
 #ifdef CONFIG_NUMA
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 4/6] mm/gup: track gup-pinned pages
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
                   ` (2 preceding siblings ...)
  2019-02-04  5:21 ` [PATCH 3/6] mm: page_cache_add_speculative(): refactoring john.hubbard
@ 2019-02-04  5:21 ` john.hubbard
  2019-02-04 18:19   ` Matthew Wilcox
  2019-02-20 19:24   ` Ira Weiny
  2019-02-04  5:21 ` [PATCH 5/6] mm/gup: /proc/vmstat support for get/put user pages john.hubbard
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

From: John Hubbard <jhubbard@nvidia.com>

Now that all callers of get_user_pages*() have been updated to use
put_user_page(), instead of put_page(), add tracking of such
"gup-pinned" pages. The purpose of this tracking is to answer the
question "has this page been pinned by a call to get_user_pages()?"

In order to answer that, refcounting is required. get_user_pages() and all
its variants increment a reference count, and put_user_page() and its
variants decrement that reference count. If the net count is *effectively*
non-zero (see below), then the page is considered gup-pinned.

What to do in response to encountering such a page, is left to later
patchsets. There is discussion about this in [1], and in an upcoming patch
that adds:

   Documentation/vm/get_user_pages.rst

So, this patch simply adds tracking of such pages.  In order to achieve
this without using up any more bits or fields in struct page, the
page->_refcount field is overloaded: a gup pin increments the refcount by a
large chunk (1024), instead of by 1.  This provides a way to say, "either this
page is gup-pinned, or you have a *lot* of references on it, and thus this
is a false positive".  False positives are generally OK, as long as they
are expected to be rare: taking action for a page that looks gup-pinned,
but is not, is not going to be a problem.  It's false negatives (failing
to detect a gup-pinned page) that would be a problem, and those won't
happen with this approach.
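
To make the arithmetic concrete, here is a worked example using the value
chosen in this patch (GUP_PIN_COUNTING_BIAS == 1024); the starting
refcount is made up, purely for illustration:

    page_ref_count(page) == 3      /* ordinary references            */
    get_user_pages()               /* +1024 --> refcount == 1027     */
    page_gup_pinned()              /* 1027 > 1024 --> reports true   */
    put_user_page()                /* -1024 --> refcount == 3        */
    page_gup_pinned()              /* 3 > 1024    --> reports false  */

A page that merely has more than 1024 ordinary references will also report
as pinned: that is the acceptable false-positive case described above. A
genuinely gup-pinned page can never report as unpinned, so there are no
false negatives.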

This takes advantage of two distinct, pre-existing lock-free algorithms:

a) get_user_pages() and things such as page_mkclean() both operate on
   page table entries, without taking locks. This relies partly on just
   letting the CPU hardware (which of course also never takes locks to
   use its own page tables) take page faults if something has changed.

b) page_cache_get_speculative(), called by get_user_pages(), is a way to
   avoid having pages get freed out from under get_user_pages() or other
   things that want to pin pages.

As a result, performance is expected to be unchanged in any noticeable
way by this patch.

In order to test this, a lot of get_user_pages() call sites have to be
converted over to use put_user_page(), but I did that locally, and here
is an fio run on an NVMe drive, using this for the fio configuration file:

    [reader]
    direct=1
    ioengine=libaio
    blocksize=4096
    size=1g
    numjobs=1
    rw=read
    iodepth=64

reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B,
        (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.3
Starting 1 process
Jobs: 1 (f=1)
reader: (groupid=0, jobs=1): err= 0: pid=7011: Sun Feb  3 20:36:51 2019
   read: IOPS=190k, BW=741MiB/s (778MB/s)(1024MiB/1381msec)
    slat (nsec): min=2716, max=57255, avg=4048.14, stdev=1084.10
    clat (usec): min=20, max=12485, avg=332.63, stdev=191.77
     lat (usec): min=22, max=12498, avg=336.72, stdev=192.07
    clat percentiles (usec):
     |  1.00th=[  322],  5.00th=[  322], 10.00th=[  322], 20.00th=[  326],
     | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
     | 70.00th=[  326], 80.00th=[  330], 90.00th=[  330], 95.00th=[  330],
     | 99.00th=[  478], 99.50th=[  717], 99.90th=[ 1074], 99.95th=[ 1090],
     | 99.99th=[12256]
   bw (  KiB/s): min=730152, max=776512, per=99.22%, avg=753332.00,
                 stdev=32781.47, samples=2
   iops        : min=182538, max=194128, avg=188333.00, stdev=8195.37,
                 samples=2
  lat (usec)   : 50=0.01%, 100=0.01%, 250=0.07%, 500=99.26%, 750=0.38%
  lat (usec)   : 1000=0.02%
  lat (msec)   : 2=0.24%, 20=0.02%
  cpu          : usr=15.07%, sys=84.13%, ctx=10, majf=0, minf=74
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%,
                 >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
                 >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%,
                 >=64=0.0%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=741MiB/s (778MB/s), 741MiB/s-741MiB/s (778MB/s-778MB/s),
     io=1024MiB (1074MB), run=1381-1381msec

Disk stats (read/write):
  nvme0n1: ios=216966/0, merge=0/0, ticks=6112/0, in_queue=704, util=91.34%

[1] https://lwn.net/Articles/753027/ "The trouble with get_user_pages()"

Suggested-by: Jan Kara <jack@suse.cz>
Suggested-by: Jérôme Glisse <jglisse@redhat.com>

Cc: Christian Benvenuti <benve@cisco.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Tom Talpey <tom@talpey.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm.h      | 81 +++++++++++++++++++++++++++++------------
 include/linux/pagemap.h |  5 +++
 mm/gup.c                | 60 ++++++++++++++++++++++--------
 mm/swap.c               | 21 +++++++++++
 4 files changed, 128 insertions(+), 39 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 809b7397d41e..dcb01cf0a9de 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -965,6 +965,63 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 }
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
+/*
+ * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
+ * the page's refcount so that two separate items are tracked: the original page
+ * reference count, and also a new count of how many get_user_pages() calls were
+ * made against the page. ("gup-pinned" is another term for the latter).
+ *
+ * With this scheme, get_user_pages() becomes special: such pages are marked
+ * as distinct from normal pages. As such, the new put_user_page() call (and
+ * its variants) must be used in order to release gup-pinned pages.
+ *
+ * Choice of value:
+ *
+ * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
+ * counts with respect to get_user_pages() and put_user_page() becomes simpler,
+ * due to the fact that adding an even power of two to the page refcount has
+ * the effect of using only the upper N bits, for the code that counts up using
+ * the bias value. This means that the lower bits are left for the exclusive
+ * use of the original code that increments and decrements by one (or at least,
+ * by much smaller values than the bias value).
+ *
+ * Of course, once the lower bits overflow into the upper bits (and this is
+ * OK, because subtraction recovers the original values), then visual inspection
+ * no longer suffices to directly view the separate counts. However, for normal
+ * applications that don't have huge page reference counts, this won't be an
+ * issue.
+ *
+ * This has to work on 32-bit as well as 64-bit systems. In the more constrained
+ * 32-bit systems, the 10 bit value of the bias value leaves 22 bits for the
+ * upper bits. Therefore, only about 4M calls to get_user_page() may occur for
+ * a page.
+ *
+ * Locking: the lockless algorithm described in page_cache_get_speculative()
+ * and page_cache_gup_pin_speculative() provides safe operation for
+ * get_user_pages and page_mkclean and other calls that race to set up page
+ * table entries.
+ */
+#define GUP_PIN_COUNTING_BIAS (1UL << 10)
+
+int get_gup_pin_page(struct page *page);
+
+void put_user_page(struct page *page);
+void put_user_pages_dirty(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages(struct page **pages, unsigned long npages);
+
+/**
+ * page_gup_pinned() - report if a page is gup-pinned (pinned by a call to
+ *			get_user_pages).
+ * @page:	pointer to page to be queried.
+ * @Returns:	True, if it is likely that the page has been "gup-pinned".
+ *		False, if the page is definitely not gup-pinned.
+ */
+static inline bool page_gup_pinned(struct page *page)
+{
+	return (page_ref_count(page)) > GUP_PIN_COUNTING_BIAS;
+}
+
 static inline void get_page(struct page *page)
 {
 	page = compound_head(page);
@@ -993,30 +1050,6 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
-/**
- * put_user_page() - release a gup-pinned page
- * @page:            pointer to page to be released
- *
- * Pages that were pinned via get_user_pages*() must be released via
- * either put_user_page(), or one of the put_user_pages*() routines
- * below. This is so that eventually, pages that are pinned via
- * get_user_pages*() can be separately tracked and uniquely handled. In
- * particular, interactions with RDMA and filesystems need special
- * handling.
- *
- * put_user_page() and put_page() are not interchangeable, despite this early
- * implementation that makes them look the same. put_user_page() calls must
- * be perfectly matched up with get_user_page() calls.
- */
-static inline void put_user_page(struct page *page)
-{
-	put_page(page);
-}
-
-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
-void put_user_pages(struct page **pages, unsigned long npages);
-
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 5c8a9b59cbdc..5f5b72ba595f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -209,6 +209,11 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 	return __page_cache_add_speculative(page, count);
 }
 
+static inline int page_cache_gup_pin_speculative(struct page *page)
+{
+	return __page_cache_add_speculative(page, GUP_PIN_COUNTING_BIAS);
+}
+
 #ifdef CONFIG_NUMA
 extern struct page *__page_cache_alloc(gfp_t gfp);
 #else
diff --git a/mm/gup.c b/mm/gup.c
index 05acd7e2eb22..3291da342f9c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -25,6 +25,26 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+/**
+ * get_gup_pin_page() - mark a page as being used by get_user_pages().
+ * @page:	pointer to page to be marked
+ * @Returns:	0 for success, -EOVERFLOW if the page refcount would have
+ *		overflowed.
+ *
+ */
+int get_gup_pin_page(struct page *page)
+{
+	page = compound_head(page);
+
+	if (page_ref_count(page) >= (UINT_MAX - GUP_PIN_COUNTING_BIAS)) {
+		WARN_ONCE(1, "get_user_pages pin count overflowed");
+		return -EOVERFLOW;
+	}
+
+	page_ref_add(page, GUP_PIN_COUNTING_BIAS);
+	return 0;
+}
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags)
 {
@@ -157,8 +177,14 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		goto retry;
 	}
 
-	if (flags & FOLL_GET)
-		get_page(page);
+	if (flags & FOLL_GET) {
+		int ret = get_gup_pin_page(page);
+
+		if (ret) {
+			page = ERR_PTR(ret);
+			goto out;
+		}
+	}
 	if (flags & FOLL_TOUCH) {
 		if ((flags & FOLL_WRITE) &&
 		    !pte_dirty(pte) && !PageDirty(page))
@@ -497,7 +523,10 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 		if (is_device_public_page(*page))
 			goto unmap;
 	}
-	get_page(*page);
+
+	ret = get_gup_pin_page(*page);
+	if (ret)
+		goto unmap;
 out:
 	ret = 0;
 unmap:
@@ -1429,11 +1458,11 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		page = pte_page(pte);
 		head = compound_head(page);
 
-		if (!page_cache_get_speculative(head))
+		if (!page_cache_gup_pin_speculative(head))
 			goto pte_unmap;
 
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-			put_page(head);
+			put_user_page(head);
 			goto pte_unmap;
 		}
 
@@ -1488,7 +1517,11 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 		}
 		SetPageReferenced(page);
 		pages[*nr] = page;
-		get_page(page);
+		if (get_gup_pin_page(page)) {
+			undo_dev_pagemap(nr, nr_start, pages);
+			return 0;
+		}
+
 		(*nr)++;
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
@@ -1569,15 +1602,14 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	} while (addr += PAGE_SIZE, addr != end);
 
 	head = compound_head(pmd_page(orig));
-	if (!page_cache_add_speculative(head, refs)) {
+	if (!page_cache_gup_pin_speculative(head)) {
 		*nr -= refs;
 		return 0;
 	}
 
 	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
 		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_user_page(head);
 		return 0;
 	}
 
@@ -1607,15 +1639,14 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	} while (addr += PAGE_SIZE, addr != end);
 
 	head = compound_head(pud_page(orig));
-	if (!page_cache_add_speculative(head, refs)) {
+	if (!page_cache_gup_pin_speculative(head)) {
 		*nr -= refs;
 		return 0;
 	}
 
 	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
 		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_user_page(head);
 		return 0;
 	}
 
@@ -1644,15 +1675,14 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	} while (addr += PAGE_SIZE, addr != end);
 
 	head = compound_head(pgd_page(orig));
-	if (!page_cache_add_speculative(head, refs)) {
+	if (!page_cache_gup_pin_speculative(head)) {
 		*nr -= refs;
 		return 0;
 	}
 
 	if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) {
 		*nr -= refs;
-		while (refs--)
-			put_page(head);
+		put_user_page(head);
 		return 0;
 	}
 
diff --git a/mm/swap.c b/mm/swap.c
index 7c42ca45bb89..39b0ddd35933 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -133,6 +133,27 @@ void put_pages_list(struct list_head *pages)
 }
 EXPORT_SYMBOL(put_pages_list);
 
+/**
+ * put_user_page() - release a gup-pinned page
+ * @page:            pointer to page to be released
+ *
+ * Pages that were pinned via get_user_pages*() must be released via
+ * either put_user_page(), or one of the put_user_pages*() routines
+ * below. This is so that eventually, pages that are pinned via
+ * get_user_pages*() can be separately tracked and uniquely handled. In
+ * particular, interactions with RDMA and filesystems need special
+ * handling.
+ */
+void put_user_page(struct page *page)
+{
+	page = compound_head(page);
+
+	VM_BUG_ON_PAGE(page_ref_count(page) < GUP_PIN_COUNTING_BIAS, page);
+
+	page_ref_sub(page, GUP_PIN_COUNTING_BIAS);
+}
+EXPORT_SYMBOL(put_user_page);
+
 typedef int (*set_dirty_func)(struct page *page);
 
 static void __put_user_pages_dirty(struct page **pages,
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 5/6] mm/gup: /proc/vmstat support for get/put user pages
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
                   ` (3 preceding siblings ...)
  2019-02-04  5:21 ` [PATCH 4/6] mm/gup: track gup-pinned pages john.hubbard
@ 2019-02-04  5:21 ` john.hubbard
  2019-02-04  5:21 ` [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS john.hubbard
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

From: John Hubbard <jhubbard@nvidia.com>

Add five new /proc/vmstat items, to provide some
visibility into what get_user_pages() and put_user_page()
are doing.

After booting and running fio (https://github.com/axboe/fio)
a few times on an NVMe device, as a way to get lots of
get_user_pages_fast() calls, the counters look like this:

$ cat /proc/vmstat |grep gup
nr_gup_slow_pages_requested 21319
nr_gup_fast_pages_requested 11533792
nr_gup_fast_page_backoffs 0
nr_gup_page_count_overflows 0
nr_gup_pages_returned 11555104

Interpretation of the above:
   Total gup requests (slow + fast): 11555111
   Total put_user_page calls:        11555104

This shows 7 more calls to get_user_pages() than to
put_user_page(). That may, or may not, represent a
problem worth investigating.

Normally, those last two numbers should be equal, but a
couple of things may cause them to differ:

1) Inherent race condition in reading /proc/vmstat values.

2) Bugs at any of the get_user_pages*() call sites. Those
sites need to match get_user_pages() and put_user_page() calls.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mmzone.h |  5 +++++
 mm/gup.c               | 20 ++++++++++++++++++++
 mm/swap.c              |  1 +
 mm/vmstat.c            |  5 +++++
 4 files changed, 31 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 842f9189537b..f20c14958a2b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -183,6 +183,11 @@ enum node_stat_item {
 	NR_DIRTIED,		/* page dirtyings since bootup */
 	NR_WRITTEN,		/* page writings since bootup */
 	NR_KERNEL_MISC_RECLAIMABLE,	/* reclaimable non-slab kernel pages */
+	NR_GUP_SLOW_PAGES_REQUESTED,	/* via: get_user_pages() */
+	NR_GUP_FAST_PAGES_REQUESTED,	/* via: get_user_pages_fast() */
+	NR_GUP_FAST_PAGE_BACKOFFS,	/* gup_fast() lost to page_mkclean() */
+	NR_GUP_PAGE_COUNT_OVERFLOWS,	/* gup count overflowed: gup() failed */
+	NR_GUP_PAGES_RETURNED,		/* via: put_user_page() */
 	NR_VM_NODE_STAT_ITEMS
 };
 
diff --git a/mm/gup.c b/mm/gup.c
index 3291da342f9c..848ee7899831 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -37,6 +37,8 @@ int get_gup_pin_page(struct page *page)
 	page = compound_head(page);
 
 	if (page_ref_count(page) >= (UINT_MAX - GUP_PIN_COUNTING_BIAS)) {
+		mod_node_page_state(page_pgdat(page),
+				    NR_GUP_PAGE_COUNT_OVERFLOWS, 1);
 		WARN_ONCE(1, "get_user_pages pin count overflowed");
 		return -EOVERFLOW;
 	}
@@ -184,6 +186,8 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 			page = ERR_PTR(ret);
 			goto out;
 		}
+		mod_node_page_state(page_pgdat(page),
+				    NR_GUP_SLOW_PAGES_REQUESTED, 1);
 	}
 	if (flags & FOLL_TOUCH) {
 		if ((flags & FOLL_WRITE) &&
@@ -527,6 +531,8 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 	ret = get_gup_pin_page(*page);
 	if (ret)
 		goto unmap;
+
+	mod_node_page_state(page_pgdat(*page), NR_GUP_SLOW_PAGES_REQUESTED, 1);
 out:
 	ret = 0;
 unmap:
@@ -1461,7 +1467,12 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		if (!page_cache_gup_pin_speculative(head))
 			goto pte_unmap;
 
+		mod_node_page_state(page_pgdat(head),
+				    NR_GUP_FAST_PAGES_REQUESTED, 1);
+
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+			mod_node_page_state(page_pgdat(head),
+					    NR_GUP_FAST_PAGE_BACKOFFS, 1);
 			put_user_page(head);
 			goto pte_unmap;
 		}
@@ -1522,6 +1533,9 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 			return 0;
 		}
 
+		mod_node_page_state(page_pgdat(page),
+				    NR_GUP_FAST_PAGES_REQUESTED, 1);
+
 		(*nr)++;
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
@@ -1607,6 +1621,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		return 0;
 	}
 
+	mod_node_page_state(page_pgdat(head), NR_GUP_FAST_PAGES_REQUESTED, 1);
+
 	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
 		*nr -= refs;
 		put_user_page(head);
@@ -1644,6 +1660,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 		return 0;
 	}
 
+	mod_node_page_state(page_pgdat(head), NR_GUP_FAST_PAGES_REQUESTED, 1);
+
 	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
 		*nr -= refs;
 		put_user_page(head);
@@ -1680,6 +1698,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 		return 0;
 	}
 
+	mod_node_page_state(page_pgdat(head), NR_GUP_FAST_PAGES_REQUESTED, 1);
+
 	if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) {
 		*nr -= refs;
 		put_user_page(head);
diff --git a/mm/swap.c b/mm/swap.c
index 39b0ddd35933..49e192f242d4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -150,6 +150,7 @@ void put_user_page(struct page *page)
 
 	VM_BUG_ON_PAGE(page_ref_count(page) < GUP_PIN_COUNTING_BIAS, page);
 
+	mod_node_page_state(page_pgdat(page), NR_GUP_PAGES_RETURNED, 1);
 	page_ref_sub(page, GUP_PIN_COUNTING_BIAS);
 }
 EXPORT_SYMBOL(put_user_page);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 83b30edc2f7f..18a1a4a2dd29 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1164,6 +1164,11 @@ const char * const vmstat_text[] = {
 	"nr_dirtied",
 	"nr_written",
 	"nr_kernel_misc_reclaimable",
+	"nr_gup_slow_pages_requested",
+	"nr_gup_fast_pages_requested",
+	"nr_gup_fast_page_backoffs",
+	"nr_gup_page_count_overflows",
+	"nr_gup_pages_returned",
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
                   ` (4 preceding siblings ...)
  2019-02-04  5:21 ` [PATCH 5/6] mm/gup: /proc/vmstat support for get/put user pages john.hubbard
@ 2019-02-04  5:21 ` john.hubbard
  2019-02-05 16:40   ` Mike Rapoport
  2019-02-04 16:08 ` [PATCH 0/6] RFC v2: mm: gup/dma tracking Christopher Lameter
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: john.hubbard @ 2019-02-04  5:21 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

From: John Hubbard <jhubbard@nvidia.com>

1. Added Documentation/vm/get_user_pages.rst

2. Added a GET_USER_PAGES entry in MAINTAINERS

Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 Documentation/vm/get_user_pages.rst | 197 ++++++++++++++++++++++++++++
 Documentation/vm/index.rst          |   1 +
 MAINTAINERS                         |  10 ++
 3 files changed, 208 insertions(+)
 create mode 100644 Documentation/vm/get_user_pages.rst

diff --git a/Documentation/vm/get_user_pages.rst b/Documentation/vm/get_user_pages.rst
new file mode 100644
index 000000000000..8598f20afb09
--- /dev/null
+++ b/Documentation/vm/get_user_pages.rst
@@ -0,0 +1,197 @@
+.. _get_user_pages:
+
+==============
+get_user_pages
+==============
+
+.. contents:: :local:
+
+Overview
+========
+
+Some kernel components (file systems, device drivers) need to access
+memory that is specified via process virtual address. For a long time, the
+API to achieve that was get_user_pages ("GUP") and its variations. However,
+GUP has critical limitations that have been overlooked; in particular, GUP
+does not interact correctly with filesystems in all situations. That means
+that file-backed memory + GUP is a recipe for potential problems, some of
+which have already occurred in the field.
+
+GUP was first introduced for Direct IO (O_DIRECT), allowing filesystem code
+to get the struct page behind a virtual address and to let storage hardware
+perform a direct copy to or from that page. This is a short-lived access
+pattern, and as such, the window for a concurrent writeback of a GUP'd page
+was small enough that there were not (we think) any reported problems.
+Also, userspace was expected to understand and accept that Direct IO was
+not synchronized with memory-mapped access to that data, nor with any
+process address space changes such as munmap(), mremap(), etc.
+
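+For orientation, a short-lived GUP user of that kind looks roughly like the
+following sketch (the function name, buffer size, and device-programming step
+are placeholders, and error handling is elided):
+
+::
+
+    /* Illustrative only: pin user pages briefly for a device copy. */
+    static int example_short_lived_gup(unsigned long user_addr)
+    {
+            struct page *pages[16];
+            int i, nr;
+
+            /* Pin the pages so the device can DMA to or from them. */
+            nr = get_user_pages_fast(user_addr, 16, 1 /* write */, pages);
+            if (nr <= 0)
+                    return nr;
+
+            /* ... program the hardware to copy to or from these pages ... */
+
+            /* The historical release pattern (see the sections below): */
+            for (i = 0; i < nr; i++) {
+                    set_page_dirty(pages[i]);
+                    put_page(pages[i]);
+            }
+            return 0;
+    }
+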
+Over the years, more GUP uses have appeared (virtualization, device
+drivers, RDMA) that can keep the pages they get via GUP for a long period
+of time (seconds, minutes, hours, days, ...). This long-term pinning makes
+an underlying design problem more obvious.
+
+In fact, there are a number of key problems inherent to GUP:
+
+Interactions with file systems
+==============================
+
+File systems expect to be able to write back data, both to reclaim pages,
+and for data integrity. Allowing other hardware (NICs, GPUs, etc) to gain
+write access to the file memory pages means that such hardware can dirty the
+pages, without the filesystem being aware. This can, in some cases
+(depending on filesystem, filesystem options, block device, block device
+options, and other variables), lead to data corruption, and also to kernel
+bugs of the form:
+
+::
+
+    kernel BUG at /build/linux-fQ94TU/linux-4.4.0/fs/ext4/inode.c:1899!
+    backtrace:
+
+	ext4_writepage
+	__writepage
+	write_cache_pages
+	ext4_writepages
+	do_writepages
+	__writeback_single_inode
+	writeback_sb_inodes
+	__writeback_inodes_wb
+	wb_writeback
+	wb_workfn
+	process_one_work
+	worker_thread
+	kthread
+	ret_from_fork
+
+...which is due to the file system asserting that there are still buffer
+heads attached:
+
+::
+
+ /* If we *know* page->private refers to buffer_heads */
+ #define page_buffers(page)                                      \
+        ({                                                      \
+                BUG_ON(!PagePrivate(page));                     \
+                ((struct buffer_head *)page_private(page));     \
+        })
+ #define page_has_buffers(page)  PagePrivate(page)
+
+Dave Chinner's description of this is very clear:
+
+    "The fundamental issue is that ->page_mkwrite must be called on every
+    write access to a clean file backed page, not just the first one.
+    How long the GUP reference lasts is irrelevant, if the page is clean
+    and you need to dirty it, you must call ->page_mkwrite before it is
+    marked writeable and dirtied. Every. Time."
+
+This is just one symptom of the larger design problem: filesystems do not
+actually support get_user_pages() being called on their pages, and letting
+hardware write directly to those pages--even though that pattern has been
+going on since about 2005 or so.
+
+Long term GUP
+=============
+
+Long term GUP is an issue when FOLL_WRITE is specified to GUP (so, a
+writeable mapping is created), and the pages are file-backed. That can lead
+to filesystem corruption. What happens is that when a file-backed page is
+being written back, it is first mapped read-only in all of the CPU page
+tables; the file system then assumes that nobody can write to the page, and
+that the page content is therefore stable. Unfortunately, the GUP callers
+generally do not monitor changes to the CPU page tables; they instead
+assume that the following pattern is safe (it's not):
+
+::
+
+    get_user_pages()
+
+    Hardware then keeps a reference to those pages for some potentially
+    long time. During this time, hardware may write to the pages. Because
+    "hardware" here means "devices that are not a CPU", this activity
+    occurs without any interaction with the kernel's file system code.
+
+    for each page:
+	set_page_dirty()
+	put_page()
+
+In fact, the GUP documentation even recommends that pattern.
+
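+For comparison, the call site conversion that this series asks for looks
+roughly like the following sketch (the helper name is illustrative; some call
+sites may instead fold the dirtying step into one of the put_user_page*()
+variants):
+
+::
+
+    /* Illustrative helper: release pages obtained via get_user_pages*(). */
+    static void example_put_user_pages(struct page **pages,
+                                       unsigned long npages, bool make_dirty)
+    {
+            unsigned long i;
+
+            for (i = 0; i < npages; i++) {
+                    if (make_dirty)
+                            set_page_dirty_lock(pages[i]);
+                    /* Pairs with get_user_pages*(); replaces put_page(). */
+                    put_user_page(pages[i]);
+            }
+    }
+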
+Anyway, the file system assumes that the page is stable (nothing is writing
+to the page), and that is a problem: stable page content is necessary for
+many filesystem actions during writeback, such as checksum, encryption,
+RAID striping, etc. Furthermore, filesystem features like COW (copy on
+write) or snapshot also rely on being able to use a new page as the memory
+for that range inside the file.
+
+Corruption during write back is clearly possible here. To solve that, one
+idea is to identify pages that have active GUP, so that we can use a bounce
+page to write stable data to the filesystem. The filesystem would work
+on the bounce page, while any active GUP users might still write to the
+original page. This would avoid the stable page violation problem, but note
+that it is only part of the overall solution, because other problems
+remain.
+
+Other filesystem features that need to replace the page with a new one can
+be inhibited for pages that are GUP-pinned. This will, however, alter and
+limit some of those filesystem features. The only fix for that would be to
+require that GUP users monitor and respond to CPU page table updates. Subsystems
+such as ODP and HMM do this, for example. This aspect of the problem is
+still under discussion.
+
+Direct IO
+=========
+
+Direct IO can cause corruption, if userspace does Direct-IO that writes to
+a range of virtual addresses that are mmap'd to a file.  The pages written
+to are file-backed pages that can be under write back, while the Direct IO
+is taking place.  Here, Direct IO races with write back: it calls
+GUP before page_mkclean() has replaced the CPU pte with a read-only entry.
+The race window is pretty small, which is probably why years have gone by
+before we noticed this problem: Direct IO is generally very quick, and
+tends to finish up before the filesystem gets around to doing anything with
+the page contents.  However, it's still a real problem.  The solution is
+to never let GUP return pages that are under write back, but instead,
+force GUP to take a write fault on those pages.  That way, GUP will
+properly synchronize with the active write back.  This does not change the
+required GUP behavior, it just avoids that race.
+
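+One way to express that in the gup_fast() path is sketched below (purely
+illustrative: it mirrors the existing bail-out structure in gup_pte_range(),
+and the "write" flag name is assumed from context):
+
+::
+
+    /*
+     * Sketch: do not hand out a page that is under writeback from the
+     * fast path; back off so the caller falls through to the slower,
+     * faulting path instead.
+     */
+    if (write && PageWriteback(head)) {
+            put_user_page(head);
+            goto pte_unmap;
+    }
+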
+Measurement and visibility
+==========================
+
+Several /proc/vmstat items provide visibility into what get_user_pages()
+and put_user_page() are doing.
+
+After booting and running fio (https://github.com/axboe/fio)
+a few times on an NVMe device, as a way to get lots of
+get_user_pages_fast() calls, the counters look like this:
+
+::
+
+ $ cat /proc/vmstat | grep gup
+ nr_gup_slow_pages_requested 21319
+ nr_gup_fast_pages_requested 11533792
+ nr_gup_fast_page_backoffs 0
+ nr_gup_page_count_overflows 0
+ nr_gup_pages_returned 11555104
+
+Interpretation of the above:
+
+::
+
+ Total gup requests (slow + fast): 11555111
+ Total put_user_page calls:        11555104
+
+This shows 7 more calls to get_user_pages() than to put_user_page().
+That may, or may not, represent a problem worth investigating.
+
+Normally, those last two numbers should be equal, but a couple of things
+may cause them to differ:
+
+1. Inherent race condition in reading /proc/vmstat values.
+
+2. Bugs at any of the get_user_pages*() call sites. Each such site needs to
+   pair its get_user_pages() calls with corresponding put_user_page() calls.
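+
+Putting the interpretation above in one place, the balance check is plain
+arithmetic (a sketch, with the counter values hard-coded from the example
+output):
+
+::
+
+    /* Values copied from the example /proc/vmstat output above. */
+    unsigned long slow      = 21319;        /* nr_gup_slow_pages_requested */
+    unsigned long fast      = 11533792;     /* nr_gup_fast_pages_requested */
+    unsigned long returned  = 11555104;     /* nr_gup_pages_returned */
+
+    unsigned long requested   = slow + fast;          /* 11555111 */
+    unsigned long outstanding = requested - returned; /* 7: still pinned, or leaked */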
+
+
+
diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst
index 2b3ab3a1ccf3..433aaf1996e6 100644
--- a/Documentation/vm/index.rst
+++ b/Documentation/vm/index.rst
@@ -32,6 +32,7 @@ descriptions of data structures and algorithms.
    balance
    cleancache
    frontswap
+   get_user_pages
    highmem
    hmm
    hwpoison
diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80..1e8f91b8ce4f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6384,6 +6384,16 @@ M:	Frank Haverkamp <haver@linux.ibm.com>
 S:	Supported
 F:	drivers/misc/genwqe/
 
+GET_USER_PAGES
+M:	Dan Williams <dan.j.williams@intel.com>
+M:	Jan Kara <jack@suse.cz>
+M:	Jérôme Glisse <jglisse@redhat.com>
+M:	John Hubbard <jhubbard@nvidia.com>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	mm/gup.c
+F:	Documentation/vm/get_user_pages.rst
+
 GET_MAINTAINER SCRIPT
 M:	Joe Perches <joe@perches.com>
 S:	Maintained
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
                   ` (5 preceding siblings ...)
  2019-02-04  5:21 ` [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS john.hubbard
@ 2019-02-04 16:08 ` Christopher Lameter
  2019-02-04 16:12   ` Christoph Hellwig
  2019-02-04 17:14 ` Christopher Lameter
  2019-02-05  1:41 ` Tom Talpey
  8 siblings, 1 reply; 26+ messages in thread
From: Christopher Lameter @ 2019-02-04 16:08 UTC (permalink / raw)
  To: john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

On Sun, 3 Feb 2019, john.hubbard@gmail.com wrote:

> Some kernel components (file systems, device drivers) need to access
> memory that is specified via process virtual address. For a long time, the
> API to achieve that was get_user_pages ("GUP") and its variations. However,
> GUP has critical limitations that have been overlooked; in particular, GUP
> does not interact correctly with filesystems in all situations. That means
> that file-backed memory + GUP is a recipe for potential problems, some of
> which have already occurred in the field.

It may be worth noting a couple of times in this text that this was
designed for anonymous memory and that such use is/was ok. We are talking
about a use case here using mmapped access with a regular filesystem that
was not initially intended. Mmapping from the hugepages filesystem
is special in that it is not a device that is actually writing things
back.

Any use with a filesystem that actually writes data back to a medium
is something that is broken.



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 16:08 ` [PATCH 0/6] RFC v2: mm: gup/dma tracking Christopher Lameter
@ 2019-02-04 16:12   ` Christoph Hellwig
  2019-02-04 16:59     ` Christopher Lameter
  0 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2019-02-04 16:12 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Christoph Hellwig, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey,
	LKML, linux-fsdevel, John Hubbard

On Mon, Feb 04, 2019 at 04:08:02PM +0000, Christopher Lameter wrote:
> It may be worth noting a couple of times in this text that this was
> designed for anonymous memory and that such use is/was ok. We are talking
> about a use case here using mmapped access with a regular filesystem that
> was not initially intended. The mmapping of from the hugepages filesystem
> is special in that it is not a device that is actually writing things
> back.
> 
> Any use with a filesystem that actually writes data back to a medium
> is something that is broken.

Saying it was not intended seems rather odd, as it was supported
since day 0 and people made use of it.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 16:12   ` Christoph Hellwig
@ 2019-02-04 16:59     ` Christopher Lameter
  0 siblings, 0 replies; 26+ messages in thread
From: Christopher Lameter @ 2019-02-04 16:59 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

On Mon, 4 Feb 2019, Christoph Hellwig wrote:

> On Mon, Feb 04, 2019 at 04:08:02PM +0000, Christopher Lameter wrote:
> > It may be worth noting a couple of times in this text that this was
> > designed for anonymous memory and that such use is/was ok. We are talking
> > about a use case here using mmapped access with a regular filesystem that
> > was not initially intended. The mmapping of from the hugepages filesystem
> > is special in that it is not a device that is actually writing things
> > back.
> >
> > Any use with a filesystem that actually writes data back to a medium
> > is something that is broken.
>
> Saying it was not intended seems rather odd, as it was supported
> since day 0 and people made use of it.

Well until last year I never thought there was a problem because I
considered it separate from regular filesystem I/O.





^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
                   ` (6 preceding siblings ...)
  2019-02-04 16:08 ` [PATCH 0/6] RFC v2: mm: gup/dma tracking Christopher Lameter
@ 2019-02-04 17:14 ` Christopher Lameter
  2019-02-04 17:51   ` Jason Gunthorpe
  2019-02-04 23:35   ` Ira Weiny
  2019-02-05  1:41 ` Tom Talpey
  8 siblings, 2 replies; 26+ messages in thread
From: Christopher Lameter @ 2019-02-04 17:14 UTC (permalink / raw)
  To: john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

Frankly I still think this does not solve anything.

Concurrent write access from two sources to a single page is simply wrong.
You cannot make this right by allowing long term RDMA pins in a filesystem
and thus the filesystem can never update part of its files on disk.

Can we just disable RDMA to regular filesystems? Regular filesystems
should have full control of the write back and dirty status of their
pages.

Special filesystems that do not actually do write back (like hugetlbfs),
mmaped raw device files and anonymous allocations are fine.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 17:14 ` Christopher Lameter
@ 2019-02-04 17:51   ` Jason Gunthorpe
  2019-02-04 18:21     ` Christopher Lameter
  2019-02-04 23:35   ` Ira Weiny
  1 sibling, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2019-02-04 17:51 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Christoph Hellwig, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

On Mon, Feb 04, 2019 at 05:14:19PM +0000, Christopher Lameter wrote:
> Frankly I still think this does not solve anything.
> 
> Concurrent write access from two sources to a single page is simply wrong.
> You cannot make this right by allowing long term RDMA pins in a filesystem
> and thus the filesystem can never update part of its files on disk.

Fundamentally this patch series is fixing O_DIRECT to not crash the
kernel in extreme cases.. RDMA has the same problem, but it is much
easier to hit.

I think questions related to RDMA are somewhat separate, and maybe it
should be blocked, or not, but either way O_DIRECT has to be fixed.

Jason

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/6] mm/gup: track gup-pinned pages
  2019-02-04  5:21 ` [PATCH 4/6] mm/gup: track gup-pinned pages john.hubbard
@ 2019-02-04 18:19   ` Matthew Wilcox
  2019-02-04 19:11     ` John Hubbard
  2019-02-20 19:24   ` Ira Weiny
  1 sibling, 1 reply; 26+ messages in thread
From: Matthew Wilcox @ 2019-02-04 18:19 UTC (permalink / raw)
  To: john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Christopher Lameter, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

On Sun, Feb 03, 2019 at 09:21:33PM -0800, john.hubbard@gmail.com wrote:
> +/*
> + * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
> + * the page's refcount so that two separate items are tracked: the original page
> + * reference count, and also a new count of how many get_user_pages() calls were
> + * made against the page. ("gup-pinned" is another term for the latter).
> + *
> + * With this scheme, get_user_pages() becomes special: such pages are marked
> + * as distinct from normal pages. As such, the new put_user_page() call (and
> + * its variants) must be used in order to release gup-pinned pages.
> + *
> + * Choice of value:
> + *
> + * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
> + * counts with respect to get_user_pages() and put_user_page() becomes simpler,
> + * due to the fact that adding an even power of two to the page refcount has
> + * the effect of using only the upper N bits, for the code that counts up using
> + * the bias value. This means that the lower bits are left for the exclusive
> + * use of the original code that increments and decrements by one (or at least,
> + * by much smaller values than the bias value).
> + *
> + * Of course, once the lower bits overflow into the upper bits (and this is
> + * OK, because subtraction recovers the original values), then visual inspection
> + * no longer suffices to directly view the separate counts. However, for normal
> + * applications that don't have huge page reference counts, this won't be an
> + * issue.
> + *
> + * This has to work on 32-bit as well as 64-bit systems. In the more constrained
> + * 32-bit systems, the 10 bit value of the bias value leaves 22 bits for the
> + * upper bits. Therefore, only about 4M calls to get_user_page() may occur for
> + * a page.

The refcount is 32-bit on both 64 and 32 bit systems.  This limit
exists on both sizes of system.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 17:51   ` Jason Gunthorpe
@ 2019-02-04 18:21     ` Christopher Lameter
  2019-02-04 19:09       ` Matthew Wilcox
  0 siblings, 1 reply; 26+ messages in thread
From: Christopher Lameter @ 2019-02-04 18:21 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Christoph Hellwig, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

On Mon, 4 Feb 2019, Jason Gunthorpe wrote:

> On Mon, Feb 04, 2019 at 05:14:19PM +0000, Christopher Lameter wrote:
> > Frankly I still think this does not solve anything.
> >
> > Concurrent write access from two sources to a single page is simply wrong.
> > You cannot make this right by allowing long term RDMA pins in a filesystem
> > and thus the filesystem can never update part of its files on disk.
>
> Fundamentally this patch series is fixing O_DIRECT to not crash the
> kernel in extreme cases.. RDMA has the same problem, but it is much
> easier to hit.

O_DIRECT is the same issue. O_DIRECT addresses always have been in
anonymous memory or special file systems.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 18:21     ` Christopher Lameter
@ 2019-02-04 19:09       ` Matthew Wilcox
  0 siblings, 0 replies; 26+ messages in thread
From: Matthew Wilcox @ 2019-02-04 19:09 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Jason Gunthorpe, john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Christoph Hellwig, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jerome Glisse, Michal Hocko, Mike Rapoport, Mike Marciniszyn,
	Ralph Campbell, Tom Talpey, LKML, linux-fsdevel, John Hubbard

On Mon, Feb 04, 2019 at 06:21:39PM +0000, Christopher Lameter wrote:
> On Mon, 4 Feb 2019, Jason Gunthorpe wrote:
> 
> > On Mon, Feb 04, 2019 at 05:14:19PM +0000, Christopher Lameter wrote:
> > > Frankly I still think this does not solve anything.
> > >
> > > Concurrent write access from two sources to a single page is simply wrong.
> > > You cannot make this right by allowing long term RDMA pins in a filesystem
> > > and thus the filesystem can never update part of its files on disk.
> >
> > Fundamentally this patch series is fixing O_DIRECT to not crash the
> > kernel in extreme cases.. RDMA has the same problem, but it is much
> > easier to hit.
> 
> O_DIRECT is the same issue. O_DIRECT addresses always have been in
> anonymous memory or special file systems.

That's never been a constraint that's existed.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/6] mm/gup: track gup-pinned pages
  2019-02-04 18:19   ` Matthew Wilcox
@ 2019-02-04 19:11     ` John Hubbard
  0 siblings, 0 replies; 26+ messages in thread
From: John Hubbard @ 2019-02-04 19:11 UTC (permalink / raw)
  To: Matthew Wilcox, john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Christopher Lameter, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel

On 2/4/19 10:19 AM, Matthew Wilcox wrote:
> On Sun, Feb 03, 2019 at 09:21:33PM -0800, john.hubbard@gmail.com wrote:
>> +/*
>> + * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
>> + * the page's refcount so that two separate items are tracked: the original page
>> + * reference count, and also a new count of how many get_user_pages() calls were
>> + * made against the page. ("gup-pinned" is another term for the latter).
>> + *
>> + * With this scheme, get_user_pages() becomes special: such pages are marked
>> + * as distinct from normal pages. As such, the new put_user_page() call (and
>> + * its variants) must be used in order to release gup-pinned pages.
>> + *
>> + * Choice of value:
>> + *
>> + * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
>> + * counts with respect to get_user_pages() and put_user_page() becomes simpler,
>> + * due to the fact that adding an even power of two to the page refcount has
>> + * the effect of using only the upper N bits, for the code that counts up using
>> + * the bias value. This means that the lower bits are left for the exclusive
>> + * use of the original code that increments and decrements by one (or at least,
>> + * by much smaller values than the bias value).
>> + *
>> + * Of course, once the lower bits overflow into the upper bits (and this is
>> + * OK, because subtraction recovers the original values), then visual inspection
>> + * no longer suffices to directly view the separate counts. However, for normal
>> + * applications that don't have huge page reference counts, this won't be an
>> + * issue.
>> + *
>> + * This has to work on 32-bit as well as 64-bit systems. In the more constrained
>> + * 32-bit systems, the 10 bit value of the bias value leaves 22 bits for the
>> + * upper bits. Therefore, only about 4M calls to get_user_page() may occur for
>> + * a page.
> 
> The refcount is 32-bit on both 64 and 32 bit systems.  This limit
> exists on both sizes of system.
> 

Oh right, I'll just delete that last paragraph, then. Thanks for catching that.


thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 17:14 ` Christopher Lameter
  2019-02-04 17:51   ` Jason Gunthorpe
@ 2019-02-04 23:35   ` Ira Weiny
  2019-02-05 19:30     ` Christopher Lameter
  1 sibling, 1 reply; 26+ messages in thread
From: Ira Weiny @ 2019-02-04 23:35 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Christoph Hellwig, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey,
	LKML, linux-fsdevel, John Hubbard

On Mon, Feb 04, 2019 at 05:14:19PM +0000, Christopher Lameter wrote:
> Frankly I still think this does not solve anything.
> 
> Concurrent write access from two sources to a single page is simply wrong.
> You cannot make this right by allowing long term RDMA pins in a filesystem
> and thus the filesystem can never update part of its files on disk.
> 
> Can we just disable RDMA to regular filesystems? Regular filesystems
> should have full control of the write back and dirty status of their
> pages.

That may be a solution to the corruption/crashes but it is not a solution which
users want to see.  RDMA directly to file systems (specifically DAX) is a use
case we have seen customers ask for.

I think this is the correct path toward supporting this use case.

Ira


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
                   ` (7 preceding siblings ...)
  2019-02-04 17:14 ` Christopher Lameter
@ 2019-02-05  1:41 ` Tom Talpey
  2019-02-05  8:22   ` John Hubbard
  8 siblings, 1 reply; 26+ messages in thread
From: Tom Talpey @ 2019-02-05  1:41 UTC (permalink / raw)
  To: john.hubbard, Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, LKML, linux-fsdevel,
	John Hubbard

On 2/4/2019 12:21 AM, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
> 
> 
> Performance: here is an fio run on an NVMe drive, using this for the fio
> configuration file:
> 
>      [reader]
>      direct=1
>      ioengine=libaio
>      blocksize=4096
>      size=1g
>      numjobs=1
>      rw=read
>      iodepth=64
> 
> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
> fio-3.3
> Starting 1 process
> Jobs: 1 (f=1)
> reader: (groupid=0, jobs=1): err= 0: pid=7011: Sun Feb  3 20:36:51 2019
>     read: IOPS=190k, BW=741MiB/s (778MB/s)(1024MiB/1381msec)
>      slat (nsec): min=2716, max=57255, avg=4048.14, stdev=1084.10
>      clat (usec): min=20, max=12485, avg=332.63, stdev=191.77
>       lat (usec): min=22, max=12498, avg=336.72, stdev=192.07
>      clat percentiles (usec):
>       |  1.00th=[  322],  5.00th=[  322], 10.00th=[  322], 20.00th=[  326],
>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
>       | 70.00th=[  326], 80.00th=[  330], 90.00th=[  330], 95.00th=[  330],
>       | 99.00th=[  478], 99.50th=[  717], 99.90th=[ 1074], 99.95th=[ 1090],
>       | 99.99th=[12256]

These latencies are concerning. The best results we saw at the end of
November (previous approach) were MUCH flatter. These really start
spiking at three 9's, and are sky-high at four 9's. The "stdev" values
for clat and lat are about 10 times the previous. There's some kind
of serious queuing contention here, that wasn't there in November.

>     bw (  KiB/s): min=730152, max=776512, per=99.22%, avg=753332.00, stdev=32781.47, samples=2
>     iops        : min=182538, max=194128, avg=188333.00, stdev=8195.37, samples=2
>    lat (usec)   : 50=0.01%, 100=0.01%, 250=0.07%, 500=99.26%, 750=0.38%
>    lat (usec)   : 1000=0.02%
>    lat (msec)   : 2=0.24%, 20=0.02%
>    cpu          : usr=15.07%, sys=84.13%, ctx=10, majf=0, minf=74

System CPU 84% is roughly double the November results of 45%. Ouch.

Did you re-run the baseline on the new unpatched base kernel and can
we see the before/after?

Tom.

>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
>       issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>     READ: bw=741MiB/s (778MB/s), 741MiB/s-741MiB/s (778MB/s-778MB/s), io=1024MiB (1074MB), run=1381-1381msec
> 
> Disk stats (read/write):
>    nvme0n1: ios=216966/0, merge=0/0, ticks=6112/0, in_queue=704, util=91.34%

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-05  1:41 ` Tom Talpey
@ 2019-02-05  8:22   ` John Hubbard
  2019-02-05 13:38     ` Tom Talpey
  0 siblings, 1 reply; 26+ messages in thread
From: John Hubbard @ 2019-02-05  8:22 UTC (permalink / raw)
  To: Tom Talpey, john.hubbard, Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, LKML, linux-fsdevel

On 2/4/19 5:41 PM, Tom Talpey wrote:
> On 2/4/2019 12:21 AM, john.hubbard@gmail.com wrote:
>> From: John Hubbard <jhubbard@nvidia.com>
>>
>>
>> Performance: here is an fio run on an NVMe drive, using this for the fio
>> configuration file:
>>
>>      [reader]
>>      direct=1
>>      ioengine=libaio
>>      blocksize=4096
>>      size=1g
>>      numjobs=1
>>      rw=read
>>      iodepth=64
>>
>> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
>> 4096B-4096B, ioengine=libaio, iodepth=64
>> fio-3.3
>> Starting 1 process
>> Jobs: 1 (f=1)
>> reader: (groupid=0, jobs=1): err= 0: pid=7011: Sun Feb  3 20:36:51 2019
>>     read: IOPS=190k, BW=741MiB/s (778MB/s)(1024MiB/1381msec)
>>      slat (nsec): min=2716, max=57255, avg=4048.14, stdev=1084.10
>>      clat (usec): min=20, max=12485, avg=332.63, stdev=191.77
>>       lat (usec): min=22, max=12498, avg=336.72, stdev=192.07
>>      clat percentiles (usec):
>>       |  1.00th=[  322],  5.00th=[  322], 10.00th=[  322], 20.00th=[  
>> 326],
>>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  
>> 326],
>>       | 70.00th=[  326], 80.00th=[  330], 90.00th=[  330], 95.00th=[  
>> 330],
>>       | 99.00th=[  478], 99.50th=[  717], 99.90th=[ 1074], 99.95th=[ 
>> 1090],
>>       | 99.99th=[12256]
> 
> These latencies are concerning. The best results we saw at the end of
> November (previous approach) were MUCH flatter. These really start
> spiking at three 9's, and are sky-high at four 9's. The "stdev" values
> for clat and lat are about 10 times the previous. There's some kind
> of serious queuing contention here, that wasn't there in November.

Hi Tom,

I think this latency problem is also there in the baseline kernel, but...

> 
>>     bw (  KiB/s): min=730152, max=776512, per=99.22%, avg=753332.00, 
>> stdev=32781.47, samples=2
>>     iops        : min=182538, max=194128, avg=188333.00, 
>> stdev=8195.37, samples=2
>>    lat (usec)   : 50=0.01%, 100=0.01%, 250=0.07%, 500=99.26%, 750=0.38%
>>    lat (usec)   : 1000=0.02%
>>    lat (msec)   : 2=0.24%, 20=0.02%
>>    cpu          : usr=15.07%, sys=84.13%, ctx=10, majf=0, minf=74
> 
> System CPU 84% is roughly double the November results of 45%. Ouch.

That's my fault. First of all, I had a few extra, supposedly minor debug
settings in the .config, which I'm removing now--I'm doing a proper run
with the original .config file from November, below. Second, I'm not
sure I controlled the run carefully enough.

> 
> Did you re-run the baseline on the new unpatched base kernel and can
> we see the before/after?

Doing that now, I see:

-- No significant perf difference between before and after, but
-- Still high clat in the 99.99th

=======================================================================
Before: using commit 8834f5600cf3 ("Linux 5.0-rc5")
===================================================
reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=64
fio-3.3
Starting 1 process
Jobs: 1 (f=1)
reader: (groupid=0, jobs=1): err= 0: pid=1829: Tue Feb  5 00:08:08 2019
    read: IOPS=193k, BW=753MiB/s (790MB/s)(1024MiB/1359msec)
     slat (nsec): min=1269, max=40309, avg=1493.66, stdev=534.83
     clat (usec): min=127, max=12249, avg=329.83, stdev=184.92
      lat (usec): min=129, max=12256, avg=331.35, stdev=185.06
     clat percentiles (usec):
      |  1.00th=[  326],  5.00th=[  326], 10.00th=[  326], 20.00th=[  326],
      | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
      | 70.00th=[  326], 80.00th=[  326], 90.00th=[  326], 95.00th=[  326],
      | 99.00th=[  347], 99.50th=[  519], 99.90th=[  529], 99.95th=[  537],
      | 99.99th=[12125]
    bw (  KiB/s): min=755032, max=781472, per=99.57%, avg=768252.00, 
stdev=18695.90, samples=2
    iops        : min=188758, max=195368, avg=192063.00, stdev=4673.98, 
samples=2
   lat (usec)   : 250=0.08%, 500=99.18%, 750=0.72%
   lat (msec)   : 20=0.02%
   cpu          : usr=12.30%, sys=46.83%, ctx=253554, majf=0, minf=74
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, 
 >=64=100.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
 >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
 >=64=0.0%
      issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
    READ: bw=753MiB/s (790MB/s), 753MiB/s-753MiB/s (790MB/s-790MB/s), 
io=1024MiB (1074MB), run=1359-1359msec

Disk stats (read/write):
   nvme0n1: ios=221246/0, merge=0/0, ticks=71556/0, in_queue=704, 
util=91.35%

=======================================================================
After:
=======================================================================
reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=64
fio-3.3
Starting 1 process
Jobs: 1 (f=1)
reader: (groupid=0, jobs=1): err= 0: pid=1803: Mon Feb  4 23:58:07 2019
    read: IOPS=193k, BW=753MiB/s (790MB/s)(1024MiB/1359msec)
     slat (nsec): min=1276, max=41900, avg=1505.36, stdev=565.26
     clat (usec): min=177, max=12186, avg=329.88, stdev=184.03
      lat (usec): min=178, max=12192, avg=331.42, stdev=184.16
     clat percentiles (usec):
      |  1.00th=[  326],  5.00th=[  326], 10.00th=[  326], 20.00th=[  326],
      | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
      | 70.00th=[  326], 80.00th=[  326], 90.00th=[  326], 95.00th=[  326],
      | 99.00th=[  359], 99.50th=[  498], 99.90th=[  537], 99.95th=[  627],
      | 99.99th=[12125]
    bw (  KiB/s): min=754656, max=781504, per=99.55%, avg=768080.00, 
stdev=18984.40, samples=2
    iops        : min=188664, max=195378, avg=192021.00, stdev=4747.51, 
samples=2
   lat (usec)   : 250=0.12%, 500=99.40%, 750=0.46%
   lat (msec)   : 20=0.02%
   cpu          : usr=12.44%, sys=47.05%, ctx=252127, majf=0, minf=73
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, 
 >=64=100.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
 >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
 >=64=0.0%
      issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
    READ: bw=753MiB/s (790MB/s), 753MiB/s-753MiB/s (790MB/s-790MB/s), 
io=1024MiB (1074MB), run=1359-1359msec

Disk stats (read/write):
   nvme0n1: ios=221203/0, merge=0/0, ticks=71291/0, in_queue=704, 
util=91.19%

How's this look to you?

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-05  8:22   ` John Hubbard
@ 2019-02-05 13:38     ` Tom Talpey
  2019-02-05 21:55       ` John Hubbard
  0 siblings, 1 reply; 26+ messages in thread
From: Tom Talpey @ 2019-02-05 13:38 UTC (permalink / raw)
  To: John Hubbard, john.hubbard, Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, LKML, linux-fsdevel

On 2/5/2019 3:22 AM, John Hubbard wrote:
> On 2/4/19 5:41 PM, Tom Talpey wrote:
>> On 2/4/2019 12:21 AM, john.hubbard@gmail.com wrote:
>>> From: John Hubbard <jhubbard@nvidia.com>
>>>
>>>
>>> Performance: here is an fio run on an NVMe drive, using this for the fio
>>> configuration file:
>>>
>>>      [reader]
>>>      direct=1
>>>      ioengine=libaio
>>>      blocksize=4096
>>>      size=1g
>>>      numjobs=1
>>>      rw=read
>>>      iodepth=64
>>>
>>> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
>>> 4096B-4096B, ioengine=libaio, iodepth=64
>>> fio-3.3
>>> Starting 1 process
>>> Jobs: 1 (f=1)
>>> reader: (groupid=0, jobs=1): err= 0: pid=7011: Sun Feb  3 20:36:51 2019
>>>     read: IOPS=190k, BW=741MiB/s (778MB/s)(1024MiB/1381msec)
>>>      slat (nsec): min=2716, max=57255, avg=4048.14, stdev=1084.10
>>>      clat (usec): min=20, max=12485, avg=332.63, stdev=191.77
>>>       lat (usec): min=22, max=12498, avg=336.72, stdev=192.07
>>>      clat percentiles (usec):
>>>       |  1.00th=[  322],  5.00th=[  322], 10.00th=[  322], 20.00th=[ 
>>> 326],
>>>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[ 
>>> 326],
>>>       | 70.00th=[  326], 80.00th=[  330], 90.00th=[  330], 95.00th=[ 
>>> 330],
>>>       | 99.00th=[  478], 99.50th=[  717], 99.90th=[ 1074], 99.95th=[ 
>>> 1090],
>>>       | 99.99th=[12256]
>>
>> These latencies are concerning. The best results we saw at the end of
>> November (previous approach) were MUCH flatter. These really start
>> spiking at three 9's, and are sky-high at four 9's. The "stdev" values
>> for clat and lat are about 10 times the previous. There's some kind
>> of serious queuing contention here, that wasn't there in November.
> 
> Hi Tom,
> 
> I think this latency problem is also there in the baseline kernel, but...
> 
>>
>>>     bw (  KiB/s): min=730152, max=776512, per=99.22%, avg=753332.00, 
>>> stdev=32781.47, samples=2
>>>     iops        : min=182538, max=194128, avg=188333.00, 
>>> stdev=8195.37, samples=2
>>>    lat (usec)   : 50=0.01%, 100=0.01%, 250=0.07%, 500=99.26%, 750=0.38%
>>>    lat (usec)   : 1000=0.02%
>>>    lat (msec)   : 2=0.24%, 20=0.02%
>>>    cpu          : usr=15.07%, sys=84.13%, ctx=10, majf=0, minf=74
>>
>> System CPU 84% is roughly double the November results of 45%. Ouch.
> 
> That's my fault. First of all, I had a few extra, supposedly minor debug
> settings in the .config, which I'm removing now--I'm doing a proper run
> with the original .config file from November, below. Second, I'm not
> sure I controlled the run carefully enough.
> 
>>
>> Did you re-run the baseline on the new unpatched base kernel and can
>> we see the before/after?
> 
> Doing that now, I see:
> 
> -- No significant perf difference between before and after, but
> -- Still high clat in the 99.99th
> 
> =======================================================================
> Before: using commit 8834f5600cf3 ("Linux 5.0-rc5")
> ===================================================
> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
> 4096B-4096B, ioengine=libaio, iodepth=64
> fio-3.3
> Starting 1 process
> Jobs: 1 (f=1)
> reader: (groupid=0, jobs=1): err= 0: pid=1829: Tue Feb  5 00:08:08 2019
>     read: IOPS=193k, BW=753MiB/s (790MB/s)(1024MiB/1359msec)
>      slat (nsec): min=1269, max=40309, avg=1493.66, stdev=534.83
>      clat (usec): min=127, max=12249, avg=329.83, stdev=184.92
>       lat (usec): min=129, max=12256, avg=331.35, stdev=185.06
>      clat percentiles (usec):
>       |  1.00th=[  326],  5.00th=[  326], 10.00th=[  326], 20.00th=[  326],
>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
>       | 70.00th=[  326], 80.00th=[  326], 90.00th=[  326], 95.00th=[  326],
>       | 99.00th=[  347], 99.50th=[  519], 99.90th=[  529], 99.95th=[  537],
>       | 99.99th=[12125]
>     bw (  KiB/s): min=755032, max=781472, per=99.57%, avg=768252.00, 
> stdev=18695.90, samples=2
>     iops        : min=188758, max=195368, avg=192063.00, stdev=4673.98, 
> samples=2
>    lat (usec)   : 250=0.08%, 500=99.18%, 750=0.72%
>    lat (msec)   : 20=0.02%
>    cpu          : usr=12.30%, sys=46.83%, ctx=253554, majf=0, minf=74
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, 
>  >=64=100.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>  >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>  >=64=0.0%
>       issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>     READ: bw=753MiB/s (790MB/s), 753MiB/s-753MiB/s (790MB/s-790MB/s), 
> io=1024MiB (1074MB), run=1359-1359msec
> 
> Disk stats (read/write):
>    nvme0n1: ios=221246/0, merge=0/0, ticks=71556/0, in_queue=704, 
> util=91.35%
> 
> =======================================================================
> After:
> =======================================================================
> reader: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
> 4096B-4096B, ioengine=libaio, iodepth=64
> fio-3.3
> Starting 1 process
> Jobs: 1 (f=1)
> reader: (groupid=0, jobs=1): err= 0: pid=1803: Mon Feb  4 23:58:07 2019
>     read: IOPS=193k, BW=753MiB/s (790MB/s)(1024MiB/1359msec)
>      slat (nsec): min=1276, max=41900, avg=1505.36, stdev=565.26
>      clat (usec): min=177, max=12186, avg=329.88, stdev=184.03
>       lat (usec): min=178, max=12192, avg=331.42, stdev=184.16
>      clat percentiles (usec):
>       |  1.00th=[  326],  5.00th=[  326], 10.00th=[  326], 20.00th=[  326],
>       | 30.00th=[  326], 40.00th=[  326], 50.00th=[  326], 60.00th=[  326],
>       | 70.00th=[  326], 80.00th=[  326], 90.00th=[  326], 95.00th=[  326],
>       | 99.00th=[  359], 99.50th=[  498], 99.90th=[  537], 99.95th=[  627],
>       | 99.99th=[12125]
>     bw (  KiB/s): min=754656, max=781504, per=99.55%, avg=768080.00, 
> stdev=18984.40, samples=2
>     iops        : min=188664, max=195378, avg=192021.00, stdev=4747.51, 
> samples=2
>    lat (usec)   : 250=0.12%, 500=99.40%, 750=0.46%
>    lat (msec)   : 20=0.02%
>    cpu          : usr=12.44%, sys=47.05%, ctx=252127, majf=0, minf=73
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, 
>  >=64=100.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>  >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>  >=64=0.0%
>       issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>     READ: bw=753MiB/s (790MB/s), 753MiB/s-753MiB/s (790MB/s-790MB/s), 
> io=1024MiB (1074MB), run=1359-1359msec
> 
> Disk stats (read/write):
>    nvme0n1: ios=221203/0, merge=0/0, ticks=71291/0, in_queue=704, 
> util=91.19%
> 
> How's this look to you?

Ok, I'm satisfied the four-9's latency spike is not in your code. :-)
Results look good relative to baseline. Thanks for doublechecking!

Tom.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS
  2019-02-04  5:21 ` [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS john.hubbard
@ 2019-02-05 16:40   ` Mike Rapoport
  2019-02-05 21:53     ` John Hubbard
  0 siblings, 1 reply; 26+ messages in thread
From: Mike Rapoport @ 2019-02-05 16:40 UTC (permalink / raw)
  To: john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Christopher Lameter, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel, John Hubbard

Hi John,

On Sun, Feb 03, 2019 at 09:21:35PM -0800, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
> 
> 1. Added Documentation/vm/get_user_pages.rst
> 
> 2. Added a GET_USER_PAGES entry in MAINTAINERS
> 
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Jan Kara <jack@suse.cz>
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  Documentation/vm/get_user_pages.rst | 197 ++++++++++++++++++++++++++++
>  Documentation/vm/index.rst          |   1 +
>  MAINTAINERS                         |  10 ++
>  3 files changed, 208 insertions(+)
>  create mode 100644 Documentation/vm/get_user_pages.rst
> 
> diff --git a/Documentation/vm/get_user_pages.rst b/Documentation/vm/get_user_pages.rst
> new file mode 100644
> index 000000000000..8598f20afb09
> --- /dev/null
> +++ b/Documentation/vm/get_user_pages.rst

It's great to see docs coming alone with the patches! :)

Yet, I'm a bit confused. The documentation here mostly describes the
existing problems that this patchset aims to solve, but the text here does
not describe the proposed solution.

> @@ -0,0 +1,197 @@
> +.. _get_user_pages:
> +
> +==============
> +get_user_pages
> +==============
> +
> +.. contents:: :local:
> +
> +Overview
> +========
> +
> +Some kernel components (file systems, device drivers) need to access
> +memory that is specified via process virtual address. For a long time, the
> +API to achieve that was get_user_pages ("GUP") and its variations. However,
> +GUP has critical limitations that have been overlooked; in particular, GUP
> +does not interact correctly with filesystems in all situations. That means
> +that file-backed memory + GUP is a recipe for potential problems, some of
> +which have already occurred in the field.
> +
> +GUP was first introduced for Direct IO (O_DIRECT), allowing filesystem code
> +to get the struct page behind a virtual address and to let storage hardware
> +perform a direct copy to or from that page. This is a short-lived access
> +pattern, and as such, the window for a concurrent writeback of GUP'd page
> +was small enough that there were not (we think) any reported problems.
> +Also, userspace was expected to understand and accept that Direct IO was
> +not synchronized with memory-mapped access to that data, nor with any
> +process address space changes such as munmap(), mremap(), etc.
> +
> +Over the years, more GUP uses have appeared (virtualization, device
> +drivers, RDMA) that can keep the pages they get via GUP for a long period
> +of time (seconds, minutes, hours, days, ...). This long-term pinning makes
> +an underlying design problem more obvious.
> +
> +In fact, there are a number of key problems inherent to GUP:
> +
> +Interactions with file systems
> +==============================
> +
> +File systems expect to be able to write back data, both to reclaim pages,
> +and for data integrity. Allowing other hardware (NICs, GPUs, etc) to gain
> +write access to the file memory pages means that such hardware can dirty the
> +pages, without the filesystem being aware. This can, in some cases
> +(depending on filesystem, filesystem options, block device, block device
> +options, and other variables), lead to data corruption, and also to kernel
> +bugs of the form:
> +
> +::
> +
> +    kernel BUG at /build/linux-fQ94TU/linux-4.4.0/fs/ext4/inode.c:1899!
> +    backtrace:
> +
> +	ext4_writepage
> +	__writepage
> +	write_cache_pages
> +	ext4_writepages
> +	do_writepages
> +	__writeback_single_inode
> +	writeback_sb_inodes
> +	__writeback_inodes_wb
> +	wb_writeback
> +	wb_workfn
> +	process_one_work
> +	worker_thread
> +	kthread
> +	ret_from_fork
> +
> +...which is due to the file system asserting that there are still buffer
> +heads attached:
> +
> +::
> +
> + /* If we *know* page->private refers to buffer_heads */
> + #define page_buffers(page)                                      \
> +        ({                                                      \
> +                BUG_ON(!PagePrivate(page));                     \
> +                ((struct buffer_head *)page_private(page));     \
> +        })
> + #define page_has_buffers(page)  PagePrivate(page)
> +
> +Dave Chinner's description of this is very clear:
> +
> +    "The fundamental issue is that ->page_mkwrite must be called on every
> +    write access to a clean file backed page, not just the first one.
> +    How long the GUP reference lasts is irrelevant, if the page is clean
> +    and you need to dirty it, you must call ->page_mkwrite before it is
> +    marked writeable and dirtied. Every. Time."
> +
> +This is just one symptom of the larger design problem: filesystems do not
> +actually support get_user_pages() being called on their pages, and letting
> +hardware write directly to those pages--even though that pattern has been
> +going on since about 2005 or so.
> +
> +Long term GUP
> +=============
> +
> +Long term GUP is an issue when FOLL_WRITE is specified to GUP (so, a
> +writeable mapping is created), and the pages are file-backed. That can lead
> +to filesystem corruption. What happens is that when a file-backed page is
> +being written back, it is first mapped read-only in all of the CPU page
> +tables; the file system then assumes that nobody can write to the page, and
> +that the page content is therefore stable. Unfortunately, the GUP callers
> +generally do not monitor changes to the CPU pages tables; they instead
> +assume that the following pattern is safe (it's not):
> +
> +::
> +
> +    get_user_pages()
> +
> +    Hardware then keeps a reference to those pages for some potentially
> +    long time. During this time, hardware may write to the pages. Because
> +    "hardware" here means "devices that are not a CPU", this activity
> +    occurs without any interaction with the kernel's file system code.
> +
> +    for each page:
> +	set_page_dirty()
> +	put_page()
> +
> +In fact, the GUP documentation even recommends that pattern.
> +
> +Anyway, the file system assumes that the page is stable (nothing is writing
> +to the page), and that is a problem: stable page content is necessary for
> +many filesystem actions during writeback, such as checksum, encryption,
> +RAID striping, etc. Furthermore, filesystem features like COW (copy on
> +write) or snapshot also rely on being able to use a new page for as memory
> +for that memory range inside the file.
> +
> +Corruption during write back is clearly possible here. To solve that, one
> +idea is to identify pages that have active GUP, so that we can use a bounce
> +page to write stable data to the filesystem. The filesystem would work
> +on the bounce page, while any of the active GUP might write to the
> +original page. This would avoid the stable page violation problem, but note
> +that it is only part of the overall solution, because other problems
> +remain.
> +
> +Other filesystem features that need to replace the page with a new one can
> +be inhibited for pages that are GUP-pinned. This will, however, alter and
> +limit some of those filesystem features. The only fix for that would be to
> +require GUP users monitor and respond to CPU page table updates. Subsystems
> +such as ODP and HMM do this, for example. This aspect of the problem is
> +still under discussion.
> +
> +Direct IO
> +=========
> +
> +Direct IO can cause corruption, if userspace does Direct-IO that writes to
> +a range of virtual addresses that are mmap'd to a file.  The pages written
> +to are file-backed pages that can be under write back, while the Direct IO
> +is taking place.  Here, Direct IO need races with a write back: it calls
> +GUP before page_mkclean() has replaced the CPU pte with a read-only entry.
> +The race window is pretty small, which is probably why years have gone by
> +before we noticed this problem: Direct IO is generally very quick, and
> +tends to finish up before the filesystem gets around to do anything with
> +the page contents.  However, it's still a real problem.  The solution is
> +to never let GUP return pages that are under write back, but instead,
> +force GUP to take a write fault on those pages.  That way, GUP will
> +properly synchronize with the active write back.  This does not change the
> +required GUP behavior, it just avoids that race.
> +
> +Measurement and visibility
> +==========================
> +
> +There are several /proc/vmstat items, in order to provide some visibility
> +into what get_user_pages() and put_user_page() are doing.
> +
> +After booting and running fio (https://github.com/axboe/fio)
> +a few times on an NVMe device, as a way to get lots of
> +get_user_pages_fast() calls, the counters look like this:
> +
> +::
> +
> + $ cat /proc/vmstat | grep gup
> + nr_gup_slow_pages_requested 21319
> + nr_gup_fast_pages_requested 11533792
> + nr_gup_fast_page_backoffs 0
> + nr_gup_page_count_overflows 0
> + nr_gup_pages_returned 11555104
> +
> +Interpretation of the above:
> +
> +::
> +
> + Total gup requests (slow + fast): 11555111
> + Total put_user_page calls:        11555104
> +
> +This shows 7 more pages acquired via get_user_pages*() than released via
> +put_user_page(). That may or may not represent a problem worth investigating.
> +
> +Normally, those last two numbers should be equal, but a couple of things
> +may cause them to differ:
> +
> +1. Inherent race condition in reading /proc/vmstat values.
> +
> +2. Bugs at any of the get_user_pages*() call sites. Each such site must
> +pair its get_user_pages*() calls with matching put_user_page*() calls.
> +
> +
> +
> diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst
> index 2b3ab3a1ccf3..433aaf1996e6 100644
> --- a/Documentation/vm/index.rst
> +++ b/Documentation/vm/index.rst
> @@ -32,6 +32,7 @@ descriptions of data structures and algorithms.
>     balance
>     cleancache
>     frontswap
> +   get_user_pages
>     highmem
>     hmm
>     hwpoison
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8c68de3cfd80..1e8f91b8ce4f 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6384,6 +6384,16 @@ M:	Frank Haverkamp <haver@linux.ibm.com>
>  S:	Supported
>  F:	drivers/misc/genwqe/
>  
> +GET_USER_PAGES
> +M:	Dan Williams <dan.j.williams@intel.com>
> +M:	Jan Kara <jack@suse.cz>
> +M:	Jérôme Glisse <jglisse@redhat.com>
> +M:	John Hubbard <jhubbard@nvidia.com>
> +L:	linux-mm@kvack.org
> +S:	Maintained
> +F:	mm/gup.c
> +F:	Documentation/vm/get_user_pages.rst
> +
>  GET_MAINTAINER SCRIPT
>  M:	Joe Perches <joe@perches.com>
>  S:	Maintained
> -- 
> 2.20.1
> 

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-04 23:35   ` Ira Weiny
@ 2019-02-05 19:30     ` Christopher Lameter
  0 siblings, 0 replies; 26+ messages in thread
From: Christopher Lameter @ 2019-02-05 19:30 UTC (permalink / raw)
  To: Ira Weiny
  Cc: john.hubbard, Andrew Morton, linux-mm, Al Viro,
	Christian Benvenuti, Christoph Hellwig, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey,
	LKML, linux-fsdevel, John Hubbard

On Mon, 4 Feb 2019, Ira Weiny wrote:

> On Mon, Feb 04, 2019 at 05:14:19PM +0000, Christopher Lameter wrote:
> > Frankly I still think this does not solve anything.
> >
> > Concurrent write access from two sources to a single page is simply wrong.
> > You cannot make this right by allowing long term RDMA pins in a filesystem
> > and thus the filesystem can never update part of its files on disk.
> >
> > Can we just disable RDMA to regular filesystems? Regular filesystems
> > should have full control of the write back and dirty status of their
> > pages.
>
> That may be a solution to the corruption/crashes but it is not a solution which
> users want to see.  RDMA directly to file systems (specifically DAX) is a use
> case we have seen customers ask for.

DAX is a special file system that does not use writeback for the DAX
mappings. Thus it could be an exception. And the pages are already pinned.




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS
  2019-02-05 16:40   ` Mike Rapoport
@ 2019-02-05 21:53     ` John Hubbard
  0 siblings, 0 replies; 26+ messages in thread
From: John Hubbard @ 2019-02-05 21:53 UTC (permalink / raw)
  To: Mike Rapoport, john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Christopher Lameter, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
	linux-fsdevel

On 2/5/19 8:40 AM, Mike Rapoport wrote:
> Hi John,
> 
> On Sun, Feb 03, 2019 at 09:21:35PM -0800, john.hubbard@gmail.com wrote:
>> From: John Hubbard <jhubbard@nvidia.com>
>>
>> 1. Added Documentation/vm/get_user_pages.rst
>>
>> 2. Added a GET_USER_PAGES entry in MAINTAINERS
>>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Cc: Jan Kara <jack@suse.cz>
>> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
>> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
>> ---
>>  Documentation/vm/get_user_pages.rst | 197 ++++++++++++++++++++++++++++
>>  Documentation/vm/index.rst          |   1 +
>>  MAINTAINERS                         |  10 ++
>>  3 files changed, 208 insertions(+)
>>  create mode 100644 Documentation/vm/get_user_pages.rst
>>
>> diff --git a/Documentation/vm/get_user_pages.rst b/Documentation/vm/get_user_pages.rst
>> new file mode 100644
>> index 000000000000..8598f20afb09
>> --- /dev/null
>> +++ b/Documentation/vm/get_user_pages.rst
> 
> It's great to see docs coming along with the patches! :)
> 
> Yet, I'm a bit confused. The documentation here mostly describes the
> existing problems that this patchset aims to solve, but the text does
> not describe the proposed solution.
> 

Yes, that's true. I'll take another pass at it with that in mind.

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 0/6] RFC v2: mm: gup/dma tracking
  2019-02-05 13:38     ` Tom Talpey
@ 2019-02-05 21:55       ` John Hubbard
  0 siblings, 0 replies; 26+ messages in thread
From: John Hubbard @ 2019-02-05 21:55 UTC (permalink / raw)
  To: Tom Talpey, john.hubbard, Andrew Morton, linux-mm
  Cc: Al Viro, Christian Benvenuti, Christoph Hellwig,
	Christopher Lameter, Dan Williams, Dave Chinner,
	Dennis Dalessandro, Doug Ledford, Jan Kara, Jason Gunthorpe,
	Jerome Glisse, Matthew Wilcox, Michal Hocko, Mike Rapoport,
	Mike Marciniszyn, Ralph Campbell, LKML, linux-fsdevel

On 2/5/19 5:38 AM, Tom Talpey wrote:
> 
> Ok, I'm satisfied the four-9's latency spike is not in your code. :-)
> Results look good relative to baseline. Thanks for doublechecking!
> 
> Tom.


Great, in that case, I'll put the new before-and-after results in the next 
version. Appreciate your help here, as always!

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/6] mm/gup: track gup-pinned pages
  2019-02-04  5:21 ` [PATCH 4/6] mm/gup: track gup-pinned pages john.hubbard
  2019-02-04 18:19   ` Matthew Wilcox
@ 2019-02-20 19:24   ` Ira Weiny
  2019-02-20 20:22     ` John Hubbard
  1 sibling, 1 reply; 26+ messages in thread
From: Ira Weiny @ 2019-02-20 19:24 UTC (permalink / raw)
  To: john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Christopher Lameter, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey,
	LKML, linux-fsdevel, John Hubbard

On Sun, Feb 03, 2019 at 09:21:33PM -0800, john.hubbard@gmail.com wrote:
> From: John Hubbard <jhubbard@nvidia.com>
> 

[snip]

>  
> +/*
> + * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
> + * the page's refcount so that two separate items are tracked: the original page
> + * reference count, and also a new count of how many get_user_pages() calls were
> + * made against the page. ("gup-pinned" is another term for the latter).
> + *
> + * With this scheme, get_user_pages() becomes special: such pages are marked
> + * as distinct from normal pages. As such, the new put_user_page() call (and
> + * its variants) must be used in order to release gup-pinned pages.
> + *
> + * Choice of value:
> + *
> + * By making GUP_PIN_COUNTING_BIAS a power of two, debugging of page reference
> + * counts with respect to get_user_pages() and put_user_page() becomes simpler,
> + * due to the fact that adding an even power of two to the page refcount has
> + * the effect of using only the upper N bits, for the code that counts up using
> + * the bias value. This means that the lower bits are left for the exclusive
> + * use of the original code that increments and decrements by one (or at least,
> + * by much smaller values than the bias value).
> + *
> + * Of course, once the lower bits overflow into the upper bits (and this is
> + * OK, because subtraction recovers the original values), then visual inspection
> + * no longer suffices to directly view the separate counts. However, for normal
> + * applications that don't have huge page reference counts, this won't be an
> + * issue.
> + *
> + * This has to work on 32-bit as well as 64-bit systems. In the more constrained
> + * 32-bit systems, the 10 bit value of the bias value leaves 22 bits for the
> + * upper bits. Therefore, only about 4M calls to get_user_page() may occur for
> + * a page.
> + *
> + * Locking: the lockless algorithm described in page_cache_gup_pin_speculative()
> + * and page_cache_gup_pin_speculative() provides safe operation for

Did you mean:

page_cache_gup_pin_speculative and __ page_cache_get_speculative __?

Just found this while looking at your branch.

Sorry,
Ira

> + * get_user_pages and page_mkclean and other calls that race to set up page
> + * table entries.
> + */
> +#define GUP_PIN_COUNTING_BIAS (1UL << 10)
> +
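
(Not part of the patch -- just to make the arithmetic above concrete. As long
as the lower bits have not overflowed into the upper bits, the two counts can
be recovered from the refcount with hypothetical helpers like these:)

	static inline int page_gup_pin_count(struct page *page)
	{
		/* Upper bits: number of outstanding gup pins. */
		return page_ref_count(page) / GUP_PIN_COUNTING_BIAS;
	}

	static inline int page_normal_ref_count(struct page *page)
	{
		/* Lower bits: ordinary get_page() references. */
		return page_ref_count(page) % GUP_PIN_COUNTING_BIAS;
	}
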
> +int get_gup_pin_page(struct page *page);
> +
> +void put_user_page(struct page *page);
> +void put_user_pages_dirty(struct page **pages, unsigned long npages);
> +void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
> +void put_user_pages(struct page **pages, unsigned long npages);
> +
> +/**
> + * page_gup_pinned() - report if a page is gup-pinned (pinned by a call to
> + *			get_user_pages).
> + * @page:	pointer to page to be queried.
> + * @Returns:	True, if it is likely that the page has been "gup-pinned".
> + *		False, if the page is definitely not gup-pinned.
> + */
> +static inline bool page_gup_pinned(struct page *page)
> +{
> +	return (page_ref_count(page)) > GUP_PIN_COUNTING_BIAS;
> +}
> +
>  static inline void get_page(struct page *page)
>  {
>  	page = compound_head(page);
> @@ -993,30 +1050,6 @@ static inline void put_page(struct page *page)
>  		__put_page(page);
>  }
>  
> -/**
> - * put_user_page() - release a gup-pinned page
> - * @page:            pointer to page to be released
> - *
> - * Pages that were pinned via get_user_pages*() must be released via
> - * either put_user_page(), or one of the put_user_pages*() routines
> - * below. This is so that eventually, pages that are pinned via
> - * get_user_pages*() can be separately tracked and uniquely handled. In
> - * particular, interactions with RDMA and filesystems need special
> - * handling.
> - *
> - * put_user_page() and put_page() are not interchangeable, despite this early
> - * implementation that makes them look the same. put_user_page() calls must
> - * be perfectly matched up with get_user_page() calls.
> - */
> -static inline void put_user_page(struct page *page)
> -{
> -	put_page(page);
> -}
> -
> -void put_user_pages_dirty(struct page **pages, unsigned long npages);
> -void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
> -void put_user_pages(struct page **pages, unsigned long npages);
> -
>  #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
>  #define SECTION_IN_PAGE_FLAGS
>  #endif
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 5c8a9b59cbdc..5f5b72ba595f 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -209,6 +209,11 @@ static inline int page_cache_add_speculative(struct page *page, int count)
>  	return __page_cache_add_speculative(page, count);
>  }
>  
> +static inline int page_cache_gup_pin_speculative(struct page *page)
> +{
> +	return __page_cache_add_speculative(page, GUP_PIN_COUNTING_BIAS);
> +}
> +
>  #ifdef CONFIG_NUMA
>  extern struct page *__page_cache_alloc(gfp_t gfp);
>  #else
> diff --git a/mm/gup.c b/mm/gup.c
> index 05acd7e2eb22..3291da342f9c 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -25,6 +25,26 @@ struct follow_page_context {
>  	unsigned int page_mask;
>  };
>  
> +/**
> + * get_gup_pin_page() - mark a page as being used by get_user_pages().
> + * @page:	pointer to page to be marked
> + * @Returns:	0 for success, -EOVERFLOW if the page refcount would have
> + *		overflowed.
> + *
> + */
> +int get_gup_pin_page(struct page *page)
> +{
> +	page = compound_head(page);
> +
> +	if (page_ref_count(page) >= (UINT_MAX - GUP_PIN_COUNTING_BIAS)) {
> +		WARN_ONCE(1, "get_user_pages pin count overflowed");
> +		return -EOVERFLOW;
> +	}
> +
> +	page_ref_add(page, GUP_PIN_COUNTING_BIAS);
> +	return 0;
> +}
> +
>  static struct page *no_page_table(struct vm_area_struct *vma,
>  		unsigned int flags)
>  {
> @@ -157,8 +177,14 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  		goto retry;
>  	}
>  
> -	if (flags & FOLL_GET)
> -		get_page(page);
> +	if (flags & FOLL_GET) {
> +		int ret = get_gup_pin_page(page);
> +
> +		if (ret) {
> +			page = ERR_PTR(ret);
> +			goto out;
> +		}
> +	}
>  	if (flags & FOLL_TOUCH) {
>  		if ((flags & FOLL_WRITE) &&
>  		    !pte_dirty(pte) && !PageDirty(page))
> @@ -497,7 +523,10 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
>  		if (is_device_public_page(*page))
>  			goto unmap;
>  	}
> -	get_page(*page);
> +
> +	ret = get_gup_pin_page(*page);
> +	if (ret)
> +		goto unmap;
>  out:
>  	ret = 0;
>  unmap:
> @@ -1429,11 +1458,11 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>  		page = pte_page(pte);
>  		head = compound_head(page);
>  
> -		if (!page_cache_get_speculative(head))
> +		if (!page_cache_gup_pin_speculative(head))
>  			goto pte_unmap;
>  
>  		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> -			put_page(head);
> +			put_user_page(head);
>  			goto pte_unmap;
>  		}
>  
> @@ -1488,7 +1517,11 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
>  		}
>  		SetPageReferenced(page);
>  		pages[*nr] = page;
> -		get_page(page);
> +		if (get_gup_pin_page(page)) {
> +			undo_dev_pagemap(nr, nr_start, pages);
> +			return 0;
> +		}
> +
>  		(*nr)++;
>  		pfn++;
>  	} while (addr += PAGE_SIZE, addr != end);
> @@ -1569,15 +1602,14 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
>  	} while (addr += PAGE_SIZE, addr != end);
>  
>  	head = compound_head(pmd_page(orig));
> -	if (!page_cache_add_speculative(head, refs)) {
> +	if (!page_cache_gup_pin_speculative(head)) {
>  		*nr -= refs;
>  		return 0;
>  	}
>  
>  	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
>  		*nr -= refs;
> -		while (refs--)
> -			put_page(head);
> +		put_user_page(head);
>  		return 0;
>  	}
>  
> @@ -1607,15 +1639,14 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
>  	} while (addr += PAGE_SIZE, addr != end);
>  
>  	head = compound_head(pud_page(orig));
> -	if (!page_cache_add_speculative(head, refs)) {
> +	if (!page_cache_gup_pin_speculative(head)) {
>  		*nr -= refs;
>  		return 0;
>  	}
>  
>  	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
>  		*nr -= refs;
> -		while (refs--)
> -			put_page(head);
> +		put_user_page(head);
>  		return 0;
>  	}
>  
> @@ -1644,15 +1675,14 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
>  	} while (addr += PAGE_SIZE, addr != end);
>  
>  	head = compound_head(pgd_page(orig));
> -	if (!page_cache_add_speculative(head, refs)) {
> +	if (!page_cache_gup_pin_speculative(head)) {
>  		*nr -= refs;
>  		return 0;
>  	}
>  
>  	if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) {
>  		*nr -= refs;
> -		while (refs--)
> -			put_page(head);
> +		put_user_page(head);
>  		return 0;
>  	}
>  
> diff --git a/mm/swap.c b/mm/swap.c
> index 7c42ca45bb89..39b0ddd35933 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -133,6 +133,27 @@ void put_pages_list(struct list_head *pages)
>  }
>  EXPORT_SYMBOL(put_pages_list);
>  
> +/**
> + * put_user_page() - release a gup-pinned page
> + * @page:            pointer to page to be released
> + *
> + * Pages that were pinned via get_user_pages*() must be released via
> + * either put_user_page(), or one of the put_user_pages*() routines
> + * below. This is so that eventually, pages that are pinned via
> + * get_user_pages*() can be separately tracked and uniquely handled. In
> + * particular, interactions with RDMA and filesystems need special
> + * handling.
> + */
> +void put_user_page(struct page *page)
> +{
> +	page = compound_head(page);
> +
> +	VM_BUG_ON_PAGE(page_ref_count(page) < GUP_PIN_COUNTING_BIAS, page);
> +
> +	page_ref_sub(page, GUP_PIN_COUNTING_BIAS);
> +}
> +EXPORT_SYMBOL(put_user_page);
> +
>  typedef int (*set_dirty_func)(struct page *page);
>  
>  static void __put_user_pages_dirty(struct page **pages,
> -- 
> 2.20.1
> 
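
For reference, once a call site is converted, the pin/release pairing looks
roughly like this (a sketch only; do_device_dma() is hypothetical and the
gup arguments are approximate for this kernel version):

	n = get_user_pages_fast(user_addr, nr_pages, 1, pages);
	if (n <= 0)
		return n;

	do_device_dma(pages, n);

	/* Dirty and release via the new API, replacing set_page_dirty() + put_page(). */
	put_user_pages_dirty_lock(pages, n);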

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/6] mm/gup: track gup-pinned pages
  2019-02-20 19:24   ` Ira Weiny
@ 2019-02-20 20:22     ` John Hubbard
  0 siblings, 0 replies; 26+ messages in thread
From: John Hubbard @ 2019-02-20 20:22 UTC (permalink / raw)
  To: Ira Weiny, john.hubbard
  Cc: Andrew Morton, linux-mm, Al Viro, Christian Benvenuti,
	Christoph Hellwig, Christopher Lameter, Dan Williams,
	Dave Chinner, Dennis Dalessandro, Doug Ledford, Jan Kara,
	Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
	Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey,
	LKML, linux-fsdevel

On 2/20/19 11:24 AM, Ira Weiny wrote:
> On Sun, Feb 03, 2019 at 09:21:33PM -0800, john.hubbard@gmail.com wrote:
>> From: John Hubbard <jhubbard@nvidia.com>
> [snip]
>> + *
>> + * Locking: the lockless algorithm described in page_cache_gup_pin_speculative()
>> + * and page_cache_gup_pin_speculative() provides safe operation for
> 
> Did you mean:
> 
> page_cache_gup_pin_speculative and __ page_cache_get_speculative __?
> 
> Just found this while looking at your branch.
> 
> Sorry,
> Ira
> 

Hi Ira,

Yes, thanks for catching that. I've changed it in the git repo now, and it will
show up when the next spin of this patchset goes out.

thanks,
-- 
John Hubbard
NVIDIA

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2019-02-20 20:22 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-04  5:21 [PATCH 0/6] RFC v2: mm: gup/dma tracking john.hubbard
2019-02-04  5:21 ` [PATCH 1/6] mm: introduce put_user_page*(), placeholder versions john.hubbard
2019-02-04  5:21 ` [PATCH 2/6] infiniband/mm: convert put_page() to put_user_page*() john.hubbard
2019-02-04  5:21 ` [PATCH 3/6] mm: page_cache_add_speculative(): refactoring john.hubbard
2019-02-04  5:21 ` [PATCH 4/6] mm/gup: track gup-pinned pages john.hubbard
2019-02-04 18:19   ` Matthew Wilcox
2019-02-04 19:11     ` John Hubbard
2019-02-20 19:24   ` Ira Weiny
2019-02-20 20:22     ` John Hubbard
2019-02-04  5:21 ` [PATCH 5/6] mm/gup: /proc/vmstat support for get/put user pages john.hubbard
2019-02-04  5:21 ` [PATCH 6/6] mm/gup: Documentation/vm/get_user_pages.rst, MAINTAINERS john.hubbard
2019-02-05 16:40   ` Mike Rapoport
2019-02-05 21:53     ` John Hubbard
2019-02-04 16:08 ` [PATCH 0/6] RFC v2: mm: gup/dma tracking Christopher Lameter
2019-02-04 16:12   ` Christoph Hellwig
2019-02-04 16:59     ` Christopher Lameter
2019-02-04 17:14 ` Christopher Lameter
2019-02-04 17:51   ` Jason Gunthorpe
2019-02-04 18:21     ` Christopher Lameter
2019-02-04 19:09       ` Matthew Wilcox
2019-02-04 23:35   ` Ira Weiny
2019-02-05 19:30     ` Christopher Lameter
2019-02-05  1:41 ` Tom Talpey
2019-02-05  8:22   ` John Hubbard
2019-02-05 13:38     ` Tom Talpey
2019-02-05 21:55       ` John Hubbard

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).