Linux-NFS Archive on lore.kernel.org
From: John Hubbard <jhubbard@nvidia.com>
To: Mike Marshall <hubcap@omnibond.com>
Cc: john.hubbard@gmail.com,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Christoph Hellwig" <hch@infradead.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Dave Chinner" <david@fromorbit.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Ira Weiny" <ira.weiny@intel.com>, "Jan Kara" <jack@suse.cz>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	amd-gfx@lists.freedesktop.org,
	ceph-devel <ceph-devel@vger.kernel.org>,
	devel@driverdev.osuosl.org, devel@lists.orangefs.org,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-block@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-fbdev@vger.kernel.org,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-media@vger.kernel.org, linux-mm <linux-mm@kvack.org>,
	"Linux NFS Mailing List" <linux-nfs@vger.kernel.org>,
	linux-rdma@vger.kernel.org, linux-rpi-kernel@lists.infradead.org,
	linux-xfs@vger.kernel.org, netdev@vger.kernel.org,
	rds-devel@oss.oracle.com, sparclinux@vger.kernel.org,
	x86@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 00/39] put_user_pages(): miscellaneous call sites
Date: Thu, 29 Aug 2019 19:21:03 -0700
Message-ID: <d453f865-2224-ed53-a2f4-f43d574c130a@nvidia.com> (raw)
In-Reply-To: <CAOg9mSQKGDywcMde2DE42diUS7J8m74Hdv+xp_PJhC39EXZQuw@mail.gmail.com>

On 8/29/2019 6:29 PM, Mike Marshall wrote:
> Hi John...
> 
> I added this patch series on top of Linux 5.3rc6 and ran
> xfstests with no regressions...
> 
> Acked-by: Mike Marshall <hubcap@omnibond.com>
> 

Hi Mike (and I hope Ira and others are reading as well, because
I'm making a bunch of claims further down),

That's great news; thanks for running that test suite, and for the
report and the ACK.

There is an interesting pause right now, because we've made some
tentative decisions about gup pinning that affect the call sites. A
key decision is that only pages requested via FOLL_PIN will require
put_user_page*() to release them. There are four main cases, first
explained by Jan Kara and Vlastimil Babka, and now written up in my
FOLL_PIN patch [1].
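
For anyone skimming, the core rule is that a page pinned via FOLL_PIN
must be released with put_user_page*(), never plain put_page(). A
minimal sketch of that pairing (illustrative only; the exact flag
plumbing is what [1] is still hashing out, and whether call sites pass
FOLL_PIN directly is an open question, per point 4 below):

```c
/*
 * Illustrative sketch, assuming the FOLL_PIN semantics proposed
 * in [1]. Pages pinned this way must be released via
 * put_user_page*(), not put_page().
 */
ret = get_user_pages_fast(start, nr_pages, FOLL_PIN | FOLL_WRITE,
			  pages);
if (ret < 0)
	return ret;

/* ... DMA or Direct IO into the pinned pages ... */

/* Mark dirty and unpin in one call (see patch 01 in this series): */
put_user_pages_dirty_lock(pages, ret, true);
```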

So, what that means for this series is that:

1. Some call sites (mlock.c, for example, and in fact many of the mm/
files, among others) will not be converted: some of these patches will
get dropped, especially in mm/.

2. Call sites that do DirectIO or RDMA will need to set FOLL_PIN, and
will also need to call put_user_page().

3. Call sites that do RDMA will need to set FOLL_LONGTERM *and* FOLL_PIN,

    3.a. ...and will at least in some cases need to provide a link to a
    vaddr_pin object, and thus back to a struct file*...maybe. Still
    under discussion.

4. It's desirable to keep FOLL_* flags (or at least FOLL_PIN) internal
to the gup() calls. That implies using a wrapper call such as Ira's
vaddr_pin_[user]_pages(), instead of gup(), and vaddr_unpin_[user]_pages()
instead of put_user_page*().

5. We don't want to churn the call sites unnecessarily.
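
To make point 4 concrete: ideally a call site would never mention
FOLL_* at all, and would look roughly like this. This is a sketch based
on Ira's proposed vaddr_pin_[user]_pages() wrappers; the names and
signatures are hypothetical and still under discussion, so treat it as
shape, not API:

```c
/* Hypothetical wrapper-based call site; signatures are illustrative. */
ret = vaddr_pin_user_pages(&vaddr_pin, start, nr_pages, pages, write);
if (ret < 0)
	return ret;

/* ... use the pinned pages ... */

vaddr_unpin_user_pages(&vaddr_pin, pages, nr_pages);
```

The point of the wrapper is that FOLL_PIN (and, for the long-term
cases, FOLL_LONGTERM) stays internal to gup(), so call sites can't get
the flag/release pairing wrong.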

With that in mind, I've taken another pass through all these patches
and narrowed it down to:

     a) 12 call sites that I'd like to convert soon, but even those
        really look cleaner with a full conversion to a wrapper call
        similar to (identical to?) vaddr_pin_[user]_pages(), probably
        just the FOLL_PIN only variant (not FOLL_LONGTERM). That
        wrapper call is not ready yet, though.

     b) Some more call sites that require both FOLL_PIN and FOLL_LONGTERM.
        Definitely will wait to use the wrapper calls for these, because
        they may also require hooking up to a struct file*.

     c) A few more that were already applied, which is fine: they show
        where to convert, and simplify a few sites anyway. But they'll
        need follow-on changes to set FOLL_PIN, one way or another.

     d) And of course a few sites whose patches get dropped, as mentioned
        above.

[1] https://lore.kernel.org/r/20190821040727.19650-3-jhubbard@nvidia.com

thanks,
-- 
John Hubbard
NVIDIA

Thread overview: 52+ messages
2019-08-07  1:32 john.hubbard
2019-08-07  1:33 ` [PATCH v3 01/41] mm/gup: add make_dirty arg to put_user_pages_dirty_lock() john.hubbard
2019-08-07  1:33 ` [PATCH v3 02/41] drivers/gpu/drm/via: convert put_page() to put_user_page*() john.hubbard
2019-08-07  1:33 ` [PATCH v3 03/41] net/xdp: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 04/41] net/rds: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 05/41] net/ceph: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 06/41] x86/kvm: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 07/41] drm/etnaviv: convert release_pages() to put_user_pages() john.hubbard
2019-08-07  1:33 ` [PATCH v3 08/41] drm/i915: convert put_page() to put_user_page*() john.hubbard
2019-08-07  1:33 ` [PATCH v3 09/41] drm/radeon: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 10/41] media/ivtv: " john.hubbard
2019-08-07  8:51   ` Hans Verkuil
2019-08-07  1:33 ` [PATCH v3 11/41] media/v4l2-core/mm: " john.hubbard
2019-08-07  7:20   ` Sakari Ailus
2019-08-07  8:07   ` Hans Verkuil
2019-08-07  1:33 ` [PATCH v3 12/41] genwqe: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 13/41] scif: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 14/41] vmci: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 15/41] rapidio: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 16/41] oradax: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 17/41] staging/vc04_services: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 18/41] drivers/tee: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 19/41] vfio: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 20/41] fbdev/pvr2fb: " john.hubbard
2019-08-09 11:38   ` Bartlomiej Zolnierkiewicz
2019-08-07  1:33 ` [PATCH v3 21/41] fsl_hypervisor: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 22/41] xen: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 23/41] fs/exec.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 24/41] orangefs: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 25/41] uprobes: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 26/41] futex: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 27/41] mm/frame_vector.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 28/41] mm/gup_benchmark.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 29/41] mm/memory.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 30/41] mm/madvise.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 31/41] mm/process_vm_access.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 32/41] crypt: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 33/41] fs/nfs: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 34/41] goldfish_pipe: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 35/41] kernel/events/core.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 36/41] fs/binfmt_elf: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 37/41] security/tomoyo: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 38/41] powerpc: " john.hubbard
2019-08-08  5:42   ` Michael Ellerman
2019-08-09  1:26     ` John Hubbard
2019-08-09 12:20       ` Michael Ellerman
2019-08-07  1:33 ` [PATCH v3 39/41] mm/mlock.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 40/41] mm/mempolicy.c: " john.hubbard
2019-08-07  1:33 ` [PATCH v3 41/41] mm/ksm: " john.hubbard
2019-08-07  1:49 ` [PATCH v3 00/39] put_user_pages(): miscellaneous call sites John Hubbard
2019-08-30  1:29   ` Mike Marshall
2019-08-30  2:21     ` John Hubbard [this message]
