From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Al Viro" <viro@zeniv.linux.org.uk>, "Alex Williamson" <alex.williamson@redhat.com>, "Benjamin Herrenschmidt" <benh@kernel.crashing.org>, "Björn Töpel" <bjorn.topel@intel.com>, "Christoph Hellwig" <hch@infradead.org>, "Dan Williams" <dan.j.williams@intel.com>, "Daniel Vetter" <daniel@ffwll.ch>, "Dave Chinner" <david@fromorbit.com>, "David Airlie" <airlied@linux.ie>, "David S . Miller" <davem@davemloft.net>, "Ira Weiny" <ira.weiny@intel.com>, "Jan Kara" <jack@suse.cz>, "Jason Gunthorpe" <jgg@ziepe.ca>, "Jens Axboe" <axboe@kernel.dk>, "Jonathan Corbet" <corbet@lwn.net>, "Jérôme Glisse" <jglisse@redhat.com>, "Magnus Karlsson" <magnus.karlsson@intel.com>, "Mauro Carvalho Chehab" <mchehab@kernel.org>, "Michael Ellerman" <mpe@ellerman.id.au>, "Michal Hocko" <mhocko@suse.com>, "Mike Kravetz" <mike.kravetz@oracle.com>, "Paul Mackerras" <paulus@samba.org>, "Shuah Khan" <shuah@kernel.org>, "Vlastimil Babka" <vbabka@suse.cz>, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org, kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org, linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>, "John Hubbard" <jhubbard@nvidia.com>
Subject: [PATCH v2 07/18] infiniband: set FOLL_PIN, FOLL_LONGTERM via pin_longterm_pages*()
Date: Sun, 3 Nov 2019 13:18:02 -0800
Message-ID: <20191103211813.213227-8-jhubbard@nvidia.com> (raw)
In-Reply-To: <20191103211813.213227-1-jhubbard@nvidia.com>

Convert infiniband to use the new wrapper calls, and stop explicitly
setting FOLL_LONGTERM at the call sites. The new pin_longterm_*() calls
replace get_user_pages*() calls, and set both FOLL_LONGTERM and a new
FOLL_PIN flag. The FOLL_PIN flag requires that the caller return the
pages via put_user_page*() calls, but infiniband was already doing that
as part of an earlier commit.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/infiniband/core/umem.c              |  5 ++---
 drivers/infiniband/core/umem_odp.c          | 10 +++++-----
 drivers/infiniband/hw/hfi1/user_pages.c     |  4 ++--
 drivers/infiniband/hw/mthca/mthca_memfree.c |  3 +--
 drivers/infiniband/hw/qib/qib_user_pages.c  |  8 ++++----
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  2 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c    |  9 ++++-----
 drivers/infiniband/sw/siw/siw_mem.c         |  5 ++---
 8 files changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 24244a2f68cc..c5a78d3e674b 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -272,11 +272,10 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 
 	while (npages) {
 		down_read(&mm->mmap_sem);
-		ret = get_user_pages(cur_base,
+		ret = pin_longterm_pages(cur_base,
 				     min_t(unsigned long, npages,
 					   PAGE_SIZE / sizeof (struct page *)),
-				     gup_flags | FOLL_LONGTERM,
-				     page_list, NULL);
+				     gup_flags, page_list, NULL);
 		if (ret < 0) {
 			up_read(&mm->mmap_sem);
 			goto umem_release;
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 163ff7ba92b7..a38b67b83db5 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -534,7 +534,7 @@ static int ib_umem_odp_map_dma_single_page(
 	} else if (umem_odp->page_list[page_index] == page) {
 		umem_odp->dma_list[page_index] |= access_mask;
 	} else {
-		pr_err("error: got different pages in IB device and from get_user_pages. IB device page: %p, gup page: %p\n",
+		pr_err("error: got different pages in IB device and from pin_longterm_pages. IB device page: %p, gup page: %p\n",
 		       umem_odp->page_list[page_index], page);
 		/* Better remove the mapping now, to prevent any further
 		 * damage. */
@@ -639,11 +639,11 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 		/*
 		 * Note: this might result in redundent page getting. We can
 		 * avoid this by checking dma_list to be 0 before calling
-		 * get_user_pages. However, this make the code much more
-		 * complex (and doesn't gain us much performance in most use
-		 * cases).
+		 * pin_longterm_pages. However, this makes the code much
+		 * more complex (and doesn't gain us much performance in most
+		 * use cases).
 		 */
-		npages = get_user_pages_remote(owning_process, owning_mm,
+		npages = pin_longterm_pages_remote(owning_process, owning_mm,
 				user_virt, gup_num_pages,
 				flags, local_page_list, NULL, NULL);
 		up_read(&owning_mm->mmap_sem);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index 469acb961fbd..9b55b0a73e29 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -104,9 +104,9 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 			    bool writable, struct page **pages)
 {
 	int ret;
-	unsigned int gup_flags = FOLL_LONGTERM | (writable ? FOLL_WRITE : 0);
+	unsigned int gup_flags = (writable ? FOLL_WRITE : 0);
 
-	ret = get_user_pages_fast(vaddr, npages, gup_flags, pages);
+	ret = pin_longterm_pages_fast(vaddr, npages, gup_flags, pages);
 	if (ret < 0)
 		return ret;
 
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index edccfd6e178f..beec7e4b8a96 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -472,8 +472,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 		goto out;
 	}
 
-	ret = get_user_pages_fast(uaddr & PAGE_MASK, 1,
-				  FOLL_WRITE | FOLL_LONGTERM, pages);
+	ret = pin_longterm_pages_fast(uaddr & PAGE_MASK, 1, FOLL_WRITE, pages);
 	if (ret < 0)
 		goto out;
 
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 6bf764e41891..684a14e14d9b 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -108,10 +108,10 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 
 	down_read(&current->mm->mmap_sem);
 	for (got = 0; got < num_pages; got += ret) {
-		ret = get_user_pages(start_page + got * PAGE_SIZE,
-				     num_pages - got,
-				     FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
-				     p + got, NULL);
+		ret = pin_longterm_pages(start_page + got * PAGE_SIZE,
+					 num_pages - got,
+					 FOLL_WRITE | FOLL_FORCE,
+					 p + got, NULL);
 		if (ret < 0) {
 			up_read(&current->mm->mmap_sem);
 			goto bail_release;
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 05190edc2611..fd86a9d19370 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -670,7 +670,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	else
 		j = npages;
 
-	ret = get_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
+	ret = pin_longterm_pages_fast(addr, j, 0, pages);
 	if (ret != j) {
 		i = 0;
 		j = ret;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 62e6ffa9ad78..6b90ca1c3771 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -141,11 +141,10 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
 	ret = 0;
 
 	while (npages) {
-		ret = get_user_pages(cur_base,
-				     min_t(unsigned long, npages,
-					   PAGE_SIZE / sizeof(struct page *)),
-				     gup_flags | FOLL_LONGTERM,
-				     page_list, NULL);
+		ret = pin_longterm_pages(cur_base,
+					 min_t(unsigned long, npages,
+					       PAGE_SIZE / sizeof(struct page *)),
+					 gup_flags, page_list, NULL);
 		if (ret < 0)
 			goto out;
 
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index e99983f07663..20e663d7ada8 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -426,9 +426,8 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 		while (nents) {
 			struct page **plist = &umem->page_chunk[i].plist[got];
 
-			rv = get_user_pages(first_page_va, nents,
-					    foll_flags | FOLL_LONGTERM,
-					    plist, NULL);
+			rv = pin_longterm_pages(first_page_va, nents,
+						foll_flags, plist, NULL);
 			if (rv < 0)
 				goto out_sem_up;
 
-- 
2.23.0