From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 31 Oct 2019 16:25:48 -0700
From: Ira Weiny
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
 Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
 David Airlie, "David S. Miller", Jan Kara, Jason Gunthorpe, Jens Axboe,
 Jonathan Corbet, Jérôme Glisse, Magnus Karlsson, Mauro Carvalho Chehab,
 Michael Ellerman, Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
 Vlastimil Babka, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org,
 kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-media@vger.kernel.org, linux-rdma@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org, linux-mm@kvack.org,
 LKML
Subject: Re: [PATCH 07/19] infiniband: set FOLL_PIN, FOLL_LONGTERM via pin_longterm_pages*()
Message-ID: <20191031232546.GG14771@iweiny-DESK2.sc.intel.com>
References: <20191030224930.3990755-1-jhubbard@nvidia.com> <20191030224930.3990755-8-jhubbard@nvidia.com>
In-Reply-To: <20191030224930.3990755-8-jhubbard@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.11.1 (2018-12-01)
X-Mailing-List: linux-block@vger.kernel.org

On Wed, Oct 30, 2019 at 03:49:18PM -0700, John Hubbard wrote:
> Convert infiniband to use the new wrapper calls, and stop
> explicitly setting FOLL_LONGTERM at the call sites.
>
> The new pin_longterm_*() calls replace get_user_pages*()
> calls, and set both FOLL_LONGTERM and a new FOLL_PIN
> flag. The FOLL_PIN flag requires that the caller must
> return the pages via put_user_page*() calls, but
> infiniband was already doing that as part of an earlier
> commit.
>

NOTE: I'm not 100% convinced that mixing the flags and new calls like this is
good.  I think we are going to need a lot more documentation on which flags are
"user" accessible vs not...

Reviewed-by: Ira Weiny

> Signed-off-by: John Hubbard
> ---
>  drivers/infiniband/core/umem.c              |  5 ++---
>  drivers/infiniband/core/umem_odp.c          | 10 +++++-----
>  drivers/infiniband/hw/hfi1/user_pages.c     |  4 ++--
>  drivers/infiniband/hw/mthca/mthca_memfree.c |  3 +--
>  drivers/infiniband/hw/qib/qib_user_pages.c  |  8 ++++----
>  drivers/infiniband/hw/qib/qib_user_sdma.c   |  2 +-
>  drivers/infiniband/hw/usnic/usnic_uiom.c    |  9 ++++-----
>  drivers/infiniband/sw/siw/siw_mem.c         |  5 ++---
>  8 files changed, 21 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index 24244a2f68cc..c5a78d3e674b 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -272,11 +272,10 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
>
>  	while (npages) {
>  		down_read(&mm->mmap_sem);
> -		ret = get_user_pages(cur_base,
> +		ret = pin_longterm_pages(cur_base,
>  				     min_t(unsigned long, npages,
>  					   PAGE_SIZE / sizeof (struct page *)),
> -				     gup_flags | FOLL_LONGTERM,
> -				     page_list, NULL);
> +				     gup_flags, page_list, NULL);
>  		if (ret < 0) {
>  			up_read(&mm->mmap_sem);
>  			goto umem_release;
> diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
> index 163ff7ba92b7..a38b67b83db5 100644
> --- a/drivers/infiniband/core/umem_odp.c
> +++ b/drivers/infiniband/core/umem_odp.c
> @@ -534,7 +534,7 @@ static int ib_umem_odp_map_dma_single_page(
>  	} else if (umem_odp->page_list[page_index] == page) {
>  		umem_odp->dma_list[page_index] |= access_mask;
>  	} else {
> -		pr_err("error: got different pages in IB device and from get_user_pages. IB device page: %p, gup page: %p\n",
> +		pr_err("error: got different pages in IB device and from pin_longterm_pages. IB device page: %p, gup page: %p\n",
>  		       umem_odp->page_list[page_index], page);
>  		/* Better remove the mapping now, to prevent any further
>  		 * damage.
>  		 */
> @@ -639,11 +639,11 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
>  		/*
>  		 * Note: this might result in redundent page getting. We can
>  		 * avoid this by checking dma_list to be 0 before calling
> -		 * get_user_pages. However, this make the code much more
> -		 * complex (and doesn't gain us much performance in most use
> -		 * cases).
> +		 * pin_longterm_pages. However, this makes the code much
> +		 * more complex (and doesn't gain us much performance in most
> +		 * use cases).
>  		 */
> -		npages = get_user_pages_remote(owning_process, owning_mm,
> +		npages = pin_longterm_pages_remote(owning_process, owning_mm,
>  				user_virt, gup_num_pages,
>  				flags, local_page_list, NULL, NULL);
>  		up_read(&owning_mm->mmap_sem);
> diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
> index 469acb961fbd..9b55b0a73e29 100644
> --- a/drivers/infiniband/hw/hfi1/user_pages.c
> +++ b/drivers/infiniband/hw/hfi1/user_pages.c
> @@ -104,9 +104,9 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
>  			    bool writable, struct page **pages)
>  {
>  	int ret;
> -	unsigned int gup_flags = FOLL_LONGTERM | (writable ? FOLL_WRITE : 0);
> +	unsigned int gup_flags = (writable ? FOLL_WRITE : 0);
>
> -	ret = get_user_pages_fast(vaddr, npages, gup_flags, pages);
> +	ret = pin_longterm_pages_fast(vaddr, npages, gup_flags, pages);
>  	if (ret < 0)
>  		return ret;
>
> diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
> index edccfd6e178f..beec7e4b8a96 100644
> --- a/drivers/infiniband/hw/mthca/mthca_memfree.c
> +++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
> @@ -472,8 +472,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
>  		goto out;
>  	}
>
> -	ret = get_user_pages_fast(uaddr & PAGE_MASK, 1,
> -				  FOLL_WRITE | FOLL_LONGTERM, pages);
> +	ret = pin_longterm_pages_fast(uaddr & PAGE_MASK, 1, FOLL_WRITE, pages);
>  	if (ret < 0)
>  		goto out;
>
> diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
> index 6bf764e41891..684a14e14d9b 100644
> --- a/drivers/infiniband/hw/qib/qib_user_pages.c
> +++ b/drivers/infiniband/hw/qib/qib_user_pages.c
> @@ -108,10 +108,10 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
>
>  	down_read(&current->mm->mmap_sem);
>  	for (got = 0; got < num_pages; got += ret) {
> -		ret = get_user_pages(start_page + got * PAGE_SIZE,
> -				     num_pages - got,
> -				     FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
> -				     p + got, NULL);
> +		ret = pin_longterm_pages(start_page + got * PAGE_SIZE,
> +					 num_pages - got,
> +					 FOLL_WRITE | FOLL_FORCE,
> +					 p + got, NULL);
>  		if (ret < 0) {
>  			up_read(&current->mm->mmap_sem);
>  			goto bail_release;
> diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
> index 05190edc2611..fd86a9d19370 100644
> --- a/drivers/infiniband/hw/qib/qib_user_sdma.c
> +++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
> @@ -670,7 +670,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
>  	else
>  		j = npages;
>
> -	ret = get_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
> +	ret = pin_longterm_pages_fast(addr, j, 0, pages);
>  	if (ret != j) {
>  		i = 0;
>  		j = ret;
> diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
> index 62e6ffa9ad78..6b90ca1c3771 100644
> --- a/drivers/infiniband/hw/usnic/usnic_uiom.c
> +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
> @@ -141,11 +141,10 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
>  	ret = 0;
>
>  	while (npages) {
> -		ret = get_user_pages(cur_base,
> -				     min_t(unsigned long, npages,
> -					   PAGE_SIZE / sizeof(struct page *)),
> -				     gup_flags | FOLL_LONGTERM,
> -				     page_list, NULL);
> +		ret = pin_longterm_pages(cur_base,
> +					 min_t(unsigned long, npages,
> +					       PAGE_SIZE / sizeof(struct page *)),
> +					 gup_flags, page_list, NULL);
>
>  		if (ret < 0)
>  			goto out;
> diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
> index e99983f07663..20e663d7ada8 100644
> --- a/drivers/infiniband/sw/siw/siw_mem.c
> +++ b/drivers/infiniband/sw/siw/siw_mem.c
> @@ -426,9 +426,8 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
>  	while (nents) {
>  		struct page **plist = &umem->page_chunk[i].plist[got];
>
> -		rv = get_user_pages(first_page_va, nents,
> -				    foll_flags | FOLL_LONGTERM,
> -				    plist, NULL);
> +		rv = pin_longterm_pages(first_page_va, nents,
> +					foll_flags, plist, NULL);
>  		if (rv < 0)
>  			goto out_sem_up;
>
> --
> 2.23.0
>
Owo+ICAJCS8qIEJldHRlciByZW1vdmUgdGhlIG1hcHBpbmcgbm93LCB0byBwcmV2ZW50IGFueSBm dXJ0aGVyCj4gIAkJICogZGFtYWdlLiAqLwo+IEBAIC02MzksMTEgKzYzOSwxMSBAQCBpbnQgaWJf dW1lbV9vZHBfbWFwX2RtYV9wYWdlcyhzdHJ1Y3QgaWJfdW1lbV9vZHAgKnVtZW1fb2RwLCB1NjQg dXNlcl92aXJ0LAo+ICAJCS8qCj4gIAkJICogTm90ZTogdGhpcyBtaWdodCByZXN1bHQgaW4gcmVk dW5kZW50IHBhZ2UgZ2V0dGluZy4gV2UgY2FuCj4gIAkJICogYXZvaWQgdGhpcyBieSBjaGVja2lu ZyBkbWFfbGlzdCB0byBiZSAwIGJlZm9yZSBjYWxsaW5nCj4gLQkJICogZ2V0X3VzZXJfcGFnZXMu IEhvd2V2ZXIsIHRoaXMgbWFrZSB0aGUgY29kZSBtdWNoIG1vcmUKPiAtCQkgKiBjb21wbGV4IChh bmQgZG9lc24ndCBnYWluIHVzIG11Y2ggcGVyZm9ybWFuY2UgaW4gbW9zdCB1c2UKPiAtCQkgKiBj YXNlcykuCj4gKwkJICogcGluX2xvbmd0ZXJtX3BhZ2VzLiBIb3dldmVyLCB0aGlzIG1ha2VzIHRo ZSBjb2RlIG11Y2gKPiArCQkgKiBtb3JlIGNvbXBsZXggKGFuZCBkb2Vzbid0IGdhaW4gdXMgbXVj aCBwZXJmb3JtYW5jZSBpbiBtb3N0Cj4gKwkJICogdXNlIGNhc2VzKS4KPiAgCQkgKi8KPiAtCQlu cGFnZXMgPSBnZXRfdXNlcl9wYWdlc19yZW1vdGUob3duaW5nX3Byb2Nlc3MsIG93bmluZ19tbSwK PiArCQlucGFnZXMgPSBwaW5fbG9uZ3Rlcm1fcGFnZXNfcmVtb3RlKG93bmluZ19wcm9jZXNzLCBv d25pbmdfbW0sCj4gIAkJCQl1c2VyX3ZpcnQsIGd1cF9udW1fcGFnZXMsCj4gIAkJCQlmbGFncywg bG9jYWxfcGFnZV9saXN0LCBOVUxMLCBOVUxMKTsKPiAgCQl1cF9yZWFkKCZvd25pbmdfbW0tPm1t YXBfc2VtKTsKPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy9pbmZpbmliYW5kL2h3L2hmaTEvdXNlcl9w YWdlcy5jIGIvZHJpdmVycy9pbmZpbmliYW5kL2h3L2hmaTEvdXNlcl9wYWdlcy5jCj4gaW5kZXgg NDY5YWNiOTYxZmJkLi45YjU1YjBhNzNlMjkgMTAwNjQ0Cj4gLS0tIGEvZHJpdmVycy9pbmZpbmli YW5kL2h3L2hmaTEvdXNlcl9wYWdlcy5jCj4gKysrIGIvZHJpdmVycy9pbmZpbmliYW5kL2h3L2hm aTEvdXNlcl9wYWdlcy5jCj4gQEAgLTEwNCw5ICsxMDQsOSBAQCBpbnQgaGZpMV9hY3F1aXJlX3Vz ZXJfcGFnZXMoc3RydWN0IG1tX3N0cnVjdCAqbW0sIHVuc2lnbmVkIGxvbmcgdmFkZHIsIHNpemVf dCBucAo+ICAJCQkgICAgYm9vbCB3cml0YWJsZSwgc3RydWN0IHBhZ2UgKipwYWdlcykKPiAgewo+ ICAJaW50IHJldDsKPiAtCXVuc2lnbmVkIGludCBndXBfZmxhZ3MgPSBGT0xMX0xPTkdURVJNIHwg KHdyaXRhYmxlID8gRk9MTF9XUklURSA6IDApOwo+ICsJdW5zaWduZWQgaW50IGd1cF9mbGFncyA9 ICh3cml0YWJsZSA/IEZPTExfV1JJVEUgOiAwKTsKPiAgCj4gLQlyZXQgPSBnZXRfdXNlcl9wYWdl 
c19mYXN0KHZhZGRyLCBucGFnZXMsIGd1cF9mbGFncywgcGFnZXMpOwo+ICsJcmV0ID0gcGluX2xv bmd0ZXJtX3BhZ2VzX2Zhc3QodmFkZHIsIG5wYWdlcywgZ3VwX2ZsYWdzLCBwYWdlcyk7Cj4gIAlp ZiAocmV0IDwgMCkKPiAgCQlyZXR1cm4gcmV0Owo+ICAKPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy9p bmZpbmliYW5kL2h3L210aGNhL210aGNhX21lbWZyZWUuYyBiL2RyaXZlcnMvaW5maW5pYmFuZC9o dy9tdGhjYS9tdGhjYV9tZW1mcmVlLmMKPiBpbmRleCBlZGNjZmQ2ZTE3OGYuLmJlZWM3ZTRiOGE5 NiAxMDA2NDQKPiAtLS0gYS9kcml2ZXJzL2luZmluaWJhbmQvaHcvbXRoY2EvbXRoY2FfbWVtZnJl ZS5jCj4gKysrIGIvZHJpdmVycy9pbmZpbmliYW5kL2h3L210aGNhL210aGNhX21lbWZyZWUuYwo+ IEBAIC00NzIsOCArNDcyLDcgQEAgaW50IG10aGNhX21hcF91c2VyX2RiKHN0cnVjdCBtdGhjYV9k ZXYgKmRldiwgc3RydWN0IG10aGNhX3VhciAqdWFyLAo+ICAJCWdvdG8gb3V0Owo+ICAJfQo+ICAK PiAtCXJldCA9IGdldF91c2VyX3BhZ2VzX2Zhc3QodWFkZHIgJiBQQUdFX01BU0ssIDEsCj4gLQkJ CQkgIEZPTExfV1JJVEUgfCBGT0xMX0xPTkdURVJNLCBwYWdlcyk7Cj4gKwlyZXQgPSBwaW5fbG9u Z3Rlcm1fcGFnZXNfZmFzdCh1YWRkciAmIFBBR0VfTUFTSywgMSwgRk9MTF9XUklURSwgcGFnZXMp Owo+ICAJaWYgKHJldCA8IDApCj4gIAkJZ290byBvdXQ7Cj4gIAo+IGRpZmYgLS1naXQgYS9kcml2 ZXJzL2luZmluaWJhbmQvaHcvcWliL3FpYl91c2VyX3BhZ2VzLmMgYi9kcml2ZXJzL2luZmluaWJh bmQvaHcvcWliL3FpYl91c2VyX3BhZ2VzLmMKPiBpbmRleCA2YmY3NjRlNDE4OTEuLjY4NGExNGUx NGQ5YiAxMDA2NDQKPiAtLS0gYS9kcml2ZXJzL2luZmluaWJhbmQvaHcvcWliL3FpYl91c2VyX3Bh Z2VzLmMKPiArKysgYi9kcml2ZXJzL2luZmluaWJhbmQvaHcvcWliL3FpYl91c2VyX3BhZ2VzLmMK PiBAQCAtMTA4LDEwICsxMDgsMTAgQEAgaW50IHFpYl9nZXRfdXNlcl9wYWdlcyh1bnNpZ25lZCBs b25nIHN0YXJ0X3BhZ2UsIHNpemVfdCBudW1fcGFnZXMsCj4gIAo+ICAJZG93bl9yZWFkKCZjdXJy ZW50LT5tbS0+bW1hcF9zZW0pOwo+ICAJZm9yIChnb3QgPSAwOyBnb3QgPCBudW1fcGFnZXM7IGdv dCArPSByZXQpIHsKPiAtCQlyZXQgPSBnZXRfdXNlcl9wYWdlcyhzdGFydF9wYWdlICsgZ290ICog UEFHRV9TSVpFLAo+IC0JCQkJICAgICBudW1fcGFnZXMgLSBnb3QsCj4gLQkJCQkgICAgIEZPTExf TE9OR1RFUk0gfCBGT0xMX1dSSVRFIHwgRk9MTF9GT1JDRSwKPiAtCQkJCSAgICAgcCArIGdvdCwg TlVMTCk7Cj4gKwkJcmV0ID0gcGluX2xvbmd0ZXJtX3BhZ2VzKHN0YXJ0X3BhZ2UgKyBnb3QgKiBQ QUdFX1NJWkUsCj4gKwkJCQkJIG51bV9wYWdlcyAtIGdvdCwKPiArCQkJCQkgRk9MTF9XUklURSB8 
IEZPTExfRk9SQ0UsCj4gKwkJCQkJIHAgKyBnb3QsIE5VTEwpOwo+ICAJCWlmIChyZXQgPCAwKSB7 Cj4gIAkJCXVwX3JlYWQoJmN1cnJlbnQtPm1tLT5tbWFwX3NlbSk7Cj4gIAkJCWdvdG8gYmFpbF9y ZWxlYXNlOwo+IGRpZmYgLS1naXQgYS9kcml2ZXJzL2luZmluaWJhbmQvaHcvcWliL3FpYl91c2Vy X3NkbWEuYyBiL2RyaXZlcnMvaW5maW5pYmFuZC9ody9xaWIvcWliX3VzZXJfc2RtYS5jCj4gaW5k ZXggMDUxOTBlZGMyNjExLi5mZDg2YTlkMTkzNzAgMTAwNjQ0Cj4gLS0tIGEvZHJpdmVycy9pbmZp bmliYW5kL2h3L3FpYi9xaWJfdXNlcl9zZG1hLmMKPiArKysgYi9kcml2ZXJzL2luZmluaWJhbmQv aHcvcWliL3FpYl91c2VyX3NkbWEuYwo+IEBAIC02NzAsNyArNjcwLDcgQEAgc3RhdGljIGludCBx aWJfdXNlcl9zZG1hX3Bpbl9wYWdlcyhjb25zdCBzdHJ1Y3QgcWliX2RldmRhdGEgKmRkLAo+ICAJ CWVsc2UKPiAgCQkJaiA9IG5wYWdlczsKPiAgCj4gLQkJcmV0ID0gZ2V0X3VzZXJfcGFnZXNfZmFz dChhZGRyLCBqLCBGT0xMX0xPTkdURVJNLCBwYWdlcyk7Cj4gKwkJcmV0ID0gcGluX2xvbmd0ZXJt X3BhZ2VzX2Zhc3QoYWRkciwgaiwgMCwgcGFnZXMpOwo+ICAJCWlmIChyZXQgIT0gaikgewo+ICAJ CQlpID0gMDsKPiAgCQkJaiA9IHJldDsKPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy9pbmZpbmliYW5k L2h3L3VzbmljL3VzbmljX3Vpb20uYyBiL2RyaXZlcnMvaW5maW5pYmFuZC9ody91c25pYy91c25p Y191aW9tLmMKPiBpbmRleCA2MmU2ZmZhOWFkNzguLjZiOTBjYTFjMzc3MSAxMDA2NDQKPiAtLS0g YS9kcml2ZXJzL2luZmluaWJhbmQvaHcvdXNuaWMvdXNuaWNfdWlvbS5jCj4gKysrIGIvZHJpdmVy cy9pbmZpbmliYW5kL2h3L3VzbmljL3VzbmljX3Vpb20uYwo+IEBAIC0xNDEsMTEgKzE0MSwxMCBA QCBzdGF0aWMgaW50IHVzbmljX3Vpb21fZ2V0X3BhZ2VzKHVuc2lnbmVkIGxvbmcgYWRkciwgc2l6 ZV90IHNpemUsIGludCB3cml0YWJsZSwKPiAgCXJldCA9IDA7Cj4gIAo+ICAJd2hpbGUgKG5wYWdl cykgewo+IC0JCXJldCA9IGdldF91c2VyX3BhZ2VzKGN1cl9iYXNlLAo+IC0JCQkJICAgICBtaW5f dCh1bnNpZ25lZCBsb25nLCBucGFnZXMsCj4gLQkJCQkgICAgIFBBR0VfU0laRSAvIHNpemVvZihz dHJ1Y3QgcGFnZSAqKSksCj4gLQkJCQkgICAgIGd1cF9mbGFncyB8IEZPTExfTE9OR1RFUk0sCj4g LQkJCQkgICAgIHBhZ2VfbGlzdCwgTlVMTCk7Cj4gKwkJcmV0ID0gcGluX2xvbmd0ZXJtX3BhZ2Vz KGN1cl9iYXNlLAo+ICsJCQkJCSBtaW5fdCh1bnNpZ25lZCBsb25nLCBucGFnZXMsCj4gKwkJCQkJ ICAgICBQQUdFX1NJWkUgLyBzaXplb2Yoc3RydWN0IHBhZ2UgKikpLAo+ICsJCQkJCSBndXBfZmxh Z3MsIHBhZ2VfbGlzdCwgTlVMTCk7Cj4gIAo+ICAJCWlmIChyZXQgPCAwKQo+ICAJCQlnb3RvIG91 
dDsKPiBkaWZmIC0tZ2l0IGEvZHJpdmVycy9pbmZpbmliYW5kL3N3L3Npdy9zaXdfbWVtLmMgYi9k cml2ZXJzL2luZmluaWJhbmQvc3cvc2l3L3Npd19tZW0uYwo+IGluZGV4IGU5OTk4M2YwNzY2My4u MjBlNjYzZDdhZGE4IDEwMDY0NAo+IC0tLSBhL2RyaXZlcnMvaW5maW5pYmFuZC9zdy9zaXcvc2l3 X21lbS5jCj4gKysrIGIvZHJpdmVycy9pbmZpbmliYW5kL3N3L3Npdy9zaXdfbWVtLmMKPiBAQCAt NDI2LDkgKzQyNiw4IEBAIHN0cnVjdCBzaXdfdW1lbSAqc2l3X3VtZW1fZ2V0KHU2NCBzdGFydCwg dTY0IGxlbiwgYm9vbCB3cml0YWJsZSkKPiAgCQl3aGlsZSAobmVudHMpIHsKPiAgCQkJc3RydWN0 IHBhZ2UgKipwbGlzdCA9ICZ1bWVtLT5wYWdlX2NodW5rW2ldLnBsaXN0W2dvdF07Cj4gIAo+IC0J CQlydiA9IGdldF91c2VyX3BhZ2VzKGZpcnN0X3BhZ2VfdmEsIG5lbnRzLAo+IC0JCQkJCSAgICBm b2xsX2ZsYWdzIHwgRk9MTF9MT05HVEVSTSwKPiAtCQkJCQkgICAgcGxpc3QsIE5VTEwpOwo+ICsJ CQlydiA9IHBpbl9sb25ndGVybV9wYWdlcyhmaXJzdF9wYWdlX3ZhLCBuZW50cywKPiArCQkJCQkJ Zm9sbF9mbGFncywgcGxpc3QsIE5VTEwpOwo+ICAJCQlpZiAocnYgPCAwKQo+ICAJCQkJZ290byBv dXRfc2VtX3VwOwo+ICAKPiAtLSAKPiAyLjIzLjAKPiAKX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX18KZHJpLWRldmVsIG1haWxpbmcgbGlzdApkcmktZGV2ZWxA bGlzdHMuZnJlZWRlc2t0b3Aub3JnCmh0dHBzOi8vbGlzdHMuZnJlZWRlc2t0b3Aub3JnL21haWxt YW4vbGlzdGluZm8vZHJpLWRldmVs