From mboxrd@z Thu Jan  1 00:00:00 1970
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jonathan Corbet, Jérôme Glisse, Magnus Karlsson,
    Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko, Mike Kravetz,
    Paul Mackerras, Shuah Khan, Vlastimil Babka, bpf@vger.kernel.org,
    dri-devel@lists.freedesktop.org, kvm@vger.kernel.org,
    linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-media@vger.kernel.org, linux-mm@kvack.org,
    linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, LKML, John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 07/19] infiniband: set FOLL_PIN, FOLL_LONGTERM via pin_longterm_pages*()
Date: Wed, 30 Oct 2019 15:49:18 -0700
Message-ID: <20191030224930.3990755-8-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191030224930.3990755-1-jhubbard@nvidia.com>
References: <20191030224930.3990755-1-jhubbard@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

Convert infiniband to use the new wrapper calls, and stop
explicitly setting FOLL_LONGTERM at the call sites.

The new pin_longterm_*() calls replace get_user_pages*()
calls, and set both FOLL_LONGTERM and a new FOLL_PIN
flag. The FOLL_PIN flag requires that the caller return
the pages via put_user_page*() calls; infiniband was
already doing that as part of an earlier commit.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/infiniband/core/umem.c              |  5 ++---
 drivers/infiniband/core/umem_odp.c          | 10 +++++-----
 drivers/infiniband/hw/hfi1/user_pages.c     |  4 ++--
 drivers/infiniband/hw/mthca/mthca_memfree.c |  3 +--
 drivers/infiniband/hw/qib/qib_user_pages.c  |  8 ++++----
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  2 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c    |  9 ++++-----
 drivers/infiniband/sw/siw/siw_mem.c         |  5 ++---
 8 files changed, 21 insertions(+), 25 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 24244a2f68cc..c5a78d3e674b 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -272,11 +272,10 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 
 	while (npages) {
 		down_read(&mm->mmap_sem);
-		ret = get_user_pages(cur_base,
+		ret = pin_longterm_pages(cur_base,
 				     min_t(unsigned long, npages,
 					   PAGE_SIZE / sizeof (struct page *)),
-				     gup_flags | FOLL_LONGTERM,
-				     page_list, NULL);
+				     gup_flags, page_list, NULL);
 		if (ret < 0) {
 			up_read(&mm->mmap_sem);
 			goto umem_release;
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 163ff7ba92b7..a38b67b83db5 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -534,7 +534,7 @@ static int ib_umem_odp_map_dma_single_page(
 	} else if (umem_odp->page_list[page_index] == page) {
 		umem_odp->dma_list[page_index] |= access_mask;
 	} else {
-		pr_err("error: got different pages in IB device and from get_user_pages. IB device page: %p, gup page: %p\n",
+		pr_err("error: got different pages in IB device and from pin_longterm_pages. IB device page: %p, gup page: %p\n",
 		       umem_odp->page_list[page_index], page);
 		/* Better remove the mapping now, to prevent any further
 		 * damage. */
@@ -639,11 +639,11 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 		/*
 		 * Note: this might result in redundent page getting. We can
 		 * avoid this by checking dma_list to be 0 before calling
-		 * get_user_pages. However, this make the code much more
-		 * complex (and doesn't gain us much performance in most use
-		 * cases).
+		 * pin_longterm_pages. However, this makes the code much
+		 * more complex (and doesn't gain us much performance in most
+		 * use cases).
 		 */
-		npages = get_user_pages_remote(owning_process, owning_mm,
+		npages = pin_longterm_pages_remote(owning_process, owning_mm,
 				user_virt, gup_num_pages,
 				flags, local_page_list, NULL, NULL);
 		up_read(&owning_mm->mmap_sem);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index 469acb961fbd..9b55b0a73e29 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -104,9 +104,9 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 			    bool writable, struct page **pages)
 {
 	int ret;
-	unsigned int gup_flags = FOLL_LONGTERM | (writable ? FOLL_WRITE : 0);
+	unsigned int gup_flags = (writable ? FOLL_WRITE : 0);
 
-	ret = get_user_pages_fast(vaddr, npages, gup_flags, pages);
+	ret = pin_longterm_pages_fast(vaddr, npages, gup_flags, pages);
 	if (ret < 0)
 		return ret;
 
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index edccfd6e178f..beec7e4b8a96 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -472,8 +472,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 		goto out;
 	}
 
-	ret = get_user_pages_fast(uaddr & PAGE_MASK, 1,
-				  FOLL_WRITE | FOLL_LONGTERM, pages);
+	ret = pin_longterm_pages_fast(uaddr & PAGE_MASK, 1, FOLL_WRITE, pages);
 	if (ret < 0)
 		goto out;
 
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 6bf764e41891..684a14e14d9b 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -108,10 +108,10 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 
 	down_read(&current->mm->mmap_sem);
 	for (got = 0; got < num_pages; got += ret) {
-		ret = get_user_pages(start_page + got * PAGE_SIZE,
-				     num_pages - got,
-				     FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
-				     p + got, NULL);
+		ret = pin_longterm_pages(start_page + got * PAGE_SIZE,
+					 num_pages - got,
+					 FOLL_WRITE | FOLL_FORCE,
+					 p + got, NULL);
 		if (ret < 0) {
 			up_read(&current->mm->mmap_sem);
 			goto bail_release;
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 05190edc2611..fd86a9d19370 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -670,7 +670,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 		else
 			j = npages;
 
-		ret = get_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
+		ret = pin_longterm_pages_fast(addr, j, 0, pages);
 		if (ret != j) {
 			i = 0;
 			j = ret;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 62e6ffa9ad78..6b90ca1c3771 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -141,11 +141,10 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
 	ret = 0;
 
 	while (npages) {
-		ret = get_user_pages(cur_base,
-				     min_t(unsigned long, npages,
-				     PAGE_SIZE / sizeof(struct page *)),
-				     gup_flags | FOLL_LONGTERM,
-				     page_list, NULL);
+		ret = pin_longterm_pages(cur_base,
+					 min_t(unsigned long, npages,
+					     PAGE_SIZE / sizeof(struct page *)),
+					 gup_flags, page_list, NULL);
 
 		if (ret < 0)
 			goto out;
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index e99983f07663..20e663d7ada8 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -426,9 +426,8 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 		while (nents) {
 			struct page **plist = &umem->page_chunk[i].plist[got];
 
-			rv = get_user_pages(first_page_va, nents,
-					    foll_flags | FOLL_LONGTERM,
-					    plist, NULL);
+			rv = pin_longterm_pages(first_page_va, nents,
+						foll_flags, plist, NULL);
 			if (rv < 0)
 				goto out_sem_up;
 
-- 
2.23.0
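
For reference, the pin/unpin pattern the call sites move to looks roughly
like the following minimal sketch. It is not part of the patch: the
demo_*() helper names are hypothetical, and it assumes the
pin_longterm_pages_fast() wrapper proposed earlier in this series together
with the existing put_user_pages() release call.

#include <linux/mm.h>

/* Hypothetical acquire path, sketching the converted call sites. */
static int demo_pin_user_buffer(unsigned long vaddr, int npages,
				bool writable, struct page **pages)
{
	/* FOLL_LONGTERM is no longer ORed in at the call site... */
	unsigned int gup_flags = writable ? FOLL_WRITE : 0;

	/*
	 * ...because pin_longterm_pages_fast() sets FOLL_LONGTERM and
	 * FOLL_PIN internally. The pre-conversion call was:
	 *
	 *	get_user_pages_fast(vaddr, npages,
	 *			    gup_flags | FOLL_LONGTERM, pages);
	 */
	return pin_longterm_pages_fast(vaddr, npages, gup_flags, pages);
}

/* Hypothetical release path: FOLL_PIN-acquired pages must be returned
 * via put_user_page*(), not put_page().
 */
static void demo_unpin_user_buffer(struct page **pages,
				   unsigned long npages)
{
	put_user_pages(pages, npages);
}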