From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jonathan Corbet, Jérôme Glisse, Kirill A. Shutemov,
    Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko,
    Mike Kravetz, Paul Mackerras, Shuah Khan, Vlastimil Babka, LKML,
    John Hubbard, Leon Romanovsky, Christoph Hellwig, Jason Gunthorpe
Subject: [PATCH v12 09/22] IB/umem: use get_user_pages_fast() to pin DMA pages
Date: Tue, 7 Jan 2020 14:45:45 -0800
Message-ID: <20200107224558.2362728-10-jhubbard@nvidia.com>
In-Reply-To: <20200107224558.2362728-1-jhubbard@nvidia.com>
References: <20200107224558.2362728-1-jhubbard@nvidia.com>

And get rid of the mmap_sem calls, as part of that. Note that
get_user_pages_fast() will, if necessary, fall back to
__gup_longterm_unlocked(), which takes the mmap_sem as needed.

Reviewed-by: Leon Romanovsky
Reviewed-by: Christoph Hellwig
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ira Weiny
Signed-off-by: John Hubbard
---
 drivers/infiniband/core/umem.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 7a3b99597ead..214e87aa609d 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -266,16 +266,13 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
-		down_read(&mm->mmap_sem);
-		ret = get_user_pages(cur_base,
-				     min_t(unsigned long, npages,
-					   PAGE_SIZE / sizeof (struct page *)),
-				     gup_flags | FOLL_LONGTERM,
-				     page_list, NULL);
-		if (ret < 0) {
-			up_read(&mm->mmap_sem);
+		ret = get_user_pages_fast(cur_base,
+					  min_t(unsigned long, npages,
+						PAGE_SIZE /
+						sizeof(struct page *)),
+					  gup_flags | FOLL_LONGTERM, page_list);
+		if (ret < 0)
 			goto umem_release;
-		}
 
 		cur_base += ret * PAGE_SIZE;
 		npages -= ret;
@@ -283,8 +280,6 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 		sg = ib_umem_add_sg_table(sg, page_list, ret,
 			dma_get_max_seg_size(context->device->dma_device),
 			&umem->sg_nents);
-
-		up_read(&mm->mmap_sem);
 	}
 
 	sg_mark_end(sg);
-- 
2.24.1