Subject: Re: [PATCH RFC 8/9] RDMA/umem: batch page unpin in __ib_mem_release()
From: John Hubbard
To: Joao Martins
CC: Dan Williams, Ira Weiny, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
 Muchun Song, Mike Kravetz, Andrew Morton
Date: Tue, 8 Dec 2020 21:18:57 -0800
Message-ID: <68d7bedf-99b4-6c7f-02f6-3188474b366c@nvidia.com>
In-Reply-To: <20201208172901.17384-10-joao.m.martins@oracle.com>
References: <20201208172901.17384-1-joao.m.martins@oracle.com>
 <20201208172901.17384-10-joao.m.martins@oracle.com>
On 12/8/20 9:29 AM, Joao Martins wrote:
> Take advantage of the newly added unpin_user_pages() batched
> refcount update, by calculating a page array from an SGL
> (same size as the one used in ib_mem_get()) and call
> unpin_user_pages() with that.
>
> unpin_user_pages() will check on consecutive pages that belong
> to the same compound page set and batch the refcount update in
> a single write.
>
> Running a test program which calls mr reg/unreg on a 1G in size
> and measures cost of both operations together (in a guest using rxe)
> with device-dax and hugetlbfs:
>
> Before:
> 159 rounds in 5.027 sec: 31617.923 usec / round (device-dax)
> 466 rounds in 5.009 sec: 10748.456 usec / round (hugetlbfs)
>
> After:
> 305 rounds in 5.010 sec: 16426.047 usec / round (device-dax)
> 1073 rounds in 5.004 sec: 4663.622 usec / round (hugetlbfs)
>
> We also see similar improvements on a setup with pmem and RDMA hardware.
>
> Signed-off-by: Joao Martins
> ---
>  drivers/infiniband/core/umem.c | 25 ++++++++++++++++++++++---
>  1 file changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index e9fecbdf391b..493cfdcf7381 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -44,20 +44,40 @@
>
>  #include "uverbs.h"
>
> +#define PAGES_PER_LIST (PAGE_SIZE / sizeof(struct page *))

I was going to maybe suggest that this item, and the "bool make_dirty"
cleanup, be a separate patch, because they are just cleanups. But the
memory allocation issue below might make that whole (minor) point
obsolete.

> +
>  static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
>  {
> +        bool make_dirty = umem->writable && dirty;
> +        struct page **page_list = NULL;
>          struct sg_page_iter sg_iter;
> +        unsigned long nr = 0;
>          struct page *page;
>
> +        page_list = (struct page **) __get_free_page(GFP_KERNEL);

Yeah, allocating memory in a free/release path is not good.
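That said, unless I'm misreading the loop further down, a failed allocation
here only costs performance, not correctness: with page_list == NULL the code
falls back to the old one-page-at-a-time unpin. Roughly, the control flow ends
up as below (just a sketch using the locals the patch already declares, not the
literal hunk):

    page_list = (struct page **) __get_free_page(GFP_KERNEL);

    for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
            page = sg_page_iter_page(&sg_iter);

            if (!page_list) {
                    /* allocation failed: pre-patch, one-page-at-a-time path */
                    unpin_user_pages_dirty_lock(&page, 1, make_dirty);
                    continue;
            }

            /* gather pages and release them PAGES_PER_LIST at a time */
            page_list[nr++] = page;
            if (nr == PAGES_PER_LIST) {
                    unpin_user_pages_dirty_lock(page_list, nr, make_dirty);
                    nr = 0;
            }
    }

    if (page_list) {
            /* flush the final, partially filled batch */
            if (nr)
                    unpin_user_pages_dirty_lock(page_list, nr, make_dirty);
            free_page((unsigned long) page_list);
    }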
btw, for future use, I see that kmalloc() is generally recommended these
days (that's a change), when you want a pointer to storage, as opposed to
wanting struct pages:

https://lore.kernel.org/lkml/CA+55aFwyxJ+TOpaJZnC5MPJ-25xbLAEu8iJP8zTYhmA3LXFF8Q@mail.gmail.com/

> +
>          if (umem->nmap > 0)
>                  ib_dma_unmap_sg(dev, umem->sg_head.sgl, umem->sg_nents,
>                                  DMA_BIDIRECTIONAL);
>
>          for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
>                  page = sg_page_iter_page(&sg_iter);
> -                unpin_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
> +                if (page_list)
> +                        page_list[nr++] = page;
> +
> +                if (!page_list) {
> +                        unpin_user_pages_dirty_lock(&page, 1, make_dirty);
> +                } else if (nr == PAGES_PER_LIST) {
> +                        unpin_user_pages_dirty_lock(page_list, nr, make_dirty);
> +                        nr = 0;
> +                }
>          }
>
> +        if (nr)
> +                unpin_user_pages_dirty_lock(page_list, nr, make_dirty);
> +
> +        if (page_list)
> +                free_page((unsigned long) page_list);
>          sg_free_table(&umem->sg_head);
>  }
>
> @@ -212,8 +232,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
>                  cond_resched();
>                  ret = pin_user_pages_fast(cur_base,
>                                            min_t(unsigned long, npages,
> -                                                PAGE_SIZE /
> -                                                sizeof(struct page *)),
> +                                                PAGES_PER_LIST),
>                                            gup_flags | FOLL_LONGTERM, page_list);
>                  if (ret < 0)
>                          goto umem_release;
>

thanks,
--
John Hubbard
NVIDIA
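P.S.: to illustrate the kmalloc() point above, this is roughly the
allocation/free pair I had in mind - an untested sketch only, with everything
else left exactly as in the patch:

    struct page **page_list;

    /* we want a pointer to storage here, not a struct page, hence kmalloc() */
    page_list = kmalloc(PAGE_SIZE, GFP_KERNEL);

    /* ... the unpin loop, unchanged from the patch ... */

    /* kfree(NULL) is a no-op, so no explicit check is needed */
    kfree(page_list);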