From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ming Mao
Subject: [PATCH V4 2/2] vfio: optimize unpinning of pages
Date: Tue, 8 Sep 2020 21:32:04 +0800
Message-ID: <20200908133204.1338-3-maoming.maoming@huawei.com>
X-Mailer: git-send-email 2.26.2.windows.1
In-Reply-To: <20200908133204.1338-1-maoming.maoming@huawei.com>
References: <20200908133204.1338-1-maoming.maoming@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain

unpin_user_pages_dirty_lock() unpins pages one by one in a for loop.
Add a new API, unpin_user_hugetlb_pages_dirty_lock(), which removes
that loop: when the pages to be unpinned all belong to one hugetlb
page, the whole batch can be released with a single operation on the
head page.

Signed-off-by: Ming Mao
---
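[ Editor's note: the sketch below is illustrative only and is not part of
  the patch.  It shows how a caller might combine the existing
  unpin_user_pages_dirty_lock() with the unpin_user_hugetlb_pages_dirty_lock()
  helper added by this series.  The wrapper function, its name, and the
  assumption that pages[] holds npages consecutive subpages pinned earlier
  (e.g. with pin_user_pages_fast()) are hypothetical. ]

#include <linux/mm.h>

static void release_pinned_range(struct page **pages, unsigned long npages,
				 bool dirty)
{
	int done;

	if (npages && PageHuge(pages[0])) {
		/*
		 * Fast path from this series: a single call drops the pin
		 * count and page references for the whole run on the head
		 * page.  It may unpin fewer pages than requested if the
		 * run crosses a hugetlb page boundary, so any remainder
		 * falls through to the per-page path below.
		 */
		done = unpin_user_hugetlb_pages_dirty_lock(pages[0],
							   npages, dirty);
		if (done > 0) {
			pages += done;
			npages -= done;
		}
	}

	/* Existing path: unpins one PAGE_SIZE page at a time. */
	if (npages)
		unpin_user_pages_dirty_lock(pages, npages, dirty);
}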
 drivers/vfio/vfio_iommu_type1.c | 90 +++++++++++++++++++++++++++-----
 include/linux/mm.h              |  3 ++
 mm/gup.c                        | 91 +++++++++++++++++++++++++++++++++
 3 files changed, 172 insertions(+), 12 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 8c1dc5136..44fc5f16c 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -609,6 +609,26 @@ static long hugetlb_page_vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr
 	return ret;
 }
 
+/*
+ * put pfns for a hugetlb page
+ * @start: the PAGE_SIZE-page we start to put, can be any page in this hugetlb page
+ * @npages: the number of PAGE_SIZE-pages to put
+ * @prot: IOMMU_READ/WRITE
+ */
+static int hugetlb_put_pfn(unsigned long start, unsigned long npages, int prot)
+{
+	struct page *page;
+
+	if (!pfn_valid(start))
+		return -EFAULT;
+
+	page = pfn_to_page(start);
+	if (!page || !PageHuge(page))
+		return -EINVAL;
+
+	return unpin_user_hugetlb_pages_dirty_lock(page, npages, prot & IOMMU_WRITE);
+}
+
 static long vfio_pin_hugetlb_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 					   long npage, unsigned long *pfn_base,
 					   unsigned long limit)
@@ -616,7 +636,7 @@ static long vfio_pin_hugetlb_pages_remote(struct vfio_dma *dma, unsigned long va
 	unsigned long pfn = 0;
 	long ret, pinned = 0, lock_acct = 0;
 	dma_addr_t iova = vaddr - dma->vaddr + dma->iova;
-	long pinned_loop, i;
+	long pinned_loop;
 
 	/* This code path is only user initiated */
 	if (!current->mm)
@@ -674,8 +694,7 @@ static long vfio_pin_hugetlb_pages_remote(struct vfio_dma *dma, unsigned long va
 
 	if (!dma->lock_cap &&
 	    current->mm->locked_vm + lock_acct > limit) {
-		for (i = 0; i < pinned_loop; i++)
-			put_pfn(pfn++, dma->prot);
+		hugetlb_put_pfn(pfn, pinned_loop, dma->prot);
 		pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
 			__func__, limit << PAGE_SHIFT);
 		ret = -ENOMEM;
@@ -695,6 +714,40 @@ static long vfio_pin_hugetlb_pages_remote(struct vfio_dma *dma, unsigned long va
 	return pinned;
 }
 
+static long vfio_unpin_hugetlb_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
+					    unsigned long pfn, long npage,
+					    bool do_accounting)
+{
+	long unlocked = 0, locked = 0;
+	long i, unpinned;
+
+	for (i = 0; i < npage; i += unpinned, iova += unpinned * PAGE_SIZE) {
+		if (!is_hugetlb_page(pfn))
+			goto slow_path;
+
+		unpinned = hugetlb_put_pfn(pfn, npage - i, dma->prot);
+		if (unpinned > 0) {
+			pfn += unpinned;
+			unlocked += unpinned;
+			locked += hugetlb_page_get_externally_pinned_num(dma, pfn, unpinned);
+		} else
+			goto slow_path;
+	}
+slow_path:
+	for (; i < npage; i++, iova += PAGE_SIZE) {
+		if (put_pfn(pfn++, dma->prot)) {
+			unlocked++;
+			if (vfio_find_vpfn(dma, iova))
+				locked++;
+		}
+	}
+
+	if (do_accounting)
+		vfio_lock_acct(dma, locked - unlocked, true);
+
+	return unlocked;
+}
+
 /*
  * Attempt to pin pages.  We really don't want to track all the pfns and
  * the iommu can only map chunks of consecutive pfns anyway, so get the
@@ -993,11 +1046,18 @@ static long vfio_sync_unpin(struct vfio_dma *dma, struct vfio_domain *domain,
 	iommu_tlb_sync(domain->domain, iotlb_gather);
 
 	list_for_each_entry_safe(entry, next, regions, list) {
-		unlocked += vfio_unpin_pages_remote(dma,
-						    entry->iova,
-						    entry->phys >> PAGE_SHIFT,
-						    entry->len >> PAGE_SHIFT,
-						    false);
+		if (is_hugetlb_page(entry->phys >> PAGE_SHIFT))
+			unlocked += vfio_unpin_hugetlb_pages_remote(dma,
+						entry->iova,
+						entry->phys >> PAGE_SHIFT,
+						entry->len >> PAGE_SHIFT,
+						false);
+		else
+			unlocked += vfio_unpin_pages_remote(dma,
+						entry->iova,
+						entry->phys >> PAGE_SHIFT,
+						entry->len >> PAGE_SHIFT,
+						false);
 		list_del(&entry->list);
 		kfree(entry);
 	}
@@ -1064,10 +1124,16 @@ static size_t unmap_unpin_slow(struct vfio_domain *domain,
 	size_t unmapped = iommu_unmap(domain->domain, *iova, len);
 
 	if (unmapped) {
-		*unlocked += vfio_unpin_pages_remote(dma, *iova,
-						     phys >> PAGE_SHIFT,
-						     unmapped >> PAGE_SHIFT,
-						     false);
+		if (is_hugetlb_page(phys >> PAGE_SHIFT))
+			*unlocked += vfio_unpin_hugetlb_pages_remote(dma, *iova,
+						phys >> PAGE_SHIFT,
+						unmapped >> PAGE_SHIFT,
+						false);
+		else
+			*unlocked += vfio_unpin_pages_remote(dma, *iova,
+						phys >> PAGE_SHIFT,
+						unmapped >> PAGE_SHIFT,
+						false);
 		*iova += unmapped;
 		cond_resched();
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc7b87310..a425135d0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1202,8 +1202,11 @@ static inline void put_page(struct page *page)
 #define GUP_PIN_COUNTING_BIAS (1U << 10)
 
 void unpin_user_page(struct page *page);
+int unpin_user_hugetlb_page(struct page *page, unsigned long npages);
 void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty);
+int unpin_user_hugetlb_pages_dirty_lock(struct page *pages, unsigned long npages,
+					bool make_dirty);
 void unpin_user_pages(struct page **pages, unsigned long npages);
 
 /**
diff --git a/mm/gup.c b/mm/gup.c
index 6f47697f8..14ee321eb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -205,11 +205,42 @@ static bool __unpin_devmap_managed_user_page(struct page *page)
 
 	return true;
 }
+
+static bool __unpin_devmap_managed_user_hugetlb_page(struct page *page, unsigned long npages)
+{
+	int count;
+	struct page *head = compound_head(page);
+
+	if (!page_is_devmap_managed(head))
+		return false;
+
+	hpage_pincount_sub(head, npages);
+
+	count = page_ref_sub_return(head, npages);
+
+	mod_node_page_state(page_pgdat(head), NR_FOLL_PIN_RELEASED, npages);
+	/*
+	 * devmap page refcounts are 1-based, rather than 0-based: if
+	 * refcount is 1, then the page is free and the refcount is
+	 * stable because nobody holds a reference on the page.
+	 */
+	if (count == 1)
+		free_devmap_managed_page(head);
+	else if (!count)
+		__put_page(head);
+
+	return true;
+}
 #else
 static bool __unpin_devmap_managed_user_page(struct page *page)
 {
 	return false;
 }
+
+static bool __unpin_devmap_managed_user_hugetlb_page(struct page *page, unsigned long npages)
+{
+	return false;
+}
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
 /**
@@ -248,6 +279,66 @@ void unpin_user_page(struct page *page)
 }
 EXPORT_SYMBOL(unpin_user_page);
 
+int unpin_user_hugetlb_page(struct page *page, unsigned long npages)
+{
+	struct page *head;
+	long page_offset, unpinned, hugetlb_npages;
+
+	if (!page || !PageHuge(page))
+		return -EINVAL;
+
+	if (!npages)
+		return 0;
+
+	head = compound_head(page);
+	hugetlb_npages = 1UL << compound_order(head);
+	/* the offset of this subpage in the hugetlb page */
+	page_offset = page_to_pfn(page) & (hugetlb_npages - 1);
+	/* unpinned > 0, because page_offset is always less than hugetlb_npages */
+	unpinned = min_t(long, npages, (hugetlb_npages - page_offset));
+
+	if (__unpin_devmap_managed_user_hugetlb_page(page, unpinned))
+		return unpinned;
+
+	hpage_pincount_sub(head, unpinned);
+
+	if (page_ref_sub_and_test(head, unpinned))
+		__put_page(head);
+	mod_node_page_state(page_pgdat(head), NR_FOLL_PIN_RELEASED, unpinned);
+
+	return unpinned;
+}
+EXPORT_SYMBOL(unpin_user_hugetlb_page);
+
+/*
+ * @page: the first subpage to unpin in a hugetlb page
+ * @npages: number of pages to unpin
+ * @make_dirty: whether to mark the pages dirty
+ *
+ * Nearly the same as unpin_user_pages_dirty_lock().
+ * If npages is 0, returns 0.
+ * If npages is > 0, returns the number of
+ * pages unpinned, which may be less than npages.
+ */
+int unpin_user_hugetlb_pages_dirty_lock(struct page *page, unsigned long npages,
+					bool make_dirty)
+{
+	struct page *head;
+
+	if (!page || !PageHuge(page))
+		return -EINVAL;
+
+	if (!npages)
+		return 0;
+
+	head = compound_head(page);
+	if (make_dirty && !PageDirty(head))
+		set_page_dirty_lock(head);
+
+	return unpin_user_hugetlb_page(page, npages);
+}
+EXPORT_SYMBOL(unpin_user_hugetlb_pages_dirty_lock);
+
 /**
  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
  * @pages:  array of pages to be maybe marked dirty, and definitely released.
-- 
2.23.0