From mboxrd@z Thu Jan 1 00:00:00 1970
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Yi Sun,
	Jean-Philippe Brucker, Jonathan Cameron, Tian Kevin, Lu Baolu
Cc: Alex Williamson, Cornelia Huck, Kirti Wankhede
Subject: [PATCH v3 03/12] iommu: Add iommu_merge_page interface
Date: Tue, 13 Apr 2021 16:54:48 +0800
Message-ID: <20210413085457.25400-4-zhukeqian1@huawei.com>
X-Mailer: git-send-email 2.8.4.windows.1
In-Reply-To: <20210413085457.25400-1-zhukeqian1@huawei.com>
References: <20210413085457.25400-1-zhukeqian1@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

If block (large page) mappings are split when dirty log tracking starts,
we need to recover them when tracking stops, for better DMA performance.

This adds a new interface named iommu_merge_page to the IOMMU base
layer. A specific IOMMU driver can invoke it when stopping dirty log
tracking; if it does, the driver also needs to implement the merge_page
iommu op. We flush all IOTLBs once after the whole procedure completes
to ease the pressure on the IOMMU, since we will generally handle a huge
range of mappings here.

Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
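A rough sketch of the intended driver-side wiring, for illustration only
(not part of this patch): the my_iommu_* names and my_pgtable_install_block()
are hypothetical; only iommu_split_block(), iommu_merge_page() and the
merge_page/switch_dirty_log ops come from this series.

#include <linux/iommu.h>

/* merge_page op: install a single block descriptor covering
 * [iova, iova + size). The core's __iommu_merge_page() calls this once
 * per chunk chosen by iommu_pgsize(). */
static int my_iommu_merge_page(struct iommu_domain *domain,
			       unsigned long iova, phys_addr_t phys,
			       size_t size, int prot)
{
	/* Hypothetical page-table helper of the driver. */
	return my_pgtable_install_block(domain, iova, phys, size, prot);
}

/* switch_dirty_log op: split block mappings when tracking starts, merge
 * the pages back when it stops. iommu_merge_page() flushes the IOTLB
 * once at the end, so no per-chunk invalidation is needed here. */
static int my_iommu_switch_dirty_log(struct iommu_domain *domain,
				     bool enable, unsigned long iova,
				     size_t size, int prot)
{
	if (enable)
		return iommu_split_block(domain, iova, size);

	return iommu_merge_page(domain, iova, size, prot);
}

static const struct iommu_ops my_iommu_ops = {
	/* ... map, unmap, iova_to_phys, attach_dev, ... */
	.merge_page		= my_iommu_merge_page,
	.switch_dirty_log	= my_iommu_switch_dirty_log,
};

The single iommu_flush_iotlb_all() at the end of iommu_merge_page() is
the batching the commit message refers to: one invalidation for the
whole range instead of a TLBI storm when a large range is merged.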
 drivers/iommu/iommu.c | 75 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h | 12 +++++++
 2 files changed, 87 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index bb413a927870..8f0d71bafb3a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2762,6 +2762,81 @@ int iommu_split_block(struct iommu_domain *domain, unsigned long iova,
 }
 EXPORT_SYMBOL_GPL(iommu_split_block);
 
+static int __iommu_merge_page(struct iommu_domain *domain,
+			      unsigned long iova, phys_addr_t paddr,
+			      size_t size, int prot)
+{
+	const struct iommu_ops *ops = domain->ops;
+	unsigned int min_pagesz;
+	size_t pgsize;
+	int ret = 0;
+
+	if (unlikely(!ops || !ops->merge_page))
+		return -ENODEV;
+
+	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
+	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
+		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
+		       iova, &paddr, size, min_pagesz);
+		return -EINVAL;
+	}
+
+	while (size) {
+		pgsize = iommu_pgsize(domain, iova | paddr, size);
+
+		ret = ops->merge_page(domain, iova, paddr, pgsize, prot);
+		if (ret)
+			break;
+
+		pr_debug("merge handled: iova 0x%lx pa %pa size 0x%zx\n",
+			 iova, &paddr, pgsize);
+
+		iova += pgsize;
+		paddr += pgsize;
+		size -= pgsize;
+	}
+
+	return ret;
+}
+
+int iommu_merge_page(struct iommu_domain *domain, unsigned long iova,
+		     size_t size, int prot)
+{
+	phys_addr_t phys;
+	dma_addr_t p, i;
+	size_t cont_size;
+	bool flush = false;
+	int ret = 0;
+
+	while (size) {
+		flush = true;
+
+		phys = iommu_iova_to_phys(domain, iova);
+		cont_size = PAGE_SIZE;
+		p = phys + cont_size;
+		i = iova + cont_size;
+
+		while (cont_size < size && p == iommu_iova_to_phys(domain, i)) {
+			p += PAGE_SIZE;
+			i += PAGE_SIZE;
+			cont_size += PAGE_SIZE;
+		}
+
+		ret = __iommu_merge_page(domain, iova, phys, cont_size, prot);
+		if (ret)
+			break;
+
+		iova += cont_size;
+		size -= cont_size;
+	}
+
+	if (flush)
+		iommu_flush_iotlb_all(domain);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_merge_page);
+
 int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 			   unsigned long iova, size_t size, int prot)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index c6c90ac069e3..fea3ecabff3d 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -209,6 +209,7 @@ struct iommu_iotlb_gather {
  * @domain_get_attr: Query domain attributes
  * @domain_set_attr: Change domain attributes
  * @split_block: Split block mapping into page mapping
+ * @merge_page: Merge page mapping into block mapping
  * @switch_dirty_log: Perform actions to start|stop dirty log tracking
  * @sync_dirty_log: Sync dirty log from IOMMU into a dirty bitmap
  * @clear_dirty_log: Clear dirty log of IOMMU by a mask bitmap
@@ -270,6 +271,8 @@ struct iommu_ops {
 	/* Track dirty log */
 	int (*split_block)(struct iommu_domain *domain, unsigned long iova,
 			   size_t size);
+	int (*merge_page)(struct iommu_domain *domain, unsigned long iova,
+			  phys_addr_t phys, size_t size, int prot);
 	int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
 				unsigned long iova, size_t size, int prot);
 	int (*sync_dirty_log)(struct iommu_domain *domain,
@@ -534,6 +537,8 @@ extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
 				 void *data);
 extern int iommu_split_block(struct iommu_domain *domain, unsigned long iova,
 			     size_t size);
+extern int iommu_merge_page(struct iommu_domain *domain, unsigned long iova,
+			    size_t size, int prot);
 extern int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 				  unsigned long iova, size_t size, int prot);
 extern int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova,
@@ -940,6 +945,13 @@ static inline int iommu_split_block(struct iommu_domain *domain,
 	return -EINVAL;
 }
 
+static inline int iommu_merge_page(struct iommu_domain *domain,
+				   unsigned long iova, size_t size,
+				   int prot)
+{
+	return -EINVAL;
+}
+
 static inline int iommu_switch_dirty_log(struct iommu_domain *domain,
 					 bool enable, unsigned long iova,
 					 size_t size, int prot)
-- 
2.19.1