From mboxrd@z Thu Jan  1 00:00:00 1970
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 07/14] iommu: make map, unmap and flush all take both an order and a count
Date: Tue, 4 Aug 2020 14:42:02 +0100
Message-Id: <20200804134209.8717-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200804134209.8717-1-paul@xen.org>
References: <20200804134209.8717-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Kevin Tian, Stefano Stabellini, Julien Grall, Jun Nakajima, Wei Liu,
 Andrew Cooper, Paul Durrant, Ian Jackson, George Dunlap, Jan Beulich,
 Volodymyr Babchuk, Roger Pau Monné

From: Paul Durrant

At the moment iommu_map() and iommu_unmap() take a page order but not a
count, whereas iommu_iotlb_flush() takes a count but not a page order.
This patch simply makes them consistent with each other.

Signed-off-by: Paul Durrant
---
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Ian Jackson
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Volodymyr Babchuk

v2:
 - New in v2
---
 xen/arch/arm/p2m.c                       |  2 +-
 xen/arch/x86/mm/p2m-ept.c                |  2 +-
 xen/common/memory.c                      |  4 +--
 xen/drivers/passthrough/amd/iommu.h      |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c  |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/iommu.c          | 31 ++++++++++++------------
 xen/drivers/passthrough/vtd/iommu.c      |  4 +--
 xen/drivers/passthrough/x86/iommu.c      |  2 +-
 xen/include/xen/iommu.h                  |  9 ++++---
 11 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ce59f2b503..71f4a78425 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1061,7 +1061,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
             flush_flags |= IOMMU_FLUSHF_added;
 
         rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
-                               1UL << page_order, flush_flags);
+                               page_order, 1, flush_flags);
     }
     else
         rc = 0;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b8154a7ecc..b2ac912cde 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -843,7 +843,7 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order),
+            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order), 1,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
                                                     : 0));
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..8de334ff10 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -851,12 +851,12 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
-        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
+        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), 0, done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
-        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done,
+        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), 0, done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index e2d174f3b4..f1f0415469 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -231,7 +231,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
 int __must_check amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags);
 int __must_check amd_iommu_flush_iotlb_all(struct domain *d);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 54b991294a..0cb948d114 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -351,7 +351,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
     return 0;
 }
 
-static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
+static unsigned long flush_count(unsigned long dfn, unsigned long page_count,
                                  unsigned int order)
 {
     unsigned long start = dfn >> order;
@@ -362,7 +362,7 @@ static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
 }
 
 int amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                unsigned int page_count,
+                                unsigned long page_count,
                                 unsigned int flush_flags)
 {
     unsigned long dfn_l = dfn_x(dfn);
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index b2a65dfaaf..346165c3fa 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -945,7 +945,7 @@ static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                          unsigned int page_count,
+                                          unsigned long page_count,
                                           unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..06f9bda47d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2534,7 +2534,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e2c0193a09..568a4a5661 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -235,8 +235,8 @@ void iommu_domain_destroy(struct domain *d)
 }
 
 int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-              unsigned int page_order, unsigned int flags,
-              unsigned int *flush_flags)
+              unsigned int page_order, unsigned int page_count,
+              unsigned int flags, unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
@@ -248,7 +248,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order)));
     ASSERT(IS_ALIGNED(mfn_x(mfn), (1ul << page_order)));
 
-    for ( i = 0; i < (1ul << page_order); i++ )
+    for ( i = 0; i < ((unsigned long)page_count << page_order); i++ )
     {
         rc = iommu_call(hd->platform_ops, map_page, d, dfn_add(dfn, i),
                         mfn_add(mfn, i), flags, flush_flags);
@@ -285,16 +285,16 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                      unsigned int page_order, unsigned int flags)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
+    int rc = iommu_map(d, dfn, mfn, page_order, 1, flags, &flush_flags);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
 
     return rc;
 }
 
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
-                unsigned int *flush_flags)
+                unsigned int page_count, unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
@@ -305,7 +305,7 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
 
     ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order)));
 
-    for ( i = 0; i < (1ul << page_order); i++ )
+    for ( i = 0; i < ((unsigned long)page_count << page_order); i++ )
     {
         int err = iommu_call(hd->platform_ops, unmap_page, d, dfn_add(dfn, i),
                              flush_flags);
@@ -338,10 +338,10 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
 int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
+    int rc = iommu_unmap(d, dfn, page_order, 1, &flush_flags);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
 
     return rc;
 }
@@ -357,8 +357,8 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
-                      unsigned int flush_flags)
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_order,
+                      unsigned int page_count, unsigned int flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -370,14 +370,15 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
     if ( dfn_eq(dfn, INVALID_DFN) )
         return -EINVAL;
 
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count,
-                    flush_flags);
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn,
+                    (unsigned long)page_count << page_order, flush_flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u flags %x\n",
-                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page order %u, page count %u flags %x\n",
+                   d->domain_id, rc, dfn_x(dfn), page_order, page_count,
+                   flush_flags);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 607e8b5e65..68cf0e535a 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -584,7 +584,7 @@ static int __must_check iommu_flush_all(void)
 
 static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
                                           bool_t dma_old_pte_present,
-                                          unsigned int page_count)
+                                          unsigned long page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -632,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
 }
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                                unsigned int page_count,
+                                                unsigned long page_count,
                                                 unsigned int flush_flags)
 {
     ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index aea07e47c4..dba6c9d642 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -244,7 +244,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         else if ( paging_mode_translate(d) )
             rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
         else
-            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
+            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K, 1,
                            IOMMUF_readable | IOMMUF_writable, &flush_flags);
 
         if ( rc )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 1831dc66b0..d9c2e764aa 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -146,10 +146,10 @@ enum
 #define IOMMU_FLUSHF_modified (1u << _IOMMU_FLUSHF_modified)
 
 int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                           unsigned int page_order, unsigned int flags,
-                           unsigned int *flush_flags);
+                           unsigned int page_order, unsigned int page_count,
+                           unsigned int flags, unsigned int *flush_flags);
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
-                             unsigned int page_order,
+                             unsigned int page_order, unsigned int page_count,
                              unsigned int *flush_flags);
 
 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
@@ -162,6 +162,7 @@ int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
 
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                   unsigned int page_order,
                                    unsigned int page_count,
                                    unsigned int flush_flags);
 int __must_check iommu_iotlb_flush_all(struct domain *d,
@@ -281,7 +282,7 @@ struct iommu_ops {
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
-                                    unsigned int page_count,
+                                    unsigned long page_count,
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-- 
2.20.1
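
[Editor's illustration, not part of the patch: a minimal caller-side sketch of
how the reworked prototypes described in the commit message compose. The helper
map_guest_range() and its nr_pages parameter are hypothetical; the
iommu_map()/iommu_iotlb_flush() signatures, IOMMUF_* flags and PAGE_ORDER_4K
are the ones visible in the diff above.]

/* Hypothetical caller: map nr_pages contiguous 4k pages, then flush them. */
static int __must_check map_guest_range(struct domain *d, dfn_t dfn, mfn_t mfn,
                                        unsigned int nr_pages)
{
    unsigned int flush_flags = 0;
    int rc;

    /* page_order PAGE_ORDER_4K (i.e. 0), page_count nr_pages. */
    rc = iommu_map(d, dfn, mfn, PAGE_ORDER_4K, nr_pages,
                   IOMMUF_readable | IOMMUF_writable, &flush_flags);

    /* Flush the same (order, count) range that was just mapped. */
    if ( !rc )
        rc = iommu_iotlb_flush(d, dfn, PAGE_ORDER_4K, nr_pages, flush_flags);

    return rc;
}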