From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: "Kevin Tian" <kevin.tian@intel.com>,
"Stefano Stabellini" <sstabellini@kernel.org>,
"Julien Grall" <julien@xen.org>,
"Jun Nakajima" <jun.nakajima@intel.com>, "Wei Liu" <wl@xen.org>,
"Andrew Cooper" <andrew.cooper3@citrix.com>,
"Paul Durrant" <pdurrant@amazon.com>,
"Ian Jackson" <ian.jackson@eu.citrix.com>,
"George Dunlap" <george.dunlap@citrix.com>,
"Jan Beulich" <jbeulich@suse.com>,
"Roger Pau Monné" <roger.pau@citrix.com>
Subject: [PATCH v4 08/14] remove remaining uses of iommu_legacy_map/unmap
Date: Tue, 4 Aug 2020 14:42:03 +0100
Message-ID: <20200804134209.8717-9-paul@xen.org>
In-Reply-To: <20200804134209.8717-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The 'legacy' functions do implicit flushing, so amend the callers to do
the appropriate flushing explicitly.
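
To illustrate, a typical call site now follows this pattern (a minimal
sketch using the signatures from this series; 'd', 'dfn' and 'mfn' stand
in for whatever the caller already has to hand):

    unsigned int flush_flags = 0;
    int rc;

    /* The map/unmap accumulates the required flushing in flush_flags... */
    rc = iommu_map(d, dfn, mfn, PAGE_ORDER_4K, 1,
                   IOMMUF_readable | IOMMUF_writable, &flush_flags);

    /* ...and the flush itself is now the caller's responsibility. */
    if ( !rc )
        rc = iommu_iotlb_flush(d, dfn, PAGE_ORDER_4K, 1, flush_flags);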

Unfortunately, because of the structure of the P2M code, we cannot remove
the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
facilitates. It is now checked directly in iommu_iotlb_flush(). Also, it
is now declared as bool (rather than bool_t) and setting/clearing it is
no longer pointlessly gated on is_iommu_enabled() returning true.
(Arguably it is also pointless to gate the call to iommu_iotlb_flush()
on that condition - since it is a no-op in that case - but the if clause
allows the scope of a stack variable to be restricted.)
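
For reference, the batching pattern that the flag enables looks roughly
like this (a sketch based on the xenmem_add_to_physmap() change below;
the flushed range and the flush flags are illustrative, not prescriptive):

    this_cpu(iommu_dont_flush_iotlb) = true;

    /* ... a batch of P2M updates; per-update IOTLB flushes are elided ... */

    this_cpu(iommu_dont_flush_iotlb) = false;

    /* A single explicit flush then covers the whole batch. */
    rc = iommu_iotlb_flush(d, dfn, PAGE_ORDER_4K, page_count,
                           IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);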

NOTE: The code in memory_add() now fails if the number of pages passed to
      a single call overflows an unsigned int. I don't believe this will
      ever happen in practice.
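
For illustration, the overflow is caught arithmetically (this is just the
logic from the x86_64/mm.c hunk below):

    unsigned int n = epfn - spfn; /* truncates if the range has >= 2^32 pages */

    ret = -EOVERFLOW;
    if ( spfn + n != epfn )       /* truncation occurred => fail the call */
        goto destroy_m2p;
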
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v3:
 - Same as v2; elected to implement batch flushing in the grant table code
   as a subsequent patch

v2:
 - Shorten the diff (mainly because of a prior patch introducing automatic
   flush-on-fail into iommu_map() and iommu_unmap())
---
xen/arch/x86/mm.c | 21 +++++++++++++++-----
xen/arch/x86/mm/p2m-ept.c | 20 +++++++++++--------
xen/arch/x86/mm/p2m-pt.c | 15 +++++++++++----
xen/arch/x86/mm/p2m.c | 26 ++++++++++++++++++-------
xen/arch/x86/x86_64/mm.c | 27 +++++++++++++-------------
xen/common/grant_table.c | 34 ++++++++++++++++++++++++---------
xen/common/memory.c | 5 +++--
xen/drivers/passthrough/iommu.c | 25 +-----------------------
xen/include/xen/iommu.h | 21 +++++---------------
9 files changed, 106 insertions(+), 88 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 638f6bf580..062af1f684 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2446,10 +2446,16 @@ static int cleanup_page_mappings(struct page_info *page)
if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
{
- int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
+ unsigned int flush_flags = 0;
+ int err;
+ err = iommu_unmap(d, _dfn(mfn), PAGE_ORDER_4K, 1, &flush_flags);
if ( !rc )
- rc = rc2;
+ rc = err;
+
+ err = iommu_iotlb_flush(d, _dfn(mfn), PAGE_ORDER_4K, 1, flush_flags);
+ if ( !rc )
+ rc = err;
}
if ( likely(!is_special_page(page)) )
@@ -2971,12 +2977,17 @@ static int _get_page_type(struct page_info *page, unsigned long type,
if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
{
mfn_t mfn = page_to_mfn(page);
+ dfn_t dfn = _dfn(mfn_x(mfn));
+ unsigned int flush_flags = 0;
if ( (x & PGT_type_mask) == PGT_writable_page )
- rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
+ rc = iommu_unmap(d, dfn, PAGE_ORDER_4K, 1, &flush_flags);
else
- rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable);
+ rc = iommu_map(d, dfn, mfn, PAGE_ORDER_4K, 1,
+ IOMMUF_readable | IOMMUF_writable, &flush_flags);
+
+ if ( !rc )
+ rc = iommu_iotlb_flush(d, dfn, PAGE_ORDER_4K, 1, flush_flags);
if ( unlikely(rc) )
{
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b2ac912cde..e38b0bf95c 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,15 +842,19 @@ out:
if ( rc == 0 && p2m_is_hostp2m(p2m) &&
need_modify_vtd_table )
{
- if ( iommu_use_hap_pt(d) )
- rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order), 1,
- (iommu_flags ? IOMMU_FLUSHF_added : 0) |
- (vtd_pte_present ? IOMMU_FLUSHF_modified
- : 0));
- else if ( need_iommu_pt_sync(d) )
+ unsigned int flush_flags = 0;
+
+ if ( need_iommu_pt_sync(d) )
rc = iommu_flags ?
- iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
- iommu_legacy_unmap(d, _dfn(gfn), order);
+ iommu_map(d, _dfn(gfn), mfn, order, 1, iommu_flags, &flush_flags) :
+ iommu_unmap(d, _dfn(gfn), order, 1, &flush_flags);
+ else if ( iommu_use_hap_pt(d) )
+ flush_flags =
+ (iommu_flags ? IOMMU_FLUSHF_added : 0) |
+ (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
+
+ if ( !rc )
+ rc = iommu_iotlb_flush(d, _dfn(gfn), order, 1, flush_flags);
}
unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index badb26bc34..3c0901b56c 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -678,10 +678,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
if ( need_iommu_pt_sync(p2m->domain) &&
(iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
- rc = iommu_pte_flags
- ? iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
- iommu_pte_flags)
- : iommu_legacy_unmap(d, _dfn(gfn), page_order);
+ {
+ unsigned int flush_flags = 0;
+
+ rc = iommu_pte_flags ?
+ iommu_map(d, _dfn(gfn), mfn, page_order, 1, iommu_pte_flags,
+ &flush_flags) :
+ iommu_unmap(d, _dfn(gfn), page_order, 1, &flush_flags);
+
+ if ( !rc )
+ rc = iommu_iotlb_flush(d, _dfn(gfn), page_order, 1, flush_flags);
+ }
/*
* Free old intermediate tables if necessary. This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db7bde0230..9f8b9bc5fd 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1350,10 +1350,15 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
if ( !paging_mode_translate(p2m->domain) )
{
- if ( !is_iommu_enabled(d) )
- return 0;
- return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable);
+ unsigned int flush_flags = 0;
+
+ ret = iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K, 1,
+ IOMMUF_readable | IOMMUF_writable, &flush_flags);
+ if ( !ret )
+ ret = iommu_iotlb_flush(d, _dfn(gfn_l), PAGE_ORDER_4K, 1,
+ flush_flags);
+
+ return ret;
}
gfn_lock(p2m, gfn, 0);
@@ -1441,9 +1446,16 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
if ( !paging_mode_translate(d) )
{
- if ( !is_iommu_enabled(d) )
- return 0;
- return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
+ unsigned int flush_flags = 0;
+ int err;
+
+ ret = iommu_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K, 1, &flush_flags);
+
+ err = iommu_iotlb_flush(d, _dfn(gfn_l), PAGE_ORDER_4K, 1, flush_flags);
+ if ( !ret )
+ ret = err;
+
+ return ret;
}
gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 102079a801..02684bcf9d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1413,21 +1413,22 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
!iommu_use_hap_pt(hardware_domain) &&
!need_iommu_pt_sync(hardware_domain) )
{
- for ( i = spfn; i < epfn; i++ )
- if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
- PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable) )
- break;
- if ( i != epfn )
- {
- while (i-- > old_max)
- /* If statement to satisfy __must_check. */
- if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
- PAGE_ORDER_4K) )
- continue;
+ unsigned int flush_flags = 0;
+ unsigned int n = epfn - spfn;
+ int rc;
+ ret = -EOVERFLOW;
+ if ( spfn + n != epfn )
+ goto destroy_m2p;
+
+ rc = iommu_map(hardware_domain, _dfn(spfn), _mfn(spfn),
+ PAGE_ORDER_4K, n, IOMMUF_readable | IOMMUF_writable,
+ &flush_flags);
+ if ( !rc )
+ rc = iommu_iotlb_flush(hardware_domain, _dfn(spfn), PAGE_ORDER_4K, n,
+ flush_flags);
+ if ( rc )
goto destroy_m2p;
- }
}
/* We can't revert any more */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9f0cae52c0..d6526bca12 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,11 +1225,23 @@ map_grant_ref(
kind = IOMMUF_readable;
else
kind = 0;
- if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
+ if ( kind )
{
- double_gt_unlock(lgt, rgt);
- rc = GNTST_general_error;
- goto undo_out;
+ dfn_t dfn = _dfn(mfn_x(mfn));
+ unsigned int flush_flags = 0;
+ int err;
+
+ err = iommu_map(ld, dfn, mfn, 0, 1, kind, &flush_flags);
+ if ( !err )
+ err = iommu_iotlb_flush(ld, dfn, 0, 1, flush_flags);
+ if ( err )
+ rc = GNTST_general_error;
+
+ if ( rc != GNTST_okay )
+ {
+ double_gt_unlock(lgt, rgt);
+ goto undo_out;
+ }
}
}
@@ -1473,21 +1485,25 @@ unmap_common(
if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
{
unsigned int kind;
+ dfn_t dfn = _dfn(mfn_x(op->mfn));
+ unsigned int flush_flags = 0;
int err = 0;
double_gt_lock(lgt, rgt);
kind = mapkind(lgt, rd, op->mfn);
if ( !kind )
- err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
+ err = iommu_unmap(ld, dfn, 0, 1, &flush_flags);
else if ( !(kind & MAPKIND_WRITE) )
- err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
- IOMMUF_readable);
-
- double_gt_unlock(lgt, rgt);
+ err = iommu_map(ld, dfn, op->mfn, 0, 1, IOMMUF_readable,
+ &flush_flags);
+ if ( !err )
+ err = iommu_iotlb_flush(ld, dfn, 0, 1, flush_flags);
if ( err )
rc = GNTST_general_error;
+
+ double_gt_unlock(lgt, rgt);
}
/* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 8de334ff10..2891bef57b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -824,8 +824,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
xatp->gpfn += start;
xatp->size -= start;
- if ( is_iommu_enabled(d) )
- this_cpu(iommu_dont_flush_iotlb) = 1;
+ this_cpu(iommu_dont_flush_iotlb) = true;
while ( xatp->size > done )
{
@@ -845,6 +844,8 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
}
}
+ this_cpu(iommu_dont_flush_iotlb) = false;
+
if ( is_iommu_enabled(d) )
{
int ret;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 568a4a5661..ab44c332bb 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -281,18 +281,6 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
return rc;
}
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
- unsigned int page_order, unsigned int flags)
-{
- unsigned int flush_flags = 0;
- int rc = iommu_map(d, dfn, mfn, page_order, 1, flags, &flush_flags);
-
- if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
- rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
-
- return rc;
-}
-
int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
unsigned int page_count, unsigned int *flush_flags)
{
@@ -335,17 +323,6 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
return rc;
}
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
-{
- unsigned int flush_flags = 0;
- int rc = iommu_unmap(d, dfn, page_order, 1, &flush_flags);
-
- if ( !this_cpu(iommu_dont_flush_iotlb) && ! rc )
- rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
-
- return rc;
-}
-
int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
unsigned int *flags)
{
@@ -364,7 +341,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_order,
int rc;
if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
- !page_count || !flush_flags )
+ !page_count || !flush_flags || this_cpu(iommu_dont_flush_iotlb) )
return 0;
if ( dfn_eq(dfn, INVALID_DFN) )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index d9c2e764aa..b7e5d3da09 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -151,16 +151,8 @@ int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
unsigned int page_order, unsigned int page_count,
unsigned int *flush_flags);
-
-int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
- unsigned int page_order,
- unsigned int flags);
-int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
- unsigned int page_order);
-
int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
unsigned int *flags);
-
int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
unsigned int page_order,
unsigned int page_count,
@@ -370,15 +362,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
/*
* The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
- * avoid unecessary iotlb_flush in the low level IOMMU code.
- *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
- * this operation can be really expensive. This flag will be set by the
- * caller to notify the low level IOMMU code to avoid the iotlb flushes.
- * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
- * the caller.
+ * avoid unnecessary IOMMU flushing while updating the P2M.
+ * Setting the value to true will cause iommu_iotlb_flush() to return without
+ * actually performing a flush. A batch flush must therefore be done by the
+ * calling code after setting the value back to false.
*/
-DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb);
extern struct spinlock iommu_pt_cleanup_lock;
extern struct page_list_head iommu_pt_cleanup_list;
--
2.20.1