From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: "Kevin Tian" <kevin.tian@intel.com>,
"Stefano Stabellini" <sstabellini@kernel.org>,
"Wei Liu" <wei.liu2@citrix.com>,
"Jun Nakajima" <jun.nakajima@intel.com>,
"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>,
"George Dunlap" <george.dunlap@eu.citrix.com>,
"Andrew Cooper" <andrew.cooper3@citrix.com>,
"Ian Jackson" <ian.jackson@eu.citrix.com>,
"Tim Deegan" <tim@xen.org>, "Julien Grall" <julien.grall@arm.com>,
"Paul Durrant" <paul.durrant@citrix.com>,
"Jan Beulich A" <jbeulich@suse.com>,
"Roger Pau Monné" <roger.pau@citrix.com>
Subject: [PATCH v3 2/4] iommu: rename wrapper functions
Date: Wed, 5 Dec 2018 11:29:22 +0000
Message-ID: <20181205112924.36470-3-paul.durrant@citrix.com>
In-Reply-To: <20181205112924.36470-1-paul.durrant@citrix.com>
A subsequent patch will add semantically different versions of
iommu_map/unmap() so, in advance of that change, this patch renames the
existing functions to iommu_legacy_map/unmap() and modifies all call-sites.
It also adjusts a comment that refers to iommu_map_page(), which was
renamed by a previous patch.
This patch is purely cosmetic. No functional change.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
v2:
- New in v2.
v3:
- Leave iommu_iotlb_flush[_all] alone.
- Make patch purely cosmetic.
- Fix comment in xen/iommu.h.
---
xen/arch/x86/mm.c | 11 ++++++-----
xen/arch/x86/mm/p2m-ept.c | 4 ++--
xen/arch/x86/mm/p2m-pt.c | 5 +++--
xen/arch/x86/mm/p2m.c | 12 ++++++------
xen/arch/x86/x86_64/mm.c | 9 +++++----
xen/common/grant_table.c | 14 +++++++-------
xen/common/memory.c | 4 ++--
xen/drivers/passthrough/iommu.c | 6 +++---
xen/drivers/passthrough/x86/iommu.c | 4 ++--
xen/include/xen/iommu.h | 16 +++++++++++-----
10 files changed, 47 insertions(+), 38 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 28a003063e..746f0b0258 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2801,12 +2801,13 @@ static int _get_page_type(struct page_info *page, unsigned long type,
mfn_t mfn = page_to_mfn(page);
if ( (x & PGT_type_mask) == PGT_writable_page )
- iommu_ret = iommu_unmap(d, _dfn(mfn_x(mfn)),
- PAGE_ORDER_4K);
+ iommu_ret = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
+ PAGE_ORDER_4K);
else if ( type == PGT_writable_page )
- iommu_ret = iommu_map(d, _dfn(mfn_x(mfn)), mfn,
- PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable);
+ iommu_ret = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
+ PAGE_ORDER_4K,
+ IOMMUF_readable |
+ IOMMUF_writable);
}
}
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 6e4e375bad..64a49c07b7 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -882,8 +882,8 @@ out:
rc = iommu_pte_flush(d, gfn, &ept_entry->epte, order, vtd_pte_present);
else if ( need_iommu_pt_sync(d) )
rc = iommu_flags ?
- iommu_map(d, _dfn(gfn), mfn, order, iommu_flags) :
- iommu_unmap(d, _dfn(gfn), order);
+ iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
+ iommu_legacy_unmap(d, _dfn(gfn), order);
}
unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 17a6b61f12..69ffb08179 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -686,8 +686,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
if ( need_iommu_pt_sync(p2m->domain) )
rc = iommu_pte_flags ?
- iommu_map(d, _dfn(gfn), mfn, page_order, iommu_pte_flags) :
- iommu_unmap(d, _dfn(gfn), page_order);
+ iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
+ iommu_pte_flags) :
+ iommu_legacy_unmap(d, _dfn(gfn), page_order);
else if ( iommu_use_hap_pt(d) && iommu_old_flags )
amd_iommu_flush_pages(p2m->domain, gfn, page_order);
}
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fea4497910..ed76e96d33 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -733,7 +733,7 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
if ( !paging_mode_translate(p2m->domain) )
return need_iommu_pt_sync(p2m->domain) ?
- iommu_unmap(p2m->domain, _dfn(mfn), page_order) : 0;
+ iommu_legacy_unmap(p2m->domain, _dfn(mfn), page_order) : 0;
ASSERT(gfn_locked_by_me(p2m, gfn));
P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_l, mfn);
@@ -780,8 +780,8 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
if ( !paging_mode_translate(d) )
return (need_iommu_pt_sync(d) && t == p2m_ram_rw) ?
- iommu_map(d, _dfn(mfn_x(mfn)), mfn, page_order,
- IOMMUF_readable | IOMMUF_writable) : 0;
+ iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, page_order,
+ IOMMUF_readable | IOMMUF_writable) : 0;
/* foreign pages are added thru p2m_add_foreign */
if ( p2m_is_foreign(t) )
@@ -1151,8 +1151,8 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
{
if ( !need_iommu_pt_sync(d) )
return 0;
- return iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable);
+ return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
+ IOMMUF_readable | IOMMUF_writable);
}
gfn_lock(p2m, gfn, 0);
@@ -1242,7 +1242,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
{
if ( !need_iommu_pt_sync(d) )
return 0;
- return iommu_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
+ return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
}
gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 11977f2671..8056679de0 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1436,15 +1436,16 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
!need_iommu_pt_sync(hardware_domain) )
{
for ( i = spfn; i < epfn; i++ )
- if ( iommu_map(hardware_domain, _dfn(i), _mfn(i),
- PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable) )
+ if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
+ PAGE_ORDER_4K,
+ IOMMUF_readable | IOMMUF_writable) )
break;
if ( i != epfn )
{
while (i-- > old_max)
/* If statement to satisfy __must_check. */
- if ( iommu_unmap(hardware_domain, _dfn(i), PAGE_ORDER_4K) )
+ if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
+ PAGE_ORDER_4K) )
continue;
goto destroy_m2p;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index b67ae9e3f5..fd099a8f25 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1134,14 +1134,14 @@ map_grant_ref(
!(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) )
{
if ( !(kind & MAPKIND_WRITE) )
- err = iommu_map(ld, _dfn(mfn_x(mfn)), mfn, 0,
- IOMMUF_readable | IOMMUF_writable);
+ err = iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0,
+ IOMMUF_readable | IOMMUF_writable);
}
else if ( act_pin && !old_pin )
{
if ( !kind )
- err = iommu_map(ld, _dfn(mfn_x(mfn)), mfn, 0,
- IOMMUF_readable);
+ err = iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0,
+ IOMMUF_readable);
}
if ( err )
{
@@ -1389,10 +1389,10 @@ unmap_common(
kind = mapkind(lgt, rd, op->mfn);
if ( !kind )
- err = iommu_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
+ err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
else if ( !(kind & MAPKIND_WRITE) )
- err = iommu_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
- IOMMUF_readable);
+ err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
+ IOMMUF_readable);
double_gt_unlock(lgt, rgt);
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 175bd62c11..7b668077d8 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -865,11 +865,11 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
this_cpu(iommu_dont_flush_iotlb) = 0;
- ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done);
+ ret = iommu_flush(d, _dfn(xatp->idx - done), done);
if ( unlikely(ret) && rc >= 0 )
rc = ret;
- ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done);
+ ret = iommu_flush(d, _dfn(xatp->gpfn - done), done);
if ( unlikely(ret) && rc >= 0 )
rc = ret;
}
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index c1cce08551..105995a343 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -304,8 +304,8 @@ void iommu_domain_destroy(struct domain *d)
arch_iommu_domain_destroy(d);
}
-int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
- unsigned int page_order, unsigned int flags)
+int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+ unsigned int page_order, unsigned int flags)
{
const struct domain_iommu *hd = dom_iommu(d);
unsigned long i;
@@ -345,7 +345,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
return rc;
}
-int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
+int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
{
const struct domain_iommu *hd = dom_iommu(d);
unsigned long i;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index c68a72279d..b12289a18f 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -241,8 +241,8 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
if ( paging_mode_translate(d) )
rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
else
- rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
- IOMMUF_readable | IOMMUF_writable);
+ rc = iommu_legacy_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
+ IOMMUF_readable | IOMMUF_writable);
if ( rc )
printk(XENLOG_WARNING " d%d: IOMMU mapping failed: %d\n",
d->domain_id, rc);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 3d78126801..1f875aa328 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -83,15 +83,21 @@ int iommu_construct(struct domain *d);
/* Function used internally, use iommu_domain_destroy */
void iommu_teardown(struct domain *d);
-/* iommu_map_page() takes flags to direct the mapping operation. */
+/*
+ * The following flags are passed to map operations and passed by lookup
+ * operations.
+ */
#define _IOMMUF_readable 0
#define IOMMUF_readable (1u<<_IOMMUF_readable)
#define _IOMMUF_writable 1
#define IOMMUF_writable (1u<<_IOMMUF_writable)
-int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
- unsigned int page_order, unsigned int flags);
-int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
- unsigned int page_order);
+
+int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+ unsigned int page_order,
+ unsigned int flags);
+int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
+ unsigned int page_order);
+
int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
unsigned int *flags);
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel