* [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd
@ 2017-05-18  5:34 Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities Lan Tianyu
                   ` (25 more replies)
  0 siblings, 26 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao


Changes since v1:
       1) Add Xen virtual IOMMU doc docs/misc/viommu.txt.
       2) Move the vIOMMU create/destroy and query-capabilities hypercalls
from dmop to domctl, as suggested by Paul Durrant, because these hypercalls
can be issued from the tool stack and so more VM modes (e.g. PVH or other
modes that don't use Qemu) can benefit.
       3) Add checks of the input MMIO address and length.
       4) Add iommu_type to the vIOMMU hypercall parameters to specify the
vendor vIOMMU device model (e.g. Intel VT-d, AMD or ARM IOMMU; so far only
Intel VT-d is supported).
       5) Add save and restore support for vvtd.


This patchset introduces a vIOMMU framework and adds interrupt remapping
support for a virtual VT-d, following the "Xen virtual IOMMU high level
design doc V3" (https://lists.xenproject.org/archives/html/xen-devel/
2016-11/msg01391.html).

- vIOMMU framework
The new framework provides viommu_ops and helper functions to abstract
vIOMMU operations (e.g. create, destroy, handle irq remapping requests
and so on). Vendors (Intel, ARM, AMD and so on) can implement their own
vIOMMU callbacks, for example as sketched below.
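
As an illustration only, a minimal sketch of what a vendor backend is
expected to provide against this framework (the "vfoo" names are
hypothetical; the real Intel VT-d callbacks are added later in the series):

    #include <xen/viommu.h>
    #include <public/viommu.h>

    static u64 vfoo_query_caps(struct domain *d)
    {
        /* Report which capabilities this device model can emulate. */
        return VIOMMU_CAP_IRQ_REMAPPING;
    }

    static int vfoo_create(struct domain *d, struct viommu *viommu)
    {
        /* Allocate emulation state, register MMIO handlers, ... */
        return 0;
    }

    static int vfoo_destroy(struct viommu *viommu)
    {
        /* Tear down whatever vfoo_create() set up. */
        return 0;
    }

    static struct viommu_ops vfoo_ops = {
        .query_caps = vfoo_query_caps,
        .create     = vfoo_create,
        .destroy    = vfoo_destroy,
    };

    static int __init vfoo_register(void)
    {
        return viommu_register_type(VIOMMU_TYPE_INTEL_VTD, &vfoo_ops);
    }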

- Virtual VT-d
Interrupt remapping is enabled and covers both MSI and IOAPIC interrupts.
Posted interrupt emulation, and running on a host with posted interrupts
enabled, are not yet supported with the virtual VT-d; they will be added
later.

Repo:
https://github.com/lantianyu/Xen/commits/xen_viommu_rfc_v2

Chao Gao (21):
  Tools/libxc: Add viommu operations in libxc
  Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table
    structures
  Tools/libacpi: Add new fields in acpi_config to build DMAR table
  Tools/libacpi: Add a user configurable parameter to control vIOMMU
    attributes
  libxl: create vIOMMU during domain construction
  x86/hvm: Introduce an emulated VTD for HVM
  X86/vvtd: Add MMIO handler for VVTD
  X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD
  X86/vvtd: Process interrupt remapping request
  x86/vvtd: decode interrupt attribute from IRTE
  x86/vioapic: Hook interrupt delivery of vIOAPIC
  X86/vvtd: Enable Queued Invalidation through GCMD
  X86/vvtd: Enable Interrupt Remapping through GCMD
  x86/vpt: Get interrupt vector through a vioapic interface
  passthrough: move some fields of hvm_gmsi_info to a sub-structure
  Tools/libxc: Add a new interface to bind remapping format msi with
    pirq
  x86/vmsi: Hook delivering remapping format msi to guest
  x86/vvtd: Handle interrupt translation faults
  x86/vvtd: Add queued invalidation (QI) support
  x86/vlapic: drop no longer suitable restriction to set x2apic id
  x86/vvtd: save and restore emulated VT-d

Lan Tianyu (5):
  VIOMMU: Add vIOMMU helper functions to create, destroy and query
    capabilities
  DOMCTL: Introduce new DOMCTL commands for vIOMMU support
  VIOMMU: Add irq request callback to deal with irq remapping
  VIOMMU: Add get irq info callback to convert irq remapping request
  Xen/doc: Add Xen virtual IOMMU doc

 docs/man/xl.cfg.pod.5.in               |   34 +-
 docs/misc/viommu.txt                   |  129 ++++
 tools/libacpi/acpi2_0.h                |   45 ++
 tools/libacpi/build.c                  |   58 ++
 tools/libacpi/libacpi.h                |   12 +
 tools/libxc/Makefile                   |    1 +
 tools/libxc/include/xenctrl.h          |   24 +
 tools/libxc/xc_domain.c                |   55 ++
 tools/libxc/xc_viommu.c                |   81 +++
 tools/libxl/libxl_arch.h               |    5 +
 tools/libxl/libxl_arm.c                |    7 +
 tools/libxl/libxl_create.c             |    4 +
 tools/libxl/libxl_dom.c                |   87 +++
 tools/libxl/libxl_types.idl            |   10 +
 tools/libxl/libxl_x86.c                |   24 +
 tools/xl/xl_parse.c                    |   64 ++
 xen/arch/x86/hvm/Makefile              |    1 +
 xen/arch/x86/hvm/irq.c                 |   11 +
 xen/arch/x86/hvm/vioapic.c             |   41 ++
 xen/arch/x86/hvm/vlapic.c              |   18 +-
 xen/arch/x86/hvm/vmsi.c                |   18 +-
 xen/arch/x86/hvm/vpt.c                 |    2 +-
 xen/arch/x86/hvm/vvtd.c                | 1223 ++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                   |    1 +
 xen/common/Kconfig                     |   11 +
 xen/common/Makefile                    |    1 +
 xen/common/domain.c                    |    3 +
 xen/common/domctl.c                    |    3 +
 xen/common/viommu.c                    |  235 ++++++
 xen/drivers/passthrough/io.c           |  194 ++++-
 xen/drivers/passthrough/vtd/iommu.h    |  225 +++++-
 xen/drivers/passthrough/vtd/vtd.h      |    6 +
 xen/include/asm-x86/hvm/vioapic.h      |    1 +
 xen/include/asm-x86/msi.h              |    3 +
 xen/include/asm-x86/viommu.h           |   84 +++
 xen/include/public/arch-x86/hvm/save.h |   24 +-
 xen/include/public/domctl.h            |   47 ++
 xen/include/public/viommu.h            |   49 ++
 xen/include/xen/hvm/irq.h              |   15 +-
 xen/include/xen/sched.h                |    2 +
 xen/include/xen/viommu.h               |  103 +++
 41 files changed, 2864 insertions(+), 97 deletions(-)
 create mode 100644 docs/misc/viommu.txt
 create mode 100644 tools/libxc/xc_viommu.c
 create mode 100644 xen/arch/x86/hvm/vvtd.c
 create mode 100644 xen/common/viommu.c
 create mode 100644 xen/include/asm-x86/viommu.h
 create mode 100644 xen/include/public/viommu.h
 create mode 100644 xen/include/xen/viommu.h

-- 
1.8.3.1



* [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 2/26] DOMCTL: Introduce new DOMCTL commands for vIOMMU support Lan Tianyu
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch introduces an abstract layer for arch vIOMMU implementations
to deal with requests from dom0. Arch vIOMMU code needs to provide callbacks
to perform the create, destroy and query capabilities operations.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
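For illustration, a rough sketch of how these helpers are expected to be
driven (the domctl wiring only arrives in the next patch, so the caller
below is hypothetical; d, base_address and length are assumed to be in
scope):

    u64 caps = viommu_query_caps(d, VIOMMU_TYPE_INTEL_VTD);
    int id;

    if ( !(caps & VIOMMU_CAP_IRQ_REMAPPING) )
        return -ENODEV;

    /* viommu_create() returns the new viommu_id on success. */
    id = viommu_create(d, VIOMMU_TYPE_INTEL_VTD, base_address, length,
                       VIOMMU_CAP_IRQ_REMAPPING);
    if ( id < 0 )
        return id;

    /* ... and on error or domain teardown ... */
    viommu_destroy(d, id);
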
 xen/arch/x86/setup.c        |   1 +
 xen/common/Kconfig          |  11 +++
 xen/common/Makefile         |   1 +
 xen/common/domain.c         |   3 +
 xen/common/viommu.c         | 169 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/public/viommu.h |  49 +++++++++++++
 xen/include/xen/sched.h     |   2 +
 xen/include/xen/viommu.h    |  79 +++++++++++++++++++++
 8 files changed, 315 insertions(+)
 create mode 100644 xen/common/viommu.c
 create mode 100644 xen/include/public/viommu.h
 create mode 100644 xen/include/xen/viommu.h

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index f7b9278..f204d71 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1513,6 +1513,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     early_msi_init();
 
     iommu_setup();    /* setup iommu if available */
+    viommu_setup();
 
     smp_prepare_cpus(max_cpus);
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index dc8e876..90e3741 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -73,6 +73,17 @@ config TMEM
 
 	  If unsure, say Y.
 
+config VIOMMU
+	def_bool y
+	prompt "Xen vIOMMU Support" if EXPERT = "y"
+	depends on X86
+	---help---
+	 Virtual IOMMU provides an interrupt remapping function for guests and
+	 allows a guest to boot with more than 255 vcpus, which requires the
+	 interrupt remapping function.
+
+	  If unsure, say Y.
+
 config XENOPROF
 	def_bool y
 	prompt "Xen Oprofile Support" if EXPERT = "y"
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 26c5a64..f61e579 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -61,6 +61,7 @@ obj-y += vm_event.o
 obj-y += vmap.o
 obj-y += vsprintf.o
 obj-y += wait.o
+obj-$(CONFIG_VIOMMU) += viommu.o
 obj-bin-y += warning.init.o
 obj-$(CONFIG_XENOPROF) += xenoprof.o
 obj-y += xmalloc_tlsf.o
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b22aacc..d1f9b10 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -396,6 +396,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
         spin_unlock(&domlist_update_lock);
     }
 
+    if ( (err = viommu_init_domain(d)) != 0 )
+        goto fail;
+
     return d;
 
  fail:
diff --git a/xen/common/viommu.c b/xen/common/viommu.c
new file mode 100644
index 0000000..eadcecb
--- /dev/null
+++ b/xen/common/viommu.c
@@ -0,0 +1,169 @@
+/*
+ * common/viommu.c
+ * 
+ * Copyright (c) 2017 Intel Corporation
+ * Author: Lan Tianyu <tianyu.lan@intel.com> 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/types.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+
+spinlock_t type_list_lock;
+static struct list_head type_list;
+
+struct viommu_type {
+    u64 type;
+    struct viommu_ops *ops;
+    struct list_head node;
+};
+
+int viommu_init_domain(struct domain *d)
+{
+    d->viommu.nr_viommu = 0;
+    return 0;
+}
+
+struct viommu_type *viommu_get_type(u64 type)
+{
+    struct viommu_type *viommu_type = NULL;
+
+    spin_lock(&type_list_lock);
+    list_for_each_entry( viommu_type, &type_list, node )
+    {
+        if ( viommu_type->type == type )
+        {
+            spin_unlock(&type_list_lock);
+            return viommu_type;
+        }
+    }
+    spin_unlock(&type_list_lock);
+
+    return NULL;
+}
+
+int viommu_register_type(u64 type, struct viommu_ops * ops)
+{
+    struct viommu_type *viommu_type = NULL;
+
+    if ( viommu_get_type(type) )
+        return -EEXIST;
+
+    viommu_type = xzalloc(struct viommu_type);
+    if ( !viommu_type )
+        return -ENOMEM;
+
+    viommu_type->type = type;
+    viommu_type->ops = ops;
+
+    spin_lock(&type_list_lock);
+    list_add_tail(&viommu_type->node, &type_list);
+    spin_unlock(&type_list_lock);
+
+    return 0;
+}
+
+void viommu_unregister_type(u64 type)
+{
+    struct viommu_type *viommu_type = viommu_get_type(type);
+
+    if ( viommu_type )
+    {
+        spin_lock(&type_list_lock);
+        list_del(&viommu_type->node);
+        spin_unlock(&type_list_lock);
+
+        xfree(viommu_type);
+    }
+}
+
+int viommu_create(struct domain *d, u64 type, u64 base_address,
+                  u64 length, u64 caps)
+{
+    struct viommu_info *info = &d->viommu;
+    struct viommu *viommu;
+    struct viommu_type *viommu_type = NULL;
+    int rc;
+
+    viommu_type = viommu_get_type(type);
+    if ( !viommu_type )
+        return -EFAULT;
+
+    if ( !info || info->nr_viommu >= NR_VIOMMU_PER_DOMAIN
+        || !viommu_type->ops || !viommu_type->ops->create )
+        return -EINVAL;
+
+    viommu = xzalloc(struct viommu);
+    if ( !viommu )
+        return -ENOMEM;
+
+    viommu->base_address = base_address;
+    viommu->length = length;
+    viommu->caps = caps;
+    viommu->ops = viommu_type->ops;
+    viommu->viommu_id = info->nr_viommu;
+
+    info->viommu[info->nr_viommu] = viommu;
+    info->nr_viommu++;
+
+    rc = viommu->ops->create(d, viommu);
+    if ( rc < 0 )
+    {
+        xfree(viommu);
+        return rc;
+    }
+
+    return viommu->viommu_id;
+}
+
+int viommu_destroy(struct domain *d, u32 viommu_id)
+{
+    struct viommu_info *info = &d->viommu;
+
+    if ( !info || viommu_id > info->nr_viommu || !info->viommu[viommu_id] )
+        return -EINVAL;
+
+    if ( info->viommu[viommu_id]->ops->destroy(info->viommu[viommu_id]) )
+        return -EFAULT;
+
+    info->viommu[viommu_id] = NULL;
+    return 0;
+}
+
+u64 viommu_query_caps(struct domain *d, u64 type)
+{
+    struct viommu_type *viommu_type = viommu_get_type(type);
+
+    if ( !viommu_type )
+        return -EFAULT;
+
+    return viommu_type->ops->query_caps(d);
+}
+
+int __init viommu_setup(void)
+{
+    INIT_LIST_HEAD(&type_list);
+    spin_lock_init(&type_list_lock);
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
diff --git a/xen/include/public/viommu.h b/xen/include/public/viommu.h
new file mode 100644
index 0000000..a4f7c47
--- /dev/null
+++ b/xen/include/public/viommu.h
@@ -0,0 +1,49 @@
+/*
+ * viommu.h
+ *
+ * Virtual IOMMU information
+ *
+ * Copyright (c) 2017 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_VIOMMU_H__
+#define __XEN_PUBLIC_VIOMMU_H__
+
+/* VIOMMU type */
+#define VIOMMU_TYPE_INTEL_VTD     (1 << 0)
+
+/* VIOMMU capabilities*/
+#define VIOMMU_CAP_IRQ_REMAPPING  (1 << 0)
+
+#endif /* __XEN_PUBLIC_VIOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 1127ca9..af52ae8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -21,6 +21,7 @@
 #include <xen/perfc.h>
 #include <asm/atomic.h>
 #include <xen/wait.h>
+#include <xen/viommu.h>
 #include <public/xen.h>
 #include <public/domctl.h>
 #include <public/sysctl.h>
@@ -477,6 +478,7 @@ struct domain
     /* vNUMA topology accesses are protected by rwlock. */
     rwlock_t vnuma_rwlock;
     struct vnuma_info *vnuma;
+    struct viommu_info viommu;
 
     /* Common monitor options */
     struct {
diff --git a/xen/include/xen/viommu.h b/xen/include/xen/viommu.h
new file mode 100644
index 0000000..ae5f6af
--- /dev/null
+++ b/xen/include/xen/viommu.h
@@ -0,0 +1,79 @@
+/*
+ * include/xen/viommu.h
+ *
+ * Copyright (c) 2017, Intel Corporation
+ * Author: Lan Tianyu <tianyu.lan@intel.com> 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#ifndef __XEN_VIOMMU_H__
+#define __XEN_VIOMMU_H__
+
+#define NR_VIOMMU_PER_DOMAIN 1
+
+struct viommu;
+
+struct viommu_ops {
+    u64 (*query_caps)(struct domain *d);
+    int (*create)(struct domain *d, struct viommu *viommu);
+    int (*destroy)(struct viommu *viommu);
+};
+
+struct viommu {
+    u64 base_address;
+    u64 length;
+    u64 caps;
+    u32 viommu_id;
+    const struct viommu_ops *ops;
+    void *priv;
+};
+
+struct viommu_info {
+    u32 nr_viommu;
+    struct viommu *viommu[NR_VIOMMU_PER_DOMAIN]; /* viommu array*/
+};
+
+#ifdef CONFIG_VIOMMU
+int viommu_init_domain(struct domain *d);
+int viommu_create(struct domain *d, u64 type, u64 base_address,
+                  u64 length, u64 caps);
+int viommu_destroy(struct domain *d, u32 viommu_id);
+int viommu_register_type(u64 type, struct viommu_ops * ops);
+void viommu_unregister_type(u64 type);
+u64 viommu_query_caps(struct domain *d, u64 viommu_type);
+int viommu_setup(void);
+#else
+static inline int viommu_init_domain(struct domain *d) { return 0 };
+static inline int viommu_create(struct domain *d, u64 type, u64 base_address,
+                                u64 length, u64 caps) { return -ENODEV };
+static inline int viommu_destroy(struct domain *d, u32 viommu_id) { return 0 };
+static inline int viommu_register_type(u64 type, struct viommu_ops * ops)
+{ return 0; };
+static inline void viommu_unregister_type(u64 type) { };
+static inline u64 viommu_query_caps(struct domain *d, u64 viommu_type)
+                { return -ENODEV };
+static inline int __init viommu_setup(void) { return 0 };
+#endif
+
+#endif /* __XEN_VIOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.3.1



* [RFC PATCH V2 2/26] DOMCTL: Introduce new DOMCTL commands for vIOMMU support
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 3/26] VIOMMU: Add irq request callback to deal with irq remapping Lan Tianyu
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch introduces the create, destroy and query capabilities
commands for vIOMMU. The vIOMMU layer deals with the requests and calls
the arch vIOMMU ops.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/common/domctl.c         |  3 +++
 xen/common/viommu.c         | 35 +++++++++++++++++++++++++++++++++++
 xen/include/public/domctl.h | 40 ++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/viommu.h    |  8 +++++++-
 4 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 951a5dc..a178544 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -1141,6 +1141,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( !ret )
             copyback = 1;
         break;
+    case XEN_DOMCTL_viommu_op:
+        ret = viommu_domctl(d, &op->u.viommu_op, &copyback);
+        break;
 
     default:
         ret = arch_do_domctl(op, d, u_domctl);
diff --git a/xen/common/viommu.c b/xen/common/viommu.c
index eadcecb..74afbf5 100644
--- a/xen/common/viommu.c
+++ b/xen/common/viommu.c
@@ -30,6 +30,41 @@ struct viommu_type {
     struct list_head node;
 };
 
+int viommu_domctl(struct domain *d, struct xen_domctl_viommu_op *op,
+                  bool_t *need_copy)
+{
+    int rc = -EINVAL;
+
+    switch ( op->cmd )
+    {
+    case XEN_DOMCTL_create_viommu:
+		rc = viommu_create(d, op->u.create_viommu.viommu_type,
+                           op->u.create_viommu.base_address,
+                           op->u.create_viommu.length,
+                           op->u.create_viommu.capabilities);
+        if (rc >= 0) {
+            op->u.create_viommu.viommu_id = rc;
+            *need_copy = true;
+        }
+        break;
+
+    case XEN_DOMCTL_destroy_viommu:
+        rc = viommu_destroy(d, op->u.destroy_viommu.viommu_id);
+        break;
+
+    case XEN_DOMCTL_query_viommu_caps:
+        op->u.query_caps.caps
+                = viommu_query_caps(d, op->u.query_caps.viommu_type);
+        *need_copy = true;
+        break;
+
+    default:
+        break;
+    }
+
+    return rc;
+}
+
 int viommu_init_domain(struct domain *d)
 {
     d->viommu.nr_viommu = 0;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index e6cf211..d499fc6 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1141,6 +1141,44 @@ struct xen_domctl_psr_cat_op {
 typedef struct xen_domctl_psr_cat_op xen_domctl_psr_cat_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_psr_cat_op_t);
 
+struct xen_domctl_viommu_op {
+    uint32_t cmd;
+#define XEN_DOMCTL_create_viommu          0
+#define XEN_DOMCTL_destroy_viommu         1
+#define XEN_DOMCTL_query_viommu_caps      2
+    union {
+        struct {
+            /* IN - vIOMMU type */
+            uint64_t viommu_type;
+            /* 
+             * IN - MMIO base address of vIOMMU. vIOMMU device models
+             * are in charge of checking base_address and length.
+             */
+            uint64_t base_address;
+            /* IN - Length of MMIO region */
+            uint64_t length;
+            /* IN - Capabilities with which we want to create */
+            uint64_t capabilities;
+            /* OUT - vIOMMU identity */
+            uint32_t viommu_id;
+        } create_viommu;
+
+        struct {
+            /* IN - vIOMMU identity */
+            uint32_t viommu_id;
+        } destroy_viommu;
+
+        struct {
+            /* IN - vIOMMU type */
+            uint64_t viommu_type;
+            /* OUT - vIOMMU Capabilities */
+            uint64_t caps;
+        } query_caps;
+    } u;
+};
+typedef struct xen_domctl_viommu_op xen_domctl_viommu_op;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_viommu_op);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1218,6 +1256,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_monitor_op                    77
 #define XEN_DOMCTL_psr_cat_op                    78
 #define XEN_DOMCTL_soft_reset                    79
+#define XEN_DOMCTL_viommu_op                     80
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1280,6 +1319,7 @@ struct xen_domctl {
         struct xen_domctl_psr_cmt_op        psr_cmt_op;
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_cat_op        psr_cat_op;
+        struct xen_domctl_viommu_op         viommu_op;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xen/viommu.h b/xen/include/xen/viommu.h
index ae5f6af..5909800 100644
--- a/xen/include/xen/viommu.h
+++ b/xen/include/xen/viommu.h
@@ -52,6 +52,8 @@ int viommu_destroy(struct domain *d, u32 viommu_id);
 int viommu_register_type(u64 type, struct viommu_ops * ops);
 void viommu_unregister_type(u64 type);
 u64 viommu_query_caps(struct domain *d, u64 viommu_type);
+int viommu_domctl(struct domain *d, struct xen_domctl_viommu_op *op,
+                  bool_t *need_copy);
 int viommu_setup(void);
 #else
 static inline int viommu_init_domain(struct domain *d) { return 0 };
@@ -62,8 +64,12 @@ static inline int viommu_register_type(u64 type, struct viommu_ops * ops)
 { return 0; };
 static inline void viommu_unregister_type(u64 type) { };
 static inline u64 viommu_query_caps(struct domain *d, u64 viommu_type)
-                { return -ENODEV };
+{ return -ENODEV };
 static inline int __init viommu_setup(void) { return 0 };
+static inline int viommu_domctl(struct domain *d,
+                                struct xen_domctl_viommu_op *op,
+                                bool_t *need_copy)
+{ return -ENODEV };
 #endif
 
 #endif /* __XEN_VIOMMU_H__ */
-- 
1.8.3.1



* [RFC PATCH V2 3/26] VIOMMU: Add irq request callback to deal with irq remapping
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 2/26] DOMCTL: Introduce new DOMCTL commands for vIOMMU support Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 4/26] VIOMMU: Add get irq info callback to convert irq remapping request Lan Tianyu
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch adds an irq request callback for platform implementations
to deal with irq remapping requests.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
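For illustration, a sketch of how an interrupt source is expected to hand a
request to the vIOMMU (the vIOAPIC/vMSI hooks that do this for real come
later in the series; d, ioapic_id and rte are assumed to be in scope):

    struct irq_remapping_request req;

    /* Wrap the untranslated IOAPIC RTE (or an MSI address/data pair via
     * irq_request_msi_fill()) in a remapping request ... */
    irq_request_ioapic_fill(&req, ioapic_id, rte);

    /* ... and let the domain's single vIOMMU (id 0) remap and deliver it. */
    viommu_handle_irq_request(d, 0, &req);
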
 xen/common/viommu.c          | 15 +++++++++
 xen/include/asm-x86/viommu.h | 73 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/viommu.h     |  9 ++++++
 3 files changed, 97 insertions(+)
 create mode 100644 xen/include/asm-x86/viommu.h

diff --git a/xen/common/viommu.c b/xen/common/viommu.c
index 74afbf5..4e3ecd7 100644
--- a/xen/common/viommu.c
+++ b/xen/common/viommu.c
@@ -194,6 +194,21 @@ int __init viommu_setup(void)
     return 0;
 }
 
+int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
+        struct irq_remapping_request *request)
+{
+    struct viommu_info *info = &d->viommu;
+
+    if ( !info || viommu_id > info->nr_viommu
+         || !info->viommu[viommu_id] )
+        return -EINVAL;
+
+    if ( !info->viommu[viommu_id]->ops->handle_irq_request )
+        return -EINVAL;
+
+    return info->viommu[viommu_id]->ops->handle_irq_request(d, request);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/viommu.h b/xen/include/asm-x86/viommu.h
new file mode 100644
index 0000000..51bda72
--- /dev/null
+++ b/xen/include/asm-x86/viommu.h
@@ -0,0 +1,73 @@
+/*
+ * include/asm-x86/viommu.h
+ *
+ * Copyright (c) 2017 Intel Corporation.
+ * Author: Lan Tianyu <tianyu.lan@intel.com> 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+#ifndef __ARCH_X86_VIOMMU_H__
+#define __ARCH_X86_VIOMMU_H__
+
+#include <xen/viommu.h>
+#include <asm/types.h>
+
+/* IRQ request type */
+#define VIOMMU_REQUEST_IRQ_MSI          0
+#define VIOMMU_REQUEST_IRQ_APIC         1
+
+struct irq_remapping_request
+{
+    union {
+        /* MSI */
+        struct {
+            u64 addr;
+            u32 data;
+        } msi;
+        /* Redirection Entry in IOAPIC */
+        u64 rte;
+    } msg;
+    u16 source_id;
+    u8 type;
+};
+
+static inline void irq_request_ioapic_fill(struct irq_remapping_request *req,
+                             uint32_t ioapic_id, uint64_t rte)
+{
+    ASSERT(req);
+    req->type = VIOMMU_REQUEST_IRQ_APIC;
+    req->source_id = ioapic_id;
+    req->msg.rte = rte;
+}
+
+static inline void irq_request_msi_fill(struct irq_remapping_request *req,
+                          uint32_t source_id, uint64_t addr, uint32_t data)
+{
+    ASSERT(req);
+    req->type = VIOMMU_REQUEST_IRQ_MSI;
+    req->source_id = source_id;
+    req->msg.msi.addr = addr;
+    req->msg.msi.data = data;
+}
+
+#endif /* __ARCH_X86_VIOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
diff --git a/xen/include/xen/viommu.h b/xen/include/xen/viommu.h
index 5909800..5b99211 100644
--- a/xen/include/xen/viommu.h
+++ b/xen/include/xen/viommu.h
@@ -20,6 +20,8 @@
 #ifndef __XEN_VIOMMU_H__
 #define __XEN_VIOMMU_H__
 
+#include <asm/viommu.h>
+
 #define NR_VIOMMU_PER_DOMAIN 1
 
 struct viommu;
@@ -28,6 +30,8 @@ struct viommu_ops {
     u64 (*query_caps)(struct domain *d);
     int (*create)(struct domain *d, struct viommu *viommu);
     int (*destroy)(struct viommu *viommu);
+    int (*handle_irq_request)(struct domain *d,
+                              struct irq_remapping_request *request);
 };
 
 struct viommu {
@@ -55,6 +59,8 @@ u64 viommu_query_caps(struct domain *d, u64 viommu_type);
 int viommu_domctl(struct domain *d, struct xen_domctl_viommu_op *op,
                   bool_t *need_copy);
 int viommu_setup(void);
+int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
+                              struct irq_remapping_request *request);
 #else
 static inline int viommu_init_domain(struct domain *d) { return 0 };
 static inline int viommu_create(struct domain *d, u64 type, u64 base_address,
@@ -70,6 +76,9 @@ static inline int viommu_domctl(struct domain *d,
                                 struct xen_domctl_viommu_op *op,
                                 bool_t *need_copy)
 { return -ENODEV };
+static inline int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
+                              struct irq_remapping_request *request)
+{ return 0 };
 #endif
 
 #endif /* __XEN_VIOMMU_H__ */
-- 
1.8.3.1



* [RFC PATCH V2 4/26] VIOMMU: Add get irq info callback to convert irq remapping request
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
                   ` (2 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 3/26] VIOMMU: Add irq request callback to deal with irq remapping Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 5/26] Xen/doc: Add Xen virtual IOMMU doc Lan Tianyu
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch adds a get_irq_info callback for platform implementations
to convert an irq remapping request into irq info (e.g. vector, dest,
dest_mode and so on).

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
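A minimal sketch of what the new callback yields to a caller that needs the
decoded attributes rather than direct delivery (req is assumed to be a
request filled as in the previous patch):

    struct irq_remapping_info info;

    if ( !viommu_get_irq_info(d, 0, &req, &info) )
        printk("vector %u dest %#x dest_mode %u delivery_mode %u\n",
               info.vector, info.dest, info.dest_mode, info.delivery_mode);
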
 xen/common/viommu.c          | 16 ++++++++++++++++
 xen/include/asm-x86/viommu.h |  8 ++++++++
 xen/include/xen/viommu.h     |  9 +++++++++
 3 files changed, 33 insertions(+)

diff --git a/xen/common/viommu.c b/xen/common/viommu.c
index 4e3ecd7..c6c9589 100644
--- a/xen/common/viommu.c
+++ b/xen/common/viommu.c
@@ -209,6 +209,22 @@ int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
     return info->viommu[viommu_id]->ops->handle_irq_request(d, request);
 }
 
+int viommu_get_irq_info(struct domain *d, u32 viommu_id,
+                        struct irq_remapping_request *request,
+                        struct irq_remapping_info *irq_info)
+{
+    struct viommu_info *info = &d->viommu;
+
+    if ( !info || viommu_id > info->nr_viommu
+         || !info->viommu[viommu_id] )
+        return -EINVAL;
+
+    if ( !info->viommu[viommu_id]->ops->get_irq_info )
+        return -EINVAL;
+
+    return info->viommu[viommu_id]->ops->get_irq_info(d, request, irq_info);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/viommu.h b/xen/include/asm-x86/viommu.h
index 51bda72..1e8d4be 100644
--- a/xen/include/asm-x86/viommu.h
+++ b/xen/include/asm-x86/viommu.h
@@ -27,6 +27,14 @@
 #define VIOMMU_REQUEST_IRQ_MSI          0
 #define VIOMMU_REQUEST_IRQ_APIC         1
 
+struct irq_remapping_info
+{
+    u8  vector;
+    u32 dest;
+    u32 dest_mode:1;
+    u32 delivery_mode:3;
+};
+
 struct irq_remapping_request
 {
     union {
diff --git a/xen/include/xen/viommu.h b/xen/include/xen/viommu.h
index 5b99211..e40fca4 100644
--- a/xen/include/xen/viommu.h
+++ b/xen/include/xen/viommu.h
@@ -32,6 +32,8 @@ struct viommu_ops {
     int (*destroy)(struct viommu *viommu);
     int (*handle_irq_request)(struct domain *d,
                               struct irq_remapping_request *request);
+    int (*get_irq_info)(struct domain *d, struct irq_remapping_request *request,
+                        struct irq_remapping_info *info);
 };
 
 struct viommu {
@@ -61,6 +63,9 @@ int viommu_domctl(struct domain *d, struct xen_domctl_viommu_op *op,
 int viommu_setup(void);
 int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
                               struct irq_remapping_request *request);
+int viommu_get_irq_info(struct domain *d, u32 viommu_id, 
+                        struct irq_remapping_request *request,
+                        struct irq_remapping_info *irq_info);
 #else
 static inline int viommu_init_domain(struct domain *d) { return 0 };
 static inline int viommu_create(struct domain *d, u64 type, u64 base_address,
@@ -79,6 +84,10 @@ static inline int viommu_domctl(struct domain *d,
 static inline int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
                               struct irq_remapping_request *request)
 { return 0 };
+static inline int viommu_get_irq_info(struct domain *d, u32 viommu_id,
+                                      struct irq_remapping_request *request,
+                                      struct irq_remapping_info *irq_info)
+{ return 0 };
 #endif
 
 #endif /* __XEN_VIOMMU_H__ */
-- 
1.8.3.1



* [RFC PATCH V2 5/26] Xen/doc: Add Xen virtual IOMMU doc
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
                   ` (3 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 4/26] VIOMMU: Add get irq info callback to convert irq remapping request Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 6/26] Tools/libxc: Add viommu operations in libxc Lan Tianyu
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, chao.gao

This patch adds a Xen virtual IOMMU doc introducing the motivation,
framework, vIOMMU hypercall and xl configuration.

Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 docs/misc/viommu.txt | 129 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 docs/misc/viommu.txt

diff --git a/docs/misc/viommu.txt b/docs/misc/viommu.txt
new file mode 100644
index 0000000..76d4cee
--- /dev/null
+++ b/docs/misc/viommu.txt
@@ -0,0 +1,129 @@
+Xen virtual IOMMU
+
+Motivation
+==========
+*) Enable more than 255 vcpu support
+HPC cloud services require VMs that provide high performance parallel
+computing, and we hope to create a huge VM with >255 vcpus on one machine
+to meet such requirements, pinning each vcpu to a separate pcpu.
+
+To support >255 vcpus, X2APIC mode in the guest is necessary because the
+legacy APIC (XAPIC) only supports 8-bit APIC IDs and so can address at
+most 255 vcpus. X2APIC mode supports 32-bit APIC IDs and requires the
+interrupt remapping function of the vIOMMU.
+
+The reason for this is that there is no modification to existing PCI MSI
+and IOAPIC with the introduction of X2APIC. PCI MSI/IOAPIC can only send
+interrupt messages containing an 8-bit APIC ID, which cannot address >255
+cpus. Interrupt remapping supports 32-bit APIC IDs, so it is necessary
+for enabling >255 cpus with x2apic mode.
+
+
+vIOMMU Architecture
+===================
+The vIOMMU device model is inside the Xen hypervisor for the following reasons:
+    1) Avoid round trips between Qemu and the Xen hypervisor
+    2) Ease of integration with the rest of the hypervisor
+    3) HVMlite/PVH doesn't use Qemu
+
+* Interrupt remapping overview.
+Interrupts from virtual devices and physical devices are delivered
+to the vLAPIC from the vIOAPIC and vMSI. The vIOMMU needs to remap
+interrupts during this procedure.
+
++---------------------------------------------------+
+|Qemu                       |VM                     |
+|                           | +----------------+    |
+|                           | |  Device driver |    |
+|                           | +--------+-------+    |
+|                           |          ^            |
+|       +----------------+  | +--------+-------+    |
+|       | Virtual device |  | |  IRQ subsystem |    |
+|       +-------+--------+  | +--------+-------+    |
+|               |           |          ^            |
+|               |           |          |            |
++---------------------------+-----------------------+
+|hypervisor     |                      | VIRQ       |
+|               |            +---------+--------+   |
+|               |            |      vLAPIC      |   |
+|               |VIRQ        +---------+--------+   |
+|               |                      ^            |
+|               |                      |            |
+|               |            +---------+--------+   |
+|               |            |      vIOMMU      |   |
+|               |            +---------+--------+   |
+|               |                      ^            |
+|               |                      |            |
+|               |            +---------+--------+   |
+|               |            |   vIOAPIC/vMSI   |   |
+|               |            +----+----+--------+   |
+|               |                 ^    ^            |
+|               +-----------------+    |            |
+|                                      |            |
++---------------------------------------------------+
+HW                                     |IRQ
+                                +-------------------+
+                                |   PCI Device      |
+                                +-------------------+
+
+
+vIOMMU hypercall
+================
+Introduce the new domctl hypercall "xen_domctl_viommu_op" to create/destroy
+a vIOMMU and query the vIOMMU capabilities that the device model can support.
+
+* vIOMMU hypercall parameter structure
+struct xen_domctl_viommu_op {
+    uint32_t cmd;
+#define XEN_DOMCTL_create_viommu          0
+#define XEN_DOMCTL_destroy_viommu         1
+#define XEN_DOMCTL_query_viommu_caps      2
+    union {
+        struct {
+            /* IN - vIOMMU type */
+            uint64_t viommu_type;
+            /* IN - MMIO base address of vIOMMU. */
+            uint64_t base_address;
+            /* IN - Length of MMIO region */
+            uint64_t length;
+            /* IN - Capabilities with which we want to create */
+            uint64_t capabilities;
+            /* OUT - vIOMMU identity */
+            uint32_t viommu_id;
+        } create_viommu;
+
+        struct {
+            /* IN - vIOMMU identity */
+            uint32_t viommu_id;
+        } destroy_viommu;
+
+        struct {
+            /* IN - vIOMMU type */
+            uint64_t viommu_type;
+            /* OUT - vIOMMU Capabilities */
+            uint64_t caps;
+        } query_caps;
+    } u;
+};
+
+- XEN_DOMCTL_query_viommu_caps
+    Query the capabilities of a vIOMMU device model. viommu_type specifies
+which vendor vIOMMU device model (e.g. Intel VT-d) is targeted and the
+hypervisor returns capability bits (e.g. the interrupt remapping bit).
+
+- XEN_DOMCTL_create_viommu
+    Create a vIOMMU device with viommu_type, capabilities, MMIO
+base address and length. The hypervisor returns viommu_id. Capabilities must
+be a subset of the value returned by the query_viommu_caps hypercall.
+
+- XEN_DOMCTL_destroy_viommu
+    Destroy the vIOMMU in the Xen hypervisor with viommu_id as parameter.
+
+xl vIOMMU configuration
+=======================
+viommu="type=vtd,intremap=1,x2apic=1"
+
+"type" - Specify vIOMMU device model type. Currently only supports Intel vtd
+device model.
+"intremap" - Enable vIOMMU interrupt remapping function.
+"x2apic" - Support x2apic mode with interrupt remapping function.
-- 
1.8.3.1



* [RFC PATCH V2 6/26] Tools/libxc: Add viommu operations in libxc
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
                   ` (4 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 5/26] Xen/doc: Add Xen virtual IOMMU doc Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 7/26] Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table structures Lan Tianyu
                   ` (19 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

This patch adds the XEN_DOMCTL_viommu_op hypercall. This hypercall
comprises three sub-commands:
- query the capabilities of one specific type of vIOMMU emulated by Xen
- create a vIOMMU in the Xen hypervisor with a given viommu type, register
  range and capabilities
- destroy the vIOMMU specified by viommu_id

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
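An example of the intended tool stack usage, assuming xch and domid are an
open xc_interface handle and the target domain, and that the MMIO base
address and length below are illustrative values only:

    uint64_t caps;
    uint32_t viommu_id;

    if ( xc_viommu_query_cap(xch, domid, VIOMMU_TYPE_INTEL_VTD, &caps) )
        return -1;

    if ( caps & VIOMMU_CAP_IRQ_REMAPPING )
        xc_viommu_create(xch, domid, VIOMMU_TYPE_INTEL_VTD,
                         0xfed90000UL, 0x1000, VIOMMU_CAP_IRQ_REMAPPING,
                         &viommu_id);
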
 tools/libxc/Makefile          |  1 +
 tools/libxc/include/xenctrl.h |  7 ++++
 tools/libxc/xc_viommu.c       | 81 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 89 insertions(+)
 create mode 100644 tools/libxc/xc_viommu.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 8ae552f..c982571 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -42,6 +42,7 @@ CTRL_SRCS-y       += xc_kexec.c
 CTRL_SRCS-y       += xc_resource.c
 CTRL_SRCS-$(CONFIG_X86) += xc_psr.c
 CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c
+CTRL_SRCS-$(CONFIG_X86) += xc_viommu.c
 CTRL_SRCS-$(CONFIG_Linux) += xc_linux.c
 CTRL_SRCS-$(CONFIG_FreeBSD) += xc_freebsd.c
 CTRL_SRCS-$(CONFIG_SunOS) += xc_solaris.c
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 1629f41..6c8110c 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2491,6 +2491,13 @@ enum xc_static_cpu_featuremask {
 const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
 const uint32_t *xc_get_feature_deep_deps(uint32_t feature);
 
+int xc_viommu_query_cap(xc_interface *xch, uint32_t dom,
+                        uint64_t type, uint64_t *cap);
+int xc_viommu_create(xc_interface *xch, uint32_t dom, uint64_t type,
+                     uint64_t base_addr, uint64_t length, uint64_t cap,
+                     uint32_t *viommu_id);
+int xc_viommu_destroy(xc_interface *xch, uint32_t dom, uint32_t viommu_id);
+
 #endif
 
 int xc_livepatch_upload(xc_interface *xch,
diff --git a/tools/libxc/xc_viommu.c b/tools/libxc/xc_viommu.c
new file mode 100644
index 0000000..54ed877
--- /dev/null
+++ b/tools/libxc/xc_viommu.c
@@ -0,0 +1,81 @@
+/*
+ * xc_viommu.c
+ *
+ * viommu related API functions.
+ *
+ * Copyright (C) 2017 Intel Corporation
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License, version 2.1, as published by the Free Software Foundation.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+
+int xc_viommu_query_cap(xc_interface *xch, uint32_t dom,
+                        uint64_t type, uint64_t *cap)
+{
+    int rc;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_viommu_op;
+    domctl.domain = (domid_t)dom;
+    domctl.u.viommu_op.cmd = XEN_DOMCTL_query_viommu_caps;
+    domctl.u.viommu_op.u.query_caps.viommu_type = type;
+
+    rc = do_domctl(xch, &domctl);
+    if ( !rc )
+        *cap = domctl.u.viommu_op.u.query_caps.caps;
+    return rc;
+}
+
+int xc_viommu_create(xc_interface *xch, uint32_t dom, uint64_t type,
+                     uint64_t base_addr, uint64_t length, uint64_t cap,
+                     uint32_t *viommu_id)
+{
+    int rc;
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_viommu_op;
+    domctl.domain = (domid_t)dom;
+    domctl.u.viommu_op.cmd = XEN_DOMCTL_create_viommu;
+    domctl.u.viommu_op.u.create_viommu.viommu_type = type;
+    domctl.u.viommu_op.u.create_viommu.base_address = base_addr;
+    domctl.u.viommu_op.u.create_viommu.length = length;
+    domctl.u.viommu_op.u.create_viommu.capabilities = cap;
+
+    rc = do_domctl(xch, &domctl);
+    if ( !rc )
+        *viommu_id = domctl.u.viommu_op.u.create_viommu.viommu_id;
+    return rc;
+}
+
+int xc_viommu_destroy(xc_interface *xch, uint32_t dom, uint32_t viommu_id)
+{
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_viommu_op;
+    domctl.domain = (domid_t)dom;
+    domctl.u.viommu_op.cmd = XEN_DOMCTL_destroy_viommu;
+    domctl.u.viommu_op.u.destroy_viommu.viommu_id = viommu_id;
+
+    return do_domctl(xch, &domctl);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.3.1



* [RFC PATCH V2 7/26] Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table structures
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
                   ` (5 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 6/26] Tools/libxc: Add viommu operations in libxc Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 8/26] Tools/libacpi: Add new fields in acpi_config to build DMAR table Lan Tianyu
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Add the DMAR table structures according to Chapter 8 "BIOS Considerations"
of the VT-d spec Rev. 2.4.

VT-d spec: http://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/vt-directed-io-spec.pdf

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
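For reference, the sizes these packed structures are meant to have according
to the spec (a hedged sketch, not part of the patch; it assumes the
definitions sit inside the #pragma pack region of acpi2_0.h like the
existing tables):

    _Static_assert(sizeof(struct acpi_dmar) == 48,
                   "36-byte ACPI header + width + flags + 10 reserved bytes");
    _Static_assert(sizeof(struct acpi_dmar_hardware_unit) == 16,
                   "DRHD structure is 16 bytes before its device scopes");
    _Static_assert(sizeof(struct dmar_device_scope) == 6,
                   "a device scope entry is 6 bytes before its path[]");
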
 tools/libacpi/acpi2_0.h | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/tools/libacpi/acpi2_0.h b/tools/libacpi/acpi2_0.h
index 2619ba3..8f942b5 100644
--- a/tools/libacpi/acpi2_0.h
+++ b/tools/libacpi/acpi2_0.h
@@ -421,6 +421,49 @@ struct acpi_20_slit {
     uint8_t entry[0];
 };
 
+/* DMA Remapping Table in VTd spec Rev. 2.4. */
+struct acpi_dmar {
+    struct acpi_header header;
+    uint8_t host_address_width;
+    uint8_t flags;
+    uint8_t reserved[10]; /* reserved(0) */
+};
+
+/* Remapping Structure Types */
+enum {
+    ACPI_DMAR_TYPE_HARDWARE_UNIT = 0,       /* DRHD */
+    ACPI_DMAR_TYPE_RESERVED_MEMORY = 1,     /* RMRR */
+    ACPI_DMAR_TYPE_ATSR = 2,                /* ATSR */
+    ACPI_DMAR_TYPE_HARDWARE_AFFINITY = 3,   /* RHSR */
+    ACPI_DMAR_TYPE_ANDD = 4,                /* ANDD */
+    ACPI_DMAR_TYPE_RESERVED = 5             /* Reserved for future use */
+};
+
+struct dmar_device_scope {
+    uint8_t type;
+    uint8_t length;
+    uint8_t reserved[2]; /* reserved(0) */
+    uint8_t enumeration_id;
+    uint8_t bus;
+    uint16_t path[0];
+};
+
+struct acpi_dmar_hardware_unit {
+    uint16_t type;
+    uint16_t length;
+    uint8_t flags;
+    uint8_t reserved; /* reserved(0) */
+    uint16_t pci_segment; /* The PCI segment associated with this unit */
+    uint64_t address; /* Base address of remapping hardware register-set */
+    struct dmar_device_scope scope[0];
+};
+
+/* Device scope type */
+#define ACPI_DMAR_DEVICE_SCOPE_IOAPIC   0x03
+
+/* Masks for flags field of struct acpi_dmar_hardware_unit */
+#define ACPI_DMAR_INCLUDE_PCI_ALL   1
+
 /*
  * Table Signatures.
  */
@@ -435,6 +478,7 @@ struct acpi_20_slit {
 #define ACPI_2_0_WAET_SIGNATURE ASCII32('W','A','E','T')
 #define ACPI_2_0_SRAT_SIGNATURE ASCII32('S','R','A','T')
 #define ACPI_2_0_SLIT_SIGNATURE ASCII32('S','L','I','T')
+#define ACPI_2_0_DMAR_SIGNATURE ASCII32('D','M','A','R')
 
 /*
  * Table revision numbers.
@@ -449,6 +493,7 @@ struct acpi_20_slit {
 #define ACPI_1_0_FADT_REVISION 0x01
 #define ACPI_2_0_SRAT_REVISION 0x01
 #define ACPI_2_0_SLIT_REVISION 0x01
+#define ACPI_2_0_DMAR_REVISION 0x01
 
 #pragma pack ()
 
-- 
1.8.3.1



* [RFC PATCH V2 8/26] Tools/libacpi: Add new fields in acpi_config to build DMAR table
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
                   ` (6 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 7/26] Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table structures Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 9/26] Tools/libacpi: Add a user configurable parameter to control vIOMMU attributes Lan Tianyu
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

The BIOS reports the remapping hardware units in a platform to system software
through the DMA Remapping Reporting (DMAR) ACPI table.

To build the DMAR table during domain construction, two fields are added to
struct acpi_config. One is dmar_flag, which indicates whether interrupt
remapping is supported and whether enabling X2APIC mode is permitted. The
other is the base address of the remapping hardware register set for a
remapping unit. Also, a function construct_dmar() is added to build the
DMAR table according to the two fields.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
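A hedged sketch of the intended use (the real caller is added later in the
series; ctxt is assumed to be an initialised struct acpi_ctxt and the base
address below is an illustrative value only):

    struct acpi_config config = { 0 };
    struct acpi_dmar *dmar;

    config.dmar_flag = DMAR_INTR_REMAP;      /* no DMAR_X2APIC_OPT_OUT */
    config.viommu_base_addr = 0xfed90000UL;  /* illustrative address */

    dmar = construct_dmar(ctxt, &config);
    /* On success, dmar->header.length bytes are ready to be passed to
     * the guest. */
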
 tools/libacpi/build.c   | 53 +++++++++++++++++++++++++++++++++++++++++++++++++
 tools/libacpi/libacpi.h | 11 ++++++++++
 2 files changed, 64 insertions(+)

diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index f9881c9..d5bedfd 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -28,6 +28,10 @@
 
 #define ACPI_MAX_SECONDARY_TABLES 16
 
+#define VTD_HOST_ADDRESS_WIDTH 39
+#define I440_PSEUDO_BUS_PLATFORM 0xff
+#define I440_PSEUDO_DEVFN_IOAPIC 0x0
+
 #define align16(sz)        (((sz) + 15) & ~15)
 #define fixed_strcpy(d, s) strncpy((d), (s), sizeof(d))
 
@@ -303,6 +307,55 @@ static struct acpi_20_slit *construct_slit(struct acpi_ctxt *ctxt,
     return slit;
 }
 
+struct acpi_dmar *construct_dmar(struct acpi_ctxt *ctxt,
+                                 const struct acpi_config *config)
+{
+    struct acpi_dmar *dmar;
+    struct acpi_dmar_hardware_unit *drhd;
+    struct dmar_device_scope *scope;
+    unsigned int size;
+    unsigned int ioapic_scope_size = sizeof(*scope) + sizeof(scope->path[0]);
+
+    size = sizeof(*dmar) + sizeof(*drhd) + ioapic_scope_size;
+
+    dmar = ctxt->mem_ops.alloc(ctxt, size, 16);
+    if ( !dmar )
+        return NULL;
+
+    memset(dmar, 0, size);
+    dmar->header.signature = ACPI_2_0_DMAR_SIGNATURE;
+    dmar->header.revision = ACPI_2_0_DMAR_REVISION;
+    dmar->header.length = size;
+    fixed_strcpy(dmar->header.oem_id, ACPI_OEM_ID);
+    fixed_strcpy(dmar->header.oem_table_id, ACPI_OEM_TABLE_ID);
+    dmar->header.oem_revision = ACPI_OEM_REVISION;
+    dmar->header.creator_id   = ACPI_CREATOR_ID;
+    dmar->header.creator_revision = ACPI_CREATOR_REVISION;
+    dmar->host_address_width = VTD_HOST_ADDRESS_WIDTH - 1;
+    dmar->flags = config->dmar_flag & (DMAR_INTR_REMAP|DMAR_X2APIC_OPT_OUT);
+
+    drhd = (struct acpi_dmar_hardware_unit *)((void*)dmar + sizeof(*dmar));
+    drhd->type = ACPI_DMAR_TYPE_HARDWARE_UNIT;
+    drhd->length = sizeof(*drhd) + ioapic_scope_size;
+    drhd->flags = ACPI_DMAR_INCLUDE_PCI_ALL;
+    drhd->pci_segment = 0;
+    drhd->address = config->viommu_base_addr;
+
+    scope = &drhd->scope[0];
+    scope->type = ACPI_DMAR_DEVICE_SCOPE_IOAPIC;
+    scope->length = ioapic_scope_size;
+    /*
+     * This field provides the I/O APICID as provided in the I/O APIC structure
+     * in the ACPI MADT (Multiple APIC Description Table).
+     */
+    scope->enumeration_id = 1;
+    scope->bus = I440_PSEUDO_BUS_PLATFORM;
+    scope->path[0] = I440_PSEUDO_DEVFN_IOAPIC;
+
+    set_checksum(dmar, offsetof(struct acpi_header, checksum), size);
+    return dmar;
+}
+
 static int construct_passthrough_tables(struct acpi_ctxt *ctxt,
                                         unsigned long *table_ptrs,
                                         int nr_tables,
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index 2ed1ecf..6a4e1cf 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -20,6 +20,8 @@
 #ifndef __LIBACPI_H__
 #define __LIBACPI_H__
 
+#include "acpi2_0.h"
+
 #define ACPI_HAS_COM1              (1<<0)
 #define ACPI_HAS_COM2              (1<<1)
 #define ACPI_HAS_LPT1              (1<<2)
@@ -36,6 +38,7 @@
 #define ACPI_HAS_8042              (1<<13)
 #define ACPI_HAS_CMOS_RTC          (1<<14)
 #define ACPI_HAS_SSDT_LAPTOP_SLATE (1<<15)
+#define ACPI_HAS_DMAR              (1<<16)
 
 struct xen_vmemrange;
 struct acpi_numa {
@@ -96,8 +99,16 @@ struct acpi_config {
     uint32_t ioapic_base_address;
     uint16_t pci_isa_irq_mask;
     uint8_t ioapic_id;
+
+    /* dmar info */
+    uint8_t dmar_flag;
+    uint64_t viommu_base_addr;
 };
 
+#define DMAR_INTR_REMAP 0x1
+#define DMAR_X2APIC_OPT_OUT 0x2
+struct acpi_dmar *construct_dmar(struct acpi_ctxt *ctxt,
+                                 const struct acpi_config *config);
 int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config);
 
 #endif /* __LIBACPI_H__ */
-- 
1.8.3.1



* [RFC PATCH V2 9/26] Tools/libacpi: Add a user configurable parameter to control vIOMMU attributes
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (7 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 8/26] Tools/libacpi: Add new fields in acpi_config to build DMAR table Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 10/26] libxl: create vIOMMU during domain construction Lan Tianyu
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

A field, viommu_info, is added to struct libxl_domain_build_info. Several
attributes can be specified in the guest configuration file for DMAR table
building and vIOMMU creation.

During domain creation, new logic is added to build the ACPI DMAR table in
the tool stack according to the VM configuration and to pass it to hvmloader
via the xenstore ACPI passthrough channel. If other ACPI tables need to be
passed through, the tables are concatenated.
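
For illustration, a guest config might contain, say, viommu = "type=vtd,
intremap=1,x2apic=0" (values are hypothetical). The sketch below -- not part
of the patch -- shows in simplified form how such settings are expected to
translate into the dmar_flag passed to construct_dmar(); the DMAR_* macros
are the ones added to tools/libacpi/libacpi.h earlier in the series, the
helper name is made up, and libxl_defbool default handling is elided:

    #include <stdbool.h>
    #include <stdint.h>
    #include "libacpi.h"   /* DMAR_INTR_REMAP, DMAR_X2APIC_OPT_OUT */

    /* Hypothetical helper mirroring the libxl logic in this patch. */
    static uint8_t example_dmar_flags(bool intremap, bool x2apic)
    {
        uint8_t flags = 0;

        if ( intremap )
        {
            flags |= DMAR_INTR_REMAP;
            if ( !x2apic )               /* opt out of x2APIC when disabled */
                flags |= DMAR_X2APIC_OPT_OUT;
        }

        return flags;
    }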

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 docs/man/xl.cfg.pod.5.in    | 34 +++++++++++++++++-
 tools/libacpi/build.c       |  5 +++
 tools/libacpi/libacpi.h     |  1 +
 tools/libxl/libxl_dom.c     | 87 +++++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl | 10 ++++++
 tools/xl/xl_parse.c         | 64 +++++++++++++++++++++++++++++++++
 6 files changed, 200 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 13167ff..dda7748 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -1481,7 +1481,39 @@ Do not provide a VM generation ID.
 See also "Virtual Machine Generation ID" by Microsoft
 (http://www.microsoft.com/en-us/download/details.aspx?id=30707).
 
-=back 
+=back
+
+=item B<viommu="VIOMMU_STRING">
+
+Specifies the vIOMMU which is to be provided to the guest.
+
+B<VIOMMU_STRING> has the form C<KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<KEY=VALUE>
+
+Possible B<KEY>s are:
+
+=over 4
+
+=item B<type="STRING">
+
+Currently there is only one valid type:
+
+(X86 only) "vtd" means providing a emulated intel VT-d to the guest.
+
+=item B<intremap=BOOLEAN>
+
+Specifies whether the vvtd should support interrupt remapping.
+The default is 'true'.
+
+=item B<x2apic=BOOLEAN>
+
+Specifies whether the vvtd should support x2apic mode.
+The default is 'true'.
+
+=back
 
 =head3 Guest Virtual Time Controls
 
diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c
index d5bedfd..0c3d3db 100644
--- a/tools/libacpi/build.c
+++ b/tools/libacpi/build.c
@@ -561,6 +561,11 @@ static int new_vm_gid(struct acpi_ctxt *ctxt,
     return 1;
 }
 
+uint32_t acpi_get_table_size(struct acpi_header * header)
+{
+    return header ? header->length : 0;
+}
+
 int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config)
 {
     struct acpi_info *acpi_info;
diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h
index 6a4e1cf..0a58d6f 100644
--- a/tools/libacpi/libacpi.h
+++ b/tools/libacpi/libacpi.h
@@ -109,6 +109,7 @@ struct acpi_config {
 #define DMAR_X2APIC_OPT_OUT 0x2
 struct acpi_dmar *construct_dmar(struct acpi_ctxt *ctxt,
                                  const struct acpi_config *config);
+uint32_t acpi_get_table_size(struct acpi_header * header);
 int acpi_build_tables(struct acpi_ctxt *ctxt, struct acpi_config *config);
 
 #endif /* __LIBACPI_H__ */
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 5d914a5..f8d61c2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -19,11 +19,13 @@
 
 #include "libxl_internal.h"
 #include "libxl_arch.h"
+#include "libacpi/libacpi.h"
 
 #include <xc_dom.h>
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/hvm_xs_strings.h>
 #include <xen/hvm/e820.h>
+#include <xen/viommu.h>
 
 #include "_paths.h"
 
@@ -925,6 +927,43 @@ out:
     return rc;
 }
 
+static unsigned long acpi_v2p(struct acpi_ctxt *ctxt, void *v)
+{
+    return (unsigned long)v;
+}
+
+static void *acpi_mem_alloc(struct acpi_ctxt *ctxt,
+                            uint32_t size, uint32_t align)
+{
+    return aligned_alloc(align, size);
+}
+
+static void acpi_mem_free(struct acpi_ctxt *ctxt,
+                          void *v, uint32_t size)
+{
+    /* ACPI builder currently doesn't free memory so this is just a stub */
+}
+
+static int libxl__acpi_build_dmar(libxl__gc *gc,
+                                  struct acpi_config *config,
+                                  void **data_r, int *datalen_r)
+{
+    struct acpi_ctxt ctxt;
+    void *table;
+
+    ctxt.mem_ops.alloc = acpi_mem_alloc;
+    ctxt.mem_ops.free = acpi_mem_free;
+    ctxt.mem_ops.v2p = acpi_v2p;
+
+    table = construct_dmar(&ctxt, config);
+    if ( !table )
+        return ERROR_FAIL;
+
+    *data_r = table;
+    *datalen_r = acpi_get_table_size((struct acpi_header *)table);
+    return 0;
+}
+
 static int libxl__domain_firmware(libxl__gc *gc,
                                   libxl_domain_build_info *info,
                                   struct xc_dom_image *dom)
@@ -1045,6 +1084,54 @@ static int libxl__domain_firmware(libxl__gc *gc,
         }
     }
 
+    /* Build the DMAR table according to the guest configuration and
+     * concatenate it with the other ACPI tables specified by acpi_modules. */
+    if ((info->u.hvm.viommu.type == VIOMMU_TYPE_INTEL_VTD) &&
+        !libxl_defbool_is_default(info->u.hvm.viommu.intremap) &&
+        info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
+        struct acpi_config config;
+
+        memset(&config, 0, sizeof(config));
+        if (libxl_defbool_val(info->u.hvm.viommu.intremap)) {
+            config.table_flags |= ACPI_HAS_DMAR;
+            config.dmar_flag = DMAR_INTR_REMAP;
+            if (!libxl_defbool_is_default(info->u.hvm.viommu.x2apic)
+                && !libxl_defbool_val(info->u.hvm.viommu.x2apic))
+                config.dmar_flag |= DMAR_X2APIC_OPT_OUT;
+
+            config.viommu_base_addr = info->u.hvm.viommu.base_addr;
+            data = NULL;
+            e = libxl__acpi_build_dmar(gc, &config, &data, &datalen);
+            if (e) {
+                LOGE(ERROR, "failed to build DMAR table");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+
+            libxl__ptr_add(gc, data);
+            if (datalen) {
+                if (!dom->acpi_modules[0].data) {
+                    dom->acpi_modules[0].data = data;
+                    dom->acpi_modules[0].length = (uint32_t)datalen;
+                } else {
+                    /* joint tables */
+                    void *newdata;
+                    newdata = malloc(datalen + dom->acpi_modules[0].length);
+                    if (!newdata) {
+                        LOGE(ERROR, "failed to join DMAR table to ACPI modules");
+                        rc = ERROR_FAIL;
+                        goto out;
+                    }
+                    memcpy(newdata, dom->acpi_modules[0].data,
+                           dom->acpi_modules[0].length);
+                    memcpy(newdata + dom->acpi_modules[0].length, data, datalen);
+                    dom->acpi_modules[0].data = newdata;
+                    dom->acpi_modules[0].length += (uint32_t)datalen;
+                }
+            }
+        }
+    }
+
     return 0;
 out:
     assert(rc != 0);
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 2204425..93e9e2c 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -450,6 +450,15 @@ libxl_altp2m_mode = Enumeration("altp2m_mode", [
     (3, "limited"),
     ], init_val = "LIBXL_ALTP2M_MODE_DISABLED")
 
+libxl_viommu_info = Struct("viommu_info", [
+    ("type",            uint64),
+    ("intremap",        libxl_defbool),
+    ("x2apic",          libxl_defbool),
+    ("cap",             uint64),
+    ("base_addr",       uint64),
+    ("length",          uint64),
+    ])
+
 libxl_domain_build_info = Struct("domain_build_info",[
     ("max_vcpus",       integer),
     ("avail_vcpus",     libxl_bitmap),
@@ -564,6 +573,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("serial_list",      libxl_string_list),
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
+                                       ("viommu",           libxl_viommu_info),
                                        ])),
                  ("pv", Struct(None, [("kernel", string),
                                       ("slack_memkb", MemKB),
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 856a304..584d805 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -18,6 +18,7 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <xen/hvm/e820.h>
+#include <xen/viommu.h>
 
 #include <libxl.h>
 #include <libxl_utils.h>
@@ -29,6 +30,9 @@
 
 extern void set_default_nic_values(libxl_device_nic *nic);
 
+#define VIOMMU_BASE_ADDR 0xfed90000UL
+#define VIOMMU_REGISTER_LEN 0x1000UL
+
 #define ARRAY_EXTEND_INIT__CORE(array,count,initfn,more)                \
     ({                                                                  \
         typeof((count)) array_extend_old_count = (count);               \
@@ -803,6 +807,32 @@ int parse_usbdev_config(libxl_device_usbdev *usbdev, char *token)
     return 0;
 }
 
+/* Parses viommu data and fills in the viommu struct.
+ * Returns 1 if the input token does not match one of the keys
+ * or a parsed value is not correct.  A successful parse returns 0. */
+static int parse_viommu_config(libxl_viommu_info *viommu, char *token)
+{
+    char *oparg;
+
+    if (MATCH_OPTION("type", token, oparg)) {
+        if (!strcmp(oparg, "vtd")) {
+            viommu->type = VIOMMU_TYPE_INTEL_VTD;
+        } else {
+            fprintf(stderr, "Invalid viommu type: %s\n", oparg);
+            return 1;
+        }
+    } else if (MATCH_OPTION("intremap", token, oparg)) {
+        libxl_defbool_set(&viommu->intremap, !!strtoul(oparg, NULL, 0));
+    } else if (MATCH_OPTION("x2apic", token, oparg)) {
+        libxl_defbool_set(&viommu->x2apic, !!strtoul(oparg, NULL, 0));
+    } else {
+        fprintf(stderr, "Unknown string `%s' in viommu spec\n", token);
+        return 1;
+    }
+
+    return 0;
+}
+
 void parse_config_data(const char *config_source,
                        const char *config_data,
                        int config_len,
@@ -1182,6 +1212,40 @@ void parse_config_data(const char *config_source,
 
         if (!xlu_cfg_get_long (config, "rdm_mem_boundary", &l, 0))
             b_info->u.hvm.rdm_mem_boundary_memkb = l * 1024;
+
+        if (!xlu_cfg_get_string(config, "viommu", &buf, 0)) {
+            libxl_viommu_info viommu;
+            char *p, *str2;
+
+            str2 = strdup(buf);
+            if (!str2) {
+                fprintf(stderr, "ERROR: strdup failed\n");
+                exit (1);
+            }
+            p = strtok(str2, ",");
+            if (!p) {
+                fprintf(stderr, "ERROR: invalid viommu_info format\n");
+                exit (1);
+            }
+            do {
+                if (*p == ' ')
+                    p++;
+                if (parse_viommu_config(&viommu, p)) {
+                    fprintf(stderr, "ERROR: invalid viommu setting\n");
+                    exit (1);
+                }
+            } while ((p=strtok(NULL, ",")) != NULL);
+            free(str2);
+            b_info->u.hvm.viommu.type = viommu.type;
+            b_info->u.hvm.viommu.intremap = viommu.intremap;
+            b_info->u.hvm.viommu.x2apic = viommu.x2apic;
+            if ( libxl_defbool_val(b_info->u.hvm.viommu.intremap) )
+            {
+                b_info->u.hvm.viommu.cap = VIOMMU_CAP_IRQ_REMAPPING;
+                b_info->u.hvm.viommu.base_addr = VIOMMU_BASE_ADDR;
+                b_info->u.hvm.viommu.length = VIOMMU_REGISTER_LEN;
+            }
+        }
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 10/26] libxl: create vIOMMU during domain construction
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (8 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 9/26] Tools/libacpi: Add a user configurable parameter to control vIOMMU attributes Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM Lan Tianyu
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

If the guest is configured to have a vIOMMU, create it during domain
construction.
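
As a rough usage sketch (not part of the patch), the flow below mirrors
libxl__arch_create_viommu() added here; it assumes the xc_viommu_query_cap()/
xc_viommu_create() wrappers and the libxl_viommu_info type introduced earlier
in this series are available, and it elides logging:

    #include <xenctrl.h>
    #include <libxl.h>

    /* Sketch: query the supported capabilities, then create the vIOMMU at
     * the configured MMIO location if the requested caps are acceptable. */
    static int example_create_vvtd(xc_interface *xch, uint32_t domid,
                                   const libxl_viommu_info *requested)
    {
        uint64_t cap;
        uint32_t id;
        int rc;

        rc = xc_viommu_query_cap(xch, domid, VIOMMU_TYPE_INTEL_VTD, &cap);
        if ( rc || (cap & requested->cap) != cap )
            return rc;

        return xc_viommu_create(xch, domid, VIOMMU_TYPE_INTEL_VTD,
                                requested->base_addr, requested->length,
                                requested->cap, &id);
    }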

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/libxl/libxl_arch.h   |  5 +++++
 tools/libxl/libxl_arm.c    |  7 +++++++
 tools/libxl/libxl_create.c |  4 ++++
 tools/libxl/libxl_x86.c    | 24 ++++++++++++++++++++++++
 4 files changed, 40 insertions(+)

diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 5e1fc60..7f9fc9a 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -71,6 +71,11 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+int libxl__arch_create_viommu(libxl__gc *gc,
+                              const libxl_domain_config *d_config,
+                              uint32_t domid);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index d842d88..f5bf5dd 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -1065,6 +1065,13 @@ void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_defbool_setdefault(&b_info->acpi, false);
 }
 
+int libxl__arch_create_viommu(libxl__gc *gc,
+                         const libxl_domain_config *d_config,
+                         uint32_t domid)
+{
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index bffbc45..fd9bfb8 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -557,6 +557,10 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         }
     }
 
+    rc = libxl__arch_create_viommu(gc, d_config, *domid);
+    if (rc < 0)
+        goto out;
+
     rc = libxl__arch_domain_save_config(gc, d_config, xc_config);
     if (rc < 0)
         goto out;
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 455f6f0..819ee0a 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -2,6 +2,7 @@
 #include "libxl_arch.h"
 
 #include <xc_dom.h>
+#include <xen/viommu.h>
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
@@ -587,6 +588,29 @@ void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_defbool_setdefault(&b_info->acpi, true);
 }
 
+int libxl__arch_create_viommu(libxl__gc *gc,
+                              const libxl_domain_config *d_config,
+                              uint32_t domid)
+{
+    int rc = 0;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    libxl_viommu_info viommu = d_config->b_info.u.hvm.viommu;
+
+    if (viommu.type == VIOMMU_TYPE_INTEL_VTD) {
+        uint32_t id;
+        uint64_t cap;
+
+        rc = xc_viommu_query_cap(ctx->xch, domid, viommu.type, &cap);
+        if (rc || ((cap & viommu.cap) != cap))
+            return rc;
+
+        rc = xc_viommu_create(ctx->xch, domid, viommu.type,
+                              viommu.base_addr, viommu.length, viommu.cap, &id);
+    }
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (9 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 10/26] libxl: create vIOMMU during domain construction Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 12/26] X86/vvtd: Add MMIO handler for VVTD Lan Tianyu
                   ` (14 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

This patch adds create/destroy/query functions for the emulated VT-d
and adapts them to the common vIOMMU abstraction.
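
The emulated register file added here is a plain byte array, and 64-bit
registers such as CAP_REG/ECAP_REG are stored as two 32-bit halves. The
sketch below (illustrative helper names, not part of the patch) shows the
access pattern the vvtd_get_reg_quad()/vvtd_set_reg_quad() macros implement:

    #include <stdint.h>

    /* Read/write a 64-bit register stored as two 32-bit slots in the
     * 1KB register page, as the vvtd_*_reg_quad() macros do. */
    static uint64_t example_read_reg64(const uint8_t *regs, unsigned int reg)
    {
        uint32_t lo = *(const uint32_t *)&regs[reg];
        uint32_t hi = *(const uint32_t *)&regs[reg + 4];

        return ((uint64_t)hi << 32) | lo;
    }

    static void example_write_reg64(uint8_t *regs, unsigned int reg,
                                    uint64_t val)
    {
        *(uint32_t *)&regs[reg]     = (uint32_t)val;
        *(uint32_t *)&regs[reg + 4] = (uint32_t)(val >> 32);
    }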

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/Makefile           |   1 +
 xen/arch/x86/hvm/vvtd.c             | 176 ++++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.h | 102 ++++++++++++++++-----
 xen/include/asm-x86/viommu.h        |   3 +
 4 files changed, 259 insertions(+), 23 deletions(-)
 create mode 100644 xen/arch/x86/hvm/vvtd.c

diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 0a3d0f4..82a2030 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,6 +22,7 @@ obj-y += rtc.o
 obj-y += save.o
 obj-y += stdvga.o
 obj-y += vioapic.o
+obj-y += vvtd.o
 obj-y += viridian.o
 obj-y += vlapic.o
 obj-y += vmsi.o
diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
new file mode 100644
index 0000000..e364f2b
--- /dev/null
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -0,0 +1,176 @@
+/*
+ * vvtd.c
+ *
+ * Virtualize VT-d for HVM.
+ *
+ * Copyright (C) 2017 Chao Gao, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms and conditions of the GNU General Public
+ * License, version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/domain_page.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+#include <xen/viommu.h>
+#include <xen/xmalloc.h>
+#include <asm/current.h>
+#include <asm/hvm/domain.h>
+#include <asm/page.h>
+#include <public/viommu.h>
+
+#include "../../../drivers/passthrough/vtd/iommu.h"
+
+struct hvm_hw_vvtd_regs {
+    uint8_t data[1024];
+};
+
+/* Status field of struct vvtd */
+#define VIOMMU_STATUS_IRQ_REMAPPING_ENABLED     (1 << 0)
+#define VIOMMU_STATUS_DMA_REMAPPING_ENABLED     (1 << 1)
+
+struct vvtd {
+    /* VIOMMU_STATUS_XXX_REMAPPING_ENABLED */
+    int status;
+    /* Address range of remapping hardware register-set */
+    uint64_t base_addr;
+    uint64_t length;
+    /* Point back to the owner domain */
+    struct domain *domain;
+    struct hvm_hw_vvtd_regs *regs;
+    struct page_info *regs_page;
+};
+
+static inline void vvtd_set_reg(struct vvtd *vtd, uint32_t reg,
+                                uint32_t value)
+{
+    *((uint32_t *)(&vtd->regs->data[reg])) = value;
+}
+
+static inline uint32_t vvtd_get_reg(struct vvtd *vtd, uint32_t reg)
+{
+    return *((uint32_t *)(&vtd->regs->data[reg]));
+}
+
+static inline uint8_t vvtd_get_reg_byte(struct vvtd *vtd, uint32_t reg)
+{
+    return *((uint8_t *)(&vtd->regs->data[reg]));
+}
+
+#define vvtd_get_reg_quad(vvtd, reg, val) do { \
+    (val) = vvtd_get_reg(vvtd, (reg) + 4 ); \
+    (val) = (val) << 32; \
+    (val) += vvtd_get_reg(vvtd, reg); \
+} while(0)
+#define vvtd_set_reg_quad(vvtd, reg, val) do { \
+    vvtd_set_reg(vvtd, reg, (val)); \
+    vvtd_set_reg(vvtd, (reg) + 4, (val) >> 32); \
+} while(0)
+
+static void vvtd_reset(struct vvtd *vvtd, uint64_t capability)
+{
+    uint64_t cap, ecap;
+
+    cap = DMA_CAP_NFR | DMA_CAP_SLLPS | DMA_CAP_FRO | \
+          DMA_CAP_MGAW | DMA_CAP_SAGAW | DMA_CAP_ND;
+    ecap = DMA_ECAP_IR | DMA_ECAP_EIM | DMA_ECAP_QI;
+    vvtd_set_reg(vvtd, DMAR_VER_REG, 0x10UL);
+    vvtd_set_reg_quad(vvtd, DMAR_CAP_REG, cap);
+    vvtd_set_reg_quad(vvtd, DMAR_ECAP_REG, ecap);
+    vvtd_set_reg(vvtd, DMAR_GCMD_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_GSTS_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_RTADDR_REG, 0);
+    vvtd_set_reg_quad(vvtd, DMAR_CCMD_REG, 0x0ULL);
+    vvtd_set_reg(vvtd, DMAR_FSTS_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_FECTL_REG, 0x80000000UL);
+    vvtd_set_reg(vvtd, DMAR_FEDATA_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_FEADDR_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_FEUADDR_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_PMEN_REG, 0);
+    vvtd_set_reg_quad(vvtd, DMAR_IQH_REG, 0x0ULL);
+    vvtd_set_reg_quad(vvtd, DMAR_IQT_REG, 0x0ULL);
+    vvtd_set_reg_quad(vvtd, DMAR_IQA_REG, 0x0ULL);
+    vvtd_set_reg(vvtd, DMAR_ICS_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_IECTL_REG, 0x80000000UL);
+    vvtd_set_reg(vvtd, DMAR_IEDATA_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_IEADDR_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_IEUADDR_REG, 0);
+    vvtd_set_reg(vvtd, DMAR_IRTA_REG, 0);
+}
+
+static u64 vvtd_query_caps(struct domain *d)
+{
+    return VIOMMU_CAP_IRQ_REMAPPING;
+}
+
+static int vvtd_create(struct domain *d, struct viommu *viommu)
+{
+    struct vvtd *vvtd;
+    int ret;
+
+    if ( !is_hvm_domain(d) || (viommu->length != PAGE_SIZE) ||
+        ((~vvtd_query_caps(d)) & viommu->caps) )
+        return -EINVAL;
+
+    ret = -ENOMEM;
+    vvtd = xmalloc_bytes(sizeof(struct vvtd));
+    if ( vvtd == NULL )
+        return ret;
+
+    vvtd->regs_page = alloc_domheap_page(d, MEMF_no_owner);
+    if ( vvtd->regs_page == NULL )
+        goto out1;
+
+    vvtd->regs = __map_domain_page_global(vvtd->regs_page);
+    if ( vvtd->regs == NULL )
+        goto out2;
+    clear_page(vvtd->regs);
+
+    vvtd_reset(vvtd, viommu->caps);
+    vvtd->base_addr = viommu->base_address;
+    vvtd->length = viommu->length;
+    vvtd->domain = d;
+    vvtd->status = 0;
+    return 0;
+
+out2:
+    free_domheap_page(vvtd->regs_page);
+out1:
+    xfree(vvtd);
+    return ret;
+}
+
+static int vvtd_destroy(struct viommu *viommu)
+{
+    struct vvtd *vvtd = viommu->priv;
+
+    if ( vvtd )
+    {
+        unmap_domain_page_global(vvtd->regs);
+        free_domheap_page(vvtd->regs_page);
+        xfree(vvtd);
+    }
+    return 0;
+}
+
+struct viommu_ops vvtd_hvm_vmx_ops = {
+    .query_caps = vvtd_query_caps,
+    .create = vvtd_create,
+    .destroy = vvtd_destroy
+};
+
+static int vvtd_register(void)
+{
+    viommu_register_type(VIOMMU_TYPE_INTEL_VTD, &vvtd_hvm_vmx_ops);
+    return 0;
+}
+__initcall(vvtd_register);
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 72c1a2e..2e9dcaa 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -23,31 +23,54 @@
 #include <asm/msi.h>
 
 /*
- * Intel IOMMU register specification per version 1.0 public spec.
+ * Intel IOMMU register specification per version 2.4 public spec.
  */
 
-#define    DMAR_VER_REG    0x0    /* Arch version supported by this IOMMU */
-#define    DMAR_CAP_REG    0x8    /* Hardware supported capabilities */
-#define    DMAR_ECAP_REG    0x10    /* Extended capabilities supported */
-#define    DMAR_GCMD_REG    0x18    /* Global command register */
-#define    DMAR_GSTS_REG    0x1c    /* Global status register */
-#define    DMAR_RTADDR_REG    0x20    /* Root entry table */
-#define    DMAR_CCMD_REG    0x28    /* Context command reg */
-#define    DMAR_FSTS_REG    0x34    /* Fault Status register */
-#define    DMAR_FECTL_REG    0x38    /* Fault control register */
-#define    DMAR_FEDATA_REG    0x3c    /* Fault event interrupt data register */
-#define    DMAR_FEADDR_REG    0x40    /* Fault event interrupt addr register */
-#define    DMAR_FEUADDR_REG 0x44    /* Upper address register */
-#define    DMAR_AFLOG_REG    0x58    /* Advanced Fault control */
-#define    DMAR_PMEN_REG    0x64    /* Enable Protected Memory Region */
-#define    DMAR_PLMBASE_REG 0x68    /* PMRR Low addr */
-#define    DMAR_PLMLIMIT_REG 0x6c    /* PMRR low limit */
-#define    DMAR_PHMBASE_REG 0x70    /* pmrr high base addr */
-#define    DMAR_PHMLIMIT_REG 0x78    /* pmrr high limit */
-#define    DMAR_IQH_REG    0x80    /* invalidation queue head */
-#define    DMAR_IQT_REG    0x88    /* invalidation queue tail */
-#define    DMAR_IQA_REG    0x90    /* invalidation queue addr */
-#define    DMAR_IRTA_REG   0xB8    /* intr remap */
+#define DMAR_VER_REG            0x0  /* Arch version supported by this IOMMU */
+#define DMAR_CAP_REG            0x8  /* Hardware supported capabilities */
+#define DMAR_ECAP_REG           0x10 /* Extended capabilities supported */
+#define DMAR_GCMD_REG           0x18 /* Global command register */
+#define DMAR_GSTS_REG           0x1c /* Global status register */
+#define DMAR_RTADDR_REG         0x20 /* Root entry table */
+#define DMAR_CCMD_REG           0x28 /* Context command reg */
+#define DMAR_FSTS_REG           0x34 /* Fault Status register */
+#define DMAR_FECTL_REG          0x38 /* Fault control register */
+#define DMAR_FEDATA_REG         0x3c /* Fault event interrupt data register */
+#define DMAR_FEADDR_REG         0x40 /* Fault event interrupt addr register */
+#define DMAR_FEUADDR_REG        0x44 /* Upper address register */
+#define DMAR_AFLOG_REG          0x58 /* Advanced Fault control */
+#define DMAR_PMEN_REG           0x64 /* Enable Protected Memory Region */
+#define DMAR_PLMBASE_REG        0x68 /* PMRR Low addr */
+#define DMAR_PLMLIMIT_REG       0x6c /* PMRR low limit */
+#define DMAR_PHMBASE_REG        0x70 /* pmrr high base addr */
+#define DMAR_PHMLIMIT_REG       0x78 /* pmrr high limit */
+#define DMAR_IQH_REG            0x80 /* invalidation queue head */
+#define DMAR_IQT_REG            0x88 /* invalidation queue tail */
+#define DMAR_IQT_REG_HI         0x8c
+#define DMAR_IQA_REG            0x90 /* invalidation queue addr */
+#define DMAR_IQA_REG_HI         0x94
+#define DMAR_ICS_REG            0x9c /* Invalidation complete status */
+#define DMAR_IECTL_REG          0xa0 /* Invalidation event control */
+#define DMAR_IEDATA_REG         0xa4 /* Invalidation event data */
+#define DMAR_IEADDR_REG         0xa8 /* Invalidation event address */
+#define DMAR_IEUADDR_REG        0xac /* Invalidation event address */
+#define DMAR_IRTA_REG           0xb8 /* Interrupt remapping table addr */
+#define DMAR_IRTA_REG_HI        0xbc
+#define DMAR_PQH_REG            0xc0 /* Page request queue head */
+#define DMAR_PQH_REG_HI         0xc4
+#define DMAR_PQT_REG            0xc8 /* Page request queue tail*/
+#define DMAR_PQT_REG_HI         0xcc
+#define DMAR_PQA_REG            0xd0 /* Page request queue address */
+#define DMAR_PQA_REG_HI         0xd4
+#define DMAR_PRS_REG            0xdc /* Page request status */
+#define DMAR_PECTL_REG          0xe0 /* Page request event control */
+#define DMAR_PEDATA_REG         0xe4 /* Page request event data */
+#define DMAR_PEADDR_REG         0xe8 /* Page request event address */
+#define DMAR_PEUADDR_REG        0xec /* Page event upper address */
+#define DMAR_MTRRCAP_REG        0x100 /* MTRR capability */
+#define DMAR_MTRRCAP_REG_HI     0x104
+#define DMAR_MTRRDEF_REG        0x108 /* MTRR default type */
+#define DMAR_MTRRDEF_REG_HI     0x10c
 
 #define OFFSET_STRIDE        (9)
 #define dmar_readl(dmar, reg) readl((dmar) + (reg))
@@ -58,6 +81,31 @@
 #define VER_MAJOR(v)        (((v) & 0xf0) >> 4)
 #define VER_MINOR(v)        ((v) & 0x0f)
 
+/* CAP_REG */
+/* (offset >> 4) << 24 */
+#define DMA_DOMAIN_ID_SHIFT         16  /* 16-bit domain id for 64K domains */
+#define DMA_DOMAIN_ID_MASK          ((1UL << DMA_DOMAIN_ID_SHIFT) - 1)
+#define DMA_CAP_ND                  (((DMA_DOMAIN_ID_SHIFT - 4) / 2) & 7ULL)
+#define DMA_MGAW                    39  /* Maximum Guest Address Width */
+#define DMA_CAP_MGAW                (((DMA_MGAW - 1) & 0x3fULL) << 16)
+#define DMA_MAMV                    18ULL
+#define DMA_CAP_MAMV                (DMA_MAMV << 48)
+#define DMA_CAP_PSI                 (1ULL << 39)
+#define DMA_CAP_SLLPS               ((1ULL << 34) | (1ULL << 35))
+#define DMAR_FRCD_REG_NR            1ULL
+#define DMA_CAP_FRO_OFFSET          0x220ULL
+#define DMA_CAP_FRO                 (DMA_CAP_FRO_OFFSET << 20)
+#define DMA_CAP_NFR                 ((DMAR_FRCD_REG_NR - 1) << 40)
+
+/* Supported Adjusted Guest Address Widths */
+#define DMA_CAP_SAGAW_SHIFT         8
+#define DMA_CAP_SAGAW_MASK          (0x1fULL << DMA_CAP_SAGAW_SHIFT)
+ /* 39-bit AGAW, 3-level page-table */
+#define DMA_CAP_SAGAW_39bit         (0x2ULL << DMA_CAP_SAGAW_SHIFT)
+ /* 48-bit AGAW, 4-level page-table */
+#define DMA_CAP_SAGAW_48bit         (0x4ULL << DMA_CAP_SAGAW_SHIFT)
+#define DMA_CAP_SAGAW               DMA_CAP_SAGAW_39bit
+
 /*
  * Decoding Capability Register
  */
@@ -89,6 +137,14 @@
 #define cap_afl(c)        (((c) >> 3) & 1)
 #define cap_ndoms(c)        (1 << (4 + 2 * ((c) & 0x7)))
 
+/* ECAP_REG */
+/* (offset >> 4) << 8 */
+#define DMA_ECAP_QI                 (1ULL << 1)
+/* Interrupt Remapping support */
+#define DMA_ECAP_IR                 (1ULL << 3)
+#define DMA_ECAP_EIM                (1ULL << 4)
+#define DMA_ECAP_MHMV               (15ULL << 20)
+
 /*
  * Extended Capability Register
  */
diff --git a/xen/include/asm-x86/viommu.h b/xen/include/asm-x86/viommu.h
index 1e8d4be..b730e65 100644
--- a/xen/include/asm-x86/viommu.h
+++ b/xen/include/asm-x86/viommu.h
@@ -22,6 +22,9 @@
 
 #include <xen/viommu.h>
 #include <asm/types.h>
+#include <asm/processor.h>
+
+extern struct viommu_ops vvtd_hvm_vmx_ops;
 
 /* IRQ request type */
 #define VIOMMU_REQUEST_IRQ_MSI          0
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 12/26] X86/vvtd: Add MMIO handler for VVTD
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (10 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 13/26] X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD Lan Tianyu
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

This patch adds a VVTD MMIO handler to deal with MMIO accesses.
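
The handler only accepts accesses that are 4-byte aligned, 4 or 8 bytes wide
and inside the 4KB register window. A condensed sketch of that validity check
(hypothetical helper, not part of the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch of the restrictions vvtd_read()/vvtd_write() enforce. */
    static bool example_vvtd_access_ok(uint64_t base, uint64_t addr,
                                       unsigned int len)
    {
        uint64_t offset = addr - base;

        return offset < 4096 &&          /* inside the register window */
               !(offset & 3) &&          /* 32-bit aligned offset      */
               (len == 4 || len == 8);   /* 4- or 8-byte access only   */
    }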

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c | 127 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index e364f2b..b0a23ee 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -50,6 +50,38 @@ struct vvtd {
     struct page_info *regs_page;
 };
 
+#define __DEBUG_VVTD__
+#ifdef __DEBUG_VVTD__
+extern unsigned int vvtd_debug_level;
+#define VVTD_DBG_INFO     1
+#define VVTD_DBG_TRANS    (1<<1)
+#define VVTD_DBG_RW       (1<<2)
+#define VVTD_DBG_FAULT    (1<<3)
+#define VVTD_DBG_EOI      (1<<4)
+#define VVTD_DEBUG(lvl, _f, _a...) do { \
+    if ( vvtd_debug_level & lvl ) \
+    printk("VVTD %s:" _f "\n", __func__, ## _a);    \
+} while(0)
+#else
+#define VVTD_DEBUG(fmt...) do {} while(0)
+#endif
+
+unsigned int vvtd_debug_level __read_mostly;
+integer_param("vvtd_debug", vvtd_debug_level);
+
+struct vvtd *domain_vvtd(struct domain *d)
+{
+    struct viommu_info *info = &d->viommu;
+
+    BUILD_BUG_ON(NR_VIOMMU_PER_DOMAIN != 1);
+    return (info && info->viommu[0]) ? info->viommu[0]->priv : NULL;
+}
+
+static inline struct vvtd *vcpu_vvtd(struct vcpu *v)
+{
+    return domain_vvtd(v->domain);
+}
+
 static inline void vvtd_set_reg(struct vvtd *vtd, uint32_t reg,
                                 uint32_t value)
 {
@@ -76,6 +108,100 @@ static inline uint8_t vvtd_get_reg_byte(struct vvtd *vtd, uint32_t reg)
     vvtd_set_reg(vvtd, (reg) + 4, (val) >> 32); \
 } while(0)
 
+static int vvtd_range(struct vcpu *v, unsigned long addr)
+{
+    struct vvtd *vvtd = vcpu_vvtd(v);
+
+    if ( vvtd )
+        return (addr >= vvtd->base_addr) &&
+               (addr < vvtd->base_addr + PAGE_SIZE);
+    return 0;
+}
+
+static int vvtd_read(struct vcpu *v, unsigned long addr,
+                     unsigned int len, unsigned long *pval)
+{
+    struct vvtd *vvtd = vcpu_vvtd(v);
+    unsigned int offset = addr - vvtd->base_addr;
+    unsigned int offset_aligned = offset & ~3;
+
+    if ( !pval )
+        return X86EMUL_OKAY;
+
+    VVTD_DEBUG(VVTD_DBG_RW, "READ INFO: offset %x len %d.", offset, len);
+
+    if ( offset & 3 )
+    {
+        VVTD_DEBUG(VVTD_DBG_RW, "Alignment is not canonical.");
+        return X86EMUL_OKAY;
+    }
+
+    switch( len )
+    {
+    case 4:
+        *pval = vvtd_get_reg(vvtd, offset_aligned);
+        break;
+
+    case 8:
+        vvtd_get_reg_quad(vvtd, offset_aligned, *pval);
+        break;
+
+    default:
+        break;
+    }
+
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write(struct vcpu *v, unsigned long addr,
+                      unsigned int len, unsigned long val)
+{
+    struct vvtd *vvtd = vcpu_vvtd(v);
+    unsigned int offset = addr - vvtd->base_addr;
+    unsigned int offset_aligned = offset & ~0x3;
+    int ret;
+
+    VVTD_DEBUG(VVTD_DBG_RW, "WRITE INFO: offset %x len %d val %lx.",
+               offset, len, val);
+
+    if ( (offset & 3) || ((len != 4) && (len != 8)) )
+    {
+        VVTD_DEBUG(VVTD_DBG_RW, "Alignment or length is not canonical");
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    ret = X86EMUL_OKAY;
+    if ( len == 4 )
+    {
+        switch ( offset_aligned )
+        {
+        case DMAR_IEDATA_REG:
+        case DMAR_IEADDR_REG:
+        case DMAR_IEUADDR_REG:
+        case DMAR_FEDATA_REG:
+        case DMAR_FEADDR_REG:
+        case DMAR_FEUADDR_REG:
+            vvtd_set_reg(vvtd, offset_aligned, val);
+            ret = X86EMUL_OKAY;
+            break;
+
+        default:
+            ret = X86EMUL_UNHANDLEABLE;
+            break;
+        }
+    }
+    else
+        ret = X86EMUL_UNHANDLEABLE;
+
+    return ret;
+}
+
+static const struct hvm_mmio_ops vvtd_mmio_ops = {
+    .check = vvtd_range,
+    .read = vvtd_read,
+    .write = vvtd_write
+};
+
 static void vvtd_reset(struct vvtd *vvtd, uint64_t capability)
 {
     uint64_t cap, ecap;
@@ -140,6 +266,7 @@ static int vvtd_create(struct domain *d, struct viommu *viommu)
     vvtd->length = viommu->length;
     vvtd->domain = d;
     vvtd->status = 0;
+    register_mmio_handler(d, &vvtd_mmio_ops);
     return 0;
 
 out2:
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 13/26] X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (11 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 12/26] X86/vvtd: Add MMIO handler for VVTD Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 14/26] X86/vvtd: Process interrupt remapping request Lan Tianyu
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Software sets the SIRTP field of the Global Command register to set/update
the interrupt remapping table pointer used by hardware. The interrupt
remapping table pointer itself is specified through the Interrupt Remapping
Table Address (IRTA_REG) register.

This patch emulates this operation and adds some new fields to VVTD to track
information about the interrupt remapping table (e.g. the table's gfn and the
maximum number of supported entries).
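
For reference, IRTA_REG holds the 4KB-aligned table address in bits 63:12,
EIME in bit 11, and the table size in bits 3:0 encoded as 2^(S+1) entries.
The sketch below (hypothetical names, not part of the patch) decodes the
register the same way the DMA_IRTA_ADDR()/DMA_IRTA_SIZE()/DMA_IRTA_EIME()
macros added here do:

    #include <stdbool.h>
    #include <stdint.h>

    struct example_irta {
        uint64_t table_addr;      /* 4KB-aligned IRT base address        */
        unsigned int nr_entries;  /* 2^(S+1) interrupt remapping entries */
        bool eime;                /* Extended Interrupt Mode enabled?    */
    };

    /* Decode a guest-written IRTA_REG value on a SIRTP command. */
    static struct example_irta example_decode_irta(uint64_t irta)
    {
        struct example_irta res = {
            .table_addr = irta & ~0xfffULL,
            .nr_entries = 1u << ((irta & 0xf) + 1),
            .eime       = !!(irta & (1ULL << 11)),
        };

        return res;
    }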

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c             | 70 +++++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.h |  9 ++++-
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index b0a23ee..b6fd34b 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -46,6 +46,13 @@ struct vvtd {
     uint64_t length;
     /* Point back to the owner domain */
     struct domain *domain;
+    /* Is in Extended Interrupt Mode? */
+    bool eim;
+    /* Max remapping entries in IRT */
+    int irt_max_entry;
+    /* Interrupt remapping table base gfn */
+    uint64_t irt;
+
     struct hvm_hw_vvtd_regs *regs;
     struct page_info *regs_page;
 };
@@ -82,6 +89,11 @@ static inline struct vvtd *vcpu_vvtd(struct vcpu *v)
     return domain_vvtd(v->domain);
 }
 
+static inline void __vvtd_set_bit(struct vvtd *vvtd, uint32_t reg, int nr)
+{
+    return __set_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
+}
+
 static inline void vvtd_set_reg(struct vvtd *vtd, uint32_t reg,
                                 uint32_t value)
 {
@@ -108,6 +120,41 @@ static inline uint8_t vvtd_get_reg_byte(struct vvtd *vtd, uint32_t reg)
     vvtd_set_reg(vvtd, (reg) + 4, (val) >> 32); \
 } while(0)
 
+static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
+{
+    uint64_t irta;
+
+    if ( !(val & DMA_GCMD_SIRTP) )
+        return X86EMUL_OKAY;
+
+    vvtd_get_reg_quad(vvtd, DMAR_IRTA_REG, irta);
+    vvtd->irt = DMA_IRTA_ADDR(irta) >> PAGE_SHIFT;
+    vvtd->irt_max_entry = DMA_IRTA_SIZE(irta);
+    vvtd->eim = DMA_IRTA_EIME(irta);
+    VVTD_DEBUG(VVTD_DBG_RW, "Update IR info (addr=%lx eim=%d size=%d).",
+               vvtd->irt, vvtd->eim, vvtd->irt_max_entry);
+    __vvtd_set_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_SIRTPS_BIT);
+
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write_gcmd(struct vvtd *vvtd, uint32_t val)
+{
+    uint32_t orig = vvtd_get_reg(vvtd, DMAR_GSTS_REG);
+    uint32_t changed = orig ^ val;
+
+    if ( !changed )
+        return X86EMUL_OKAY;
+    if ( (changed & (changed - 1)) )
+        VVTD_DEBUG(VVTD_DBG_RW, "Guest attempts to update multiple fields "
+                     "of GCMD_REG in one write transation.");
+
+    if ( changed & DMA_GCMD_SIRTP )
+        vvtd_handle_gcmd_sirtp(vvtd, val);
+
+    return X86EMUL_OKAY;
+}
+
 static int vvtd_range(struct vcpu *v, unsigned long addr)
 {
     struct vvtd *vvtd = vcpu_vvtd(v);
@@ -175,12 +222,18 @@ static int vvtd_write(struct vcpu *v, unsigned long addr,
     {
         switch ( offset_aligned )
         {
+        case DMAR_GCMD_REG:
+            ret = vvtd_write_gcmd(vvtd, val);
+            break;
+
         case DMAR_IEDATA_REG:
         case DMAR_IEADDR_REG:
         case DMAR_IEUADDR_REG:
         case DMAR_FEDATA_REG:
         case DMAR_FEADDR_REG:
         case DMAR_FEUADDR_REG:
+        case DMAR_IRTA_REG:
+        case DMAR_IRTA_REG_HI:
             vvtd_set_reg(vvtd, offset_aligned, val);
             ret = X86EMUL_OKAY;
             break;
@@ -190,6 +243,20 @@ static int vvtd_write(struct vcpu *v, unsigned long addr,
             break;
         }
     }
+    else if ( len == 8 )
+    {
+        switch ( offset_aligned )
+        {
+        case DMAR_IRTA_REG:
+            vvtd_set_reg_quad(vvtd, DMAR_IRTA_REG, val);
+            ret = X86EMUL_OKAY;
+            break;
+
+        default:
+            ret = X86EMUL_UNHANDLEABLE;
+            break;
+        }
+    }
     else
         ret = X86EMUL_UNHANDLEABLE;
 
@@ -266,6 +333,9 @@ static int vvtd_create(struct domain *d, struct viommu *viommu)
     vvtd->length = viommu->length;
     vvtd->domain = d;
     vvtd->status = 0;
+    vvtd->eim = 0;
+    vvtd->irt = 0;
+    vvtd->irt_max_entry = 0;
     register_mmio_handler(d, &vvtd_mmio_ops);
     return 0;
 
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 2e9dcaa..fd040d0 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -195,9 +195,16 @@
 #define DMA_GSTS_WBFS   (((u64)1) << 27)
 #define DMA_GSTS_QIES   (((u64)1) <<26)
 #define DMA_GSTS_IRES   (((u64)1) <<25)
-#define DMA_GSTS_SIRTPS (((u64)1) << 24)
+#define DMA_GSTS_SIRTPS_BIT     24
+#define DMA_GSTS_SIRTPS (((u64)1) << DMA_GSTS_SIRTPS_BIT)
 #define DMA_GSTS_CFIS   (((u64)1) <<23)
 
+/* IRTA_REG */
+#define DMA_IRTA_ADDR(val)      (val & ~0xfffULL)
+#define DMA_IRTA_EIME(val)      (!!(val & (1 << 11)))
+#define DMA_IRTA_S(val)         (val & 0xf)
+#define DMA_IRTA_SIZE(val)      (1UL << (DMA_IRTA_S(val) + 1))
+
 /* PMEN_REG */
 #define DMA_PMEN_EPM    (((u32)1) << 31)
 #define DMA_PMEN_PRS    (((u32)1) << 0)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 14/26] X86/vvtd: Process interrupt remapping request
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (12 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 13/26] X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 15/26] x86/vvtd: decode interrupt attribute from IRTE Lan Tianyu
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

When an interrupt request in remappable format arrives, the remapping hardware
computes the interrupt_index per the algorithm described in the VT-d spec
section "Interrupt Remapping Table", interprets the IRTE and generates a
remapped interrupt request.

This patch introduces viommu_handle_irq_request() to emulate how the remapping
hardware handles such a request.
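
For a remappable-format MSI, index[14:0] lives in address bits 19:5,
index[15] in address bit 2, and when SHV (address bit 3) is set the 16-bit
subhandle in the data register is added on top. The helper below is
illustrative only and mirrors the MSI_REMAP_ENTRY_INDEX() macro this patch
adds to vtd.h:

    #include <stdint.h>

    /* Compute the interrupt_index of a remappable-format MSI; bit
     * positions follow struct msi_msg_remap_entry in vtd.h. */
    static uint32_t example_msi_index(uint32_t addr_lo, uint32_t data)
    {
        uint32_t index = ((addr_lo >> 5) & 0x7fff) |     /* index[14:0]   */
                         (((addr_lo >> 2) & 1) << 15);   /* index[15]     */

        if ( addr_lo & (1u << 3) )                       /* SHV set?      */
            index += (uint16_t)data;                     /* add subhandle */

        return index;
    }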

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c             | 279 +++++++++++++++++++++++++++++++++++-
 xen/drivers/passthrough/vtd/iommu.h |  21 +++
 xen/drivers/passthrough/vtd/vtd.h   |   6 +
 3 files changed, 305 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index b6fd34b..c993a15 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -23,12 +23,17 @@
 #include <xen/types.h>
 #include <xen/viommu.h>
 #include <xen/xmalloc.h>
+#include <asm/apic.h>
 #include <asm/current.h>
+#include <asm/event.h>
 #include <asm/hvm/domain.h>
+#include <asm/io_apic.h>
 #include <asm/page.h>
+#include <asm/p2m.h>
 #include <public/viommu.h>
 
 #include "../../../drivers/passthrough/vtd/iommu.h"
+#include "../../../drivers/passthrough/vtd/vtd.h"
 
 struct hvm_hw_vvtd_regs {
     uint8_t data[1024];
@@ -38,6 +43,9 @@ struct hvm_hw_vvtd_regs {
 #define VIOMMU_STATUS_IRQ_REMAPPING_ENABLED     (1 << 0)
 #define VIOMMU_STATUS_DMA_REMAPPING_ENABLED     (1 << 1)
 
+#define vvtd_irq_remapping_enabled(vvtd) \
+            (vvtd->status & VIOMMU_STATUS_IRQ_REMAPPING_ENABLED)
+
 struct vvtd {
     /* VIOMMU_STATUS_XXX_REMAPPING_ENABLED */
     int status;
@@ -120,6 +128,140 @@ static inline uint8_t vvtd_get_reg_byte(struct vvtd *vtd, uint32_t reg)
     vvtd_set_reg(vvtd, (reg) + 4, (val) >> 32); \
 } while(0)
 
+static int map_guest_page(struct domain *d, uint64_t gfn, void **virt)
+{
+    struct page_info *p;
+
+    p = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !p )
+        return -EINVAL;
+
+    if ( !get_page_type(p, PGT_writable_page) )
+    {
+        put_page(p);
+        return -EINVAL;
+    }
+
+    *virt = __map_domain_page_global(p);
+    if ( !*virt )
+    {
+        put_page_and_type(p);
+        return -ENOMEM;
+    }
+    return 0;
+}
+
+static void unmap_guest_page(void *virt)
+{
+    struct page_info *page;
+
+    if ( !virt )
+        return;
+
+    virt = (void *)((unsigned long)virt & PAGE_MASK);
+    page = mfn_to_page(domain_page_map_to_mfn(virt));
+
+    unmap_domain_page_global(virt);
+    put_page_and_type(page);
+}
+
+static void vvtd_inj_irq(
+    struct vlapic *target,
+    uint8_t vector,
+    uint8_t trig_mode,
+    uint8_t delivery_mode)
+{
+    VVTD_DEBUG(VVTD_DBG_INFO, "dest=v%d, delivery_mode=%x vector=%d "
+               "trig_mode=%d.",
+               vlapic_vcpu(target)->vcpu_id, delivery_mode,
+               vector, trig_mode);
+
+    ASSERT((delivery_mode == dest_Fixed) ||
+           (delivery_mode == dest_LowestPrio));
+
+    vlapic_set_irq(target, vector, trig_mode);
+}
+
+static int vvtd_delivery(
+    struct domain *d, int vector,
+    uint32_t dest, uint8_t dest_mode,
+    uint8_t delivery_mode, uint8_t trig_mode)
+{
+    struct vlapic *target;
+    struct vcpu *v;
+
+    switch ( delivery_mode )
+    {
+    case dest_LowestPrio:
+        target = vlapic_lowest_prio(d, NULL, 0, dest, dest_mode);
+        if ( target != NULL )
+        {
+            vvtd_inj_irq(target, vector, trig_mode, delivery_mode);
+            break;
+        }
+        VVTD_DEBUG(VVTD_DBG_INFO, "null round robin: vector=%02x\n", vector);
+        break;
+
+    case dest_Fixed:
+        for_each_vcpu ( d, v )
+            if ( vlapic_match_dest(vcpu_vlapic(v), NULL, 0, dest,
+                                   dest_mode) )
+                vvtd_inj_irq(vcpu_vlapic(v), vector,
+                             trig_mode, delivery_mode);
+        break;
+
+    case dest_NMI:
+        for_each_vcpu ( d, v )
+            if ( vlapic_match_dest(vcpu_vlapic(v), NULL, 0, dest, dest_mode)
+                 && !test_and_set_bool(v->nmi_pending) )
+                vcpu_kick(v);
+        break;
+
+    default:
+        printk(XENLOG_G_WARNING
+               "%pv: Unsupported VTD delivery mode %d for Dom%d\n",
+               current, delivery_mode, d->domain_id);
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static uint32_t irq_remapping_request_index(struct irq_remapping_request *irq)
+{
+    if ( irq->type == VIOMMU_REQUEST_IRQ_MSI )
+    {
+        struct msi_msg_remap_entry msi_msg = { { irq->msg.msi.addr }, 0,
+                                               irq->msg.msi.data };
+
+        return MSI_REMAP_ENTRY_INDEX(msi_msg);
+    }
+    else if ( irq->type == VIOMMU_REQUEST_IRQ_APIC )
+    {
+        struct IO_APIC_route_remap_entry remap_rte = { { irq->msg.rte } };
+
+        return IOAPIC_REMAP_ENTRY_INDEX(remap_rte);
+    }
+    BUG();
+    return 0;
+}
+
+static inline uint32_t irte_dest(struct vvtd *vvtd, uint32_t dest)
+{
+    uint64_t irta;
+
+    vvtd_get_reg_quad(vvtd, DMAR_IRTA_REG, irta);
+    /* In xAPIC mode, only 8-bits([15:8]) are valid */
+    return DMA_IRTA_EIME(irta) ? dest : MASK_EXTR(dest, IRTE_xAPIC_DEST_MASK);
+}
+
+static int vvtd_record_fault(struct vvtd *vvtd,
+                             struct irq_remapping_request *irq,
+                             int reason)
+{
+    return 0;
+}
+
 static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
 {
     uint64_t irta;
@@ -269,6 +411,140 @@ static const struct hvm_mmio_ops vvtd_mmio_ops = {
     .write = vvtd_write
 };
 
+static bool ir_sid_valid(struct iremap_entry *irte, uint32_t source_id)
+{
+    return TRUE;
+}
+
+/*
+ * @record_fault: a flag indicating whether to record a fault when one
+ * happens while fetching the vIRTE (true means record it, false means
+ * ignore it).  record_fault = false is used by callers that only care
+ * about the encoded interrupt attributes.
+ */
+static int vvtd_get_entry(struct vvtd *vvtd,
+                          struct irq_remapping_request *irq,
+                          struct iremap_entry *dest,
+                          bool record_fault)
+{
+    int ret;
+    uint32_t entry = irq_remapping_request_index(irq);
+    struct iremap_entry  *irte, *irt_page;
+
+    VVTD_DEBUG(VVTD_DBG_TRANS, "interpret a request with index %x", entry);
+
+    if ( entry > vvtd->irt_max_entry )
+    {
+        ret = VTD_FR_IR_INDEX_OVER;
+        goto handle_fault;
+    }
+
+    ret = map_guest_page(vvtd->domain, vvtd->irt + (entry >> IREMAP_ENTRY_ORDER),
+                         (void**)&irt_page);
+    if ( ret )
+    {
+        ret = VTD_FR_IR_ROOT_INVAL;
+        goto handle_fault;
+    }
+
+    irte = irt_page + (entry % (1 << IREMAP_ENTRY_ORDER));
+    dest->val = irte->val;
+    if ( !qinval_present(*irte) )
+    {
+        ret = VTD_FR_IR_ENTRY_P;
+        goto unmap_handle_fault;
+    }
+
+    /* Check reserved bits */
+    if ( (irte->remap.res_1 || irte->remap.res_2 || irte->remap.res_3 ||
+          irte->remap.res_4) )
+    {
+        ret = VTD_FR_IR_IRTE_RSVD;
+        goto unmap_handle_fault;
+    }
+
+    if (!ir_sid_valid(irte, irq->source_id))
+    {
+        ret = VTD_FR_IR_SID_ERR;
+        goto unmap_handle_fault;
+    }
+    unmap_guest_page(irt_page);
+    return 0;
+
+ unmap_handle_fault:
+    unmap_guest_page(irt_page);
+ handle_fault:
+    if ( !record_fault )
+        return ret;
+
+    switch ( ret )
+    {
+    case VTD_FR_IR_SID_ERR:
+    case VTD_FR_IR_IRTE_RSVD:
+    case VTD_FR_IR_ENTRY_P:
+        if ( qinval_fault_disable(*irte) )
+            break;
+    /* fall through */
+    case VTD_FR_IR_INDEX_OVER:
+    case VTD_FR_IR_ROOT_INVAL:
+        vvtd_record_fault(vvtd, irq, ret);
+        break;
+
+    default:
+        gdprintk(XENLOG_G_INFO, "Can't handle VT-d fault %x\n", ret);
+    }
+    return ret;
+}
+
+static int vvtd_irq_request_sanity_check(struct vvtd *vvtd,
+                                         struct irq_remapping_request *irq)
+{
+    if ( irq->type == VIOMMU_REQUEST_IRQ_APIC )
+    {
+        struct IO_APIC_route_remap_entry rte = { { irq->msg.rte } };
+
+        ASSERT(rte.format);
+        return (!rte.reserved) ? 0 : VTD_FR_IR_REQ_RSVD;
+    }
+    else if ( irq->type == VIOMMU_REQUEST_IRQ_MSI )
+    {
+        struct msi_msg_remap_entry msi_msg = { { irq->msg.msi.addr } };
+
+        ASSERT(msi_msg.address_lo.format);
+        ASSERT(msi_msg.address_lo.addr_id_val == 0xfee);
+        return 0;
+    }
+    BUG();
+    return 0;
+}
+
+static int vvtd_handle_irq_request(struct domain *d,
+                                   struct irq_remapping_request *irq)
+{
+    struct iremap_entry irte;
+    int ret;
+    struct vvtd *vvtd = domain_vvtd(d);
+
+    if ( !vvtd || !vvtd_irq_remapping_enabled(vvtd) )
+        return -EINVAL;
+
+    ret = vvtd_irq_request_sanity_check(vvtd, irq);
+    if ( ret )
+    {
+        vvtd_record_fault(vvtd, irq, ret);
+        return ret;
+    }
+
+    if ( !vvtd_get_entry(vvtd, irq, &irte, true) )
+    {
+        vvtd_delivery(vvtd->domain, irte.remap.vector,
+                      irte_dest(vvtd, irte.remap.dst), irte.remap.dm,
+                      irte.remap.dlm, irte.remap.tm);
+        return 0;
+    }
+    return -EFAULT;
+}
+
 static void vvtd_reset(struct vvtd *vvtd, uint64_t capability)
 {
     uint64_t cap, ecap;
@@ -362,7 +638,8 @@ static int vvtd_destroy(struct viommu *viommu)
 struct viommu_ops vvtd_hvm_vmx_ops = {
     .query_caps = vvtd_query_caps,
     .create = vvtd_create,
-    .destroy = vvtd_destroy
+    .destroy = vvtd_destroy,
+    .handle_irq_request = vvtd_handle_irq_request
 };
 
 static int vvtd_register(void)
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index fd040d0..1c53d22 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -247,6 +247,21 @@
 #define dma_frcd_source_id(c) (c & 0xffff)
 #define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
 
+enum VTD_FAULT_TYPE
+{
+    /* Interrupt remapping transition faults */
+    VTD_FR_IR_REQ_RSVD = 0x20,   /* One or more IR request reserved
+                                  * fields set */
+    VTD_FR_IR_INDEX_OVER = 0x21, /* Index value greater than max */
+    VTD_FR_IR_ENTRY_P = 0x22,    /* Present (P) not set in IRTE */
+    VTD_FR_IR_ROOT_INVAL = 0x23, /* IR Root table invalid */
+    VTD_FR_IR_IRTE_RSVD = 0x24,  /* IRTE Rsvd field non-zero with
+                                  * Present flag set */
+    VTD_FR_IR_REQ_COMPAT = 0x25, /* Encountered compatible IR
+                                  * request while disabled */
+    VTD_FR_IR_SID_ERR = 0x26,    /* Invalid Source-ID */
+};
+
 /*
  * 0: Present
  * 1-11: Reserved
@@ -387,6 +402,12 @@ struct iremap_entry {
 };
 
 /*
+ * When VT-d does not enable Extended Interrupt Mode, hardware interprets
+ * only 8 bits ([15:8]) of the Destination-ID field in the IRTEs.
+ */
+#define IRTE_xAPIC_DEST_MASK 0xff00
+
+/*
  * Posted-interrupt descriptor address is 64 bits with 64-byte aligned, only
 * the upper 26 bits of least significant 32 bits is available.
  */
diff --git a/xen/drivers/passthrough/vtd/vtd.h b/xen/drivers/passthrough/vtd/vtd.h
index bb8889f..1032b46 100644
--- a/xen/drivers/passthrough/vtd/vtd.h
+++ b/xen/drivers/passthrough/vtd/vtd.h
@@ -47,6 +47,8 @@ struct IO_APIC_route_remap_entry {
     };
 };
 
+#define IOAPIC_REMAP_ENTRY_INDEX(x) ((x.index_15 << 15) + x.index_0_14)
+
 struct msi_msg_remap_entry {
     union {
         u32 val;
@@ -65,4 +67,8 @@ struct msi_msg_remap_entry {
     u32	data;		/* msi message data */
 };
 
+#define MSI_REMAP_ENTRY_INDEX(x) ((x.address_lo.index_15 << 15) + \
+                                  x.address_lo.index_0_14 + \
+                                  (x.address_lo.SHV ? (uint16_t)x.data : 0))
+
 #endif // _VTD_H_
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 15/26] x86/vvtd: decode interrupt attribute from IRTE
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (13 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 14/26] X86/vvtd: Process interrupt remapping request Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 16/26] x86/vioapic: Hook interrupt delivery of vIOAPIC Lan Tianyu
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Previously, interrupt attributes could be extracted from the MSI message or
the IOAPIC RTE. However, with interrupt remapping enabled, the attributes
are encoded in the associated IRTE. This callback is for cases in which the
caller needs to acquire the interrupt attributes.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index c993a15..57932cb 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -545,6 +545,25 @@ static int vvtd_handle_irq_request(struct domain *d,
     return -EFAULT;
 }
 
+static int vvtd_get_irq_info(struct domain *d,
+                             struct irq_remapping_request *irq,
+                             struct irq_remapping_info *info)
+{
+    int ret;
+    struct iremap_entry irte;
+    struct vvtd *vvtd = domain_vvtd(d);
+
+    ret = vvtd_get_entry(vvtd, irq, &irte, false);
+    if ( ret )
+        return -ret;
+
+    info->vector = irte.remap.vector;
+    info->dest = irte_dest(vvtd, irte.remap.dst);
+    info->dest_mode = irte.remap.dm;
+    info->delivery_mode = irte.remap.dlm;
+    return 0;
+}
+
 static void vvtd_reset(struct vvtd *vvtd, uint64_t capability)
 {
     uint64_t cap, ecap;
@@ -639,7 +658,8 @@ struct viommu_ops vvtd_hvm_vmx_ops = {
     .query_caps = vvtd_query_caps,
     .create = vvtd_create,
     .destroy = vvtd_destroy,
-    .handle_irq_request = vvtd_handle_irq_request
+    .handle_irq_request = vvtd_handle_irq_request,
+    .get_irq_info = vvtd_get_irq_info
 };
 
 static int vvtd_register(void)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 16/26] x86/vioapic: Hook interrupt delivery of vIOAPIC
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (14 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 15/26] x86/vvtd: decode interrupt attribute from IRTE Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 17/26] X86/vvtd: Enable Queued Invalidation through GCMD Lan Tianyu
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

When interrupt remapping is enabled, an IOAPIC Redirection Entry may be in
remapping format. If so, generate an irq_remapping_request and call the
common vIOMMU abstraction's callback to handle the interrupt request. The
device model is responsible for checking the request's validity.
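
A standalone sketch of the dispatch this hook introduces (type and function
names are placeholders, not the Xen ones): in the VT-d remappable RTE layout,
bit 48 of the IOAPIC RTE is the Format bit, and when it is set the entry
carries an interrupt index rather than a vector, so delivery has to go
through the vIOMMU callback.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder for Xen's struct irq_remapping_request. */
    struct example_remap_request {
        uint32_t source_id;    /* vIOAPIC id */
        uint64_t rte_val;
    };

    /* Stand-in for viommu_handle_irq_request(d, 0, &request). */
    static void example_viommu_handle(const struct example_remap_request *req)
    {
        printf("route RTE %#lx from IOAPIC %u to the vIOMMU\n",
               (unsigned long)req->rte_val, req->source_id);
    }

    static void example_deliver_legacy(uint64_t rte_val)
    {
        printf("deliver RTE %#lx directly (legacy format)\n",
               (unsigned long)rte_val);
    }

    static void example_ioapic_deliver(uint32_t ioapic_id, uint64_t rte_val)
    {
        bool remap_format = (rte_val >> 48) & 1;   /* Format bit */

        if ( remap_format )
        {
            struct example_remap_request req = {
                .source_id = ioapic_id,
                .rte_val = rte_val,
            };

            example_viommu_handle(&req);
            return;
        }

        example_deliver_legacy(rte_val);
    }

    int main(void)
    {
        example_ioapic_deliver(0, 1ULL << 48);   /* remapping format */
        example_ioapic_deliver(0, 0x30);         /* legacy, vector 0x30 */
        return 0;
    }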

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vioapic.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index abcc473..40f529c 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -30,6 +30,7 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
+#include <xen/viommu.h>
 #include <public/hvm/ioreq.h>
 #include <asm/hvm/io.h>
 #include <asm/hvm/vpic.h>
@@ -39,6 +40,8 @@
 #include <asm/event.h>
 #include <asm/io_apic.h>
 
+#include "../../../drivers/passthrough/vtd/vtd.h"
+
 /* HACK: Route IRQ0 only to VCPU0 to prevent time jumps. */
 #define IRQ0_SPECIAL_ROUTING 1
 
@@ -327,9 +330,20 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
     struct vlapic *target;
     struct vcpu *v;
     unsigned int irq = vioapic->base_gsi + pin;
+    struct IO_APIC_route_remap_entry rte = { { vioapic->redirtbl[pin].bits } };
 
     ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
 
+    if ( rte.format )
+    {
+        struct irq_remapping_request request;
+
+        irq_request_ioapic_fill(&request, vioapic->id, rte.val);
+        /* Currently, only viommu 0 is supported */
+        viommu_handle_irq_request(d, 0, &request);
+        return;
+    }
+
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
                 "vector=%x trig_mode=%x",
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 17/26] X86/vvtd: Enable Queued Invalidation through GCMD
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (15 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 16/26] x86/vioapic: Hook interrupt delivery of vIOAPIC Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 18/26] X86/vvtd: Enable Interrupt Remapping " Lan Tianyu
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Software writes the QIE field of GCMD to enable or disable queued
invalidation. This patch emulates the QIE field of GCMD.
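
For reference, a toy standalone model of the GCMD/GSTS handshake the
emulation follows (register state reduced to the one bit of interest; names
are local to the sketch): software sets or clears the control bit in GCMD
and then polls the matching status bit in GSTS, so the emulation simply
mirrors the written bit into the status register.

    #include <stdint.h>
    #include <stdio.h>

    #define EX_GCMD_QIE   (1u << 26)   /* Queued Invalidation Enable */
    #define EX_GSTS_QIES  (1u << 26)   /* Queued Invalidation Enable Status */

    struct example_vvtd {
        uint32_t gsts;                 /* emulated DMAR_GSTS_REG */
    };

    /* Same effect as vvtd_handle_gcmd_qie(): reflect the request in GSTS. */
    static void example_gcmd_qie(struct example_vvtd *v, uint32_t gcmd_val)
    {
        if ( gcmd_val & EX_GCMD_QIE )
            v->gsts |= EX_GSTS_QIES;
        else
            v->gsts &= ~EX_GSTS_QIES;
    }

    int main(void)
    {
        struct example_vvtd v = { 0 };

        example_gcmd_qie(&v, EX_GCMD_QIE);
        printf("GSTS after enable:  %#x\n", v.gsts);
        example_gcmd_qie(&v, 0);
        printf("GSTS after disable: %#x\n", v.gsts);
        return 0;
    }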

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c             | 18 ++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.h |  3 ++-
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index 57932cb..be3acd5 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -102,6 +102,11 @@ static inline void __vvtd_set_bit(struct vvtd *vvtd, uint32_t reg, int nr)
     return __set_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
 }
 
+static inline void __vvtd_clear_bit(struct vvtd *vvtd, uint32_t reg, int nr)
+{
+    return __clear_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
+}
+
 static inline void vvtd_set_reg(struct vvtd *vtd, uint32_t reg,
                                 uint32_t value)
 {
@@ -262,6 +267,17 @@ static int vvtd_record_fault(struct vvtd *vvtd,
     return 0;
 }
 
+static int vvtd_handle_gcmd_qie(struct vvtd *vvtd, uint32_t val)
+{
+    VVTD_DEBUG(VVTD_DBG_RW, "%sable Queued Invalidation.", val & DMA_GCMD_QIE ? "En" : "Dis");
+
+    if ( val & DMA_GCMD_QIE )
+        __vvtd_set_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_QIES_BIT);
+    else
+        __vvtd_clear_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_QIES_BIT);
+    return X86EMUL_OKAY;
+}
+
 static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
 {
     uint64_t irta;
@@ -293,6 +309,8 @@ static int vvtd_write_gcmd(struct vvtd *vvtd, uint32_t val)
 
     if ( changed & DMA_GCMD_SIRTP )
         vvtd_handle_gcmd_sirtp(vvtd, val);
+    if ( changed & DMA_GCMD_QIE )
+        vvtd_handle_gcmd_qie(vvtd, val);
 
     return X86EMUL_OKAY;
 }
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 1c53d22..2d60df6 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -193,7 +193,8 @@
 #define DMA_GSTS_FLS    (((u64)1) << 29)
 #define DMA_GSTS_AFLS   (((u64)1) << 28)
 #define DMA_GSTS_WBFS   (((u64)1) << 27)
-#define DMA_GSTS_QIES   (((u64)1) <<26)
+#define DMA_GSTS_QIES_BIT       26
+#define DMA_GSTS_QIES           (((u64)1) << DMA_GSTS_QIES_BIT)
 #define DMA_GSTS_IRES   (((u64)1) <<25)
 #define DMA_GSTS_SIRTPS_BIT     24
 #define DMA_GSTS_SIRTPS (((u64)1) << DMA_GSTS_SIRTPS_BIT)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 18/26] X86/vvtd: Enable Interrupt Remapping through GCMD
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (16 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 17/26] X86/vvtd: Enable Queued Invalidation through GCMD Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 19/26] x86/vpt: Get interrupt vector through a vioapic interface Lan Tianyu
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Software writes the IRE field of GCMD to enable/disable interrupt remapping.
This patch emulates the IRE field of GCMD.
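
For context, the GCMD write path only acts on control bits that actually
toggled. Below is a standalone sketch of that dispatch shape; how `changed`
is derived here (by comparing against a cached status word) is an assumption
of the sketch, and all names are local to it.

    #include <stdint.h>
    #include <stdio.h>

    #define EX_QIE    (1u << 26)
    #define EX_IRE    (1u << 25)
    #define EX_SIRTP  (1u << 24)

    /* Rough shape of vvtd_write_gcmd(): find the control bits that toggled
     * and hand each one to its own handler. */
    static void example_write_gcmd(uint32_t *status, uint32_t val)
    {
        uint32_t changed = *status ^ val;   /* assumption: compare vs. status */

        if ( changed & EX_SIRTP )
            printf("latch the interrupt remapping table pointer\n");
        if ( changed & EX_QIE )
            printf("%s queued invalidation\n",
                   (val & EX_QIE) ? "enable" : "disable");
        if ( changed & EX_IRE )
            printf("%s interrupt remapping\n",
                   (val & EX_IRE) ? "enable" : "disable");

        /* The real handlers update the status bits; mirror them here. */
        *status = (*status & ~(EX_QIE | EX_IRE)) | (val & (EX_QIE | EX_IRE));
    }

    int main(void)
    {
        uint32_t status = 0;

        example_write_gcmd(&status, EX_QIE | EX_IRE);
        example_write_gcmd(&status, EX_QIE);   /* only IRE toggles (off) */
        return 0;
    }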

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c             | 26 ++++++++++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.h |  3 ++-
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index be3acd5..10b0cd0 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -278,6 +278,24 @@ static int vvtd_handle_gcmd_qie(struct vvtd *vvtd, uint32_t val)
     return X86EMUL_OKAY;
 }
 
+static int vvtd_handle_gcmd_ire(struct vvtd *vvtd, uint32_t val)
+{
+    VVTD_DEBUG(VVTD_DBG_RW, "%sable Interrupt Remapping.", val & DMA_GCMD_IRE ? "En" : "Dis");
+
+    if ( val & DMA_GCMD_IRE )
+    {
+        vvtd->status |= VIOMMU_STATUS_IRQ_REMAPPING_ENABLED;
+        __vvtd_set_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_IRES_BIT);
+    }
+    else
+    {
+        vvtd->status &= ~VIOMMU_STATUS_IRQ_REMAPPING_ENABLED;
+        __vvtd_clear_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_IRES_BIT);
+    }
+
+    return X86EMUL_OKAY;
+}
+
 static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
 {
     uint64_t irta;
@@ -285,6 +303,10 @@ static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
     if ( !(val & DMA_GCMD_SIRTP) )
         return X86EMUL_OKAY;
 
+    if ( vvtd_irq_remapping_enabled(vvtd) )
+        VVTD_DEBUG(VVTD_DBG_RW, "Update Interrupt Remapping Table when "
+                   "active." );
+
     vvtd_get_reg_quad(vvtd, DMAR_IRTA_REG, irta);
     vvtd->irt = DMA_IRTA_ADDR(irta) >> PAGE_SHIFT;
     vvtd->irt_max_entry = DMA_IRTA_SIZE(irta);
@@ -311,6 +333,10 @@ static int vvtd_write_gcmd(struct vvtd *vvtd, uint32_t val)
         vvtd_handle_gcmd_sirtp(vvtd, val);
     if ( changed & DMA_GCMD_QIE )
         vvtd_handle_gcmd_qie(vvtd, val);
+    if ( changed & DMA_GCMD_IRE )
+        vvtd_handle_gcmd_ire(vvtd, val);
+    if ( changed & ~(DMA_GCMD_QIE | DMA_GCMD_SIRTP | DMA_GCMD_IRE) )
+        gdprintk(XENLOG_INFO, "Only QIE,SIRTP,IRE in GCMD_REG are handled.\n");
 
     return X86EMUL_OKAY;
 }
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 2d60df6..03361c0 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -195,7 +195,8 @@
 #define DMA_GSTS_WBFS   (((u64)1) << 27)
 #define DMA_GSTS_QIES_BIT       26
 #define DMA_GSTS_QIES           (((u64)1) << DMA_GSTS_QIES_BIT)
-#define DMA_GSTS_IRES   (((u64)1) <<25)
+#define DMA_GSTS_IRES_BIT       25
+#define DMA_GSTS_IRES   (((u64)1) << DMA_GSTS_IRES_BIT)
 #define DMA_GSTS_SIRTPS_BIT     24
 #define DMA_GSTS_SIRTPS (((u64)1) << DMA_GSTS_SIRTPS_BIT)
 #define DMA_GSTS_CFIS   (((u64)1) <<23)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 19/26] x86/vpt: Get interrupt vector through a vioapic interface
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (17 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 18/26] X86/vvtd: Enable Interrupt Remapping " Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 20/26] passthrough: move some fields of hvm_gmsi_info to a sub-structure Lan Tianyu
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

When an IOAPIC RTE is in remapping format, it doesn't contain the interrupt
vector. This patch adds vioapic_pin_vector() to translate a pin to a vector.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vioapic.c        | 27 +++++++++++++++++++++++++++
 xen/arch/x86/hvm/vpt.c            |  2 +-
 xen/include/asm-x86/hvm/vioapic.h |  1 +
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 40f529c..e7d1de7 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -319,6 +319,33 @@ static inline int pit_channel0_enabled(void)
     return pt_active(&current->domain->arch.vpit.pt0);
 }
 
+int vioapic_pin_vector(struct hvm_vioapic *vioapic, unsigned int pin)
+{
+    struct IO_APIC_route_remap_entry rte = { { vioapic->redirtbl[pin].bits } };
+
+    if ( rte.format )
+    {
+        int err;
+        struct irq_remapping_request request;
+        struct irq_remapping_info info;
+
+        irq_request_ioapic_fill(&request, vioapic->id, rte.val);
+        /* Currently, only viommu 0 is supported */
+        err = viommu_get_irq_info(vioapic->domain, 0, &request, &info);
+        if ( err < 0 )
+        {
+            gdprintk(XENLOG_ERR, "Bad gsi or bad interrupt remapping table "
+                     "entry.\n");
+            domain_crash(vioapic->domain);
+        }
+        return info.vector;
+    }
+    else
+    {
+        return vioapic->redirtbl[pin].fields.vector;
+    }
+}
+
 static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 {
     uint16_t dest = vioapic->redirtbl[pin].fields.dest_id;
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index e3f2039..b95d3a1 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -101,7 +101,7 @@ static int pt_irq_vector(struct periodic_time *pt, enum hvm_intsrc src)
         return -1;
     }
 
-    return vioapic->redirtbl[pin].fields.vector;
+    return vioapic_pin_vector(vioapic, pin);
 }
 
 static int pt_irq_masked(struct periodic_time *pt)
diff --git a/xen/include/asm-x86/hvm/vioapic.h b/xen/include/asm-x86/hvm/vioapic.h
index 2ceb60e..bc2725b 100644
--- a/xen/include/asm-x86/hvm/vioapic.h
+++ b/xen/include/asm-x86/hvm/vioapic.h
@@ -64,6 +64,7 @@ struct hvm_vioapic {
 struct hvm_vioapic *gsi_vioapic(const struct domain *d, unsigned int gsi,
                                 unsigned int *pin);
 
+int vioapic_pin_vector(struct hvm_vioapic *vioapic, unsigned int pin);
 int vioapic_init(struct domain *d);
 void vioapic_deinit(struct domain *d);
 void vioapic_reset(struct domain *d);
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 20/26] passthrough: move some fields of hvm_gmsi_info to a sub-structure
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (18 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 19/26] x86/vpt: Get interrupt vector through a vioapic interface Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 21/26] Tools/libxc: Add a new interface to bind remapping format msi with pirq Lan Tianyu
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

No functional change. This is preparation for introducing new fields in
hvm_gmsi_info to manage a remapping-format MSI bound to a physical MSI.
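
A stripped-down sketch of the data-structure change (type and field names
shortened for the example): the existing per-pirq MSI data moves under a
`legacy` member of an anonymous union, so that a later patch can add a
remapping-format alternative alongside it without touching every user again.

    #include <stdint.h>
    #include <stdio.h>

    struct example_gmsi_info {
        union {
            struct {
                uint32_t gvec;
                uint32_t gflags;
            } legacy;            /* untranslated (compatibility-format) MSI */
            /* A later patch adds a remapping-format variant here. */
        };
        int dest_vcpu_id;
    };

    int main(void)
    {
        struct example_gmsi_info gmsi = { .legacy = { .gvec = 0x40 } };

        /* Call sites change from gmsi.gvec to gmsi.legacy.gvec. */
        printf("guest vector %#x\n", gmsi.legacy.gvec);
        return 0;
    }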

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vmsi.c      |  4 ++--
 xen/drivers/passthrough/io.c | 32 ++++++++++++++++----------------
 xen/include/xen/hvm/irq.h    |  8 ++++++--
 3 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index a36692c..c4ec0ad 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -101,8 +101,8 @@ int vmsi_deliver(
 
 void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
 {
-    uint32_t flags = pirq_dpci->gmsi.gflags;
-    int vector = pirq_dpci->gmsi.gvec;
+    uint32_t flags = pirq_dpci->gmsi.legacy.gflags;
+    int vector = pirq_dpci->gmsi.legacy.gvec;
     uint8_t dest = (uint8_t)flags;
     uint8_t dest_mode = !!(flags & VMSI_DM_MASK);
     uint8_t delivery_mode = (flags & VMSI_DELIV_MASK)
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index e5a43e5..2158a11 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -325,8 +325,8 @@ int pt_irq_create_bind(
         {
             pirq_dpci->flags = HVM_IRQ_DPCI_MAPPED | HVM_IRQ_DPCI_MACH_MSI |
                                HVM_IRQ_DPCI_GUEST_MSI;
-            pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
-            pirq_dpci->gmsi.gflags = pt_irq_bind->u.msi.gflags;
+            pirq_dpci->gmsi.legacy.gvec = pt_irq_bind->u.msi.gvec;
+            pirq_dpci->gmsi.legacy.gflags = pt_irq_bind->u.msi.gflags;
             /*
              * 'pt_irq_create_bind' can be called after 'pt_irq_destroy_bind'.
              * The 'pirq_cleanup_check' which would free the structure is only
@@ -358,8 +358,8 @@ int pt_irq_create_bind(
             }
             if ( unlikely(rc) )
             {
-                pirq_dpci->gmsi.gflags = 0;
-                pirq_dpci->gmsi.gvec = 0;
+                pirq_dpci->gmsi.legacy.gflags = 0;
+                pirq_dpci->gmsi.legacy.gvec = 0;
                 pirq_dpci->dom = NULL;
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
@@ -378,20 +378,20 @@ int pt_irq_create_bind(
             }
 
             /* If pirq is already mapped as vmsi, update guest data/addr. */
-            if ( pirq_dpci->gmsi.gvec != pt_irq_bind->u.msi.gvec ||
-                 pirq_dpci->gmsi.gflags != pt_irq_bind->u.msi.gflags )
+            if ( pirq_dpci->gmsi.legacy.gvec != pt_irq_bind->u.msi.gvec ||
+                 pirq_dpci->gmsi.legacy.gflags != pt_irq_bind->u.msi.gflags )
             {
                 /* Directly clear pending EOIs before enabling new MSI info. */
                 pirq_guest_eoi(info);
 
-                pirq_dpci->gmsi.gvec = pt_irq_bind->u.msi.gvec;
-                pirq_dpci->gmsi.gflags = pt_irq_bind->u.msi.gflags;
+                pirq_dpci->gmsi.legacy.gvec = pt_irq_bind->u.msi.gvec;
+                pirq_dpci->gmsi.legacy.gflags = pt_irq_bind->u.msi.gflags;
             }
         }
         /* Calculate dest_vcpu_id for MSI-type pirq migration. */
-        dest = pirq_dpci->gmsi.gflags & VMSI_DEST_ID_MASK;
-        dest_mode = !!(pirq_dpci->gmsi.gflags & VMSI_DM_MASK);
-        delivery_mode = (pirq_dpci->gmsi.gflags & VMSI_DELIV_MASK) >>
+        dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
+        dest_mode = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
+        delivery_mode = (pirq_dpci->gmsi.legacy.gflags & VMSI_DELIV_MASK) >>
                          GFLAGS_SHIFT_DELIV_MODE;
 
         dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
@@ -404,7 +404,7 @@ int pt_irq_create_bind(
         {
             if ( delivery_mode == dest_LowestPrio )
                 vcpu = vector_hashing_dest(d, dest, dest_mode,
-                                           pirq_dpci->gmsi.gvec);
+                                           pirq_dpci->gmsi.legacy.gvec);
             if ( vcpu )
                 pirq_dpci->gmsi.posted = true;
         }
@@ -414,7 +414,7 @@ int pt_irq_create_bind(
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
             pi_update_irte(vcpu ? &vcpu->arch.hvm_vmx.pi_desc : NULL,
-                           info, pirq_dpci->gmsi.gvec);
+                           info, pirq_dpci->gmsi.legacy.gvec);
 
         break;
     }
@@ -729,10 +729,10 @@ static int _hvm_dpci_msi_eoi(struct domain *d,
     int vector = (long)arg;
 
     if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
-         (pirq_dpci->gmsi.gvec == vector) )
+         (pirq_dpci->gmsi.legacy.gvec == vector) )
     {
-        int dest = pirq_dpci->gmsi.gflags & VMSI_DEST_ID_MASK;
-        int dest_mode = !!(pirq_dpci->gmsi.gflags & VMSI_DM_MASK);
+        int dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
+        int dest_mode = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
 
         if ( vlapic_match_dest(vcpu_vlapic(current), NULL, 0, dest,
                                dest_mode) )
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index 671a6f2..5f8e2f4 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -60,8 +60,12 @@ struct dev_intx_gsi_link {
 #define GFLAGS_SHIFT_TRG_MODE       15
 
 struct hvm_gmsi_info {
-    uint32_t gvec;
-    uint32_t gflags;
+    union {
+        struct {
+            uint32_t gvec;
+            uint32_t gflags;
+        } legacy;
+    };
     int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
     bool posted; /* directly deliver to guest via VT-d PI? */
 };
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 21/26] Tools/libxc: Add a new interface to bind remapping format msi with pirq
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (19 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 20/26] passthrough: move some fields of hvm_gmsi_info to a sub-structure Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-30 15:36   ` Wei Liu
  2017-05-18  5:34 ` [RFC PATCH V2 22/26] x86/vmsi: Hook delivering remapping format msi to guest Lan Tianyu
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel
  Cc: Lan Tianyu, kevin.tian, wei.liu2, andrew.cooper3, ian.jackson,
	jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Introduce a new binding relationship between a remapping-format MSI and a
pirq, and provide a new interface to manage that binding.
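
For illustration, a hedged sketch of how a device model might use the new
libxc calls; the domid, pirq, source_id and MSI address/data values are
made-up placeholders, and error handling is reduced to the bare minimum.

    #include <stdint.h>
    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        uint32_t domid = 1, pirq = 24, source_id = 0x0010;  /* placeholders */
        uint32_t data = 0;               /* remapping-format data (subhandle) */
        uint64_t addr = 0xfee00010;      /* interrupt-format bit (bit 4) set */
        int rc;

        if ( !xch )
            return 1;

        /* Bind the guest's remapping-format MSI to the physical pirq. */
        rc = xc_domain_update_msi_irq_remapping(xch, domid, pirq, source_id,
                                                data, addr, 0 /* no gtable */);
        if ( rc )
            fprintf(stderr, "bind failed: %d\n", rc);

        /* ... and tear the binding down again, e.g. on device unplug. */
        xc_domain_unbind_msi_irq_remapping(xch, domid, pirq, source_id,
                                           data, addr);

        xc_interface_close(xch);
        return rc;
    }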

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 tools/libxc/include/xenctrl.h |  17 ++++++
 tools/libxc/xc_domain.c       |  55 +++++++++++++++++
 xen/drivers/passthrough/io.c  | 138 +++++++++++++++++++++++++++++++++++-------
 xen/include/public/domctl.h   |   7 +++
 xen/include/xen/hvm/irq.h     |   7 +++
 5 files changed, 203 insertions(+), 21 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 6c8110c..465dc5b 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1709,6 +1709,15 @@ int xc_domain_ioport_mapping(xc_interface *xch,
                              uint32_t nr_ports,
                              uint32_t add_mapping);
 
+int xc_domain_update_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr,
+    uint64_t gtable);
+
 int xc_domain_update_msi_irq(
     xc_interface *xch,
     uint32_t domid,
@@ -1723,6 +1732,14 @@ int xc_domain_unbind_msi_irq(xc_interface *xch,
                              uint32_t pirq,
                              uint32_t gflags);
 
+int xc_domain_unbind_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr);
+
 int xc_domain_bind_pt_irq(xc_interface *xch,
                           uint32_t domid,
                           uint8_t machine_irq,
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 00909ad4..1f174c1 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1652,8 +1652,35 @@ int xc_deassign_dt_device(
     return rc;
 }
 
+int xc_domain_update_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr,
+    uint64_t gtable)
+{
+    int rc;
+    xen_domctl_bind_pt_irq_t *bind;
 
+    DECLARE_DOMCTL;
 
+    domctl.cmd = XEN_DOMCTL_bind_pt_irq;
+    domctl.domain = (domid_t)domid;
+
+    bind = &(domctl.u.bind_pt_irq);
+    bind->hvm_domid = domid;
+    bind->irq_type = PT_IRQ_TYPE_MSI_IR;
+    bind->machine_irq = pirq;
+    bind->u.msi_ir.source_id = source_id;
+    bind->u.msi_ir.data = data;
+    bind->u.msi_ir.addr = addr;
+    bind->u.msi_ir.gtable = gtable;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
 
 int xc_domain_update_msi_irq(
     xc_interface *xch,
@@ -1683,6 +1710,34 @@ int xc_domain_update_msi_irq(
     return rc;
 }
 
+int xc_domain_unbind_msi_irq_remapping(
+    xc_interface *xch,
+    uint32_t domid,
+    uint32_t pirq,
+    uint32_t source_id,
+    uint32_t data,
+    uint64_t addr)
+{
+    int rc;
+    xen_domctl_bind_pt_irq_t *bind;
+
+    DECLARE_DOMCTL;
+
+    domctl.cmd = XEN_DOMCTL_unbind_pt_irq;
+    domctl.domain = (domid_t)domid;
+
+    bind = &(domctl.u.bind_pt_irq);
+    bind->hvm_domid = domid;
+    bind->irq_type = PT_IRQ_TYPE_MSI_IR;
+    bind->machine_irq = pirq;
+    bind->u.msi_ir.source_id = source_id;
+    bind->u.msi_ir.data = data;
+    bind->u.msi_ir.addr = addr;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
+
 int xc_domain_unbind_msi_irq(
     xc_interface *xch,
     uint32_t domid,
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 2158a11..b4b6e9c 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -259,6 +259,94 @@ static struct vcpu *vector_hashing_dest(const struct domain *d,
     return dest;
 }
 
+static inline void set_hvm_gmsi_info(struct hvm_gmsi_info *msi,
+                                     xen_domctl_bind_pt_irq_t *pt_irq_bind,
+                                     int irq_type)
+{
+    if ( irq_type == PT_IRQ_TYPE_MSI )
+    {
+        msi->legacy.gvec = pt_irq_bind->u.msi.gvec;
+        msi->legacy.gflags = pt_irq_bind->u.msi.gflags;
+    }
+    else if ( irq_type == PT_IRQ_TYPE_MSI_IR )
+    {
+        msi->intremap.source_id = pt_irq_bind->u.msi_ir.source_id;
+        msi->intremap.data = pt_irq_bind->u.msi_ir.data;
+        msi->intremap.addr = pt_irq_bind->u.msi_ir.addr;
+    }
+    else
+        BUG();
+}
+
+static inline void clear_hvm_gmsi_info(struct hvm_gmsi_info *msi, int irq_type)
+{
+    if ( irq_type == PT_IRQ_TYPE_MSI )
+    {
+        msi->legacy.gvec = 0;
+        msi->legacy.gflags = 0;
+    }
+    else if ( irq_type == PT_IRQ_TYPE_MSI_IR )
+    {
+        msi->intremap.source_id = 0;
+        msi->intremap.data = 0;
+        msi->intremap.addr = 0;
+    }
+    else
+        BUG();
+}
+
+static inline bool hvm_gmsi_info_need_update(struct hvm_gmsi_info *msi,
+                                         xen_domctl_bind_pt_irq_t *pt_irq_bind,
+                                         int irq_type)
+{
+    if ( irq_type == PT_IRQ_TYPE_MSI )
+        return ((msi->legacy.gvec != pt_irq_bind->u.msi.gvec) ||
+                (msi->legacy.gflags != pt_irq_bind->u.msi.gflags));
+    else if ( irq_type == PT_IRQ_TYPE_MSI_IR )
+        return ((msi->intremap.source_id != pt_irq_bind->u.msi_ir.source_id) ||
+                (msi->intremap.data != pt_irq_bind->u.msi_ir.data) ||
+                (msi->intremap.addr != pt_irq_bind->u.msi_ir.addr));
+    BUG();
+    return 0;
+}
+
+static int pirq_dpci_2_msi_attr(struct domain *d,
+                                struct hvm_pirq_dpci *pirq_dpci, uint8_t *gvec,
+                                uint8_t *dest, uint8_t *dm, uint8_t *dlm)
+{
+    int rc = 0;
+
+    if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI )
+    {
+        *gvec = pirq_dpci->gmsi.legacy.gvec;
+        *dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
+        *dm = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
+        *dlm = (pirq_dpci->gmsi.legacy.gflags & VMSI_DELIV_MASK) >>
+                GFLAGS_SHIFT_DELIV_MODE;
+    }
+    else if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI_IR )
+    {
+        struct irq_remapping_request request;
+        struct irq_remapping_info irq_info;
+
+        irq_request_msi_fill(&request, pirq_dpci->gmsi.intremap.source_id,
+                             pirq_dpci->gmsi.intremap.addr,
+                             pirq_dpci->gmsi.intremap.data);
+        /* Currently, only viommu 0 is supported */
+        rc = viommu_get_irq_info(d, 0, &request, &irq_info);
+        if ( !rc )
+        {
+            *gvec = irq_info.vector;
+            *dest = irq_info.dest;
+            *dm = irq_info.dest_mode;
+            *dlm = irq_info.delivery_mode;
+        }
+    }
+    else
+        BUG();
+    return rc;
+}
+
 int pt_irq_create_bind(
     struct domain *d, xen_domctl_bind_pt_irq_t *pt_irq_bind)
 {
@@ -316,17 +404,22 @@ int pt_irq_create_bind(
     switch ( pt_irq_bind->irq_type )
     {
     case PT_IRQ_TYPE_MSI:
+    case PT_IRQ_TYPE_MSI_IR:
     {
-        uint8_t dest, dest_mode, delivery_mode;
+        uint8_t dest = 0, dest_mode = 0, delivery_mode = 0, gvec;
         int dest_vcpu_id;
         const struct vcpu *vcpu;
+        int irq_type = pt_irq_bind->irq_type;
+        bool ir = (pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_IR);
+        uint64_t gtable = ir ? pt_irq_bind->u.msi_ir.gtable :
+                          pt_irq_bind->u.msi.gtable;
 
         if ( !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         {
             pirq_dpci->flags = HVM_IRQ_DPCI_MAPPED | HVM_IRQ_DPCI_MACH_MSI |
-                               HVM_IRQ_DPCI_GUEST_MSI;
-            pirq_dpci->gmsi.legacy.gvec = pt_irq_bind->u.msi.gvec;
-            pirq_dpci->gmsi.legacy.gflags = pt_irq_bind->u.msi.gflags;
+                               (ir ? HVM_IRQ_DPCI_GUEST_MSI_IR :
+                                HVM_IRQ_DPCI_GUEST_MSI);
+            set_hvm_gmsi_info(&pirq_dpci->gmsi, pt_irq_bind, irq_type);
             /*
              * 'pt_irq_create_bind' can be called after 'pt_irq_destroy_bind'.
              * The 'pirq_cleanup_check' which would free the structure is only
@@ -341,9 +434,9 @@ int pt_irq_create_bind(
             pirq_dpci->dom = d;
             /* bind after hvm_irq_dpci is setup to avoid race with irq handler*/
             rc = pirq_guest_bind(d->vcpu[0], info, 0);
-            if ( rc == 0 && pt_irq_bind->u.msi.gtable )
+            if ( rc == 0 && gtable )
             {
-                rc = msixtbl_pt_register(d, info, pt_irq_bind->u.msi.gtable);
+                rc = msixtbl_pt_register(d, info, gtable);
                 if ( unlikely(rc) )
                 {
                     pirq_guest_unbind(d, info);
@@ -358,8 +451,7 @@ int pt_irq_create_bind(
             }
             if ( unlikely(rc) )
             {
-                pirq_dpci->gmsi.legacy.gflags = 0;
-                pirq_dpci->gmsi.legacy.gvec = 0;
+                clear_hvm_gmsi_info(&pirq_dpci->gmsi, irq_type);
                 pirq_dpci->dom = NULL;
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
@@ -369,7 +461,8 @@ int pt_irq_create_bind(
         }
         else
         {
-            uint32_t mask = HVM_IRQ_DPCI_MACH_MSI | HVM_IRQ_DPCI_GUEST_MSI;
+            uint32_t mask = HVM_IRQ_DPCI_MACH_MSI |
+                     (ir ? HVM_IRQ_DPCI_GUEST_MSI_IR : HVM_IRQ_DPCI_GUEST_MSI);
 
             if ( (pirq_dpci->flags & mask) != mask )
             {
@@ -378,29 +471,31 @@ int pt_irq_create_bind(
             }
 
             /* If pirq is already mapped as vmsi, update guest data/addr. */
-            if ( pirq_dpci->gmsi.legacy.gvec != pt_irq_bind->u.msi.gvec ||
-                 pirq_dpci->gmsi.legacy.gflags != pt_irq_bind->u.msi.gflags )
+            if ( hvm_gmsi_info_need_update(&pirq_dpci->gmsi, pt_irq_bind,
+                                           irq_type) )
             {
                 /* Directly clear pending EOIs before enabling new MSI info. */
                 pirq_guest_eoi(info);
 
-                pirq_dpci->gmsi.legacy.gvec = pt_irq_bind->u.msi.gvec;
-                pirq_dpci->gmsi.legacy.gflags = pt_irq_bind->u.msi.gflags;
+                set_hvm_gmsi_info(&pirq_dpci->gmsi, pt_irq_bind, irq_type);
             }
         }
         /* Calculate dest_vcpu_id for MSI-type pirq migration. */
-        dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
-        dest_mode = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
-        delivery_mode = (pirq_dpci->gmsi.legacy.gflags & VMSI_DELIV_MASK) >>
-                         GFLAGS_SHIFT_DELIV_MODE;
-
-        dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
+        rc = pirq_dpci_2_msi_attr(d, pirq_dpci, &gvec, &dest, &dest_mode,
+                                  &delivery_mode);
+        if ( unlikely(rc) )
+        {
+            spin_unlock(&d->event_lock);
+            return -EFAULT;
+        }
+        else
+            dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
         pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
         spin_unlock(&d->event_lock);
 
         pirq_dpci->gmsi.posted = false;
         vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
-        if ( iommu_intpost )
+        if ( iommu_intpost && !ir )
         {
             if ( delivery_mode == dest_LowestPrio )
                 vcpu = vector_hashing_dest(d, dest, dest_mode,
@@ -412,7 +507,7 @@ int pt_irq_create_bind(
             hvm_migrate_pirqs(d->vcpu[dest_vcpu_id]);
 
         /* Use interrupt posting if it is supported. */
-        if ( iommu_intpost )
+        if ( iommu_intpost && !ir )
             pi_update_irte(vcpu ? &vcpu->arch.hvm_vmx.pi_desc : NULL,
                            info, pirq_dpci->gmsi.legacy.gvec);
 
@@ -545,6 +640,7 @@ int pt_irq_destroy_bind(
         }
         break;
     case PT_IRQ_TYPE_MSI:
+    case PT_IRQ_TYPE_MSI_IR:
         break;
     default:
         return -EOPNOTSUPP;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index d499fc6..6fc3547 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -555,6 +555,7 @@ typedef enum pt_irq_type_e {
     PT_IRQ_TYPE_MSI,
     PT_IRQ_TYPE_MSI_TRANSLATE,
     PT_IRQ_TYPE_SPI,    /* ARM: valid range 32-1019 */
+    PT_IRQ_TYPE_MSI_IR,
 } pt_irq_type_t;
 struct xen_domctl_bind_pt_irq {
     uint32_t machine_irq;
@@ -576,6 +577,12 @@ struct xen_domctl_bind_pt_irq {
             uint64_aligned_t gtable;
         } msi;
         struct {
+            uint32_t source_id;
+            uint32_t data;
+            uint64_t addr;
+            uint64_aligned_t gtable;
+        } msi_ir;
+        struct {
             uint16_t spi;
         } spi;
     } u;
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index 5f8e2f4..9e93459 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -40,6 +40,7 @@ struct dev_intx_gsi_link {
 #define _HVM_IRQ_DPCI_EOI_LATCH_SHIFT           3
 #define _HVM_IRQ_DPCI_GUEST_PCI_SHIFT           4
 #define _HVM_IRQ_DPCI_GUEST_MSI_SHIFT           5
+#define _HVM_IRQ_DPCI_GUEST_MSI_IR_SHIFT        6
 #define _HVM_IRQ_DPCI_TRANSLATE_SHIFT          15
 #define HVM_IRQ_DPCI_MACH_PCI        (1 << _HVM_IRQ_DPCI_MACH_PCI_SHIFT)
 #define HVM_IRQ_DPCI_MACH_MSI        (1 << _HVM_IRQ_DPCI_MACH_MSI_SHIFT)
@@ -47,6 +48,7 @@ struct dev_intx_gsi_link {
 #define HVM_IRQ_DPCI_EOI_LATCH       (1 << _HVM_IRQ_DPCI_EOI_LATCH_SHIFT)
 #define HVM_IRQ_DPCI_GUEST_PCI       (1 << _HVM_IRQ_DPCI_GUEST_PCI_SHIFT)
 #define HVM_IRQ_DPCI_GUEST_MSI       (1 << _HVM_IRQ_DPCI_GUEST_MSI_SHIFT)
+#define HVM_IRQ_DPCI_GUEST_MSI_IR    (1 << _HVM_IRQ_DPCI_GUEST_MSI_IR_SHIFT)
 #define HVM_IRQ_DPCI_TRANSLATE       (1 << _HVM_IRQ_DPCI_TRANSLATE_SHIFT)
 
 #define VMSI_DEST_ID_MASK 0xff
@@ -65,6 +67,11 @@ struct hvm_gmsi_info {
             uint32_t gvec;
             uint32_t gflags;
         } legacy;
+        struct {
+            uint32_t source_id;
+            uint32_t data;
+            uint64_t addr;
+        } intremap;
     };
     int dest_vcpu_id; /* -1 :multi-dest, non-negative: dest_vcpu_id */
     bool posted; /* directly deliver to guest via VT-d PI? */
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 22/26] x86/vmsi: Hook delivering remapping format msi to guest
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (20 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 21/26] Tools/libxc: Add a new interface to bind remapping format msi with pirq Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 23/26] x86/vvtd: Handle interrupt translation faults Lan Tianyu
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Lan Tianyu, andrew.cooper3, kevin.tian, jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

The hypervisor delivers an MSI to an HVM guest in two situations. One is
when qemu sends a request to the hypervisor through XEN_DMOP_inject_msi.
The other is when a physical interrupt arrives after being bound to a
guest MSI.

For the former, the MSI is routed to the common vIOMMU layer if it is in
remapping format. For the latter, if the pt irq is bound to a guest
remapping-format MSI, a new remapping MSI is constructed from the binding
information and routed to the common vIOMMU layer.
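
The check that steers the qemu-injected path is a single bit in the MSI
address. A small standalone sketch of that classification follows (the
constant mirrors the MSI_ADDR_INTEFORMAT definition added by this patch;
function names are local to the example):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bit 4 of the MSI address: 1 = remappable format, 0 = compatibility. */
    #define EX_MSI_ADDR_INT_FORMAT  (1u << 4)

    /* Same test as the one added at the top of hvm_inject_msi(). */
    static bool example_msi_is_remap_format(uint64_t addr)
    {
        return addr & EX_MSI_ADDR_INT_FORMAT;
    }

    int main(void)
    {
        /* Compatibility format: vector/dest are encoded directly. */
        printf("%d\n", example_msi_is_remap_format(0xfee01000));
        /* Remappable format: the address carries an IRTE handle instead. */
        printf("%d\n", example_msi_is_remap_format(0xfee01010));
        return 0;
    }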

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/irq.c       | 11 ++++++++++
 xen/arch/x86/hvm/vmsi.c      | 14 ++++++++++--
 xen/drivers/passthrough/io.c | 52 +++++++++++++++++++++++++++++++++-----------
 xen/include/asm-x86/msi.h    |  3 +++
 4 files changed, 65 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 8625584..abe2f77 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -26,6 +26,7 @@
 #include <asm/hvm/domain.h>
 #include <asm/hvm/support.h>
 #include <asm/msi.h>
+#include <asm/viommu.h>
 
 /* Must be called with hvm_domain->irq_lock hold */
 static void assert_gsi(struct domain *d, unsigned ioapic_gsi)
@@ -298,6 +299,16 @@ int hvm_inject_msi(struct domain *d, uint64_t addr, uint32_t data)
         >> MSI_DATA_TRIGGER_SHIFT;
     uint8_t vector = data & MSI_DATA_VECTOR_MASK;
 
+    if ( addr & MSI_ADDR_INTEFORMAT_MASK )
+    {
+        struct irq_remapping_request request;
+
+        irq_request_msi_fill(&request, 0, addr, data);
+        /* Currently, only viommu 0 is supported */
+        viommu_handle_irq_request(d, 0, &request);
+        return 0;
+    }
+
     if ( !vector )
     {
         int pirq = ((addr >> 32) & 0xffffff00) | dest;
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index c4ec0ad..75ceb19 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -114,9 +114,19 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
                 "vector=%x trig_mode=%x\n",
                 dest, dest_mode, delivery_mode, vector, trig_mode);
 
-    ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
+    ASSERT(pirq_dpci->flags & (HVM_IRQ_DPCI_GUEST_MSI | HVM_IRQ_DPCI_GUEST_MSI_IR));
+    if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI_IR )
+    {
+        struct irq_remapping_request request;
 
-    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
+        irq_request_msi_fill(&request, pirq_dpci->gmsi.intremap.source_id,
+                             pirq_dpci->gmsi.intremap.addr,
+                             pirq_dpci->gmsi.intremap.data);
+        /* Currently, only viommu 0 is supported */
+        viommu_handle_irq_request(d, 0, &request);
+    }
+    else
+        vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
 }
 
 /* Return value, -1 : multi-dests, non-negative value: dest_vcpu_id */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index b4b6e9c..572e60d 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -139,7 +139,9 @@ static void pt_pirq_softirq_reset(struct hvm_pirq_dpci *pirq_dpci)
 
 bool_t pt_irq_need_timer(uint32_t flags)
 {
-    return !(flags & (HVM_IRQ_DPCI_GUEST_MSI | HVM_IRQ_DPCI_TRANSLATE));
+    return !(flags & (HVM_IRQ_DPCI_GUEST_MSI_IR |
+                      HVM_IRQ_DPCI_GUEST_MSI |
+                      HVM_IRQ_DPCI_TRANSLATE));
 }
 
 static int pt_irq_guest_eoi(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
@@ -659,7 +661,8 @@ int pt_irq_destroy_bind(
     pirq = pirq_info(d, machine_gsi);
     pirq_dpci = pirq_dpci(pirq);
 
-    if ( pt_irq_bind->irq_type != PT_IRQ_TYPE_MSI )
+    if ( (pt_irq_bind->irq_type != PT_IRQ_TYPE_MSI_IR) &&
+         (pt_irq_bind->irq_type != PT_IRQ_TYPE_MSI) )
     {
         unsigned int bus = pt_irq_bind->u.pci.bus;
         unsigned int device = pt_irq_bind->u.pci.device;
@@ -824,20 +827,41 @@ static int _hvm_dpci_msi_eoi(struct domain *d,
 {
     int vector = (long)arg;
 
-    if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
-         (pirq_dpci->gmsi.legacy.gvec == vector) )
+    if ( pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI )
     {
-        int dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
-        int dest_mode = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
+        if ( (pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI) &&
+             (pirq_dpci->gmsi.legacy.gvec == vector) )
+        {
+            int dest = pirq_dpci->gmsi.legacy.gflags & VMSI_DEST_ID_MASK;
+            int dest_mode = !!(pirq_dpci->gmsi.legacy.gflags & VMSI_DM_MASK);
 
-        if ( vlapic_match_dest(vcpu_vlapic(current), NULL, 0, dest,
-                               dest_mode) )
+            if ( vlapic_match_dest(vcpu_vlapic(current), NULL, 0, dest,
+                                   dest_mode) )
+            {
+                __msi_pirq_eoi(pirq_dpci);
+                return 1;
+            }
+        }
+        else if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI_IR )
         {
-            __msi_pirq_eoi(pirq_dpci);
-            return 1;
+            int ret;
+            struct irq_remapping_request request;
+            struct irq_remapping_info irq_info;
+
+            irq_request_msi_fill(&request, pirq_dpci->gmsi.intremap.source_id,
+                                 pirq_dpci->gmsi.intremap.addr,
+                                 pirq_dpci->gmsi.intremap.data);
+            /* Currently, only viommu 0 is supported */
+            ret = viommu_get_irq_info(d, 0, &request, &irq_info);
+            if ( (!ret) && (irq_info.vector == vector) &&
+                 vlapic_match_dest(vcpu_vlapic(current), NULL, 0,
+                                   irq_info.dest, irq_info.dest_mode) )
+            {
+                __msi_pirq_eoi(pirq_dpci);
+                return 1;
+            }
         }
     }
-
     return 0;
 }
 
@@ -869,14 +893,16 @@ static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
         {
             send_guest_pirq(d, pirq);
 
-            if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI )
+            if ( pirq_dpci->flags
+                 & (HVM_IRQ_DPCI_GUEST_MSI | HVM_IRQ_DPCI_GUEST_MSI_IR) )
             {
                 spin_unlock(&d->event_lock);
                 return;
             }
         }
 
-        if ( pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI )
+        if ( pirq_dpci->flags
+             & (HVM_IRQ_DPCI_GUEST_MSI | HVM_IRQ_DPCI_GUEST_MSI_IR) )
         {
             vmsi_deliver_pirq(d, pirq_dpci);
             spin_unlock(&d->event_lock);
diff --git a/xen/include/asm-x86/msi.h b/xen/include/asm-x86/msi.h
index a5de6a1..c41e2a7 100644
--- a/xen/include/asm-x86/msi.h
+++ b/xen/include/asm-x86/msi.h
@@ -49,6 +49,9 @@
 #define MSI_ADDR_REDIRECTION_CPU    (0 << MSI_ADDR_REDIRECTION_SHIFT)
 #define MSI_ADDR_REDIRECTION_LOWPRI (1 << MSI_ADDR_REDIRECTION_SHIFT)
 
+#define MSI_ADDR_INTEFORMAT_SHIFT   4
+#define MSI_ADDR_INTEFORMAT_MASK    (1 << MSI_ADDR_INTEFORMAT_SHIFT)
+
 #define MSI_ADDR_DEST_ID_SHIFT		12
 #define	 MSI_ADDR_DEST_ID_MASK		0x00ff000
 #define  MSI_ADDR_DEST_ID(dest)		(((dest) << MSI_ADDR_DEST_ID_SHIFT) & MSI_ADDR_DEST_ID_MASK)
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH V2 23/26] x86/vvtd: Handle interrupt translation faults
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (21 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 22/26] x86/vmsi: Hook delivering remapping format msi to guest Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 24/26] x86/vvtd: Add queued invalidation (QI) support Lan Tianyu
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Lan Tianyu, andrew.cooper3, kevin.tian, jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Interrupt translation faults are non-recoverable faults. When such a fault
is triggered, the emulation needs to populate the fault information into
the Fault Recording Registers and inject the vIOMMU MSI interrupt to notify
the guest IOMMU driver to deal with the fault.

This patch emulates the hardware's handling of interrupt translation
faults (more information about the process can be found in the VT-d spec,
chapter "Translation Faults", sections "Non-Recoverable Fault
Reporting" and "Non-Recoverable Logging").
Specifically, vvtd_record_fault() records the fault information and
vvtd_report_non_recoverable_fault() reports faults to software.
Currently, only Primary Fault Logging is supported and the Number of
Fault-recording Registers is 1.
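
As a small aside on the register semantics involved, the Fault Status
Register is largely write-one-to-clear (RW1CS). A reduced standalone sketch
of that behaviour (the mask mirrors DMA_FSTS_RW1CS from this patch; names
are local to the example, and the PPF recomputation is left out):

    #include <stdint.h>
    #include <stdio.h>

    /* RW1CS bits of FSTS: PFO, AFO, APF, IQE, ICE, ITE, PRO (PPF is RO). */
    #define EX_FSTS_RW1CS  0xfdu

    /* The guest writes 1s to the status bits it wants to acknowledge;
     * everything else in the written value is ignored. */
    static uint32_t example_write_fsts(uint32_t fsts, uint32_t val)
    {
        return fsts & ~(val & EX_FSTS_RW1CS);
    }

    int main(void)
    {
        uint32_t fsts = 0x41;                    /* ITE + PFO pending */

        fsts = example_write_fsts(fsts, 0x01);   /* acknowledge PFO only */
        printf("FSTS now %#x\n", fsts);          /* ITE (bit 6) still set */
        return 0;
    }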

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c             | 237 +++++++++++++++++++++++++++++++++++-
 xen/drivers/passthrough/vtd/iommu.h |  60 +++++++--
 2 files changed, 285 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index 10b0cd0..a741452 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -19,6 +19,7 @@
  */
 
 #include <xen/domain_page.h>
+#include <xen/lib.h>
 #include <xen/sched.h>
 #include <xen/types.h>
 #include <xen/viommu.h>
@@ -30,6 +31,7 @@
 #include <asm/io_apic.h>
 #include <asm/page.h>
 #include <asm/p2m.h>
+#include <asm/system.h>
 #include <public/viommu.h>
 
 #include "../../../drivers/passthrough/vtd/iommu.h"
@@ -49,6 +51,8 @@ struct hvm_hw_vvtd_regs {
 struct vvtd {
     /* VIOMMU_STATUS_XXX_REMAPPING_ENABLED */
     int status;
+    /* Fault Recording index */
+    int frcd_idx;
     /* Address range of remapping hardware register-set */
     uint64_t base_addr;
     uint64_t length;
@@ -97,6 +101,23 @@ static inline struct vvtd *vcpu_vvtd(struct vcpu *v)
     return domain_vvtd(v->domain);
 }
 
+static inline int vvtd_test_and_set_bit(struct vvtd *vvtd, uint32_t reg,
+                                        int nr)
+{
+    return test_and_set_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
+}
+
+static inline int vvtd_test_and_clear_bit(struct vvtd *vvtd, uint32_t reg,
+                                          int nr)
+{
+    return test_and_clear_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
+}
+
+static inline int vvtd_test_bit(struct vvtd *vvtd, uint32_t reg, int nr)
+{
+    return test_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
+}
+
 static inline void __vvtd_set_bit(struct vvtd *vvtd, uint32_t reg, int nr)
 {
     return __set_bit(nr, (uint32_t *)&vvtd->regs->data[reg]);
@@ -232,6 +253,24 @@ static int vvtd_delivery(
     return 0;
 }
 
+void vvtd_generate_interrupt(struct vvtd *vvtd,
+                             uint32_t addr,
+                             uint32_t data)
+{
+    uint8_t dest, dm, dlm, tm, vector;
+
+    VVTD_DEBUG(VVTD_DBG_FAULT, "Sending interrupt %x %x to d%d",
+               addr, data, vvtd->domain->domain_id);
+
+    dest = (addr & MSI_ADDR_DEST_ID_MASK) >> MSI_ADDR_DEST_ID_SHIFT;
+    dm = !!(addr & MSI_ADDR_DESTMODE_MASK);
+    dlm = (data & MSI_DATA_DELIVERY_MODE_MASK) >> MSI_DATA_DELIVERY_MODE_SHIFT;
+    tm = (data & MSI_DATA_TRIGGER_MASK) >> MSI_DATA_TRIGGER_SHIFT;
+    vector = data & MSI_DATA_VECTOR_MASK;
+
+    vvtd_delivery(vvtd->domain, vector, dest, dm, dlm, tm);
+}
+
 static uint32_t irq_remapping_request_index(struct irq_remapping_request *irq)
 {
     if ( irq->type == VIOMMU_REQUEST_IRQ_MSI )
@@ -260,11 +299,188 @@ static inline uint32_t irte_dest(struct vvtd *vvtd, uint32_t dest)
     return DMA_IRTA_EIME(irta) ? dest : MASK_EXTR(dest, IRTE_xAPIC_DEST_MASK);
 }
 
+static void vvtd_report_non_recoverable_fault(struct vvtd *vvtd, int reason)
+{
+    uint32_t fsts;
+
+    ASSERT(reason & DMA_FSTS_FAULTS);
+    fsts = vvtd_get_reg(vvtd, DMAR_FSTS_REG);
+    __vvtd_set_bit(vvtd, DMAR_FSTS_REG, reason);
+
+    /*
+     * According to the VT-d spec's "Non-Recoverable Fault Event" chapter, if
+     * there are any previously reported interrupt conditions that are yet to
+     * be serviced by software, the Fault Event interrupt is not generated.
+     */
+    if ( fsts & DMA_FSTS_FAULTS )
+        return;
+
+    __vvtd_set_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IP_BIT);
+    if ( !vvtd_test_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IM_BIT) )
+    {
+        uint32_t fe_data, fe_addr;
+        fe_data = vvtd_get_reg(vvtd, DMAR_FEDATA_REG);
+        fe_addr = vvtd_get_reg(vvtd, DMAR_FEADDR_REG);
+        vvtd_generate_interrupt(vvtd, fe_addr, fe_data);
+        __vvtd_clear_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IP_BIT);
+    }
+}
+
+static void vvtd_recomputing_ppf(struct vvtd *vvtd)
+{
+    int i;
+
+    for ( i = 0; i < DMAR_FRCD_REG_NR; i++ )
+    {
+        if ( vvtd_test_bit(vvtd, DMA_FRCD(i, DMA_FRCD3_OFFSET),
+                           DMA_FRCD_F_BIT) )
+        {
+            vvtd_report_non_recoverable_fault(vvtd, DMA_FSTS_PPF_BIT);
+            return;
+        }
+    }
+    /*
+     * No Primary Fault is in Fault Record Registers, thus clear PPF bit in
+     * FSTS.
+     */
+    __vvtd_clear_bit(vvtd, DMAR_FSTS_REG, DMA_FSTS_PPF_BIT);
+
+    /* If no fault is in FSTS, clear pending bit in FECTL. */
+    if ( !(vvtd_get_reg(vvtd, DMAR_FSTS_REG) & DMA_FSTS_FAULTS) )
+        __vvtd_clear_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IP_BIT);
+}
+
+/*
+ * Commit an FRCD to the emulated Fault Recording Registers.
+ */
+static void vvtd_commit_frcd(struct vvtd *vvtd, int idx,
+                             struct vtd_fault_record_register *frcd)
+{
+    vvtd_set_reg_quad(vvtd, DMA_FRCD(idx, DMA_FRCD0_OFFSET), frcd->bits.lo);
+    vvtd_set_reg_quad(vvtd, DMA_FRCD(idx, DMA_FRCD2_OFFSET), frcd->bits.hi);
+    vvtd_recomputing_ppf(vvtd);
+}
+
+/*
+ * Allocate an FRCD for the caller. On success, return the FRI; otherwise,
+ * return -1.
+ */
+static int vvtd_alloc_frcd(struct vvtd *vvtd)
+{
+    int prev;
+    /* Set the F bit to indicate the FRCD is in use. */
+    if ( !vvtd_test_and_set_bit(vvtd, DMA_FRCD(vvtd->frcd_idx, DMA_FRCD3_OFFSET),
+                                DMA_FRCD_F_BIT) )
+    {
+        prev = vvtd->frcd_idx;
+        vvtd->frcd_idx = (prev + 1) % DMAR_FRCD_REG_NR;
+        return prev;
+    }
+    return -1;
+}
+
+static void vvtd_free_frcd(struct vvtd *vvtd, int i)
+{
+    __vvtd_clear_bit(vvtd, DMA_FRCD(i, DMA_FRCD3_OFFSET), DMA_FRCD_F_BIT);
+}
+
 static int vvtd_record_fault(struct vvtd *vvtd,
-                             struct irq_remapping_request *irq,
+                             struct irq_remapping_request *request,
                              int reason)
 {
-    return 0;
+    struct vtd_fault_record_register frcd;
+    int frcd_idx;
+
+    switch(reason)
+    {
+    case VTD_FR_IR_REQ_RSVD:
+    case VTD_FR_IR_INDEX_OVER:
+    case VTD_FR_IR_ENTRY_P:
+    case VTD_FR_IR_ROOT_INVAL:
+    case VTD_FR_IR_IRTE_RSVD:
+    case VTD_FR_IR_REQ_COMPAT:
+    case VTD_FR_IR_SID_ERR:
+        if ( vvtd_test_bit(vvtd, DMAR_FSTS_REG, DMA_FSTS_PFO_BIT) )
+            return X86EMUL_OKAY;
+
+        /* No available Fault Record means Fault overflowed */
+        frcd_idx = vvtd_alloc_frcd(vvtd);
+        if ( frcd_idx == -1 )
+        {
+            vvtd_report_non_recoverable_fault(vvtd, DMA_FSTS_PFO_BIT);
+            return X86EMUL_OKAY;
+        }
+        memset(&frcd, 0, sizeof(frcd));
+        frcd.fields.FR = (u8)reason;
+        frcd.fields.FI = ((u64)irq_remapping_request_index(request)) << 36;
+        frcd.fields.SID = (u16)request->source_id;
+        frcd.fields.F = 1;
+        vvtd_commit_frcd(vvtd, frcd_idx, &frcd);
+        return X86EMUL_OKAY;
+
+    default:
+        break;
+    }
+
+    gdprintk(XENLOG_ERR, "Can't handle vVTD Fault (reason 0x%x).", reason);
+    domain_crash(vvtd->domain);
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write_frcd3(struct vvtd *vvtd, uint32_t val)
+{
+    /* Writing a 1 means clear fault */
+    if ( val & DMA_FRCD_F )
+    {
+        vvtd_free_frcd(vvtd, 0);
+        vvtd_recomputing_ppf(vvtd);
+    }
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write_fectl(struct vvtd *vvtd, uint32_t val)
+{
+    /*
+     * Only DMA_FECTL_IM is writable; deliver a pending fault event on unmask.
+     */
+    if ( !(val & DMA_FECTL_IM) )
+    {
+        /* Clear IM */
+        __vvtd_clear_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IM_BIT);
+        if ( vvtd_test_and_clear_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IP_BIT) )
+        {
+            uint32_t fe_data, fe_addr;
+            fe_data = vvtd_get_reg(vvtd, DMAR_FEDATA_REG);
+            fe_addr = vvtd_get_reg(vvtd, DMAR_FEADDR_REG);
+            vvtd_generate_interrupt(vvtd, fe_addr, fe_data);
+        }
+    }
+    else
+        __vvtd_set_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IM_BIT);
+
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write_fsts(struct vvtd *vvtd, uint32_t val)
+{
+    int i, max_fault_index = DMA_FSTS_PRO_BIT;
+    uint64_t bits_to_clear = val & DMA_FSTS_RW1CS;
+
+    i = find_first_bit(&bits_to_clear, max_fault_index + 1);
+    while ( i <= max_fault_index )
+    {
+        __vvtd_clear_bit(vvtd, DMAR_FSTS_REG, i);
+        i = find_next_bit(&bits_to_clear, max_fault_index + 1, i + 1);
+    }
+
+    /*
+     * Clear the IP field when all status fields in the Fault Status Register
+     * are clear.
+     */
+    if ( !((vvtd_get_reg(vvtd, DMAR_FSTS_REG) & DMA_FSTS_FAULTS)) )
+        __vvtd_clear_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IP_BIT);
+
+    return X86EMUL_OKAY;
 }
 
 static int vvtd_handle_gcmd_qie(struct vvtd *vvtd, uint32_t val)
@@ -412,6 +628,18 @@ static int vvtd_write(struct vcpu *v, unsigned long addr,
             ret = vvtd_write_gcmd(vvtd, val);
             break;
 
+        case DMAR_FSTS_REG:
+            ret = vvtd_write_fsts(vvtd, val);
+            break;
+
+        case DMAR_FECTL_REG:
+            ret = vvtd_write_fectl(vvtd, val);
+            break;
+
+        case DMA_CAP_FRO_OFFSET + DMA_FRCD3_OFFSET:
+            ret = vvtd_write_frcd3(vvtd, val);
+            break;
+
         case DMAR_IEDATA_REG:
         case DMAR_IEADDR_REG:
         case DMAR_IEUADDR_REG:
@@ -438,6 +666,10 @@ static int vvtd_write(struct vcpu *v, unsigned long addr,
             ret = X86EMUL_OKAY;
             break;
 
+        case DMA_CAP_FRO_OFFSET + DMA_FRCD2_OFFSET:
+            ret = vvtd_write_frcd3(vvtd, val >> 32);
+            break;
+
         default:
             ret = X86EMUL_UNHANDLEABLE;
             break;
@@ -675,6 +907,7 @@ static int vvtd_create(struct domain *d, struct viommu *viommu)
     vvtd->eim = 0;
     vvtd->irt = 0;
     vvtd->irt_max_entry = 0;
+    vvtd->frcd_idx = 0;
     register_mmio_handler(d, &vvtd_mmio_ops);
     return 0;
 
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 03361c0..5474c72 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -229,26 +229,66 @@
 #define DMA_CCMD_CAIG_MASK(x) (((u64)x) & ((u64) 0x3 << 59))
 
 /* FECTL_REG */
-#define DMA_FECTL_IM (((u64)1) << 31)
+#define DMA_FECTL_IM_BIT 31
+#define DMA_FECTL_IM (1U << DMA_FECTL_IM_BIT)
+#define DMA_FECTL_IP_BIT 30
+#define DMA_FECTL_IP (1U << DMA_FECTL_IP_BIT)
 
 /* FSTS_REG */
-#define DMA_FSTS_PFO ((u64)1 << 0)
-#define DMA_FSTS_PPF ((u64)1 << 1)
-#define DMA_FSTS_AFO ((u64)1 << 2)
-#define DMA_FSTS_APF ((u64)1 << 3)
-#define DMA_FSTS_IQE ((u64)1 << 4)
-#define DMA_FSTS_ICE ((u64)1 << 5)
-#define DMA_FSTS_ITE ((u64)1 << 6)
-#define DMA_FSTS_FAULTS    DMA_FSTS_PFO | DMA_FSTS_PPF | DMA_FSTS_AFO | DMA_FSTS_APF | DMA_FSTS_IQE | DMA_FSTS_ICE | DMA_FSTS_ITE
+#define DMA_FSTS_PFO_BIT 0
+#define DMA_FSTS_PFO (1U << DMA_FSTS_PFO_BIT)
+#define DMA_FSTS_PPF_BIT 1
+#define DMA_FSTS_PPF (1U << DMA_FSTS_PPF_BIT)
+#define DMA_FSTS_AFO (1U << 2)
+#define DMA_FSTS_APF (1U << 3)
+#define DMA_FSTS_IQE (1U << 4)
+#define DMA_FSTS_ICE (1U << 5)
+#define DMA_FSTS_ITE (1U << 6)
+#define DMA_FSTS_PRO_BIT 7
+#define DMA_FSTS_PRO (1U << DMA_FSTS_PRO_BIT)
+#define DMA_FSTS_FAULTS    (DMA_FSTS_PFO | DMA_FSTS_PPF | DMA_FSTS_AFO | DMA_FSTS_APF | DMA_FSTS_IQE | DMA_FSTS_ICE | DMA_FSTS_ITE | DMA_FSTS_PRO)
+#define DMA_FSTS_RW1CS     (DMA_FSTS_PFO | DMA_FSTS_AFO | DMA_FSTS_APF | DMA_FSTS_IQE | DMA_FSTS_ICE | DMA_FSTS_ITE | DMA_FSTS_PRO)
 #define dma_fsts_fault_record_index(s) (((s) >> 8) & 0xff)
 
 /* FRCD_REG, 32 bits access */
-#define DMA_FRCD_F (((u64)1) << 31)
+#define DMA_FRCD_LEN            0x10
+#define DMA_FRCD0_OFFSET        0x0
+#define DMA_FRCD1_OFFSET        0x4
+#define DMA_FRCD2_OFFSET        0x8
+#define DMA_FRCD3_OFFSET        0xc
+#define DMA_FRCD3_FR_MASK       0xffUL
+#define DMA_FRCD_F_BIT 31
+#define DMA_FRCD_F ((u64)1 << DMA_FRCD_F_BIT)
+#define DMA_FRCD(idx, offset) (DMA_CAP_FRO_OFFSET + DMA_FRCD_LEN * idx + offset)
 #define dma_frcd_type(d) ((d >> 30) & 1)
 #define dma_frcd_fault_reason(c) (c & 0xff)
 #define dma_frcd_source_id(c) (c & 0xffff)
 #define dma_frcd_page_addr(d) (d & (((u64)-1) << 12)) /* low 64 bit */
 
+struct vtd_fault_record_register
+{
+    union {
+        struct {
+            u64 lo;
+            u64 hi;
+        } bits;
+        struct {
+            u64 rsvd0   :12,
+                FI      :52; /* Fault Info */
+            u64 SID     :16, /* Source Identifier */
+                rsvd1   :9,
+                PRIV    :1,  /* Privilege Mode Requested */
+                EXE     :1,  /* Execute Permission Requested */
+                PP      :1,  /* PASID Present */
+                FR      :8,  /* Fault Reason */
+                PV      :20, /* PASID Value */
+                AT      :2,  /* Address Type */
+                T       :1,  /* Type. (0) Write (1) Read/AtomicOp */
+                F       :1;  /* Fault */
+        } fields;
+    };
+};
+
 enum VTD_FAULT_TYPE
 {
     /* Interrupt remapping transition faults */
-- 
1.8.3.1



* [RFC PATCH V2 24/26] x86/vvtd: Add queued invalidation (QI) support
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (22 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 23/26] x86/vvtd: Handle interrupt translation faults Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 25/26] x86/vlapic: drop no longer suitable restriction to set x2apic id Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 26/26] x86/vvtd: save and restore emulated VT-d Lan Tianyu
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Lan Tianyu, andrew.cooper3, kevin.tian, jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

The Queued Invalidation Interface is an expanded invalidation interface with
extended capabilities. Hardware implementations report support for the queued
invalidation interface through the Extended Capability Register. The queued
invalidation interface uses an Invalidation Queue (IQ), a circular buffer in
system memory; software submits commands by writing Invalidation Descriptors
to the IQ.

This patch adds a new function, vvtd_process_iq(), which emulates how
hardware handles invalidation requests submitted through the QI interface.
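
For reference, the guest-side submission flow that this emulation has to cope
with looks roughly like the sketch below. It is an illustration only, not part
of the patch: the descriptor layout follows the VT-d spec, while iq_submit()
and ring_iqt() are made-up names standing in for whatever the guest driver
actually uses.

    #include <stdint.h>

    /*
     * Hypothetical guest-side helper: place one 128-bit descriptor in the
     * current tail slot of the Invalidation Queue and advance the tail.
     */
    static void iq_submit(uint64_t *iq, unsigned int *tail,
                          unsigned int nr_entries, uint64_t lo, uint64_t hi,
                          void (*ring_iqt)(uint64_t))
    {
        unsigned int slot = *tail;

        iq[slot * 2] = lo;               /* low 64 bits of the descriptor */
        iq[slot * 2 + 1] = hi;           /* high 64 bits of the descriptor */
        *tail = (slot + 1) % nr_entries; /* the IQ is a circular buffer */

        /* Writing IQT_REG (tail index in bits 18:4) triggers processing. */
        ring_iqt((uint64_t)*tail << 4);
    }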

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c             | 244 ++++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/vtd/iommu.h |  29 ++++-
 2 files changed, 272 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index a741452..ce25a77 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -427,6 +427,185 @@ static int vvtd_record_fault(struct vvtd *vvtd,
     return X86EMUL_OKAY;
 }
 
+/*
+ * Process an invalidation descriptor. Currently, only two descriptor types,
+ * the Interrupt Entry Cache invalidation descriptor and the Invalidation
+ * Wait Descriptor, are handled.
+ * @vvtd: the virtual vtd instance
+ * @i: the index of the invalidation descriptor to be processed
+ *
+ * Returns 0 on success, -1 on failure.
+ */
+static int process_iqe(struct vvtd *vvtd, int i)
+{
+    uint64_t iqa, addr;
+    struct qinval_entry *qinval_page;
+    void *pg;
+    int ret;
+
+    vvtd_get_reg_quad(vvtd, DMAR_IQA_REG, iqa);
+    ret = map_guest_page(vvtd->domain, DMA_IQA_ADDR(iqa)>>PAGE_SHIFT,
+                         (void**)&qinval_page);
+    if ( ret )
+    {
+        gdprintk(XENLOG_ERR, "Can't map guest IRT (rc %d)", ret);
+        return -1;
+    }
+
+    switch ( qinval_page[i].q.inv_wait_dsc.lo.type )
+    {
+    case TYPE_INVAL_WAIT:
+        if ( qinval_page[i].q.inv_wait_dsc.lo.sw )
+        {
+            addr = (qinval_page[i].q.inv_wait_dsc.hi.saddr << 2);
+            ret = map_guest_page(vvtd->domain, addr >> PAGE_SHIFT, &pg);
+            if ( ret )
+            {
+                gdprintk(XENLOG_ERR, "Can't map guest memory to inform guest "
+                         "IWC completion (rc %d)", ret);
+                goto error;
+            }
+            *(uint32_t *)((uint64_t)pg + (addr & ~PAGE_MASK)) =
+                qinval_page[i].q.inv_wait_dsc.lo.sdata;
+            unmap_guest_page(pg);
+        }
+
+        /*
+         * The following code generates an invalidation completion event
+         * indicating the invalidation wait descriptor completion. Note that
+         * the following code fragment is not tested properly.
+         */
+        if ( qinval_page[i].q.inv_wait_dsc.lo.iflag )
+        {
+            uint32_t ie_data, ie_addr;
+            if ( !vvtd_test_and_set_bit(vvtd, DMAR_ICS_REG, DMA_ICS_IWC_BIT) )
+            {
+                __vvtd_set_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IP_BIT);
+                if ( !vvtd_test_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IM_BIT) )
+                {
+                    ie_data = vvtd_get_reg(vvtd, DMAR_IEDATA_REG);
+                    ie_addr = vvtd_get_reg(vvtd, DMAR_IEADDR_REG);
+                    vvtd_generate_interrupt(vvtd, ie_addr, ie_data);
+                    __vvtd_clear_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IP_BIT);
+                }
+            }
+        }
+        break;
+
+    case TYPE_INVAL_IEC:
+        /*
+         * Currently, no cache is maintained in the hypervisor. Only the
+         * pIRTEs modified during the binding process need to be updated.
+         */
+        break;
+
+    default:
+        goto error;
+    }
+
+    unmap_guest_page((void*)qinval_page);
+    return 0;
+
+error:
+    unmap_guest_page((void*)qinval_page);
+    gdprintk(XENLOG_ERR, "Internal error in Queue Invalidation.\n");
+    domain_crash(vvtd->domain);
+    return -1;
+}
+
+/*
+ * Process all pending descriptors in the Invalidation Queue.
+ */
+static void vvtd_process_iq(struct vvtd *vvtd)
+{
+    uint64_t iqh, iqt, iqa, max_entry, i;
+    int ret = 0;
+
+    /*
+     * No new descriptor is fetched from the Invalidation Queue until
+     * software clears the IQE field in the Fault Status Register
+     */
+    if ( vvtd_test_bit(vvtd, DMAR_FSTS_REG, DMA_FSTS_IQE_BIT) )
+        return;
+
+    vvtd_get_reg_quad(vvtd, DMAR_IQH_REG, iqh);
+    vvtd_get_reg_quad(vvtd, DMAR_IQT_REG, iqt);
+    vvtd_get_reg_quad(vvtd, DMAR_IQA_REG, iqa);
+
+    max_entry = DMA_IQA_ENTRY_PER_PAGE << DMA_IQA_QS(iqa);
+    iqh = DMA_IQH_QH(iqh);
+    iqt = DMA_IQT_QT(iqt);
+
+    ASSERT(iqt < max_entry);
+    if ( iqh == iqt )
+        return;
+
+    i = iqh;
+    while ( i != iqt )
+    {
+        ret = process_iqe(vvtd, i);
+        if ( ret )
+            break;
+        else
+            i = (i + 1) % max_entry;
+        vvtd_set_reg_quad(vvtd, DMAR_IQH_REG, i << DMA_IQH_QH_SHIFT);
+    }
+
+    /*
+     * When IQE is set, IQH references the descriptor that caused the error.
+     */
+    if ( ret )
+        vvtd_report_non_recoverable_fault(vvtd, DMA_FSTS_IQE_BIT);
+}
+
+static int vvtd_write_iqt(struct vvtd *vvtd, unsigned long val)
+{
+    uint64_t iqa;
+
+    if ( val & DMA_IQT_RSVD )
+    {
+        VVTD_DEBUG(VVTD_DBG_RW, "Attempt to set reserved bits in "
+                   "Invalidation Queue Tail.");
+        return X86EMUL_OKAY;
+    }
+
+    vvtd_get_reg_quad(vvtd, DMAR_IQA_REG, iqa);
+    if ( DMA_IQT_QT(val) >= DMA_IQA_ENTRY_PER_PAGE << DMA_IQA_QS(iqa) )
+    {
+        VVTD_DEBUG(VVTD_DBG_RW, "IQT: Value %lx exceeded supported max "
+                   "index.", val);
+        return X86EMUL_OKAY;
+    }
+
+    vvtd_set_reg_quad(vvtd, DMAR_IQT_REG, val);
+    vvtd_process_iq(vvtd);
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write_iqa(struct vvtd *vvtd, unsigned long val)
+{
+    if ( val & DMA_IQA_RSVD )
+    {
+        VVTD_DEBUG(VVTD_DBG_RW, "Attempt to set reserved bits in "
+                   "Invalidation Queue Address.");
+        return X86EMUL_OKAY;
+    }
+
+    vvtd_set_reg_quad(vvtd, DMAR_IQA_REG, val);
+    return X86EMUL_OKAY;
+}
+
+static int vvtd_write_ics(struct vvtd *vvtd, uint32_t val)
+{
+    if ( val & DMA_ICS_IWC )
+    {
+        __vvtd_clear_bit(vvtd, DMAR_ICS_REG, DMA_ICS_IWC_BIT);
+        /* When the IWC field is cleared, the IP field needs to be cleared. */
+        __vvtd_clear_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IP_BIT);
+    }
+    return X86EMUL_OKAY;
+}
+
 static int vvtd_write_frcd3(struct vvtd *vvtd, uint32_t val)
 {
     /* Writing a 1 means clear fault */
@@ -438,6 +617,29 @@ static int vvtd_write_frcd3(struct vvtd *vvtd, uint32_t val)
     return X86EMUL_OKAY;
 }
 
+static int vvtd_write_iectl(struct vvtd *vvtd, uint32_t val)
+{
+    /*
+     * Only the DMA_IECTL_IM bit is writable. Generate pending event on unmask.
+     */
+    if ( !(val & DMA_IECTL_IM) )
+    {
+        /* Clear IM and clear IP */
+        __vvtd_clear_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IM_BIT);
+        if ( vvtd_test_and_clear_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IP_BIT) )
+        {
+            uint32_t ie_data, ie_addr;
+            ie_data = vvtd_get_reg(vvtd, DMAR_IEDATA_REG);
+            ie_addr = vvtd_get_reg(vvtd, DMAR_IEADDR_REG);
+            vvtd_generate_interrupt(vvtd, ie_addr, ie_data);
+        }
+    }
+    else
+        __vvtd_set_bit(vvtd, DMAR_IECTL_REG, DMA_IECTL_IM_BIT);
+
+    return X86EMUL_OKAY;
+}
+
 static int vvtd_write_fectl(struct vvtd *vvtd, uint32_t val)
 {
     /*
@@ -480,6 +682,10 @@ static int vvtd_write_fsts(struct vvtd *vvtd, uint32_t val)
     if ( !((vvtd_get_reg(vvtd, DMAR_FSTS_REG) & DMA_FSTS_FAULTS)) )
         __vvtd_clear_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IP_BIT);
 
+    /* Continue to process invalidation requests when IQE is clear. */
+    if ( !vvtd_test_bit(vvtd, DMAR_FSTS_REG, DMA_FSTS_IQE_BIT) )
+        vvtd_process_iq(vvtd);
+
     return X86EMUL_OKAY;
 }
 
@@ -640,6 +846,36 @@ static int vvtd_write(struct vcpu *v, unsigned long addr,
             ret = vvtd_write_frcd3(vvtd, val);
             break;
 
+        case DMAR_IECTL_REG:
+            ret = vvtd_write_iectl(vvtd, val);
+            break;
+
+        case DMAR_ICS_REG:
+            ret = vvtd_write_ics(vvtd, val);
+            break;
+
+        case DMAR_IQT_REG:
+            ret = vvtd_write_iqt(vvtd, (uint32_t)val);
+            break;
+
+        case DMAR_IQA_REG:
+        {
+            uint32_t iqa_hi;
+
+            iqa_hi = vvtd_get_reg(vvtd, DMAR_IQA_REG_HI);
+            ret = vvtd_write_iqa(vvtd, (uint32_t)val | ((uint64_t)iqa_hi << 32));
+            break;
+        }
+
+        case DMAR_IQA_REG_HI:
+        {
+            uint32_t iqa_lo;
+
+            iqa_lo = vvtd_get_reg(vvtd, DMAR_IQA_REG);
+            ret = vvtd_write_iqa(vvtd, (val << 32) | iqa_lo);
+            break;
+        }
+
         case DMAR_IEDATA_REG:
         case DMAR_IEADDR_REG:
         case DMAR_IEUADDR_REG:
@@ -670,6 +906,14 @@ static int vvtd_write(struct vcpu *v, unsigned long addr,
             ret = vvtd_write_frcd3(vvtd, val >> 32);
             break;
 
+        case DMAR_IQT_REG:
+            ret = vvtd_write_iqt(vvtd, val);
+            break;
+
+        case DMAR_IQA_REG:
+            ret = vvtd_write_iqa(vvtd, val);
+            break;
+
         default:
             ret = X86EMUL_UNHANDLEABLE;
             break;
diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 5474c72..135c4cf 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -207,6 +207,32 @@
 #define DMA_IRTA_S(val)         (val & 0xf)
 #define DMA_IRTA_SIZE(val)      (1UL << (DMA_IRTA_S(val) + 1))
 
+/* IQH_REG */
+#define DMA_IQH_QH_SHIFT        4
+#define DMA_IQH_QH(val)         ((val >> 4) & 0x7fffULL)
+
+/* IQT_REG */
+#define DMA_IQT_QT_SHIFT        4
+#define DMA_IQT_QT(val)         ((val >> 4) & 0x7fffULL)
+#define DMA_IQT_RSVD            0xfffffffffff80007ULL
+
+/* IQA_REG */
+#define DMA_MGAW                39  /* Maximum Guest Address Width */
+#define DMA_IQA_ADDR(val)       (val & ~0xfffULL)
+#define DMA_IQA_QS(val)         (val & 0x7)
+#define DMA_IQA_ENTRY_PER_PAGE  (1 << 8)
+#define DMA_IQA_RSVD            (~((1ULL << DMA_MGAW) - 1) | 0xff8ULL)
+
+/* IECTL_REG */
+#define DMA_IECTL_IM_BIT 31
+#define DMA_IECTL_IM            (1 << DMA_IECTL_IM_BIT)
+#define DMA_IECTL_IP_BIT 30
+#define DMA_IECTL_IP (((u64)1) << DMA_IECTL_IP_BIT)
+
+/* ICS_REG */
+#define DMA_ICS_IWC_BIT         0
+#define DMA_ICS_IWC             (1 << DMA_ICS_IWC_BIT)
+
 /* PMEN_REG */
 #define DMA_PMEN_EPM    (((u32)1) << 31)
 #define DMA_PMEN_PRS    (((u32)1) << 0)
@@ -241,7 +267,8 @@
 #define DMA_FSTS_PPF (1U << DMA_FSTS_PPF_BIT)
 #define DMA_FSTS_AFO (1U << 2)
 #define DMA_FSTS_APF (1U << 3)
-#define DMA_FSTS_IQE (1U << 4)
+#define DMA_FSTS_IQE_BIT 4
+#define DMA_FSTS_IQE (1U << DMA_FSTS_IQE_BIT)
 #define DMA_FSTS_ICE (1U << 5)
 #define DMA_FSTS_ITE (1U << 6)
 #define DMA_FSTS_PRO_BIT 7
-- 
1.8.3.1



* [RFC PATCH V2 25/26] x86/vlapic: drop no longer suitable restriction to set x2apic id
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (23 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 24/26] x86/vvtd: Add queued invalidation (QI) support Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  2017-05-18  5:34 ` [RFC PATCH V2 26/26] x86/vvtd: save and restore emulated VT-d Lan Tianyu
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Lan Tianyu, andrew.cooper3, kevin.tian, jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

If the vlapic was in x2apic mode when it was saved, its x2apic id
should be set when restoring. Just drop the no longer suitable
restriction, as the existing comment says.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vlapic.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index cf8ee50..cc55473 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1374,25 +1374,11 @@ static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
  */
 static void lapic_load_fixup(struct vlapic *vlapic)
 {
-    uint32_t id = vlapic->loaded.id;
-
-    if ( vlapic_x2apic_mode(vlapic) && id && vlapic->loaded.ldr == 1 )
-    {
-        /*
-         * This is optional: ID != 0 contradicts LDR == 1. It's being added
-         * to aid in eventual debugging of issues arising from the fixup done
-         * here, but can be dropped as soon as it is found to conflict with
-         * other (future) changes.
-         */
-        if ( GET_xAPIC_ID(id) != vlapic_vcpu(vlapic)->vcpu_id * 2 ||
-             id != SET_xAPIC_ID(GET_xAPIC_ID(id)) )
-            printk(XENLOG_G_WARNING "%pv: bogus APIC ID %#x loaded\n",
-                   vlapic_vcpu(vlapic), id);
+    if ( vlapic_x2apic_mode(vlapic) )
         set_x2apic_id(vlapic);
-    }
     else /* Undo an eventual earlier fixup. */
     {
-        vlapic_set_reg(vlapic, APIC_ID, id);
+        vlapic_set_reg(vlapic, APIC_ID, vlapic->loaded.id);
         vlapic_set_reg(vlapic, APIC_LDR, vlapic->loaded.ldr);
     }
 }
-- 
1.8.3.1



* [RFC PATCH V2 26/26] x86/vvtd: save and restore emulated VT-d
  2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping fucntion of virtual vtd Lan Tianyu
                   ` (24 preceding siblings ...)
  2017-05-18  5:34 ` [RFC PATCH V2 25/26] x86/vlapic: drop no longer suitable restriction to set x2apic id Lan Tianyu
@ 2017-05-18  5:34 ` Lan Tianyu
  25 siblings, 0 replies; 42+ messages in thread
From: Lan Tianyu @ 2017-05-18  5:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Lan Tianyu, andrew.cooper3, kevin.tian, jbeulich, Chao Gao

From: Chao Gao <chao.gao@intel.com>

Wrap the relevant state in a new structure, hvm_hw_vvtd, following
the conventions of vlapic, vioapic, etc. Provide two save/restore
pairs: one for the register page and one for non-register state.
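
For context, both records end up in the regular HVM context blob, so a
consumer can locate them by walking the per-record descriptors. A rough
sketch of such a walk (illustration only; find_record() is a made-up helper,
the descriptor layout is the one declared in public/hvm/save.h):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    struct hvm_save_descriptor {    /* as declared in public/hvm/save.h */
        uint16_t typecode;
        uint16_t instance;
        uint32_t length;
    };

    /* Return the payload of the first record with the given typecode
     * (e.g. 22 for the new IOMMU record), or NULL if it is not present. */
    static const void *find_record(const uint8_t *buf, size_t size,
                                   uint16_t typecode)
    {
        size_t off = 0;

        while ( off + sizeof(struct hvm_save_descriptor) <= size )
        {
            struct hvm_save_descriptor d;

            memcpy(&d, buf + off, sizeof(d));
            off += sizeof(d);
            if ( d.typecode == typecode )
                return buf + off;
            off += d.length;        /* skip this record's payload */
        }

        return NULL;
    }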

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 xen/arch/x86/hvm/vvtd.c                | 98 ++++++++++++++++++++++------------
 xen/include/public/arch-x86/hvm/save.h | 24 ++++++++-
 2 files changed, 88 insertions(+), 34 deletions(-)

diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
index ce25a77..e35bc9e 100644
--- a/xen/arch/x86/hvm/vvtd.c
+++ b/xen/arch/x86/hvm/vvtd.c
@@ -20,6 +20,7 @@
 
 #include <xen/domain_page.h>
 #include <xen/lib.h>
+#include <xen/hvm/save.h>
 #include <xen/sched.h>
 #include <xen/types.h>
 #include <xen/viommu.h>
@@ -33,38 +34,25 @@
 #include <asm/p2m.h>
 #include <asm/system.h>
 #include <public/viommu.h>
+#include <public/hvm/save.h>
 
 #include "../../../drivers/passthrough/vtd/iommu.h"
 #include "../../../drivers/passthrough/vtd/vtd.h"
 
-struct hvm_hw_vvtd_regs {
-    uint8_t data[1024];
-};
-
 /* Status field of struct vvtd */
 #define VIOMMU_STATUS_IRQ_REMAPPING_ENABLED     (1 << 0)
 #define VIOMMU_STATUS_DMA_REMAPPING_ENABLED     (1 << 1)
 
 #define vvtd_irq_remapping_enabled(vvtd) \
-            (vvtd->status & VIOMMU_STATUS_IRQ_REMAPPING_ENABLED)
+            (vvtd->hw.status & VIOMMU_STATUS_IRQ_REMAPPING_ENABLED)
 
 struct vvtd {
-    /* VIOMMU_STATUS_XXX_REMAPPING_ENABLED */
-    int status;
-    /* Fault Recording index */
-    int frcd_idx;
     /* Address range of remapping hardware register-set */
     uint64_t base_addr;
     uint64_t length;
     /* Point back to the owner domain */
     struct domain *domain;
-    /* Is in Extended Interrupt Mode? */
-    bool eim;
-    /* Max remapping entries in IRT */
-    int irt_max_entry;
-    /* Interrupt remapping table base gfn */
-    uint64_t irt;
-
+    struct hvm_hw_vvtd hw;
     struct hvm_hw_vvtd_regs *regs;
     struct page_info *regs_page;
 };
@@ -369,12 +357,12 @@ static int vvtd_alloc_frcd(struct vvtd *vvtd)
 {
     int prev;
     /* Set the F bit to indicate the FRCD is in use. */
-    if ( vvtd_test_and_set_bit(vvtd, DMA_FRCD(vvtd->frcd_idx, DMA_FRCD3_OFFSET),
+    if ( vvtd_test_and_set_bit(vvtd, DMA_FRCD(vvtd->hw.frcd_idx, DMA_FRCD3_OFFSET),
                                DMA_FRCD_F_BIT) )
     {
-        prev = vvtd->frcd_idx;
-        vvtd->frcd_idx = (prev + 1) % DMAR_FRCD_REG_NR;
-        return vvtd->frcd_idx;
+        prev = vvtd->hw.frcd_idx;
+        vvtd->hw.frcd_idx = (prev + 1) % DMAR_FRCD_REG_NR;
+        return vvtd->hw.frcd_idx;
     }
     return -1;
 }
@@ -706,12 +694,12 @@ static int vvtd_handle_gcmd_ire(struct vvtd *vvtd, uint32_t val)
 
     if ( val & DMA_GCMD_IRE )
     {
-        vvtd->status |= VIOMMU_STATUS_IRQ_REMAPPING_ENABLED;
+        vvtd->hw.status |= VIOMMU_STATUS_IRQ_REMAPPING_ENABLED;
         __vvtd_set_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_IRES_BIT);
     }
     else
     {
-        vvtd->status |= ~VIOMMU_STATUS_IRQ_REMAPPING_ENABLED;
+        vvtd->hw.status &= ~VIOMMU_STATUS_IRQ_REMAPPING_ENABLED;
         __vvtd_clear_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_IRES_BIT);
     }
 
@@ -730,11 +718,11 @@ static int vvtd_handle_gcmd_sirtp(struct vvtd *vvtd, uint32_t val)
                    "active." );
 
     vvtd_get_reg_quad(vvtd, DMAR_IRTA_REG, irta);
-    vvtd->irt = DMA_IRTA_ADDR(irta) >> PAGE_SHIFT;
-    vvtd->irt_max_entry = DMA_IRTA_SIZE(irta);
-    vvtd->eim = DMA_IRTA_EIME(irta);
+    vvtd->hw.irt = DMA_IRTA_ADDR(irta) >> PAGE_SHIFT;
+    vvtd->hw.irt_max_entry = DMA_IRTA_SIZE(irta);
+    vvtd->hw.eim = DMA_IRTA_EIME(irta);
     VVTD_DEBUG(VVTD_DBG_RW, "Update IR info (addr=%lx eim=%d size=%d).",
-               vvtd->irt, vvtd->eim, vvtd->irt_max_entry);
+               vvtd->hw.irt, vvtd->hw.eim, vvtd->hw.irt_max_entry);
     __vvtd_set_bit(vvtd, DMAR_GSTS_REG, DMA_GSTS_SIRTPS_BIT);
 
     return X86EMUL_OKAY;
@@ -953,13 +941,13 @@ static int vvtd_get_entry(struct vvtd *vvtd,
 
     VVTD_DEBUG(VVTD_DBG_TRANS, "interpret a request with index %x", entry);
 
-    if ( entry > vvtd->irt_max_entry )
+    if ( entry > vvtd->hw.irt_max_entry )
     {
         ret = VTD_FR_IR_INDEX_OVER;
         goto handle_fault;
     }
 
-    ret = map_guest_page(vvtd->domain, vvtd->irt + (entry >> IREMAP_ENTRY_ORDER),
+    ret = map_guest_page(vvtd->domain, vvtd->hw.irt + (entry >> IREMAP_ENTRY_ORDER),
                          (void**)&irt_page);
     if ( ret )
     {
@@ -1084,6 +1072,49 @@ static int vvtd_get_irq_info(struct domain *d,
     return 0;
 }
 
+static int vvtd_load_regs(struct domain *d, hvm_domain_context_t *h)
+{
+    if ( !domain_vvtd(d) )
+        return -ENODEV;
+
+    if ( hvm_load_entry(IOMMU_REGS, h, domain_vvtd(d)->regs) )
+        return -EINVAL;
+
+    return 0;
+}
+
+static int vvtd_save_regs(struct domain *d, hvm_domain_context_t *h)
+{
+    if ( !domain_vvtd(d) )
+        return 0;
+
+    return hvm_save_entry(IOMMU_REGS, 0, h, domain_vvtd(d)->regs);
+}
+
+static int vvtd_load_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+    if ( !domain_vvtd(d) )
+        return -ENODEV;
+
+    if ( hvm_load_entry(IOMMU, h, &domain_vvtd(d)->hw) )
+        return -EINVAL;
+
+    return 0;
+}
+
+static int vvtd_save_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+    if ( !domain_vvtd(d) )
+        return 0;
+
+    return hvm_save_entry(IOMMU, 0, h, &domain_vvtd(d)->hw);
+}
+
+HVM_REGISTER_SAVE_RESTORE(IOMMU, vvtd_save_hidden, vvtd_load_hidden,
+                          1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOMMU_REGS, vvtd_save_regs, vvtd_load_regs,
+                          1, HVMSR_PER_DOM);
+
 static void vvtd_reset(struct vvtd *vvtd, uint64_t capability)
 {
     uint64_t cap, ecap;
@@ -1147,12 +1178,13 @@ static int vvtd_create(struct domain *d, struct viommu *viommu)
     vvtd->base_addr = viommu->base_address;
     vvtd->length = viommu->length;
     vvtd->domain = d;
-    vvtd->status = 0;
-    vvtd->eim = 0;
-    vvtd->irt = 0;
-    vvtd->irt_max_entry = 0;
-    vvtd->frcd_idx = 0;
+    vvtd->hw.status = 0;
+    vvtd->hw.eim = 0;
+    vvtd->hw.irt = 0;
+    vvtd->hw.irt_max_entry = 0;
+    vvtd->hw.frcd_idx = 0;
     register_mmio_handler(d, &vvtd_mmio_ops);
+    viommu->priv = (void *)vvtd;
     return 0;
 
 out2:
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 816973b..28fafc8 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -638,10 +638,32 @@ struct hvm_msr {
 
 #define CPU_MSR_CODE  20
 
+struct hvm_hw_vvtd_regs {
+    uint8_t data[1024];
+};
+
+DECLARE_HVM_SAVE_TYPE(IOMMU_REGS, 21, struct hvm_hw_vvtd_regs);
+
+struct hvm_hw_vvtd
+{
+    /* VIOMMU_STATUS_XXX_REMAPPING_ENABLED */
+    uint32_t status;
+    /* Fault Recording index */
+    uint32_t frcd_idx;
+    /* Is in Extended Interrupt Mode? */
+    uint32_t eim;
+    /* Max remapping entries in IRT */
+    uint32_t irt_max_entry;
+    /* Interrupt remapping table base gfn */
+    uint64_t irt;
+};
+
+DECLARE_HVM_SAVE_TYPE(IOMMU, 22, struct hvm_hw_vvtd);
+
 /* 
  * Largest type-code in use
  */
-#define HVM_SAVE_CODE_MAX 20
+#define HVM_SAVE_CODE_MAX 22
 
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
-- 
1.8.3.1



* Re: [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-05-18  5:34 ` [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  2017-05-30 15:42     ` Jan Beulich
  0 siblings, 1 reply; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, May 18, 2017 at 01:34:31AM -0400, Lan Tianyu wrote:
> This patch is to introduct an abstract layer for arch vIOMMU implementation
> to deal with requests from dom0. Arch vIOMMU code needs to provide callback
> to perform create, destroy and query capabilities operation.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/arch/x86/setup.c        |   1 +
>  xen/common/Kconfig          |  11 +++
>  xen/common/Makefile         |   1 +
>  xen/common/domain.c         |   3 +
>  xen/common/viommu.c         | 169 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/public/viommu.h |  49 +++++++++++++
>  xen/include/xen/sched.h     |   2 +
>  xen/include/xen/viommu.h    |  79 +++++++++++++++++++++
>  8 files changed, 315 insertions(+)
>  create mode 100644 xen/common/viommu.c
>  create mode 100644 xen/include/public/viommu.h
>  create mode 100644 xen/include/xen/viommu.h
> 
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index f7b9278..f204d71 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1513,6 +1513,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      early_msi_init();
>  
>      iommu_setup();    /* setup iommu if available */
> +    viommu_setup();
>  
>      smp_prepare_cpus(max_cpus);
>  
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index dc8e876..90e3741 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -73,6 +73,17 @@ config TMEM
>  
>  	  If unsure, say Y.
>  
> +config VIOMMU
> +	def_bool y
> +	prompt "Xen vIOMMU Support" if EXPERT = "y"
> +	depends on X86
> +	---help---
> +	 Virtual IOMMU provides interrupt remapping function for guest and
> +	 it allows guest to boot up more than 255 vcpus which requires interrupt
> +	 remapping function.
> +
> +	  If unsure, say Y.

Indentation. And this should be disabled by default.

> +
>  config XENOPROF
>  	def_bool y
>  	prompt "Xen Oprofile Support" if EXPERT = "y"
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 26c5a64..f61e579 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -61,6 +61,7 @@ obj-y += vm_event.o
>  obj-y += vmap.o
>  obj-y += vsprintf.o
>  obj-y += wait.o
> +obj-$(CONFIG_VIOMMU) += viommu.o

Please sort this list alphabetically.

>  obj-bin-y += warning.init.o
>  obj-$(CONFIG_XENOPROF) += xenoprof.o
>  obj-y += xmalloc_tlsf.o
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index b22aacc..d1f9b10 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -396,6 +396,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
>          spin_unlock(&domlist_update_lock);
>      }
>  
> +    if ( (err = viommu_init_domain(d)) != 0 )
> +        goto fail;
> +
>      return d;
>  
>   fail:
> diff --git a/xen/common/viommu.c b/xen/common/viommu.c
> new file mode 100644
> index 0000000..eadcecb
> --- /dev/null
> +++ b/xen/common/viommu.c
> @@ -0,0 +1,169 @@
> +/*
> + * common/viommu.c
> + * 
> + * Copyright (c) 2017 Intel Corporation
> + * Author: Lan Tianyu <tianyu.lan@intel.com> 
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/types.h>
> +#include <xen/sched.h>
> +#include <xen/spinlock.h>
[...]
> +
> +void viommu_unregister_type(u64 type)
> +{
> +    struct viommu_type *viommu_type = viommu_get_type(type);
> +
> +    if ( viommu_type )
> +    {
> +        spin_lock(&type_list_lock);
> +        list_del(&viommu_type->node);
> +        spin_unlock(&type_list_lock);
> +
> +        xfree(viommu_type);
> +    }
> +}
> +

Is the unregister function really useful?

Xen doesn't support modules. And I don't see the unregister function used
in your series.

If you don't actually care about dynamically loading and unloading ops,
I think the code can be simplified.
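
Something like the below (rough, untested sketch, made-up names) would avoid
the list and the lock entirely:

    /* No dynamic registration: resolve the ops statically per type. */
    static const struct viommu_ops *viommu_get_ops(uint64_t type)
    {
        switch ( type )
        {
        case VIOMMU_TYPE_INTEL_VTD:
            return &vvtd_ops;   /* hypothetical name for the vvtd ops */
        default:
            return NULL;
        }
    }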

> +int viommu_create(struct domain *d, u64 type, u64 base_address,
> +                  u64 length, u64 caps)
> +{
> +    struct viommu_info *info = &d->viommu;
> +    struct viommu *viommu;
> +    struct viommu_type *viommu_type = NULL;
> +    int rc;
> +
> +    viommu_type = viommu_get_type(type);
> +    if ( !viommu_type )
> +        return -EFAULT;

EINVAL

> +
> +    if ( !info || info->nr_viommu >= NR_VIOMMU_PER_DOMAIN
> +        || !viommu_type->ops || !viommu_type->ops->create )
> +        return -EINVAL;
> +
> +    viommu = xzalloc(struct viommu);
> +    if ( !viommu )
> +        return -ENOMEM;
> +
> +    viommu->base_address = base_address;
> +    viommu->length = length;
> +    viommu->caps = caps;
> +    viommu->ops = viommu_type->ops;
> +    viommu->viommu_id = info->nr_viommu;
> +
> +    info->viommu[info->nr_viommu] = viommu;
> +    info->nr_viommu++;
> +
> +    rc = viommu->ops->create(d, viommu);
> +    if ( rc < 0 )
> +    {

Presumably you also need to reset info->viommu in the error path.

Or even better, use viommu_destroy to handle the error path.
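
Something along these lines perhaps (untested, just to illustrate):

    rc = viommu->ops->create(d, viommu);
    if ( rc < 0 )
    {
        info->viommu[viommu->viommu_id] = NULL;
        info->nr_viommu--;
        xfree(viommu);
        return rc;
    }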

> +        xfree(viommu);
> +        return rc;
> +    }
> +
> +    return viommu->viommu_id;
> +}
> +
> +int viommu_destroy(struct domain *d, u32 viommu_id)
> +{
> +    struct viommu_info *info = &d->viommu;
> +
> +    if ( !info || viommu_id > info->nr_viommu || !info->viommu[viommu_id] )
> +        return -EINVAL;
> +
> +    if ( info->viommu[viommu_id]->ops->destroy(info->viommu[viommu_id]) )
> +        return -EFAULT;
> +
> +    info->viommu[viommu_id] = NULL;
> +    return 0;
> +}
> +
> +u64 viommu_query_caps(struct domain *d, u64 type)
> +{
> +    struct viommu_type *viommu_type = viommu_get_type(type);
> +
> +    if ( !viommu_type )
> +        return -EFAULT;

EINVAL

> +
> +    return viommu_type->ops->query_caps(d);
> +}
> +
> +int __init viommu_setup(void)
> +{
> +    INIT_LIST_HEAD(&type_list);
> +    spin_lock_init(&type_list_lock);
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * End:
> + */
> diff --git a/xen/include/public/viommu.h b/xen/include/public/viommu.h
> new file mode 100644
> index 0000000..a4f7c47
> --- /dev/null
> +++ b/xen/include/public/viommu.h
> @@ -0,0 +1,49 @@
> +/*
> + * viommu.h
> + *
> + * Virtual IOMMU information
> + *
> + * Copyright (c) 2017 Intel Corporation
> + *
> + * Permission is hereby granted, free of charge, to any person
> + * obtaining a copy of this software and associated documentation
> + * files (the "Software"), to deal in the Software without restriction,
> + * including without limitation the rights to use, copy, modify, merge,
> + * publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so,
> + * subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be
> + * included in all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
> + * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
> + * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
> + * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __XEN_PUBLIC_VIOMMU_H__
> +#define __XEN_PUBLIC_VIOMMU_H__
> +
> +/* VIOMMU type */
> +#define VIOMMU_TYPE_INTEL_VTD     (1 << 0)
> +
> +/* VIOMMU capabilities*/
> +#define VIOMMU_CAP_IRQ_REMAPPING  (1 << 0)
> +

1U in both cases.


* Re: [RFC PATCH V2 2/26] DOMCTL: Introduce new DOMCTL commands for vIOMMU support
  2017-05-18  5:34 ` [RFC PATCH V2 2/26] DOMCTL: Introduce new DOMCTL commands for vIOMMU support Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, May 18, 2017 at 01:34:32AM -0400, Lan Tianyu wrote:
> This patch is to introduce create, destroy and query capabilities
> command for vIOMMU. vIOMMU layer will deal with requests and call
> arch vIOMMU ops.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/common/domctl.c         |  3 +++
>  xen/common/viommu.c         | 35 +++++++++++++++++++++++++++++++++++
>  xen/include/public/domctl.h | 40 ++++++++++++++++++++++++++++++++++++++++
>  xen/include/xen/viommu.h    |  8 +++++++-
>  4 files changed, 85 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 951a5dc..a178544 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -1141,6 +1141,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>          if ( !ret )
>              copyback = 1;
>          break;
> +    case XEN_DOMCTL_viommu_op:
> +        ret = viommu_domctl(d, &op->u.viommu_op, &copyback);
> +        break;
>  
>      default:
>          ret = arch_do_domctl(op, d, u_domctl);
> diff --git a/xen/common/viommu.c b/xen/common/viommu.c
> index eadcecb..74afbf5 100644
> --- a/xen/common/viommu.c
> +++ b/xen/common/viommu.c
> @@ -30,6 +30,41 @@ struct viommu_type {
>      struct list_head node;
>  };
>  
> +int viommu_domctl(struct domain *d, struct xen_domctl_viommu_op *op,
> +                  bool_t *need_copy)

s/bool_t/bool/g

> +{
> +    int rc = -EINVAL;
> +
> +    switch ( op->cmd )
> +    {
> +    case XEN_DOMCTL_create_viommu:
> +		rc = viommu_create(d, op->u.create_viommu.viommu_type,
> +                           op->u.create_viommu.base_address,
> +                           op->u.create_viommu.length,
> +                           op->u.create_viommu.capabilities);

Indentation.

> +        if (rc >= 0) {

Style.

> +            op->u.create_viommu.viommu_id = rc;
> +            *need_copy = true;
> +        }
> +        break;
> +
> +    case XEN_DOMCTL_destroy_viommu:
> +        rc = viommu_destroy(d, op->u.destroy_viommu.viommu_id);
> +        break;
> +
> +    case XEN_DOMCTL_query_viommu_caps:
> +        op->u.query_caps.caps
> +                = viommu_query_caps(d, op->u.query_caps.viommu_type);
> +        *need_copy = true;
> +        break;
> +
> +    default:
> +        break;
> +    }
> +
> +    return rc;
> +}
> +
[...]
>  static inline int viommu_init_domain(struct domain *d) { return 0 };
> @@ -62,8 +64,12 @@ static inline int viommu_register_type(u64 type, struct viommu_ops * ops)
>  { return 0; };
>  static inline void viommu_unregister_type(u64 type) { };
>  static inline u64 viommu_query_caps(struct domain *d, u64 viommu_type)
> -                { return -ENODEV };
> +{ return -ENODEV };

Spurious change.


* Re: [RFC PATCH V2 3/26] VIOMMU: Add irq request callback to deal with irq remapping
  2017-05-18  5:34 ` [RFC PATCH V2 3/26] VIOMMU: Add irq request callback to deal with irq remapping Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, May 18, 2017 at 01:34:33AM -0400, Lan Tianyu wrote:
> This patch is to add irq request callback for platform implementation
> to deal with irq remapping request.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/common/viommu.c          | 15 +++++++++
>  xen/include/asm-x86/viommu.h | 73 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/xen/viommu.h     |  9 ++++++
>  3 files changed, 97 insertions(+)
>  create mode 100644 xen/include/asm-x86/viommu.h
> 
> diff --git a/xen/common/viommu.c b/xen/common/viommu.c
> index 74afbf5..4e3ecd7 100644
> --- a/xen/common/viommu.c
> +++ b/xen/common/viommu.c
> @@ -194,6 +194,21 @@ int __init viommu_setup(void)
>      return 0;
>  }
>  
> +int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
> +        struct irq_remapping_request *request)

Indentation.

> +{
> +    struct viommu_info *info = &d->viommu;
> +
> +    if ( !info || viommu_id > info->nr_viommu

">=" ?


* Re: [RFC PATCH V2 4/26] VIOMMU: Add get irq info callback to convert irq remapping request
  2017-05-18  5:34 ` [RFC PATCH V2 4/26] VIOMMU: Add get irq info callback to convert irq remapping request Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, chao.gao

On Thu, May 18, 2017 at 01:34:34AM -0400, Lan Tianyu wrote:
> This patch is to add get_irq_info callback for platform implementation
> to convert irq remapping request to irq info (E,G vector, dest, dest_mode
> and so on).
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/common/viommu.c          | 16 ++++++++++++++++
>  xen/include/asm-x86/viommu.h |  8 ++++++++
>  xen/include/xen/viommu.h     |  9 +++++++++
>  3 files changed, 33 insertions(+)
> 
> diff --git a/xen/common/viommu.c b/xen/common/viommu.c
> index 4e3ecd7..c6c9589 100644
> --- a/xen/common/viommu.c
> +++ b/xen/common/viommu.c
> @@ -209,6 +209,22 @@ int viommu_handle_irq_request(struct domain *d, u32 viommu_id,
>      return info->viommu[viommu_id]->ops->handle_irq_request(d, request);
>  }
>  
> +int viommu_get_irq_info(struct domain *d, u32 viommu_id,
> +                        struct irq_remapping_request *request,
> +                        struct irq_remapping_info *irq_info)
> +{
> +    struct viommu_info *info = &d->viommu;
> +
> +    if ( !info || viommu_id > info->nr_viommu

>= again?


* Re: [RFC PATCH V2 6/26] Tools/libxc: Add viommu operations in libxc
  2017-05-18  5:34 ` [RFC PATCH V2 6/26] Tools/libxc: Add viommu operations in libxc Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, Chao Gao

On Thu, May 18, 2017 at 01:34:36AM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> This patch is to add XEN_DOMCTL_viommu_op hypercall. This hypercall
> comprise three sub-command:
> - query capabilities of one specific type vIOMMU emulated by Xen
> - create vIOMMU in Xen hypervisor with viommu type, register range,
>     capability
> - destroy vIOMMU specified by viommu_id
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>

I skim-read this patch. The code looks reasonable. The final ack depends
on the hypercall interface.


* Re: [RFC PATCH V2 10/26] libxl: create vIOMMU during domain construction
  2017-05-18  5:34 ` [RFC PATCH V2 10/26] libxl: create vIOMMU during domain construction Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, Chao Gao

On Thu, May 18, 2017 at 01:34:40AM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> If guest is configured to have a vIOMMU, create it during domain construction.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  tools/libxl/libxl_arch.h   |  5 +++++
>  tools/libxl/libxl_arm.c    |  7 +++++++
>  tools/libxl/libxl_create.c |  4 ++++
>  tools/libxl/libxl_x86.c    | 24 ++++++++++++++++++++++++

Where is the change to libxl_types.idl?


* Re: [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM
  2017-05-18  5:34 ` [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  2017-05-30 15:46     ` Jan Beulich
  0 siblings, 1 reply; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, Chao Gao

On Thu, May 18, 2017 at 01:34:41AM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> This patch adds create/destroy/query function for the emulated VTD
> and adapts it to the common VIOMMU abstraction.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/arch/x86/hvm/Makefile           |   1 +
>  xen/arch/x86/hvm/vvtd.c             | 176 ++++++++++++++++++++++++++++++++++++
>  xen/drivers/passthrough/vtd/iommu.h | 102 ++++++++++++++++-----
>  xen/include/asm-x86/viommu.h        |   3 +
>  4 files changed, 259 insertions(+), 23 deletions(-)
>  create mode 100644 xen/arch/x86/hvm/vvtd.c
> 
> diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
> index 0a3d0f4..82a2030 100644
> --- a/xen/arch/x86/hvm/Makefile
> +++ b/xen/arch/x86/hvm/Makefile
> @@ -22,6 +22,7 @@ obj-y += rtc.o
>  obj-y += save.o
>  obj-y += stdvga.o
>  obj-y += vioapic.o
> +obj-y += vvtd.o

Please sort this.

>  obj-y += viridian.o
>  obj-y += vlapic.o
>  obj-y += vmsi.o
> diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
> new file mode 100644
> index 0000000..e364f2b
> --- /dev/null
> +++ b/xen/arch/x86/hvm/vvtd.c
> @@ -0,0 +1,176 @@
> +/*
> + * vvtd.c
> + *
> + * virtualize VTD for HVM.
> + *
> + * Copyright (C) 2017 Chao Gao, Intel Corporation.
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms and conditions of the GNU General Public
> + * License, version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public
> + * License along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/domain_page.h>
> +#include <xen/sched.h>
> +#include <xen/types.h>
> +#include <xen/viommu.h>
> +#include <xen/xmalloc.h>
> +#include <asm/current.h>
> +#include <asm/hvm/domain.h>
> +#include <asm/page.h>
> +#include <public/viommu.h>
> +
> +#include "../../../drivers/passthrough/vtd/iommu.h"
> +

Maybe you should move this header to include/asm-x86?

> +struct hvm_hw_vvtd_regs {
> +    uint8_t data[1024];
> +};
> +
> +/* Status field of struct vvtd */
> +#define VIOMMU_STATUS_IRQ_REMAPPING_ENABLED     (1 << 0)
> +#define VIOMMU_STATUS_DMA_REMAPPING_ENABLED     (1 << 1)
[...]
> diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
> index 72c1a2e..2e9dcaa 100644
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -23,31 +23,54 @@
>  #include <asm/msi.h>
>  
>  /*
> - * Intel IOMMU register specification per version 1.0 public spec.
> + * Intel IOMMU register specification per version 2.4 public spec.
>   */
>  

It would be better to have a separate patch to update the spec.


* Re: [RFC PATCH V2 12/26] X86/vvtd: Add MMIO handler for VVTD
  2017-05-18  5:34 ` [RFC PATCH V2 12/26] X86/vvtd: Add MMIO handler for VVTD Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, Chao Gao

On Thu, May 18, 2017 at 01:34:42AM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> This patch adds VVTD MMIO handler to deal with MMIO access.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  xen/arch/x86/hvm/vvtd.c | 127 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 127 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/vvtd.c b/xen/arch/x86/hvm/vvtd.c
> index e364f2b..b0a23ee 100644
> --- a/xen/arch/x86/hvm/vvtd.c
> +++ b/xen/arch/x86/hvm/vvtd.c
> @@ -50,6 +50,38 @@ struct vvtd {
>      struct page_info *regs_page;
>  };
>  
> +#define __DEBUG_VVTD__
> +#ifdef __DEBUG_VVTD__
> +extern unsigned int vvtd_debug_level;
> +#define VVTD_DBG_INFO     1
> +#define VVTD_DBG_TRANS    (1<<1)
> +#define VVTD_DBG_RW       (1<<2)
> +#define VVTD_DBG_FAULT    (1<<3)
> +#define VVTD_DBG_EOI      (1<<4)

Use 1U and add spaces around <<.


* Re: [RFC PATCH V2 21/26] Tools/libxc: Add a new interface to bind remapping format msi with pirq
  2017-05-18  5:34 ` [RFC PATCH V2 21/26] Tools/libxc: Add a new interface to bind remapping format msi with pirq Lan Tianyu
@ 2017-05-30 15:36   ` Wei Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Wei Liu @ 2017-05-30 15:36 UTC (permalink / raw)
  To: Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel,
	jbeulich, Chao Gao

On Thu, May 18, 2017 at 01:34:51AM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@intel.com>
> 
> Introduce a new binding relationship and provide a new interface to
> manage the new relationship.
> 
> Signed-off-by: Chao Gao <chao.gao@intel.com>
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  tools/libxc/include/xenctrl.h |  17 ++++++
>  tools/libxc/xc_domain.c       |  55 +++++++++++++++++
>  xen/drivers/passthrough/io.c  | 138 +++++++++++++++++++++++++++++++++++-------
>  xen/include/public/domctl.h   |   7 +++
>  xen/include/xen/hvm/irq.h     |   7 +++
>  5 files changed, 203 insertions(+), 21 deletions(-)
> 
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 6c8110c..465dc5b 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -1709,6 +1709,15 @@ int xc_domain_ioport_mapping(xc_interface *xch,
>                               uint32_t nr_ports,
>                               uint32_t add_mapping);
>  
> +int xc_domain_update_msi_irq_remapping(
> +    xc_interface *xch,
> +    uint32_t domid,
> +    uint32_t pirq,
> +    uint32_t source_id,
> +    uint32_t data,
> +    uint64_t addr,
> +    uint64_t gtable);

The indentation (here and later) is a bit unusual.


* Re: [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-05-30 15:36   ` Wei Liu
@ 2017-05-30 15:42     ` Jan Beulich
  2017-06-02  7:10       ` Lan Tianyu
  0 siblings, 1 reply; 42+ messages in thread
From: Jan Beulich @ 2017-05-30 15:42 UTC (permalink / raw)
  To: wei.liu2, Lan Tianyu
  Cc: andrew.cooper3, kevin.tian, xen-devel, ian.jackson, chao.gao

>>> On 30.05.17 at 17:36, <wei.liu2@citrix.com> wrote:
> On Thu, May 18, 2017 at 01:34:31AM -0400, Lan Tianyu wrote:
>> --- a/xen/common/Kconfig
>> +++ b/xen/common/Kconfig
>> @@ -73,6 +73,17 @@ config TMEM
>>  
>>  	  If unsure, say Y.
>>  
>> +config VIOMMU
>> +	def_bool y
>> +	prompt "Xen vIOMMU Support" if EXPERT = "y"
>> +	depends on X86
>> +	---help---
>> +	 Virtual IOMMU provides interrupt remapping function for guest and
>> +	 it allows guest to boot up more than 255 vcpus which requires interrupt
>> +	 remapping function.
>> +
>> +	  If unsure, say Y.
> 
> Indentation. And this should be disabled by default.

It's actually a question whether in our current scheme a Kconfig
option is appropriate here in the first place. I'd rather see this be
an always built feature which needs enabling on the command line
for the time being.
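
E.g. something like this (illustrative only; opt_viommu and the option name
are just placeholders):

    /* xen/common/viommu.c (sketch) */
    static bool __read_mostly opt_viommu;
    boolean_param("viommu", opt_viommu);

    /* ... and at the top of viommu_create(): */
    if ( !opt_viommu )
        return -ENODEV;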

Jan



* Re: [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM
  2017-05-30 15:36   ` Wei Liu
@ 2017-05-30 15:46     ` Jan Beulich
  0 siblings, 0 replies; 42+ messages in thread
From: Jan Beulich @ 2017-05-30 15:46 UTC (permalink / raw)
  To: wei.liu2, Lan Tianyu
  Cc: andrew.cooper3, kevin.tian, xen-devel, ian.jackson, Chao Gao

>>> On 30.05.17 at 17:36, <wei.liu2@citrix.com> wrote:
> On Thu, May 18, 2017 at 01:34:41AM -0400, Lan Tianyu wrote:
>> --- a/xen/arch/x86/hvm/Makefile
>> +++ b/xen/arch/x86/hvm/Makefile
>> @@ -22,6 +22,7 @@ obj-y += rtc.o
>>  obj-y += save.o
>>  obj-y += stdvga.o
>>  obj-y += vioapic.o
>> +obj-y += vvtd.o
> 
> Please sort this.

Also I guess this belongs into vmx/ ?

>> --- /dev/null
>> +++ b/xen/arch/x86/hvm/vvtd.c
>> @@ -0,0 +1,176 @@
>> +/*
>> + * vvtd.c
>> + *
>> + * virtualize VTD for HVM.
>> + *
>> + * Copyright (C) 2017 Chao Gao, Intel Corporation.
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms and conditions of the GNU General Public
>> + * License, version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> + * General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public
>> + * License along with this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/domain_page.h>
>> +#include <xen/sched.h>
>> +#include <xen/types.h>
>> +#include <xen/viommu.h>
>> +#include <xen/xmalloc.h>
>> +#include <asm/current.h>
>> +#include <asm/hvm/domain.h>
>> +#include <asm/page.h>
>> +#include <public/viommu.h>
>> +
>> +#include "../../../drivers/passthrough/vtd/iommu.h"
>> +
> 
> Maybe you should move this header to include/asm-x86?

Or, other than suggested above, the .c file should move into
that directory?

Jan



* Re: [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-05-30 15:42     ` Jan Beulich
@ 2017-06-02  7:10       ` Lan Tianyu
  2017-06-02  7:31         ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Lan Tianyu @ 2017-06-02  7:10 UTC (permalink / raw)
  To: Jan Beulich, wei.liu2, julien.grall
  Cc: andrew.cooper3, kevin.tian, xen-devel, ian.jackson, chao.gao


Hi Jan:

          Thanks for your review.


On 2017年05月30日 23:42, Jan Beulich wrote:
>>>> On 30.05.17 at 17:36, <wei.liu2@citrix.com> wrote:
>> On Thu, May 18, 2017 at 01:34:31AM -0400, Lan Tianyu wrote:
>>> --- a/xen/common/Kconfig
>>> +++ b/xen/common/Kconfig
>>> @@ -73,6 +73,17 @@ config TMEM
>>>  
>>>  	  If unsure, say Y.
>>>  
>>> +config VIOMMU
>>> +	def_bool y
>>> +	prompt "Xen vIOMMU Support" if EXPERT = "y"
>>> +	depends on X86
>>> +	---help---
>>> +	 Virtual IOMMU provides interrupt remapping function for guest and
>>> +	 it allows guest to boot up more than 255 vcpus which requires interrupt
>>> +	 remapping function.
>>> +
>>> +	  If unsure, say Y.
>> Indentation. And this should be disabled by default.
> It's actually a question whether in our current scheme a Kconfig
> option is appropriate here in the first place. I'd rather see this be
> an always built feature which needs enabling on the command line
> for the time being.

          In RFC V1, we made vIOMMU an always built-in feature, but ARM
and other arches don't have vIOMMU support.

Julien suggested introducing a new Kconfig option and only building
vIOMMU on x86. Either way does not affect vIOMMU functionality.

https://www.mail-archive.com/xen-devel@lists.xen.org/msg101421.html

Jan & Julien, we need to make a choice here.

-- 
Best regards
Tianyu Lan



* Re: [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-06-02  7:10       ` Lan Tianyu
@ 2017-06-02  7:31         ` Julien Grall
  2017-06-06  6:31           ` Jan Beulich
  0 siblings, 1 reply; 42+ messages in thread
From: Julien Grall @ 2017-06-02  7:31 UTC (permalink / raw)
  To: Lan Tianyu, Jan Beulich, wei.liu2
  Cc: andrew.cooper3, kevin.tian, xen-devel, ian.jackson, chao.gao

Hi,

On 06/02/2017 08:10 AM, Lan Tianyu wrote:
> Hi Jan:
> 
>            Thanks for your review.
> 
> 
> On 2017年05月30日 23:42, Jan Beulich wrote:
>>>>> On 30.05.17 at 17:36,<wei.liu2@citrix.com>  wrote:
>>> On Thu, May 18, 2017 at 01:34:31AM -0400, Lan Tianyu wrote:
>>>> --- a/xen/common/Kconfig
>>>> +++ b/xen/common/Kconfig
>>>> @@ -73,6 +73,17 @@ config TMEM
>>>>   
>>>>   	  If unsure, say Y.
>>>>   
>>>> +config VIOMMU
>>>> +	def_bool y
>>>> +	prompt "Xen vIOMMU Support" if EXPERT = "y"
>>>> +	depends on X86
>>>> +	---help---
>>>> +	 Virtual IOMMU provides interrupt remapping function for guest and
>>>> +	 it allows guest to boot up more than 255 vcpus which requires interrupt
>>>> +	 remapping function.
>>>> +
>>>> +	  If unsure, say Y.
>>> Indentation. And this should be disabled by default.
>> It's actually a question whether in our current scheme a Kconfig
>> option is appropriate here in the first place. I'd rather see this be
>> an always built feature which needs enabling on the command line
>> for the time being.
> 
>            In the RFC V1, we made vIOMMU an always-built-in feature, but
> ARM and other arches don't have vIOMMU support yet.
> 
> Julien suggested introducing a new Kconfig option and building vIOMMU
> only on x86. Either way, the vIOMMU functionality itself is unaffected.
> 
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg101421.html
> 
> Jan & Julien, we need to make a choice here.

We should definitely not compile in code that is not used by an
architecture. That would be dead code, or a potential source of bugs if it
is not disabled correctly.

Cheers,

-- 
Julien Grall
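
For readers following the thread: the usual way to meet this requirement is
to build the implementation only when CONFIG_VIOMMU is set and to give common
callers static inline stubs, so no dead code is compiled in on architectures
without vIOMMU. The header sketch below is illustrative only; the function
names and signatures are assumptions, not the interface of this series.

/* xen/include/xen/viommu.h (sketch) */
#include <xen/errno.h>
#include <xen/types.h>

struct domain;

#ifdef CONFIG_VIOMMU
int viommu_create(struct domain *d, uint64_t type,
                  uint64_t base_addr, uint64_t length);
int viommu_destroy(struct domain *d);
#else
static inline int viommu_create(struct domain *d, uint64_t type,
                                uint64_t base_addr, uint64_t length)
{
    return -ENODEV;     /* vIOMMU compiled out on this architecture */
}

static inline int viommu_destroy(struct domain *d)
{
    return -ENODEV;
}
#endif /* CONFIG_VIOMMU */

The object file would then only be pulled in via something like
"obj-$(CONFIG_VIOMMU) += viommu.o" in the relevant Makefile (again, an
assumption about the eventual layout).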


* Re: [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-06-02  7:31         ` Julien Grall
@ 2017-06-06  6:31           ` Jan Beulich
  2017-06-06 16:38             ` Julien Grall
  0 siblings, 1 reply; 42+ messages in thread
From: Jan Beulich @ 2017-06-06  6:31 UTC (permalink / raw)
  To: Julien Grall, Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel, chao.gao

>>> On 02.06.17 at 09:31, <julien.grall@arm.com> wrote:
> On 06/02/2017 08:10 AM, Lan Tianyu wrote:
>> On 2017年05月30日 23:42, Jan Beulich wrote:
>>>>>> On 30.05.17 at 17:36,<wei.liu2@citrix.com>  wrote:
>>>> On Thu, May 18, 2017 at 01:34:31AM -0400, Lan Tianyu wrote:
>>>>> --- a/xen/common/Kconfig
>>>>> +++ b/xen/common/Kconfig
>>>>> @@ -73,6 +73,17 @@ config TMEM
>>>>>   
>>>>>   	  If unsure, say Y.
>>>>>   
>>>>> +config VIOMMU
>>>>> +	def_bool y
>>>>> +	prompt "Xen vIOMMU Support" if EXPERT = "y"
>>>>> +	depends on X86
>>>>> +	---help---
>>>>> +	 Virtual IOMMU provides interrupt remapping function for guest and
>>>>> +	 it allows guest to boot up more than 255 vcpus which requires interrupt
>>>>> +	 remapping function.
>>>>> +
>>>>> +	  If unsure, say Y.
>>>> Indentation. And this should be disabled by default.
>>> It's actually a question whether in our current scheme a Kconfig
>>> option is appropriate here in the first place. I'd rather see this be
>>> an always built feature which needs enabling on the command line
>>> for the time being.
>> 
>>            In the RFC V1, we made vIOMMU an always-built-in feature, but
>> ARM and other arches don't have vIOMMU support yet.
>> 
>> Julien suggested introducing a new Kconfig option and building vIOMMU
>> only on x86. Either way, the vIOMMU functionality itself is unaffected.
>> 
>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg101421.html 
>> 
>> Jan & Julien, we need to make a choice here.
> We should definitely not compile in code that is not used by an
> architecture. That would be dead code, or a potential source of bugs if it
> is not disabled correctly.

I agree, but imo this should be a prompt-less Kconfig option,
selected under suitable conditions.

Jan
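
As a rough sketch of what Jan describes (a prompt-less option that
architectures opt into via "select"); the placement and surrounding symbols
are assumptions, not the final form of the patch:

# xen/common/Kconfig (sketch): no prompt, no default, nothing user-visible
config VIOMMU
	bool

# xen/arch/x86/Kconfig (sketch): the architecture implementing vIOMMU
# selects the option; other selects of the real X86 symbol are omitted
config X86
	def_bool y
	select VIOMMU

With this, ARM and other architectures never build the code, while x86 gets
it unconditionally until a runtime (command line) switch is added.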


* Re: [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities
  2017-06-06  6:31           ` Jan Beulich
@ 2017-06-06 16:38             ` Julien Grall
  0 siblings, 0 replies; 42+ messages in thread
From: Julien Grall @ 2017-06-06 16:38 UTC (permalink / raw)
  To: Jan Beulich, Lan Tianyu
  Cc: kevin.tian, wei.liu2, andrew.cooper3, ian.jackson, xen-devel, chao.gao

Hi Jan,

On 06/06/17 07:31, Jan Beulich wrote:
>>>> On 02.06.17 at 09:31, <julien.grall@arm.com> wrote:
>> On 06/02/2017 08:10 AM, Lan Tianyu wrote:
>>> On 2017年05月30日 23:42, Jan Beulich wrote:
>>>>>>> On 30.05.17 at 17:36,<wei.liu2@citrix.com>  wrote:
>>>>> On Thu, May 18, 2017 at 01:34:31AM -0400, Lan Tianyu wrote:
>>>>>> --- a/xen/common/Kconfig
>>>>>> +++ b/xen/common/Kconfig
>>>>>> @@ -73,6 +73,17 @@ config TMEM
>>>>>>
>>>>>>   	  If unsure, say Y.
>>>>>>
>>>>>> +config VIOMMU
>>>>>> +	def_bool y
>>>>>> +	prompt "Xen vIOMMU Support" if EXPERT = "y"
>>>>>> +	depends on X86
>>>>>> +	---help---
>>>>>> +	 Virtual IOMMU provides interrupt remapping function for guest and
>>>>>> +	 it allows guest to boot up more than 255 vcpus which requires interrupt
>>>>>> +	 remapping function.
>>>>>> +
>>>>>> +	  If unsure, say Y.
>>>>> Indentation. And this should be disabled by default.
>>>> It's actually a question whether in our current scheme a Kconfig
>>>> option is appropriate here in the first place. I'd rather see this be
>>>> an always built feature which needs enabling on the command line
>>>> for the time being.
>>>
>>>            In the RFC V1, we made vIOMMU an always-built-in feature, but
>>> ARM and other arches don't have vIOMMU support yet.
>>>
>>> Julien suggested introducing a new Kconfig option and building vIOMMU
>>> only on x86. Either way, the vIOMMU functionality itself is unaffected.
>>>
>>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg101421.html
>>>
>>> Jan & Julien, we need to make a choice here.
>> We should definitely not compile in code that is not used by an
>> architecture. That would be dead code, or a potential source of bugs if
>> it is not disabled correctly.
>
> I agree, but imo this should be a prompt-less Kconfig option,
> selected under suitable conditions.

I don't mind which way it is done, as long as it is disabled on ARM.

Cheers,

-- 
Julien Grall



Thread overview: 42+ messages
2017-05-18  5:34 [RFC PATCH V2 00/26] xen/vIOMMU: Add vIOMMU support with irq remapping function of virtual vtd Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 1/26] VIOMMU: Add vIOMMU helper functions to create, destroy and query capabilities Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-30 15:42     ` Jan Beulich
2017-06-02  7:10       ` Lan Tianyu
2017-06-02  7:31         ` Julien Grall
2017-06-06  6:31           ` Jan Beulich
2017-06-06 16:38             ` Julien Grall
2017-05-18  5:34 ` [RFC PATCH V2 2/26] DOMCTL: Introduce new DOMCTL commands for vIOMMU support Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 3/26] VIOMMU: Add irq request callback to deal with irq remapping Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 4/26] VIOMMU: Add get irq info callback to convert irq remapping request Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 5/26] Xen/doc: Add Xen virtual IOMMU doc Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 6/26] Tools/libxc: Add viommu operations in libxc Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 7/26] Tools/libacpi: Add DMA remapping reporting (DMAR) ACPI table structures Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 8/26] Tools/libacpi: Add new fields in acpi_config to build DMAR table Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 9/26] Tools/libacpi: Add a user configurable parameter to control vIOMMU attributes Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 10/26] libxl: create vIOMMU during domain construction Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 11/26] x86/hvm: Introduce a emulated VTD for HVM Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-30 15:46     ` Jan Beulich
2017-05-18  5:34 ` [RFC PATCH V2 12/26] X86/vvtd: Add MMIO handler for VVTD Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 13/26] X86/vvtd: Set Interrupt Remapping Table Pointer through GCMD Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 14/26] X86/vvtd: Process interrupt remapping request Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 15/26] x86/vvtd: decode interrupt attribute from IRTE Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 16/26] x86/vioapic: Hook interrupt delivery of vIOAPIC Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 17/26] X86/vvtd: Enable Queued Invalidation through GCMD Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 18/26] X86/vvtd: Enable Interrupt Remapping " Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 19/26] x86/vpt: Get interrupt vector through a vioapic interface Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 20/26] passthrough: move some fields of hvm_gmsi_info to a sub-structure Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 21/26] Tools/libxc: Add a new interface to bind remapping format msi with pirq Lan Tianyu
2017-05-30 15:36   ` Wei Liu
2017-05-18  5:34 ` [RFC PATCH V2 22/26] x86/vmsi: Hook delivering remapping format msi to guest Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 23/26] x86/vvtd: Handle interrupt translation faults Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 24/26] x86/vvtd: Add queued invalidation (QI) support Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 25/26] x86/vlapic: drop no longer suitable restriction to set x2apic id Lan Tianyu
2017-05-18  5:34 ` [RFC PATCH V2 26/26] x86/vvtd: save and restore emulated VT-d Lan Tianyu
