All of lore.kernel.org
 help / color / mirror / Atom feed
* [Qemu-devel] [V15 0/4] AMD IOMMU
@ 2016-08-02  8:39 David Kiarie
  2016-08-02  8:39 ` [Qemu-devel] [V15 1/4] hw/pci: Prepare for " David Kiarie
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-02  8:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst,
	David Kiarie

Hi all,

This patchset adds basic AMD IOMMU emulation support to Qemu. This version happens to have been delayed since I expected to send it together with IR code but it seems that may take even longer so I'm sending this first.

Changes since v13
   -Added an error to make AMD IOMMU incompatible with device assignment.[Alex]
   -Converted AMD IOMMU into a composite PCI and System Bus device. This helps with:
      -We can now inherit from X86 IOMMU base class(which is implemented as a System Bus device).
      -We can now reserve MMIO region for IOMMU without a BAR register and without a hack.

Changes since v12

   -Coding style fixes [Jan, Michael]
   -Error logging fix to avoid using a macro[Jan]
   -moved some PCI macros to PCI header[Jan]
   -Use a lookup table for MMIO register names when tracing[Jan]

Changes since V11
   -AMD IOMMU is not started with -device amd-iommu (with a dependency on Marcel's patches).
   -IOMMU commands are represented using bitfields which is less error prone and more readable[Peter]
   -Changed from debug fprintfs to tracing[Jan]

Changes since V10
 
   -Support for huge pages including some obscure AMD IOMMU feature that allows default page size override[Jan].
   -Fixed an issue with generation of interrupts. We noted that AMD IOMMU has BusMaster- and is therefore not able to generate interrupts like any other PCI device. We have resulted in writing directly to system address but this could be fixed by some patches which have not been merged yet.

Changes since v9

   -amd_iommu prefixes have been renamed to a shorter 'amdvi' both in the macros
    and in the functions/code. The register macros have not been moved to the 
    implementation file since almost the macros there are basically macros and I 
    reckoned renaming them should suffice.
   -taken care of byte order in the use of 'dma_memory_read'[Michael]
   -Taken care of invalid DTE entries to ensure no DMA unless a device is configured to allow it.
   -An issue with the emulate IOMMU defaulting to AMD_IOMMU has been fixed[Marcel]
   
You can test[1] this patches by starting with parameters 
    qemu-system-x86_64 -M -device amd-iommu -m 2G -enable-kvm -smp 4 -cpu host -hda file.img -soundhw ac97 
emulating whatever devices you want.

Not passing any command line parameters to linux should be enough to test this patches since the devices are basically
passes-through but to the 'host' (l1 guest). You can still go ahead pass command line parameter 'iommu=pt iommu=1'
and try to pass a device to L2 guest. This can also done without passing any iommu related parameters to the kernel. 

David Kiarie (4):
  hw/pci: Prepare for AMD IOMMU
  hw/i386/trace-events: Add AMD IOMMU trace events
  hw/i386: Introduce AMD IOMMU
  hw/i386: AMD IOMMU IVRS table

 hw/acpi/aml-build.c         |    2 +-
 hw/i386/Makefile.objs       |    1 +
 hw/i386/acpi-build.c        |   76 ++-
 hw/i386/amd_iommu.c         | 1397 +++++++++++++++++++++++++++++++++++++++++++
 hw/i386/amd_iommu.h         |  390 ++++++++++++
 hw/i386/trace-events        |   36 ++
 hw/i386/x86-iommu.c         |   19 +
 include/hw/acpi/aml-build.h |    1 +
 include/hw/i386/x86-iommu.h |   11 +
 include/hw/pci/pci.h        |    5 +-
 10 files changed, 1929 insertions(+), 9 deletions(-)
 create mode 100644 hw/i386/amd_iommu.c
 create mode 100644 hw/i386/amd_iommu.h

-- 
2.1.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Qemu-devel] [V15 1/4] hw/pci: Prepare for AMD IOMMU
  2016-08-02  8:39 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
@ 2016-08-02  8:39 ` David Kiarie
  2016-08-08  9:01   ` Peter Xu
  2016-08-02  8:39 ` [Qemu-devel] [V15 2/4] hw/i386/trace-events: Add AMD IOMMU trace events David Kiarie
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-02  8:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst,
	David Kiarie

Introduce PCI macros from for use by AMD IOMMU

Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
---
 include/hw/pci/pci.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 929ec2f..d47e0e6 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -11,11 +11,14 @@
 #include "hw/pci/pcie.h"
 
 /* PCI bus */
-
+#define PCI_BDF(bus, devfn)     ((((uint16_t)(bus)) << 8) | (devfn))
 #define PCI_DEVFN(slot, func)   ((((slot) & 0x1f) << 3) | ((func) & 0x07))
+#define PCI_BUS_NUM(x)          (((x) >> 8) & 0xff)
 #define PCI_SLOT(devfn)         (((devfn) >> 3) & 0x1f)
 #define PCI_FUNC(devfn)         ((devfn) & 0x07)
 #define PCI_BUILD_BDF(bus, devfn)     ((bus << 8) | (devfn))
+#define PCI_BUS_MAX             256
+#define PCI_DEVFN_MAX           256
 #define PCI_SLOT_MAX            32
 #define PCI_FUNC_MAX            8
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [V15 2/4] hw/i386/trace-events: Add AMD IOMMU trace events
  2016-08-02  8:39 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
  2016-08-02  8:39 ` [Qemu-devel] [V15 1/4] hw/pci: Prepare for " David Kiarie
@ 2016-08-02  8:39 ` David Kiarie
  2016-08-02  8:39 ` [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU David Kiarie
  2016-08-02  8:39 ` [Qemu-devel] [V15 4/4] hw/i386: AMD IOMMU IVRS table David Kiarie
  3 siblings, 0 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-02  8:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst,
	David Kiarie

Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
---
 hw/i386/trace-events | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/hw/i386/trace-events b/hw/i386/trace-events
index b4882c1..592de3a 100644
--- a/hw/i386/trace-events
+++ b/hw/i386/trace-events
@@ -13,3 +13,32 @@ mhp_pc_dimm_assigned_address(uint64_t addr) "0x%"PRIx64
 
 # hw/i386/x86-iommu.c
 x86_iommu_iec_notify(bool global, uint32_t index, uint32_t mask) "Notify IEC invalidation: global=%d index=%" PRIu32 " mask=%" PRIu32
+
+# hw/i386/amd_iommu.c
+amdvi_evntlog_fail(uint64_t addr, uint32_t head) "error: fail to write at addr 0x%"PRIx64 " +  offset 0x%"PRIx32
+amdvi_cache_update(uint16_t domid, uint32_t bus, uint32_t slot, uint32_t func, uint64_t gpa, uint64_t txaddr) " update iotlb domid 0x%"PRIx16" devid: %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
+amdvi_completion_wait_fail(uint64_t addr) "error: fail to write at address 0x%"PRIx64
+amdvi_mmio_write(const char *reg, uint64_t addr, unsigned size, uint64_t val, unsigned long offset) "%s write addr 0x%"PRIx64 ", size %d, val 0x%"PRIx64 ", offset 0x%"PRIx64
+amdvi_mmio_read(const char *reg, uint64_t addr, unsigned size, uint64_t offset) "%s read addr 0x%"PRIx64", size %d offset 0x%"PRIx64
+amdvi_command_error(uint64_t status) "error: Executing commands with command buffer disabled 0x%"PRIx64
+amdvi_command_read_fail(uint64_t addr, uint32_t head) "error: fail to access memory at 0x%"PRIx64" + 0x%"PRIu32
+amdvi_command_exec(uint32_t head, uint32_t tail, uint64_t buf) "command buffer head at 0x%"PRIx32 " command buffer tail at 0x%"PRIx32" command buffer base at 0x%" PRIx64
+amdvi_unhandled_command(uint8_t type) "unhandled command %d"
+amdvi_intr_inval(void) "Interrupt table invalidated"
+amdvi_iotlb_inval(void) "IOTLB pages invalidated"
+amdvi_prefetch_pages(void) "Pre-fetch of AMD-Vi pages requested"
+amdvi_pages_inval(uint16_t domid) "AMD-Vi pages for domain 0x%"PRIx16 " invalidated"
+amdvi_all_inval(void) "Invalidation of all AMD-Vi cache requested "
+amdvi_ppr_exec(void) "Execution of PPR queue requested "
+amdvi_devtab_inval(uint16_t bus, uint16_t slot, uint16_t func) "device table entry for devid: %02x:%02x.%x invalidated"
+amdvi_completion_wait(uint64_t addr, uint64_t data) "completion wait requested with store address 0x%"PRIx64" and store data 0x%"PRIx64
+amdvi_control_status(uint64_t val) "MMIO_STATUS state 0x%"PRIx64
+amdvi_iotlb_reset(void) "IOTLB exceed size limit - reset "
+amdvi_completion_wait_exec(uint64_t addr, uint64_t data) "completion wait requested with store address 0x%"PRIx64" and store data 0x%"PRIx64
+amdvi_dte_get_fail(uint64_t addr, uint32_t offset) "error: failed to access Device Entry devtab 0x%"PRIx64" offset 0x%"PRIx32
+amdvi_invalid_dte(uint64_t addr) "PTE entry at 0x%"PRIx64" is invalid "
+amdvi_get_pte_hwerror(uint64_t addr) "hardware error eccessing PTE at addr 0x%"PRIx64
+amdvi_mode_invalid(unsigned level, uint64_t addr)"error: translation level 0x%"PRIu8" translating addr 0x%"PRIx64
+amdvi_page_fault(uint64_t addr) "error: page fault accessing guest physical address 0x%"PRIx64
+amdvi_iotlb_hit(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "hit iotlb devid %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
+amdvi_translation_result(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "devid: %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-02  8:39 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
  2016-08-02  8:39 ` [Qemu-devel] [V15 1/4] hw/pci: Prepare for " David Kiarie
  2016-08-02  8:39 ` [Qemu-devel] [V15 2/4] hw/i386/trace-events: Add AMD IOMMU trace events David Kiarie
@ 2016-08-02  8:39 ` David Kiarie
  2016-08-09  5:44   ` Peter Xu
                     ` (2 more replies)
  2016-08-02  8:39 ` [Qemu-devel] [V15 4/4] hw/i386: AMD IOMMU IVRS table David Kiarie
  3 siblings, 3 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-02  8:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst,
	David Kiarie

Add AMD IOMMU emulaton to Qemu in addition to Intel IOMMU.
The IOMMU does basic translation, error checking and has a
minimal IOTLB implementation. This IOMMU bypassed the need
for target aborts by responding with IOMMU_NONE access rights
and exempts the region 0xfee00000-0xfeefffff from translation
as it is the q35 interrupt region.

We advertise features that are not yet implemented to please
the Linux IOMMU driver.

IOTLB aims at implementing commands on real IOMMUs which is
essential for debugging and may not offer any performance
benefits

Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
---
 hw/i386/Makefile.objs |    1 +
 hw/i386/amd_iommu.c   | 1397 +++++++++++++++++++++++++++++++++++++++++++++++++
 hw/i386/amd_iommu.h   |  390 ++++++++++++++
 hw/i386/trace-events  |    7 +
 4 files changed, 1795 insertions(+)
 create mode 100644 hw/i386/amd_iommu.c
 create mode 100644 hw/i386/amd_iommu.h

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index 90e94ff..909ead6 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -3,6 +3,7 @@ obj-y += multiboot.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
 obj-y += x86-iommu.o intel_iommu.o
+obj-y += amd_iommu.o
 obj-$(CONFIG_XEN) += ../xenpv/ xen/
 
 obj-y += kvmvapic.o
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
new file mode 100644
index 0000000..7b64dd7
--- /dev/null
+++ b/hw/i386/amd_iommu.c
@@ -0,0 +1,1397 @@
+/*
+ * QEMU emulation of AMD IOMMU (AMD-Vi)
+ *
+ * Copyright (C) 2011 Eduard - Gabriel Munteanu
+ * Copyright (C) 2015 David Kiarie, <davidkiarie4@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * Cache implementation inspired by hw/i386/intel_iommu.c
+ *
+ */
+#include "qemu/osdep.h"
+#include <math.h>
+#include "hw/pci/msi.h"
+#include "hw/i386/pc.h"
+#include "hw/i386/amd_iommu.h"
+#include "hw/pci/pci_bus.h"
+#include "trace.h"
+
+/* used AMD-Vi MMIO registers */
+const char *amdvi_mmio_low[] = {
+    "AMDVI_MMIO_DEVTAB_BASE",
+    "AMDVI_MMIO_CMDBUF_BASE",
+    "AMDVI_MMIO_EVTLOG_BASE",
+    "AMDVI_MMIO_CONTROL",
+    "AMDVI_MMIO_EXCL_BASE",
+    "AMDVI_MMIO_EXCL_LIMIT",
+    "AMDVI_MMIO_EXT_FEATURES",
+    "AMDVI_MMIO_PPR_BASE",
+    "UNHANDLED"
+};
+const char *amdvi_mmio_high[] = {
+    "AMDVI_MMIO_COMMAND_HEAD",
+    "AMDVI_MMIO_COMMAND_TAIL",
+    "AMDVI_MMIO_EVTLOG_HEAD",
+    "AMDVI_MMIO_EVTLOG_TAIL",
+    "AMDVI_MMIO_STATUS",
+    "AMDVI_MMIO_PPR_HEAD",
+    "AMDVI_MMIO_PPR_TAIL",
+    "UNHANDLED"
+};
+typedef struct AMDVIAddressSpace {
+    uint8_t bus_num;            /* bus number                           */
+    uint8_t devfn;              /* device function                      */
+    AMDVIState *iommu_state;    /* AMDVI - one per machine              */
+    MemoryRegion iommu;         /* Device's address translation region  */
+    MemoryRegion iommu_ir;      /* Device's interrupt remapping region  */
+    AddressSpace as;            /* device's corresponding address space */
+} AMDVIAddressSpace;
+
+/* AMDVI cache entry */
+typedef struct AMDVIIOTLBEntry {
+    uint64_t gfn;               /* guest frame number  */
+    uint16_t domid;             /* assigned domain id  */
+    uint16_t devid;             /* device owning entry */
+    uint64_t perms;             /* access permissions  */
+    uint64_t translated_addr;   /* translated address  */
+    uint64_t page_mask;         /* physical page size  */
+} AMDVIIOTLBEntry;
+
+/* serialize IOMMU command processing */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t type:4;               /* command type           */
+    uint64_t reserved:8;
+    uint64_t store_addr:49;        /* addr to write          */
+    uint64_t completion_flush:1;   /* allow more executions  */
+    uint64_t completion_int:1;     /* set MMIOWAITINT        */
+    uint64_t completion_store:1;   /* write data to address  */
+#else
+    uint64_t completion_store:1;
+    uint64_t completion_int:1;
+    uint64_t completion_flush:1;
+    uint64_t store_addr:49;
+    uint64_t reserved:8;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t store_data;           /* data to write          */
+} CMDCompletionWait;
+
+/* invalidate internal caches for devid */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t devid;                /* device to invalidate   */
+    uint64_t reserved_1:44;
+    uint64_t type:4;               /* command type           */
+#else
+    uint64_t devid;
+    uint64_t reserved_1:44;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t reserved_2;
+} CMDInvalDevEntry;
+
+/* invalidate a range of entries in IOMMU translation cache for devid */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t type:4;               /* command type           */
+    uint64_t reserved_2:12
+    uint64_t domid:16;             /* domain to inval for    */
+    uint64_t reserved_1:12;
+    uint64_t pasid:20;
+#else
+    uint64_t pasid:20;
+    uint64_t reserved_1:12;
+    uint64_t domid:16;
+    uint64_t reserved_2:12;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t address:51;          /* address to invalidate   */
+    uint64_t reserved_3:10;
+    uint64_t guest:1;             /* G/N invalidation        */
+    uint64_t pde:1;               /* invalidate cached ptes  */
+    uint64_t size:1               /* size of invalidation    */
+#else
+    uint64_t size:1;
+    uint64_t pde:1;
+    uint64_t guest:1;
+    uint64_t reserved_3:10;
+    uint64_t address:51;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} CMDInvalIommuPages;
+
+/* inval specified address for devid from remote IOTLB */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t type:4;            /* command type        */
+    uint64_t pasid_19_6:4;
+    uint64_t pasid_7_0:8;
+    uint64_t queuid:16;
+    uint64_t maxpend:8;
+    uint64_t pasid_15_8;
+    uint64_t devid:16;         /* related devid        */
+#else
+    uint64_t devid:16;
+    uint64_t pasid_15_8:8;
+    uint64_t maxpend:8;
+    uint64_t queuid:16;
+    uint64_t pasid_7_0:8;
+    uint64_t pasid_19_6:4;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t address:52;       /* invalidate addr      */
+    uint64_t reserved_2:9;
+    uint64_t guest:1;          /* G/N invalidate       */
+    uint64_t reserved_1:1;
+    uint64_t size:1            /* size of invalidation */
+#else
+    uint64_t size:1;
+    uint64_t reserved_1:1;
+    uint64_t guest:1;
+    uint64_t reserved_2:9;
+    uint64_t address:52;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} CMDInvalIOTLBPages;
+
+/* invalidate all cached interrupt info for devid */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t type:4;          /* command type        */
+    uint64_t reserved_1:44;
+    uint64_t devid:16;        /* related devid       */
+#else
+    uint64_t devid:16;
+    uint64_t reserved_1:44;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t reserved_2;
+} CMDInvalIntrTable;
+
+/* load adddress translation info for devid into translation cache */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t type:4;          /* command type       */
+    uint64_t reserved_2:8;
+    uint64_t pasid_19_0:20;
+    uint64_t pfcount_7_0:8;
+    uint64_t reserved_1:8;
+    uint64_t devid;           /* related devid      */
+#else
+    uint64_t devid;
+    uint64_t reserved_1:8;
+    uint64_t pfcount_7_0:8;
+    uint64_t pasid_19_0:20;
+    uint64_t reserved_2:8;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t address:52;     /* invalidate address       */
+    uint64_t reserved_5:7;
+    uint64_t inval:1;        /* inval matching entries   */
+    uint64_t reserved_4:1;
+    uint64_t guest:1;        /* G/N invalidate           */
+    uint64_t reserved_3:1;
+    uint64_t size:1;         /* prefetched page size     */
+#else
+    uint64_t size:1;
+    uint64_t reserved_3:1;
+    uint64_t guest:1;
+    uint64_t reserved_4:1;
+    uint64_t inval:1;
+    uint64_t reserved_5:7;
+    uint64_t address:52;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} CMDPrefetchPages;
+
+/* clear all address translation/interrupt remapping caches */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint64_t type:4;              /* command type       */
+    uint64_t reserved_1:60;
+#else
+    uint64_t reserved_1:60;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t reserved_2;
+} CMDInvalIommuAll;
+
+/* issue a PCIe completion packet for devid */
+typedef struct QEMU_PACKED {
+#ifdef HOST_WORDS_BIGENDIAN
+    uint32_t devid;               /* related devid      */
+    uint32_t reserved_1;
+#else
+    uint32_t reserved_1;
+    uint32_t devid;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef HOST_WORDS_BIGENDIAN
+    uint32_t type:4;              /* command type       */
+    uint32_t reserved_2:8;
+    uint32_t pasid_19_0:20
+#else
+    uint32_t pasid_19_0:20;
+    uint32_t reserved_2:8;
+    uint32_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef HOST_WORDS_BIGENDIAN
+    uint32_t reserved_3:29;
+    uint32_t guest:1;
+    uint32_t reserved_4:2;
+#else
+    uint32_t reserved_3:2;
+    uint32_t guest:1;
+    uint32_t reserved_4:29;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef HOST_WORDS_BIGENDIAN
+    uint32_t reserved_5:16;
+    uint32_t completion_tag:16    /* PCIe PRI informatin */
+#else
+    uint32_t completion_tag:16;
+    uint32_t reserved_5:16;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} CMDCompletePPR;
+
+/* configure MMIO registers at startup/reset */
+static void amdvi_set_quad(AMDVIState *s, hwaddr addr, uint64_t val,
+                           uint64_t romask, uint64_t w1cmask)
+{
+    stq_le_p(&s->mmior[addr], val);
+    stq_le_p(&s->romask[addr], romask);
+    stq_le_p(&s->w1cmask[addr], w1cmask);
+}
+
+static uint16_t amdvi_readw(AMDVIState *s, hwaddr addr)
+{
+    return lduw_le_p(&s->mmior[addr]);
+}
+
+static uint32_t amdvi_readl(AMDVIState *s, hwaddr addr)
+{
+    return ldl_le_p(&s->mmior[addr]);
+}
+
+static uint64_t amdvi_readq(AMDVIState *s, hwaddr addr)
+{
+    return ldq_le_p(&s->mmior[addr]);
+}
+
+/* internal write */
+static void amdvi_writeq_raw(AMDVIState *s, uint64_t val, hwaddr addr)
+{
+    stq_le_p(&s->mmior[addr], val);
+}
+
+/* external write */
+static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
+{
+    uint16_t romask = lduw_le_p(&s->romask[addr]);
+    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
+    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
+    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
+}
+
+static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
+{
+    uint32_t romask = ldl_le_p(&s->romask[addr]);
+    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
+    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
+    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
+}
+
+static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    uint64_t romask = ldq_le_p(&s->romask[addr]);
+    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
+    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
+    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
+}
+
+/* OR a 64-bit register with a 64-bit value */
+static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    return amdvi_readq(s, addr) | val;
+}
+
+/* OR a 64-bit register with a 64-bit value storing result in the register */
+static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
+}
+
+/* AND a 64-bit register with a 64-bit value storing result in the register */
+static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+   amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) & val);
+}
+
+static void amdvi_generate_msi_interrupt(AMDVIState *s)
+{
+    MSIMessage msg;
+    if (msi_enabled(&s->pci.dev)) {
+        msg = msi_get_message(&s->pci.dev, 0);
+        address_space_stl_le(&address_space_memory, msg.address, msg.data,
+                         MEMTXATTRS_UNSPECIFIED, NULL);
+    }
+}
+
+static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
+{
+    /* event logging not enabled */
+    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
+        AMDVI_MMIO_STATUS_EVT_OVF)) {
+        return;
+    }
+
+    /* event log buffer full */
+    if (s->evtlog_tail >= s->evtlog_len) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_OVF);
+        /* generate interrupt */
+        amdvi_generate_msi_interrupt(s);
+        return;
+    }
+
+    if (dma_memory_write(&address_space_memory, s->evtlog_len + s->evtlog_tail,
+        &evt, AMDVI_EVENT_LEN)) {
+        trace_amdvi_evntlog_fail(s->evtlog, s->evtlog_tail);
+    }
+
+    s->evtlog_tail += AMDVI_EVENT_LEN;
+    amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
+    amdvi_generate_msi_interrupt(s);
+}
+
+static void amdvi_setevent_bits(uint64_t *buffer, uint64_t value, int start,
+                                int length)
+{
+    int index = start / 64, bitpos = start % 64;
+    uint64_t mask = ((1 << length) - 1) << bitpos;
+    buffer[index] &= ~mask;
+    buffer[index] |= (value << bitpos) & mask;
+}
+/*
+ * AMDVi event structure
+ *    0:15   -> DeviceID
+ *    55:63  -> event type + miscellaneous info
+ *    64:127 -> related address
+ */
+static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t addr,
+                               uint16_t info)
+{
+    amdvi_setevent_bits(evt, devid, 0, 16);
+    amdvi_setevent_bits(evt, info, 55, 8);
+    amdvi_setevent_bits(evt, addr, 63, 64);
+}
+/* log an error encountered page-walking
+ *
+ * @addr: virtual address in translation request
+ */
+static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
+                             hwaddr addr, uint16_t info)
+{
+    uint64_t evt[4];
+
+    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
+    amdvi_encode_event(evt, devid, addr, info);
+    amdvi_log_event(s, evt);
+    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
+            PCI_STATUS_SIG_TARGET_ABORT);
+}
+/*
+ * log a master abort accessing device table
+ *  @devtab : address of device table entry
+ *  @info : error flags
+ */
+static void amdvi_log_devtab_error(AMDVIState *s, uint16_t devid,
+                                   hwaddr devtab, uint16_t info)
+{
+    uint64_t evt[4];
+
+    info |= AMDVI_EVENT_DEV_TAB_HW_ERROR;
+
+    amdvi_encode_event(evt, devid, devtab, info);
+    amdvi_log_event(s, evt);
+    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
+            PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+/* log an event trying to access command buffer
+ *   @addr : address that couldn't be accessed
+ */
+static void amdvi_log_command_error(AMDVIState *s, hwaddr addr)
+{
+    uint64_t evt[4], info = AMDVI_EVENT_COMMAND_HW_ERROR;
+
+    amdvi_encode_event(evt, 0, addr, info);
+    amdvi_log_event(s, evt);
+    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
+            PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+/* log an illegal comand event
+ *   @addr : address of illegal command
+ */
+static void amdvi_log_illegalcom_error(AMDVIState *s, uint16_t info,
+                                       hwaddr addr)
+{
+    uint64_t evt[4];
+
+    info |= AMDVI_EVENT_ILLEGAL_COMMAND_ERROR;
+    amdvi_encode_event(evt, 0, addr, info);
+    amdvi_log_event(s, evt);
+}
+
+/* log an error accessing device table
+ *
+ *  @devid : device owning the table entry
+ *  @devtab : address of device table entry
+ *  @info : error flags
+ */
+static void amdvi_log_illegaldevtab_error(AMDVIState *s, uint16_t devid,
+                                          hwaddr addr, uint16_t info)
+{
+    uint64_t evt[4];
+
+    info |= AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY;
+    amdvi_encode_event(evt, devid, addr, info);
+    amdvi_log_event(s, evt);
+}
+
+/* log an error accessing a PTE entry
+ * @addr : address that couldn't be accessed
+ */
+static void amdvi_log_pagetab_error(AMDVIState *s, uint16_t devid,
+                                    hwaddr addr, uint16_t info)
+{
+    uint64_t evt[4];
+
+    info |= AMDVI_EVENT_PAGE_TAB_HW_ERROR;
+    amdvi_encode_event(evt, devid, addr, info);
+    amdvi_log_event(s, evt);
+    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
+             PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+static gboolean amdvi_uint64_equal(gconstpointer v1, gconstpointer v2)
+{
+    return *((const uint64_t *)v1) == *((const uint64_t *)v2);
+}
+
+static guint amdvi_uint64_hash(gconstpointer v)
+{
+    return (guint)*(const uint64_t *)v;
+}
+
+static AMDVIIOTLBEntry *amdvi_iotlb_lookup(AMDVIState *s, hwaddr addr,
+                                           uint64_t devid)
+{
+    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
+                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
+    return g_hash_table_lookup(s->iotlb, &key);
+}
+
+static void amdvi_iotlb_reset(AMDVIState *s)
+{
+    assert(s->iotlb);
+    g_hash_table_remove_all(s->iotlb);
+}
+
+static gboolean amdvi_iotlb_remove_by_devid(gpointer key, gpointer value,
+                                            gpointer user_data)
+{
+    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
+    uint16_t devid = *(uint16_t *)user_data;
+    return entry->devid == devid;
+}
+
+static void amdvi_iotlb_remove_page(AMDVIState *s, hwaddr addr,
+                                    uint64_t devid)
+{
+    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
+                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
+    g_hash_table_remove(s->iotlb, &key);
+}
+
+static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
+                               uint64_t gpa, IOMMUTLBEntry to_cache,
+                               uint16_t domid)
+{
+    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
+    uint64_t *key = g_malloc(sizeof(key));
+    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
+
+    /* don't cache erroneous translations */
+    if (to_cache.perm != IOMMU_NONE) {
+        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
+                PCI_FUNC(devid), gpa, to_cache.translated_addr);
+
+        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
+            trace_amdvi_iotlb_reset();
+            amdvi_iotlb_reset(s);
+        }
+
+        entry->gfn = gfn;
+        entry->domid = domid;
+        entry->perms = to_cache.perm;
+        entry->translated_addr = to_cache.translated_addr;
+        entry->page_mask = to_cache.addr_mask;
+        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
+        g_hash_table_replace(s->iotlb, key, entry);
+    }
+}
+
+static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait *wait)
+{
+    /* pad the last 3 bits */
+    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
+    uint64_t data = cpu_to_le64(wait->store_data);
+
+    if (wait->reserved) {
+        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf + s->cmdbuf_head);
+    }
+
+    if (wait->completion_store) {
+        if (dma_memory_write(&address_space_memory, addr, &data,
+            AMDVI_COMPLETION_DATA_SIZE))
+        {
+            trace_amdvi_completion_wait_fail(addr);
+        }
+    }
+
+    /* set completion interrupt */
+    if (wait->completion_int) {
+        amdvi_orq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
+        /* generate interrupt */
+        amdvi_generate_msi_interrupt(s);
+    }
+
+    trace_amdvi_completion_wait(addr, data);
+}
+
+/* log error without aborting since linux seems to be using reserved bits */
+static void amdvi_inval_devtab_entry(AMDVIState *s, void *cmd)
+{
+    CMDInvalIntrTable *inval = (CMDInvalIntrTable *)cmd;
+    /* This command should invalidate internal caches of which there isn't */
+    if (inval->reserved_1 || inval->reserved_2) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+    }
+    trace_amdvi_devtab_inval(PCI_BUS_NUM(inval->devid), PCI_SLOT(inval->devid),
+            PCI_FUNC(inval->devid));
+}
+
+static void amdvi_complete_ppr(AMDVIState *s, void *cmd)
+{
+    CMDCompletePPR *pprcomp = (CMDCompletePPR *)cmd;
+
+    if (pprcomp->reserved_1 || pprcomp->reserved_2 || pprcomp->reserved_3 ||
+        pprcomp->reserved_4 || pprcomp->reserved_5) {
+        amdvi_log_illegalcom_error(s, pprcomp->type, s->cmdbuf +
+                s->cmdbuf_head);
+    }
+    trace_amdvi_ppr_exec();
+}
+
+static void amdvi_inval_all(AMDVIState *s, CMDInvalIommuAll *inval)
+{
+    if (inval->reserved_2 || inval->reserved_1) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+    }
+
+    amdvi_iotlb_reset(s);
+    trace_amdvi_all_inval();
+}
+
+static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
+                                            gpointer user_data)
+{
+    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
+    uint16_t domid = *(uint16_t *)user_data;
+    return entry->domid == domid;
+}
+
+/* we don't have devid - we can't remove pages by address */
+static void amdvi_inval_pages(AMDVIState *s, CMDInvalIommuPages *inval)
+{
+    uint16_t domid = inval->domid;
+
+    if (inval->reserved_1 || inval->reserved_2 || inval->reserved_3) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+    }
+
+    g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_domid,
+                                &domid);
+    trace_amdvi_pages_inval(inval->domid);
+}
+
+static void amdvi_prefetch_pages(AMDVIState *s, CMDPrefetchPages *prefetch)
+{
+    if (prefetch->reserved_1 || prefetch->reserved_2 || prefetch->reserved_3
+        || prefetch->reserved_4 || prefetch->reserved_5) {
+        amdvi_log_illegalcom_error(s, prefetch->type, s->cmdbuf +
+                s->cmdbuf_head);
+    }
+    trace_amdvi_prefetch_pages();
+}
+
+static void amdvi_inval_inttable(AMDVIState *s, CMDInvalIntrTable *inval)
+{
+    if (inval->reserved_1 || inval->reserved_2) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+        return;
+    }
+    trace_amdvi_intr_inval();
+}
+
+/* FIXME: Try to work with the specified size instead of all the pages
+ * when the S bit is on
+ */
+static void iommu_inval_iotlb(AMDVIState *s, CMDInvalIOTLBPages *inval)
+{
+    uint16_t devid = inval->devid;
+
+    if (inval->reserved_1 || inval->reserved_2) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+        return;
+    }
+
+    if (inval->size) {
+        g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_devid,
+                                    &devid);
+    } else {
+        amdvi_iotlb_remove_page(s, inval->address << 12, inval->devid);
+    }
+    trace_amdvi_iotlb_inval();
+}
+
+/* not honouring reserved bits is regarded as an illegal command */
+static void amdvi_cmdbuf_exec(AMDVIState *s)
+{
+    CMDCompletionWait cmd;
+
+    if (dma_memory_read(&address_space_memory, s->cmdbuf + s->cmdbuf_head,
+        &cmd, AMDVI_COMMAND_SIZE)) {
+        trace_amdvi_command_read_fail(s->cmdbuf, s->cmdbuf_head);
+        amdvi_log_command_error(s, s->cmdbuf + s->cmdbuf_head);
+        return;
+    }
+
+    switch (cmd.type) {
+    case AMDVI_CMD_COMPLETION_WAIT:
+        amdvi_completion_wait(s, (CMDCompletionWait *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_DEVTAB_ENTRY:
+        amdvi_inval_devtab_entry(s, (CMDInvalDevEntry *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_AMDVI_PAGES:
+        amdvi_inval_pages(s, (CMDInvalIommuPages *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_IOTLB_PAGES:
+        iommu_inval_iotlb(s, (CMDInvalIOTLBPages *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_INTR_TABLE:
+        amdvi_inval_inttable(s, (CMDInvalIntrTable *)&cmd);
+        break;
+    case AMDVI_CMD_PREFETCH_AMDVI_PAGES:
+        amdvi_prefetch_pages(s, (CMDPrefetchPages *)&cmd);
+        break;
+    case AMDVI_CMD_COMPLETE_PPR_REQUEST:
+        amdvi_complete_ppr(s, (CMDCompletePPR *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_AMDVI_ALL:
+        amdvi_inval_all(s, (CMDInvalIommuAll *)&cmd);
+        break;
+    default:
+        trace_amdvi_unhandled_command(cmd.type);
+        /* log illegal command */
+        amdvi_log_illegalcom_error(s, cmd.type,
+                                   s->cmdbuf + s->cmdbuf_head);
+    }
+}
+
+static void amdvi_cmdbuf_run(AMDVIState *s)
+{
+    if (!s->cmdbuf_enabled) {
+        trace_amdvi_command_error(amdvi_readq(s, AMDVI_MMIO_CONTROL));
+        return;
+    }
+
+    /* check if there is work to do. */
+    while (s->cmdbuf_head != s->cmdbuf_tail) {
+         trace_amdvi_command_exec(s->cmdbuf_head, s->cmdbuf_tail, s->cmdbuf);
+         amdvi_cmdbuf_exec(s);
+         s->cmdbuf_head += AMDVI_COMMAND_SIZE;
+         amdvi_writeq_raw(s, s->cmdbuf_head, AMDVI_MMIO_COMMAND_HEAD);
+
+        /* wrap head pointer */
+        if (s->cmdbuf_head >= s->cmdbuf_len * AMDVI_COMMAND_SIZE) {
+            s->cmdbuf_head = 0;
+        }
+    }
+}
+
+static void amdvi_mmio_trace(hwaddr addr, unsigned size)
+{
+    uint8_t index = addr & ~0x2000;
+
+    if ((addr & 0x2000)) {
+        /* high table */
+        index = index >= AMDVI_MMIO_REGS_HIGH ? AMDVI_MMIO_REGS_HIGH : index;
+        trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
+    } else {
+        index = index >= AMDVI_MMIO_REGS_LOW ? AMDVI_MMIO_REGS_LOW : index;
+        trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
+    }
+}
+
+static uint64_t amdvi_mmio_read(void *opaque, hwaddr addr, unsigned size)
+{
+    AMDVIState *s = opaque;
+
+    uint64_t val = -1;
+    if (addr + size > AMDVI_MMIO_SIZE) {
+        trace_amdvi_mmio_read("error: addr outside region: max ",
+                (uint64_t)AMDVI_MMIO_SIZE, addr, size);
+        return (uint64_t)-1;
+    }
+
+    if (size == 2) {
+        val = amdvi_readw(s, addr);
+    } else if (size == 4) {
+        val = amdvi_readl(s, addr);
+    } else if (size == 8) {
+        val = amdvi_readq(s, addr);
+    }
+    amdvi_mmio_trace(addr, size);
+
+    return val;
+}
+
+static void amdvi_handle_control_write(AMDVIState *s)
+{
+    unsigned long control = amdvi_readq(s, AMDVI_MMIO_CONTROL);
+    s->enabled = !!(control & AMDVI_MMIO_CONTROL_AMDVIEN);
+
+    s->ats_enabled = !!(control & AMDVI_MMIO_CONTROL_HTTUNEN);
+    s->evtlog_enabled = s->enabled && !!(control &
+                        AMDVI_MMIO_CONTROL_EVENTLOGEN);
+
+    s->evtlog_intr = !!(control & AMDVI_MMIO_CONTROL_EVENTINTEN);
+    s->completion_wait_intr = !!(control & AMDVI_MMIO_CONTROL_COMWAITINTEN);
+    s->cmdbuf_enabled = s->enabled && !!(control &
+                        AMDVI_MMIO_CONTROL_CMDBUFLEN);
+
+    /* update the flags depending on the control register */
+    if (s->cmdbuf_enabled) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_CMDBUF_RUN);
+    } else {
+        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_CMDBUF_RUN);
+    }
+    if (s->evtlog_enabled) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_RUN);
+    } else {
+        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_EVT_RUN);
+    }
+
+    trace_amdvi_control_status(control);
+    amdvi_cmdbuf_run(s);
+}
+
+static inline void amdvi_handle_devtab_write(AMDVIState *s)
+
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_DEVICE_TABLE);
+    s->devtab = (val & AMDVI_MMIO_DEVTAB_BASE_MASK);
+
+    /* set device table length */
+    s->devtab_len = ((val & AMDVI_MMIO_DEVTAB_SIZE_MASK) + 1 *
+                    (AMDVI_MMIO_DEVTAB_SIZE_UNIT /
+                     AMDVI_MMIO_DEVTAB_ENTRY_SIZE));
+}
+
+static inline void amdvi_handle_cmdhead_write(AMDVIState *s)
+{
+    s->cmdbuf_head = amdvi_readq(s, AMDVI_MMIO_COMMAND_HEAD)
+                     & AMDVI_MMIO_CMDBUF_HEAD_MASK;
+    amdvi_cmdbuf_run(s);
+}
+
+static inline void amdvi_handle_cmdbase_write(AMDVIState *s)
+{
+    s->cmdbuf = amdvi_readq(s, AMDVI_MMIO_COMMAND_BASE)
+                & AMDVI_MMIO_CMDBUF_BASE_MASK;
+    s->cmdbuf_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_CMDBUF_SIZE_BYTE)
+                    & AMDVI_MMIO_CMDBUF_SIZE_MASK);
+    s->cmdbuf_head = s->cmdbuf_tail = 0;
+}
+
+static inline void amdvi_handle_cmdtail_write(AMDVIState *s)
+{
+    s->cmdbuf_tail = amdvi_readq(s, AMDVI_MMIO_COMMAND_TAIL)
+                     & AMDVI_MMIO_CMDBUF_TAIL_MASK;
+    amdvi_cmdbuf_run(s);
+}
+
+static inline void amdvi_handle_excllim_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EXCL_LIMIT);
+    s->excl_limit = (val & AMDVI_MMIO_EXCL_LIMIT_MASK) |
+                    AMDVI_MMIO_EXCL_LIMIT_LOW;
+}
+
+static inline void amdvi_handle_evtbase_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_BASE);
+    s->evtlog = val & AMDVI_MMIO_EVTLOG_BASE_MASK;
+    s->evtlog_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_EVTLOG_SIZE_BYTE)
+                    & AMDVI_MMIO_EVTLOG_SIZE_MASK);
+}
+
+static inline void amdvi_handle_evttail_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_TAIL);
+    s->evtlog_tail = val & AMDVI_MMIO_EVTLOG_TAIL_MASK;
+}
+
+static inline void amdvi_handle_evthead_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_HEAD);
+    s->evtlog_head = val & AMDVI_MMIO_EVTLOG_HEAD_MASK;
+}
+
+static inline void amdvi_handle_pprbase_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_BASE);
+    s->ppr_log = val & AMDVI_MMIO_PPRLOG_BASE_MASK;
+    s->pprlog_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_PPRLOG_SIZE_BYTE)
+                    & AMDVI_MMIO_PPRLOG_SIZE_MASK);
+}
+
+static inline void amdvi_handle_pprhead_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_HEAD);
+    s->pprlog_head = val & AMDVI_MMIO_PPRLOG_HEAD_MASK;
+}
+
+static inline void amdvi_handle_pprtail_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_TAIL);
+    s->pprlog_tail = val & AMDVI_MMIO_PPRLOG_TAIL_MASK;
+}
+
+/* FIXME: something might go wrong if System Software writes in chunks
+ * of one byte but linux writes in chunks of 4 bytes so currently it
+ * works correctly with linux but will definitely be busted if software
+ * reads/writes 8 bytes
+ */
+static void amdvi_mmio_reg_write(AMDVIState *s, unsigned size, uint64_t val,
+                                 hwaddr addr)
+{
+    if (size == 2) {
+        amdvi_writew(s, addr, val);
+    } else if (size == 4) {
+        amdvi_writel(s, addr, val);
+    } else if (size == 8) {
+        amdvi_writeq(s, addr, val);
+    }
+}
+
+static void amdvi_mmio_write(void *opaque, hwaddr addr, uint64_t val,
+                             unsigned size)
+{
+    AMDVIState *s = opaque;
+    unsigned long offset = addr & 0x07;
+
+    if (addr + size > AMDVI_MMIO_SIZE) {
+        trace_amdvi_mmio_write("error: addr outside region: max ",
+                (uint64_t)AMDVI_MMIO_SIZE, size, val, offset);
+        return;
+    }
+
+    amdvi_mmio_trace(addr, size);
+    switch (addr & ~0x07) {
+    case AMDVI_MMIO_CONTROL:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_control_write(s);
+        break;
+    case AMDVI_MMIO_DEVICE_TABLE:
+        amdvi_mmio_reg_write(s, size, val, addr);
+       /*  set device table address
+        *   This also suffers from inability to tell whether software
+        *   is done writing
+        */
+
+        if (offset || (size == 8)) {
+            amdvi_handle_devtab_write(s);
+        }
+        break;
+    case AMDVI_MMIO_COMMAND_HEAD:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_cmdhead_write(s);
+        break;
+    case AMDVI_MMIO_COMMAND_BASE:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        /* FIXME - make sure System Software has finished writing incase
+         * it writes in chucks less than 8 bytes in a robust way.As for
+         * now, this hacks works for the linux driver
+         */
+        if (offset || (size == 8)) {
+            amdvi_handle_cmdbase_write(s);
+        }
+        break;
+    case AMDVI_MMIO_COMMAND_TAIL:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_cmdtail_write(s);
+        break;
+    case AMDVI_MMIO_EVENT_BASE:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_evtbase_write(s);
+        break;
+    case AMDVI_MMIO_EVENT_HEAD:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_evthead_write(s);
+        break;
+    case AMDVI_MMIO_EVENT_TAIL:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_evttail_write(s);
+        break;
+    case AMDVI_MMIO_EXCL_LIMIT:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_excllim_write(s);
+        break;
+        /* PPR log base - unused for now */
+    case AMDVI_MMIO_PPR_BASE:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_pprbase_write(s);
+        break;
+        /* PPR log head - also unused for now */
+    case AMDVI_MMIO_PPR_HEAD:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_pprhead_write(s);
+        break;
+        /* PPR log tail - unused for now */
+    case AMDVI_MMIO_PPR_TAIL:
+        amdvi_mmio_reg_write(s, size, val, addr);
+        amdvi_handle_pprtail_write(s);
+        break;
+    }
+}
+
+static inline uint64_t amdvi_get_perms(uint64_t entry)
+{
+    return (entry & (AMDVI_DEV_PERM_READ | AMDVI_DEV_PERM_WRITE)) >>
+           AMDVI_DEV_PERM_SHIFT;
+}
+
+/* a valid entry should have V = 1 and reserved bits honoured */
+static bool amdvi_validate_dte(AMDVIState *s, uint16_t devid,
+                               uint64_t *dte)
+{
+    if ((dte[0] & AMDVI_DTE_LOWER_QUAD_RESERVED)
+        || (dte[1] & AMDVI_DTE_MIDDLE_QUAD_RESERVED)
+        || (dte[2] & AMDVI_DTE_UPPER_QUAD_RESERVED) || dte[3]) {
+        amdvi_log_illegaldevtab_error(s, devid,
+                                s->devtab + devid * AMDVI_DEVTAB_ENTRY_SIZE, 0);
+        return false;
+    }
+
+    return dte[0] & AMDVI_DEV_VALID;
+}
+
+/* get a device table entry given the devid */
+static bool amdvi_get_dte(AMDVIState *s, int devid, uint64_t *entry)
+{
+    uint32_t offset = devid * AMDVI_DEVTAB_ENTRY_SIZE;
+
+    if (dma_memory_read(&address_space_memory, s->devtab + offset, entry,
+                        AMDVI_DEVTAB_ENTRY_SIZE)) {
+        trace_amdvi_dte_get_fail(s->devtab, offset);
+        /* log error accessing dte */
+        amdvi_log_devtab_error(s, devid, s->devtab + offset, 0);
+        return false;
+    }
+
+    *entry = le64_to_cpu(*entry);
+    if (!amdvi_validate_dte(s, devid, entry)) {
+        trace_amdvi_invalid_dte(entry[0]);
+        return false;
+    }
+
+    return true;
+}
+
+/* get pte translation mode */
+static inline uint8_t get_pte_translation_mode(uint64_t pte)
+{
+    return (pte >> AMDVI_DEV_MODE_RSHIFT) & AMDVI_DEV_MODE_MASK;
+}
+
+static inline uint64_t pte_override_page_mask(uint64_t pte)
+{
+    uint8_t page_mask = 12;
+    uint64_t addr = (pte & AMDVI_DEV_PT_ROOT_MASK) ^ AMDVI_DEV_PT_ROOT_MASK;
+    /* find the first zero bit */
+    while (addr & 1) {
+        page_mask++;
+        addr = addr >> 1;
+    }
+
+    return ~((1ULL << page_mask) - 1);
+}
+
+static inline uint64_t pte_get_page_mask(uint64_t oldlevel)
+{
+    return ~((1UL << ((oldlevel * 9) + 3)) - 1);
+}
+
+static inline uint64_t amdvi_get_pte_entry(AMDVIState *s, uint64_t pte_addr,
+                                          uint16_t devid)
+{
+    uint64_t pte;
+
+    if (dma_memory_read(&address_space_memory, pte_addr, &pte, sizeof(pte))) {
+        trace_amdvi_get_pte_hwerror(pte_addr);
+        amdvi_log_pagetab_error(s, devid, pte_addr, 0);
+        pte = 0;
+        return pte;
+    }
+
+    pte = cpu_to_le64(pte);
+    return pte;
+}
+
+static void amdvi_page_walk(AMDVIAddressSpace *as, uint64_t *dte,
+                            IOMMUTLBEntry *ret, unsigned perms,
+                            hwaddr addr)
+{
+    unsigned level, present, pte_perms, oldlevel;
+    uint64_t pte = dte[0], pte_addr, page_mask;
+
+    /* make sure the DTE has TV = 1 */
+    if (pte & AMDVI_DEV_TRANSLATION_VALID) {
+        level = get_pte_translation_mode(pte);
+        if (level >= 7) {
+            trace_amdvi_mode_invalid(level, addr);
+            return;
+        }
+        if (level == 0) {
+            goto no_remap;
+        }
+
+        /* we are at the leaf page table or page table encodes a huge page */
+        while (level > 0) {
+            pte_perms = amdvi_get_perms(pte);
+            present = pte & 1;
+            if (!present || perms != (perms & pte_perms)) {
+                amdvi_page_fault(as->iommu_state, as->devfn, addr, perms);
+                trace_amdvi_page_fault(addr);
+                return;
+            }
+
+            /* go to the next lower level */
+            pte_addr = pte & AMDVI_DEV_PT_ROOT_MASK;
+            /* add offset and load pte */
+            pte_addr += ((addr >> (3 + 9 * level)) & 0x1FF) << 3;
+            pte = amdvi_get_pte_entry(as->iommu_state, pte_addr, as->devfn);
+            if (!pte) {
+                return;
+            }
+            oldlevel = level;
+            level = get_pte_translation_mode(pte);
+            if (level == 0x7) {
+                break;
+            }
+        }
+
+        if (level == 0x7) {
+            page_mask = pte_override_page_mask(pte);
+        } else {
+            page_mask = pte_get_page_mask(oldlevel);
+        }
+
+        /* get access permissions from pte */
+        ret->iova = addr & page_mask;
+        ret->translated_addr = (pte & AMDVI_DEV_PT_ROOT_MASK) & page_mask;
+        ret->addr_mask = ~page_mask;
+        ret->perm = amdvi_get_perms(pte);
+        return;
+    }
+no_remap:
+    ret->iova = addr & AMDVI_PAGE_MASK_4K;
+    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
+    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
+    ret->perm = amdvi_get_perms(pte);
+}
+
+static void amdvi_do_translate(AMDVIAddressSpace *as, hwaddr addr,
+                               bool is_write, IOMMUTLBEntry *ret)
+{
+    AMDVIState *s = as->iommu_state;
+    uint16_t devid = PCI_BDF(as->bus_num, as->devfn);
+    AMDVIIOTLBEntry *iotlb_entry = amdvi_iotlb_lookup(s, addr, as->devfn);
+    uint64_t entry[4];
+
+    if (iotlb_entry) {
+        trace_amdvi_iotlb_hit(PCI_BUS_NUM(devid), PCI_SLOT(devid),
+                PCI_FUNC(devid), addr, iotlb_entry->translated_addr);
+        ret->iova = addr & ~iotlb_entry->page_mask;
+        ret->translated_addr = iotlb_entry->translated_addr;
+        ret->addr_mask = iotlb_entry->page_mask;
+        ret->perm = iotlb_entry->perms;
+        return;
+    }
+
+    /* devices with V = 0 are not translated */
+    if (!amdvi_get_dte(s, devid, entry)) {
+        goto out;
+    }
+
+    amdvi_page_walk(as, entry, ret,
+                    is_write ? AMDVI_PERM_WRITE : AMDVI_PERM_READ, addr);
+
+    amdvi_update_iotlb(s, as->devfn, addr, *ret,
+                       entry[1] & AMDVI_DEV_DOMID_ID_MASK);
+    return;
+
+out:
+    ret->iova = addr & AMDVI_PAGE_MASK_4K;
+    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
+    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
+    ret->perm = IOMMU_RW;
+}
+
+static inline bool amdvi_is_interrupt_addr(hwaddr addr)
+{
+    return addr >= AMDVI_INT_ADDR_FIRST && addr <= AMDVI_INT_ADDR_LAST;
+}
+
+static IOMMUTLBEntry amdvi_translate(MemoryRegion *iommu, hwaddr addr,
+                                     bool is_write)
+{
+    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
+    AMDVIState *s = as->iommu_state;
+    IOMMUTLBEntry ret = {
+        .target_as = &address_space_memory,
+        .iova = addr,
+        .translated_addr = 0,
+        .addr_mask = ~(hwaddr)0,
+        .perm = IOMMU_NONE
+    };
+
+    if (!s->enabled) {
+        /* AMDVI disabled - corresponds to iommu=off not
+         * failure to provide any parameter
+         */
+        ret.iova = addr & AMDVI_PAGE_MASK_4K;
+        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
+        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
+        ret.perm = IOMMU_RW;
+        return ret;
+    } else if (amdvi_is_interrupt_addr(addr)) {
+        ret.iova = addr & AMDVI_PAGE_MASK_4K;
+        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
+        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
+        ret.perm = IOMMU_WO;
+        return ret;
+    }
+
+    amdvi_do_translate(as, addr, is_write, &ret);
+    trace_amdvi_translation_result(as->bus_num, PCI_SLOT(as->devfn),
+            PCI_FUNC(as->devfn), addr, ret.translated_addr);
+    return ret;
+}
+
+static AddressSpace *amdvi_host_dma_iommu(PCIBus *bus, void *opaque, int devfn)
+{
+    AMDVIState *s = opaque;
+    AMDVIAddressSpace **iommu_as;
+    int bus_num = pci_bus_num(bus);
+
+    iommu_as = s->address_spaces[bus_num];
+
+    /* allocate memory during the first run */
+    if (!iommu_as) {
+        iommu_as = g_malloc0(sizeof(AMDVIAddressSpace *) * PCI_DEVFN_MAX);
+        s->address_spaces[bus_num] = iommu_as;
+    }
+
+    /* set up AMDVI region */
+    if (!iommu_as[devfn]) {
+        iommu_as[devfn] = g_malloc0(sizeof(AMDVIAddressSpace));
+        iommu_as[devfn]->bus_num = (uint8_t)bus_num;
+        iommu_as[devfn]->devfn = (uint8_t)devfn;
+        iommu_as[devfn]->iommu_state = s;
+
+        memory_region_init_iommu(&iommu_as[devfn]->iommu, OBJECT(s),
+                                 &s->iommu_ops, "amd-iommu", UINT64_MAX);
+        address_space_init(&iommu_as[devfn]->as, &iommu_as[devfn]->iommu,
+                           "amd-iommu");
+    }
+    return &iommu_as[devfn]->as;
+}
+
+static const MemoryRegionOps mmio_mem_ops = {
+    .read = amdvi_mmio_read,
+    .write = amdvi_mmio_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    }
+};
+
+static void amdvi_iommu_notify_started(MemoryRegion *iommu)
+{
+    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
+
+    hw_error("device %02x.%02x.%x requires iommu notifier which is not "
+             "currently supported", as->bus_num, PCI_SLOT(as->devfn),
+             PCI_FUNC(as->devfn));
+}
+
+static void amdvi_init(AMDVIState *s)
+{
+    amdvi_iotlb_reset(s);
+
+    s->iommu_ops.translate = amdvi_translate;
+    s->iommu_ops.notify_started = amdvi_iommu_notify_started;
+    s->devtab_len = 0;
+    s->cmdbuf_len = 0;
+    s->cmdbuf_head = 0;
+    s->cmdbuf_tail = 0;
+    s->evtlog_head = 0;
+    s->evtlog_tail = 0;
+    s->excl_enabled = false;
+    s->excl_allow = false;
+    s->mmio_enabled = false;
+    s->enabled = false;
+    s->ats_enabled = false;
+    s->cmdbuf_enabled = false;
+
+    /* reset MMIO */
+    memset(s->mmior, 0, AMDVI_MMIO_SIZE);
+    amdvi_set_quad(s, AMDVI_MMIO_EXT_FEATURES, AMDVI_EXT_FEATURES,
+            0xffffffffffffffef, 0);
+    amdvi_set_quad(s, AMDVI_MMIO_STATUS, 0, 0x98, 0x67);
+
+    /* reset device ident */
+    pci_config_set_vendor_id(s->pci.dev.config, PCI_VENDOR_ID_AMD);
+    pci_config_set_prog_interface(s->pci.dev.config, 00);
+    pci_config_set_device_id(s->pci.dev.config, s->devid);
+    pci_config_set_class(s->pci.dev.config, 0x0806);
+
+    /* reset AMDVI specific capabilities, all r/o */
+    pci_set_long(s->pci.dev.config + s->capab_offset, AMDVI_CAPAB_FEATURES);
+    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_BAR_LOW,
+                 s->mmio.addr & ~(0xffff0000));
+    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_BAR_HIGH,
+                (s->mmio.addr & ~(0xffff)) >> 16);
+    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_RANGE,
+                 0xff000000);
+    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_MISC, 0);
+    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_MISC,
+            AMDVI_MAX_PH_ADDR | AMDVI_MAX_GVA_ADDR | AMDVI_MAX_VA_ADDR);
+}
+
+static void amdvi_reset(DeviceState *dev)
+{
+    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
+
+    msi_reset(&s->pci.dev);
+    amdvi_init(s);
+}
+
+static void amdvi_realize(DeviceState *dev, Error **err)
+{
+    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
+    PCIBus *bus = PC_MACHINE(qdev_get_machine())->bus;
+    s->iotlb = g_hash_table_new_full(amdvi_uint64_hash,
+                                     amdvi_uint64_equal, g_free, g_free);
+
+    /* This device should take care of IOMMU PCI properties */
+    qdev_set_parent_bus(DEVICE(&s->pci), &bus->qbus);
+    object_property_set_bool(OBJECT(&s->pci), true, "realized", err);
+    s->capab_offset = pci_add_capability(&s->pci.dev, AMDVI_CAPAB_ID_SEC, 0,
+                                         AMDVI_CAPAB_SIZE);
+    pci_add_capability(&s->pci.dev, PCI_CAP_ID_MSI, 0, AMDVI_CAPAB_REG_SIZE);
+    pci_add_capability(&s->pci.dev, PCI_CAP_ID_HT, 0, AMDVI_CAPAB_REG_SIZE);
+
+    /* set up MMIO */
+    memory_region_init_io(&s->mmio, OBJECT(s), &mmio_mem_ops, s, "amdvi-mmio",
+                          AMDVI_MMIO_SIZE);
+
+    sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->mmio);
+    sysbus_mmio_map(SYS_BUS_DEVICE(s), 0, AMDVI_BASE_ADDR);
+    pci_setup_iommu(bus, amdvi_host_dma_iommu, s);
+    s->devid = object_property_get_int(OBJECT(&s->pci), "addr", err);
+    msi_init(&s->pci.dev, 0, 1, true, false, err);
+    amdvi_init(s);
+}
+
+static const VMStateDescription vmstate_amdvi = {
+    .name = "amd-iommu",
+    .unmigratable = 1
+};
+
+static void amdvi_instance_init(Object *klass)
+{
+    AMDVIState *s = AMD_IOMMU_DEVICE(klass);
+
+    object_initialize(&s->pci, sizeof(s->pci), TYPE_AMD_IOMMU_PCI);
+}
+
+static void amdvi_class_init(ObjectClass *klass, void* data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    X86IOMMUClass *dc_class = X86_IOMMU_CLASS(klass);
+
+    dc->reset = amdvi_reset;
+    dc->vmsd = &vmstate_amdvi;
+    dc_class->realize = amdvi_realize;
+}
+
+static const TypeInfo amdvi = {
+    .name = TYPE_AMD_IOMMU_DEVICE,
+    .parent = TYPE_X86_IOMMU_DEVICE,
+    .instance_size = sizeof(AMDVIState),
+    .instance_init = amdvi_instance_init,
+    .class_init = amdvi_class_init
+};
+
+static const TypeInfo amdviPCI = {
+    .name = "AMDVI-PCI",
+    .parent = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(AMDVIPCIState),
+};
+
+static void amdviPCI_register_types(void)
+{
+    type_register_static(&amdviPCI);
+    type_register_static(&amdvi);
+}
+
+type_init(amdviPCI_register_types);
diff --git a/hw/i386/amd_iommu.h b/hw/i386/amd_iommu.h
new file mode 100644
index 0000000..2f4ac55
--- /dev/null
+++ b/hw/i386/amd_iommu.h
@@ -0,0 +1,390 @@
+/*
+ * QEMU emulation of an AMD IOMMU (AMD-Vi)
+ *
+ * Copyright (C) 2011 Eduard - Gabriel Munteanu
+ * Copyright (C) 2015 David Kiarie, <davidkiarie4@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef AMD_IOMMU_H_
+#define AMD_IOMMU_H_
+
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/msi.h"
+#include "hw/sysbus.h"
+#include "sysemu/dma.h"
+#include "hw/i386/pc.h"
+#include "sysemu/dma.h"
+#include "hw/i386/x86-iommu.h"
+
+/* Capability registers */
+#define AMDVI_CAPAB_BAR_LOW           0x04
+#define AMDVI_CAPAB_BAR_HIGH          0x08
+#define AMDVI_CAPAB_RANGE             0x0C
+#define AMDVI_CAPAB_MISC              0x10
+
+#define AMDVI_CAPAB_SIZE              0x18
+#define AMDVI_CAPAB_REG_SIZE          0x04
+
+/* Capability header data */
+#define AMDVI_CAPAB_ID_SEC            0xf
+#define AMDVI_CAPAB_FLAT_EXT          (1 << 28)
+#define AMDVI_CAPAB_EFR_SUP           (1 << 27)
+#define AMDVI_CAPAB_FLAG_NPCACHE      (1 << 26)
+#define AMDVI_CAPAB_FLAG_HTTUNNEL     (1 << 25)
+#define AMDVI_CAPAB_FLAG_IOTLBSUP     (1 << 24)
+#define AMDVI_CAPAB_INIT_TYPE         (3 << 16)
+
+/* No. of used MMIO registers */
+#define AMDVI_MMIO_REGS_HIGH  8
+#define AMDVI_MMIO_REGS_LOW   7
+
+/* MMIO registers */
+#define AMDVI_MMIO_DEVICE_TABLE       0x0000
+#define AMDVI_MMIO_COMMAND_BASE       0x0008
+#define AMDVI_MMIO_EVENT_BASE         0x0010
+#define AMDVI_MMIO_CONTROL            0x0018
+#define AMDVI_MMIO_EXCL_BASE          0x0020
+#define AMDVI_MMIO_EXCL_LIMIT         0x0028
+#define AMDVI_MMIO_EXT_FEATURES       0x0030
+#define AMDVI_MMIO_COMMAND_HEAD       0x2000
+#define AMDVI_MMIO_COMMAND_TAIL       0x2008
+#define AMDVI_MMIO_EVENT_HEAD         0x2010
+#define AMDVI_MMIO_EVENT_TAIL         0x2018
+#define AMDVI_MMIO_STATUS             0x2020
+#define AMDVI_MMIO_PPR_BASE           0x0038
+#define AMDVI_MMIO_PPR_HEAD           0x2030
+#define AMDVI_MMIO_PPR_TAIL           0x2038
+
+#define AMDVI_MMIO_SIZE               0x4000
+
+#define AMDVI_MMIO_DEVTAB_SIZE_MASK   ((1ULL << 12) - 1)
+#define AMDVI_MMIO_DEVTAB_BASE_MASK   (((1ULL << 52) - 1) & ~ \
+                                       AMDVI_MMIO_DEVTAB_SIZE_MASK)
+#define AMDVI_MMIO_DEVTAB_ENTRY_SIZE  32
+#define AMDVI_MMIO_DEVTAB_SIZE_UNIT   4096
+
+/* some of this are similar but just for readability */
+#define AMDVI_MMIO_CMDBUF_SIZE_BYTE       (AMDVI_MMIO_COMMAND_BASE + 7)
+#define AMDVI_MMIO_CMDBUF_SIZE_MASK       0x0F
+#define AMDVI_MMIO_CMDBUF_BASE_MASK       AMDVI_MMIO_DEVTAB_BASE_MASK
+#define AMDVI_MMIO_CMDBUF_HEAD_MASK       (((1ULL << 19) - 1) & ~0x0F)
+#define AMDVI_MMIO_CMDBUF_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
+
+#define AMDVI_MMIO_EVTLOG_SIZE_BYTE       (AMDVI_MMIO_EVENT_BASE + 7)
+#define AMDVI_MMIO_EVTLOG_SIZE_MASK       AMDVI_MMIO_CMDBUF_SIZE_MASK
+#define AMDVI_MMIO_EVTLOG_BASE_MASK       AMDVI_MMIO_CMDBUF_BASE_MASK
+#define AMDVI_MMIO_EVTLOG_HEAD_MASK       (((1ULL << 19) - 1) & ~0x0F)
+#define AMDVI_MMIO_EVTLOG_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
+
+#define AMDVI_MMIO_PPRLOG_SIZE_BYTE       (AMDVI_MMIO_EVENT_BASE + 7)
+#define AMDVI_MMIO_PPRLOG_HEAD_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
+#define AMDVI_MMIO_PPRLOG_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
+#define AMDVI_MMIO_PPRLOG_BASE_MASK       AMDVI_MMIO_EVTLOG_BASE_MASK
+#define AMDVI_MMIO_PPRLOG_SIZE_MASK       AMDVI_MMIO_EVTLOG_SIZE_MASK
+
+#define AMDVI_MMIO_EXCL_ENABLED_MASK      (1ULL << 0)
+#define AMDVI_MMIO_EXCL_ALLOW_MASK        (1ULL << 1)
+#define AMDVI_MMIO_EXCL_LIMIT_MASK        AMDVI_MMIO_DEVTAB_BASE_MASK
+#define AMDVI_MMIO_EXCL_LIMIT_LOW         0xFFF
+
+/* mmio control register flags */
+#define AMDVI_MMIO_CONTROL_AMDVIEN        (1ULL << 0)
+#define AMDVI_MMIO_CONTROL_HTTUNEN        (1ULL << 1)
+#define AMDVI_MMIO_CONTROL_EVENTLOGEN     (1ULL << 2)
+#define AMDVI_MMIO_CONTROL_EVENTINTEN     (1ULL << 3)
+#define AMDVI_MMIO_CONTROL_COMWAITINTEN   (1ULL << 4)
+#define AMDVI_MMIO_CONTROL_CMDBUFLEN      (1ULL << 12)
+
+/* MMIO status register bits */
+#define AMDVI_MMIO_STATUS_CMDBUF_RUN  (1 << 4)
+#define AMDVI_MMIO_STATUS_EVT_RUN     (1 << 3)
+#define AMDVI_MMIO_STATUS_COMP_INT    (1 << 2)
+#define AMDVI_MMIO_STATUS_EVT_OVF     (1 << 0)
+
+#define AMDVI_CMDBUF_ID_BYTE              0x07
+#define AMDVI_CMDBUF_ID_RSHIFT            4
+
+#define AMDVI_CMD_COMPLETION_WAIT         0x01
+#define AMDVI_CMD_INVAL_DEVTAB_ENTRY      0x02
+#define AMDVI_CMD_INVAL_AMDVI_PAGES       0x03
+#define AMDVI_CMD_INVAL_IOTLB_PAGES       0x04
+#define AMDVI_CMD_INVAL_INTR_TABLE        0x05
+#define AMDVI_CMD_PREFETCH_AMDVI_PAGES    0x06
+#define AMDVI_CMD_COMPLETE_PPR_REQUEST    0x07
+#define AMDVI_CMD_INVAL_AMDVI_ALL         0x08
+
+#define AMDVI_DEVTAB_ENTRY_SIZE           32
+
+/* Device table entry bits 0:63 */
+#define AMDVI_DEV_VALID                   (1ULL << 0)
+#define AMDVI_DEV_TRANSLATION_VALID       (1ULL << 1)
+#define AMDVI_DEV_MODE_MASK               0x7
+#define AMDVI_DEV_MODE_RSHIFT             9
+#define AMDVI_DEV_PT_ROOT_MASK            0xFFFFFFFFFF000
+#define AMDVI_DEV_PT_ROOT_RSHIFT          12
+#define AMDVI_DEV_PERM_SHIFT              61
+#define AMDVI_DEV_PERM_READ               (1ULL << 61)
+#define AMDVI_DEV_PERM_WRITE              (1ULL << 62)
+
+/* Device table entry bits 64:127 */
+#define AMDVI_DEV_DOMID_ID_MASK          ((1ULL << 16) - 1)
+
+/* Event codes and flags, as stored in the info field */
+#define AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY  (0x1U << 12)
+#define AMDVI_EVENT_IOPF                  (0x2U << 12)
+#define   AMDVI_EVENT_IOPF_I              (1U << 3)
+#define AMDVI_EVENT_DEV_TAB_HW_ERROR      (0x3U << 12)
+#define AMDVI_EVENT_PAGE_TAB_HW_ERROR     (0x4U << 12)
+#define AMDVI_EVENT_ILLEGAL_COMMAND_ERROR (0x5U << 12)
+#define AMDVI_EVENT_COMMAND_HW_ERROR      (0x6U << 12)
+
+#define AMDVI_EVENT_LEN                  16
+#define AMDVI_PERM_READ             (1 << 0)
+#define AMDVI_PERM_WRITE            (1 << 1)
+
+#define AMDVI_FEATURE_PREFETCH            (1ULL << 0) /* page prefetch       */
+#define AMDVI_FEATURE_PPR                 (1ULL << 1) /* PPR Support         */
+#define AMDVI_FEATURE_GT                  (1ULL << 4) /* Guest Translation   */
+#define AMDVI_FEATURE_IA                  (1ULL << 6) /* inval all support   */
+#define AMDVI_FEATURE_GA                  (1ULL << 7) /* guest VAPIC support */
+#define AMDVI_FEATURE_HE                  (1ULL << 8) /* hardware error regs */
+#define AMDVI_FEATURE_PC                  (1ULL << 9) /* Perf counters       */
+
+/* reserved DTE bits */
+#define AMDVI_DTE_LOWER_QUAD_RESERVED  0x80300000000000fc
+#define AMDVI_DTE_MIDDLE_QUAD_RESERVED 0x0000000000000100
+#define AMDVI_DTE_UPPER_QUAD_RESERVED  0x08f0000000000000
+
+/* AMDVI paging mode */
+#define AMDVI_GATS_MODE                 (6ULL <<  12)
+#define AMDVI_HATS_MODE                 (6ULL <<  10)
+
+/* IOTLB */
+#define AMDVI_IOTLB_MAX_SIZE 1024
+#define AMDVI_DEVID_SHIFT    36
+
+/* interrupt types */
+#define AMDVI_MT_FIXED  0x0
+#define AMDVI_MT_ARBIT  0x1
+#define AMDVI_MT_SMI    0x2
+#define AMDVI_MT_NMI    0x3
+#define AMDVI_MT_INIT   0x4
+#define AMDVI_MT_EXTINT 0x6
+#define AMDVI_MT_LINT1  0xb
+#define AMDVI_MT_LINT0  0xe
+
+/* Ext reg, GA support */
+#define AMDVI_GASUP    (1UL << 7)
+/* MMIO control GA enable bits */
+#define AMDVI_GAEN     (1UL << 17)
+
+/* MSI interrupt type mask */
+#define AMDVI_IR_TYPE_MASK 0x300
+
+/* interrupt destination mode */
+#define AMDVI_IRDEST_MODE_MASK 0x2
+
+/* select MSI data 10:0 bits */
+#define AMDVI_IRTE_INDEX_MASK 0x7ff
+
+/* bits determining whether specific interrupts should be passed
+ * split DTE into 64-bit chunks
+ */
+#define AMDVI_DTE_INTPASS       56
+#define AMDVI_DTE_EINTPASS      57
+#define AMDVI_DTE_NMIPASS       58
+#define AMDVI_DTE_INTCTL        60
+#define AMDVI_DTE_LINT0PASS     62
+#define AMDVI_DTE_LINT1PASS     63
+
+/* interrupt data valid */
+#define AMDVI_IR_VALID          (1UL << 0)
+
+/* interrupt root table mask */
+#define AMDVI_IRTEROOT_MASK     0xffffffffffffc0
+
+/* default IRTE size */
+#define AMDVI_DEFAULT_IRTE_SIZE 0x4
+
+/* IRTE size with GASup enabled */
+#define AMDVI_IRTE_SIZE_GASUP   0x10
+
+#define AMDVI_IRTE_VECTOR_MASK    (0xffU << 16)
+#define AMDVI_IRTE_DEST_MASK      (0xffU << 8)
+#define AMDVI_IRTE_DM_MASK        (0x1U << 6)
+#define AMDVI_IRTE_RQEOI_MASK     (0x1U << 5)
+#define AMDVI_IRTE_INTTYPE_MASK   (0x7U << 2)
+#define AMDVI_IRTE_SUPIOPF_MASK   (0x1U << 1)
+#define AMDVI_IRTE_REMAP_MASK     (0x1U << 0)
+
+#define AMDVI_IR_TABLE_SIZE_MASK 0xfe
+
+/* offsets into MSI data */
+#define AMDVI_MSI_DATA_DM_RSHIFT       0x8
+#define AMDVI_MSI_DATA_LEVEL_RSHIFT    0xe
+#define AMDVI_MSI_DATA_TRM_RSHIFT      0xf
+
+/* offsets into MSI address */
+#define AMDVI_MSI_ADDR_DM_RSHIFT       0x2
+#define AMDVI_MSI_ADDR_RH_RSHIFT       0x3
+#define AMDVI_MSI_ADDR_DEST_RSHIFT     0xc
+
+#define AMDVI_LOCAL_APIC_ADDR     0xfee00000
+
+/* extended feature support */
+#define AMDVI_EXT_FEATURES (AMDVI_FEATURE_PREFETCH | AMDVI_FEATURE_PPR | \
+        AMDVI_FEATURE_IA | AMDVI_FEATURE_GT | AMDVI_FEATURE_GA | \
+        AMDVI_FEATURE_HE | AMDVI_GATS_MODE | AMDVI_HATS_MODE)
+
+/* capabilities header */
+#define AMDVI_CAPAB_FEATURES (AMDVI_CAPAB_FLAT_EXT | \
+        AMDVI_CAPAB_FLAG_NPCACHE | AMDVI_CAPAB_FLAG_IOTLBSUP \
+        | AMDVI_CAPAB_ID_SEC | AMDVI_CAPAB_INIT_TYPE | \
+        AMDVI_CAPAB_FLAG_HTTUNNEL |  AMDVI_CAPAB_EFR_SUP)
+
+/* AMDVI default address */
+#define AMDVI_BASE_ADDR 0xfed80000
+
+/* page management constants */
+#define AMDVI_PAGE_SHIFT 12
+#define AMDVI_PAGE_SIZE  (1ULL << AMDVI_PAGE_SHIFT)
+
+#define AMDVI_PAGE_SHIFT_4K 12
+#define AMDVI_PAGE_MASK_4K  (~((1ULL << AMDVI_PAGE_SHIFT_4K) - 1))
+
+#define AMDVI_MAX_VA_ADDR          (48UL << 5)
+#define AMDVI_MAX_PH_ADDR          (40UL << 8)
+#define AMDVI_MAX_GVA_ADDR         (48UL << 15)
+
+/* invalidation command device id */
+#define AMDVI_INVAL_DEV_ID_SHIFT  32
+#define AMDVI_INVAL_DEV_ID_MASK   (~((1UL << AMDVI_INVAL_DEV_ID_SHIFT) - 1))
+
+/* invalidation address */
+#define AMDVI_INVAL_ADDR_MASK_SHIFT 12
+#define AMDVI_INVAL_ADDR_MASK     (~((1UL << AMDVI_INVAL_ADDR_MASK_SHIFT) - 1))
+
+/* invalidation S bit mask */
+#define AMDVI_INVAL_ALL(val) ((val) & (0x1))
+
+/* Completion Wait data size */
+#define AMDVI_COMPLETION_DATA_SIZE    8
+
+#define AMDVI_COMMAND_SIZE   16
+
+#define AMDVI_INT_ADDR_FIRST 0xfee00000ULL
+#define AMDVI_INT_ADDR_LAST  0xfeefffffULL
+
+#define AMDVI_INT_ADDR_SIZE ((AMDVI_INT_ADDR_LAST - \
+        AMDVI_INT_ADDR_FIRST) + 1)
+
+/* Completion Wait data size */
+#define AMDVI_COMPLETION_DATA_SIZE    8
+
+#define AMDVI_COMMAND_SIZE   16
+
+/* AMD IOMMU errors */
+#define AMDVI_ILLEG_DEV_TAB  0x1
+#define AMDVI_IOPF_          0x2
+#define AMDVI_DEV_TAB_HW     0x3
+#define AMDVI_PAGE_TAB_HW    0x4
+#define AMDVI_ILLEG_COM      0x5
+#define AMDVI_COM_HW         0x6
+#define AMDVI_IOTLB_TIMEOUT  0x7
+#define AMDVI_INVAL_DEV_REQ  0x8
+#define AMDVI_INVAL_PPR_REQ  0x9
+#define AMDVI_EVT_COUNT_ZERO 0xa
+
+/* represent target and master aborts error state */
+#define AMDVI_TARGET_ABORT     0xb
+#define AMDVI_MASTER_ABORT     0xc
+
+#define TYPE_AMD_IOMMU_DEVICE "amd-iommu"
+#define AMD_IOMMU_DEVICE(obj)\
+    OBJECT_CHECK(AMDVIState, (obj), TYPE_AMD_IOMMU_DEVICE)
+
+#define TYPE_AMD_IOMMU_PCI "AMDVI-PCI"
+#define AMD_IOMMU_PCI(obj)\
+    OBJECT_CHECK(AMDVIPCIState, (obj), TYPE_AMD_IOMMU_PCI)
+
+typedef struct AMDVIAddressSpace AMDVIAddressSpace;
+
+/* functions to steal PCI config space */
+typedef struct AMDVIPCIState {
+    PCIDevice dev;               /* The PCI device itself        */
+} AMDVIPCIState;
+
+typedef struct AMDVIState {
+    X86IOMMUState iommu;        /* IOMMU bus device             */
+    AMDVIPCIState pci;          /* IOMMU PCI device             */
+
+    uint32_t version;
+    uint32_t capab_offset;       /* capability offset pointer    */
+
+    uint64_t mmio_addr;
+
+    uint32_t devid;              /* auto-assigned devid          */
+
+    bool enabled;                /* IOMMU enabled                */
+    bool ats_enabled;            /* address translation enabled  */
+    bool cmdbuf_enabled;         /* command buffer enabled       */
+    bool evtlog_enabled;         /* event log enabled            */
+    bool excl_enabled;
+
+    hwaddr devtab;               /* base address device table    */
+    size_t devtab_len;           /* device table length          */
+
+    hwaddr cmdbuf;               /* command buffer base address  */
+    uint64_t cmdbuf_len;         /* command buffer length        */
+    uint32_t cmdbuf_head;        /* current IOMMU read position  */
+    uint32_t cmdbuf_tail;        /* next Software write position */
+    bool completion_wait_intr;
+
+    hwaddr evtlog;               /* base address event log       */
+    bool evtlog_intr;
+    uint32_t evtlog_len;         /* event log length             */
+    uint32_t evtlog_head;        /* current IOMMU write position */
+    uint32_t evtlog_tail;        /* current Software read position */
+
+    /* unused for now */
+    hwaddr excl_base;            /* base DVA - IOMMU exclusion range */
+    hwaddr excl_limit;           /* limit of IOMMU exclusion range   */
+    bool excl_allow;             /* translate accesses to the exclusion range */
+    bool excl_enable;            /* exclusion range enabled          */
+
+    hwaddr ppr_log;              /* base address ppr log */
+    uint32_t pprlog_len;         /* ppr log len  */
+    uint32_t pprlog_head;        /* ppr log head */
+    uint32_t pprlog_tail;        /* ppr log tail */
+
+    MemoryRegion mmio;                 /* MMIO region                  */
+    uint8_t mmior[AMDVI_MMIO_SIZE];    /* read/write MMIO              */
+    uint8_t w1cmask[AMDVI_MMIO_SIZE];  /* read/write 1 clear mask      */
+    uint8_t romask[AMDVI_MMIO_SIZE];   /* MMIO read/only mask          */
+    bool mmio_enabled;
+
+    /* IOMMU function */
+    MemoryRegionIOMMUOps iommu_ops;
+
+    /* for each served device */
+    AMDVIAddressSpace **address_spaces[PCI_BUS_MAX];
+
+    /* IOTLB */
+    GHashTable *iotlb;
+} AMDVIState;
+
+#endif
diff --git a/hw/i386/trace-events b/hw/i386/trace-events
index 592de3a..5c12c10 100644
--- a/hw/i386/trace-events
+++ b/hw/i386/trace-events
@@ -42,3 +42,10 @@ amdvi_mode_invalid(unsigned level, uint64_t addr)"error: translation level 0x%"P
 amdvi_page_fault(uint64_t addr) "error: page fault accessing guest physical address 0x%"PRIx64
 amdvi_iotlb_hit(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "hit iotlb devid %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
 amdvi_translation_result(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "devid: %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
+amdvi_irte_get_fail(uint64_t addr, uint64_t offset) "couldn't access device table entry 0x%"PRIx64" + offset 0x%"PRIx64
+amdvi_invalid_irte_entry(uint16_t devid, uint64_t offset) "devid %x requested IRTE offset 0x%"PRIx64" Outside IR table range"
+amdvi_ir_request(uint32_t data, uint64_t addr, uint16_t sid) "IR request data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
+amdvi_ir_remap(uint32_t data, uint64_t addr, uint16_t sid) "IR remap data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
+amdvi_ir_target_abort(uint32_t data, uint64_t addr, uint16_t sid) "IR target abort data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
+amdvi_ir_write_fail(uint64_t addr, uint32_t data) "fail to write to addr 0x%"PRIx64 " value 0x%"PRIx32
+amdvi_ir_read_fail(uint64_t addr) " fail to read from addr 0x%"PRIx64
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [V15 4/4] hw/i386: AMD IOMMU IVRS table
  2016-08-02  8:39 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
                   ` (2 preceding siblings ...)
  2016-08-02  8:39 ` [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU David Kiarie
@ 2016-08-02  8:39 ` David Kiarie
  2016-08-02 13:32   ` Igor Mammedov
  3 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-02  8:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst,
	David Kiarie

Add IVRS table for AMD IOMMU. Generate IVRS or DMAR
depending on emulated IOMMU.

Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
---
 hw/acpi/aml-build.c         |  2 +-
 hw/i386/acpi-build.c        | 76 ++++++++++++++++++++++++++++++++++++++++-----
 hw/i386/x86-iommu.c         | 19 ++++++++++++
 include/hw/acpi/aml-build.h |  1 +
 include/hw/i386/x86-iommu.h | 11 +++++++
 5 files changed, 101 insertions(+), 8 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index db3e914..b2a1e40 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -226,7 +226,7 @@ static void build_extop_package(GArray *package, uint8_t op)
     build_prepend_byte(package, 0x5B); /* ExtOpPrefix */
 }
 
-static void build_append_int_noprefix(GArray *table, uint64_t value, int size)
+void build_append_int_noprefix(GArray *table, uint64_t value, int size)
 {
     int i;
 
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index a26a4bb..efed318 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -59,7 +59,8 @@
 
 #include "qapi/qmp/qint.h"
 #include "qom/qom-qobject.h"
-#include "hw/i386/x86-iommu.h"
+#include "hw/i386/amd_iommu.h"
+#include "hw/i386/intel_iommu.h"
 
 #include "hw/acpi/ipmi.h"
 
@@ -2562,6 +2563,68 @@ build_dmar_q35(GArray *table_data, BIOSLinker *linker)
     build_header(linker, table_data, (void *)(table_data->data + dmar_start),
                  "DMAR", table_data->len - dmar_start, 1, NULL, NULL);
 }
+/*
+ *   IVRS table as specified in AMD IOMMU Specification v2.62, Section 5.2
+ *   accessible here http://support.amd.com/TechDocs/48882_IOMMU.pdf
+ */
+static void
+build_amd_iommu(GArray *table_data, BIOSLinker *linker)
+{
+    int iommu_start = table_data->len;
+    AMDVIState *s = AMD_IOMMU_DEVICE(x86_iommu_get_default());
+    assert(s);
+
+    /* IVRS header */
+    acpi_data_push(table_data, sizeof(AcpiTableHeader));
+    /* IVinfo - IO virtualization information common to all IOMMU
+     * units in a system
+     */
+    build_append_int_noprefix(table_data, 40UL << 8/* PASize */, 4);
+    /* reserved */
+    build_append_int_noprefix(table_data, 0, 8);
+
+    /* IVHD definition - type 10h */
+    build_append_int_noprefix(table_data, 0x10, 1);
+    /* virtualization flags */
+    build_append_int_noprefix(table_data,
+                             (1UL << 0) | /* HtTunEn      */
+                             (1UL << 4) | /* iotblSup     */
+                             (1UL << 6) | /* PrefSup      */
+                             (1UL << 7),  /* PPRSup       */
+                             1);
+    /* IVHD length */
+    build_append_int_noprefix(table_data, 0x28, 2);
+    /* DeviceID */
+    build_append_int_noprefix(table_data, s->devid, 2);
+    /* Capability offset */
+    build_append_int_noprefix(table_data, s->capab_offset, 2);
+    /* IOMMU base address */
+    build_append_int_noprefix(table_data, s->mmio.addr, 8);
+    /* PCI Segment Group */
+    build_append_int_noprefix(table_data, 0, 2);
+    /* IOMMU info */
+    build_append_int_noprefix(table_data, 0, 2);
+    /* IOMMU Feature Reporting */
+    build_append_int_noprefix(table_data,
+                             (48UL << 30) | /* HATS   */
+                             (48UL << 28) | /* GATS   */
+                             (1UL << 2),    /* GTSup  */
+                             4);
+    /* Add device flags here
+     *   These are 4-byte device entries currently reporting the range of
+     *   devices 00h - ffffh; all devices
+     *   Device setting affecting all devices should be made here
+     *
+     *   Refer to Spec - Table 95:IVHD Device Entry Type Codes(4-byte)
+     */
+    /* start of device range, 4-byte entries */
+    build_append_int_noprefix(table_data, 0x00000003, 4);
+    /* end of device range */
+    build_append_int_noprefix(table_data, 0x00ffff04, 4);
+
+    build_header(linker, table_data, (void *)(table_data->data + iommu_start),
+                 "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
+}
 
 static GArray *
 build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
@@ -2622,11 +2685,6 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
     return true;
 }
 
-static bool acpi_has_iommu(void)
-{
-    return !!x86_iommu_get_default();
-}
-
 static
 void acpi_build(AcpiBuildTables *tables, MachineState *machine)
 {
@@ -2639,6 +2697,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     AcpiMcfgInfo mcfg;
     Range pci_hole, pci_hole64;
     uint8_t *u;
+    IommuType IOMMUType = x86_iommu_get_type();
     size_t aml_len = 0;
     GArray *tables_blob = tables->table_data;
     AcpiSlicOem slic_oem = { .id = NULL, .table_id = NULL };
@@ -2706,7 +2765,10 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
         acpi_add_table(table_offsets, tables_blob);
         build_mcfg_q35(tables_blob, tables->linker, &mcfg);
     }
-    if (acpi_has_iommu()) {
+    if (IOMMUType == TYPE_AMD) {
+        acpi_add_table(table_offsets, tables_blob);
+        build_amd_iommu(tables_blob, tables->linker);
+    } else if (IOMMUType == TYPE_INTEL) {
         acpi_add_table(table_offsets, tables_blob);
         build_dmar_q35(tables_blob, tables->linker);
     }
diff --git a/hw/i386/x86-iommu.c b/hw/i386/x86-iommu.c
index ce26b2a..893d54d 100644
--- a/hw/i386/x86-iommu.c
+++ b/hw/i386/x86-iommu.c
@@ -20,7 +20,10 @@
 #include "qemu/osdep.h"
 #include "hw/sysbus.h"
 #include "hw/boards.h"
+#include "hw/i386/intel_iommu.h"
+#include "hw/i386/amd_iommu.h"
 #include "hw/i386/x86-iommu.h"
+#include "sysemu/kvm.h"
 #include "qemu/error-report.h"
 #include "trace.h"
 
@@ -71,6 +74,21 @@ X86IOMMUState *x86_iommu_get_default(void)
     return x86_iommu_default;
 }
 
+IommuType x86_iommu_get_type(void)
+{
+    bool ambiguous;
+
+    if (object_resolve_path_type("", TYPE_AMD_IOMMU_DEVICE, &ambiguous)
+            && !ambiguous) {
+        return TYPE_AMD;
+    } else if (object_resolve_path_type("", TYPE_INTEL_IOMMU_DEVICE, &ambiguous)
+            && !ambiguous) {
+        return TYPE_INTEL;
+    } else {
+        return TYPE_NONE;
+    }
+}
+
 static void x86_iommu_realize(DeviceState *dev, Error **errp)
 {
     X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(dev);
@@ -79,6 +97,7 @@ static void x86_iommu_realize(DeviceState *dev, Error **errp)
     if (x86_class->realize) {
         x86_class->realize(dev, errp);
     }
+
     x86_iommu_set_default(X86_IOMMU_DEVICE(dev));
 }
 
diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
index e5f0878..559326c 100644
--- a/include/hw/acpi/aml-build.h
+++ b/include/hw/acpi/aml-build.h
@@ -367,6 +367,7 @@ Aml *aml_sizeof(Aml *arg);
 Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target);
 Aml *aml_object_type(Aml *object);
 
+void build_append_int_noprefix(GArray *table, uint64_t value, int size);
 void
 build_header(BIOSLinker *linker, GArray *table_data,
              AcpiTableHeader *h, const char *sig, int len, uint8_t rev,
diff --git a/include/hw/i386/x86-iommu.h b/include/hw/i386/x86-iommu.h
index c48e8dd..2acc04a 100644
--- a/include/hw/i386/x86-iommu.h
+++ b/include/hw/i386/x86-iommu.h
@@ -37,6 +37,12 @@
 typedef struct X86IOMMUState X86IOMMUState;
 typedef struct X86IOMMUClass X86IOMMUClass;
 
+typedef enum IommuType {
+    TYPE_INTEL,
+    TYPE_AMD,
+    TYPE_NONE
+} IommuType;
+
 struct X86IOMMUClass {
     SysBusDeviceClass parent;
     /* Intel/AMD specific realize() hook */
@@ -76,6 +82,11 @@ struct X86IOMMUState {
  */
 X86IOMMUState *x86_iommu_get_default(void);
 
+/*
+ * x86_iommu_get_type - get IOMMU type
+ */
+IommuType x86_iommu_get_type(void);
+
 /**
  * x86_iommu_iec_register_notifier - register IEC (Interrupt Entry
  *                                   Cache) notifiers
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 4/4] hw/i386: AMD IOMMU IVRS table
  2016-08-02  8:39 ` [Qemu-devel] [V15 4/4] hw/i386: AMD IOMMU IVRS table David Kiarie
@ 2016-08-02 13:32   ` Igor Mammedov
  0 siblings, 0 replies; 26+ messages in thread
From: Igor Mammedov @ 2016-08-02 13:32 UTC (permalink / raw)
  To: David Kiarie
  Cc: qemu-devel, ehabkost, mst, rkrcmar, peterx, valentine.sinitsyn,
	jan.kiszka

On Tue,  2 Aug 2016 11:39:07 +0300
David Kiarie <davidkiarie4@gmail.com> wrote:

> Add IVRS table for AMD IOMMU. Generate IVRS or DMAR
> depending on emulated IOMMU.
> 
> Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
> ---
>  hw/acpi/aml-build.c         |  2 +-
>  hw/i386/acpi-build.c        | 76 ++++++++++++++++++++++++++++++++++++++++-----
>  hw/i386/x86-iommu.c         | 19 ++++++++++++
>  include/hw/acpi/aml-build.h |  1 +
>  include/hw/i386/x86-iommu.h | 11 +++++++
>  5 files changed, 101 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index db3e914..b2a1e40 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -226,7 +226,7 @@ static void build_extop_package(GArray *package, uint8_t op)
>      build_prepend_byte(package, 0x5B); /* ExtOpPrefix */
>  }
>  
> -static void build_append_int_noprefix(GArray *table, uint64_t value, int size)
> +void build_append_int_noprefix(GArray *table, uint64_t value, int size)
>  {
>      int i;
>  
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index a26a4bb..efed318 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -59,7 +59,8 @@
>  
>  #include "qapi/qmp/qint.h"
>  #include "qom/qom-qobject.h"
> -#include "hw/i386/x86-iommu.h"
> +#include "hw/i386/amd_iommu.h"
> +#include "hw/i386/intel_iommu.h"
>  
>  #include "hw/acpi/ipmi.h"
>  
> @@ -2562,6 +2563,68 @@ build_dmar_q35(GArray *table_data, BIOSLinker *linker)
>      build_header(linker, table_data, (void *)(table_data->data + dmar_start),
>                   "DMAR", table_data->len - dmar_start, 1, NULL, NULL);
>  }
> +/*
> + *   IVRS table as specified in AMD IOMMU Specification v2.62, Section 5.2
> + *   accessible here http://support.amd.com/TechDocs/48882_IOMMU.pdf
> + */
> +static void
> +build_amd_iommu(GArray *table_data, BIOSLinker *linker)
> +{
> +    int iommu_start = table_data->len;
> +    AMDVIState *s = AMD_IOMMU_DEVICE(x86_iommu_get_default());
> +    assert(s);
Wouldn't above cast assert on its own if it isn't able to cast?
So is assert(s) needed?

> +
> +    /* IVRS header */
> +    acpi_data_push(table_data, sizeof(AcpiTableHeader));
> +    /* IVinfo - IO virtualization information common to all IOMMU
> +     * units in a system
> +     */
> +    build_append_int_noprefix(table_data, 40UL << 8/* PASize */, 4);
> +    /* reserved */
> +    build_append_int_noprefix(table_data, 0, 8);
> +
> +    /* IVHD definition - type 10h */
> +    build_append_int_noprefix(table_data, 0x10, 1);
> +    /* virtualization flags */
> +    build_append_int_noprefix(table_data,
> +                             (1UL << 0) | /* HtTunEn      */
> +                             (1UL << 4) | /* iotblSup     */
> +                             (1UL << 6) | /* PrefSup      */
> +                             (1UL << 7),  /* PPRSup       */
> +                             1);
> +    /* IVHD length */
> +    build_append_int_noprefix(table_data, 0x28, 2);
> +    /* DeviceID */
> +    build_append_int_noprefix(table_data, s->devid, 2);
> +    /* Capability offset */
> +    build_append_int_noprefix(table_data, s->capab_offset, 2);
> +    /* IOMMU base address */
> +    build_append_int_noprefix(table_data, s->mmio.addr, 8);
> +    /* PCI Segment Group */
> +    build_append_int_noprefix(table_data, 0, 2);
> +    /* IOMMU info */
> +    build_append_int_noprefix(table_data, 0, 2);
> +    /* IOMMU Feature Reporting */
> +    build_append_int_noprefix(table_data,
> +                             (48UL << 30) | /* HATS   */
> +                             (48UL << 28) | /* GATS   */
> +                             (1UL << 2),    /* GTSup  */
> +                             4);
> +    /* Add device flags here
> +     *   These are 4-byte device entries currently reporting the range of
> +     *   devices 00h - ffffh; all devices
> +     *   Device setting affecting all devices should be made here
> +     *
> +     *   Refer to Spec - Table 95:IVHD Device Entry Type Codes(4-byte)
> +     */
> +    /* start of device range, 4-byte entries */
> +    build_append_int_noprefix(table_data, 0x00000003, 4);
> +    /* end of device range */
> +    build_append_int_noprefix(table_data, 0x00ffff04, 4);
Why range is used here instead of IVHD entry type 1? which also
should affect all devices but just with one entry?

> +
> +    build_header(linker, table_data, (void *)(table_data->data + iommu_start),
> +                 "IVRS", table_data->len - iommu_start, 1, NULL, NULL);
> +}
>  
>  static GArray *
>  build_rsdp(GArray *rsdp_table, BIOSLinker *linker, unsigned rsdt_tbl_offset)
> @@ -2622,11 +2685,6 @@ static bool acpi_get_mcfg(AcpiMcfgInfo *mcfg)
>      return true;
>  }
>  
> -static bool acpi_has_iommu(void)
> -{
> -    return !!x86_iommu_get_default();
> -}
> -
>  static
>  void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>  {
> @@ -2639,6 +2697,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>      AcpiMcfgInfo mcfg;
>      Range pci_hole, pci_hole64;
>      uint8_t *u;
> +    IommuType IOMMUType = x86_iommu_get_type();
>      size_t aml_len = 0;
>      GArray *tables_blob = tables->table_data;
>      AcpiSlicOem slic_oem = { .id = NULL, .table_id = NULL };
> @@ -2706,7 +2765,10 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>          acpi_add_table(table_offsets, tables_blob);
>          build_mcfg_q35(tables_blob, tables->linker, &mcfg);
>      }
> -    if (acpi_has_iommu()) {
> +    if (IOMMUType == TYPE_AMD) {
> +        acpi_add_table(table_offsets, tables_blob);
> +        build_amd_iommu(tables_blob, tables->linker);
> +    } else if (IOMMUType == TYPE_INTEL) {
>          acpi_add_table(table_offsets, tables_blob);
>          build_dmar_q35(tables_blob, tables->linker);
>      }
> diff --git a/hw/i386/x86-iommu.c b/hw/i386/x86-iommu.c
> index ce26b2a..893d54d 100644
> --- a/hw/i386/x86-iommu.c
> +++ b/hw/i386/x86-iommu.c
> @@ -20,7 +20,10 @@
>  #include "qemu/osdep.h"
>  #include "hw/sysbus.h"
>  #include "hw/boards.h"
> +#include "hw/i386/intel_iommu.h"
> +#include "hw/i386/amd_iommu.h"
>  #include "hw/i386/x86-iommu.h"
> +#include "sysemu/kvm.h"
>  #include "qemu/error-report.h"
>  #include "trace.h"
>  
> @@ -71,6 +74,21 @@ X86IOMMUState *x86_iommu_get_default(void)
>      return x86_iommu_default;
>  }
>  
> +IommuType x86_iommu_get_type(void)
> +{
> +    bool ambiguous;
> +
> +    if (object_resolve_path_type("", TYPE_AMD_IOMMU_DEVICE, &ambiguous)
> +            && !ambiguous) {
> +        return TYPE_AMD;
> +    } else if (object_resolve_path_type("", TYPE_INTEL_IOMMU_DEVICE, &ambiguous)
> +            && !ambiguous) {
> +        return TYPE_INTEL;
> +    } else {
> +        return TYPE_NONE;
> +    }
> +}
> +
>  static void x86_iommu_realize(DeviceState *dev, Error **errp)
>  {
>      X86IOMMUState *x86_iommu = X86_IOMMU_DEVICE(dev);
> @@ -79,6 +97,7 @@ static void x86_iommu_realize(DeviceState *dev, Error **errp)
>      if (x86_class->realize) {
>          x86_class->realize(dev, errp);
>      }
> +
Unrelated new line change?

>      x86_iommu_set_default(X86_IOMMU_DEVICE(dev));
>  }
>  
> diff --git a/include/hw/acpi/aml-build.h b/include/hw/acpi/aml-build.h
> index e5f0878..559326c 100644
> --- a/include/hw/acpi/aml-build.h
> +++ b/include/hw/acpi/aml-build.h
> @@ -367,6 +367,7 @@ Aml *aml_sizeof(Aml *arg);
>  Aml *aml_concatenate(Aml *source1, Aml *source2, Aml *target);
>  Aml *aml_object_type(Aml *object);
>  
> +void build_append_int_noprefix(GArray *table, uint64_t value, int size);
>  void
>  build_header(BIOSLinker *linker, GArray *table_data,
>               AcpiTableHeader *h, const char *sig, int len, uint8_t rev,
> diff --git a/include/hw/i386/x86-iommu.h b/include/hw/i386/x86-iommu.h
> index c48e8dd..2acc04a 100644
> --- a/include/hw/i386/x86-iommu.h
> +++ b/include/hw/i386/x86-iommu.h
> @@ -37,6 +37,12 @@
>  typedef struct X86IOMMUState X86IOMMUState;
>  typedef struct X86IOMMUClass X86IOMMUClass;
>  
> +typedef enum IommuType {
> +    TYPE_INTEL,
> +    TYPE_AMD,
> +    TYPE_NONE
> +} IommuType;
> +
>  struct X86IOMMUClass {
>      SysBusDeviceClass parent;
>      /* Intel/AMD specific realize() hook */
> @@ -76,6 +82,11 @@ struct X86IOMMUState {
>   */
>  X86IOMMUState *x86_iommu_get_default(void);
>  
> +/*
> + * x86_iommu_get_type - get IOMMU type
> + */
> +IommuType x86_iommu_get_type(void);
> +
>  /**
>   * x86_iommu_iec_register_notifier - register IEC (Interrupt Entry
>   *                                   Cache) notifiers

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 1/4] hw/pci: Prepare for AMD IOMMU
  2016-08-02  8:39 ` [Qemu-devel] [V15 1/4] hw/pci: Prepare for " David Kiarie
@ 2016-08-08  9:01   ` Peter Xu
  2016-08-08  9:25     ` David Kiarie
  0 siblings, 1 reply; 26+ messages in thread
From: Peter Xu @ 2016-08-08  9:01 UTC (permalink / raw)
  To: David Kiarie
  Cc: qemu-devel, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst

On Tue, Aug 02, 2016 at 11:39:04AM +0300, David Kiarie wrote:
> Introduce PCI macros from for use by AMD IOMMU
> 
> Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
> ---
>  include/hw/pci/pci.h | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> index 929ec2f..d47e0e6 100644
> --- a/include/hw/pci/pci.h
> +++ b/include/hw/pci/pci.h
> @@ -11,11 +11,14 @@
>  #include "hw/pci/pcie.h"
>  
>  /* PCI bus */
> -
> +#define PCI_BDF(bus, devfn)     ((((uint16_t)(bus)) << 8) | (devfn))

Seems the same as PCI_BUILD_BDF() below?

>  #define PCI_DEVFN(slot, func)   ((((slot) & 0x1f) << 3) | ((func) & 0x07))
> +#define PCI_BUS_NUM(x)          (((x) >> 8) & 0xff)
>  #define PCI_SLOT(devfn)         (((devfn) >> 3) & 0x1f)
>  #define PCI_FUNC(devfn)         ((devfn) & 0x07)
>  #define PCI_BUILD_BDF(bus, devfn)     ((bus << 8) | (devfn))
> +#define PCI_BUS_MAX             256
> +#define PCI_DEVFN_MAX           256
>  #define PCI_SLOT_MAX            32
>  #define PCI_FUNC_MAX            8
>  
> -- 
> 2.1.4
> 

-- peterx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 1/4] hw/pci: Prepare for AMD IOMMU
  2016-08-08  9:01   ` Peter Xu
@ 2016-08-08  9:25     ` David Kiarie
  0 siblings, 0 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-08  9:25 UTC (permalink / raw)
  To: Peter Xu
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Mon, Aug 8, 2016 at 12:01 PM, Peter Xu <peterx@redhat.com> wrote:

> On Tue, Aug 02, 2016 at 11:39:04AM +0300, David Kiarie wrote:
> > Introduce PCI macros from for use by AMD IOMMU
> >
> > Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
> > ---
> >  include/hw/pci/pci.h | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > index 929ec2f..d47e0e6 100644
> > --- a/include/hw/pci/pci.h
> > +++ b/include/hw/pci/pci.h
> > @@ -11,11 +11,14 @@
> >  #include "hw/pci/pcie.h"
> >
> >  /* PCI bus */
> > -
> > +#define PCI_BDF(bus, devfn)     ((((uint16_t)(bus)) << 8) | (devfn))
>
> Seems the same as PCI_BUILD_BDF() below?
>

Yes, I noted. It's one of the things I intend to fix on the version.


> >  #define PCI_DEVFN(slot, func)   ((((slot) & 0x1f) << 3) | ((func) &
> 0x07))
> > +#define PCI_BUS_NUM(x)          (((x) >> 8) & 0xff)
> >  #define PCI_SLOT(devfn)         (((devfn) >> 3) & 0x1f)
> >  #define PCI_FUNC(devfn)         ((devfn) & 0x07)
> >  #define PCI_BUILD_BDF(bus, devfn)     ((bus << 8) | (devfn))
> > +#define PCI_BUS_MAX             256
> > +#define PCI_DEVFN_MAX           256
> >  #define PCI_SLOT_MAX            32
> >  #define PCI_FUNC_MAX            8
> >
> > --
> > 2.1.4
> >
>
> -- peterx
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-02  8:39 ` [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU David Kiarie
@ 2016-08-09  5:44   ` Peter Xu
  2016-08-09 12:07     ` David Kiarie
                       ` (2 more replies)
  2016-08-11  8:23   ` Valentine Sinitsyn
  2016-08-12 19:10   ` Valentine Sinitsyn
  2 siblings, 3 replies; 26+ messages in thread
From: Peter Xu @ 2016-08-09  5:44 UTC (permalink / raw)
  To: David Kiarie
  Cc: qemu-devel, rkrcmar, jan.kiszka, valentine.sinitsyn, ehabkost, mst

On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:

[...]

> +/* invalidate internal caches for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t devid;                /* device to invalidate   */
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;               /* command type           */
> +#else
> +    uint64_t devid;
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */

Guess you forgot to reverse the order of fields in one of above block.

[...]

> +/* load adddress translation info for devid into translation cache */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;          /* command type       */
> +    uint64_t reserved_2:8;
> +    uint64_t pasid_19_0:20;
> +    uint64_t pfcount_7_0:8;
> +    uint64_t reserved_1:8;
> +    uint64_t devid;           /* related devid      */
> +#else
> +    uint64_t devid;
> +    uint64_t reserved_1:8;
> +    uint64_t pfcount_7_0:8;
> +    uint64_t pasid_19_0:20;
> +    uint64_t reserved_2:8;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */

For this one, "devid" looks like a 16 bits field?

[...]

> +/* issue a PCIe completion packet for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t devid;               /* related devid      */
> +    uint32_t reserved_1;
> +#else
> +    uint32_t reserved_1;
> +    uint32_t devid;
> +#endif /* __BIG_ENDIAN_BITFIELD */

Here I am not sure we need this "#ifdef".

[...]

> +/* external write */
> +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> +{
> +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));

I think the above is problematic, e.g., what if we write 1 to one of
the romask while it's 0 originally? In that case, the RO bit will be
written to 1.

Maybe we need:

  stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
                            (val & w1cmask));

Same question to the below two functions.

> +}
> +
> +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
> +{
> +    uint32_t romask = ldl_le_p(&s->romask[addr]);
> +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
> +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
> +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    uint64_t romask = ldq_le_p(&s->romask[addr]);
> +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
> +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
> +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +/* OR a 64-bit register with a 64-bit value */
> +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)

Nit: This function name gives me an illusion that it's a write op, not
read. IMHO it'll be better we directly use amdvi_readq() for all the
callers of this function, which is more clear to me.

> +{
> +    return amdvi_readq(s, addr) | val;
> +}
> +
> +/* OR a 64-bit register with a 64-bit value storing result in the register */
> +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
> +}
> +
> +/* AND a 64-bit register with a 64-bit value storing result in the register */
> +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)

Nit: the name is not matched with above:

  amdvi_{or|and}assign[qw]

Though I would prefer:

  amdvi_assign_[qw]_{or|and}

[...]

> +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
> +{
> +    /* event logging not enabled */
> +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
> +        AMDVI_MMIO_STATUS_EVT_OVF)) {
> +        return;
> +    }
> +
> +    /* event log buffer full */
> +    if (s->evtlog_tail >= s->evtlog_len) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_OVF);
> +        /* generate interrupt */
> +        amdvi_generate_msi_interrupt(s);
> +        return;
> +    }
> +
> +    if (dma_memory_write(&address_space_memory, s->evtlog_len + s->evtlog_tail,
> +        &evt, AMDVI_EVENT_LEN)) {

Check with MEMTX_OK?

[...]

> +/*
> + * AMDVi event structure
> + *    0:15   -> DeviceID
> + *    55:63  -> event type + miscellaneous info
> + *    64:127 -> related address
> + */
> +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t addr,
> +                               uint16_t info)
> +{
> +    amdvi_setevent_bits(evt, devid, 0, 16);
> +    amdvi_setevent_bits(evt, info, 55, 8);
> +    amdvi_setevent_bits(evt, addr, 63, 64);
                                      ^^
                                should here be 64?

Also, I am not sure whether we need this amdvi_setevent_bits() if it's
only used in this function. Though not a big problem for me.

> +}
> +/* log an error encountered page-walking

"during page-walking"

> + *
> + * @addr: virtual address in translation request
> + */
> +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> +                             hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);

Nit: maybe we can provide a function for setting this bit.

[...]

> +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
> +                               uint64_t gpa, IOMMUTLBEntry to_cache,
> +                               uint16_t domid)
> +{
> +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
> +    uint64_t *key = g_malloc(sizeof(key));
> +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
> +
> +    /* don't cache erroneous translations */
> +    if (to_cache.perm != IOMMU_NONE) {
> +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
> +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
> +
> +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
> +            trace_amdvi_iotlb_reset();

We'd better put this trace into amdvi_iotlb_reset().

> +            amdvi_iotlb_reset(s);
> +        }
> +
> +        entry->gfn = gfn;
> +        entry->domid = domid;
> +        entry->perms = to_cache.perm;
> +        entry->translated_addr = to_cache.translated_addr;
> +        entry->page_mask = to_cache.addr_mask;
> +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +        g_hash_table_replace(s->iotlb, key, entry);
> +    }
> +}
> +
> +static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait *wait)
> +{
> +    /* pad the last 3 bits */
> +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);

Is this correct? IMO it should be:

  hwaddr addr = le64_to_cpu(wait->store_addr) << 3;

> +    uint64_t data = cpu_to_le64(wait->store_data);

Maybe:

  uint64_t data = le64_to_cpu(wait->store_data);

?

> +
> +    if (wait->reserved) {
> +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    if (wait->completion_store) {
> +        if (dma_memory_write(&address_space_memory, addr, &data,
> +            AMDVI_COMPLETION_DATA_SIZE))
> +        {

Left bracket is better moved upward to follow the coding style.

> +            trace_amdvi_completion_wait_fail(addr);
> +        }
> +    }

Thanks,

-- peterx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09  5:44   ` Peter Xu
@ 2016-08-09 12:07     ` David Kiarie
  2016-08-09 12:21       ` Peter Xu
  2016-08-09 12:52     ` David Kiarie
  2016-08-09 17:46     ` David Kiarie
  2 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-09 12:07 UTC (permalink / raw)
  To: Peter Xu
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com> wrote:

> On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
>
> [...]
>
>
Hi Peter.

Most of your comments are valid thought some are subjective :-). I'm
covering most if not all of them on next version (should coming shortly).

> +/* invalidate internal caches for devid */
> > +typedef struct QEMU_PACKED {
> > +#ifdef HOST_WORDS_BIGENDIAN
> > +    uint64_t devid;                /* device to invalidate   */
> > +    uint64_t reserved_1:44;
> > +    uint64_t type:4;               /* command type           */
> > +#else
> > +    uint64_t devid;
> > +    uint64_t reserved_1:44;
> > +    uint64_t type:4;
> > +#endif /* __BIG_ENDIAN_BITFIELD */
>
> Guess you forgot to reverse the order of fields in one of above block.
>
> [...]
>
> > +/* load adddress translation info for devid into translation cache */
> > +typedef struct QEMU_PACKED {
> > +#ifdef HOST_WORDS_BIGENDIAN
> > +    uint64_t type:4;          /* command type       */
> > +    uint64_t reserved_2:8;
> > +    uint64_t pasid_19_0:20;
> > +    uint64_t pfcount_7_0:8;
> > +    uint64_t reserved_1:8;
> > +    uint64_t devid;           /* related devid      */
> > +#else
> > +    uint64_t devid;
> > +    uint64_t reserved_1:8;
> > +    uint64_t pfcount_7_0:8;
> > +    uint64_t pasid_19_0:20;
> > +    uint64_t reserved_2:8;
> > +    uint64_t type:4;
> > +#endif /* __BIG_ENDIAN_BITFIELD */
>
> For this one, "devid" looks like a 16 bits field?
>
> [...]
>
> > +/* issue a PCIe completion packet for devid */
> > +typedef struct QEMU_PACKED {
> > +#ifdef HOST_WORDS_BIGENDIAN
> > +    uint32_t devid;               /* related devid      */
> > +    uint32_t reserved_1;
> > +#else
> > +    uint32_t reserved_1;
> > +    uint32_t devid;
> > +#endif /* __BIG_ENDIAN_BITFIELD */
>
> Here I am not sure we need this "#ifdef".
>
> [...]
>
> > +/* external write */
> > +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> > +{
> > +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> > +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> > +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> > +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
>
> I think the above is problematic, e.g., what if we write 1 to one of
> the romask while it's 0 originally? In that case, the RO bit will be
> written to 1.
>
> Maybe we need:
>
>   stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
>                             (val & w1cmask));
>
> Same question to the below two functions.
>
> > +}
> > +
> > +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
> > +{
> > +    uint32_t romask = ldl_le_p(&s->romask[addr]);
> > +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
> > +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
> > +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
> > +}
> > +
> > +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
> > +{
> > +    uint64_t romask = ldq_le_p(&s->romask[addr]);
> > +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
> > +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
> > +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
> > +}
> > +
> > +/* OR a 64-bit register with a 64-bit value */
> > +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
>
> Nit: This function name gives me an illusion that it's a write op, not
> read. IMHO it'll be better we directly use amdvi_readq() for all the
> callers of this function, which is more clear to me.
>
> > +{
> > +    return amdvi_readq(s, addr) | val;
> > +}
> > +
> > +/* OR a 64-bit register with a 64-bit value storing result in the
> register */
> > +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
> > +{
> > +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
> > +}
> > +
> > +/* AND a 64-bit register with a 64-bit value storing result in the
> register */
> > +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
>
> Nit: the name is not matched with above:
>
>   amdvi_{or|and}assign[qw]
>
> Though I would prefer:
>
>   amdvi_assign_[qw]_{or|and}
>
> [...]
>
> > +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
> > +{
> > +    /* event logging not enabled */
> > +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
> > +        AMDVI_MMIO_STATUS_EVT_OVF)) {
> > +        return;
> > +    }
> > +
> > +    /* event log buffer full */
> > +    if (s->evtlog_tail >= s->evtlog_len) {
> > +        amdvi_orassignq(s, AMDVI_MMIO_STATUS,
> AMDVI_MMIO_STATUS_EVT_OVF);
> > +        /* generate interrupt */
> > +        amdvi_generate_msi_interrupt(s);
> > +        return;
> > +    }
> > +
> > +    if (dma_memory_write(&address_space_memory, s->evtlog_len +
> s->evtlog_tail,
> > +        &evt, AMDVI_EVENT_LEN)) {
>
> Check with MEMTX_OK?
>
> [...]
>
> > +/*
> > + * AMDVi event structure
> > + *    0:15   -> DeviceID
> > + *    55:63  -> event type + miscellaneous info
> > + *    64:127 -> related address
> > + */
> > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t
> addr,
> > +                               uint16_t info)
> > +{
> > +    amdvi_setevent_bits(evt, devid, 0, 16);
> > +    amdvi_setevent_bits(evt, info, 55, 8);
> > +    amdvi_setevent_bits(evt, addr, 63, 64);
>                                       ^^
>                                 should here be 64?
>
> Also, I am not sure whether we need this amdvi_setevent_bits() if it's
> only used in this function. Though not a big problem for me.
>
> > +}
> > +/* log an error encountered page-walking
>
> "during page-walking"
>
> > + *
> > + * @addr: virtual address in translation request
> > + */
> > +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> > +                             hwaddr addr, uint16_t info)
> > +{
> > +    uint64_t evt[4];
> > +
> > +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> > +    amdvi_encode_event(evt, devid, addr, info);
> > +    amdvi_log_event(s, evt);
> > +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> > +            PCI_STATUS_SIG_TARGET_ABORT);
>
> Nit: maybe we can provide a function for setting this bit.
>
> [...]
>
> > +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
> > +                               uint64_t gpa, IOMMUTLBEntry to_cache,
> > +                               uint16_t domid)
> > +{
> > +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
> > +    uint64_t *key = g_malloc(sizeof(key));
> > +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
> > +
> > +    /* don't cache erroneous translations */
> > +    if (to_cache.perm != IOMMU_NONE) {
> > +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid),
> PCI_SLOT(devid),
> > +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
> > +
> > +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
> > +            trace_amdvi_iotlb_reset();
>
> We'd better put this trace into amdvi_iotlb_reset().
>
> > +            amdvi_iotlb_reset(s);
> > +        }
> > +
> > +        entry->gfn = gfn;
> > +        entry->domid = domid;
> > +        entry->perms = to_cache.perm;
> > +        entry->translated_addr = to_cache.translated_addr;
> > +        entry->page_mask = to_cache.addr_mask;
> > +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> > +        g_hash_table_replace(s->iotlb, key, entry);
> > +    }
> > +}
> > +
> > +static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait
> *wait)
> > +{
> > +    /* pad the last 3 bits */
> > +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
>
> Is this correct? IMO it should be:
>
>   hwaddr addr = le64_to_cpu(wait->store_addr) << 3;
>
> > +    uint64_t data = cpu_to_le64(wait->store_data);
>
> Maybe:
>
>   uint64_t data = le64_to_cpu(wait->store_data);
>
> ?
>
> > +
> > +    if (wait->reserved) {
> > +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf +
> s->cmdbuf_head);
> > +    }
> > +
> > +    if (wait->completion_store) {
> > +        if (dma_memory_write(&address_space_memory, addr, &data,
> > +            AMDVI_COMPLETION_DATA_SIZE))
> > +        {
>
> Left bracket is better moved upward to follow the coding style.
>
> > +            trace_amdvi_completion_wait_fail(addr);
> > +        }
> > +    }
>
> Thanks,
>
> -- peterx
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09 12:07     ` David Kiarie
@ 2016-08-09 12:21       ` Peter Xu
  0 siblings, 0 replies; 26+ messages in thread
From: Peter Xu @ 2016-08-09 12:21 UTC (permalink / raw)
  To: David Kiarie
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Tue, Aug 09, 2016 at 03:07:43PM +0300, David Kiarie wrote:
> On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com> wrote:
> 
> > On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
> >
> > [...]
> >
> >
> Hi Peter.
> 
> Most of your comments are valid thought some are subjective :-). I'm
> covering most if not all of them on next version (should coming shortly).

Hi, David,

I think for most subjective comments, I was using "Nit:" as prefix.
Most of the other comments should not? ;)

For endian issue, I am not sure whether that's important, since I
don't know whether there will be anyone run x86_64 on e.g. big endian
machines with a AMD IOMMU... For the other comments besides "nit" and
"endianess" issues, I would like to hear your opinion if you disagree
on any of them (so I can learn as well if I made any mistake). :)

Thanks,

-- peterx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09  5:44   ` Peter Xu
  2016-08-09 12:07     ` David Kiarie
@ 2016-08-09 12:52     ` David Kiarie
  2016-08-09 13:01       ` Valentine Sinitsyn
  2016-08-10  2:08       ` Peter Xu
  2016-08-09 17:46     ` David Kiarie
  2 siblings, 2 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-09 12:52 UTC (permalink / raw)
  To: Peter Xu
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com> wrote:

> On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
>
> [...]
>
> > +/* invalidate internal caches for devid */
> > +typedef struct QEMU_PACKED {
> > +#ifdef HOST_WORDS_BIGENDIAN
> > +    uint64_t devid;                /* device to invalidate   */
> > +    uint64_t reserved_1:44;
> > +    uint64_t type:4;               /* command type           */
> > +#else
> > +    uint64_t devid;
> > +    uint64_t reserved_1:44;
> > +    uint64_t type:4;
> > +#endif /* __BIG_ENDIAN_BITFIELD */
>
> Guess you forgot to reverse the order of fields in one of above block.
>

Yes, I forgot to reverse order of fields here.


>
> [...]
>
> > +/* load adddress translation info for devid into translation cache */
> > +typedef struct QEMU_PACKED {
> > +#ifdef HOST_WORDS_BIGENDIAN
> > +    uint64_t type:4;          /* command type       */
> > +    uint64_t reserved_2:8;
> > +    uint64_t pasid_19_0:20;
> > +    uint64_t pfcount_7_0:8;
> > +    uint64_t reserved_1:8;
> > +    uint64_t devid;           /* related devid      */
> > +#else
> > +    uint64_t devid;
> > +    uint64_t reserved_1:8;
> > +    uint64_t pfcount_7_0:8;
> > +    uint64_t pasid_19_0:20;
> > +    uint64_t reserved_2:8;
> > +    uint64_t type:4;
> > +#endif /* __BIG_ENDIAN_BITFIELD */
>
> For this one, "devid" looks like a 16 bits field?
>

Right. should be 16 bits.


>
> [...]
>
> > +/* issue a PCIe completion packet for devid */
> > +typedef struct QEMU_PACKED {
> > +#ifdef HOST_WORDS_BIGENDIAN
> > +    uint32_t devid;               /* related devid      */
> > +    uint32_t reserved_1;
> > +#else
> > +    uint32_t reserved_1;
> > +    uint32_t devid;
> > +#endif /* __BIG_ENDIAN_BITFIELD */
>
> Here I am not sure we need this "#ifdef".
>

There's an error here but it's not with the #ifdef but instead I have not
set the right bit on the bitfields - for instance devid should be 16.


>
> [...]
>
> > +/* external write */
> > +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> > +{
> > +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> > +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> > +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> > +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
>
> I think the above is problematic, e.g., what if we write 1 to one of
> the romask while it's 0 originally? In that case, the RO bit will be
> written to 1.
>
> Maybe we need:
>
>   stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
>                             (val & w1cmask));
>
> Same question to the below two functions.
>

Right. I was very determined to come up with my algo but failed horribly ;-)


>
> > +}
> > +
> > +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
> > +{
> > +    uint32_t romask = ldl_le_p(&s->romask[addr]);
> > +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
> > +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
> > +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
> > +}
> > +
> > +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
> > +{
> > +    uint64_t romask = ldq_le_p(&s->romask[addr]);
> > +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
> > +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
> > +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
> > +}
> > +
> > +/* OR a 64-bit register with a 64-bit value */
> > +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
>
> Nit: This function name gives me an illusion that it's a write op, not
> read. IMHO it'll be better we directly use amdvi_readq() for all the
> callers of this function, which is more clear to me.
>
> > +{
> > +    return amdvi_readq(s, addr) | val;
> > +}
> > +
> > +/* OR a 64-bit register with a 64-bit value storing result in the
> register */
> > +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
> > +{
> > +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
> > +}
> > +
> > +/* AND a 64-bit register with a 64-bit value storing result in the
> register */
> > +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
>
> Nit: the name is not matched with above:
>
>   amdvi_{or|and}assign[qw]
>
> Though I would prefer:
>
>   amdvi_assign_[qw]_{or|and}
>

Your naming sounds better.


>
> [...]
>
> > +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
> > +{
> > +    /* event logging not enabled */
> > +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
> > +        AMDVI_MMIO_STATUS_EVT_OVF)) {
> > +        return;
> > +    }
> > +
> > +    /* event log buffer full */
> > +    if (s->evtlog_tail >= s->evtlog_len) {
> > +        amdvi_orassignq(s, AMDVI_MMIO_STATUS,
> AMDVI_MMIO_STATUS_EVT_OVF);
> > +        /* generate interrupt */
> > +        amdvi_generate_msi_interrupt(s);
> > +        return;
> > +    }
> > +
> > +    if (dma_memory_write(&address_space_memory, s->evtlog_len +
> s->evtlog_tail,
> > +        &evt, AMDVI_EVENT_LEN)) {
>
> Check with MEMTX_OK?
>

I'm not sure what exactly you mean here.


>
> [...]
>
> > +/*
> > + * AMDVi event structure
> > + *    0:15   -> DeviceID
> > + *    55:63  -> event type + miscellaneous info
> > + *    64:127 -> related address
> > + */
> > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t
> addr,
> > +                               uint16_t info)
> > +{
> > +    amdvi_setevent_bits(evt, devid, 0, 16);
> > +    amdvi_setevent_bits(evt, info, 55, 8);
> > +    amdvi_setevent_bits(evt, addr, 63, 64);
>                                       ^^
>                                 should here be 64?
>
> Also, I am not sure whether we need this amdvi_setevent_bits() if it's
> only used in this function. Though not a big problem for me.
>

It's only used in this function but I actually wrote his mainly for future
use. The idea is that various events encode totally different information
while the above is an over-simplified version to encode information common
to most events. In case an event wants to encode more information it would
turn out much more easier.


>
> > +}
> > +/* log an error encountered page-walking
>
> "during page-walking"
>

"encountered page-walking"  sounds right to me. "page-walking" is a verb,
in continuous tense, right ? how about I say "during hacking" ;-)


> > + *
> > + * @addr: virtual address in translation request
> > + */
> > +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> > +                             hwaddr addr, uint16_t info)
> > +{
> > +    uint64_t evt[4];
> > +
> > +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> > +    amdvi_encode_event(evt, devid, addr, info);
> > +    amdvi_log_event(s, evt);
> > +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> > +            PCI_STATUS_SIG_TARGET_ABORT);
>
> Nit: maybe we can provide a function for setting this bit.
>

I've actually being ignoring these since Qemu doesn't seem to care about
them.


>
> [...]
>
> > +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
> > +                               uint64_t gpa, IOMMUTLBEntry to_cache,
> > +                               uint16_t domid)
> > +{
> > +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
> > +    uint64_t *key = g_malloc(sizeof(key));
> > +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
> > +
> > +    /* don't cache erroneous translations */
> > +    if (to_cache.perm != IOMMU_NONE) {
> > +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid),
> PCI_SLOT(devid),
> > +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
> > +
> > +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
> > +            trace_amdvi_iotlb_reset();
>
> We'd better put this trace into amdvi_iotlb_reset().
>
> > +            amdvi_iotlb_reset(s);
> > +        }
> > +
> > +        entry->gfn = gfn;
> > +        entry->domid = domid;
> > +        entry->perms = to_cache.perm;
> > +        entry->translated_addr = to_cache.translated_addr;
> > +        entry->page_mask = to_cache.addr_mask;
> > +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> > +        g_hash_table_replace(s->iotlb, key, entry);
> > +    }
> > +}
> > +
> > +static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait
> *wait)
> > +{
> > +    /* pad the last 3 bits */
> > +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
>
> Is this correct? IMO it should be:
>
>   hwaddr addr = le64_to_cpu(wait->store_addr) << 3;
>
> > +    uint64_t data = cpu_to_le64(wait->store_data);
>
> Maybe:
>
>   uint64_t data = le64_to_cpu(wait->store_data);
>
> ?


I should fix these too.


>


> > +
> > +    if (wait->reserved) {
> > +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf +
> s->cmdbuf_head);
> > +    }
> > +
> > +    if (wait->completion_store) {
> > +        if (dma_memory_write(&address_space_memory, addr, &data,
> > +            AMDVI_COMPLETION_DATA_SIZE))
> > +        {
>
> Left bracket is better moved upward to follow the coding style.
>

To fix.


>
> > +            trace_amdvi_completion_wait_fail(addr);
> > +        }
> > +    }
>
> Thanks,
>
> -- peterx
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09 12:52     ` David Kiarie
@ 2016-08-09 13:01       ` Valentine Sinitsyn
  2016-08-09 13:17         ` David Kiarie
  2016-08-10  2:08       ` Peter Xu
  1 sibling, 1 reply; 26+ messages in thread
From: Valentine Sinitsyn @ 2016-08-09 13:01 UTC (permalink / raw)
  To: David Kiarie, Peter Xu
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Eduardo Habkost,
	Michael S. Tsirkin

Hi all,

On 09.08.2016 17:52, David Kiarie wrote:
>
>
> On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com
> <mailto:peterx@redhat.com>> wrote:
>
>     On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
>
>     [...]
>
>     > +/* invalidate internal caches for devid */
>     > +typedef struct QEMU_PACKED {
>     > +#ifdef HOST_WORDS_BIGENDIAN
>     > +    uint64_t devid;                /* device to invalidate   */
>     > +    uint64_t reserved_1:44;
>     > +    uint64_t type:4;               /* command type           */
>     > +#else
>     > +    uint64_t devid;
>     > +    uint64_t reserved_1:44;
>     > +    uint64_t type:4;
>     > +#endif /* __BIG_ENDIAN_BITFIELD */
>
>     Guess you forgot to reverse the order of fields in one of above block.
>
>
> Yes, I forgot to reverse order of fields here.
>
>
>
>     [...]
>
>     > +/* load adddress translation info for devid into translation cache */
>     > +typedef struct QEMU_PACKED {
>     > +#ifdef HOST_WORDS_BIGENDIAN
>     > +    uint64_t type:4;          /* command type       */
>     > +    uint64_t reserved_2:8;
>     > +    uint64_t pasid_19_0:20;
>     > +    uint64_t pfcount_7_0:8;
>     > +    uint64_t reserved_1:8;
>     > +    uint64_t devid;           /* related devid      */
>     > +#else
>     > +    uint64_t devid;
>     > +    uint64_t reserved_1:8;
>     > +    uint64_t pfcount_7_0:8;
>     > +    uint64_t pasid_19_0:20;
>     > +    uint64_t reserved_2:8;
>     > +    uint64_t type:4;
>     > +#endif /* __BIG_ENDIAN_BITFIELD */
>
>     For this one, "devid" looks like a 16 bits field?
>
>
> Right. should be 16 bits.
>
>
>
>     [...]
>
>     > +/* issue a PCIe completion packet for devid */
>     > +typedef struct QEMU_PACKED {
>     > +#ifdef HOST_WORDS_BIGENDIAN
>     > +    uint32_t devid;               /* related devid      */
>     > +    uint32_t reserved_1;
>     > +#else
>     > +    uint32_t reserved_1;
>     > +    uint32_t devid;
>     > +#endif /* __BIG_ENDIAN_BITFIELD */
>
>     Here I am not sure we need this "#ifdef".
>
>
> There's an error here but it's not with the #ifdef but instead I have
> not set the right bit on the bitfields - for instance devid should be 16.
>
>
>
>     [...]
>
>     > +/* external write */
>     > +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
>     > +{
>     > +    uint16_t romask = lduw_le_p(&s->romask[addr]);
>     > +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
>     > +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
>     > +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
>
>     I think the above is problematic, e.g., what if we write 1 to one of
>     the romask while it's 0 originally? In that case, the RO bit will be
>     written to 1.
>
>     Maybe we need:
>
>       stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
>                                 (val & w1cmask));
>
>     Same question to the below two functions.
>
>
> Right. I was very determined to come up with my algo but failed horribly ;-)
>
>
>
>     > +}
>     > +
>     > +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
>     > +{
>     > +    uint32_t romask = ldl_le_p(&s->romask[addr]);
>     > +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
>     > +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
>     > +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
>     > +}
>     > +
>     > +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
>     > +{
>     > +    uint64_t romask = ldq_le_p(&s->romask[addr]);
>     > +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
>     > +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
>     > +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
>     > +}
>     > +
>     > +/* OR a 64-bit register with a 64-bit value */
>     > +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
>
>     Nit: This function name gives me an illusion that it's a write op, not
>     read. IMHO it'll be better we directly use amdvi_readq() for all the
>     callers of this function, which is more clear to me.
>
>     > +{
>     > +    return amdvi_readq(s, addr) | val;
>     > +}
>     > +
>     > +/* OR a 64-bit register with a 64-bit value storing result in the register */
>     > +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
>     > +{
>     > +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
>     > +}
>     > +
>     > +/* AND a 64-bit register with a 64-bit value storing result in the register */
>     > +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
>
>     Nit: the name is not matched with above:
>
>       amdvi_{or|and}assign[qw]
>
>     Though I would prefer:
>
>       amdvi_assign_[qw]_{or|and}
>
>
> Your naming sounds better.
>
>
>
>     [...]
>
>     > +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
>     > +{
>     > +    /* event logging not enabled */
>     > +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
>     > +        AMDVI_MMIO_STATUS_EVT_OVF)) {
>     > +        return;
>     > +    }
>     > +
>     > +    /* event log buffer full */
>     > +    if (s->evtlog_tail >= s->evtlog_len) {
>     > +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_OVF);
>     > +        /* generate interrupt */
>     > +        amdvi_generate_msi_interrupt(s);
>     > +        return;
>     > +    }
>     > +
>     > +    if (dma_memory_write(&address_space_memory, s->evtlog_len + s->evtlog_tail,
>     > +        &evt, AMDVI_EVENT_LEN)) {
>
>     Check with MEMTX_OK?
>
>
> I'm not sure what exactly you mean here.
>
>
>
>     [...]
>
>     > +/*
>     > + * AMDVi event structure
>     > + *    0:15   -> DeviceID
>     > + *    55:63  -> event type + miscellaneous info
>     > + *    64:127 -> related address
>     > + */
>     > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t addr,
>     > +                               uint16_t info)
>     > +{
>     > +    amdvi_setevent_bits(evt, devid, 0, 16);
>     > +    amdvi_setevent_bits(evt, info, 55, 8);
>     > +    amdvi_setevent_bits(evt, addr, 63, 64);
>                                           ^^
>                                     should here be 64?
>
>     Also, I am not sure whether we need this amdvi_setevent_bits() if it's
>     only used in this function. Though not a big problem for me.
>
>
> It's only used in this function but I actually wrote his mainly for
> future use. The idea is that various events encode totally different
> information while the above is an over-simplified version to encode
> information common to most events. In case an event wants to encode more
> information it would turn out much more easier.
>
>
>
>     > +}
>     > +/* log an error encountered page-walking
>
>     "during page-walking"
>
>
> "encountered page-walking"  sounds right to me. "page-walking" is a
> verb, in continuous tense, right ? how about I say "during hacking" ;-)
I'm a non-native, but: isn't "page-walking" a gerund here? Otherwise, 
"during hacking" sounds good to me.

I also got comments on wording sometimes. In this cases I assume my 
initial phrase was misleading, and try to re-phrase it.

Valentine

>
>
>     > + *
>     > + * @addr: virtual address in translation request
>     > + */
>     > +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
>     > +                             hwaddr addr, uint16_t info)
>     > +{
>     > +    uint64_t evt[4];
>     > +
>     > +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
>     > +    amdvi_encode_event(evt, devid, addr, info);
>     > +    amdvi_log_event(s, evt);
>     > +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
>     > +            PCI_STATUS_SIG_TARGET_ABORT);
>
>     Nit: maybe we can provide a function for setting this bit.
>
>
> I've actually being ignoring these since Qemu doesn't seem to care about
> them.
>
>
>
>     [...]
>
>     > +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
>     > +                               uint64_t gpa, IOMMUTLBEntry to_cache,
>     > +                               uint16_t domid)
>     > +{
>     > +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
>     > +    uint64_t *key = g_malloc(sizeof(key));
>     > +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
>     > +
>     > +    /* don't cache erroneous translations */
>     > +    if (to_cache.perm != IOMMU_NONE) {
>     > +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
>     > +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
>     > +
>     > +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
>     > +            trace_amdvi_iotlb_reset();
>
>     We'd better put this trace into amdvi_iotlb_reset().
>
>     > +            amdvi_iotlb_reset(s);
>     > +        }
>     > +
>     > +        entry->gfn = gfn;
>     > +        entry->domid = domid;
>     > +        entry->perms = to_cache.perm;
>     > +        entry->translated_addr = to_cache.translated_addr;
>     > +        entry->page_mask = to_cache.addr_mask;
>     > +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
>     > +        g_hash_table_replace(s->iotlb, key, entry);
>     > +    }
>     > +}
>     > +
>     > +static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait *wait)
>     > +{
>     > +    /* pad the last 3 bits */
>     > +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
>
>     Is this correct? IMO it should be:
>
>       hwaddr addr = le64_to_cpu(wait->store_addr) << 3;
>
>     > +    uint64_t data = cpu_to_le64(wait->store_data);
>
>     Maybe:
>
>       uint64_t data = le64_to_cpu(wait->store_data);
>
>     ?
>
>
> I should fix these too.
>
>
>
>
>
>     > +
>     > +    if (wait->reserved) {
>     > +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf + s->cmdbuf_head);
>     > +    }
>     > +
>     > +    if (wait->completion_store) {
>     > +        if (dma_memory_write(&address_space_memory, addr, &data,
>     > +            AMDVI_COMPLETION_DATA_SIZE))
>     > +        {
>
>     Left bracket is better moved upward to follow the coding style.
>
>
> To fix.
>
>
>
>     > +            trace_amdvi_completion_wait_fail(addr);
>     > +        }
>     > +    }
>
>     Thanks,
>
>     -- peterx
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09 13:01       ` Valentine Sinitsyn
@ 2016-08-09 13:17         ` David Kiarie
  0 siblings, 0 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-09 13:17 UTC (permalink / raw)
  To: Valentine Sinitsyn
  Cc: Peter Xu, QEMU Developers, rkrcmar, Jan Kiszka, Eduardo Habkost,
	Michael S. Tsirkin

On Tue, Aug 9, 2016 at 4:01 PM, Valentine Sinitsyn <
valentine.sinitsyn@gmail.com> wrote:

> Hi all,
>
> On 09.08.2016 17:52, David Kiarie wrote:
>
>>
>>
>> On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com
>> <mailto:peterx@redhat.com>> wrote:
>>
>>     On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
>>
>>     [...]
>>
>>     > +/* invalidate internal caches for devid */
>>     > +typedef struct QEMU_PACKED {
>>     > +#ifdef HOST_WORDS_BIGENDIAN
>>     > +    uint64_t devid;                /* device to invalidate   */
>>     > +    uint64_t reserved_1:44;
>>     > +    uint64_t type:4;               /* command type           */
>>     > +#else
>>     > +    uint64_t devid;
>>     > +    uint64_t reserved_1:44;
>>     > +    uint64_t type:4;
>>     > +#endif /* __BIG_ENDIAN_BITFIELD */
>>
>>     Guess you forgot to reverse the order of fields in one of above block.
>>
>>
>> Yes, I forgot to reverse order of fields here.
>>
>>
>>
>>     [...]
>>
>>     > +/* load adddress translation info for devid into translation cache
>> */
>>     > +typedef struct QEMU_PACKED {
>>     > +#ifdef HOST_WORDS_BIGENDIAN
>>     > +    uint64_t type:4;          /* command type       */
>>     > +    uint64_t reserved_2:8;
>>     > +    uint64_t pasid_19_0:20;
>>     > +    uint64_t pfcount_7_0:8;
>>     > +    uint64_t reserved_1:8;
>>     > +    uint64_t devid;           /* related devid      */
>>     > +#else
>>     > +    uint64_t devid;
>>     > +    uint64_t reserved_1:8;
>>     > +    uint64_t pfcount_7_0:8;
>>     > +    uint64_t pasid_19_0:20;
>>     > +    uint64_t reserved_2:8;
>>     > +    uint64_t type:4;
>>     > +#endif /* __BIG_ENDIAN_BITFIELD */
>>
>>     For this one, "devid" looks like a 16 bits field?
>>
>>
>> Right. should be 16 bits.
>>
>>
>>
>>     [...]
>>
>>     > +/* issue a PCIe completion packet for devid */
>>     > +typedef struct QEMU_PACKED {
>>     > +#ifdef HOST_WORDS_BIGENDIAN
>>     > +    uint32_t devid;               /* related devid      */
>>     > +    uint32_t reserved_1;
>>     > +#else
>>     > +    uint32_t reserved_1;
>>     > +    uint32_t devid;
>>     > +#endif /* __BIG_ENDIAN_BITFIELD */
>>
>>     Here I am not sure we need this "#ifdef".
>>
>>
>> There's an error here but it's not with the #ifdef but instead I have
>> not set the right bit on the bitfields - for instance devid should be 16.
>>
>>
>>
>>     [...]
>>
>>     > +/* external write */
>>     > +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
>>     > +{
>>     > +    uint16_t romask = lduw_le_p(&s->romask[addr]);
>>     > +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
>>     > +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
>>     > +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
>> oldval));
>>
>>     I think the above is problematic, e.g., what if we write 1 to one of
>>     the romask while it's 0 originally? In that case, the RO bit will be
>>     written to 1.
>>
>>     Maybe we need:
>>
>>       stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
>>                                 (val & w1cmask));
>>
>>     Same question to the below two functions.
>>
>>
>> Right. I was very determined to come up with my algo but failed horribly
>> ;-)
>>
>>
>>
>>     > +}
>>     > +
>>     > +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
>>     > +{
>>     > +    uint32_t romask = ldl_le_p(&s->romask[addr]);
>>     > +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
>>     > +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
>>     > +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
>> oldval));
>>     > +}
>>     > +
>>     > +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
>>     > +{
>>     > +    uint64_t romask = ldq_le_p(&s->romask[addr]);
>>     > +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
>>     > +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
>>     > +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
>> oldval));
>>     > +}
>>     > +
>>     > +/* OR a 64-bit register with a 64-bit value */
>>     > +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
>>
>>     Nit: This function name gives me an illusion that it's a write op, not
>>     read. IMHO it'll be better we directly use amdvi_readq() for all the
>>     callers of this function, which is more clear to me.
>>
>>     > +{
>>     > +    return amdvi_readq(s, addr) | val;
>>     > +}
>>     > +
>>     > +/* OR a 64-bit register with a 64-bit value storing result in the
>> register */
>>     > +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t
>> val)
>>     > +{
>>     > +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
>>     > +}
>>     > +
>>     > +/* AND a 64-bit register with a 64-bit value storing result in the
>> register */
>>     > +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t
>> val)
>>
>>     Nit: the name is not matched with above:
>>
>>       amdvi_{or|and}assign[qw]
>>
>>     Though I would prefer:
>>
>>       amdvi_assign_[qw]_{or|and}
>>
>>
>> Your naming sounds better.
>>
>>
>>
>>     [...]
>>
>>     > +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
>>     > +{
>>     > +    /* event logging not enabled */
>>     > +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
>>     > +        AMDVI_MMIO_STATUS_EVT_OVF)) {
>>     > +        return;
>>     > +    }
>>     > +
>>     > +    /* event log buffer full */
>>     > +    if (s->evtlog_tail >= s->evtlog_len) {
>>     > +        amdvi_orassignq(s, AMDVI_MMIO_STATUS,
>> AMDVI_MMIO_STATUS_EVT_OVF);
>>     > +        /* generate interrupt */
>>     > +        amdvi_generate_msi_interrupt(s);
>>     > +        return;
>>     > +    }
>>     > +
>>     > +    if (dma_memory_write(&address_space_memory, s->evtlog_len +
>> s->evtlog_tail,
>>     > +        &evt, AMDVI_EVENT_LEN)) {
>>
>>     Check with MEMTX_OK?
>>
>>
>> I'm not sure what exactly you mean here.
>>
>>
>>
>>     [...]
>>
>>     > +/*
>>     > + * AMDVi event structure
>>     > + *    0:15   -> DeviceID
>>     > + *    55:63  -> event type + miscellaneous info
>>     > + *    64:127 -> related address
>>     > + */
>>     > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid,
>> uint64_t addr,
>>     > +                               uint16_t info)
>>     > +{
>>     > +    amdvi_setevent_bits(evt, devid, 0, 16);
>>     > +    amdvi_setevent_bits(evt, info, 55, 8);
>>     > +    amdvi_setevent_bits(evt, addr, 63, 64);
>>                                           ^^
>>                                     should here be 64?
>>
>>     Also, I am not sure whether we need this amdvi_setevent_bits() if it's
>>     only used in this function. Though not a big problem for me.
>>
>>
>> It's only used in this function but I actually wrote his mainly for
>> future use. The idea is that various events encode totally different
>> information while the above is an over-simplified version to encode
>> information common to most events. In case an event wants to encode more
>> information it would turn out much more easier.
>>
>>
>>
>>     > +}
>>     > +/* log an error encountered page-walking
>>
>>     "during page-walking"
>>
>>
>> "encountered page-walking"  sounds right to me. "page-walking" is a
>> verb, in continuous tense, right ? how about I say "during hacking" ;-)
>>
> I'm a non-native, but: isn't "page-walking" a gerund here? Otherwise,
> "during hacking" sounds good to me.
>

"during hacking"  sounds redundant - why should I repeat that an action is
currently happening by using 'during' while that's already clear since it's
in continuous tense ? page-walking sounds like a proper verb IMHO.


> I also got comments on wording sometimes. In this cases I assume my
> initial phrase was misleading, and try to re-phrase it.
>
> Valentine
>
>
>
>>
>>     > + *
>>     > + * @addr: virtual address in translation request
>>     > + */
>>     > +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
>>     > +                             hwaddr addr, uint16_t info)
>>     > +{
>>     > +    uint64_t evt[4];
>>     > +
>>     > +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
>>     > +    amdvi_encode_event(evt, devid, addr, info);
>>     > +    amdvi_log_event(s, evt);
>>     > +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
>>     > +            PCI_STATUS_SIG_TARGET_ABORT);
>>
>>     Nit: maybe we can provide a function for setting this bit.
>>
>>
>> I've actually being ignoring these since Qemu doesn't seem to care about
>> them.
>>
>>
>>
>>     [...]
>>
>>     > +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
>>     > +                               uint64_t gpa, IOMMUTLBEntry
>> to_cache,
>>     > +                               uint16_t domid)
>>     > +{
>>     > +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
>>     > +    uint64_t *key = g_malloc(sizeof(key));
>>     > +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
>>     > +
>>     > +    /* don't cache erroneous translations */
>>     > +    if (to_cache.perm != IOMMU_NONE) {
>>     > +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid),
>> PCI_SLOT(devid),
>>     > +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
>>     > +
>>     > +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
>>     > +            trace_amdvi_iotlb_reset();
>>
>>     We'd better put this trace into amdvi_iotlb_reset().
>>
>>     > +            amdvi_iotlb_reset(s);
>>     > +        }
>>     > +
>>     > +        entry->gfn = gfn;
>>     > +        entry->domid = domid;
>>     > +        entry->perms = to_cache.perm;
>>     > +        entry->translated_addr = to_cache.translated_addr;
>>     > +        entry->page_mask = to_cache.addr_mask;
>>     > +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
>>     > +        g_hash_table_replace(s->iotlb, key, entry);
>>     > +    }
>>     > +}
>>     > +
>>     > +static void amdvi_completion_wait(AMDVIState *s,
>> CMDCompletionWait *wait)
>>     > +{
>>     > +    /* pad the last 3 bits */
>>     > +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
>>
>>     Is this correct? IMO it should be:
>>
>>       hwaddr addr = le64_to_cpu(wait->store_addr) << 3;
>>
>>     > +    uint64_t data = cpu_to_le64(wait->store_data);
>>
>>     Maybe:
>>
>>       uint64_t data = le64_to_cpu(wait->store_data);
>>
>>     ?
>>
>>
>> I should fix these too.
>>
>>
>>
>>
>>
>>     > +
>>     > +    if (wait->reserved) {
>>     > +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf +
>> s->cmdbuf_head);
>>     > +    }
>>     > +
>>     > +    if (wait->completion_store) {
>>     > +        if (dma_memory_write(&address_space_memory, addr, &data,
>>     > +            AMDVI_COMPLETION_DATA_SIZE))
>>     > +        {
>>
>>     Left bracket is better moved upward to follow the coding style.
>>
>>
>> To fix.
>>
>>
>>
>>     > +            trace_amdvi_completion_wait_fail(addr);
>>     > +        }
>>     > +    }
>>
>>     Thanks,
>>
>>     -- peterx
>>
>>
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09  5:44   ` Peter Xu
  2016-08-09 12:07     ` David Kiarie
  2016-08-09 12:52     ` David Kiarie
@ 2016-08-09 17:46     ` David Kiarie
  2016-08-10  1:49       ` Peter Xu
  2 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-09 17:46 UTC (permalink / raw)
  To: Peter Xu
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com> wrote:

> On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
>
> [...]
>
> > +/* external write */
> > +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> > +{
> > +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> > +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> > +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> > +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> oldval));
>
> I think the above is problematic, e.g., what if we write 1 to one of
> the romask while it's 0 originally? In that case, the RO bit will be
> written to 1.
>
> Maybe we need:
>
>   stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
>                             (val & w1cmask));
>
> Same question to the below two functions.
>

It seems to me you're not taking care of w1/c bits correctly ?

I think:

stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
                           ~ (val & w1cmask));
should suffice.


> > +}
> > +/*
> > + * AMDVi event structure
> > + *    0:15   -> DeviceID
> > + *    55:63  -> event type + miscellaneous info
> > + *    64:127 -> related address
> > + */
> > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t
> addr,
> > +                               uint16_t info)
> > +{
> > +    amdvi_setevent_bits(evt, devid, 0, 16);
> > +    amdvi_setevent_bits(evt, info, 55, 8);
> > +    amdvi_setevent_bits(evt, addr, 63, 64);
>                                       ^^
>                                 should here be 64?
>

The code is correct but the comment above is misleading.


>
> Also, I am not sure whether we need this amdvi_setevent_bits() if it's
> only used in this function. Though not a big problem for me.
>
> > +}
> > +/* log an error encountered page-walking
>
> Thanks,
>
> -- peterx
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09 17:46     ` David Kiarie
@ 2016-08-10  1:49       ` Peter Xu
  0 siblings, 0 replies; 26+ messages in thread
From: Peter Xu @ 2016-08-10  1:49 UTC (permalink / raw)
  To: David Kiarie
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Tue, Aug 09, 2016 at 08:46:09PM +0300, David Kiarie wrote:
> On Tue, Aug 9, 2016 at 8:44 AM, Peter Xu <peterx@redhat.com> wrote:
> 
> > On Tue, Aug 02, 2016 at 11:39:06AM +0300, David Kiarie wrote:
> >
> > [...]
> >
> > > +/* external write */
> > > +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> > > +{
> > > +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> > > +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> > > +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> > > +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask &
> > oldval));
> >
> > I think the above is problematic, e.g., what if we write 1 to one of
> > the romask while it's 0 originally? In that case, the RO bit will be
> > written to 1.
> >
> > Maybe we need:
> >
> >   stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
> >                             (val & w1cmask));
> >
> > Same question to the below two functions.
> >
> 
> It seems to me you're not taking care of w1/c bits correctly ?
> 
> I think:
> 
> stw_le_p(&s->mmior[addr], ((oldval & romask) | (val & ~romask)) & \
>                            ~ (val & w1cmask));
> should suffice.

Right. :)

-- peterx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-09 12:52     ` David Kiarie
  2016-08-09 13:01       ` Valentine Sinitsyn
@ 2016-08-10  2:08       ` Peter Xu
  2016-08-10  6:30         ` David Kiarie
  1 sibling, 1 reply; 26+ messages in thread
From: Peter Xu @ 2016-08-10  2:08 UTC (permalink / raw)
  To: David Kiarie
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Tue, Aug 09, 2016 at 03:52:07PM +0300, David Kiarie wrote:

[...]

> > > +    if (dma_memory_write(&address_space_memory, s->evtlog_len +
> > s->evtlog_tail,
> > > +        &evt, AMDVI_EVENT_LEN)) {
> >
> > Check with MEMTX_OK?
> >
> 
> I'm not sure what exactly you mean here.

I mean we have return code macros for these memory operations, like
MEMTX_OK/MEMTX_ERROR/... However please feel free to ignore this
comment since I see merely no place in current QEMU code that is doing
the checking at all. Your call.

> 
> 
> >
> > [...]
> >
> > > +/*
> > > + * AMDVi event structure
> > > + *    0:15   -> DeviceID
> > > + *    55:63  -> event type + miscellaneous info
> > > + *    64:127 -> related address
> > > + */
> > > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t
> > addr,
> > > +                               uint16_t info)
> > > +{
> > > +    amdvi_setevent_bits(evt, devid, 0, 16);
> > > +    amdvi_setevent_bits(evt, info, 55, 8);
> > > +    amdvi_setevent_bits(evt, addr, 63, 64);
> >                                       ^^
> >                                 should here be 64?
> >
> > Also, I am not sure whether we need this amdvi_setevent_bits() if it's
> > only used in this function. Though not a big problem for me.
> >
> 
> It's only used in this function but I actually wrote his mainly for future
> use. The idea is that various events encode totally different information
> while the above is an over-simplified version to encode information common
> to most events. In case an event wants to encode more information it would
> turn out much more easier.

Yes my above comment is "Nit" for sure. :) Please have it if you like.

> 
> 
> >
> > > +}
> > > +/* log an error encountered page-walking
> >
> > "during page-walking"
> >
> 
> "encountered page-walking"  sounds right to me. "page-walking" is a verb,
> in continuous tense, right ? how about I say "during hacking" ;-)

I am not that good at English. I pointed that out since I "suspect"
that is wrong (in case that would help). But if you are confident
enough, please just ignore. I'm mostly ok with all comments as long as
they are "understandable".

> 
> 
> > > + *
> > > + * @addr: virtual address in translation request
> > > + */
> > > +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> > > +                             hwaddr addr, uint16_t info)
> > > +{
> > > +    uint64_t evt[4];
> > > +
> > > +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> > > +    amdvi_encode_event(evt, devid, addr, info);
> > > +    amdvi_log_event(s, evt);
> > > +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> > > +            PCI_STATUS_SIG_TARGET_ABORT);
> >
> > Nit: maybe we can provide a function for setting this bit.
> >
> 
> I've actually being ignoring these since Qemu doesn't seem to care about
> them.
> 

Sorry I failed to understand your sentence.

-- peterx

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-10  2:08       ` Peter Xu
@ 2016-08-10  6:30         ` David Kiarie
  0 siblings, 0 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-10  6:30 UTC (permalink / raw)
  To: Peter Xu
  Cc: QEMU Developers, rkrcmar, Jan Kiszka, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin

On Wed, Aug 10, 2016 at 5:08 AM, Peter Xu <peterx@redhat.com> wrote:

> On Tue, Aug 09, 2016 at 03:52:07PM +0300, David Kiarie wrote:
>
> [...]
>
> > > > +    if (dma_memory_write(&address_space_memory, s->evtlog_len +
> > > s->evtlog_tail,
> > > > +        &evt, AMDVI_EVENT_LEN)) {
> > >
> > > Check with MEMTX_OK?
> > >
> >
> > I'm not sure what exactly you mean here.
>
> I mean we have return code macros for these memory operations, like
> MEMTX_OK/MEMTX_ERROR/... However please feel free to ignore this
> comment since I see merely no place in current QEMU code that is doing
> the checking at all. Your call.
>
> >
> >
> > >
> > > [...]
> > >
> > > > +/*
> > > > + * AMDVi event structure
> > > > + *    0:15   -> DeviceID
> > > > + *    55:63  -> event type + miscellaneous info
> > > > + *    64:127 -> related address
> > > > + */
> > > > +static void amdvi_encode_event(uint64_t *evt, uint16_t devid,
> uint64_t
> > > addr,
> > > > +                               uint16_t info)
> > > > +{
> > > > +    amdvi_setevent_bits(evt, devid, 0, 16);
> > > > +    amdvi_setevent_bits(evt, info, 55, 8);
> > > > +    amdvi_setevent_bits(evt, addr, 63, 64);
> > >                                       ^^
> > >                                 should here be 64?
> > >
> > > Also, I am not sure whether we need this amdvi_setevent_bits() if it's
> > > only used in this function. Though not a big problem for me.
> > >
> >
> > It's only used in this function but I actually wrote his mainly for
> future
> > use. The idea is that various events encode totally different information
> > while the above is an over-simplified version to encode information
> common
> > to most events. In case an event wants to encode more information it
> would
> > turn out much more easier.
>
> Yes my above comment is "Nit" for sure. :) Please have it if you like.
>
> >
> >
> > >
> > > > +}
> > > > +/* log an error encountered page-walking
> > >
> > > "during page-walking"
> > >
> >
> > "encountered page-walking"  sounds right to me. "page-walking" is a verb,
> > in continuous tense, right ? how about I say "during hacking" ;-)
>
> I am not that good at English. I pointed that out since I "suspect"
> that is wrong (in case that would help). But if you are confident
> enough, please just ignore. I'm mostly ok with all comments as long as
> they are "understandable".
>

I changed that to  "encountered during a page walk" - I'm sure no one has a
problem with that :-)


> >
> >
> > > > + *
> > > > + * @addr: virtual address in translation request
> > > > + */
> > > > +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> > > > +                             hwaddr addr, uint16_t info)
> > > > +{
> > > > +    uint64_t evt[4];
> > > > +
> > > > +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> > > > +    amdvi_encode_event(evt, devid, addr, info);
> > > > +    amdvi_log_event(s, evt);
> > > > +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> > > > +            PCI_STATUS_SIG_TARGET_ABORT);
> > >
> > > Nit: maybe we can provide a function for setting this bit.
> > >
> >
> > I've actually being ignoring these since Qemu doesn't seem to care about
> > them.
> >
>
> Sorry I failed to understand your sentence.
>

I mean Qemu PCI bus doesn't abort any transactions regardless of whether a
device has set abort status.


> -- peterx
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-02  8:39 ` [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU David Kiarie
  2016-08-09  5:44   ` Peter Xu
@ 2016-08-11  8:23   ` Valentine Sinitsyn
  2016-08-11  8:32     ` David Kiarie
  2016-08-12 19:10   ` Valentine Sinitsyn
  2 siblings, 1 reply; 26+ messages in thread
From: Valentine Sinitsyn @ 2016-08-11  8:23 UTC (permalink / raw)
  To: David Kiarie, qemu-devel; +Cc: peterx, rkrcmar, jan.kiszka, ehabkost, mst

Hi,

On 02.08.2016 13:39, David Kiarie wrote:
> Add AMD IOMMU emulaton to Qemu in addition to Intel IOMMU.
> The IOMMU does basic translation, error checking and has a
> minimal IOTLB implementation. This IOMMU bypassed the need
> for target aborts by responding with IOMMU_NONE access rights
> and exempts the region 0xfee00000-0xfeefffff from translation
> as it is the q35 interrupt region.
>
> We advertise features that are not yet implemented to please
> the Linux IOMMU driver.
>
> IOTLB aims at implementing commands on real IOMMUs which is
> essential for debugging and may not offer any performance
> benefits
>
> Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
> ---
>  hw/i386/Makefile.objs |    1 +
>  hw/i386/amd_iommu.c   | 1397 +++++++++++++++++++++++++++++++++++++++++++++++++
>  hw/i386/amd_iommu.h   |  390 ++++++++++++++
>  hw/i386/trace-events  |    7 +
>  4 files changed, 1795 insertions(+)
>  create mode 100644 hw/i386/amd_iommu.c
>  create mode 100644 hw/i386/amd_iommu.h
>
> diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
> index 90e94ff..909ead6 100644
> --- a/hw/i386/Makefile.objs
> +++ b/hw/i386/Makefile.objs
> @@ -3,6 +3,7 @@ obj-y += multiboot.o
>  obj-y += pc.o pc_piix.o pc_q35.o
>  obj-y += pc_sysfw.o
>  obj-y += x86-iommu.o intel_iommu.o
> +obj-y += amd_iommu.o
>  obj-$(CONFIG_XEN) += ../xenpv/ xen/
>
>  obj-y += kvmvapic.o
> diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
> new file mode 100644
> index 0000000..7b64dd7
> --- /dev/null
> +++ b/hw/i386/amd_iommu.c
> @@ -0,0 +1,1397 @@
> +/*
> + * QEMU emulation of AMD IOMMU (AMD-Vi)
> + *
> + * Copyright (C) 2011 Eduard - Gabriel Munteanu
> + * Copyright (C) 2015 David Kiarie, <davidkiarie4@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> +
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, see <http://www.gnu.org/licenses/>.
> + *
> + * Cache implementation inspired by hw/i386/intel_iommu.c
> + *
> + */
> +#include "qemu/osdep.h"
> +#include <math.h>
> +#include "hw/pci/msi.h"
> +#include "hw/i386/pc.h"
> +#include "hw/i386/amd_iommu.h"
> +#include "hw/pci/pci_bus.h"
> +#include "trace.h"
> +
> +/* used AMD-Vi MMIO registers */
> +const char *amdvi_mmio_low[] = {
> +    "AMDVI_MMIO_DEVTAB_BASE",
> +    "AMDVI_MMIO_CMDBUF_BASE",
> +    "AMDVI_MMIO_EVTLOG_BASE",
> +    "AMDVI_MMIO_CONTROL",
> +    "AMDVI_MMIO_EXCL_BASE",
> +    "AMDVI_MMIO_EXCL_LIMIT",
> +    "AMDVI_MMIO_EXT_FEATURES",
> +    "AMDVI_MMIO_PPR_BASE",
> +    "UNHANDLED"
> +};
> +const char *amdvi_mmio_high[] = {
> +    "AMDVI_MMIO_COMMAND_HEAD",
> +    "AMDVI_MMIO_COMMAND_TAIL",
> +    "AMDVI_MMIO_EVTLOG_HEAD",
> +    "AMDVI_MMIO_EVTLOG_TAIL",
> +    "AMDVI_MMIO_STATUS",
> +    "AMDVI_MMIO_PPR_HEAD",
> +    "AMDVI_MMIO_PPR_TAIL",
> +    "UNHANDLED"
> +};
> +typedef struct AMDVIAddressSpace {
> +    uint8_t bus_num;            /* bus number                           */
> +    uint8_t devfn;              /* device function                      */
> +    AMDVIState *iommu_state;    /* AMDVI - one per machine              */
> +    MemoryRegion iommu;         /* Device's address translation region  */
> +    MemoryRegion iommu_ir;      /* Device's interrupt remapping region  */
> +    AddressSpace as;            /* device's corresponding address space */
> +} AMDVIAddressSpace;
> +
> +/* AMDVI cache entry */
> +typedef struct AMDVIIOTLBEntry {
> +    uint64_t gfn;               /* guest frame number  */
> +    uint16_t domid;             /* assigned domain id  */
> +    uint16_t devid;             /* device owning entry */
> +    uint64_t perms;             /* access permissions  */
> +    uint64_t translated_addr;   /* translated address  */
> +    uint64_t page_mask;         /* physical page size  */
> +} AMDVIIOTLBEntry;
> +
> +/* serialize IOMMU command processing */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;               /* command type           */
> +    uint64_t reserved:8;
> +    uint64_t store_addr:49;        /* addr to write          */
> +    uint64_t completion_flush:1;   /* allow more executions  */
> +    uint64_t completion_int:1;     /* set MMIOWAITINT        */
> +    uint64_t completion_store:1;   /* write data to address  */
> +#else
> +    uint64_t completion_store:1;
> +    uint64_t completion_int:1;
> +    uint64_t completion_flush:1;
> +    uint64_t store_addr:49;
> +    uint64_t reserved:8;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t store_data;           /* data to write          */
> +} CMDCompletionWait;
> +
> +/* invalidate internal caches for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t devid;                /* device to invalidate   */
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;               /* command type           */
> +#else
> +    uint64_t devid;
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t reserved_2;
> +} CMDInvalDevEntry;
> +
> +/* invalidate a range of entries in IOMMU translation cache for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;               /* command type           */
> +    uint64_t reserved_2:12
> +    uint64_t domid:16;             /* domain to inval for    */
> +    uint64_t reserved_1:12;
> +    uint64_t pasid:20;
> +#else
> +    uint64_t pasid:20;
> +    uint64_t reserved_1:12;
> +    uint64_t domid:16;
> +    uint64_t reserved_2:12;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t address:51;          /* address to invalidate   */
> +    uint64_t reserved_3:10;
> +    uint64_t guest:1;             /* G/N invalidation        */
> +    uint64_t pde:1;               /* invalidate cached ptes  */
> +    uint64_t size:1               /* size of invalidation    */
> +#else
> +    uint64_t size:1;
> +    uint64_t pde:1;
> +    uint64_t guest:1;
> +    uint64_t reserved_3:10;
> +    uint64_t address:51;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDInvalIommuPages;
> +
> +/* inval specified address for devid from remote IOTLB */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;            /* command type        */
> +    uint64_t pasid_19_6:4;
> +    uint64_t pasid_7_0:8;
> +    uint64_t queuid:16;
> +    uint64_t maxpend:8;
> +    uint64_t pasid_15_8;
> +    uint64_t devid:16;         /* related devid        */
> +#else
> +    uint64_t devid:16;
> +    uint64_t pasid_15_8:8;
> +    uint64_t maxpend:8;
> +    uint64_t queuid:16;
> +    uint64_t pasid_7_0:8;
> +    uint64_t pasid_19_6:4;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t address:52;       /* invalidate addr      */
> +    uint64_t reserved_2:9;
> +    uint64_t guest:1;          /* G/N invalidate       */
> +    uint64_t reserved_1:1;
> +    uint64_t size:1            /* size of invalidation */
> +#else
> +    uint64_t size:1;
> +    uint64_t reserved_1:1;
> +    uint64_t guest:1;
> +    uint64_t reserved_2:9;
> +    uint64_t address:52;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDInvalIOTLBPages;
> +
> +/* invalidate all cached interrupt info for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;          /* command type        */
> +    uint64_t reserved_1:44;
> +    uint64_t devid:16;        /* related devid       */
> +#else
> +    uint64_t devid:16;
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t reserved_2;
> +} CMDInvalIntrTable;
> +
> +/* load adddress translation info for devid into translation cache */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;          /* command type       */
> +    uint64_t reserved_2:8;
> +    uint64_t pasid_19_0:20;
> +    uint64_t pfcount_7_0:8;
> +    uint64_t reserved_1:8;
> +    uint64_t devid;           /* related devid      */
> +#else
> +    uint64_t devid;
> +    uint64_t reserved_1:8;
> +    uint64_t pfcount_7_0:8;
> +    uint64_t pasid_19_0:20;
> +    uint64_t reserved_2:8;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t address:52;     /* invalidate address       */
> +    uint64_t reserved_5:7;
> +    uint64_t inval:1;        /* inval matching entries   */
> +    uint64_t reserved_4:1;
> +    uint64_t guest:1;        /* G/N invalidate           */
> +    uint64_t reserved_3:1;
> +    uint64_t size:1;         /* prefetched page size     */
> +#else
> +    uint64_t size:1;
> +    uint64_t reserved_3:1;
> +    uint64_t guest:1;
> +    uint64_t reserved_4:1;
> +    uint64_t inval:1;
> +    uint64_t reserved_5:7;
> +    uint64_t address:52;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDPrefetchPages;
> +
> +/* clear all address translation/interrupt remapping caches */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;              /* command type       */
> +    uint64_t reserved_1:60;
> +#else
> +    uint64_t reserved_1:60;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t reserved_2;
> +} CMDInvalIommuAll;
> +
> +/* issue a PCIe completion packet for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t devid;               /* related devid      */
> +    uint32_t reserved_1;
> +#else
> +    uint32_t reserved_1;
> +    uint32_t devid;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t type:4;              /* command type       */
> +    uint32_t reserved_2:8;
> +    uint32_t pasid_19_0:20
> +#else
> +    uint32_t pasid_19_0:20;
> +    uint32_t reserved_2:8;
> +    uint32_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t reserved_3:29;
> +    uint32_t guest:1;
> +    uint32_t reserved_4:2;
> +#else
> +    uint32_t reserved_3:2;
> +    uint32_t guest:1;
> +    uint32_t reserved_4:29;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t reserved_5:16;
> +    uint32_t completion_tag:16    /* PCIe PRI informatin */
> +#else
> +    uint32_t completion_tag:16;
> +    uint32_t reserved_5:16;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDCompletePPR;
> +
> +/* configure MMIO registers at startup/reset */
> +static void amdvi_set_quad(AMDVIState *s, hwaddr addr, uint64_t val,
> +                           uint64_t romask, uint64_t w1cmask)
> +{
> +    stq_le_p(&s->mmior[addr], val);
> +    stq_le_p(&s->romask[addr], romask);
> +    stq_le_p(&s->w1cmask[addr], w1cmask);
> +}
> +
> +static uint16_t amdvi_readw(AMDVIState *s, hwaddr addr)
> +{
> +    return lduw_le_p(&s->mmior[addr]);
> +}
> +
> +static uint32_t amdvi_readl(AMDVIState *s, hwaddr addr)
> +{
> +    return ldl_le_p(&s->mmior[addr]);
> +}
> +
> +static uint64_t amdvi_readq(AMDVIState *s, hwaddr addr)
> +{
> +    return ldq_le_p(&s->mmior[addr]);
> +}
> +
> +/* internal write */
> +static void amdvi_writeq_raw(AMDVIState *s, uint64_t val, hwaddr addr)
> +{
> +    stq_le_p(&s->mmior[addr], val);
> +}
> +
> +/* external write */
> +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> +{
> +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
> +{
> +    uint32_t romask = ldl_le_p(&s->romask[addr]);
> +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
> +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
> +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    uint64_t romask = ldq_le_p(&s->romask[addr]);
> +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
> +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
> +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +/* OR a 64-bit register with a 64-bit value */
> +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    return amdvi_readq(s, addr) | val;
> +}
> +
> +/* OR a 64-bit register with a 64-bit value storing result in the register */
> +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
> +}
> +
> +/* AND a 64-bit register with a 64-bit value storing result in the register */
> +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +   amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) & val);
> +}
> +
> +static void amdvi_generate_msi_interrupt(AMDVIState *s)
> +{
> +    MSIMessage msg;
> +    if (msi_enabled(&s->pci.dev)) {
> +        msg = msi_get_message(&s->pci.dev, 0);
> +        address_space_stl_le(&address_space_memory, msg.address, msg.data,
> +                         MEMTXATTRS_UNSPECIFIED, NULL);
Nit: don't you want to set the requester ID to the IOMMU's BDF here?

Valentine

> +    }
> +}
> +
> +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
> +{
> +    /* event logging not enabled */
> +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
> +        AMDVI_MMIO_STATUS_EVT_OVF)) {
> +        return;
> +    }
> +
> +    /* event log buffer full */
> +    if (s->evtlog_tail >= s->evtlog_len) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_OVF);
> +        /* generate interrupt */
> +        amdvi_generate_msi_interrupt(s);
> +        return;
> +    }
> +
> +    if (dma_memory_write(&address_space_memory, s->evtlog_len + s->evtlog_tail,
> +        &evt, AMDVI_EVENT_LEN)) {
> +        trace_amdvi_evntlog_fail(s->evtlog, s->evtlog_tail);
> +    }
> +
> +    s->evtlog_tail += AMDVI_EVENT_LEN;
> +    amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
> +    amdvi_generate_msi_interrupt(s);
> +}
> +
> +static void amdvi_setevent_bits(uint64_t *buffer, uint64_t value, int start,
> +                                int length)
> +{
> +    int index = start / 64, bitpos = start % 64;
> +    uint64_t mask = ((1 << length) - 1) << bitpos;
> +    buffer[index] &= ~mask;
> +    buffer[index] |= (value << bitpos) & mask;
> +}
> +/*
> + * AMDVi event structure
> + *    0:15   -> DeviceID
> + *    55:63  -> event type + miscellaneous info
> + *    64:127 -> related address
> + */
> +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t addr,
> +                               uint16_t info)
> +{
> +    amdvi_setevent_bits(evt, devid, 0, 16);
> +    amdvi_setevent_bits(evt, info, 55, 8);
> +    amdvi_setevent_bits(evt, addr, 63, 64);
> +}
> +/* log an error encountered page-walking
> + *
> + * @addr: virtual address in translation request
> + */
> +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> +                             hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +/*
> + * log a master abort accessing device table
> + *  @devtab : address of device table entry
> + *  @info : error flags
> + */
> +static void amdvi_log_devtab_error(AMDVIState *s, uint16_t devid,
> +                                   hwaddr devtab, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_DEV_TAB_HW_ERROR;
> +
> +    amdvi_encode_event(evt, devid, devtab, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +
> +/* log an event trying to access command buffer
> + *   @addr : address that couldn't be accessed
> + */
> +static void amdvi_log_command_error(AMDVIState *s, hwaddr addr)
> +{
> +    uint64_t evt[4], info = AMDVI_EVENT_COMMAND_HW_ERROR;
> +
> +    amdvi_encode_event(evt, 0, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +
> +/* log an illegal comand event
> + *   @addr : address of illegal command
> + */
> +static void amdvi_log_illegalcom_error(AMDVIState *s, uint16_t info,
> +                                       hwaddr addr)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_ILLEGAL_COMMAND_ERROR;
> +    amdvi_encode_event(evt, 0, addr, info);
> +    amdvi_log_event(s, evt);
> +}
> +
> +/* log an error accessing device table
> + *
> + *  @devid : device owning the table entry
> + *  @devtab : address of device table entry
> + *  @info : error flags
> + */
> +static void amdvi_log_illegaldevtab_error(AMDVIState *s, uint16_t devid,
> +                                          hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +}
> +
> +/* log an error accessing a PTE entry
> + * @addr : address that couldn't be accessed
> + */
> +static void amdvi_log_pagetab_error(AMDVIState *s, uint16_t devid,
> +                                    hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_PAGE_TAB_HW_ERROR;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +             PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +
> +static gboolean amdvi_uint64_equal(gconstpointer v1, gconstpointer v2)
> +{
> +    return *((const uint64_t *)v1) == *((const uint64_t *)v2);
> +}
> +
> +static guint amdvi_uint64_hash(gconstpointer v)
> +{
> +    return (guint)*(const uint64_t *)v;
> +}
> +
> +static AMDVIIOTLBEntry *amdvi_iotlb_lookup(AMDVIState *s, hwaddr addr,
> +                                           uint64_t devid)
> +{
> +    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
> +                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +    return g_hash_table_lookup(s->iotlb, &key);
> +}
> +
> +static void amdvi_iotlb_reset(AMDVIState *s)
> +{
> +    assert(s->iotlb);
> +    g_hash_table_remove_all(s->iotlb);
> +}
> +
> +static gboolean amdvi_iotlb_remove_by_devid(gpointer key, gpointer value,
> +                                            gpointer user_data)
> +{
> +    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
> +    uint16_t devid = *(uint16_t *)user_data;
> +    return entry->devid == devid;
> +}
> +
> +static void amdvi_iotlb_remove_page(AMDVIState *s, hwaddr addr,
> +                                    uint64_t devid)
> +{
> +    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
> +                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +    g_hash_table_remove(s->iotlb, &key);
> +}
> +
> +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
> +                               uint64_t gpa, IOMMUTLBEntry to_cache,
> +                               uint16_t domid)
> +{
> +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
> +    uint64_t *key = g_malloc(sizeof(key));
> +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
> +
> +    /* don't cache erroneous translations */
> +    if (to_cache.perm != IOMMU_NONE) {
> +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
> +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
> +
> +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
> +            trace_amdvi_iotlb_reset();
> +            amdvi_iotlb_reset(s);
> +        }
> +
> +        entry->gfn = gfn;
> +        entry->domid = domid;
> +        entry->perms = to_cache.perm;
> +        entry->translated_addr = to_cache.translated_addr;
> +        entry->page_mask = to_cache.addr_mask;
> +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +        g_hash_table_replace(s->iotlb, key, entry);
> +    }
> +}
> +
> +static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait *wait)
> +{
> +    /* pad the last 3 bits */
> +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
> +    uint64_t data = cpu_to_le64(wait->store_data);
> +
> +    if (wait->reserved) {
> +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    if (wait->completion_store) {
> +        if (dma_memory_write(&address_space_memory, addr, &data,
> +            AMDVI_COMPLETION_DATA_SIZE))
> +        {
> +            trace_amdvi_completion_wait_fail(addr);
> +        }
> +    }
> +
> +    /* set completion interrupt */
> +    if (wait->completion_int) {
> +        amdvi_orq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
> +        /* generate interrupt */
> +        amdvi_generate_msi_interrupt(s);
> +    }
> +
> +    trace_amdvi_completion_wait(addr, data);
> +}
> +
> +/* log error without aborting since linux seems to be using reserved bits */
> +static void amdvi_inval_devtab_entry(AMDVIState *s, void *cmd)
> +{
> +    CMDInvalIntrTable *inval = (CMDInvalIntrTable *)cmd;
> +    /* This command should invalidate internal caches of which there isn't */
> +    if (inval->reserved_1 || inval->reserved_2) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +    trace_amdvi_devtab_inval(PCI_BUS_NUM(inval->devid), PCI_SLOT(inval->devid),
> +            PCI_FUNC(inval->devid));
> +}
> +
> +static void amdvi_complete_ppr(AMDVIState *s, void *cmd)
> +{
> +    CMDCompletePPR *pprcomp = (CMDCompletePPR *)cmd;
> +
> +    if (pprcomp->reserved_1 || pprcomp->reserved_2 || pprcomp->reserved_3 ||
> +        pprcomp->reserved_4 || pprcomp->reserved_5) {
> +        amdvi_log_illegalcom_error(s, pprcomp->type, s->cmdbuf +
> +                s->cmdbuf_head);
> +    }
> +    trace_amdvi_ppr_exec();
> +}
> +
> +static void amdvi_inval_all(AMDVIState *s, CMDInvalIommuAll *inval)
> +{
> +    if (inval->reserved_2 || inval->reserved_1) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    amdvi_iotlb_reset(s);
> +    trace_amdvi_all_inval();
> +}
> +
> +static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
> +                                            gpointer user_data)
> +{
> +    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
> +    uint16_t domid = *(uint16_t *)user_data;
> +    return entry->domid == domid;
> +}
> +
> +/* we don't have devid - we can't remove pages by address */
> +static void amdvi_inval_pages(AMDVIState *s, CMDInvalIommuPages *inval)
> +{
> +    uint16_t domid = inval->domid;
> +
> +    if (inval->reserved_1 || inval->reserved_2 || inval->reserved_3) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_domid,
> +                                &domid);
> +    trace_amdvi_pages_inval(inval->domid);
> +}
> +
> +static void amdvi_prefetch_pages(AMDVIState *s, CMDPrefetchPages *prefetch)
> +{
> +    if (prefetch->reserved_1 || prefetch->reserved_2 || prefetch->reserved_3
> +        || prefetch->reserved_4 || prefetch->reserved_5) {
> +        amdvi_log_illegalcom_error(s, prefetch->type, s->cmdbuf +
> +                s->cmdbuf_head);
> +    }
> +    trace_amdvi_prefetch_pages();
> +}
> +
> +static void amdvi_inval_inttable(AMDVIState *s, CMDInvalIntrTable *inval)
> +{
> +    if (inval->reserved_1 || inval->reserved_2) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +        return;
> +    }
> +    trace_amdvi_intr_inval();
> +}
> +
> +/* FIXME: Try to work with the specified size instead of all the pages
> + * when the S bit is on
> + */
> +static void iommu_inval_iotlb(AMDVIState *s, CMDInvalIOTLBPages *inval)
> +{
> +    uint16_t devid = inval->devid;
> +
> +    if (inval->reserved_1 || inval->reserved_2) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +        return;
> +    }
> +
> +    if (inval->size) {
> +        g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_devid,
> +                                    &devid);
> +    } else {
> +        amdvi_iotlb_remove_page(s, inval->address << 12, inval->devid);
> +    }
> +    trace_amdvi_iotlb_inval();
> +}
> +
> +/* not honouring reserved bits is regarded as an illegal command */
> +static void amdvi_cmdbuf_exec(AMDVIState *s)
> +{
> +    CMDCompletionWait cmd;
> +
> +    if (dma_memory_read(&address_space_memory, s->cmdbuf + s->cmdbuf_head,
> +        &cmd, AMDVI_COMMAND_SIZE)) {
> +        trace_amdvi_command_read_fail(s->cmdbuf, s->cmdbuf_head);
> +        amdvi_log_command_error(s, s->cmdbuf + s->cmdbuf_head);
> +        return;
> +    }
> +
> +    switch (cmd.type) {
> +    case AMDVI_CMD_COMPLETION_WAIT:
> +        amdvi_completion_wait(s, (CMDCompletionWait *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_DEVTAB_ENTRY:
> +        amdvi_inval_devtab_entry(s, (CMDInvalDevEntry *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_AMDVI_PAGES:
> +        amdvi_inval_pages(s, (CMDInvalIommuPages *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_IOTLB_PAGES:
> +        iommu_inval_iotlb(s, (CMDInvalIOTLBPages *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_INTR_TABLE:
> +        amdvi_inval_inttable(s, (CMDInvalIntrTable *)&cmd);
> +        break;
> +    case AMDVI_CMD_PREFETCH_AMDVI_PAGES:
> +        amdvi_prefetch_pages(s, (CMDPrefetchPages *)&cmd);
> +        break;
> +    case AMDVI_CMD_COMPLETE_PPR_REQUEST:
> +        amdvi_complete_ppr(s, (CMDCompletePPR *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_AMDVI_ALL:
> +        amdvi_inval_all(s, (CMDInvalIommuAll *)&cmd);
> +        break;
> +    default:
> +        trace_amdvi_unhandled_command(cmd.type);
> +        /* log illegal command */
> +        amdvi_log_illegalcom_error(s, cmd.type,
> +                                   s->cmdbuf + s->cmdbuf_head);
> +    }
> +}
> +
> +static void amdvi_cmdbuf_run(AMDVIState *s)
> +{
> +    if (!s->cmdbuf_enabled) {
> +        trace_amdvi_command_error(amdvi_readq(s, AMDVI_MMIO_CONTROL));
> +        return;
> +    }
> +
> +    /* check if there is work to do. */
> +    while (s->cmdbuf_head != s->cmdbuf_tail) {
> +         trace_amdvi_command_exec(s->cmdbuf_head, s->cmdbuf_tail, s->cmdbuf);
> +         amdvi_cmdbuf_exec(s);
> +         s->cmdbuf_head += AMDVI_COMMAND_SIZE;
> +         amdvi_writeq_raw(s, s->cmdbuf_head, AMDVI_MMIO_COMMAND_HEAD);
> +
> +        /* wrap head pointer */
> +        if (s->cmdbuf_head >= s->cmdbuf_len * AMDVI_COMMAND_SIZE) {
> +            s->cmdbuf_head = 0;
> +        }
> +    }
> +}
> +
> +static void amdvi_mmio_trace(hwaddr addr, unsigned size)
> +{
> +    uint8_t index = addr & ~0x2000;
> +
> +    if ((addr & 0x2000)) {
> +        /* high table */
> +        index = index >= AMDVI_MMIO_REGS_HIGH ? AMDVI_MMIO_REGS_HIGH : index;
> +        trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
> +    } else {
> +        index = index >= AMDVI_MMIO_REGS_LOW ? AMDVI_MMIO_REGS_LOW : index;
> +        trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
> +    }
> +}
> +
> +static uint64_t amdvi_mmio_read(void *opaque, hwaddr addr, unsigned size)
> +{
> +    AMDVIState *s = opaque;
> +
> +    uint64_t val = -1;
> +    if (addr + size > AMDVI_MMIO_SIZE) {
> +        trace_amdvi_mmio_read("error: addr outside region: max ",
> +                (uint64_t)AMDVI_MMIO_SIZE, addr, size);
> +        return (uint64_t)-1;
> +    }
> +
> +    if (size == 2) {
> +        val = amdvi_readw(s, addr);
> +    } else if (size == 4) {
> +        val = amdvi_readl(s, addr);
> +    } else if (size == 8) {
> +        val = amdvi_readq(s, addr);
> +    }
> +    amdvi_mmio_trace(addr, size);
> +
> +    return val;
> +}
> +
> +static void amdvi_handle_control_write(AMDVIState *s)
> +{
> +    unsigned long control = amdvi_readq(s, AMDVI_MMIO_CONTROL);
> +    s->enabled = !!(control & AMDVI_MMIO_CONTROL_AMDVIEN);
> +
> +    s->ats_enabled = !!(control & AMDVI_MMIO_CONTROL_HTTUNEN);
> +    s->evtlog_enabled = s->enabled && !!(control &
> +                        AMDVI_MMIO_CONTROL_EVENTLOGEN);
> +
> +    s->evtlog_intr = !!(control & AMDVI_MMIO_CONTROL_EVENTINTEN);
> +    s->completion_wait_intr = !!(control & AMDVI_MMIO_CONTROL_COMWAITINTEN);
> +    s->cmdbuf_enabled = s->enabled && !!(control &
> +                        AMDVI_MMIO_CONTROL_CMDBUFLEN);
> +
> +    /* update the flags depending on the control register */
> +    if (s->cmdbuf_enabled) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_CMDBUF_RUN);
> +    } else {
> +        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_CMDBUF_RUN);
> +    }
> +    if (s->evtlog_enabled) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_RUN);
> +    } else {
> +        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_EVT_RUN);
> +    }
> +
> +    trace_amdvi_control_status(control);
> +    amdvi_cmdbuf_run(s);
> +}
> +
> +static inline void amdvi_handle_devtab_write(AMDVIState *s)
> +
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_DEVICE_TABLE);
> +    s->devtab = (val & AMDVI_MMIO_DEVTAB_BASE_MASK);
> +
> +    /* set device table length */
> +    s->devtab_len = ((val & AMDVI_MMIO_DEVTAB_SIZE_MASK) + 1 *
> +                    (AMDVI_MMIO_DEVTAB_SIZE_UNIT /
> +                     AMDVI_MMIO_DEVTAB_ENTRY_SIZE));
> +}
> +
> +static inline void amdvi_handle_cmdhead_write(AMDVIState *s)
> +{
> +    s->cmdbuf_head = amdvi_readq(s, AMDVI_MMIO_COMMAND_HEAD)
> +                     & AMDVI_MMIO_CMDBUF_HEAD_MASK;
> +    amdvi_cmdbuf_run(s);
> +}
> +
> +static inline void amdvi_handle_cmdbase_write(AMDVIState *s)
> +{
> +    s->cmdbuf = amdvi_readq(s, AMDVI_MMIO_COMMAND_BASE)
> +                & AMDVI_MMIO_CMDBUF_BASE_MASK;
> +    s->cmdbuf_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_CMDBUF_SIZE_BYTE)
> +                    & AMDVI_MMIO_CMDBUF_SIZE_MASK);
> +    s->cmdbuf_head = s->cmdbuf_tail = 0;
> +}
> +
> +static inline void amdvi_handle_cmdtail_write(AMDVIState *s)
> +{
> +    s->cmdbuf_tail = amdvi_readq(s, AMDVI_MMIO_COMMAND_TAIL)
> +                     & AMDVI_MMIO_CMDBUF_TAIL_MASK;
> +    amdvi_cmdbuf_run(s);
> +}
> +
> +static inline void amdvi_handle_excllim_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EXCL_LIMIT);
> +    s->excl_limit = (val & AMDVI_MMIO_EXCL_LIMIT_MASK) |
> +                    AMDVI_MMIO_EXCL_LIMIT_LOW;
> +}
> +
> +static inline void amdvi_handle_evtbase_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_BASE);
> +    s->evtlog = val & AMDVI_MMIO_EVTLOG_BASE_MASK;
> +    s->evtlog_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_EVTLOG_SIZE_BYTE)
> +                    & AMDVI_MMIO_EVTLOG_SIZE_MASK);
> +}
> +
> +static inline void amdvi_handle_evttail_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_TAIL);
> +    s->evtlog_tail = val & AMDVI_MMIO_EVTLOG_TAIL_MASK;
> +}
> +
> +static inline void amdvi_handle_evthead_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_HEAD);
> +    s->evtlog_head = val & AMDVI_MMIO_EVTLOG_HEAD_MASK;
> +}
> +
> +static inline void amdvi_handle_pprbase_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_BASE);
> +    s->ppr_log = val & AMDVI_MMIO_PPRLOG_BASE_MASK;
> +    s->pprlog_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_PPRLOG_SIZE_BYTE)
> +                    & AMDVI_MMIO_PPRLOG_SIZE_MASK);
> +}
> +
> +static inline void amdvi_handle_pprhead_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_HEAD);
> +    s->pprlog_head = val & AMDVI_MMIO_PPRLOG_HEAD_MASK;
> +}
> +
> +static inline void amdvi_handle_pprtail_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_TAIL);
> +    s->pprlog_tail = val & AMDVI_MMIO_PPRLOG_TAIL_MASK;
> +}
> +
> +/* FIXME: something might go wrong if System Software writes in chunks
> + * of one byte but linux writes in chunks of 4 bytes so currently it
> + * works correctly with linux but will definitely be busted if software
> + * reads/writes 8 bytes
> + */
> +static void amdvi_mmio_reg_write(AMDVIState *s, unsigned size, uint64_t val,
> +                                 hwaddr addr)
> +{
> +    if (size == 2) {
> +        amdvi_writew(s, addr, val);
> +    } else if (size == 4) {
> +        amdvi_writel(s, addr, val);
> +    } else if (size == 8) {
> +        amdvi_writeq(s, addr, val);
> +    }
> +}
> +
> +static void amdvi_mmio_write(void *opaque, hwaddr addr, uint64_t val,
> +                             unsigned size)
> +{
> +    AMDVIState *s = opaque;
> +    unsigned long offset = addr & 0x07;
> +
> +    if (addr + size > AMDVI_MMIO_SIZE) {
> +        trace_amdvi_mmio_write("error: addr outside region: max ",
> +                (uint64_t)AMDVI_MMIO_SIZE, size, val, offset);
> +        return;
> +    }
> +
> +    amdvi_mmio_trace(addr, size);
> +    switch (addr & ~0x07) {
> +    case AMDVI_MMIO_CONTROL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_control_write(s);
> +        break;
> +    case AMDVI_MMIO_DEVICE_TABLE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +       /*  set device table address
> +        *   This also suffers from inability to tell whether software
> +        *   is done writing
> +        */
> +
> +        if (offset || (size == 8)) {
> +            amdvi_handle_devtab_write(s);
> +        }
> +        break;
> +    case AMDVI_MMIO_COMMAND_HEAD:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_cmdhead_write(s);
> +        break;
> +    case AMDVI_MMIO_COMMAND_BASE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        /* FIXME - make sure System Software has finished writing incase
> +         * it writes in chucks less than 8 bytes in a robust way.As for
> +         * now, this hacks works for the linux driver
> +         */
> +        if (offset || (size == 8)) {
> +            amdvi_handle_cmdbase_write(s);
> +        }
> +        break;
> +    case AMDVI_MMIO_COMMAND_TAIL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_cmdtail_write(s);
> +        break;
> +    case AMDVI_MMIO_EVENT_BASE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_evtbase_write(s);
> +        break;
> +    case AMDVI_MMIO_EVENT_HEAD:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_evthead_write(s);
> +        break;
> +    case AMDVI_MMIO_EVENT_TAIL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_evttail_write(s);
> +        break;
> +    case AMDVI_MMIO_EXCL_LIMIT:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_excllim_write(s);
> +        break;
> +        /* PPR log base - unused for now */
> +    case AMDVI_MMIO_PPR_BASE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_pprbase_write(s);
> +        break;
> +        /* PPR log head - also unused for now */
> +    case AMDVI_MMIO_PPR_HEAD:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_pprhead_write(s);
> +        break;
> +        /* PPR log tail - unused for now */
> +    case AMDVI_MMIO_PPR_TAIL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_pprtail_write(s);
> +        break;
> +    }
> +}
> +
> +static inline uint64_t amdvi_get_perms(uint64_t entry)
> +{
> +    return (entry & (AMDVI_DEV_PERM_READ | AMDVI_DEV_PERM_WRITE)) >>
> +           AMDVI_DEV_PERM_SHIFT;
> +}
> +
> +/* a valid entry should have V = 1 and reserved bits honoured */
> +static bool amdvi_validate_dte(AMDVIState *s, uint16_t devid,
> +                               uint64_t *dte)
> +{
> +    if ((dte[0] & AMDVI_DTE_LOWER_QUAD_RESERVED)
> +        || (dte[1] & AMDVI_DTE_MIDDLE_QUAD_RESERVED)
> +        || (dte[2] & AMDVI_DTE_UPPER_QUAD_RESERVED) || dte[3]) {
> +        amdvi_log_illegaldevtab_error(s, devid,
> +                                s->devtab + devid * AMDVI_DEVTAB_ENTRY_SIZE, 0);
> +        return false;
> +    }
> +
> +    return dte[0] & AMDVI_DEV_VALID;
> +}
> +
> +/* get a device table entry given the devid */
> +static bool amdvi_get_dte(AMDVIState *s, int devid, uint64_t *entry)
> +{
> +    uint32_t offset = devid * AMDVI_DEVTAB_ENTRY_SIZE;
> +
> +    if (dma_memory_read(&address_space_memory, s->devtab + offset, entry,
> +                        AMDVI_DEVTAB_ENTRY_SIZE)) {
> +        trace_amdvi_dte_get_fail(s->devtab, offset);
> +        /* log error accessing dte */
> +        amdvi_log_devtab_error(s, devid, s->devtab + offset, 0);
> +        return false;
> +    }
> +
> +    *entry = le64_to_cpu(*entry);
> +    if (!amdvi_validate_dte(s, devid, entry)) {
> +        trace_amdvi_invalid_dte(entry[0]);
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
> +/* get pte translation mode */
> +static inline uint8_t get_pte_translation_mode(uint64_t pte)
> +{
> +    return (pte >> AMDVI_DEV_MODE_RSHIFT) & AMDVI_DEV_MODE_MASK;
> +}
> +
> +static inline uint64_t pte_override_page_mask(uint64_t pte)
> +{
> +    uint8_t page_mask = 12;
> +    uint64_t addr = (pte & AMDVI_DEV_PT_ROOT_MASK) ^ AMDVI_DEV_PT_ROOT_MASK;
> +    /* find the first zero bit */
> +    while (addr & 1) {
> +        page_mask++;
> +        addr = addr >> 1;
> +    }
> +
> +    return ~((1ULL << page_mask) - 1);
> +}
> +
> +static inline uint64_t pte_get_page_mask(uint64_t oldlevel)
> +{
> +    return ~((1UL << ((oldlevel * 9) + 3)) - 1);
> +}
> +
> +static inline uint64_t amdvi_get_pte_entry(AMDVIState *s, uint64_t pte_addr,
> +                                          uint16_t devid)
> +{
> +    uint64_t pte;
> +
> +    if (dma_memory_read(&address_space_memory, pte_addr, &pte, sizeof(pte))) {
> +        trace_amdvi_get_pte_hwerror(pte_addr);
> +        amdvi_log_pagetab_error(s, devid, pte_addr, 0);
> +        pte = 0;
> +        return pte;
> +    }
> +
> +    pte = cpu_to_le64(pte);
> +    return pte;
> +}
> +
> +static void amdvi_page_walk(AMDVIAddressSpace *as, uint64_t *dte,
> +                            IOMMUTLBEntry *ret, unsigned perms,
> +                            hwaddr addr)
> +{
> +    unsigned level, present, pte_perms, oldlevel;
> +    uint64_t pte = dte[0], pte_addr, page_mask;
> +
> +    /* make sure the DTE has TV = 1 */
> +    if (pte & AMDVI_DEV_TRANSLATION_VALID) {
> +        level = get_pte_translation_mode(pte);
> +        if (level >= 7) {
> +            trace_amdvi_mode_invalid(level, addr);
> +            return;
> +        }
> +        if (level == 0) {
> +            goto no_remap;
> +        }
> +
> +        /* we are at the leaf page table or page table encodes a huge page */
> +        while (level > 0) {
> +            pte_perms = amdvi_get_perms(pte);
> +            present = pte & 1;
> +            if (!present || perms != (perms & pte_perms)) {
> +                amdvi_page_fault(as->iommu_state, as->devfn, addr, perms);
> +                trace_amdvi_page_fault(addr);
> +                return;
> +            }
> +
> +            /* go to the next lower level */
> +            pte_addr = pte & AMDVI_DEV_PT_ROOT_MASK;
> +            /* add offset and load pte */
> +            pte_addr += ((addr >> (3 + 9 * level)) & 0x1FF) << 3;
> +            pte = amdvi_get_pte_entry(as->iommu_state, pte_addr, as->devfn);
> +            if (!pte) {
> +                return;
> +            }
> +            oldlevel = level;
> +            level = get_pte_translation_mode(pte);
> +            if (level == 0x7) {
> +                break;
> +            }
> +        }
> +
> +        if (level == 0x7) {
> +            page_mask = pte_override_page_mask(pte);
> +        } else {
> +            page_mask = pte_get_page_mask(oldlevel);
> +        }
> +
> +        /* get access permissions from pte */
> +        ret->iova = addr & page_mask;
> +        ret->translated_addr = (pte & AMDVI_DEV_PT_ROOT_MASK) & page_mask;
> +        ret->addr_mask = ~page_mask;
> +        ret->perm = amdvi_get_perms(pte);
> +        return;
> +    }
> +no_remap:
> +    ret->iova = addr & AMDVI_PAGE_MASK_4K;
> +    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
> +    ret->perm = amdvi_get_perms(pte);
> +}
> +
> +static void amdvi_do_translate(AMDVIAddressSpace *as, hwaddr addr,
> +                               bool is_write, IOMMUTLBEntry *ret)
> +{
> +    AMDVIState *s = as->iommu_state;
> +    uint16_t devid = PCI_BDF(as->bus_num, as->devfn);
> +    AMDVIIOTLBEntry *iotlb_entry = amdvi_iotlb_lookup(s, addr, as->devfn);
> +    uint64_t entry[4];
> +
> +    if (iotlb_entry) {
> +        trace_amdvi_iotlb_hit(PCI_BUS_NUM(devid), PCI_SLOT(devid),
> +                PCI_FUNC(devid), addr, iotlb_entry->translated_addr);
> +        ret->iova = addr & ~iotlb_entry->page_mask;
> +        ret->translated_addr = iotlb_entry->translated_addr;
> +        ret->addr_mask = iotlb_entry->page_mask;
> +        ret->perm = iotlb_entry->perms;
> +        return;
> +    }
> +
> +    /* devices with V = 0 are not translated */
> +    if (!amdvi_get_dte(s, devid, entry)) {
> +        goto out;
> +    }
> +
> +    amdvi_page_walk(as, entry, ret,
> +                    is_write ? AMDVI_PERM_WRITE : AMDVI_PERM_READ, addr);
> +
> +    amdvi_update_iotlb(s, as->devfn, addr, *ret,
> +                       entry[1] & AMDVI_DEV_DOMID_ID_MASK);
> +    return;
> +
> +out:
> +    ret->iova = addr & AMDVI_PAGE_MASK_4K;
> +    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
> +    ret->perm = IOMMU_RW;
> +}
> +
> +static inline bool amdvi_is_interrupt_addr(hwaddr addr)
> +{
> +    return addr >= AMDVI_INT_ADDR_FIRST && addr <= AMDVI_INT_ADDR_LAST;
> +}
> +
> +static IOMMUTLBEntry amdvi_translate(MemoryRegion *iommu, hwaddr addr,
> +                                     bool is_write)
> +{
> +    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
> +    AMDVIState *s = as->iommu_state;
> +    IOMMUTLBEntry ret = {
> +        .target_as = &address_space_memory,
> +        .iova = addr,
> +        .translated_addr = 0,
> +        .addr_mask = ~(hwaddr)0,
> +        .perm = IOMMU_NONE
> +    };
> +
> +    if (!s->enabled) {
> +        /* AMDVI disabled - corresponds to iommu=off not
> +         * failure to provide any parameter
> +         */
> +        ret.iova = addr & AMDVI_PAGE_MASK_4K;
> +        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
> +        ret.perm = IOMMU_RW;
> +        return ret;
> +    } else if (amdvi_is_interrupt_addr(addr)) {
> +        ret.iova = addr & AMDVI_PAGE_MASK_4K;
> +        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
> +        ret.perm = IOMMU_WO;
> +        return ret;
> +    }
> +
> +    amdvi_do_translate(as, addr, is_write, &ret);
> +    trace_amdvi_translation_result(as->bus_num, PCI_SLOT(as->devfn),
> +            PCI_FUNC(as->devfn), addr, ret.translated_addr);
> +    return ret;
> +}
> +
> +static AddressSpace *amdvi_host_dma_iommu(PCIBus *bus, void *opaque, int devfn)
> +{
> +    AMDVIState *s = opaque;
> +    AMDVIAddressSpace **iommu_as;
> +    int bus_num = pci_bus_num(bus);
> +
> +    iommu_as = s->address_spaces[bus_num];
> +
> +    /* allocate memory during the first run */
> +    if (!iommu_as) {
> +        iommu_as = g_malloc0(sizeof(AMDVIAddressSpace *) * PCI_DEVFN_MAX);
> +        s->address_spaces[bus_num] = iommu_as;
> +    }
> +
> +    /* set up AMDVI region */
> +    if (!iommu_as[devfn]) {
> +        iommu_as[devfn] = g_malloc0(sizeof(AMDVIAddressSpace));
> +        iommu_as[devfn]->bus_num = (uint8_t)bus_num;
> +        iommu_as[devfn]->devfn = (uint8_t)devfn;
> +        iommu_as[devfn]->iommu_state = s;
> +
> +        memory_region_init_iommu(&iommu_as[devfn]->iommu, OBJECT(s),
> +                                 &s->iommu_ops, "amd-iommu", UINT64_MAX);
> +        address_space_init(&iommu_as[devfn]->as, &iommu_as[devfn]->iommu,
> +                           "amd-iommu");
> +    }
> +    return &iommu_as[devfn]->as;
> +}
> +
> +static const MemoryRegionOps mmio_mem_ops = {
> +    .read = amdvi_mmio_read,
> +    .write = amdvi_mmio_write,
> +    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .impl = {
> +        .min_access_size = 1,
> +        .max_access_size = 8,
> +        .unaligned = false,
> +    },
> +    .valid = {
> +        .min_access_size = 1,
> +        .max_access_size = 8,
> +    }
> +};
> +
> +static void amdvi_iommu_notify_started(MemoryRegion *iommu)
> +{
> +    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
> +
> +    hw_error("device %02x.%02x.%x requires iommu notifier which is not "
> +             "currently supported", as->bus_num, PCI_SLOT(as->devfn),
> +             PCI_FUNC(as->devfn));
> +}
> +
> +static void amdvi_init(AMDVIState *s)
> +{
> +    amdvi_iotlb_reset(s);
> +
> +    s->iommu_ops.translate = amdvi_translate;
> +    s->iommu_ops.notify_started = amdvi_iommu_notify_started;
> +    s->devtab_len = 0;
> +    s->cmdbuf_len = 0;
> +    s->cmdbuf_head = 0;
> +    s->cmdbuf_tail = 0;
> +    s->evtlog_head = 0;
> +    s->evtlog_tail = 0;
> +    s->excl_enabled = false;
> +    s->excl_allow = false;
> +    s->mmio_enabled = false;
> +    s->enabled = false;
> +    s->ats_enabled = false;
> +    s->cmdbuf_enabled = false;
> +
> +    /* reset MMIO */
> +    memset(s->mmior, 0, AMDVI_MMIO_SIZE);
> +    amdvi_set_quad(s, AMDVI_MMIO_EXT_FEATURES, AMDVI_EXT_FEATURES,
> +            0xffffffffffffffef, 0);
> +    amdvi_set_quad(s, AMDVI_MMIO_STATUS, 0, 0x98, 0x67);
> +
> +    /* reset device ident */
> +    pci_config_set_vendor_id(s->pci.dev.config, PCI_VENDOR_ID_AMD);
> +    pci_config_set_prog_interface(s->pci.dev.config, 00);
> +    pci_config_set_device_id(s->pci.dev.config, s->devid);
> +    pci_config_set_class(s->pci.dev.config, 0x0806);
> +
> +    /* reset AMDVI specific capabilities, all r/o */
> +    pci_set_long(s->pci.dev.config + s->capab_offset, AMDVI_CAPAB_FEATURES);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_BAR_LOW,
> +                 s->mmio.addr & ~(0xffff0000));
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_BAR_HIGH,
> +                (s->mmio.addr & ~(0xffff)) >> 16);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_RANGE,
> +                 0xff000000);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_MISC, 0);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_MISC,
> +            AMDVI_MAX_PH_ADDR | AMDVI_MAX_GVA_ADDR | AMDVI_MAX_VA_ADDR);
> +}
> +
> +static void amdvi_reset(DeviceState *dev)
> +{
> +    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
> +
> +    msi_reset(&s->pci.dev);
> +    amdvi_init(s);
> +}
> +
> +static void amdvi_realize(DeviceState *dev, Error **err)
> +{
> +    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
> +    PCIBus *bus = PC_MACHINE(qdev_get_machine())->bus;
> +    s->iotlb = g_hash_table_new_full(amdvi_uint64_hash,
> +                                     amdvi_uint64_equal, g_free, g_free);
> +
> +    /* This device should take care of IOMMU PCI properties */
> +    qdev_set_parent_bus(DEVICE(&s->pci), &bus->qbus);
> +    object_property_set_bool(OBJECT(&s->pci), true, "realized", err);
> +    s->capab_offset = pci_add_capability(&s->pci.dev, AMDVI_CAPAB_ID_SEC, 0,
> +                                         AMDVI_CAPAB_SIZE);
> +    pci_add_capability(&s->pci.dev, PCI_CAP_ID_MSI, 0, AMDVI_CAPAB_REG_SIZE);
> +    pci_add_capability(&s->pci.dev, PCI_CAP_ID_HT, 0, AMDVI_CAPAB_REG_SIZE);
> +
> +    /* set up MMIO */
> +    memory_region_init_io(&s->mmio, OBJECT(s), &mmio_mem_ops, s, "amdvi-mmio",
> +                          AMDVI_MMIO_SIZE);
> +
> +    sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->mmio);
> +    sysbus_mmio_map(SYS_BUS_DEVICE(s), 0, AMDVI_BASE_ADDR);
> +    pci_setup_iommu(bus, amdvi_host_dma_iommu, s);
> +    s->devid = object_property_get_int(OBJECT(&s->pci), "addr", err);
> +    msi_init(&s->pci.dev, 0, 1, true, false, err);
> +    amdvi_init(s);
> +}
> +
> +static const VMStateDescription vmstate_amdvi = {
> +    .name = "amd-iommu",
> +    .unmigratable = 1
> +};
> +
> +static void amdvi_instance_init(Object *klass)
> +{
> +    AMDVIState *s = AMD_IOMMU_DEVICE(klass);
> +
> +    object_initialize(&s->pci, sizeof(s->pci), TYPE_AMD_IOMMU_PCI);
> +}
> +
> +static void amdvi_class_init(ObjectClass *klass, void* data)
> +{
> +    DeviceClass *dc = DEVICE_CLASS(klass);
> +    X86IOMMUClass *dc_class = X86_IOMMU_CLASS(klass);
> +
> +    dc->reset = amdvi_reset;
> +    dc->vmsd = &vmstate_amdvi;
> +    dc_class->realize = amdvi_realize;
> +}
> +
> +static const TypeInfo amdvi = {
> +    .name = TYPE_AMD_IOMMU_DEVICE,
> +    .parent = TYPE_X86_IOMMU_DEVICE,
> +    .instance_size = sizeof(AMDVIState),
> +    .instance_init = amdvi_instance_init,
> +    .class_init = amdvi_class_init
> +};
> +
> +static const TypeInfo amdviPCI = {
> +    .name = "AMDVI-PCI",
> +    .parent = TYPE_PCI_DEVICE,
> +    .instance_size = sizeof(AMDVIPCIState),
> +};
> +
> +static void amdviPCI_register_types(void)
> +{
> +    type_register_static(&amdviPCI);
> +    type_register_static(&amdvi);
> +}
> +
> +type_init(amdviPCI_register_types);
> diff --git a/hw/i386/amd_iommu.h b/hw/i386/amd_iommu.h
> new file mode 100644
> index 0000000..2f4ac55
> --- /dev/null
> +++ b/hw/i386/amd_iommu.h
> @@ -0,0 +1,390 @@
> +/*
> + * QEMU emulation of an AMD IOMMU (AMD-Vi)
> + *
> + * Copyright (C) 2011 Eduard - Gabriel Munteanu
> + * Copyright (C) 2015 David Kiarie, <davidkiarie4@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> +
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef AMD_IOMMU_H_
> +#define AMD_IOMMU_H_
> +
> +#include "hw/hw.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/msi.h"
> +#include "hw/sysbus.h"
> +#include "sysemu/dma.h"
> +#include "hw/i386/pc.h"
> +#include "sysemu/dma.h"
> +#include "hw/i386/x86-iommu.h"
> +
> +/* Capability registers */
> +#define AMDVI_CAPAB_BAR_LOW           0x04
> +#define AMDVI_CAPAB_BAR_HIGH          0x08
> +#define AMDVI_CAPAB_RANGE             0x0C
> +#define AMDVI_CAPAB_MISC              0x10
> +
> +#define AMDVI_CAPAB_SIZE              0x18
> +#define AMDVI_CAPAB_REG_SIZE          0x04
> +
> +/* Capability header data */
> +#define AMDVI_CAPAB_ID_SEC            0xf
> +#define AMDVI_CAPAB_FLAT_EXT          (1 << 28)
> +#define AMDVI_CAPAB_EFR_SUP           (1 << 27)
> +#define AMDVI_CAPAB_FLAG_NPCACHE      (1 << 26)
> +#define AMDVI_CAPAB_FLAG_HTTUNNEL     (1 << 25)
> +#define AMDVI_CAPAB_FLAG_IOTLBSUP     (1 << 24)
> +#define AMDVI_CAPAB_INIT_TYPE         (3 << 16)
> +
> +/* No. of used MMIO registers */
> +#define AMDVI_MMIO_REGS_HIGH  8
> +#define AMDVI_MMIO_REGS_LOW   7
> +
> +/* MMIO registers */
> +#define AMDVI_MMIO_DEVICE_TABLE       0x0000
> +#define AMDVI_MMIO_COMMAND_BASE       0x0008
> +#define AMDVI_MMIO_EVENT_BASE         0x0010
> +#define AMDVI_MMIO_CONTROL            0x0018
> +#define AMDVI_MMIO_EXCL_BASE          0x0020
> +#define AMDVI_MMIO_EXCL_LIMIT         0x0028
> +#define AMDVI_MMIO_EXT_FEATURES       0x0030
> +#define AMDVI_MMIO_COMMAND_HEAD       0x2000
> +#define AMDVI_MMIO_COMMAND_TAIL       0x2008
> +#define AMDVI_MMIO_EVENT_HEAD         0x2010
> +#define AMDVI_MMIO_EVENT_TAIL         0x2018
> +#define AMDVI_MMIO_STATUS             0x2020
> +#define AMDVI_MMIO_PPR_BASE           0x0038
> +#define AMDVI_MMIO_PPR_HEAD           0x2030
> +#define AMDVI_MMIO_PPR_TAIL           0x2038
> +
> +#define AMDVI_MMIO_SIZE               0x4000
> +
> +#define AMDVI_MMIO_DEVTAB_SIZE_MASK   ((1ULL << 12) - 1)
> +#define AMDVI_MMIO_DEVTAB_BASE_MASK   (((1ULL << 52) - 1) & ~ \
> +                                       AMDVI_MMIO_DEVTAB_SIZE_MASK)
> +#define AMDVI_MMIO_DEVTAB_ENTRY_SIZE  32
> +#define AMDVI_MMIO_DEVTAB_SIZE_UNIT   4096
> +
> +/* some of this are similar but just for readability */
> +#define AMDVI_MMIO_CMDBUF_SIZE_BYTE       (AMDVI_MMIO_COMMAND_BASE + 7)
> +#define AMDVI_MMIO_CMDBUF_SIZE_MASK       0x0F
> +#define AMDVI_MMIO_CMDBUF_BASE_MASK       AMDVI_MMIO_DEVTAB_BASE_MASK
> +#define AMDVI_MMIO_CMDBUF_HEAD_MASK       (((1ULL << 19) - 1) & ~0x0F)
> +#define AMDVI_MMIO_CMDBUF_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +
> +#define AMDVI_MMIO_EVTLOG_SIZE_BYTE       (AMDVI_MMIO_EVENT_BASE + 7)
> +#define AMDVI_MMIO_EVTLOG_SIZE_MASK       AMDVI_MMIO_CMDBUF_SIZE_MASK
> +#define AMDVI_MMIO_EVTLOG_BASE_MASK       AMDVI_MMIO_CMDBUF_BASE_MASK
> +#define AMDVI_MMIO_EVTLOG_HEAD_MASK       (((1ULL << 19) - 1) & ~0x0F)
> +#define AMDVI_MMIO_EVTLOG_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +
> +#define AMDVI_MMIO_PPRLOG_SIZE_BYTE       (AMDVI_MMIO_EVENT_BASE + 7)
> +#define AMDVI_MMIO_PPRLOG_HEAD_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +#define AMDVI_MMIO_PPRLOG_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +#define AMDVI_MMIO_PPRLOG_BASE_MASK       AMDVI_MMIO_EVTLOG_BASE_MASK
> +#define AMDVI_MMIO_PPRLOG_SIZE_MASK       AMDVI_MMIO_EVTLOG_SIZE_MASK
> +
> +#define AMDVI_MMIO_EXCL_ENABLED_MASK      (1ULL << 0)
> +#define AMDVI_MMIO_EXCL_ALLOW_MASK        (1ULL << 1)
> +#define AMDVI_MMIO_EXCL_LIMIT_MASK        AMDVI_MMIO_DEVTAB_BASE_MASK
> +#define AMDVI_MMIO_EXCL_LIMIT_LOW         0xFFF
> +
> +/* mmio control register flags */
> +#define AMDVI_MMIO_CONTROL_AMDVIEN        (1ULL << 0)
> +#define AMDVI_MMIO_CONTROL_HTTUNEN        (1ULL << 1)
> +#define AMDVI_MMIO_CONTROL_EVENTLOGEN     (1ULL << 2)
> +#define AMDVI_MMIO_CONTROL_EVENTINTEN     (1ULL << 3)
> +#define AMDVI_MMIO_CONTROL_COMWAITINTEN   (1ULL << 4)
> +#define AMDVI_MMIO_CONTROL_CMDBUFLEN      (1ULL << 12)
> +
> +/* MMIO status register bits */
> +#define AMDVI_MMIO_STATUS_CMDBUF_RUN  (1 << 4)
> +#define AMDVI_MMIO_STATUS_EVT_RUN     (1 << 3)
> +#define AMDVI_MMIO_STATUS_COMP_INT    (1 << 2)
> +#define AMDVI_MMIO_STATUS_EVT_OVF     (1 << 0)
> +
> +#define AMDVI_CMDBUF_ID_BYTE              0x07
> +#define AMDVI_CMDBUF_ID_RSHIFT            4
> +
> +#define AMDVI_CMD_COMPLETION_WAIT         0x01
> +#define AMDVI_CMD_INVAL_DEVTAB_ENTRY      0x02
> +#define AMDVI_CMD_INVAL_AMDVI_PAGES       0x03
> +#define AMDVI_CMD_INVAL_IOTLB_PAGES       0x04
> +#define AMDVI_CMD_INVAL_INTR_TABLE        0x05
> +#define AMDVI_CMD_PREFETCH_AMDVI_PAGES    0x06
> +#define AMDVI_CMD_COMPLETE_PPR_REQUEST    0x07
> +#define AMDVI_CMD_INVAL_AMDVI_ALL         0x08
> +
> +#define AMDVI_DEVTAB_ENTRY_SIZE           32
> +
> +/* Device table entry bits 0:63 */
> +#define AMDVI_DEV_VALID                   (1ULL << 0)
> +#define AMDVI_DEV_TRANSLATION_VALID       (1ULL << 1)
> +#define AMDVI_DEV_MODE_MASK               0x7
> +#define AMDVI_DEV_MODE_RSHIFT             9
> +#define AMDVI_DEV_PT_ROOT_MASK            0xFFFFFFFFFF000
> +#define AMDVI_DEV_PT_ROOT_RSHIFT          12
> +#define AMDVI_DEV_PERM_SHIFT              61
> +#define AMDVI_DEV_PERM_READ               (1ULL << 61)
> +#define AMDVI_DEV_PERM_WRITE              (1ULL << 62)
> +
> +/* Device table entry bits 64:127 */
> +#define AMDVI_DEV_DOMID_ID_MASK          ((1ULL << 16) - 1)
> +
> +/* Event codes and flags, as stored in the info field */
> +#define AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY  (0x1U << 12)
> +#define AMDVI_EVENT_IOPF                  (0x2U << 12)
> +#define   AMDVI_EVENT_IOPF_I              (1U << 3)
> +#define AMDVI_EVENT_DEV_TAB_HW_ERROR      (0x3U << 12)
> +#define AMDVI_EVENT_PAGE_TAB_HW_ERROR     (0x4U << 12)
> +#define AMDVI_EVENT_ILLEGAL_COMMAND_ERROR (0x5U << 12)
> +#define AMDVI_EVENT_COMMAND_HW_ERROR      (0x6U << 12)
> +
> +#define AMDVI_EVENT_LEN                  16
> +#define AMDVI_PERM_READ             (1 << 0)
> +#define AMDVI_PERM_WRITE            (1 << 1)
> +
> +#define AMDVI_FEATURE_PREFETCH            (1ULL << 0) /* page prefetch       */
> +#define AMDVI_FEATURE_PPR                 (1ULL << 1) /* PPR Support         */
> +#define AMDVI_FEATURE_GT                  (1ULL << 4) /* Guest Translation   */
> +#define AMDVI_FEATURE_IA                  (1ULL << 6) /* inval all support   */
> +#define AMDVI_FEATURE_GA                  (1ULL << 7) /* guest VAPIC support */
> +#define AMDVI_FEATURE_HE                  (1ULL << 8) /* hardware error regs */
> +#define AMDVI_FEATURE_PC                  (1ULL << 9) /* Perf counters       */
> +
> +/* reserved DTE bits */
> +#define AMDVI_DTE_LOWER_QUAD_RESERVED  0x80300000000000fc
> +#define AMDVI_DTE_MIDDLE_QUAD_RESERVED 0x0000000000000100
> +#define AMDVI_DTE_UPPER_QUAD_RESERVED  0x08f0000000000000
> +
> +/* AMDVI paging mode */
> +#define AMDVI_GATS_MODE                 (6ULL <<  12)
> +#define AMDVI_HATS_MODE                 (6ULL <<  10)
> +
> +/* IOTLB */
> +#define AMDVI_IOTLB_MAX_SIZE 1024
> +#define AMDVI_DEVID_SHIFT    36
> +
> +/* interrupt types */
> +#define AMDVI_MT_FIXED  0x0
> +#define AMDVI_MT_ARBIT  0x1
> +#define AMDVI_MT_SMI    0x2
> +#define AMDVI_MT_NMI    0x3
> +#define AMDVI_MT_INIT   0x4
> +#define AMDVI_MT_EXTINT 0x6
> +#define AMDVI_MT_LINT1  0xb
> +#define AMDVI_MT_LINT0  0xe
> +
> +/* Ext reg, GA support */
> +#define AMDVI_GASUP    (1UL << 7)
> +/* MMIO control GA enable bits */
> +#define AMDVI_GAEN     (1UL << 17)
> +
> +/* MSI interrupt type mask */
> +#define AMDVI_IR_TYPE_MASK 0x300
> +
> +/* interrupt destination mode */
> +#define AMDVI_IRDEST_MODE_MASK 0x2
> +
> +/* select MSI data 10:0 bits */
> +#define AMDVI_IRTE_INDEX_MASK 0x7ff
> +
> +/* bits determining whether specific interrupts should be passed
> + * split DTE into 64-bit chunks
> + */
> +#define AMDVI_DTE_INTPASS       56
> +#define AMDVI_DTE_EINTPASS      57
> +#define AMDVI_DTE_NMIPASS       58
> +#define AMDVI_DTE_INTCTL        60
> +#define AMDVI_DTE_LINT0PASS     62
> +#define AMDVI_DTE_LINT1PASS     63
> +
> +/* interrupt data valid */
> +#define AMDVI_IR_VALID          (1UL << 0)
> +
> +/* interrupt root table mask */
> +#define AMDVI_IRTEROOT_MASK     0xffffffffffffc0
> +
> +/* default IRTE size */
> +#define AMDVI_DEFAULT_IRTE_SIZE 0x4
> +
> +/* IRTE size with GASup enabled */
> +#define AMDVI_IRTE_SIZE_GASUP   0x10
> +
> +#define AMDVI_IRTE_VECTOR_MASK    (0xffU << 16)
> +#define AMDVI_IRTE_DEST_MASK      (0xffU << 8)
> +#define AMDVI_IRTE_DM_MASK        (0x1U << 6)
> +#define AMDVI_IRTE_RQEOI_MASK     (0x1U << 5)
> +#define AMDVI_IRTE_INTTYPE_MASK   (0x7U << 2)
> +#define AMDVI_IRTE_SUPIOPF_MASK   (0x1U << 1)
> +#define AMDVI_IRTE_REMAP_MASK     (0x1U << 0)
> +
> +#define AMDVI_IR_TABLE_SIZE_MASK 0xfe
> +
> +/* offsets into MSI data */
> +#define AMDVI_MSI_DATA_DM_RSHIFT       0x8
> +#define AMDVI_MSI_DATA_LEVEL_RSHIFT    0xe
> +#define AMDVI_MSI_DATA_TRM_RSHIFT      0xf
> +
> +/* offsets into MSI address */
> +#define AMDVI_MSI_ADDR_DM_RSHIFT       0x2
> +#define AMDVI_MSI_ADDR_RH_RSHIFT       0x3
> +#define AMDVI_MSI_ADDR_DEST_RSHIFT     0xc
> +
> +#define AMDVI_LOCAL_APIC_ADDR     0xfee00000
> +
> +/* extended feature support */
> +#define AMDVI_EXT_FEATURES (AMDVI_FEATURE_PREFETCH | AMDVI_FEATURE_PPR | \
> +        AMDVI_FEATURE_IA | AMDVI_FEATURE_GT | AMDVI_FEATURE_GA | \
> +        AMDVI_FEATURE_HE | AMDVI_GATS_MODE | AMDVI_HATS_MODE)
> +
> +/* capabilities header */
> +#define AMDVI_CAPAB_FEATURES (AMDVI_CAPAB_FLAT_EXT | \
> +        AMDVI_CAPAB_FLAG_NPCACHE | AMDVI_CAPAB_FLAG_IOTLBSUP \
> +        | AMDVI_CAPAB_ID_SEC | AMDVI_CAPAB_INIT_TYPE | \
> +        AMDVI_CAPAB_FLAG_HTTUNNEL |  AMDVI_CAPAB_EFR_SUP)
> +
> +/* AMDVI default address */
> +#define AMDVI_BASE_ADDR 0xfed80000
> +
> +/* page management constants */
> +#define AMDVI_PAGE_SHIFT 12
> +#define AMDVI_PAGE_SIZE  (1ULL << AMDVI_PAGE_SHIFT)
> +
> +#define AMDVI_PAGE_SHIFT_4K 12
> +#define AMDVI_PAGE_MASK_4K  (~((1ULL << AMDVI_PAGE_SHIFT_4K) - 1))
> +
> +#define AMDVI_MAX_VA_ADDR          (48UL << 5)
> +#define AMDVI_MAX_PH_ADDR          (40UL << 8)
> +#define AMDVI_MAX_GVA_ADDR         (48UL << 15)
> +
> +/* invalidation command device id */
> +#define AMDVI_INVAL_DEV_ID_SHIFT  32
> +#define AMDVI_INVAL_DEV_ID_MASK   (~((1UL << AMDVI_INVAL_DEV_ID_SHIFT) - 1))
> +
> +/* invalidation address */
> +#define AMDVI_INVAL_ADDR_MASK_SHIFT 12
> +#define AMDVI_INVAL_ADDR_MASK     (~((1UL << AMDVI_INVAL_ADDR_MASK_SHIFT) - 1))
> +
> +/* invalidation S bit mask */
> +#define AMDVI_INVAL_ALL(val) ((val) & (0x1))
> +
> +/* Completion Wait data size */
> +#define AMDVI_COMPLETION_DATA_SIZE    8
> +
> +#define AMDVI_COMMAND_SIZE   16
> +
> +#define AMDVI_INT_ADDR_FIRST 0xfee00000ULL
> +#define AMDVI_INT_ADDR_LAST  0xfeefffffULL
> +
> +#define AMDVI_INT_ADDR_SIZE ((AMDVI_INT_ADDR_LAST - \
> +        AMDVI_INT_ADDR_FIRST) + 1)
> +
> +/* Completion Wait data size */
> +#define AMDVI_COMPLETION_DATA_SIZE    8
> +
> +#define AMDVI_COMMAND_SIZE   16
> +
> +/* AMD IOMMU errors */
> +#define AMDVI_ILLEG_DEV_TAB  0x1
> +#define AMDVI_IOPF_          0x2
> +#define AMDVI_DEV_TAB_HW     0x3
> +#define AMDVI_PAGE_TAB_HW    0x4
> +#define AMDVI_ILLEG_COM      0x5
> +#define AMDVI_COM_HW         0x6
> +#define AMDVI_IOTLB_TIMEOUT  0x7
> +#define AMDVI_INVAL_DEV_REQ  0x8
> +#define AMDVI_INVAL_PPR_REQ  0x9
> +#define AMDVI_EVT_COUNT_ZERO 0xa
> +
> +/* represent target and master aborts error state */
> +#define AMDVI_TARGET_ABORT     0xb
> +#define AMDVI_MASTER_ABORT     0xc
> +
> +#define TYPE_AMD_IOMMU_DEVICE "amd-iommu"
> +#define AMD_IOMMU_DEVICE(obj)\
> +    OBJECT_CHECK(AMDVIState, (obj), TYPE_AMD_IOMMU_DEVICE)
> +
> +#define TYPE_AMD_IOMMU_PCI "AMDVI-PCI"
> +#define AMD_IOMMU_PCI(obj)\
> +    OBJECT_CHECK(AMDVIPCIState, (obj), TYPE_AMD_IOMMU_PCI)
> +
> +typedef struct AMDVIAddressSpace AMDVIAddressSpace;
> +
> +/* functions to steal PCI config space */
> +typedef struct AMDVIPCIState {
> +    PCIDevice dev;               /* The PCI device itself        */
> +} AMDVIPCIState;
> +
> +typedef struct AMDVIState {
> +    X86IOMMUState iommu;        /* IOMMU bus device             */
> +    AMDVIPCIState pci;          /* IOMMU PCI device             */
> +
> +    uint32_t version;
> +    uint32_t capab_offset;       /* capability offset pointer    */
> +
> +    uint64_t mmio_addr;
> +
> +    uint32_t devid;              /* auto-assigned devid          */
> +
> +    bool enabled;                /* IOMMU enabled                */
> +    bool ats_enabled;            /* address translation enabled  */
> +    bool cmdbuf_enabled;         /* command buffer enabled       */
> +    bool evtlog_enabled;         /* event log enabled            */
> +    bool excl_enabled;
> +
> +    hwaddr devtab;               /* base address device table    */
> +    size_t devtab_len;           /* device table length          */
> +
> +    hwaddr cmdbuf;               /* command buffer base address  */
> +    uint64_t cmdbuf_len;         /* command buffer length        */
> +    uint32_t cmdbuf_head;        /* current IOMMU read position  */
> +    uint32_t cmdbuf_tail;        /* next Software write position */
> +    bool completion_wait_intr;
> +
> +    hwaddr evtlog;               /* base address event log       */
> +    bool evtlog_intr;
> +    uint32_t evtlog_len;         /* event log length             */
> +    uint32_t evtlog_head;        /* current IOMMU write position */
> +    uint32_t evtlog_tail;        /* current Software read position */
> +
> +    /* unused for now */
> +    hwaddr excl_base;            /* base DVA - IOMMU exclusion range */
> +    hwaddr excl_limit;           /* limit of IOMMU exclusion range   */
> +    bool excl_allow;             /* translate accesses to the exclusion range */
> +    bool excl_enable;            /* exclusion range enabled          */
> +
> +    hwaddr ppr_log;              /* base address ppr log */
> +    uint32_t pprlog_len;         /* ppr log len  */
> +    uint32_t pprlog_head;        /* ppr log head */
> +    uint32_t pprlog_tail;        /* ppr log tail */
> +
> +    MemoryRegion mmio;                 /* MMIO region                  */
> +    uint8_t mmior[AMDVI_MMIO_SIZE];    /* read/write MMIO              */
> +    uint8_t w1cmask[AMDVI_MMIO_SIZE];  /* read/write 1 clear mask      */
> +    uint8_t romask[AMDVI_MMIO_SIZE];   /* MMIO read/only mask          */
> +    bool mmio_enabled;
> +
> +    /* IOMMU function */
> +    MemoryRegionIOMMUOps iommu_ops;
> +
> +    /* for each served device */
> +    AMDVIAddressSpace **address_spaces[PCI_BUS_MAX];
> +
> +    /* IOTLB */
> +    GHashTable *iotlb;
> +} AMDVIState;
> +
> +#endif
> diff --git a/hw/i386/trace-events b/hw/i386/trace-events
> index 592de3a..5c12c10 100644
> --- a/hw/i386/trace-events
> +++ b/hw/i386/trace-events
> @@ -42,3 +42,10 @@ amdvi_mode_invalid(unsigned level, uint64_t addr)"error: translation level 0x%"P
>  amdvi_page_fault(uint64_t addr) "error: page fault accessing guest physical address 0x%"PRIx64
>  amdvi_iotlb_hit(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "hit iotlb devid %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
>  amdvi_translation_result(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "devid: %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
> +amdvi_irte_get_fail(uint64_t addr, uint64_t offset) "couldn't access device table entry 0x%"PRIx64" + offset 0x%"PRIx64
> +amdvi_invalid_irte_entry(uint16_t devid, uint64_t offset) "devid %x requested IRTE offset 0x%"PRIx64" Outside IR table range"
> +amdvi_ir_request(uint32_t data, uint64_t addr, uint16_t sid) "IR request data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
> +amdvi_ir_remap(uint32_t data, uint64_t addr, uint16_t sid) "IR remap data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
> +amdvi_ir_target_abort(uint32_t data, uint64_t addr, uint16_t sid) "IR target abort data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
> +amdvi_ir_write_fail(uint64_t addr, uint32_t data) "fail to write to addr 0x%"PRIx64 " value 0x%"PRIx32
> +amdvi_ir_read_fail(uint64_t addr) " fail to read from addr 0x%"PRIx64
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-11  8:23   ` Valentine Sinitsyn
@ 2016-08-11  8:32     ` David Kiarie
  2016-08-11  8:35       ` Valentine Sinitsyn
  0 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-11  8:32 UTC (permalink / raw)
  To: Valentine Sinitsyn
  Cc: QEMU Developers, Peter Xu, rkrcmar, Jan Kiszka, Eduardo Habkost,
	Michael S. Tsirkin

On Thu, Aug 11, 2016 at 11:23 AM, Valentine Sinitsyn <
valentine.sinitsyn@gmail.com> wrote:

> Hi,
>
>
> On 02.08.2016 13:39, David Kiarie wrote:
>
>> +static void amdvi_writeq_raw(AMDVIState *s, uint64_t val, hwaddr addr)
>> +{+
>> +static void amdvi_generate_msi_interrupt(AMDVIState *s)
>> +{
>> +    MSIMessage msg;
>> +    if (msi_enabled(&s->pci.dev)) {
>> +        msg = msi_get_message(&s->pci.dev, 0);
>> +        address_space_stl_le(&address_space_memory, msg.address,
>> msg.data,
>> +                         MEMTXATTRS_UNSPECIFIED, NULL);
>>
> Nit: don't you want to set the requester ID to the IOMMU's BDF here?


We could though I overlooked that because IOMMU interrupt requests are not
processed by IOMMU.


>
>
> Valentine
>
>>
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-11  8:32     ` David Kiarie
@ 2016-08-11  8:35       ` Valentine Sinitsyn
  0 siblings, 0 replies; 26+ messages in thread
From: Valentine Sinitsyn @ 2016-08-11  8:35 UTC (permalink / raw)
  To: David Kiarie
  Cc: QEMU Developers, Peter Xu, rkrcmar, Jan Kiszka, Eduardo Habkost,
	Michael S. Tsirkin



On 11.08.2016 13:32, David Kiarie wrote:
>
>
> On Thu, Aug 11, 2016 at 11:23 AM, Valentine Sinitsyn
> <valentine.sinitsyn@gmail.com <mailto:valentine.sinitsyn@gmail.com>> wrote:
>
>     Hi,
>
>
>     On 02.08.2016 13:39, David Kiarie wrote:
>
>         +static void amdvi_writeq_raw(AMDVIState *s, uint64_t val,
>         hwaddr addr)
>         +{+
>         +static void amdvi_generate_msi_interrupt(AMDVIState *s)
>         +{
>         +    MSIMessage msg;
>         +    if (msi_enabled(&s->pci.dev)) {
>         +        msg = msi_get_message(&s->pci.dev, 0);
>         +        address_space_stl_le(&address_space_memory,
>         msg.address, msg.data,
>         +                         MEMTXATTRS_UNSPECIFIED, NULL);
>
>     Nit: don't you want to set the requester ID to the IOMMU's BDF here?
>
>
> We could though I overlooked that because IOMMU interrupt requests are
> not processed by IOMMU.
True, that's the prefix. It's a matter of cleanness, not a bug, so you 
choose. Can be changed any time later in the tree.

Valentine

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-02  8:39 ` [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU David Kiarie
  2016-08-09  5:44   ` Peter Xu
  2016-08-11  8:23   ` Valentine Sinitsyn
@ 2016-08-12 19:10   ` Valentine Sinitsyn
  2016-08-12 19:40     ` David Kiarie
  2 siblings, 1 reply; 26+ messages in thread
From: Valentine Sinitsyn @ 2016-08-12 19:10 UTC (permalink / raw)
  To: David Kiarie, qemu-devel; +Cc: peterx, rkrcmar, jan.kiszka, ehabkost, mst

Hi David,

On 02.08.2016 13:39, David Kiarie wrote:
> Add AMD IOMMU emulaton to Qemu in addition to Intel IOMMU.
> The IOMMU does basic translation, error checking and has a
> minimal IOTLB implementation. This IOMMU bypassed the need
> for target aborts by responding with IOMMU_NONE access rights
> and exempts the region 0xfee00000-0xfeefffff from translation
> as it is the q35 interrupt region.
>
> We advertise features that are not yet implemented to please
> the Linux IOMMU driver.
>
> IOTLB aims at implementing commands on real IOMMUs which is
> essential for debugging and may not offer any performance
> benefits
>
> Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
> ---
>  hw/i386/Makefile.objs |    1 +
>  hw/i386/amd_iommu.c   | 1397 +++++++++++++++++++++++++++++++++++++++++++++++++
>  hw/i386/amd_iommu.h   |  390 ++++++++++++++
>  hw/i386/trace-events  |    7 +
>  4 files changed, 1795 insertions(+)
>  create mode 100644 hw/i386/amd_iommu.c
>  create mode 100644 hw/i386/amd_iommu.h
>
> diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
> index 90e94ff..909ead6 100644
> --- a/hw/i386/Makefile.objs
> +++ b/hw/i386/Makefile.objs
> @@ -3,6 +3,7 @@ obj-y += multiboot.o
>  obj-y += pc.o pc_piix.o pc_q35.o
>  obj-y += pc_sysfw.o
>  obj-y += x86-iommu.o intel_iommu.o
> +obj-y += amd_iommu.o
>  obj-$(CONFIG_XEN) += ../xenpv/ xen/
>
>  obj-y += kvmvapic.o
> diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
> new file mode 100644
> index 0000000..7b64dd7
> --- /dev/null
> +++ b/hw/i386/amd_iommu.c
> @@ -0,0 +1,1397 @@
> +/*
> + * QEMU emulation of AMD IOMMU (AMD-Vi)
> + *
> + * Copyright (C) 2011 Eduard - Gabriel Munteanu
> + * Copyright (C) 2015 David Kiarie, <davidkiarie4@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> +
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, see <http://www.gnu.org/licenses/>.
> + *
> + * Cache implementation inspired by hw/i386/intel_iommu.c
> + *
> + */
> +#include "qemu/osdep.h"
> +#include <math.h>
> +#include "hw/pci/msi.h"
> +#include "hw/i386/pc.h"
> +#include "hw/i386/amd_iommu.h"
> +#include "hw/pci/pci_bus.h"
> +#include "trace.h"
> +
> +/* used AMD-Vi MMIO registers */
> +const char *amdvi_mmio_low[] = {
> +    "AMDVI_MMIO_DEVTAB_BASE",
> +    "AMDVI_MMIO_CMDBUF_BASE",
> +    "AMDVI_MMIO_EVTLOG_BASE",
> +    "AMDVI_MMIO_CONTROL",
> +    "AMDVI_MMIO_EXCL_BASE",
> +    "AMDVI_MMIO_EXCL_LIMIT",
> +    "AMDVI_MMIO_EXT_FEATURES",
> +    "AMDVI_MMIO_PPR_BASE",
> +    "UNHANDLED"
> +};
> +const char *amdvi_mmio_high[] = {
> +    "AMDVI_MMIO_COMMAND_HEAD",
> +    "AMDVI_MMIO_COMMAND_TAIL",
> +    "AMDVI_MMIO_EVTLOG_HEAD",
> +    "AMDVI_MMIO_EVTLOG_TAIL",
> +    "AMDVI_MMIO_STATUS",
> +    "AMDVI_MMIO_PPR_HEAD",
> +    "AMDVI_MMIO_PPR_TAIL",
> +    "UNHANDLED"
> +};
> +typedef struct AMDVIAddressSpace {
> +    uint8_t bus_num;            /* bus number                           */
> +    uint8_t devfn;              /* device function                      */
> +    AMDVIState *iommu_state;    /* AMDVI - one per machine              */
> +    MemoryRegion iommu;         /* Device's address translation region  */
> +    MemoryRegion iommu_ir;      /* Device's interrupt remapping region  */
> +    AddressSpace as;            /* device's corresponding address space */
> +} AMDVIAddressSpace;
> +
> +/* AMDVI cache entry */
> +typedef struct AMDVIIOTLBEntry {
> +    uint64_t gfn;               /* guest frame number  */
> +    uint16_t domid;             /* assigned domain id  */
> +    uint16_t devid;             /* device owning entry */
> +    uint64_t perms;             /* access permissions  */
> +    uint64_t translated_addr;   /* translated address  */
> +    uint64_t page_mask;         /* physical page size  */
> +} AMDVIIOTLBEntry;
> +
> +/* serialize IOMMU command processing */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;               /* command type           */
> +    uint64_t reserved:8;
> +    uint64_t store_addr:49;        /* addr to write          */
> +    uint64_t completion_flush:1;   /* allow more executions  */
> +    uint64_t completion_int:1;     /* set MMIOWAITINT        */
> +    uint64_t completion_store:1;   /* write data to address  */
> +#else
> +    uint64_t completion_store:1;
> +    uint64_t completion_int:1;
> +    uint64_t completion_flush:1;
> +    uint64_t store_addr:49;
> +    uint64_t reserved:8;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t store_data;           /* data to write          */
> +} CMDCompletionWait;
> +
> +/* invalidate internal caches for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t devid;                /* device to invalidate   */
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;               /* command type           */
> +#else
> +    uint64_t devid;
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t reserved_2;
> +} CMDInvalDevEntry;
> +
> +/* invalidate a range of entries in IOMMU translation cache for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;               /* command type           */
> +    uint64_t reserved_2:12
> +    uint64_t domid:16;             /* domain to inval for    */
> +    uint64_t reserved_1:12;
> +    uint64_t pasid:20;
> +#else
> +    uint64_t pasid:20;
> +    uint64_t reserved_1:12;
> +    uint64_t domid:16;
> +    uint64_t reserved_2:12;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t address:51;          /* address to invalidate   */
> +    uint64_t reserved_3:10;
> +    uint64_t guest:1;             /* G/N invalidation        */
> +    uint64_t pde:1;               /* invalidate cached ptes  */
> +    uint64_t size:1               /* size of invalidation    */
> +#else
> +    uint64_t size:1;
> +    uint64_t pde:1;
> +    uint64_t guest:1;
> +    uint64_t reserved_3:10;
> +    uint64_t address:51;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDInvalIommuPages;
> +
> +/* inval specified address for devid from remote IOTLB */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;            /* command type        */
> +    uint64_t pasid_19_6:4;
> +    uint64_t pasid_7_0:8;
> +    uint64_t queuid:16;
> +    uint64_t maxpend:8;
> +    uint64_t pasid_15_8;
> +    uint64_t devid:16;         /* related devid        */
> +#else
> +    uint64_t devid:16;
> +    uint64_t pasid_15_8:8;
> +    uint64_t maxpend:8;
> +    uint64_t queuid:16;
> +    uint64_t pasid_7_0:8;
> +    uint64_t pasid_19_6:4;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t address:52;       /* invalidate addr      */
> +    uint64_t reserved_2:9;
> +    uint64_t guest:1;          /* G/N invalidate       */
> +    uint64_t reserved_1:1;
> +    uint64_t size:1            /* size of invalidation */
> +#else
> +    uint64_t size:1;
> +    uint64_t reserved_1:1;
> +    uint64_t guest:1;
> +    uint64_t reserved_2:9;
> +    uint64_t address:52;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDInvalIOTLBPages;
> +
> +/* invalidate all cached interrupt info for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;          /* command type        */
> +    uint64_t reserved_1:44;
> +    uint64_t devid:16;        /* related devid       */
> +#else
> +    uint64_t devid:16;
> +    uint64_t reserved_1:44;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t reserved_2;
> +} CMDInvalIntrTable;
> +
> +/* load adddress translation info for devid into translation cache */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;          /* command type       */
> +    uint64_t reserved_2:8;
> +    uint64_t pasid_19_0:20;
> +    uint64_t pfcount_7_0:8;
> +    uint64_t reserved_1:8;
> +    uint64_t devid;           /* related devid      */
> +#else
> +    uint64_t devid;
> +    uint64_t reserved_1:8;
> +    uint64_t pfcount_7_0:8;
> +    uint64_t pasid_19_0:20;
> +    uint64_t reserved_2:8;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t address:52;     /* invalidate address       */
> +    uint64_t reserved_5:7;
> +    uint64_t inval:1;        /* inval matching entries   */
> +    uint64_t reserved_4:1;
> +    uint64_t guest:1;        /* G/N invalidate           */
> +    uint64_t reserved_3:1;
> +    uint64_t size:1;         /* prefetched page size     */
> +#else
> +    uint64_t size:1;
> +    uint64_t reserved_3:1;
> +    uint64_t guest:1;
> +    uint64_t reserved_4:1;
> +    uint64_t inval:1;
> +    uint64_t reserved_5:7;
> +    uint64_t address:52;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDPrefetchPages;
> +
> +/* clear all address translation/interrupt remapping caches */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint64_t type:4;              /* command type       */
> +    uint64_t reserved_1:60;
> +#else
> +    uint64_t reserved_1:60;
> +    uint64_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +    uint64_t reserved_2;
> +} CMDInvalIommuAll;
> +
> +/* issue a PCIe completion packet for devid */
> +typedef struct QEMU_PACKED {
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t devid;               /* related devid      */
> +    uint32_t reserved_1;
> +#else
> +    uint32_t reserved_1;
> +    uint32_t devid;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t type:4;              /* command type       */
> +    uint32_t reserved_2:8;
> +    uint32_t pasid_19_0:20
> +#else
> +    uint32_t pasid_19_0:20;
> +    uint32_t reserved_2:8;
> +    uint32_t type:4;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t reserved_3:29;
> +    uint32_t guest:1;
> +    uint32_t reserved_4:2;
> +#else
> +    uint32_t reserved_3:2;
> +    uint32_t guest:1;
> +    uint32_t reserved_4:29;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +
> +#ifdef HOST_WORDS_BIGENDIAN
> +    uint32_t reserved_5:16;
> +    uint32_t completion_tag:16    /* PCIe PRI informatin */
> +#else
> +    uint32_t completion_tag:16;
> +    uint32_t reserved_5:16;
> +#endif /* __BIG_ENDIAN_BITFIELD */
> +} CMDCompletePPR;
> +
> +/* configure MMIO registers at startup/reset */
> +static void amdvi_set_quad(AMDVIState *s, hwaddr addr, uint64_t val,
> +                           uint64_t romask, uint64_t w1cmask)
> +{
> +    stq_le_p(&s->mmior[addr], val);
> +    stq_le_p(&s->romask[addr], romask);
> +    stq_le_p(&s->w1cmask[addr], w1cmask);
> +}
> +
> +static uint16_t amdvi_readw(AMDVIState *s, hwaddr addr)
> +{
> +    return lduw_le_p(&s->mmior[addr]);
> +}
> +
> +static uint32_t amdvi_readl(AMDVIState *s, hwaddr addr)
> +{
> +    return ldl_le_p(&s->mmior[addr]);
> +}
> +
> +static uint64_t amdvi_readq(AMDVIState *s, hwaddr addr)
> +{
> +    return ldq_le_p(&s->mmior[addr]);
> +}
> +
> +/* internal write */
> +static void amdvi_writeq_raw(AMDVIState *s, uint64_t val, hwaddr addr)
> +{
> +    stq_le_p(&s->mmior[addr], val);
> +}
> +
> +/* external write */
> +static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
> +{
> +    uint16_t romask = lduw_le_p(&s->romask[addr]);
> +    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
> +    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
> +    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
> +{
> +    uint32_t romask = ldl_le_p(&s->romask[addr]);
> +    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
> +    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
> +    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    uint64_t romask = ldq_le_p(&s->romask[addr]);
> +    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
> +    uint32_t oldval = ldq_le_p(&s->mmior[addr]);
> +    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
> +}
> +
> +/* OR a 64-bit register with a 64-bit value */
> +static bool amdvi_orq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    return amdvi_readq(s, addr) | val;
> +}
> +
> +/* OR a 64-bit register with a 64-bit value storing result in the register */
> +static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
> +}
> +
> +/* AND a 64-bit register with a 64-bit value storing result in the register */
> +static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
> +{
> +   amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) & val);
> +}
> +
> +static void amdvi_generate_msi_interrupt(AMDVIState *s)
> +{
> +    MSIMessage msg;
> +    if (msi_enabled(&s->pci.dev)) {
> +        msg = msi_get_message(&s->pci.dev, 0);
> +        address_space_stl_le(&address_space_memory, msg.address, msg.data,
> +                         MEMTXATTRS_UNSPECIFIED, NULL);
> +    }
> +}
> +
> +static void amdvi_log_event(AMDVIState *s, uint64_t *evt)
> +{
> +    /* event logging not enabled */
> +    if (!s->evtlog_enabled || amdvi_orq(s, AMDVI_MMIO_STATUS,
> +        AMDVI_MMIO_STATUS_EVT_OVF)) {
> +        return;
> +    }
> +
> +    /* event log buffer full */
> +    if (s->evtlog_tail >= s->evtlog_len) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_OVF);
> +        /* generate interrupt */
> +        amdvi_generate_msi_interrupt(s);
> +        return;
> +    }
> +
> +    if (dma_memory_write(&address_space_memory, s->evtlog_len + s->evtlog_tail,
> +        &evt, AMDVI_EVENT_LEN)) {
> +        trace_amdvi_evntlog_fail(s->evtlog, s->evtlog_tail);
> +    }
> +
> +    s->evtlog_tail += AMDVI_EVENT_LEN;
> +    amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
> +    amdvi_generate_msi_interrupt(s);
> +}
> +
> +static void amdvi_setevent_bits(uint64_t *buffer, uint64_t value, int start,
> +                                int length)
> +{
> +    int index = start / 64, bitpos = start % 64;
> +    uint64_t mask = ((1 << length) - 1) << bitpos;
> +    buffer[index] &= ~mask;
> +    buffer[index] |= (value << bitpos) & mask;
> +}
> +/*
> + * AMDVi event structure
> + *    0:15   -> DeviceID
> + *    55:63  -> event type + miscellaneous info
> + *    64:127 -> related address
> + */
> +static void amdvi_encode_event(uint64_t *evt, uint16_t devid, uint64_t addr,
> +                               uint16_t info)
> +{
> +    amdvi_setevent_bits(evt, devid, 0, 16);
> +    amdvi_setevent_bits(evt, info, 55, 8);
> +    amdvi_setevent_bits(evt, addr, 63, 64);
> +}
> +/* log an error encountered page-walking
> + *
> + * @addr: virtual address in translation request
> + */
> +static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
> +                             hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +/*
> + * log a master abort accessing device table
> + *  @devtab : address of device table entry
> + *  @info : error flags
> + */
> +static void amdvi_log_devtab_error(AMDVIState *s, uint16_t devid,
> +                                   hwaddr devtab, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_DEV_TAB_HW_ERROR;
> +
> +    amdvi_encode_event(evt, devid, devtab, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +
> +/* log an event trying to access command buffer
> + *   @addr : address that couldn't be accessed
> + */
> +static void amdvi_log_command_error(AMDVIState *s, hwaddr addr)
> +{
> +    uint64_t evt[4], info = AMDVI_EVENT_COMMAND_HW_ERROR;
> +
> +    amdvi_encode_event(evt, 0, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +            PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +
> +/* log an illegal comand event
> + *   @addr : address of illegal command
> + */
> +static void amdvi_log_illegalcom_error(AMDVIState *s, uint16_t info,
> +                                       hwaddr addr)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_ILLEGAL_COMMAND_ERROR;
> +    amdvi_encode_event(evt, 0, addr, info);
> +    amdvi_log_event(s, evt);
> +}
> +
> +/* log an error accessing device table
> + *
> + *  @devid : device owning the table entry
> + *  @devtab : address of device table entry
> + *  @info : error flags
> + */
> +static void amdvi_log_illegaldevtab_error(AMDVIState *s, uint16_t devid,
> +                                          hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +}
> +
> +/* log an error accessing a PTE entry
> + * @addr : address that couldn't be accessed
> + */
> +static void amdvi_log_pagetab_error(AMDVIState *s, uint16_t devid,
> +                                    hwaddr addr, uint16_t info)
> +{
> +    uint64_t evt[4];
> +
> +    info |= AMDVI_EVENT_PAGE_TAB_HW_ERROR;
> +    amdvi_encode_event(evt, devid, addr, info);
> +    amdvi_log_event(s, evt);
> +    pci_word_test_and_set_mask(s->pci.dev.config + PCI_STATUS,
> +             PCI_STATUS_SIG_TARGET_ABORT);
> +}
> +
> +static gboolean amdvi_uint64_equal(gconstpointer v1, gconstpointer v2)
> +{
> +    return *((const uint64_t *)v1) == *((const uint64_t *)v2);
> +}
> +
> +static guint amdvi_uint64_hash(gconstpointer v)
> +{
> +    return (guint)*(const uint64_t *)v;
> +}
> +
> +static AMDVIIOTLBEntry *amdvi_iotlb_lookup(AMDVIState *s, hwaddr addr,
> +                                           uint64_t devid)
> +{
> +    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
> +                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +    return g_hash_table_lookup(s->iotlb, &key);
> +}
> +
> +static void amdvi_iotlb_reset(AMDVIState *s)
> +{
> +    assert(s->iotlb);
> +    g_hash_table_remove_all(s->iotlb);
> +}
> +
> +static gboolean amdvi_iotlb_remove_by_devid(gpointer key, gpointer value,
> +                                            gpointer user_data)
> +{
> +    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
> +    uint16_t devid = *(uint16_t *)user_data;
> +    return entry->devid == devid;
> +}
> +
> +static void amdvi_iotlb_remove_page(AMDVIState *s, hwaddr addr,
> +                                    uint64_t devid)
> +{
> +    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
> +                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +    g_hash_table_remove(s->iotlb, &key);
> +}
> +
> +static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
> +                               uint64_t gpa, IOMMUTLBEntry to_cache,
> +                               uint16_t domid)
> +{
> +    AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
> +    uint64_t *key = g_malloc(sizeof(key));
> +    uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
> +
> +    /* don't cache erroneous translations */
> +    if (to_cache.perm != IOMMU_NONE) {
> +        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
> +                PCI_FUNC(devid), gpa, to_cache.translated_addr);
> +
> +        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
> +            trace_amdvi_iotlb_reset();
> +            amdvi_iotlb_reset(s);
> +        }
> +
> +        entry->gfn = gfn;
> +        entry->domid = domid;
> +        entry->perms = to_cache.perm;
> +        entry->translated_addr = to_cache.translated_addr;
> +        entry->page_mask = to_cache.addr_mask;
> +        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
> +        g_hash_table_replace(s->iotlb, key, entry);
> +    }
> +}
> +
> +static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait *wait)
> +{
> +    /* pad the last 3 bits */
> +    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
> +    uint64_t data = cpu_to_le64(wait->store_data);
> +
> +    if (wait->reserved) {
> +        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    if (wait->completion_store) {
> +        if (dma_memory_write(&address_space_memory, addr, &data,
> +            AMDVI_COMPLETION_DATA_SIZE))
> +        {
> +            trace_amdvi_completion_wait_fail(addr);
> +        }
> +    }
> +
> +    /* set completion interrupt */
> +    if (wait->completion_int) {
> +        amdvi_orq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
> +        /* generate interrupt */
> +        amdvi_generate_msi_interrupt(s);
> +    }
> +
> +    trace_amdvi_completion_wait(addr, data);
> +}
> +
> +/* log error without aborting since linux seems to be using reserved bits */
> +static void amdvi_inval_devtab_entry(AMDVIState *s, void *cmd)
> +{
> +    CMDInvalIntrTable *inval = (CMDInvalIntrTable *)cmd;
> +    /* This command should invalidate internal caches of which there isn't */
> +    if (inval->reserved_1 || inval->reserved_2) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +    trace_amdvi_devtab_inval(PCI_BUS_NUM(inval->devid), PCI_SLOT(inval->devid),
> +            PCI_FUNC(inval->devid));
> +}
> +
> +static void amdvi_complete_ppr(AMDVIState *s, void *cmd)
> +{
> +    CMDCompletePPR *pprcomp = (CMDCompletePPR *)cmd;
> +
> +    if (pprcomp->reserved_1 || pprcomp->reserved_2 || pprcomp->reserved_3 ||
> +        pprcomp->reserved_4 || pprcomp->reserved_5) {
> +        amdvi_log_illegalcom_error(s, pprcomp->type, s->cmdbuf +
> +                s->cmdbuf_head);
> +    }
> +    trace_amdvi_ppr_exec();
> +}
> +
> +static void amdvi_inval_all(AMDVIState *s, CMDInvalIommuAll *inval)
> +{
> +    if (inval->reserved_2 || inval->reserved_1) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    amdvi_iotlb_reset(s);
> +    trace_amdvi_all_inval();
> +}
> +
> +static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
> +                                            gpointer user_data)
> +{
> +    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
> +    uint16_t domid = *(uint16_t *)user_data;
> +    return entry->domid == domid;
> +}
> +
> +/* we don't have devid - we can't remove pages by address */
> +static void amdvi_inval_pages(AMDVIState *s, CMDInvalIommuPages *inval)
> +{
> +    uint16_t domid = inval->domid;
> +
> +    if (inval->reserved_1 || inval->reserved_2 || inval->reserved_3) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +    }
> +
> +    g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_domid,
> +                                &domid);
> +    trace_amdvi_pages_inval(inval->domid);
> +}
> +
> +static void amdvi_prefetch_pages(AMDVIState *s, CMDPrefetchPages *prefetch)
> +{
> +    if (prefetch->reserved_1 || prefetch->reserved_2 || prefetch->reserved_3
> +        || prefetch->reserved_4 || prefetch->reserved_5) {
> +        amdvi_log_illegalcom_error(s, prefetch->type, s->cmdbuf +
> +                s->cmdbuf_head);
> +    }
> +    trace_amdvi_prefetch_pages();
> +}
> +
> +static void amdvi_inval_inttable(AMDVIState *s, CMDInvalIntrTable *inval)
> +{
> +    if (inval->reserved_1 || inval->reserved_2) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +        return;
> +    }
> +    trace_amdvi_intr_inval();
> +}
> +
> +/* FIXME: Try to work with the specified size instead of all the pages
> + * when the S bit is on
> + */
> +static void iommu_inval_iotlb(AMDVIState *s, CMDInvalIOTLBPages *inval)
> +{
> +    uint16_t devid = inval->devid;
> +
> +    if (inval->reserved_1 || inval->reserved_2) {
> +        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
> +        return;
> +    }
> +
> +    if (inval->size) {
> +        g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_devid,
> +                                    &devid);
> +    } else {
> +        amdvi_iotlb_remove_page(s, inval->address << 12, inval->devid);
> +    }
> +    trace_amdvi_iotlb_inval();
> +}
> +
> +/* not honouring reserved bits is regarded as an illegal command */
> +static void amdvi_cmdbuf_exec(AMDVIState *s)
> +{
> +    CMDCompletionWait cmd;
> +
> +    if (dma_memory_read(&address_space_memory, s->cmdbuf + s->cmdbuf_head,
> +        &cmd, AMDVI_COMMAND_SIZE)) {
> +        trace_amdvi_command_read_fail(s->cmdbuf, s->cmdbuf_head);
> +        amdvi_log_command_error(s, s->cmdbuf + s->cmdbuf_head);
> +        return;
> +    }
> +
> +    switch (cmd.type) {
> +    case AMDVI_CMD_COMPLETION_WAIT:
> +        amdvi_completion_wait(s, (CMDCompletionWait *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_DEVTAB_ENTRY:
> +        amdvi_inval_devtab_entry(s, (CMDInvalDevEntry *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_AMDVI_PAGES:
> +        amdvi_inval_pages(s, (CMDInvalIommuPages *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_IOTLB_PAGES:
> +        iommu_inval_iotlb(s, (CMDInvalIOTLBPages *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_INTR_TABLE:
> +        amdvi_inval_inttable(s, (CMDInvalIntrTable *)&cmd);
> +        break;
> +    case AMDVI_CMD_PREFETCH_AMDVI_PAGES:
> +        amdvi_prefetch_pages(s, (CMDPrefetchPages *)&cmd);
> +        break;
> +    case AMDVI_CMD_COMPLETE_PPR_REQUEST:
> +        amdvi_complete_ppr(s, (CMDCompletePPR *)&cmd);
> +        break;
> +    case AMDVI_CMD_INVAL_AMDVI_ALL:
> +        amdvi_inval_all(s, (CMDInvalIommuAll *)&cmd);
> +        break;
> +    default:
> +        trace_amdvi_unhandled_command(cmd.type);
> +        /* log illegal command */
> +        amdvi_log_illegalcom_error(s, cmd.type,
> +                                   s->cmdbuf + s->cmdbuf_head);
> +    }
> +}
> +
> +static void amdvi_cmdbuf_run(AMDVIState *s)
> +{
> +    if (!s->cmdbuf_enabled) {
> +        trace_amdvi_command_error(amdvi_readq(s, AMDVI_MMIO_CONTROL));
> +        return;
> +    }
> +
> +    /* check if there is work to do. */
> +    while (s->cmdbuf_head != s->cmdbuf_tail) {
> +         trace_amdvi_command_exec(s->cmdbuf_head, s->cmdbuf_tail, s->cmdbuf);
> +         amdvi_cmdbuf_exec(s);
> +         s->cmdbuf_head += AMDVI_COMMAND_SIZE;
> +         amdvi_writeq_raw(s, s->cmdbuf_head, AMDVI_MMIO_COMMAND_HEAD);
> +
> +        /* wrap head pointer */
> +        if (s->cmdbuf_head >= s->cmdbuf_len * AMDVI_COMMAND_SIZE) {
> +            s->cmdbuf_head = 0;
> +        }
> +    }
> +}
> +
> +static void amdvi_mmio_trace(hwaddr addr, unsigned size)
> +{
> +    uint8_t index = addr & ~0x2000;
> +
> +    if ((addr & 0x2000)) {
> +        /* high table */
> +        index = index >= AMDVI_MMIO_REGS_HIGH ? AMDVI_MMIO_REGS_HIGH : index;
> +        trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
> +    } else {
> +        index = index >= AMDVI_MMIO_REGS_LOW ? AMDVI_MMIO_REGS_LOW : index;
> +        trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
> +    }
> +}
> +
> +static uint64_t amdvi_mmio_read(void *opaque, hwaddr addr, unsigned size)
> +{
> +    AMDVIState *s = opaque;
> +
> +    uint64_t val = -1;
> +    if (addr + size > AMDVI_MMIO_SIZE) {
> +        trace_amdvi_mmio_read("error: addr outside region: max ",
> +                (uint64_t)AMDVI_MMIO_SIZE, addr, size);
> +        return (uint64_t)-1;
> +    }
> +
> +    if (size == 2) {
> +        val = amdvi_readw(s, addr);
> +    } else if (size == 4) {
> +        val = amdvi_readl(s, addr);
> +    } else if (size == 8) {
> +        val = amdvi_readq(s, addr);
> +    }
> +    amdvi_mmio_trace(addr, size);
> +
> +    return val;
> +}
> +
> +static void amdvi_handle_control_write(AMDVIState *s)
> +{
> +    unsigned long control = amdvi_readq(s, AMDVI_MMIO_CONTROL);
> +    s->enabled = !!(control & AMDVI_MMIO_CONTROL_AMDVIEN);
> +
> +    s->ats_enabled = !!(control & AMDVI_MMIO_CONTROL_HTTUNEN);
> +    s->evtlog_enabled = s->enabled && !!(control &
> +                        AMDVI_MMIO_CONTROL_EVENTLOGEN);
> +
> +    s->evtlog_intr = !!(control & AMDVI_MMIO_CONTROL_EVENTINTEN);
> +    s->completion_wait_intr = !!(control & AMDVI_MMIO_CONTROL_COMWAITINTEN);
> +    s->cmdbuf_enabled = s->enabled && !!(control &
> +                        AMDVI_MMIO_CONTROL_CMDBUFLEN);
> +
> +    /* update the flags depending on the control register */
> +    if (s->cmdbuf_enabled) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_CMDBUF_RUN);
> +    } else {
> +        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_CMDBUF_RUN);
> +    }
> +    if (s->evtlog_enabled) {
> +        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_RUN);
> +    } else {
> +        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_EVT_RUN);
> +    }
> +
> +    trace_amdvi_control_status(control);
> +    amdvi_cmdbuf_run(s);
> +}
> +
> +static inline void amdvi_handle_devtab_write(AMDVIState *s)
> +
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_DEVICE_TABLE);
> +    s->devtab = (val & AMDVI_MMIO_DEVTAB_BASE_MASK);
> +
> +    /* set device table length */
> +    s->devtab_len = ((val & AMDVI_MMIO_DEVTAB_SIZE_MASK) + 1 *
> +                    (AMDVI_MMIO_DEVTAB_SIZE_UNIT /
> +                     AMDVI_MMIO_DEVTAB_ENTRY_SIZE));
> +}
> +
> +static inline void amdvi_handle_cmdhead_write(AMDVIState *s)
> +{
> +    s->cmdbuf_head = amdvi_readq(s, AMDVI_MMIO_COMMAND_HEAD)
> +                     & AMDVI_MMIO_CMDBUF_HEAD_MASK;
> +    amdvi_cmdbuf_run(s);
> +}
> +
> +static inline void amdvi_handle_cmdbase_write(AMDVIState *s)
> +{
> +    s->cmdbuf = amdvi_readq(s, AMDVI_MMIO_COMMAND_BASE)
> +                & AMDVI_MMIO_CMDBUF_BASE_MASK;
> +    s->cmdbuf_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_CMDBUF_SIZE_BYTE)
> +                    & AMDVI_MMIO_CMDBUF_SIZE_MASK);
> +    s->cmdbuf_head = s->cmdbuf_tail = 0;
> +}
> +
> +static inline void amdvi_handle_cmdtail_write(AMDVIState *s)
> +{
> +    s->cmdbuf_tail = amdvi_readq(s, AMDVI_MMIO_COMMAND_TAIL)
> +                     & AMDVI_MMIO_CMDBUF_TAIL_MASK;
> +    amdvi_cmdbuf_run(s);
> +}
> +
> +static inline void amdvi_handle_excllim_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EXCL_LIMIT);
> +    s->excl_limit = (val & AMDVI_MMIO_EXCL_LIMIT_MASK) |
> +                    AMDVI_MMIO_EXCL_LIMIT_LOW;
> +}
> +
> +static inline void amdvi_handle_evtbase_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_BASE);
> +    s->evtlog = val & AMDVI_MMIO_EVTLOG_BASE_MASK;
> +    s->evtlog_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_EVTLOG_SIZE_BYTE)
> +                    & AMDVI_MMIO_EVTLOG_SIZE_MASK);
> +}
> +
> +static inline void amdvi_handle_evttail_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_TAIL);
> +    s->evtlog_tail = val & AMDVI_MMIO_EVTLOG_TAIL_MASK;
> +}
> +
> +static inline void amdvi_handle_evthead_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_HEAD);
> +    s->evtlog_head = val & AMDVI_MMIO_EVTLOG_HEAD_MASK;
> +}
> +
> +static inline void amdvi_handle_pprbase_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_BASE);
> +    s->ppr_log = val & AMDVI_MMIO_PPRLOG_BASE_MASK;
> +    s->pprlog_len = 1UL << (amdvi_readq(s, AMDVI_MMIO_PPRLOG_SIZE_BYTE)
> +                    & AMDVI_MMIO_PPRLOG_SIZE_MASK);
> +}
> +
> +static inline void amdvi_handle_pprhead_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_HEAD);
> +    s->pprlog_head = val & AMDVI_MMIO_PPRLOG_HEAD_MASK;
> +}
> +
> +static inline void amdvi_handle_pprtail_write(AMDVIState *s)
> +{
> +    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_TAIL);
> +    s->pprlog_tail = val & AMDVI_MMIO_PPRLOG_TAIL_MASK;
> +}
> +
> +/* FIXME: something might go wrong if System Software writes in chunks
> + * of one byte but linux writes in chunks of 4 bytes so currently it
> + * works correctly with linux but will definitely be busted if software
> + * reads/writes 8 bytes
> + */
> +static void amdvi_mmio_reg_write(AMDVIState *s, unsigned size, uint64_t val,
> +                                 hwaddr addr)
> +{
> +    if (size == 2) {
> +        amdvi_writew(s, addr, val);
> +    } else if (size == 4) {
> +        amdvi_writel(s, addr, val);
> +    } else if (size == 8) {
> +        amdvi_writeq(s, addr, val);
> +    }
> +}
> +
> +static void amdvi_mmio_write(void *opaque, hwaddr addr, uint64_t val,
> +                             unsigned size)
> +{
> +    AMDVIState *s = opaque;
> +    unsigned long offset = addr & 0x07;
> +
> +    if (addr + size > AMDVI_MMIO_SIZE) {
> +        trace_amdvi_mmio_write("error: addr outside region: max ",
> +                (uint64_t)AMDVI_MMIO_SIZE, size, val, offset);
> +        return;
> +    }
> +
> +    amdvi_mmio_trace(addr, size);
> +    switch (addr & ~0x07) {
> +    case AMDVI_MMIO_CONTROL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_control_write(s);
> +        break;
> +    case AMDVI_MMIO_DEVICE_TABLE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +       /*  set device table address
> +        *   This also suffers from inability to tell whether software
> +        *   is done writing
> +        */
> +
> +        if (offset || (size == 8)) {
> +            amdvi_handle_devtab_write(s);
> +        }
> +        break;
> +    case AMDVI_MMIO_COMMAND_HEAD:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_cmdhead_write(s);
> +        break;
> +    case AMDVI_MMIO_COMMAND_BASE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        /* FIXME - make sure System Software has finished writing incase
> +         * it writes in chucks less than 8 bytes in a robust way.As for
> +         * now, this hacks works for the linux driver
> +         */
> +        if (offset || (size == 8)) {
> +            amdvi_handle_cmdbase_write(s);
> +        }
> +        break;
> +    case AMDVI_MMIO_COMMAND_TAIL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_cmdtail_write(s);
> +        break;
> +    case AMDVI_MMIO_EVENT_BASE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_evtbase_write(s);
> +        break;
> +    case AMDVI_MMIO_EVENT_HEAD:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_evthead_write(s);
> +        break;
> +    case AMDVI_MMIO_EVENT_TAIL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_evttail_write(s);
> +        break;
> +    case AMDVI_MMIO_EXCL_LIMIT:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_excllim_write(s);
> +        break;
> +        /* PPR log base - unused for now */
> +    case AMDVI_MMIO_PPR_BASE:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_pprbase_write(s);
> +        break;
> +        /* PPR log head - also unused for now */
> +    case AMDVI_MMIO_PPR_HEAD:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_pprhead_write(s);
> +        break;
> +        /* PPR log tail - unused for now */
> +    case AMDVI_MMIO_PPR_TAIL:
> +        amdvi_mmio_reg_write(s, size, val, addr);
> +        amdvi_handle_pprtail_write(s);
> +        break;
> +    }
> +}
> +
> +static inline uint64_t amdvi_get_perms(uint64_t entry)
> +{
> +    return (entry & (AMDVI_DEV_PERM_READ | AMDVI_DEV_PERM_WRITE)) >>
> +           AMDVI_DEV_PERM_SHIFT;
> +}
> +
> +/* a valid entry should have V = 1 and reserved bits honoured */
> +static bool amdvi_validate_dte(AMDVIState *s, uint16_t devid,
> +                               uint64_t *dte)
> +{
> +    if ((dte[0] & AMDVI_DTE_LOWER_QUAD_RESERVED)
> +        || (dte[1] & AMDVI_DTE_MIDDLE_QUAD_RESERVED)
> +        || (dte[2] & AMDVI_DTE_UPPER_QUAD_RESERVED) || dte[3]) {
> +        amdvi_log_illegaldevtab_error(s, devid,
> +                                s->devtab + devid * AMDVI_DEVTAB_ENTRY_SIZE, 0);
> +        return false;
> +    }
> +
> +    return dte[0] & AMDVI_DEV_VALID;
> +}
> +
> +/* get a device table entry given the devid */
> +static bool amdvi_get_dte(AMDVIState *s, int devid, uint64_t *entry)
> +{
> +    uint32_t offset = devid * AMDVI_DEVTAB_ENTRY_SIZE;
> +
> +    if (dma_memory_read(&address_space_memory, s->devtab + offset, entry,
> +                        AMDVI_DEVTAB_ENTRY_SIZE)) {
> +        trace_amdvi_dte_get_fail(s->devtab, offset);
> +        /* log error accessing dte */
> +        amdvi_log_devtab_error(s, devid, s->devtab + offset, 0);
> +        return false;
> +    }
> +
> +    *entry = le64_to_cpu(*entry);
> +    if (!amdvi_validate_dte(s, devid, entry)) {
> +        trace_amdvi_invalid_dte(entry[0]);
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
> +/* get pte translation mode */
> +static inline uint8_t get_pte_translation_mode(uint64_t pte)
> +{
> +    return (pte >> AMDVI_DEV_MODE_RSHIFT) & AMDVI_DEV_MODE_MASK;
> +}
> +
> +static inline uint64_t pte_override_page_mask(uint64_t pte)
> +{
> +    uint8_t page_mask = 12;
> +    uint64_t addr = (pte & AMDVI_DEV_PT_ROOT_MASK) ^ AMDVI_DEV_PT_ROOT_MASK;
> +    /* find the first zero bit */
> +    while (addr & 1) {
> +        page_mask++;
> +        addr = addr >> 1;
> +    }
> +
> +    return ~((1ULL << page_mask) - 1);
> +}
> +
> +static inline uint64_t pte_get_page_mask(uint64_t oldlevel)
> +{
> +    return ~((1UL << ((oldlevel * 9) + 3)) - 1);
> +}
> +
> +static inline uint64_t amdvi_get_pte_entry(AMDVIState *s, uint64_t pte_addr,
> +                                          uint16_t devid)
> +{
> +    uint64_t pte;
> +
> +    if (dma_memory_read(&address_space_memory, pte_addr, &pte, sizeof(pte))) {
> +        trace_amdvi_get_pte_hwerror(pte_addr);
> +        amdvi_log_pagetab_error(s, devid, pte_addr, 0);
> +        pte = 0;
> +        return pte;
> +    }
> +
> +    pte = cpu_to_le64(pte);
> +    return pte;
> +}
> +
> +static void amdvi_page_walk(AMDVIAddressSpace *as, uint64_t *dte,
> +                            IOMMUTLBEntry *ret, unsigned perms,
> +                            hwaddr addr)
> +{
> +    unsigned level, present, pte_perms, oldlevel;
> +    uint64_t pte = dte[0], pte_addr, page_mask;
> +
> +    /* make sure the DTE has TV = 1 */
> +    if (pte & AMDVI_DEV_TRANSLATION_VALID) {
> +        level = get_pte_translation_mode(pte);
> +        if (level >= 7) {
> +            trace_amdvi_mode_invalid(level, addr);
> +            return;
> +        }
> +        if (level == 0) {
> +            goto no_remap;
> +        }
> +
> +        /* we are at the leaf page table or page table encodes a huge page */
> +        while (level > 0) {
> +            pte_perms = amdvi_get_perms(pte);
> +            present = pte & 1;
> +            if (!present || perms != (perms & pte_perms)) {
> +                amdvi_page_fault(as->iommu_state, as->devfn, addr, perms);
> +                trace_amdvi_page_fault(addr);
> +                return;
> +            }
> +
> +            /* go to the next lower level */
> +            pte_addr = pte & AMDVI_DEV_PT_ROOT_MASK;
> +            /* add offset and load pte */
> +            pte_addr += ((addr >> (3 + 9 * level)) & 0x1FF) << 3;
> +            pte = amdvi_get_pte_entry(as->iommu_state, pte_addr, as->devfn);
> +            if (!pte) {
> +                return;
> +            }
> +            oldlevel = level;
> +            level = get_pte_translation_mode(pte);
> +            if (level == 0x7) {
> +                break;
> +            }
> +        }
> +
> +        if (level == 0x7) {
> +            page_mask = pte_override_page_mask(pte);
> +        } else {
> +            page_mask = pte_get_page_mask(oldlevel);
> +        }
> +
> +        /* get access permissions from pte */
> +        ret->iova = addr & page_mask;
> +        ret->translated_addr = (pte & AMDVI_DEV_PT_ROOT_MASK) & page_mask;
> +        ret->addr_mask = ~page_mask;
> +        ret->perm = amdvi_get_perms(pte);
> +        return;
> +    }
> +no_remap:
> +    ret->iova = addr & AMDVI_PAGE_MASK_4K;
> +    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
> +    ret->perm = amdvi_get_perms(pte);
> +}
> +
> +static void amdvi_do_translate(AMDVIAddressSpace *as, hwaddr addr,
> +                               bool is_write, IOMMUTLBEntry *ret)
> +{
> +    AMDVIState *s = as->iommu_state;
> +    uint16_t devid = PCI_BDF(as->bus_num, as->devfn);
> +    AMDVIIOTLBEntry *iotlb_entry = amdvi_iotlb_lookup(s, addr, as->devfn);
> +    uint64_t entry[4];
> +
> +    if (iotlb_entry) {
> +        trace_amdvi_iotlb_hit(PCI_BUS_NUM(devid), PCI_SLOT(devid),
> +                PCI_FUNC(devid), addr, iotlb_entry->translated_addr);
> +        ret->iova = addr & ~iotlb_entry->page_mask;
> +        ret->translated_addr = iotlb_entry->translated_addr;
> +        ret->addr_mask = iotlb_entry->page_mask;
> +        ret->perm = iotlb_entry->perms;
> +        return;
> +    }
> +
> +    /* devices with V = 0 are not translated */
> +    if (!amdvi_get_dte(s, devid, entry)) {
> +        goto out;
> +    }
> +
> +    amdvi_page_walk(as, entry, ret,
> +                    is_write ? AMDVI_PERM_WRITE : AMDVI_PERM_READ, addr);
> +
> +    amdvi_update_iotlb(s, as->devfn, addr, *ret,
> +                       entry[1] & AMDVI_DEV_DOMID_ID_MASK);
> +    return;
> +
> +out:
> +    ret->iova = addr & AMDVI_PAGE_MASK_4K;
> +    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
> +    ret->perm = IOMMU_RW;
> +}
> +
> +static inline bool amdvi_is_interrupt_addr(hwaddr addr)
> +{
> +    return addr >= AMDVI_INT_ADDR_FIRST && addr <= AMDVI_INT_ADDR_LAST;
> +}
> +
> +static IOMMUTLBEntry amdvi_translate(MemoryRegion *iommu, hwaddr addr,
> +                                     bool is_write)
> +{
> +    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
> +    AMDVIState *s = as->iommu_state;
> +    IOMMUTLBEntry ret = {
> +        .target_as = &address_space_memory,
> +        .iova = addr,
> +        .translated_addr = 0,
> +        .addr_mask = ~(hwaddr)0,
> +        .perm = IOMMU_NONE
> +    };
> +
> +    if (!s->enabled) {
> +        /* AMDVI disabled - corresponds to iommu=off not
> +         * failure to provide any parameter
> +         */
> +        ret.iova = addr & AMDVI_PAGE_MASK_4K;
> +        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
> +        ret.perm = IOMMU_RW;
> +        return ret;
> +    } else if (amdvi_is_interrupt_addr(addr)) {
> +        ret.iova = addr & AMDVI_PAGE_MASK_4K;
> +        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
> +        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
> +        ret.perm = IOMMU_WO;
> +        return ret;
> +    }
> +
> +    amdvi_do_translate(as, addr, is_write, &ret);
> +    trace_amdvi_translation_result(as->bus_num, PCI_SLOT(as->devfn),
> +            PCI_FUNC(as->devfn), addr, ret.translated_addr);
> +    return ret;
> +}
> +
> +static AddressSpace *amdvi_host_dma_iommu(PCIBus *bus, void *opaque, int devfn)
> +{
> +    AMDVIState *s = opaque;
> +    AMDVIAddressSpace **iommu_as;
> +    int bus_num = pci_bus_num(bus);
> +
> +    iommu_as = s->address_spaces[bus_num];
> +
> +    /* allocate memory during the first run */
> +    if (!iommu_as) {
> +        iommu_as = g_malloc0(sizeof(AMDVIAddressSpace *) * PCI_DEVFN_MAX);
> +        s->address_spaces[bus_num] = iommu_as;
> +    }
> +
> +    /* set up AMDVI region */
> +    if (!iommu_as[devfn]) {
> +        iommu_as[devfn] = g_malloc0(sizeof(AMDVIAddressSpace));
> +        iommu_as[devfn]->bus_num = (uint8_t)bus_num;
> +        iommu_as[devfn]->devfn = (uint8_t)devfn;
> +        iommu_as[devfn]->iommu_state = s;
> +
> +        memory_region_init_iommu(&iommu_as[devfn]->iommu, OBJECT(s),
> +                                 &s->iommu_ops, "amd-iommu", UINT64_MAX);
> +        address_space_init(&iommu_as[devfn]->as, &iommu_as[devfn]->iommu,
> +                           "amd-iommu");
> +    }
> +    return &iommu_as[devfn]->as;
> +}
> +
> +static const MemoryRegionOps mmio_mem_ops = {
> +    .read = amdvi_mmio_read,
> +    .write = amdvi_mmio_write,
> +    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .impl = {
> +        .min_access_size = 1,
> +        .max_access_size = 8,
> +        .unaligned = false,
> +    },
> +    .valid = {
> +        .min_access_size = 1,
> +        .max_access_size = 8,
> +    }
> +};
> +
> +static void amdvi_iommu_notify_started(MemoryRegion *iommu)
> +{
> +    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
> +
> +    hw_error("device %02x.%02x.%x requires iommu notifier which is not "
> +             "currently supported", as->bus_num, PCI_SLOT(as->devfn),
> +             PCI_FUNC(as->devfn));
> +}
> +
> +static void amdvi_init(AMDVIState *s)
> +{
> +    amdvi_iotlb_reset(s);
> +
> +    s->iommu_ops.translate = amdvi_translate;
> +    s->iommu_ops.notify_started = amdvi_iommu_notify_started;
> +    s->devtab_len = 0;
> +    s->cmdbuf_len = 0;
> +    s->cmdbuf_head = 0;
> +    s->cmdbuf_tail = 0;
> +    s->evtlog_head = 0;
> +    s->evtlog_tail = 0;
> +    s->excl_enabled = false;
> +    s->excl_allow = false;
> +    s->mmio_enabled = false;
> +    s->enabled = false;
> +    s->ats_enabled = false;
> +    s->cmdbuf_enabled = false;
> +
> +    /* reset MMIO */
> +    memset(s->mmior, 0, AMDVI_MMIO_SIZE);
> +    amdvi_set_quad(s, AMDVI_MMIO_EXT_FEATURES, AMDVI_EXT_FEATURES,
> +            0xffffffffffffffef, 0);
> +    amdvi_set_quad(s, AMDVI_MMIO_STATUS, 0, 0x98, 0x67);
> +
> +    /* reset device ident */
> +    pci_config_set_vendor_id(s->pci.dev.config, PCI_VENDOR_ID_AMD);
> +    pci_config_set_prog_interface(s->pci.dev.config, 00);
> +    pci_config_set_device_id(s->pci.dev.config, s->devid);
> +    pci_config_set_class(s->pci.dev.config, 0x0806);
> +
> +    /* reset AMDVI specific capabilities, all r/o */
> +    pci_set_long(s->pci.dev.config + s->capab_offset, AMDVI_CAPAB_FEATURES);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_BAR_LOW,
> +                 s->mmio.addr & ~(0xffff0000));
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_BAR_HIGH,
> +                (s->mmio.addr & ~(0xffff)) >> 16);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_RANGE,
> +                 0xff000000);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_MISC, 0);
> +    pci_set_long(s->pci.dev.config + s->capab_offset + AMDVI_CAPAB_MISC,
> +            AMDVI_MAX_PH_ADDR | AMDVI_MAX_GVA_ADDR | AMDVI_MAX_VA_ADDR);
> +}
> +
> +static void amdvi_reset(DeviceState *dev)
> +{
> +    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
> +
> +    msi_reset(&s->pci.dev);
> +    amdvi_init(s);
> +}
> +
> +static void amdvi_realize(DeviceState *dev, Error **err)
> +{
> +    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
> +    PCIBus *bus = PC_MACHINE(qdev_get_machine())->bus;
> +    s->iotlb = g_hash_table_new_full(amdvi_uint64_hash,
> +                                     amdvi_uint64_equal, g_free, g_free);
> +
> +    /* This device should take care of IOMMU PCI properties */
> +    qdev_set_parent_bus(DEVICE(&s->pci), &bus->qbus);
> +    object_property_set_bool(OBJECT(&s->pci), true, "realized", err);
> +    s->capab_offset = pci_add_capability(&s->pci.dev, AMDVI_CAPAB_ID_SEC, 0,
> +                                         AMDVI_CAPAB_SIZE);
> +    pci_add_capability(&s->pci.dev, PCI_CAP_ID_MSI, 0, AMDVI_CAPAB_REG_SIZE);
> +    pci_add_capability(&s->pci.dev, PCI_CAP_ID_HT, 0, AMDVI_CAPAB_REG_SIZE);
> +
> +    /* set up MMIO */
> +    memory_region_init_io(&s->mmio, OBJECT(s), &mmio_mem_ops, s, "amdvi-mmio",
> +                          AMDVI_MMIO_SIZE);
> +
> +    sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->mmio);
> +    sysbus_mmio_map(SYS_BUS_DEVICE(s), 0, AMDVI_BASE_ADDR);
> +    pci_setup_iommu(bus, amdvi_host_dma_iommu, s);
> +    s->devid = object_property_get_int(OBJECT(&s->pci), "addr", err);
> +    msi_init(&s->pci.dev, 0, 1, true, false, err);
> +    amdvi_init(s);
> +}
> +
> +static const VMStateDescription vmstate_amdvi = {
> +    .name = "amd-iommu",
> +    .unmigratable = 1
> +};
> +
> +static void amdvi_instance_init(Object *klass)
> +{
> +    AMDVIState *s = AMD_IOMMU_DEVICE(klass);
> +
> +    object_initialize(&s->pci, sizeof(s->pci), TYPE_AMD_IOMMU_PCI);
> +}
> +
> +static void amdvi_class_init(ObjectClass *klass, void* data)
> +{
> +    DeviceClass *dc = DEVICE_CLASS(klass);
> +    X86IOMMUClass *dc_class = X86_IOMMU_CLASS(klass);
> +
> +    dc->reset = amdvi_reset;
> +    dc->vmsd = &vmstate_amdvi;
> +    dc_class->realize = amdvi_realize;
> +}
> +
> +static const TypeInfo amdvi = {
> +    .name = TYPE_AMD_IOMMU_DEVICE,
> +    .parent = TYPE_X86_IOMMU_DEVICE,
> +    .instance_size = sizeof(AMDVIState),
> +    .instance_init = amdvi_instance_init,
> +    .class_init = amdvi_class_init
> +};
> +
> +static const TypeInfo amdviPCI = {
> +    .name = "AMDVI-PCI",
> +    .parent = TYPE_PCI_DEVICE,
> +    .instance_size = sizeof(AMDVIPCIState),
> +};
> +
> +static void amdviPCI_register_types(void)
> +{
> +    type_register_static(&amdviPCI);
> +    type_register_static(&amdvi);
> +}
> +
> +type_init(amdviPCI_register_types);
> diff --git a/hw/i386/amd_iommu.h b/hw/i386/amd_iommu.h
> new file mode 100644
> index 0000000..2f4ac55
> --- /dev/null
> +++ b/hw/i386/amd_iommu.h
> @@ -0,0 +1,390 @@
> +/*
> + * QEMU emulation of an AMD IOMMU (AMD-Vi)
> + *
> + * Copyright (C) 2011 Eduard - Gabriel Munteanu
> + * Copyright (C) 2015 David Kiarie, <davidkiarie4@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> +
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef AMD_IOMMU_H_
> +#define AMD_IOMMU_H_
> +
> +#include "hw/hw.h"
> +#include "hw/pci/pci.h"
> +#include "hw/pci/msi.h"
> +#include "hw/sysbus.h"
> +#include "sysemu/dma.h"
> +#include "hw/i386/pc.h"
> +#include "sysemu/dma.h"
> +#include "hw/i386/x86-iommu.h"
> +
> +/* Capability registers */
> +#define AMDVI_CAPAB_BAR_LOW           0x04
> +#define AMDVI_CAPAB_BAR_HIGH          0x08
> +#define AMDVI_CAPAB_RANGE             0x0C
> +#define AMDVI_CAPAB_MISC              0x10
> +
> +#define AMDVI_CAPAB_SIZE              0x18
> +#define AMDVI_CAPAB_REG_SIZE          0x04
> +
> +/* Capability header data */
> +#define AMDVI_CAPAB_ID_SEC            0xf
> +#define AMDVI_CAPAB_FLAT_EXT          (1 << 28)
> +#define AMDVI_CAPAB_EFR_SUP           (1 << 27)
> +#define AMDVI_CAPAB_FLAG_NPCACHE      (1 << 26)
> +#define AMDVI_CAPAB_FLAG_HTTUNNEL     (1 << 25)
> +#define AMDVI_CAPAB_FLAG_IOTLBSUP     (1 << 24)
> +#define AMDVI_CAPAB_INIT_TYPE         (3 << 16)
> +
> +/* No. of used MMIO registers */
> +#define AMDVI_MMIO_REGS_HIGH  8
> +#define AMDVI_MMIO_REGS_LOW   7
> +
> +/* MMIO registers */
> +#define AMDVI_MMIO_DEVICE_TABLE       0x0000
> +#define AMDVI_MMIO_COMMAND_BASE       0x0008
> +#define AMDVI_MMIO_EVENT_BASE         0x0010
> +#define AMDVI_MMIO_CONTROL            0x0018
> +#define AMDVI_MMIO_EXCL_BASE          0x0020
> +#define AMDVI_MMIO_EXCL_LIMIT         0x0028
> +#define AMDVI_MMIO_EXT_FEATURES       0x0030
> +#define AMDVI_MMIO_COMMAND_HEAD       0x2000
> +#define AMDVI_MMIO_COMMAND_TAIL       0x2008
> +#define AMDVI_MMIO_EVENT_HEAD         0x2010
> +#define AMDVI_MMIO_EVENT_TAIL         0x2018
> +#define AMDVI_MMIO_STATUS             0x2020
> +#define AMDVI_MMIO_PPR_BASE           0x0038
> +#define AMDVI_MMIO_PPR_HEAD           0x2030
> +#define AMDVI_MMIO_PPR_TAIL           0x2038
> +
> +#define AMDVI_MMIO_SIZE               0x4000
> +
> +#define AMDVI_MMIO_DEVTAB_SIZE_MASK   ((1ULL << 12) - 1)
> +#define AMDVI_MMIO_DEVTAB_BASE_MASK   (((1ULL << 52) - 1) & ~ \
> +                                       AMDVI_MMIO_DEVTAB_SIZE_MASK)
> +#define AMDVI_MMIO_DEVTAB_ENTRY_SIZE  32
> +#define AMDVI_MMIO_DEVTAB_SIZE_UNIT   4096
> +
> +/* some of this are similar but just for readability */
> +#define AMDVI_MMIO_CMDBUF_SIZE_BYTE       (AMDVI_MMIO_COMMAND_BASE + 7)
> +#define AMDVI_MMIO_CMDBUF_SIZE_MASK       0x0F
> +#define AMDVI_MMIO_CMDBUF_BASE_MASK       AMDVI_MMIO_DEVTAB_BASE_MASK
> +#define AMDVI_MMIO_CMDBUF_HEAD_MASK       (((1ULL << 19) - 1) & ~0x0F)
> +#define AMDVI_MMIO_CMDBUF_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +
> +#define AMDVI_MMIO_EVTLOG_SIZE_BYTE       (AMDVI_MMIO_EVENT_BASE + 7)
> +#define AMDVI_MMIO_EVTLOG_SIZE_MASK       AMDVI_MMIO_CMDBUF_SIZE_MASK
> +#define AMDVI_MMIO_EVTLOG_BASE_MASK       AMDVI_MMIO_CMDBUF_BASE_MASK
> +#define AMDVI_MMIO_EVTLOG_HEAD_MASK       (((1ULL << 19) - 1) & ~0x0F)
> +#define AMDVI_MMIO_EVTLOG_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +
> +#define AMDVI_MMIO_PPRLOG_SIZE_BYTE       (AMDVI_MMIO_EVENT_BASE + 7)
> +#define AMDVI_MMIO_PPRLOG_HEAD_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +#define AMDVI_MMIO_PPRLOG_TAIL_MASK       AMDVI_MMIO_EVTLOG_HEAD_MASK
> +#define AMDVI_MMIO_PPRLOG_BASE_MASK       AMDVI_MMIO_EVTLOG_BASE_MASK
> +#define AMDVI_MMIO_PPRLOG_SIZE_MASK       AMDVI_MMIO_EVTLOG_SIZE_MASK
> +
> +#define AMDVI_MMIO_EXCL_ENABLED_MASK      (1ULL << 0)
> +#define AMDVI_MMIO_EXCL_ALLOW_MASK        (1ULL << 1)
> +#define AMDVI_MMIO_EXCL_LIMIT_MASK        AMDVI_MMIO_DEVTAB_BASE_MASK
> +#define AMDVI_MMIO_EXCL_LIMIT_LOW         0xFFF
> +
> +/* mmio control register flags */
> +#define AMDVI_MMIO_CONTROL_AMDVIEN        (1ULL << 0)
> +#define AMDVI_MMIO_CONTROL_HTTUNEN        (1ULL << 1)
> +#define AMDVI_MMIO_CONTROL_EVENTLOGEN     (1ULL << 2)
> +#define AMDVI_MMIO_CONTROL_EVENTINTEN     (1ULL << 3)
> +#define AMDVI_MMIO_CONTROL_COMWAITINTEN   (1ULL << 4)
> +#define AMDVI_MMIO_CONTROL_CMDBUFLEN      (1ULL << 12)
> +
> +/* MMIO status register bits */
> +#define AMDVI_MMIO_STATUS_CMDBUF_RUN  (1 << 4)
> +#define AMDVI_MMIO_STATUS_EVT_RUN     (1 << 3)
> +#define AMDVI_MMIO_STATUS_COMP_INT    (1 << 2)
> +#define AMDVI_MMIO_STATUS_EVT_OVF     (1 << 0)
> +
> +#define AMDVI_CMDBUF_ID_BYTE              0x07
> +#define AMDVI_CMDBUF_ID_RSHIFT            4
> +
> +#define AMDVI_CMD_COMPLETION_WAIT         0x01
> +#define AMDVI_CMD_INVAL_DEVTAB_ENTRY      0x02
> +#define AMDVI_CMD_INVAL_AMDVI_PAGES       0x03
> +#define AMDVI_CMD_INVAL_IOTLB_PAGES       0x04
> +#define AMDVI_CMD_INVAL_INTR_TABLE        0x05
> +#define AMDVI_CMD_PREFETCH_AMDVI_PAGES    0x06
> +#define AMDVI_CMD_COMPLETE_PPR_REQUEST    0x07
> +#define AMDVI_CMD_INVAL_AMDVI_ALL         0x08
> +
> +#define AMDVI_DEVTAB_ENTRY_SIZE           32
> +
> +/* Device table entry bits 0:63 */
> +#define AMDVI_DEV_VALID                   (1ULL << 0)
> +#define AMDVI_DEV_TRANSLATION_VALID       (1ULL << 1)
> +#define AMDVI_DEV_MODE_MASK               0x7
> +#define AMDVI_DEV_MODE_RSHIFT             9
> +#define AMDVI_DEV_PT_ROOT_MASK            0xFFFFFFFFFF000
> +#define AMDVI_DEV_PT_ROOT_RSHIFT          12
> +#define AMDVI_DEV_PERM_SHIFT              61
> +#define AMDVI_DEV_PERM_READ               (1ULL << 61)
> +#define AMDVI_DEV_PERM_WRITE              (1ULL << 62)
> +
> +/* Device table entry bits 64:127 */
> +#define AMDVI_DEV_DOMID_ID_MASK          ((1ULL << 16) - 1)
> +
> +/* Event codes and flags, as stored in the info field */
> +#define AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY  (0x1U << 12)
> +#define AMDVI_EVENT_IOPF                  (0x2U << 12)
> +#define   AMDVI_EVENT_IOPF_I              (1U << 3)
> +#define AMDVI_EVENT_DEV_TAB_HW_ERROR      (0x3U << 12)
> +#define AMDVI_EVENT_PAGE_TAB_HW_ERROR     (0x4U << 12)
> +#define AMDVI_EVENT_ILLEGAL_COMMAND_ERROR (0x5U << 12)
> +#define AMDVI_EVENT_COMMAND_HW_ERROR      (0x6U << 12)
> +
> +#define AMDVI_EVENT_LEN                  16
> +#define AMDVI_PERM_READ             (1 << 0)
> +#define AMDVI_PERM_WRITE            (1 << 1)
> +
> +#define AMDVI_FEATURE_PREFETCH            (1ULL << 0) /* page prefetch       */
> +#define AMDVI_FEATURE_PPR                 (1ULL << 1) /* PPR Support         */
> +#define AMDVI_FEATURE_GT                  (1ULL << 4) /* Guest Translation   */
> +#define AMDVI_FEATURE_IA                  (1ULL << 6) /* inval all support   */
> +#define AMDVI_FEATURE_GA                  (1ULL << 7) /* guest VAPIC support */
> +#define AMDVI_FEATURE_HE                  (1ULL << 8) /* hardware error regs */
> +#define AMDVI_FEATURE_PC                  (1ULL << 9) /* Perf counters       */
> +
> +/* reserved DTE bits */
> +#define AMDVI_DTE_LOWER_QUAD_RESERVED  0x80300000000000fc
> +#define AMDVI_DTE_MIDDLE_QUAD_RESERVED 0x0000000000000100
> +#define AMDVI_DTE_UPPER_QUAD_RESERVED  0x08f0000000000000
> +
> +/* AMDVI paging mode */
> +#define AMDVI_GATS_MODE                 (6ULL <<  12)
> +#define AMDVI_HATS_MODE                 (6ULL <<  10)
> +
> +/* IOTLB */
> +#define AMDVI_IOTLB_MAX_SIZE 1024
> +#define AMDVI_DEVID_SHIFT    36
> +
> +/* interrupt types */
> +#define AMDVI_MT_FIXED  0x0
> +#define AMDVI_MT_ARBIT  0x1
> +#define AMDVI_MT_SMI    0x2
> +#define AMDVI_MT_NMI    0x3
> +#define AMDVI_MT_INIT   0x4
> +#define AMDVI_MT_EXTINT 0x6
> +#define AMDVI_MT_LINT1  0xb
> +#define AMDVI_MT_LINT0  0xe
> +
> +/* Ext reg, GA support */
> +#define AMDVI_GASUP    (1UL << 7)
> +/* MMIO control GA enable bits */
> +#define AMDVI_GAEN     (1UL << 17)
> +
> +/* MSI interrupt type mask */
> +#define AMDVI_IR_TYPE_MASK 0x300
> +
> +/* interrupt destination mode */
> +#define AMDVI_IRDEST_MODE_MASK 0x2
> +
> +/* select MSI data 10:0 bits */
> +#define AMDVI_IRTE_INDEX_MASK 0x7ff
> +
> +/* bits determining whether specific interrupts should be passed
> + * split DTE into 64-bit chunks
> + */
> +#define AMDVI_DTE_INTPASS       56
> +#define AMDVI_DTE_EINTPASS      57
> +#define AMDVI_DTE_NMIPASS       58
> +#define AMDVI_DTE_INTCTL        60
> +#define AMDVI_DTE_LINT0PASS     62
> +#define AMDVI_DTE_LINT1PASS     63
> +
> +/* interrupt data valid */
> +#define AMDVI_IR_VALID          (1UL << 0)
> +
> +/* interrupt root table mask */
> +#define AMDVI_IRTEROOT_MASK     0xffffffffffffc0
> +
> +/* default IRTE size */
> +#define AMDVI_DEFAULT_IRTE_SIZE 0x4
> +
> +/* IRTE size with GASup enabled */
> +#define AMDVI_IRTE_SIZE_GASUP   0x10
> +
> +#define AMDVI_IRTE_VECTOR_MASK    (0xffU << 16)
> +#define AMDVI_IRTE_DEST_MASK      (0xffU << 8)
> +#define AMDVI_IRTE_DM_MASK        (0x1U << 6)
> +#define AMDVI_IRTE_RQEOI_MASK     (0x1U << 5)
> +#define AMDVI_IRTE_INTTYPE_MASK   (0x7U << 2)
> +#define AMDVI_IRTE_SUPIOPF_MASK   (0x1U << 1)
> +#define AMDVI_IRTE_REMAP_MASK     (0x1U << 0)
> +
> +#define AMDVI_IR_TABLE_SIZE_MASK 0xfe
> +
> +/* offsets into MSI data */
> +#define AMDVI_MSI_DATA_DM_RSHIFT       0x8
> +#define AMDVI_MSI_DATA_LEVEL_RSHIFT    0xe
> +#define AMDVI_MSI_DATA_TRM_RSHIFT      0xf
> +
> +/* offsets into MSI address */
> +#define AMDVI_MSI_ADDR_DM_RSHIFT       0x2
> +#define AMDVI_MSI_ADDR_RH_RSHIFT       0x3
> +#define AMDVI_MSI_ADDR_DEST_RSHIFT     0xc
> +
> +#define AMDVI_LOCAL_APIC_ADDR     0xfee00000
> +
> +/* extended feature support */
> +#define AMDVI_EXT_FEATURES (AMDVI_FEATURE_PREFETCH | AMDVI_FEATURE_PPR | \
> +        AMDVI_FEATURE_IA | AMDVI_FEATURE_GT | AMDVI_FEATURE_GA | \
Came across this when reviewing your IR series.
Do you really support Guest Translation in your code? I'm also not sure 
if QEMU emulates Virtual APIC. So I'd skip the last two bits.

Valentine

> +        AMDVI_FEATURE_HE | AMDVI_GATS_MODE | AMDVI_HATS_MODE)
> +
> +/* capabilities header */
> +#define AMDVI_CAPAB_FEATURES (AMDVI_CAPAB_FLAT_EXT | \
> +        AMDVI_CAPAB_FLAG_NPCACHE | AMDVI_CAPAB_FLAG_IOTLBSUP \
> +        | AMDVI_CAPAB_ID_SEC | AMDVI_CAPAB_INIT_TYPE | \
> +        AMDVI_CAPAB_FLAG_HTTUNNEL |  AMDVI_CAPAB_EFR_SUP)
> +
> +/* AMDVI default address */
> +#define AMDVI_BASE_ADDR 0xfed80000
> +
> +/* page management constants */
> +#define AMDVI_PAGE_SHIFT 12
> +#define AMDVI_PAGE_SIZE  (1ULL << AMDVI_PAGE_SHIFT)
> +
> +#define AMDVI_PAGE_SHIFT_4K 12
> +#define AMDVI_PAGE_MASK_4K  (~((1ULL << AMDVI_PAGE_SHIFT_4K) - 1))
> +
> +#define AMDVI_MAX_VA_ADDR          (48UL << 5)
> +#define AMDVI_MAX_PH_ADDR          (40UL << 8)
> +#define AMDVI_MAX_GVA_ADDR         (48UL << 15)
> +
> +/* invalidation command device id */
> +#define AMDVI_INVAL_DEV_ID_SHIFT  32
> +#define AMDVI_INVAL_DEV_ID_MASK   (~((1UL << AMDVI_INVAL_DEV_ID_SHIFT) - 1))
> +
> +/* invalidation address */
> +#define AMDVI_INVAL_ADDR_MASK_SHIFT 12
> +#define AMDVI_INVAL_ADDR_MASK     (~((1UL << AMDVI_INVAL_ADDR_MASK_SHIFT) - 1))
> +
> +/* invalidation S bit mask */
> +#define AMDVI_INVAL_ALL(val) ((val) & (0x1))
> +
> +/* Completion Wait data size */
> +#define AMDVI_COMPLETION_DATA_SIZE    8
> +
> +#define AMDVI_COMMAND_SIZE   16
> +
> +#define AMDVI_INT_ADDR_FIRST 0xfee00000ULL
> +#define AMDVI_INT_ADDR_LAST  0xfeefffffULL
> +
> +#define AMDVI_INT_ADDR_SIZE ((AMDVI_INT_ADDR_LAST - \
> +        AMDVI_INT_ADDR_FIRST) + 1)
> +
> +/* Completion Wait data size */
> +#define AMDVI_COMPLETION_DATA_SIZE    8
> +
> +#define AMDVI_COMMAND_SIZE   16
> +
> +/* AMD IOMMU errors */
> +#define AMDVI_ILLEG_DEV_TAB  0x1
> +#define AMDVI_IOPF_          0x2
> +#define AMDVI_DEV_TAB_HW     0x3
> +#define AMDVI_PAGE_TAB_HW    0x4
> +#define AMDVI_ILLEG_COM      0x5
> +#define AMDVI_COM_HW         0x6
> +#define AMDVI_IOTLB_TIMEOUT  0x7
> +#define AMDVI_INVAL_DEV_REQ  0x8
> +#define AMDVI_INVAL_PPR_REQ  0x9
> +#define AMDVI_EVT_COUNT_ZERO 0xa
> +
> +/* represent target and master aborts error state */
> +#define AMDVI_TARGET_ABORT     0xb
> +#define AMDVI_MASTER_ABORT     0xc
> +
> +#define TYPE_AMD_IOMMU_DEVICE "amd-iommu"
> +#define AMD_IOMMU_DEVICE(obj)\
> +    OBJECT_CHECK(AMDVIState, (obj), TYPE_AMD_IOMMU_DEVICE)
> +
> +#define TYPE_AMD_IOMMU_PCI "AMDVI-PCI"
> +#define AMD_IOMMU_PCI(obj)\
> +    OBJECT_CHECK(AMDVIPCIState, (obj), TYPE_AMD_IOMMU_PCI)
> +
> +typedef struct AMDVIAddressSpace AMDVIAddressSpace;
> +
> +/* functions to steal PCI config space */
> +typedef struct AMDVIPCIState {
> +    PCIDevice dev;               /* The PCI device itself        */
> +} AMDVIPCIState;
> +
> +typedef struct AMDVIState {
> +    X86IOMMUState iommu;        /* IOMMU bus device             */
> +    AMDVIPCIState pci;          /* IOMMU PCI device             */
> +
> +    uint32_t version;
> +    uint32_t capab_offset;       /* capability offset pointer    */
> +
> +    uint64_t mmio_addr;
> +
> +    uint32_t devid;              /* auto-assigned devid          */
> +
> +    bool enabled;                /* IOMMU enabled                */
> +    bool ats_enabled;            /* address translation enabled  */
> +    bool cmdbuf_enabled;         /* command buffer enabled       */
> +    bool evtlog_enabled;         /* event log enabled            */
> +    bool excl_enabled;
> +
> +    hwaddr devtab;               /* base address device table    */
> +    size_t devtab_len;           /* device table length          */
> +
> +    hwaddr cmdbuf;               /* command buffer base address  */
> +    uint64_t cmdbuf_len;         /* command buffer length        */
> +    uint32_t cmdbuf_head;        /* current IOMMU read position  */
> +    uint32_t cmdbuf_tail;        /* next Software write position */
> +    bool completion_wait_intr;
> +
> +    hwaddr evtlog;               /* base address event log       */
> +    bool evtlog_intr;
> +    uint32_t evtlog_len;         /* event log length             */
> +    uint32_t evtlog_head;        /* current IOMMU write position */
> +    uint32_t evtlog_tail;        /* current Software read position */
> +
> +    /* unused for now */
> +    hwaddr excl_base;            /* base DVA - IOMMU exclusion range */
> +    hwaddr excl_limit;           /* limit of IOMMU exclusion range   */
> +    bool excl_allow;             /* translate accesses to the exclusion range */
> +    bool excl_enable;            /* exclusion range enabled          */
> +
> +    hwaddr ppr_log;              /* base address ppr log */
> +    uint32_t pprlog_len;         /* ppr log len  */
> +    uint32_t pprlog_head;        /* ppr log head */
> +    uint32_t pprlog_tail;        /* ppr log tail */
> +
> +    MemoryRegion mmio;                 /* MMIO region                  */
> +    uint8_t mmior[AMDVI_MMIO_SIZE];    /* read/write MMIO              */
> +    uint8_t w1cmask[AMDVI_MMIO_SIZE];  /* read/write 1 clear mask      */
> +    uint8_t romask[AMDVI_MMIO_SIZE];   /* MMIO read/only mask          */
> +    bool mmio_enabled;
> +
> +    /* IOMMU function */
> +    MemoryRegionIOMMUOps iommu_ops;
> +
> +    /* for each served device */
> +    AMDVIAddressSpace **address_spaces[PCI_BUS_MAX];
> +
> +    /* IOTLB */
> +    GHashTable *iotlb;
> +} AMDVIState;
> +
> +#endif
> diff --git a/hw/i386/trace-events b/hw/i386/trace-events
> index 592de3a..5c12c10 100644
> --- a/hw/i386/trace-events
> +++ b/hw/i386/trace-events
> @@ -42,3 +42,10 @@ amdvi_mode_invalid(unsigned level, uint64_t addr)"error: translation level 0x%"P
>  amdvi_page_fault(uint64_t addr) "error: page fault accessing guest physical address 0x%"PRIx64
>  amdvi_iotlb_hit(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "hit iotlb devid %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
>  amdvi_translation_result(uint16_t bus, uint16_t slot, uint16_t func, uint64_t addr, uint64_t txaddr) "devid: %02x:%02x.%x gpa 0x%"PRIx64 " hpa 0x%"PRIx64
> +amdvi_irte_get_fail(uint64_t addr, uint64_t offset) "couldn't access device table entry 0x%"PRIx64" + offset 0x%"PRIx64
> +amdvi_invalid_irte_entry(uint16_t devid, uint64_t offset) "devid %x requested IRTE offset 0x%"PRIx64" Outside IR table range"
> +amdvi_ir_request(uint32_t data, uint64_t addr, uint16_t sid) "IR request data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
> +amdvi_ir_remap(uint32_t data, uint64_t addr, uint16_t sid) "IR remap data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
> +amdvi_ir_target_abort(uint32_t data, uint64_t addr, uint16_t sid) "IR target abort data 0x%"PRIx32" address 0x%"PRIx64" SID %x"
> +amdvi_ir_write_fail(uint64_t addr, uint32_t data) "fail to write to addr 0x%"PRIx64 " value 0x%"PRIx32
> +amdvi_ir_read_fail(uint64_t addr) " fail to read from addr 0x%"PRIx64
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-12 19:10   ` Valentine Sinitsyn
@ 2016-08-12 19:40     ` David Kiarie
  2016-08-12 19:41       ` Valentine Sinitsyn
  0 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-12 19:40 UTC (permalink / raw)
  To: Valentine Sinitsyn
  Cc: QEMU Developers, Peter Xu, rkrcmar, Jan Kiszka, Eduardo Habkost,
	Michael S. Tsirkin

On Fri, Aug 12, 2016 at 10:10 PM, Valentine Sinitsyn <
valentine.sinitsyn@gmail.com> wrote:

> Hi David,
>
> On 02.08.2016 13:39, David Kiarie wrote:
>
>> Add AMD IOMMU emulaton to Qemu in addition to Intel IOMMU.
>> The IOMMU does basic translation, error checking and has a
>> minimal IOTLB implementation. This IOMMU bypassed the need
>> for target aborts by responding with IOMMU_NONE access rights
>> and exempts the region 0xfee00000-0xfeefffff from translation
>> as it is the q35 interrupt region.
>>
>> We advertise features that are not yet implemented to please
>> the Linux IOMMU driver.
>>
>> IOTLB aims at implementing commands on real IOMMUs which is
>> essential for debugging and may not offer any performance
>> benefits
>>
>> Signed-off-by: David Kiarie <davidkiarie4@gmail.com>
>> ---
>> +
>> +/* IRTE size with GASup enabled */
>> +#define AMDVI_IRTE_SIZE_GASUP   0x10
>> +
>> +#define AMDVI_IRTE_VECTOR_MASK    (0xffU << 16)
>> +#define AMDVI_IRTE_DEST_MASK      (0xffU << 8)
>> +#define AMDVI_IRTE_DM_MASK        (0x1U << 6)
>> +#define AMDVI_IRTE_RQEOI_MASK     (0x1U << 5)
>> +#define AMDVI_IRTE_INTTYPE_MASK   (0x7U << 2)
>> +#define AMDVI_IRTE_SUPIOPF_MASK   (0x1U << 1)
>> +#define AMDVI_IRTE_REMAP_MASK     (0x1U << 0)
>> +
>> +#define AMDVI_IR_TABLE_SIZE_MASK 0xfe
>> +
>> +/* offsets into MSI data */
>> +#define AMDVI_MSI_DATA_DM_RSHIFT       0x8
>> +#define AMDVI_MSI_DATA_LEVEL_RSHIFT    0xe
>> +#define AMDVI_MSI_DATA_TRM_RSHIFT      0xf
>> +
>> +/* offsets into MSI address */
>> +#define AMDVI_MSI_ADDR_DM_RSHIFT       0x2
>> +#define AMDVI_MSI_ADDR_RH_RSHIFT       0x3
>> +#define AMDVI_MSI_ADDR_DEST_RSHIFT     0xc
>> +
>> +#define AMDVI_LOCAL_APIC_ADDR     0xfee00000
>> +
>> +/* extended feature support */
>> +#define AMDVI_EXT_FEATURES (AMDVI_FEATURE_PREFETCH | AMDVI_FEATURE_PPR |
>> \
>> +        AMDVI_FEATURE_IA | AMDVI_FEATURE_GT | AMDVI_FEATURE_GA | \
>>
>

> Came across this when reviewing your IR series.
> Do you really support Guest Translation in your code? I'm also not sure if
> QEMU emulates Virtual APIC. So I'd skip the last two bits.


You mean GT and GA ? I could do a bit more research about GA but as I have
mentioned in the commit message the Linux AMD-Vi (which is the primary
target) checks for some of these features when deciding the version of this
IOMMU otherwise it defaults to IOMMU version 1.


>
> Valentine
>
>
> +        AMDVI_FEATURE_HE | AMDVI_GATS_MODE | AMDVI_HATS_MODE)
>> +
>> +/* capabilities header */
>> +#define AMDVI_CAPAB_FEATURES (AMDVI_CAPAB_FLAT_EXT | \
>> +        AMDVI_CAPAB_FLAG_NPCACHE | AMDVI_CAPAB_FLAG_IOTLBSUP \
>> +        | AMDVI_CAPAB_ID_SEC | AMDVI_CAPAB_INIT_TYPE | \
>> +        AMDVI_CAPAB_FLAG_HTTUNNEL |  AMDVI_CAPAB_EFR_SUP)
>> +
>> +/* AMDVI default address */
>> +#define AMDVI_BASE_ADDR 0xfed80000
>>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU
  2016-08-12 19:40     ` David Kiarie
@ 2016-08-12 19:41       ` Valentine Sinitsyn
  0 siblings, 0 replies; 26+ messages in thread
From: Valentine Sinitsyn @ 2016-08-12 19:41 UTC (permalink / raw)
  To: David Kiarie
  Cc: QEMU Developers, Peter Xu, rkrcmar, Jan Kiszka, Eduardo Habkost,
	Michael S. Tsirkin

On 13.08.2016 00:40, David Kiarie wrote:
>
>
> On Fri, Aug 12, 2016 at 10:10 PM, Valentine Sinitsyn
> <valentine.sinitsyn@gmail.com <mailto:valentine.sinitsyn@gmail.com>> wrote:
>
>     Hi David,
>
>     On 02.08.2016 13:39, David Kiarie wrote:
>
>         Add AMD IOMMU emulaton to Qemu in addition to Intel IOMMU.
>         The IOMMU does basic translation, error checking and has a
>         minimal IOTLB implementation. This IOMMU bypassed the need
>         for target aborts by responding with IOMMU_NONE access rights
>         and exempts the region 0xfee00000-0xfeefffff from translation
>         as it is the q35 interrupt region.
>
>         We advertise features that are not yet implemented to please
>         the Linux IOMMU driver.
>
>         IOTLB aims at implementing commands on real IOMMUs which is
>         essential for debugging and may not offer any performance
>         benefits
>
>         Signed-off-by: David Kiarie <davidkiarie4@gmail.com
>         <mailto:davidkiarie4@gmail.com>>
>         ---
>         +
>         +/* IRTE size with GASup enabled */
>         +#define AMDVI_IRTE_SIZE_GASUP   0x10
>         +
>         +#define AMDVI_IRTE_VECTOR_MASK    (0xffU << 16)
>         +#define AMDVI_IRTE_DEST_MASK      (0xffU << 8)
>         +#define AMDVI_IRTE_DM_MASK        (0x1U << 6)
>         +#define AMDVI_IRTE_RQEOI_MASK     (0x1U << 5)
>         +#define AMDVI_IRTE_INTTYPE_MASK   (0x7U << 2)
>         +#define AMDVI_IRTE_SUPIOPF_MASK   (0x1U << 1)
>         +#define AMDVI_IRTE_REMAP_MASK     (0x1U << 0)
>         +
>         +#define AMDVI_IR_TABLE_SIZE_MASK 0xfe
>         +
>         +/* offsets into MSI data */
>         +#define AMDVI_MSI_DATA_DM_RSHIFT       0x8
>         +#define AMDVI_MSI_DATA_LEVEL_RSHIFT    0xe
>         +#define AMDVI_MSI_DATA_TRM_RSHIFT      0xf
>         +
>         +/* offsets into MSI address */
>         +#define AMDVI_MSI_ADDR_DM_RSHIFT       0x2
>         +#define AMDVI_MSI_ADDR_RH_RSHIFT       0x3
>         +#define AMDVI_MSI_ADDR_DEST_RSHIFT     0xc
>         +
>         +#define AMDVI_LOCAL_APIC_ADDR     0xfee00000
>         +
>         +/* extended feature support */
>         +#define AMDVI_EXT_FEATURES (AMDVI_FEATURE_PREFETCH |
>         AMDVI_FEATURE_PPR | \
>         +        AMDVI_FEATURE_IA | AMDVI_FEATURE_GT | AMDVI_FEATURE_GA | \
>
>
>
>     Came across this when reviewing your IR series.
>     Do you really support Guest Translation in your code? I'm also not
>     sure if QEMU emulates Virtual APIC. So I'd skip the last two bits.
>
>
> You mean GT and GA ? I could do a bit more research about GA but as I
> have mentioned in the commit message the Linux AMD-Vi (which is the
> primary target) checks for some of these features when deciding the
> version of this IOMMU otherwise it defaults to IOMMU version 1.
True. Ignore this comment then.

Valentine
>
>
>
>     Valentine
>
>
>         +        AMDVI_FEATURE_HE | AMDVI_GATS_MODE | AMDVI_HATS_MODE)
>         +
>         +/* capabilities header */
>         +#define AMDVI_CAPAB_FEATURES (AMDVI_CAPAB_FLAT_EXT | \
>         +        AMDVI_CAPAB_FLAG_NPCACHE | AMDVI_CAPAB_FLAG_IOTLBSUP \
>         +        | AMDVI_CAPAB_ID_SEC | AMDVI_CAPAB_INIT_TYPE | \
>         +        AMDVI_CAPAB_FLAG_HTTUNNEL |  AMDVI_CAPAB_EFR_SUP)
>         +
>         +/* AMDVI default address */
>         +#define AMDVI_BASE_ADDR 0xfed80000
>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [V15 0/4] AMD IOMMU
  2016-08-09 20:27 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
@ 2016-08-10  8:30 ` David Kiarie
  0 siblings, 0 replies; 26+ messages in thread
From: David Kiarie @ 2016-08-10  8:30 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Peter Xu, Jan Kiszka, rkrcmar, Valentine Sinitsyn,
	Eduardo Habkost, Michael S. Tsirkin, David Kiarie

On Tue, Aug 9, 2016 at 11:27 PM, David Kiarie <davidkiarie4@gmail.com>
wrote:

> Hi all,
>
> This patchset adds basic AMD IOMMU emulation support to Qemu.
>
> Change since v14
>    -MMIO register reading/write bug fix [Peter]
>    -Endian-ness issue fix[Peter]
>    -Bitfields layouts in IOMMU commands fix[Peter]
>

I seem to have left out a few of this.

   -IVRS changed IVHD device entry from type 3 to 1 to save a few bytes
>    -coding style issues, comment grammer and other miscellaneous fixes.
>
> Changes since v13
>    -Added an error to make AMD IOMMU incompatible with device
> assignment.[Alex]
>    -Converted AMD IOMMU into a composite PCI and System Bus device. This
> helps with:
>       -We can now inherit from X86 IOMMU base class(which is implemented
> as a System Bus device).
>       -We can now reserve MMIO region for IOMMU without a BAR register and
> without a hack.
>
> Changes since v12
>
>    -Coding style fixes [Jan, Michael]
>    -Error logging fix to avoid using a macro[Jan]
>    -moved some PCI macros to PCI header[Jan]
>    -Use a lookup table for MMIO register names when tracing[Jan]
>
> Changes since V11
>    -AMD IOMMU is not started with -device amd-iommu (with a dependency on
> Marcel's patches).
>    -IOMMU commands are represented using bitfields which is less error
> prone and more readable[Peter]
>    -Changed from debug fprintfs to tracing[Jan]
>
> Changes since V10
>
>    -Support for huge pages including some obscure AMD IOMMU feature that
> allows default page size override[Jan].
>    -Fixed an issue with generation of interrupts. We noted that AMD IOMMU
> has BusMaster- and is therefore not able to generate interrupts like any
> other PCI device. We have resulted in writing directly to system address
> but this could be fixed by some patches which have not been merged yet.
>
> Changes since v9
>
>    -amd_iommu prefixes have been renamed to a shorter 'amdvi' both in the
> macros
>     and in the functions/code. The register macros have not been moved to
> the
>     implementation file since almost the macros there are basically macros
> and I
>     reckoned renaming them should suffice.
>    -taken care of byte order in the use of 'dma_memory_read'[Michael]
>    -Taken care of invalid DTE entries to ensure no DMA unless a device is
> configured to allow it.
>    -An issue with the emulate IOMMU defaulting to AMD_IOMMU has been
> fixed[Marcel]
>
> You can test[1] this patches by starting with parameters
>     qemu-system-x86_64 -M -device amd-iommu -m 2G -enable-kvm -smp 4 -cpu
> host -hda file.img -soundhw ac97
> emulating whatever devices you want.
>
> Not passing any command line parameters to linux should be enough to test
> this patches since the devices are basically
> passes-through but to the 'host' (l1 guest). You can still go ahead pass
> command line parameter 'iommu=pt iommu=1'
> and try to pass a device to L2 guest. This can also done without passing
> any iommu related parameters to the kernel.
>
> David Kiarie (4):
>   hw/pci: Prepare for AMD IOMMU
>   hw/i386/trace-events: Add AMD IOMMU trace events
>   hw/i386: Introduce AMD IOMMU
>   hw/i386: AMD IOMMU IVRS table
>
>  hw/acpi/aml-build.c         |    2 +-
>  hw/i386/Makefile.objs       |    1 +
>  hw/i386/acpi-build.c        |   76 ++-
>  hw/i386/amd_iommu.c         | 1401 ++++++++++++++++++++++++++++++
> +++++++++++++
>  hw/i386/amd_iommu.h         |  390 ++++++++++++
>  hw/i386/intel_iommu.c       |    1 +
>  hw/i386/trace-events        |   36 ++
>  hw/i386/x86-iommu.c         |    6 +
>  include/hw/acpi/aml-build.h |    1 +
>  include/hw/i386/x86-iommu.h |   12 +
>  include/hw/pci/pci.h        |    4 +-
>  11 files changed, 1919 insertions(+), 11 deletions(-)
>  create mode 100644 hw/i386/amd_iommu.c
>  create mode 100644 hw/i386/amd_iommu.h
>
> --
> 2.1.4
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [Qemu-devel] [V15 0/4] AMD IOMMU
@ 2016-08-09 20:27 David Kiarie
  2016-08-10  8:30 ` David Kiarie
  0 siblings, 1 reply; 26+ messages in thread
From: David Kiarie @ 2016-08-09 20:27 UTC (permalink / raw)
  To: qemu-devel
  Cc: peterx, jan.kiszka, rkrcmar, valentine.sinitsyn, ehabkost, mst,
	David Kiarie

Hi all,

This patchset adds basic AMD IOMMU emulation support to Qemu. 

Change since v14
   -MMIO register reading/write bug fix [Peter]
   -Endian-ness issue fix[Peter]
   -Bitfields layouts in IOMMU commands fix[Peter]
   -IVRS changed IVHD device entry from type 3 to 1 to save a few bytes
   -coding style issues, comment grammer and other miscellaneous fixes.

Changes since v13
   -Added an error to make AMD IOMMU incompatible with device assignment.[Alex]
   -Converted AMD IOMMU into a composite PCI and System Bus device. This helps with:
      -We can now inherit from X86 IOMMU base class(which is implemented as a System Bus device).
      -We can now reserve MMIO region for IOMMU without a BAR register and without a hack.

Changes since v12

   -Coding style fixes [Jan, Michael]
   -Error logging fix to avoid using a macro[Jan]
   -moved some PCI macros to PCI header[Jan]
   -Use a lookup table for MMIO register names when tracing[Jan]

Changes since V11
   -AMD IOMMU is not started with -device amd-iommu (with a dependency on Marcel's patches).
   -IOMMU commands are represented using bitfields which is less error prone and more readable[Peter]
   -Changed from debug fprintfs to tracing[Jan]

Changes since V10
 
   -Support for huge pages including some obscure AMD IOMMU feature that allows default page size override[Jan].
   -Fixed an issue with generation of interrupts. We noted that AMD IOMMU has BusMaster- and is therefore not able to generate interrupts like any other PCI device. We have resulted in writing directly to system address but this could be fixed by some patches which have not been merged yet.

Changes since v9

   -amd_iommu prefixes have been renamed to a shorter 'amdvi' both in the macros
    and in the functions/code. The register macros have not been moved to the 
    implementation file since almost the macros there are basically macros and I 
    reckoned renaming them should suffice.
   -taken care of byte order in the use of 'dma_memory_read'[Michael]
   -Taken care of invalid DTE entries to ensure no DMA unless a device is configured to allow it.
   -An issue with the emulate IOMMU defaulting to AMD_IOMMU has been fixed[Marcel]
   
You can test[1] this patches by starting with parameters 
    qemu-system-x86_64 -M -device amd-iommu -m 2G -enable-kvm -smp 4 -cpu host -hda file.img -soundhw ac97 
emulating whatever devices you want.

Not passing any command line parameters to linux should be enough to test this patches since the devices are basically
passes-through but to the 'host' (l1 guest). You can still go ahead pass command line parameter 'iommu=pt iommu=1'
and try to pass a device to L2 guest. This can also done without passing any iommu related parameters to the kernel. 

David Kiarie (4):
  hw/pci: Prepare for AMD IOMMU
  hw/i386/trace-events: Add AMD IOMMU trace events
  hw/i386: Introduce AMD IOMMU
  hw/i386: AMD IOMMU IVRS table

 hw/acpi/aml-build.c         |    2 +-
 hw/i386/Makefile.objs       |    1 +
 hw/i386/acpi-build.c        |   76 ++-
 hw/i386/amd_iommu.c         | 1401 +++++++++++++++++++++++++++++++++++++++++++
 hw/i386/amd_iommu.h         |  390 ++++++++++++
 hw/i386/intel_iommu.c       |    1 +
 hw/i386/trace-events        |   36 ++
 hw/i386/x86-iommu.c         |    6 +
 include/hw/acpi/aml-build.h |    1 +
 include/hw/i386/x86-iommu.h |   12 +
 include/hw/pci/pci.h        |    4 +-
 11 files changed, 1919 insertions(+), 11 deletions(-)
 create mode 100644 hw/i386/amd_iommu.c
 create mode 100644 hw/i386/amd_iommu.h

-- 
2.1.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2016-08-12 19:42 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-02  8:39 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
2016-08-02  8:39 ` [Qemu-devel] [V15 1/4] hw/pci: Prepare for " David Kiarie
2016-08-08  9:01   ` Peter Xu
2016-08-08  9:25     ` David Kiarie
2016-08-02  8:39 ` [Qemu-devel] [V15 2/4] hw/i386/trace-events: Add AMD IOMMU trace events David Kiarie
2016-08-02  8:39 ` [Qemu-devel] [V15 3/4] hw/i386: Introduce AMD IOMMU David Kiarie
2016-08-09  5:44   ` Peter Xu
2016-08-09 12:07     ` David Kiarie
2016-08-09 12:21       ` Peter Xu
2016-08-09 12:52     ` David Kiarie
2016-08-09 13:01       ` Valentine Sinitsyn
2016-08-09 13:17         ` David Kiarie
2016-08-10  2:08       ` Peter Xu
2016-08-10  6:30         ` David Kiarie
2016-08-09 17:46     ` David Kiarie
2016-08-10  1:49       ` Peter Xu
2016-08-11  8:23   ` Valentine Sinitsyn
2016-08-11  8:32     ` David Kiarie
2016-08-11  8:35       ` Valentine Sinitsyn
2016-08-12 19:10   ` Valentine Sinitsyn
2016-08-12 19:40     ` David Kiarie
2016-08-12 19:41       ` Valentine Sinitsyn
2016-08-02  8:39 ` [Qemu-devel] [V15 4/4] hw/i386: AMD IOMMU IVRS table David Kiarie
2016-08-02 13:32   ` Igor Mammedov
2016-08-09 20:27 [Qemu-devel] [V15 0/4] AMD IOMMU David Kiarie
2016-08-10  8:30 ` David Kiarie

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.