From: "Liu, Yi L" <yi.l.liu@intel.com>
To: qemu-devel@nongnu.org, david@gibson.dropbear.id.au, pbonzini@redhat.com, alex.williamson@redhat.com, peterx@redhat.com
Cc: mst@redhat.com, eric.auger@redhat.com, kevin.tian@intel.com, yi.l.liu@intel.com, jun.j.tian@intel.com, yi.y.sun@intel.com, kvm@vger.kernel.org, hao.wu@intel.com, Jacob Pan <jacob.jun.pan@linux.intel.com>, Yi Sun <yi.y.sun@linux.intel.com>, Richard Henderson <rth@twiddle.net>, Eduardo Habkost <ehabkost@redhat.com>
Subject: [RFC v3 24/25] intel_iommu: propagate PASID-based iotlb invalidation to host
Date: Wed, 29 Jan 2020 04:16:55 -0800
Message-ID: <1580300216-86172-25-git-send-email-yi.l.liu@intel.com>
In-Reply-To: <1580300216-86172-1-git-send-email-yi.l.liu@intel.com>

From: Liu Yi L <yi.l.liu@intel.com>

This patch propagates PASID-based iotlb (piotlb) invalidations to the host.

Intel VT-d 3.0 supports nested translation at PASID granularity. Guest SVA
support can be implemented by configuring nested translation on a specific
PASID; this is also known as dual-stage DMA translation. Under such a
configuration, the guest owns the GVA->GPA translation, which is installed
as the first-level page table on the host side for that PASID, while the
host owns the GPA->HPA translation. Since the guest owns the first-level
translation table, piotlb invalidations must be propagated to the host,
because the host IOMMU caches first-level page table mappings during DMA
address translation. This patch traps the guest's PASID-based iotlb flush
and propagates it to the host.
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Yi Sun <yi.y.sun@linux.intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
---
 hw/i386/intel_iommu.c          | 122 +++++++++++++++++++++++++++++++++++++++++
 hw/i386/intel_iommu_internal.h |   7 +++
 2 files changed, 129 insertions(+)

diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 1fe8257..93de7e4 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -3102,15 +3102,137 @@ static bool vtd_process_pasid_desc(IntelIOMMUState *s,
     return (ret == 0) ? true : false;
 }
 
+static void vtd_invalidate_piotlb(IntelIOMMUState *s, VTDBus *vtd_bus,
+                                  int devfn, DualIOMMUStage1Cache *stage1_cache)
+{
+    VTDIOMMUContext *vtd_icx;
+
+    vtd_icx = vtd_bus->dev_icx[devfn];
+    if (!vtd_icx) {
+        return;
+    }
+    if (ds_iommu_flush_stage1_cache(vtd_icx->dsi_obj, stage1_cache)) {
+        error_report("Cache flush failed");
+    }
+}
+
+static inline bool vtd_pasid_cache_valid(
+                          VTDPASIDAddressSpace *vtd_pasid_as)
+{
+    return (vtd_pasid_as->iommu_state->pasid_cache_gen &&
+            (vtd_pasid_as->iommu_state->pasid_cache_gen
+             == vtd_pasid_as->pasid_cache_entry.pasid_cache_gen));
+}
+
+/**
+ * This function iterates over the s->vtd_pasid_as hash table with
+ * VTDPIOTLBInvInfo as the filter, and propagates the piotlb
+ * invalidation to the host for each match. The caller must hold
+ * iommu_lock.
+ */
+static void vtd_flush_pasid_iotlb(gpointer key, gpointer value,
+                                  gpointer user_data)
+{
+    VTDPIOTLBInvInfo *piotlb_info = user_data;
+    VTDPASIDAddressSpace *vtd_pasid_as = value;
+    uint16_t did;
+
+    /*
+     * Check whether the pasid entry cache stored in vtd_pasid_as is
+     * valid. "invalid" means the pasid cache has been flushed, so the
+     * host should already have performed a piotlb invalidation
+     * together with the pasid cache invalidation, and there is no
+     * need to pass down another one. Only when the pasid entry cache
+     * is "valid" should a piotlb invalidation be propagated to the
+     * host, since it means the guest just modified a mapping in its
+     * page table.
+     */
+    if (!vtd_pasid_cache_valid(vtd_pasid_as)) {
+        return;
+    }
+
+    did = vtd_pe_get_domain_id(
+              &(vtd_pasid_as->pasid_cache_entry.pasid_entry));
+
+    if ((piotlb_info->domain_id == did) &&
+        (piotlb_info->pasid == vtd_pasid_as->pasid)) {
+        vtd_invalidate_piotlb(vtd_pasid_as->iommu_state,
+                              vtd_pasid_as->vtd_bus,
+                              vtd_pasid_as->devfn,
+                              piotlb_info->stage1_cache);
+    }
+
+    /*
+     * TODO: add a QEMU-side piotlb flush when the QEMU piotlb
+     * infrastructure is ready. For now this is sufficient for
+     * passthrough devices.
+     */
+}
+
 static void vtd_piotlb_pasid_invalidate(IntelIOMMUState *s,
                                         uint16_t domain_id,
                                         uint32_t pasid)
 {
+    VTDPIOTLBInvInfo piotlb_info;
+    struct iommu_cache_invalidate_info *cache_info;
+    DualIOMMUStage1Cache stage1_cache;
+
+    stage1_cache.pasid = pasid;
+
+    cache_info = &stage1_cache.cache_info;
+    cache_info->version = IOMMU_UAPI_VERSION;
+    cache_info->cache = IOMMU_CACHE_INV_TYPE_IOTLB;
+    cache_info->granularity = IOMMU_INV_GRANU_PASID;
+    cache_info->pasid_info.pasid = pasid;
+    cache_info->pasid_info.flags = IOMMU_INV_PASID_FLAGS_PASID;
+
+    piotlb_info.domain_id = domain_id;
+    piotlb_info.pasid = pasid;
+    piotlb_info.stage1_cache = &stage1_cache;
+
+    vtd_iommu_lock(s);
+    /*
+     * Loop over all vtd_pasid_as instances in s->vtd_pasid_as to
+     * find the affected devices, since a piotlb invalidation has
+     * to check the pasid cache from the architecture's point of
+     * view.
+     */
+    g_hash_table_foreach(s->vtd_pasid_as,
+                         vtd_flush_pasid_iotlb, &piotlb_info);
+    vtd_iommu_unlock(s);
 }
 
 static void vtd_piotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
                                        uint32_t pasid, hwaddr addr, uint8_t am,
                                        bool ih)
 {
+    VTDPIOTLBInvInfo piotlb_info;
+    struct iommu_cache_invalidate_info *cache_info;
+    DualIOMMUStage1Cache stage1_cache;
+
+    stage1_cache.pasid = pasid;
+
+    cache_info = &stage1_cache.cache_info;
+    cache_info->version = IOMMU_UAPI_VERSION;
+    cache_info->cache = IOMMU_CACHE_INV_TYPE_IOTLB;
+    cache_info->granularity = IOMMU_INV_GRANU_ADDR;
+    cache_info->addr_info.flags = IOMMU_INV_ADDR_FLAGS_PASID;
+    cache_info->addr_info.flags |= ih ? IOMMU_INV_ADDR_FLAGS_LEAF : 0;
+    cache_info->addr_info.pasid = pasid;
+    cache_info->addr_info.addr = addr;
+    cache_info->addr_info.granule_size = 1 << (12 + am);
+    cache_info->addr_info.nb_granules = 1;
+
+    piotlb_info.domain_id = domain_id;
+    piotlb_info.pasid = pasid;
+    piotlb_info.stage1_cache = &stage1_cache;
+
+    vtd_iommu_lock(s);
+    /*
+     * Loop over all vtd_pasid_as instances in s->vtd_pasid_as to
+     * find the affected devices, since a piotlb invalidation has
+     * to check the pasid cache from the architecture's point of
+     * view.
+     */
+    g_hash_table_foreach(s->vtd_pasid_as,
+                         vtd_flush_pasid_iotlb, &piotlb_info);
+    vtd_iommu_unlock(s);
 }
 
 static bool vtd_process_piotlb_desc(IntelIOMMUState *s,
diff --git a/hw/i386/intel_iommu_internal.h b/hw/i386/intel_iommu_internal.h
index 7f4db04..f144bd3 100644
--- a/hw/i386/intel_iommu_internal.h
+++ b/hw/i386/intel_iommu_internal.h
@@ -530,6 +530,13 @@ struct VTDPASIDCacheInfo {
                                       VTD_PASID_CACHE_DEVSI)
 typedef struct VTDPASIDCacheInfo VTDPASIDCacheInfo;
 
+struct VTDPIOTLBInvInfo {
+    uint16_t domain_id;
+    uint32_t pasid;
+    DualIOMMUStage1Cache *stage1_cache;
+};
+typedef struct VTDPIOTLBInvInfo VTDPIOTLBInvInfo;
+
 /* Masks for struct VTDRootEntry */
 #define VTD_ROOT_ENTRY_P 1ULL
 #define VTD_ROOT_ENTRY_CTP (~0xfffULL)
-- 
2.7.4