From: Lu Baolu <baolu.lu@linux.intel.com>
To: "Tian, Kevin" <kevin.tian@intel.com>,
	Jacob Pan <jacob.jun.pan@linux.intel.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Joerg Roedel <joro@8bytes.org>,
	David Woodhouse <dwmw2@infradead.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	Jean-Philippe Brucker <jean-philippe@linaro.com>
Cc: baolu.lu@linux.intel.com, "Liu, Yi L" <yi.l.liu@intel.com>,
	"Raj, Ashok" <ashok.raj@intel.com>,
	Christoph Hellwig <hch@infradead.org>,
	Jonathan Cameron <jic23@kernel.org>,
	Eric Auger <eric.auger@redhat.com>
Subject: Re: [PATCH v7 11/11] iommu/vt-d: Add svm/sva invalidate function
Date: Sat, 26 Oct 2019 10:40:58 +0800	[thread overview]
Message-ID: <5e9d2372-a8b5-9a26-1438-c1a608bfad6d@linux.intel.com> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D19D5CDE06@SHSMSX104.ccr.corp.intel.com>

Hi,

On 10/25/19 3:27 PM, Tian, Kevin wrote:
>> From: Jacob Pan [mailto:jacob.jun.pan@linux.intel.com]
>> Sent: Friday, October 25, 2019 3:55 AM
>>
>> When Shared Virtual Address (SVA) is enabled for a guest OS via
>> vIOMMU, we need to provide invalidation support at the IOMMU API and
>> driver level. This patch adds an Intel VT-d specific function to
>> implement the IOMMU passdown invalidate API for shared virtual address.
>>
>> The use case is to support caching structure invalidation for
>> assigned SVM-capable devices. The emulated IOMMU exposes queue
>> invalidation capability and passes down all descriptors from the guest
>> to the physical IOMMU.
> 
> Specifically, you may clarify that only invalidations related to the
> first-level page table are passed down, because it is the guest's
> structure that is bound to the first level. Other descriptors
> are emulated or translated into other necessary operations.
> 
>>
>> The assumption is that the guest-to-host device ID mapping has been
>> resolved prior to calling the IOMMU driver. Based on the device handle,
>> the host IOMMU driver can replace certain fields before submitting to
>> the invalidation queue.
> 
> What is "device ID"? It is a bit of a confusing term here.
> 
>>
>> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
>> Signed-off-by: Ashok Raj <ashok.raj@intel.com>
>> Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
>> ---
>>   drivers/iommu/intel-iommu.c | 170 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 170 insertions(+)
>>
>> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
>> index 5fab32fbc4b4..a73e76d6457a 100644
>> --- a/drivers/iommu/intel-iommu.c
>> +++ b/drivers/iommu/intel-iommu.c
>> @@ -5491,6 +5491,175 @@ static void intel_iommu_aux_detach_device(struct iommu_domain *domain,
>>   	aux_domain_remove_dev(to_dmar_domain(domain), dev);
>>   }
>>
>> +/*
>> + * 2D array for converting and sanitizing IOMMU generic TLB granularity to
>> + * VT-d granularity. Invalidation is typically included in the unmap operation
>> + * as a result of DMA or VFIO unmap. However, an assigned device may own its
>> + * first level page tables without being shadowed by QEMU. In this case there
>> + * is no pass-down unmap to the host IOMMU as a result of an unmap in the
>> + * guest; only invalidations are trapped and passed down.
>> + * In all cases, only first level TLB invalidation (request with PASID) can be
>> + * passed down, therefore we do not include IOTLB granularity for request
>> + * without PASID (second level).
>> + *
>> + * For example, to find the VT-d granularity encoding for IOTLB
>> + * type and page selective granularity within PASID:
>> + * X: indexed by iommu cache type
>> + * Y: indexed by enum iommu_inv_granularity
>> + * [IOMMU_CACHE_INV_TYPE_IOTLB][IOMMU_INV_GRANU_ADDR]
>> + *
>> + * Granu_map array indicates validity of the table. 1: valid, 0: invalid
>> + */
>> +const static int inv_type_granu_map[IOMMU_CACHE_INV_TYPE_NR][IOMMU_INV_GRANU_NR] = {
>> +	/* PASID based IOTLB, support PASID selective and page selective */
>> +	{0, 1, 1},
>> +	/* PASID based dev TLBs, only support all PASIDs or single PASID */
>> +	{1, 1, 0},
> 
> I forgot the previous discussion. Is it necessary to pass down dev TLB
> invalidation requests? Can they be handled by the host IOMMU driver
> automatically?

For host SVA, when memory is unmapped, the driver callback will
invalidate the dev IOTLB explicitly. So I guess we need to pass it down
for the guest case as well. This is also required for the guest IOVA
over first-level usage, as far as I can see.
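
Just to illustrate the point (a rough sketch only, not the real
intel-svm.c code; the function name below is made up for illustration,
while the qi_* helpers, the granularity constants and the
device_domain_info fields are the ones this series already uses), the
host side flush on unmap conceptually does two things:

	/* sketch: flush the first level IOTLB and, with ATS, the device TLB */
	static void host_sva_flush_range(struct intel_iommu *iommu,
					 struct device_domain_info *info,
					 u16 did, u64 pasid,
					 u64 addr, u64 npages)
	{
		u64 size = order_base_2(npages);

		/* PASID based (first level) IOTLB flush on the IOMMU side */
		qi_flush_piotlb(iommu, did, mm_to_dma_pfn(addr), pasid,
				size, QI_GRAN_PSI_PASID, false);

		/* with ATS the device caches translations as well */
		if (info->ats_enabled)
			qi_flush_dev_piotlb(iommu,
					    PCI_DEVID(info->bus, info->devfn),
					    info->pfsid, pasid, info->ats_qdep,
					    addr, size,
					    QI_DEV_IOTLB_GRAN_PASID_SEL);
	}

When the guest owns the first level page table, the host never sees the
unmap at all, so the device TLB part can only happen if the guest's
invalidation is passed down.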

Best regards,
baolu

> 
>> +	/* PASID cache */
>> +	{1, 1, 0}
>> +};
>> +
>> +const static u64 inv_type_granu_table[IOMMU_CACHE_INV_TYPE_NR][IOMMU_INV_GRANU_NR] = {
>> +	/* PASID based IOTLB */
>> +	{0, QI_GRAN_NONG_PASID, QI_GRAN_PSI_PASID},
>> +	/* PASID based dev TLBs */
>> +	{QI_DEV_IOTLB_GRAN_ALL, QI_DEV_IOTLB_GRAN_PASID_SEL, 0},
>> +	/* PASID cache */
>> +	{QI_PC_ALL_PASIDS, QI_PC_PASID_SEL, 0},
>> +};
>> +
>> +static inline int to_vtd_granularity(int type, int granu, u64 *vtd_granu)
>> +{
>> +	if (type >= IOMMU_CACHE_INV_TYPE_NR || granu >= IOMMU_INV_GRANU_NR ||
>> +		!inv_type_granu_map[type][granu])
>> +		return -EINVAL;
>> +
>> +	*vtd_granu = inv_type_granu_table[type][granu];
>> +
>> +	return 0;
>> +}
>> +
>> +static inline u64 to_vtd_size(u64 granu_size, u64 nr_granules)
>> +{
>> +	u64 nr_pages = (granu_size * nr_granules) >> VTD_PAGE_SHIFT;
>> +
>> +	/* VT-d size is encoded as 2^size of 4K pages, 0 for 4k, 9 for 2MB, etc.
>> +	 * IOMMU cache invalidate API passes granu_size in bytes, and number of
>> +	 * granu size in contiguous memory.
>> +	 */
>> +	return order_base_2(nr_pages);
>> +}
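
(A quick sanity check of that encoding with made-up numbers, just for
illustration; SZ_4K here is the generic kernel constant, not something
this patch adds:

	/* e.g. 512 contiguous 4KB granules */
	u64 size = to_vtd_size(SZ_4K, 512);
	/* nr_pages = (4096 * 512) >> VTD_PAGE_SHIFT = 512,
	 * order_base_2(512) = 9, i.e. the 2MB encoding;
	 * a single 4KB granule gives 0.
	 */

so the helper matches the "0 for 4k, 9 for 2MB" convention in the
comment.)
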
>> +
>> +#ifdef CONFIG_INTEL_IOMMU_SVM
>> +static int intel_iommu_sva_invalidate(struct iommu_domain *domain,
>> +		struct device *dev, struct iommu_cache_invalidate_info *inv_info)
>> +{
>> +	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
>> +	struct device_domain_info *info;
>> +	struct intel_iommu *iommu;
>> +	unsigned long flags;
>> +	int cache_type;
>> +	u8 bus, devfn;
>> +	u16 did, sid;
>> +	int ret = 0;
>> +	u64 size;
>> +
>> +	if (!inv_info || !dmar_domain ||
>> +		inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
>> +		return -EINVAL;
>> +
>> +	if (!dev || !dev_is_pci(dev))
>> +		return -ENODEV;
>> +
>> +	iommu = device_to_iommu(dev, &bus, &devfn);
>> +	if (!iommu)
>> +		return -ENODEV;
>> +
>> +	spin_lock_irqsave(&device_domain_lock, flags);
>> +	spin_lock(&iommu->lock);
>> +	info = iommu_support_dev_iotlb(dmar_domain, iommu, bus, devfn);
>> +	if (!info) {
>> +		ret = -EINVAL;
>> +		goto out_unlock;
>> +	}
>> +	did = dmar_domain->iommu_did[iommu->seq_id];
>> +	sid = PCI_DEVID(bus, devfn);
>> +	size = to_vtd_size(inv_info->addr_info.granule_size, inv_info->addr_info.nb_granules);
>> +
>> +	for_each_set_bit(cache_type, (unsigned long *)&inv_info->cache, IOMMU_CACHE_INV_TYPE_NR) {
>> +		u64 granu = 0;
>> +		u64 pasid = 0;
>> +
>> +		ret = to_vtd_granularity(cache_type, inv_info->granularity, &granu);
>> +		if (ret) {
>> +			pr_err("Invalid cache type and granu combination %d/%d\n", cache_type,
>> +				inv_info->granularity);
>> +			break;
>> +		}
>> +
>> +		/* PASID is stored in different locations based on granularity */
>> +		if (inv_info->granularity == IOMMU_INV_GRANU_PASID)
>> +			pasid = inv_info->pasid_info.pasid;
>> +		else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR)
>> +			pasid = inv_info->addr_info.pasid;
>> +		else {
>> +			pr_err("Cannot find PASID for given cache type and granularity\n");
>> +			break;
>> +		}
>> +
>> +		switch (BIT(cache_type)) {
>> +		case IOMMU_CACHE_INV_TYPE_IOTLB:
>> +			if (size && (inv_info->addr_info.addr & ((BIT(VTD_PAGE_SHIFT + size)) - 1))) {
>> +				pr_err("Address out of range, 0x%llx, size order %llu\n",
>> +					inv_info->addr_info.addr, size);
>> +				ret = -ERANGE;
>> +				goto out_unlock;
>> +			}
>> +
>> +			qi_flush_piotlb(iommu, did, mm_to_dma_pfn(inv_info->addr_info.addr),
>> +					pasid, size, granu, inv_info->addr_info.flags & IOMMU_INV_ADDR_FLAGS_LEAF);
>> +
>> +			/*
>> +			 * Always flush device IOTLB if ATS is enabled since guest
>> +			 * vIOMMU exposes CM = 1, no device IOTLB flush will be passed
>> +			 * down.
>> +			 */
>> +			if (info->ats_enabled) {
>> +				qi_flush_dev_piotlb(iommu, sid, info->pfsid,
>> +						pasid, info->ats_qdep,
>> +						inv_info->addr_info.addr, size,
>> +						granu);
>> +			}
>> +			break;
>> +		case IOMMU_CACHE_INV_TYPE_DEV_IOTLB:
>> +			if (info->ats_enabled) {
>> +				qi_flush_dev_piotlb(iommu, sid, info->pfsid,
>> +						inv_info->addr_info.pasid, info->ats_qdep,
>> +						inv_info->addr_info.addr, size,
>> +						granu);
>> +			} else
>> +				pr_warn("Passdown device IOTLB flush w/o ATS!\n");
>> +
>> +			break;
>> +		case IOMMU_CACHE_INV_TYPE_PASID:
>> +			qi_flush_pasid_cache(iommu, did, granu, inv_info->pasid_info.pasid);
>> +
>> +			break;
>> +		default:
>> +			dev_err(dev, "Unsupported IOMMU invalidation type %d\n",
>> +				cache_type);
>> +			ret = -EINVAL;
>> +		}
>> +	}
>> +out_unlock:
>> +	spin_unlock(&iommu->lock);
>> +	spin_unlock_irqrestore(&device_domain_lock, flags);
>> +
>> +	return ret;
>> +}
>> +#endif
>> +
>>   static int intel_iommu_map(struct iommu_domain *domain,
>>   			   unsigned long iova, phys_addr_t hpa,
>>   			   size_t size, int iommu_prot)
>> @@ -6027,6 +6196,7 @@ const struct iommu_ops intel_iommu_ops = {
>>   	.is_attach_deferred	= intel_iommu_is_attach_deferred,
>>   	.pgsize_bitmap		= INTEL_IOMMU_PGSIZES,
>>   #ifdef CONFIG_INTEL_IOMMU_SVM
>> +	.cache_invalidate	= intel_iommu_sva_invalidate,
>>   	.sva_bind_gpasid	= intel_svm_bind_gpasid,
>>   	.sva_unbind_gpasid	= intel_svm_unbind_gpasid,
>>   #endif
>> --
>> 2.7.4
> 
> 

