iommu.lists.linux-foundation.org archive mirror
* [PATCH 0/3] iommu/vtd: Per device dma ops
@ 2019-08-01  6:01 Lu Baolu
  2019-08-01  6:01 ` [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper Lu Baolu
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Lu Baolu @ 2019-08-01  6:01 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel
  Cc: kevin.tian, ashok.raj, linux-kernel, iommu, jacob.jun.pan,
	Robin Murphy, Christoph Hellwig

The current Intel IOMMU driver sets the system-level dma_ops. This
means every DMA API call is routed to the IOMMU driver, even if the
privileged user has chosen to bypass the IOMMU for some specific
devices.

Furthermore, if the privileged user requests to bypass IOMMU
translation for a device, the IOMMU driver might blindly fall back to
a DMA domain if the device is not able to address all system memory.

This series sets the per-device dma_ops only if a device is using a
DMA domain. Otherwise, the default dma_ops are used for direct DMA.
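
In rough terms the change amounts to the sketch below
(example_setup_dma_ops() is an illustrative name, not a function added
by this series; the real hook is intel_iommu_add_device() in patch 2):

	/*
	 * Sketch only: devices behind a DMA (translation) domain get the
	 * IOMMU dma_map_ops; identity-mapped devices are left without
	 * per-device ops and take the dma-direct path.
	 */
	static void example_setup_dma_ops(struct device *dev,
					  struct dmar_domain *domain)
	{
		if (domain && !domain_type_is_si(domain))
			set_dma_ops(dev, &intel_dma_ops);	/* translated DMA */
		else
			set_dma_ops(dev, NULL);			/* dma-direct */
	}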

Lu Baolu (3):
  iommu/vt-d: Refactor find_domain() helper
  iommu/vt-d: Apply per-device dma_ops
  iommu/vt-d: Cleanup after using per-device dma ops

 drivers/iommu/intel-iommu.c | 131 ++++++++++--------------------------
 1 file changed, 34 insertions(+), 97 deletions(-)

-- 
2.17.1

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper
  2019-08-01  6:01 [PATCH 0/3] iommu/vtd: Per device dma ops Lu Baolu
@ 2019-08-01  6:01 ` Lu Baolu
  2019-08-01  6:10   ` Christoph Hellwig
  2019-08-01  6:01 ` [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops Lu Baolu
  2019-08-01  6:01 ` [PATCH 3/3] iommu/vt-d: Cleanup after using per-device dma ops Lu Baolu
  2 siblings, 1 reply; 11+ messages in thread
From: Lu Baolu @ 2019-08-01  6:01 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel
  Cc: kevin.tian, ashok.raj, linux-kernel, iommu, jacob.jun.pan,
	Robin Murphy, Christoph Hellwig

The current find_domain() helper performs the deferred domain
attachment, if needed, and then returns the domain in use. This isn't
what every caller needs; some callers only want to retrieve the domain
currently in use.

This refactors find_domain() into two helpers: 1) find_domain() only
returns the domain in use; 2) deferred_attach_domain() performs the
deferred domain attachment if required and then returns the domain in
use.
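
In caller terms the split looks roughly like this (a sketch with
illustrative helper names, not part of the diff below):

	/* map path: may perform the deferred attach on first use */
	static dma_addr_t example_map(struct device *dev)
	{
		struct dmar_domain *domain = deferred_attach_domain(dev);

		if (!domain)
			return DMA_MAPPING_ERROR;
		/* ... build and return the IOVA mapping for 'domain' ... */
		return 0;
	}

	/* query path: must not trigger an attach */
	static bool example_uses_translation(struct device *dev)
	{
		return find_domain(dev) != NULL;
	}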

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index aaa641f98590..d2ac89a2026e 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2387,14 +2387,24 @@ static void domain_remove_dev_info(struct dmar_domain *domain)
 	spin_unlock_irqrestore(&device_domain_lock, flags);
 }
 
-/*
- * find_domain
- * Note: we use struct device->archdata.iommu stores the info
- */
 static struct dmar_domain *find_domain(struct device *dev)
 {
 	struct device_domain_info *info;
 
+	if (unlikely(dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO ||
+		     dev->archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO))
+		return NULL;
+
+	/* No lock here, assumes no domain exit in normal case */
+	info = dev->archdata.iommu;
+	if (likely(info))
+		return info->domain;
+
+	return NULL;
+}
+
+static struct dmar_domain *deferred_attach_domain(struct device *dev)
+{
 	if (unlikely(dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO)) {
 		struct iommu_domain *domain;
 
@@ -2404,12 +2414,7 @@ static struct dmar_domain *find_domain(struct device *dev)
 			intel_iommu_attach_device(domain, dev);
 	}
 
-	/* No lock here, assumes no domain exit in normal case */
-	info = dev->archdata.iommu;
-
-	if (likely(info))
-		return info->domain;
-	return NULL;
+	return find_domain(dev);
 }
 
 static inline struct device_domain_info *
@@ -3478,7 +3483,7 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
 
 	BUG_ON(dir == DMA_NONE);
 
-	domain = find_domain(dev);
+	domain = deferred_attach_domain(dev);
 	if (!domain)
 		return DMA_MAPPING_ERROR;
 
@@ -3552,7 +3557,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
 	struct page *freelist;
 	struct pci_dev *pdev = NULL;
 
-	domain = find_domain(dev);
+	domain = deferred_attach_domain(dev);
 	BUG_ON(!domain);
 
 	iommu = domain_get_iommu(domain);
@@ -3694,7 +3699,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 	if (!iommu_need_mapping(dev))
 		return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
-	domain = find_domain(dev);
+	domain = deferred_attach_domain(dev);
 	if (!domain)
 		return 0;
 
@@ -3757,7 +3762,7 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
 	unsigned long nrpages;
 	dma_addr_t ret_addr;
 
-	domain = find_domain(dev);
+	domain = deferred_attach_domain(dev);
 	if (WARN_ON(dir == DMA_NONE || !domain))
 		return DMA_MAPPING_ERROR;
 
@@ -3789,7 +3794,7 @@ bounce_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size,
 	unsigned long iova_pfn;
 	unsigned long nrpages;
 
-	domain = find_domain(dev);
+	domain = deferred_attach_domain(dev);
 	if (WARN_ON(!domain))
 		return;
 
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops
  2019-08-01  6:01 [PATCH 0/3] iommu/vtd: Per device dma ops Lu Baolu
  2019-08-01  6:01 ` [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper Lu Baolu
@ 2019-08-01  6:01 ` Lu Baolu
  2019-08-06  6:43   ` Christoph Hellwig
  2019-08-01  6:01 ` [PATCH 3/3] iommu/vt-d: Cleanup after using per-device dma ops Lu Baolu
  2 siblings, 1 reply; 11+ messages in thread
From: Lu Baolu @ 2019-08-01  6:01 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel
  Cc: kevin.tian, ashok.raj, linux-kernel, iommu, jacob.jun.pan,
	Robin Murphy, Christoph Hellwig

The current Intel IOMMU driver sets the system-level dma_ops. This
implementation has at least the following drawbacks: 1) every DMA API
call goes through the IOMMU driver even when the device is using an
identity-mapped domain; 2) if the user requests an identity-mapped
domain (i.e. bypassing IOMMU translation), the driver might blindly
fall back to a DMA domain if the device is not able to address all
system memory.

This sets the per-device dma_ops if a device is using a DMA domain.
Otherwise, the default system-level dma_ops are used for direct DMA.
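
This relies on the generic DMA code dispatching per device; simplified,
that dispatch works roughly like the sketch below (an approximation of
dma_map_page_attrs(), not the literal implementation), so a device with
no per-device ops set ends up on the dma-direct path:

	static dma_addr_t example_dma_dispatch(struct device *dev,
					       struct page *page,
					       unsigned long offset, size_t size,
					       enum dma_data_direction dir,
					       unsigned long attrs)
	{
		const struct dma_map_ops *ops = get_dma_ops(dev);

		if (!ops)	/* no per-device ops: direct mapping */
			return dma_direct_map_page(dev, page, offset, size,
						   dir, attrs);

		return ops->map_page(dev, page, offset, size, dir, attrs);
	}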

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 48 ++++++++-----------------------------
 1 file changed, 10 insertions(+), 38 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index d2ac89a2026e..609b539b93f6 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3431,43 +3431,10 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 /* Check if the dev needs to go through non-identity map and unmap process.*/
 static bool iommu_need_mapping(struct device *dev)
 {
-	int ret;
-
 	if (iommu_dummy(dev))
 		return false;
 
-	ret = identity_mapping(dev);
-	if (ret) {
-		u64 dma_mask = *dev->dma_mask;
-
-		if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
-			dma_mask = dev->coherent_dma_mask;
-
-		if (dma_mask >= dma_get_required_mask(dev))
-			return false;
-
-		/*
-		 * 32 bit DMA is removed from si_domain and fall back to
-		 * non-identity mapping.
-		 */
-		dmar_remove_one_dev_info(dev);
-		ret = iommu_request_dma_domain_for_dev(dev);
-		if (ret) {
-			struct iommu_domain *domain;
-			struct dmar_domain *dmar_domain;
-
-			domain = iommu_get_domain_for_dev(dev);
-			if (domain) {
-				dmar_domain = to_dmar_domain(domain);
-				dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
-			}
-			get_private_domain_for_dev(dev);
-		}
-
-		dev_info(dev, "32bit DMA uses non-identity mapping\n");
-	}
-
-	return true;
+	return !identity_mapping(dev);
 }
 
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
@@ -4895,8 +4862,6 @@ int __init intel_iommu_init(void)
 	}
 	up_write(&dmar_global_lock);
 
-	dma_ops = &intel_dma_ops;
-
 	init_iommu_pm_ops();
 
 	for_each_active_iommu(iommu, drhd) {
@@ -5487,8 +5452,13 @@ static int intel_iommu_add_device(struct device *dev)
 		}
 	}
 
-	if (device_needs_bounce(dev))
-		set_dma_ops(dev, &bounce_dma_ops);
+	dmar_domain = find_domain(dev);
+	if (dmar_domain && !domain_type_is_si(dmar_domain)) {
+		if (device_needs_bounce(dev))
+			set_dma_ops(dev, &bounce_dma_ops);
+		else
+			set_dma_ops(dev, &intel_dma_ops);
+	}
 
 	return 0;
 }
@@ -5507,6 +5477,8 @@ static void intel_iommu_remove_device(struct device *dev)
 	iommu_group_remove_device(dev);
 
 	iommu_device_unlink(&iommu->iommu, dev);
+
+	set_dma_ops(dev, NULL);
 }
 
 static void intel_iommu_get_resv_regions(struct device *device,
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/3] iommu/vt-d: Cleanup after using per-device dma ops
  2019-08-01  6:01 [PATCH 0/3] iommu/vtd: Per device dma ops Lu Baolu
  2019-08-01  6:01 ` [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper Lu Baolu
  2019-08-01  6:01 ` [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops Lu Baolu
@ 2019-08-01  6:01 ` Lu Baolu
  2 siblings, 0 replies; 11+ messages in thread
From: Lu Baolu @ 2019-08-01  6:01 UTC (permalink / raw)
  To: David Woodhouse, Joerg Roedel
  Cc: kevin.tian, ashok.raj, linux-kernel, iommu, jacob.jun.pan,
	Robin Murphy, Christoph Hellwig

After switching to per-device dma_ops, we no longer need to check
whether a device needs mapping; hence all checks of
iommu_need_mapping() are unnecessary now.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 50 ++++---------------------------------
 1 file changed, 5 insertions(+), 45 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 609b539b93f6..f7399d98f404 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2761,17 +2761,6 @@ static int __init si_domain_init(int hw)
 	return 0;
 }
 
-static int identity_mapping(struct device *dev)
-{
-	struct device_domain_info *info;
-
-	info = dev->archdata.iommu;
-	if (info && info != DUMMY_DEVICE_DOMAIN_INFO)
-		return (info->domain == si_domain);
-
-	return 0;
-}
-
 static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
 {
 	struct dmar_domain *ndomain;
@@ -3428,15 +3417,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 	return domain;
 }
 
-/* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_need_mapping(struct device *dev)
-{
-	if (iommu_dummy(dev))
-		return false;
-
-	return !identity_mapping(dev);
-}
-
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
 				     size_t size, int dir, u64 dma_mask)
 {
@@ -3498,20 +3478,15 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
 				 enum dma_data_direction dir,
 				 unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		return __intel_map_single(dev, page_to_phys(page) + offset,
-				size, dir, *dev->dma_mask);
-	return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+	return __intel_map_single(dev, page_to_phys(page) + offset,
+				  size, dir, *dev->dma_mask);
 }
 
 static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
 				     size_t size, enum dma_data_direction dir,
 				     unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		return __intel_map_single(dev, phys_addr, size, dir,
-				*dev->dma_mask);
-	return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
+	return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
 static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
@@ -3563,17 +3538,13 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
 			     size_t size, enum dma_data_direction dir,
 			     unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		intel_unmap(dev, dev_addr, size);
-	else
-		dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
+	intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		intel_unmap(dev, dev_addr, size);
+	intel_unmap(dev, dev_addr, size);
 }
 
 static void *intel_alloc_coherent(struct device *dev, size_t size,
@@ -3583,9 +3554,6 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
 	struct page *page = NULL;
 	int order;
 
-	if (!iommu_need_mapping(dev))
-		return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
-
 	size = PAGE_ALIGN(size);
 	order = get_order(size);
 
@@ -3619,9 +3587,6 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
 	int order;
 	struct page *page = virt_to_page(vaddr);
 
-	if (!iommu_need_mapping(dev))
-		return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
-
 	size = PAGE_ALIGN(size);
 	order = get_order(size);
 
@@ -3639,9 +3604,6 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
 	struct scatterlist *sg;
 	int i;
 
-	if (!iommu_need_mapping(dev))
-		return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs);
-
 	for_each_sg(sglist, sg, nelems, i) {
 		nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
 	}
@@ -3663,8 +3625,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 	struct intel_iommu *iommu;
 
 	BUG_ON(dir == DMA_NONE);
-	if (!iommu_need_mapping(dev))
-		return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
 	domain = deferred_attach_domain(dev);
 	if (!domain)
-- 
2.17.1

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper
  2019-08-01  6:01 ` [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper Lu Baolu
@ 2019-08-01  6:10   ` Christoph Hellwig
  2019-08-01  6:20     ` Lu Baolu
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2019-08-01  6:10 UTC (permalink / raw)
  To: Lu Baolu
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse, Christoph Hellwig

On Thu, Aug 01, 2019 at 02:01:54PM +0800, Lu Baolu wrote:
> +	/* No lock here, assumes no domain exit in normal case */

s/exit/exists/ ?

> +	info = dev->archdata.iommu;
> +	if (likely(info))
> +		return info->domain;

But then again the likely would be odd.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper
  2019-08-01  6:10   ` Christoph Hellwig
@ 2019-08-01  6:20     ` Lu Baolu
  2019-08-01 14:09       ` Christoph Hellwig
  0 siblings, 1 reply; 11+ messages in thread
From: Lu Baolu @ 2019-08-01  6:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse

Hi Christoph,

On 8/1/19 2:10 PM, Christoph Hellwig wrote:
> On Thu, Aug 01, 2019 at 02:01:54PM +0800, Lu Baolu wrote:
>> +	/* No lock here, assumes no domain exit in normal case */
> 
> s/exit/exists/ ?

This comment is just moved from one place to another in this patch.

"no domain exit" means "the domain isn't freed" (my understanding).

> 
>> +	info = dev->archdata.iommu;
>> +	if (likely(info))
>> +		return info->domain;
> 
> But then again the likely would be odd.
> 

Normally there's a domain for a device (default domain or isolation
domain for assignment cases).

Best regards,
Baolu

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper
  2019-08-01  6:20     ` Lu Baolu
@ 2019-08-01 14:09       ` Christoph Hellwig
  2019-08-02  2:06         ` Lu Baolu
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2019-08-01 14:09 UTC (permalink / raw)
  To: Lu Baolu
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse, Christoph Hellwig

On Thu, Aug 01, 2019 at 02:20:07PM +0800, Lu Baolu wrote:
> Hi Christoph,
>
> On 8/1/19 2:10 PM, Christoph Hellwig wrote:
>> On Thu, Aug 01, 2019 at 02:01:54PM +0800, Lu Baolu wrote:
>>> +	/* No lock here, assumes no domain exit in normal case */
>>
>> s/exit/exists/ ?
>
> This comment is just moved from one place to another in this patch.
>
> "no domain exit" means "the domain isn't freed". (my understand)

Maybe we'll get that reconfirmed and can fix up the comment?

>
>>
>>> +	info = dev->archdata.iommu;
>>> +	if (likely(info))
>>> +		return info->domain;
>>
>> But then again the likely would be odd.
>>
>
> Normally there's a domain for a device (default domain or isolation
> domain for assignment cases).

Makes sense, I just mean to say that the likely was contrary to my
understanding of the above comment.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper
  2019-08-01 14:09       ` Christoph Hellwig
@ 2019-08-02  2:06         ` Lu Baolu
  0 siblings, 0 replies; 11+ messages in thread
From: Lu Baolu @ 2019-08-02  2:06 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse

Hi,

On 8/1/19 10:09 PM, Christoph Hellwig wrote:
> On Thu, Aug 01, 2019 at 02:20:07PM +0800, Lu Baolu wrote:
>> Hi Christoph,
>>
>> On 8/1/19 2:10 PM, Christoph Hellwig wrote:
>>> On Thu, Aug 01, 2019 at 02:01:54PM +0800, Lu Baolu wrote:
>>>> +	/* No lock here, assumes no domain exit in normal case */
>>>
>>> s/exit/exists/ ?
>>
>> This comment is just moved from one place to another in this patch.
>>
>> "no domain exit" means "the domain isn't freed". (my understand)
> 
> Maybe we'll get that refconfirmed and can fix up the comment?

Sure.

> 
>>
>>>
>>>> +	info = dev->archdata.iommu;
>>>> +	if (likely(info))
>>>> +		return info->domain;
>>>
>>> But then again the likely would be odd.
>>>
>>
>> Normally there's a domain for a device (default domain or isolation
>> domain for assignment cases).
> 
> Makes sense, I just mean to say that the likely was contrary to my
> understanding of the above comment.
> 

Best regards,
Baolu

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops
  2019-08-01  6:01 ` [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops Lu Baolu
@ 2019-08-06  6:43   ` Christoph Hellwig
  2019-08-07  3:06     ` Lu Baolu
  0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2019-08-06  6:43 UTC (permalink / raw)
  To: Lu Baolu
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse, Christoph Hellwig

Hi Lu,

I really do like the switch to the per-device dma_map_ops, but:

On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
> Current Intel IOMMU driver sets the system level dma_ops. This
> implementation has at least the following drawbacks: 1) each
> dma API will go through the IOMMU driver even the devices are
> using identity mapped domains; 2) if user requests to use an
> identity mapped domain (a.k.a. bypass iommu translation), the
> driver might fall back to dma domain blindly if the device is
> not able to address all system memory.

This is very clearly a behavioral regression.  The intel-iommu driver
has always used the iommu mapping to provide decent support for
devices that do not have the full 64-bit addressing capability, and
changing this will make a lot of existing setups go slower.

I don't think having to use swiotlb for these devices helps anyone.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops
  2019-08-06  6:43   ` Christoph Hellwig
@ 2019-08-07  3:06     ` Lu Baolu
  2019-08-13  7:38       ` Lu Baolu
  0 siblings, 1 reply; 11+ messages in thread
From: Lu Baolu @ 2019-08-07  3:06 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse

Hi Christoph,

On 8/6/19 2:43 PM, Christoph Hellwig wrote:
> Hi Lu,
> 
> I really do like the switch to the per-device dma_map_ops, but:
> 
> On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
>> Current Intel IOMMU driver sets the system level dma_ops. This
>> implementation has at least the following drawbacks: 1) each
>> dma API will go through the IOMMU driver even the devices are
>> using identity mapped domains; 2) if user requests to use an
>> identity mapped domain (a.k.a. bypass iommu translation), the
>> driver might fall back to dma domain blindly if the device is
>> not able to address all system memory.
> 
> This is very clearly a behavioral regression.  The intel-iommu driver
> has always used the iommu mapping to provide decent support for
> devices that do not have the full 64-bit addressing capability, and
> changing this will make a lot of existing setups go slower.
>

I agree with you that we should keep the capability and avoid a possible
performance regression on some setups. But, instead of hard-coding this
in the IOMMU driver, I prefer a more scalable way.

For example, the concept of a per-group default domain type [1] seems to
be a good choice. The kernel could be statically compiled with a default
of "pass through" or "translate everything". The per-group default domain
type API could then be used by the privileged user to tweak some of the
groups for better performance, either by 1) bypassing IOMMU translation
for trusted super-speed devices, or 2) applying IOMMU translation to
access system memory that is beyond the device's addressing capability
(without the need for a bounce buffer).

[1] https://www.spinics.net/lists/iommu/msg37113.html
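
For reference, the existing build-time and boot-time knobs along these
lines look like the following (illustration only; the per-group API in
[1] is a separate proposal):

	CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y	# build-time default: bypass translation
	iommu.passthrough=1			# boot-time: bypass by default
	iommu.passthrough=0			# boot-time: translate by default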

> I don't think having to use swiotlb for these devices helps anyone.
> 

Best regards,
Baolu

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops
  2019-08-07  3:06     ` Lu Baolu
@ 2019-08-13  7:38       ` Lu Baolu
  0 siblings, 0 replies; 11+ messages in thread
From: Lu Baolu @ 2019-08-13  7:38 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kevin.tian, ashok.raj, Robin Murphy, linux-kernel, iommu,
	jacob.jun.pan, David Woodhouse

Hi again,

On 8/7/19 11:06 AM, Lu Baolu wrote:
> Hi Christoph,
> 
> On 8/6/19 2:43 PM, Christoph Hellwig wrote:
>> Hi Lu,
>>
>> I really do like the switch to the per-device dma_map_ops, but:
>>
>> On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
>>> Current Intel IOMMU driver sets the system level dma_ops. This
>>> implementation has at least the following drawbacks: 1) each
>>> dma API will go through the IOMMU driver even the devices are
>>> using identity mapped domains; 2) if user requests to use an
>>> identity mapped domain (a.k.a. bypass iommu translation), the
>>> driver might fall back to dma domain blindly if the device is
>>> not able to address all system memory.
>>
>> This is very clearly a behavioral regression.  The intel-iommu driver
>> has always used the iommu mapping to provide decent support for
>> devices that do not have the full 64-bit addressing capability, and
>> changing this will make a lot of existing setups go slower.
>>
> 
> I agree with you that we should keep the capability and avoid possible
> performance regression on some setups. But, instead of hard-coding this
> in the iommu driver, I prefer a more scalable way.
> 
> For example, the concept of per group default domain type [1] seems to
> be a good choice. The kernel could be statically compiled as by-default
> "pass through" or "translate everything". The per group default domain
> type API could then be used by the privileged user to tweak some of the
> groups for better performance, either by 1) bypassing iommu translation
> for the trusted super-speed devices, or 2) applying iommu translation to
> access the system memory which is beyond the device's address capability
> (without the necessary of using bounce buffer).
> 
> [1] https://www.spinics.net/lists/iommu/msg37113.html
> 

The code that this patch is trying to remove also looks buggy: the check
and replacement of the domain happens on every DMA API call, but there
isn't any lock to serialize them.

Best regards,
Lu Baolu

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2019-08-13  7:45 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-01  6:01 [PATCH 0/3] iommu/vtd: Per device dma ops Lu Baolu
2019-08-01  6:01 ` [PATCH 1/3] iommu/vt-d: Refactor find_domain() helper Lu Baolu
2019-08-01  6:10   ` Christoph Hellwig
2019-08-01  6:20     ` Lu Baolu
2019-08-01 14:09       ` Christoph Hellwig
2019-08-02  2:06         ` Lu Baolu
2019-08-01  6:01 ` [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops Lu Baolu
2019-08-06  6:43   ` Christoph Hellwig
2019-08-07  3:06     ` Lu Baolu
2019-08-13  7:38       ` Lu Baolu
2019-08-01  6:01 ` [PATCH 3/3] iommu/vt-d: Cleanup after using per-device dma ops Lu Baolu
