Changing the AMD IOMMU API path to work in an atomic context, which is necessary for any custom driver using the IOMMU API while holding a spinlock.

Message ID 1535120929-5693-1-git-send-email-murphyt7@tcd.ie
State New, archived

Commit Message

Tom Murphy Aug. 24, 2018, 2:28 p.m. UTC
From: Tom Murphy <murphyt7@tcd.ie>

---

This patch allows the IOMMU API path in the AMD driver to be called from an atomic context.
This is useful for anyone building a driver which needs to call the IOMMU API while holding a spinlock.

 drivers/iommu/amd_iommu.c       | 14 ++++++++------
 drivers/iommu/amd_iommu_types.h |  2 +-
 2 files changed, 9 insertions(+), 7 deletions(-)

Comments

Christoph Hellwig Aug. 24, 2018, 2:53 p.m. UTC | #1
On Fri, Aug 24, 2018 at 02:28:49PM +0000, murphyt7@tcd.ie wrote:
> From: Tom Murphy <murphyt7@tcd.ie>
> 
> ---
> 
> This patch allows the IOMMU API path in the AMD driver to be called from an atomic context.
> This is useful for anyone building a driver which needs to call the IOMMU API while holding a spinlock.

I don't think that is a good idea.  Please point to the code in the
driver and we can probably find a better solution.
Robin Murphy Aug. 24, 2018, 3:24 p.m. UTC | #2
On 24/08/18 15:53, Christoph Hellwig wrote:
> On Fri, Aug 24, 2018 at 02:28:49PM +0000, murphyt7@tcd.ie wrote:
>> From: Tom Murphy <murphyt7@tcd.ie>
>>
>> ---
>>
>> This patch allows the IOMMU API path in the AMD driver to be called from an atomic context.
>> This is useful for anyone building a driver which needs to call the IOMMU API while holding a spinlock.
> 
> I don't think that is a good idea.  Please point to the code in the
> driver and we can probably find a better solution.

Although IIRC the AMD driver is in fact the only one whose map/unmap 
callbacks aren't already spinlock-safe (or at least it was last time I 
was looking). Stuff like iommu-dma is already relying on this in order 
to implement streaming DMA API calls (which may be in atomic context) on 
top of the corresponding IOMMU API operations.

Robin.
Christoph Hellwig Aug. 27, 2018, 6:58 a.m. UTC | #3
On Fri, Aug 24, 2018 at 04:24:49PM +0100, Robin Murphy wrote:
> Although IIRC the AMD driver is in fact the only one whose map/unmap
> callbacks aren't already spinlock-safe (or at least it was last time I was
> looking). Stuff like iommu-dma is already relying on this in order to
> implement streaming DMA API calls (which may be in atomic context) on top of
> the corresponding IOMMU API operations.

True.  That should be mentioned in the changelog, though - and once
it is done the calling conventions should be documented.
Christoph Hellwig Aug. 27, 2018, 7:07 a.m. UTC | #4
On Fri, Aug 24, 2018 at 11:55:53PM +0100, Tom Murphy wrote:
> We were going to do this by using the iommu-dma api and replacing all the
> existing calls to the DMA api functions in the amd driver with their
> iommu-dma equivalent like in this driver:
> https://elixir.bootlin.com/linux/latest/source/arch/arm64/mm/dma-mapping.c#L810

FYI, I have a wip branch to move the arm64 wrappers for both
swiotlb and dma-iommu here:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-maybe-coherent

Maybe you could try to base on top of that.

> To do this we need the map/unmap callbacks to be spinlock-safe.

You probably want to send the patch together with the one(s) making use
of it.
Joerg Roedel Sept. 25, 2018, 9:09 a.m. UTC | #5
On Fri, Aug 24, 2018 at 02:28:49PM +0000, murphyt7@tcd.ie wrote:
> From: Tom Murphy <murphyt7@tcd.ie>
> 
> ---
> 
> This patch allows the IOMMU API path in the AMD driver to be called
> from an atomic context.  This is useful for anyone building a driver
> which needs to call the IOMMU API while holding a spinlock.

A bit more context would be helpful here. This is a pretty intrusive
change since we will drain the memory allocator's atomic pools now while
building page-tables. This needs some more justification than just 'some
drivers might need it'.

Further, if we really address this problem it needs to happen through
the IOMMU-API, because the functions you are patching are called from
iommu_map()/iommu_unmap(), and these functions are allowed to sleep.

So the required approach here would be to introduce something like
iommu_map_atomic()/iommu_unmap_atomic() and implement call-backs for
that in more than just one driver.

Regards,

	Joerg

Patch

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 0c910a8..642afc9 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2904,7 +2904,7 @@  static void protection_domain_free(struct protection_domain *domain)
 static int protection_domain_init(struct protection_domain *domain)
 {
 	spin_lock_init(&domain->lock);
-	mutex_init(&domain->api_lock);
+	spin_lock_init(&domain->api_lock);
 	domain->id = domain_id_alloc();
 	if (!domain->id)
 		return -ENOMEM;
@@ -3088,6 +3088,7 @@  static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 			 phys_addr_t paddr, size_t page_size, int iommu_prot)
 {
 	struct protection_domain *domain = to_pdomain(dom);
+	unsigned long flags;
 	int prot = 0;
 	int ret;
 
@@ -3099,9 +3100,9 @@  static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	mutex_lock(&domain->api_lock);
-	ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_KERNEL);
-	mutex_unlock(&domain->api_lock);
+	spin_lock_irqsave(&domain->api_lock, flags);
+	ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_ATOMIC);
+	spin_unlock_irqrestore(&domain->api_lock, flags);
 
 	return ret;
 }
@@ -3110,14 +3111,15 @@  static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			   size_t page_size)
 {
 	struct protection_domain *domain = to_pdomain(dom);
+	unsigned long flags;
 	size_t unmap_size;
 
 	if (domain->mode == PAGE_MODE_NONE)
 		return -EINVAL;
 
-	mutex_lock(&domain->api_lock);
+	spin_lock_irqsave(&domain->api_lock, flags);
 	unmap_size = iommu_unmap_page(domain, iova, page_size);
-	mutex_unlock(&domain->api_lock);
+	spin_unlock_irqrestore(&domain->api_lock, flags);
 
 	domain_flush_tlb_pde(domain);
 	domain_flush_complete(domain);
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index 0d91785..a9077a5 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -444,7 +444,7 @@  struct protection_domain {
 	struct iommu_domain domain; /* generic domain handle used by
 				       iommu core code */
 	spinlock_t lock;	/* mostly used to lock the page table*/
-	struct mutex api_lock;	/* protect page tables in the iommu-api path */
+	spinlock_t api_lock;	/* protect page tables in the iommu-api path */
 	u16 id;			/* the domain id written to the device table */
 	int mode;		/* paging mode (0-6 levels) */
 	u64 *pt_root;		/* page table root pointer */