* [PATCH] iommu/ipmmu-vmsa: Fix allocation in atomic context
@ 2018-07-20 16:16 Geert Uytterhoeven
  2018-07-21  9:12   ` Laurent Pinchart
  2018-07-27  7:41 ` Joerg Roedel
  0 siblings, 2 replies; 4+ messages in thread
From: Geert Uytterhoeven @ 2018-07-20 16:16 UTC (permalink / raw)
  To: Joerg Roedel, Magnus Damm, Laurent Pinchart
  Cc: iommu, linux-renesas-soc, linux-kernel, Geert Uytterhoeven

When attaching a device to an IOMMU group with
CONFIG_DEBUG_ATOMIC_SLEEP=y:

    BUG: sleeping function called from invalid context at mm/slab.h:421
    in_atomic(): 1, irqs_disabled(): 128, pid: 61, name: kworker/1:1
    ...
    Call trace:
     ...
     arm_lpae_alloc_pgtable+0x114/0x184
     arm_64_lpae_alloc_pgtable_s1+0x2c/0x128
     arm_32_lpae_alloc_pgtable_s1+0x40/0x6c
     alloc_io_pgtable_ops+0x60/0x88
     ipmmu_attach_device+0x140/0x334

ipmmu_attach_device() takes a spinlock, while arm_lpae_alloc_pgtable()
allocates memory using GFP_KERNEL.  Originally, the ipmmu-vmsa driver
had its own custom page table allocation implementation using
GFP_ATOMIC, hence the spinlock was fine.

Fix this by replacing the spinlock by a mutex, like the arm-smmu driver
does.

Fixes: f20ed39f53145e45 ("iommu/ipmmu-vmsa: Use the ARM LPAE page table allocator")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
 drivers/iommu/ipmmu-vmsa.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 6a0e7142f41bf667..8f54f25404456035 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -73,7 +73,7 @@ struct ipmmu_vmsa_domain {
 	struct io_pgtable_ops *iop;
 
 	unsigned int context_id;
-	spinlock_t lock;			/* Protects mappings */
+	struct mutex mutex;			/* Protects mappings */
 };
 
 static struct ipmmu_vmsa_domain *to_vmsa_domain(struct iommu_domain *dom)
@@ -599,7 +599,7 @@ static struct iommu_domain *__ipmmu_domain_alloc(unsigned type)
 	if (!domain)
 		return NULL;
 
-	spin_lock_init(&domain->lock);
+	mutex_init(&domain->mutex);
 
 	return &domain->io_domain;
 }
@@ -645,7 +645,6 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
 	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
 	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
-	unsigned long flags;
 	unsigned int i;
 	int ret = 0;
 
@@ -654,7 +653,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
 		return -ENXIO;
 	}
 
-	spin_lock_irqsave(&domain->lock, flags);
+	mutex_lock(&domain->mutex);
 
 	if (!domain->mmu) {
 		/* The domain hasn't been used yet, initialize it. */
@@ -678,7 +677,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
 	} else
 		dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id);
 
-	spin_unlock_irqrestore(&domain->lock, flags);
+	mutex_unlock(&domain->mutex);
 
 	if (ret < 0)
 		return ret;
-- 
2.17.1



* Re: [PATCH] iommu/ipmmu-vmsa: Fix allocation in atomic context
@ 2018-07-21  9:12   ` Laurent Pinchart
  0 siblings, 0 replies; 4+ messages in thread
From: Laurent Pinchart @ 2018-07-21  9:12 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Joerg Roedel, Magnus Damm, iommu, linux-renesas-soc, linux-kernel

Hi Geert,

Thank you for the patch.

On Friday, 20 July 2018 19:16:59 EEST Geert Uytterhoeven wrote:
> When attaching a device to an IOMMU group with
> CONFIG_DEBUG_ATOMIC_SLEEP=y:
> 
>     BUG: sleeping function called from invalid context at mm/slab.h:421
>     in_atomic(): 1, irqs_disabled(): 128, pid: 61, name: kworker/1:1
>     ...
>     Call trace:
>      ...
>      arm_lpae_alloc_pgtable+0x114/0x184
>      arm_64_lpae_alloc_pgtable_s1+0x2c/0x128
>      arm_32_lpae_alloc_pgtable_s1+0x40/0x6c
>      alloc_io_pgtable_ops+0x60/0x88
>      ipmmu_attach_device+0x140/0x334
> 
> ipmmu_attach_device() takes a spinlock, while arm_lpae_alloc_pgtable()
> allocates memory using GFP_KERNEL.  Originally, the ipmmu-vmsa driver
> had its own custom page table allocation implementation using
> GFP_ATOMIC, hence the spinlock was fine.
> 
> Fix this by replacing the spinlock by a mutex, like the arm-smmu driver
> does.
> 
> Fixes: f20ed39f53145e45 ("iommu/ipmmu-vmsa: Use the ARM LPAE page table allocator")
> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> ---
>  drivers/iommu/ipmmu-vmsa.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
> index 6a0e7142f41bf667..8f54f25404456035 100644
> --- a/drivers/iommu/ipmmu-vmsa.c
> +++ b/drivers/iommu/ipmmu-vmsa.c
> @@ -73,7 +73,7 @@ struct ipmmu_vmsa_domain {
>  	struct io_pgtable_ops *iop;
> 
>  	unsigned int context_id;
> -	spinlock_t lock;			/* Protects mappings */
> +	struct mutex mutex;			/* Protects mappings */
>  };
> 
>  static struct ipmmu_vmsa_domain *to_vmsa_domain(struct iommu_domain *dom)
> @@ -599,7 +599,7 @@ static struct iommu_domain *__ipmmu_domain_alloc(unsigned type)
>  	if (!domain)
>  		return NULL;
> 
> -	spin_lock_init(&domain->lock);
> +	mutex_init(&domain->mutex);
> 
>  	return &domain->io_domain;
>  }
> @@ -645,7 +645,6 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
>  	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>  	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
>  	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
> -	unsigned long flags;
>  	unsigned int i;
>  	int ret = 0;
> 
> @@ -654,7 +653,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
>  		return -ENXIO;
>  	}
> 
> -	spin_lock_irqsave(&domain->lock, flags);
> +	mutex_lock(&domain->mutex);

As the ipmmu_attach_device() function is called from a sleepable context, this should be fine.

Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> 
>  	if (!domain->mmu) {
>  		/* The domain hasn't been used yet, initialize it. */
> @@ -678,7 +677,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
>  	} else
>  		dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id);
> 
> -	spin_unlock_irqrestore(&domain->lock, flags);
> +	mutex_unlock(&domain->mutex);
> 
>  	if (ret < 0)
>  		return ret;


-- 
Regards,

Laurent Pinchart



* Re: [PATCH] iommu/ipmmu-vmsa: Fix allocation in atomic context
  2018-07-20 16:16 [PATCH] iommu/ipmmu-vmsa: Fix allocation in atomic context Geert Uytterhoeven
  2018-07-21  9:12   ` Laurent Pinchart
@ 2018-07-27  7:41 ` Joerg Roedel
  1 sibling, 0 replies; 4+ messages in thread
From: Joerg Roedel @ 2018-07-27  7:41 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Magnus Damm, Laurent Pinchart, iommu, linux-renesas-soc, linux-kernel

On Fri, Jul 20, 2018 at 06:16:59PM +0200, Geert Uytterhoeven wrote:
> When attaching a device to an IOMMU group with
> CONFIG_DEBUG_ATOMIC_SLEEP=y:
> 
>     BUG: sleeping function called from invalid context at mm/slab.h:421
>     in_atomic(): 1, irqs_disabled(): 128, pid: 61, name: kworker/1:1
>     ...
>     Call trace:
>      ...
>      arm_lpae_alloc_pgtable+0x114/0x184
>      arm_64_lpae_alloc_pgtable_s1+0x2c/0x128
>      arm_32_lpae_alloc_pgtable_s1+0x40/0x6c
>      alloc_io_pgtable_ops+0x60/0x88
>      ipmmu_attach_device+0x140/0x334
> 
> ipmmu_attach_device() takes a spinlock, while arm_lpae_alloc_pgtable()
> allocates memory using GFP_KERNEL.  Originally, the ipmmu-vmsa driver
> had its own custom page table allocation implementation using
> GFP_ATOMIC, hence the spinlock was fine.
> 
> Fix this by replacing the spinlock by a mutex, like the arm-smmu driver
> does.
> 
> Fixes: f20ed39f53145e45 ("iommu/ipmmu-vmsa: Use the ARM LPAE page table allocator")
> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> ---
>  drivers/iommu/ipmmu-vmsa.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)

Applied, thanks.

