* [PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory
@ 2018-02-10  6:21 Liam Mark
  2018-02-12 20:39 ` Laura Abbott
  0 siblings, 1 reply; 3+ messages in thread
From: Liam Mark @ 2018-02-10  6:21 UTC (permalink / raw)
  To: Laura Abbott, Sumit Semwal
  Cc: devel, Todd Kjos, linux-kernel, linaro-mm-sig,
	Arve Hjønnevåg, Martijn Coenen

The ION begin_cpu_access and end_cpu_access functions use the
dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
maintenance.

Currently it is possible to apply cache maintenance, via the
begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
dma mapped.

The dma sync sg APIs should not be called on sg lists which have not been
dma mapped, as this can result in cache maintenance being applied to the
wrong address. If an sg list has not been dma mapped then its dma_address
fields have not been populated, and some dma ops, such as swiotlb_dma_ops,
use the dma_address field to calculate the address on which to apply
cache maintenance.
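
To illustrate the constraint (a generic sketch against the DMA API, not
code from this patch; assume dev and table are a valid struct device
and struct sg_table):

	int count;

	count = dma_map_sg(dev, table->sgl, table->nents,
			   DMA_BIDIRECTIONAL);
	if (!count)
		return -ENOMEM;

	/* Valid: dma_map_sg has populated the dma_address fields. */
	dma_sync_sg_for_cpu(dev, table->sgl, table->nents,
			    DMA_BIDIRECTIONAL);

	dma_unmap_sg(dev, table->sgl, table->nents, DMA_BIDIRECTIONAL);

	/*
	 * Invalid: the sg list is no longer dma mapped, so dma ops such
	 * as swiotlb_dma_ops may compute the maintenance address from a
	 * stale dma_address and operate on the wrong memory.
	 */
	dma_sync_sg_for_cpu(dev, table->sgl, table->nents,
			    DMA_BIDIRECTIONAL);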

Fix the ION begin_cpu_access and end_cpu_access functions to only apply
cache maintenance to buffers which have been dma mapped.

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark <lmark@codeaurora.org>
---
 drivers/staging/android/ion/ion.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index f480885e346b..e5df5272823d 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -214,6 +214,7 @@ struct ion_dma_buf_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
@@ -235,6 +236,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
 
 	a->table = table;
 	a->dev = dev;
+	a->dma_mapped = false;
 	INIT_LIST_HEAD(&a->list);
 
 	attachment->priv = a;
@@ -272,6 +274,7 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 			direction))
 		return ERR_PTR(-ENOMEM);
 
+	a->dma_mapped = true;
 	return table;
 }
 
@@ -279,7 +282,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	struct ion_dma_buf_attachment *a = attachment->priv;
+
 	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	a->dma_mapped = false;
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -345,8 +351,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-				    direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+					    a->table->nents, direction);
 	}
 	mutex_unlock(&buffer->lock);
 
@@ -367,8 +374,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-				       direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_device(a->dev, a->table->sgl,
+					       a->table->nents, direction);
 	}
 	mutex_unlock(&buffer->lock);
 
-- 
1.8.5.2


Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

* Re: [PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory
  2018-02-10  6:21 [PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory Liam Mark
@ 2018-02-12 20:39 ` Laura Abbott
  2018-02-13  1:20   ` Liam Mark
  0 siblings, 1 reply; 3+ messages in thread
From: Laura Abbott @ 2018-02-12 20:39 UTC (permalink / raw)
  To: Liam Mark, Sumit Semwal
  Cc: devel, Todd Kjos, linux-kernel, linaro-mm-sig,
	Arve Hjønnevåg, Martijn Coenen

On 02/09/2018 10:21 PM, Liam Mark wrote:
> The ION begin_cpu_access and end_cpu_access functions use the
> dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> maintenance.
> 
> Currently it is possible to apply cache maintenance, via the
> begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> dma mapped.
> 
> The dma sync sg APIs should not be called on sg lists which have not been
> dma mapped, as this can result in cache maintenance being applied to the
> wrong address. If an sg list has not been dma mapped then its dma_address
> fields have not been populated, and some dma ops, such as swiotlb_dma_ops,
> use the dma_address field to calculate the address on which to apply
> cache maintenance.
> 
> Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> cache maintenance to buffers which have been dma mapped.
> 
I think this looks okay. I was initially concerned about concurrency and
setting the dma_mapped flag, but I think that should be handled by the
caller synchronizing the map/unmap/cpu_access calls (we may need to
re-evaluate this in the future).

I would like to hold off on queuing this for just a little bit until I
finish working on the Ion unit test (doing this in the complete opposite
order, of course). I'm assuming this passed your internal tests, Liam?

Thanks,
Laura

> Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> Signed-off-by: Liam Mark <lmark@codeaurora.org>
> ---
>   drivers/staging/android/ion/ion.c | 16 ++++++++++++----
>   1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> index f480885e346b..e5df5272823d 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -214,6 +214,7 @@ struct ion_dma_buf_attachment {
>   	struct device *dev;
>   	struct sg_table *table;
>   	struct list_head list;
> +	bool dma_mapped;
>   };
>   
>   static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
> @@ -235,6 +236,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
>   
>   	a->table = table;
>   	a->dev = dev;
> +	a->dma_mapped = false;
>   	INIT_LIST_HEAD(&a->list);
>   
>   	attachment->priv = a;
> @@ -272,6 +274,7 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
>   			direction))
>   		return ERR_PTR(-ENOMEM);
>   
> +	a->dma_mapped = true;
>   	return table;
>   }
>   
> @@ -279,7 +282,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
>   			      struct sg_table *table,
>   			      enum dma_data_direction direction)
>   {
> +	struct ion_dma_buf_attachment *a = attachment->priv;
> +
>   	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> +	a->dma_mapped = false;
>   }
>   
>   static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> @@ -345,8 +351,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   
>   	mutex_lock(&buffer->lock);
>   	list_for_each_entry(a, &buffer->attachments, list) {
> -		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> -				    direction);
> +		if (a->dma_mapped)
> +			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
> +					    a->table->nents, direction);
>   	}
>   	mutex_unlock(&buffer->lock);
>   
> @@ -367,8 +374,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>   
>   	mutex_lock(&buffer->lock);
>   	list_for_each_entry(a, &buffer->attachments, list) {
> -		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> -				       direction);
> +		if (a->dma_mapped)
> +			dma_sync_sg_for_device(a->dev, a->table->sgl,
> +					       a->table->nents, direction);
>   	}
>   	mutex_unlock(&buffer->lock);
>   
> 


* Re: [PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory
  2018-02-12 20:39 ` Laura Abbott
@ 2018-02-13  1:20   ` Liam Mark
  0 siblings, 0 replies; 3+ messages in thread
From: Liam Mark @ 2018-02-13  1:20 UTC (permalink / raw)
  To: Laura Abbott
  Cc: devel, Todd Kjos, linux-kernel, linaro-mm-sig,
	Arve Hjønnevåg, Martijn Coenen, Sumit Semwal

On Mon, 12 Feb 2018, Laura Abbott wrote:

> On 02/09/2018 10:21 PM, Liam Mark wrote:
> > The ION begin_cpu_access and end_cpu_access functions use the
> > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > maintenance.
> > 
> > Currently it is possible to apply cache maintenance, via the
> > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > dma mapped.
> > 
> > The dma sync sg APIs should not be called on sg lists which have not been
> > dma mapped, as this can result in cache maintenance being applied to the
> > wrong address. If an sg list has not been dma mapped then its dma_address
> > fields have not been populated, and some dma ops, such as swiotlb_dma_ops,
> > use the dma_address field to calculate the address on which to apply
> > cache maintenance.
> > 
> > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > cache maintenance to buffers which have been dma mapped.
> > 
> I think this looks okay. I was initially concerned about concurrency and
> setting the dma_mapped flag but I think that should be handled by the
> caller synchronizing map/unmap/cpu_access calls (we might need to re-evaluate
> in the future)


I had convinced myself that concurrency wasn't a problem, but you are
right, it does need to be re-evaluated. For example, execution could be at
the point where the dma unmap call has completed but dma_mapped has not
yet been set to false; if userspace slipped in a call to begin/end cpu
access at that moment, cache maintenance would be applied to memory which
isn't dma mapped. So at least that window would need to be addressed; for
this particular issue it may be enough to move the clearing of dma_mapped
to the start of the ion_unmap_dma_buf function.
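
Something along these lines is what I have in mind (untested sketch; note
the flag update is still not ordered against holders of buffer->lock, so
this only narrows the window rather than closing it):

	static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
				      struct sg_table *table,
				      enum dma_data_direction direction)
	{
		struct ion_dma_buf_attachment *a = attachment->priv;

		/*
		 * Clear the flag before unmapping so begin/end cpu access
		 * calls stop syncing this attachment before the unmap
		 * starts, rather than after it has finished.
		 */
		a->dma_mapped = false;
		dma_unmap_sg(attachment->dev, table->sgl, table->nents,
			     direction);
	}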

I can clean this up and any other concurrency issues we can identify.

> 
> I would like to hold on queuing this for just a little bit until I
> finish working on the Ion unit test (doing this in the complete opposite
> order of course). I'm assuming this passed your internal tests Liam?

Yes, it has passed my internal ION unit tests, though I haven't given
the change to internal ION clients yet.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project