linux-kernel.vger.kernel.org archive mirror
* [PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access
@ 2016-06-21  7:04 Chris Wilson
  2016-06-21  8:44 ` Daniel Vetter
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Chris Wilson @ 2016-06-21  7:04 UTC (permalink / raw)
  To: dri-devel
  Cc: Chris Wilson, Sumit Semwal, Daniel Vetter, linux-media,
	linaro-mm-sig, linux-kernel

Rendering operations to the dma-buf are tracked implicitly via the
reservation_object (dmabuf->resv). This is used to allow poll() to
wait upon outstanding rendering (or just query the current status of
rendering). The dma-buf sync ioctl allows userspace to prepare the
dma-buf for CPU access, which should include waiting upon rendering.
(Some drivers may need to do more work to ensure that the dma-buf mmap
is coherent as well as complete.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-kernel@vger.kernel.org
---

I'm wondering whether it makes sense to simply always do the wait first.
It is one of the first operations every driver has to perform. A driver
that wants to implement it differently (e.g. by special-casing its native
waits) will still require a wait on the reservation object to finish
external rendering.
-Chris
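
For reference, the userspace side this wait serves looks roughly like the
following. It is only a minimal sketch (it assumes a dmabuf_fd and size
obtained elsewhere, and trims most error handling); it brackets CPU access
to a mapped dma-buf with DMA_BUF_IOCTL_SYNC so that begin_cpu_access - and,
with this patch, the reservation object wait - runs before the CPU touches
the pages.

#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

static int cpu_fill(int dmabuf_fd, size_t size)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};
	void *ptr;

	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, dmabuf_fd, 0);
	if (ptr == MAP_FAILED)
		return -1;

	/* The kernel waits here for any outstanding rendering fences */
	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync) == 0) {
		memset(ptr, 0xc5, size);	/* coherent CPU access */

		sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
		ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	}

	munmap(ptr, size);
	return 0;
}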

---
 drivers/dma-buf/dma-buf.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ddaee60ae52a..123f14b8e882 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
 
+static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+				      enum dma_data_direction direction)
+{
+	bool write = (direction == DMA_BIDIRECTIONAL ||
+		      direction == DMA_TO_DEVICE);
+	struct reservation_object *resv = dmabuf->resv;
+	long ret;
+
+	/* Wait on any implicit rendering fences */
+	ret = reservation_object_wait_timeout_rcu(resv, write, true,
+						  MAX_SCHEDULE_TIMEOUT);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
 
 /**
  * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
@@ -607,6 +623,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	if (dmabuf->ops->begin_cpu_access)
 		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
+	else
+		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
 
 	return ret;
 }
-- 
2.8.1


* Re: [PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access
  2016-06-21  7:04 [PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access Chris Wilson
@ 2016-06-21  8:44 ` Daniel Vetter
  2016-08-15 15:37 ` [PATCH v2] " Chris Wilson
  2016-08-15 15:42 ` Chris Wilson
  2 siblings, 0 replies; 7+ messages in thread
From: Daniel Vetter @ 2016-06-21  8:44 UTC (permalink / raw)
  To: Chris Wilson
  Cc: dri-devel, Sumit Semwal, Daniel Vetter, linux-media,
	linaro-mm-sig, Linux Kernel Mailing List, Maarten Lankhorst,
	Sean Paul, Zach Reizner

On Tue, Jun 21, 2016 at 08:04:00AM +0100, Chris Wilson wrote:
> Rendering operations to the dma-buf are tracked implicitly via the
> reservation_object (dmabuf->resv). This is used to allow poll() to
> wait upon outstanding rendering (or just query the current status of
> rendering). The dma-buf sync ioctl allows userspace to prepare the
> dma-buf for CPU access, which should include waiting upon rendering.
> (Some drivers may need to do more work to ensure that the dma-buf mmap
> is coherent as well as complete.)
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: linux-media@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: linux-kernel@vger.kernel.org
> ---
>
> I'm wondering whether it makes sense just to always do the wait first.
> It is one of the first operations every driver has to make. A driver
> that wants to implement it differently (e.g. they can special case
> native waits) will still require a wait on the reservation object to
> finish external rendering.

Worst case (if the driver also uses reservation objects internally) we'll
end up calling this twice, which should be cheap enough. I'll add a few
folks who might want to chip in with an opinion ...
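
As an illustration of that double wait (a purely hypothetical "foo" exporter,
not taken from any real driver), a native begin_cpu_access that already
resolves its fences through the same reservation object leaves nothing for a
second, core-level wait to do:

static int foo_begin_cpu_access(struct dma_buf *dmabuf,
				enum dma_data_direction direction)
{
	bool write = (direction == DMA_BIDIRECTIONAL ||
		      direction == DMA_TO_DEVICE);
	long ret;

	/* The driver's own wait on the implicit rendering fences... */
	ret = reservation_object_wait_timeout_rcu(dmabuf->resv, write, true,
						  MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ret;

	/* ...followed by whatever CPU cache maintenance it needs. */

	return 0;
}
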
-Daniel

> -Chris
>
> ---
>  drivers/dma-buf/dma-buf.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ddaee60ae52a..123f14b8e882 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>
> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +				      enum dma_data_direction direction)
> +{
> +	bool write = (direction == DMA_BIDIRECTIONAL ||
> +		      direction == DMA_TO_DEVICE);
> +	struct reservation_object *resv = dmabuf->resv;
> +	long ret;
> +
> +	/* Wait on any implicit rendering fences */
> +	ret = reservation_object_wait_timeout_rcu(resv, write, true,
> +						  MAX_SCHEDULE_TIMEOUT);
> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
>
>  /**
>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> @@ -607,6 +623,8 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>
>  	if (dmabuf->ops->begin_cpu_access)
>  		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
> +	else
> +		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
>
>   return ret;
>  }
> --
> 2.8.1
>

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* [PATCH v2] dma-buf: Wait on the reservation object when sync'ing before CPU access
  2016-06-21  7:04 [PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access Chris Wilson
  2016-06-21  8:44 ` Daniel Vetter
@ 2016-08-15 15:37 ` Chris Wilson
  2016-08-15 15:42 ` Chris Wilson
  2 siblings, 0 replies; 7+ messages in thread
From: Chris Wilson @ 2016-08-15 15:37 UTC (permalink / raw)
  To: dri-devel
  Cc: intel-gfx, Chris Wilson, Sumit Semwal, Daniel Vetter,
	Eric Anholt, linux-media, dri-devel, linaro-mm-sig, linux-kernel

Rendering operations to the dma-buf are tracked implicitly via the
reservation_object (dmabuf->resv). This is used to allow poll() to
wait upon outstanding rendering (or just query the current status of
rendering). The dma-buf sync ioctl allows userspace to prepare the
dma-buf for CPU access, which should include waiting upon rendering.
(Some drivers may need to do more work to ensure that the dma-buf mmap
is coherent as well as complete.)

v2: Always wait upon the reservation object implicitly. We choose to do
it after the native handler in case it can do so more efficiently.

Testcase: igt/prime_vgem
Testcase: igt/gem_concurrent_blit # *vgem*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ddaee60ae52a..cf04d249a6a4 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
 
+static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+				      enum dma_data_direction direction)
+{
+	bool write = (direction == DMA_BIDIRECTIONAL ||
+		      direction == DMA_TO_DEVICE);
+	struct reservation_object *resv = dmabuf->resv;
+	long ret;
+
+	/* Wait on any implicit rendering fences */
+	ret = reservation_object_wait_timeout_rcu(resv, write, true,
+						  MAX_SCHEDULE_TIMEOUT);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
 
 /**
  * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
@@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	if (dmabuf->ops->begin_cpu_access)
 		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
 
+	/* Ensure that all fences are waited upon - but we first allow
+	 * the native handler the chance to do so more efficiently if it
+	 * chooses. A double invocation here will be a reasonably cheap no-op.
+	 */
+	if (ret == 0)
+		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
-- 
2.8.1


* [PATCH v2] dma-buf: Wait on the reservation object when sync'ing before CPU access
  2016-06-21  7:04 [PATCH] dma-buf: Wait on the reservation object when sync'ing before CPU access Chris Wilson
  2016-06-21  8:44 ` Daniel Vetter
  2016-08-15 15:37 ` [PATCH v2] " Chris Wilson
@ 2016-08-15 15:42 ` Chris Wilson
  2016-08-15 16:02   ` Daniel Vetter
  2 siblings, 1 reply; 7+ messages in thread
From: Chris Wilson @ 2016-08-15 15:42 UTC (permalink / raw)
  To: dri-devel
  Cc: intel-gfx, Chris Wilson, Sumit Semwal, Daniel Vetter,
	Eric Anholt, linux-media, linaro-mm-sig, linux-kernel

Rendering operations to the dma-buf are tracked implicitly via the
reservation_object (dmabuf->resv). This is used to allow poll() to
wait upon outstanding rendering (or just query the current status of
rendering). The dma-buf sync ioctl allows userspace to prepare the
dma-buf for CPU access, which should include waiting upon rendering.
(Some drivers may need to do more work to ensure that the dma-buf mmap
is coherent as well as complete.)

v2: Always wait upon the reservation object implicitly. We choose to do
it after the native handler in case it can do so more efficiently.

Testcase: igt/prime_vgem
Testcase: igt/gem_concurrent_blit # *vgem*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index ddaee60ae52a..cf04d249a6a4 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
 
+static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+				      enum dma_data_direction direction)
+{
+	bool write = (direction == DMA_BIDIRECTIONAL ||
+		      direction == DMA_TO_DEVICE);
+	struct reservation_object *resv = dmabuf->resv;
+	long ret;
+
+	/* Wait on any implicit rendering fences */
+	ret = reservation_object_wait_timeout_rcu(resv, write, true,
+						  MAX_SCHEDULE_TIMEOUT);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
 
 /**
  * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
@@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	if (dmabuf->ops->begin_cpu_access)
 		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
 
+	/* Ensure that all fences are waited upon - but we first allow
+	 * the native handler the chance to do so more efficiently if it
+	 * chooses. A double invocation here will be a reasonably cheap no-op.
+	 */
+	if (ret == 0)
+		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
-- 
2.8.1


* Re: [PATCH v2] dma-buf: Wait on the reservation object when sync'ing before CPU access
  2016-08-15 15:42 ` Chris Wilson
@ 2016-08-15 16:02   ` Daniel Vetter
  2016-12-19  1:40     ` Inki Dae
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Vetter @ 2016-08-15 16:02 UTC (permalink / raw)
  To: Chris Wilson
  Cc: dri-devel, intel-gfx, Sumit Semwal, Daniel Vetter, Eric Anholt,
	linux-media, linaro-mm-sig, linux-kernel

On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
> Rendering operations to the dma-buf are tracked implicitly via the
> reservation_object (dmabuf->resv). This is used to allow poll() to
> wait upon outstanding rendering (or just query the current status of
> rendering). The dma-buf sync ioctl allows userspace to prepare the
> dma-buf for CPU access, which should include waiting upon rendering.
> (Some drivers may need to do more work to ensure that the dma-buf mmap
> is coherent as well as complete.)
> 
> v2: Always wait upon the reservation object implicitly. We choose to do
> it after the native handler in case it can do so more efficiently.
> 
> Testcase: igt/prime_vgem
> Testcase: igt/gem_concurrent_blit # *vgem*
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Eric Anholt <eric@anholt.net>
> Cc: linux-media@vger.kernel.org
> Cc: dri-devel@lists.freedesktop.org
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
> 
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ddaee60ae52a..cf04d249a6a4 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>  
> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +				      enum dma_data_direction direction)
> +{
> +	bool write = (direction == DMA_BIDIRECTIONAL ||
> +		      direction == DMA_TO_DEVICE);
> +	struct reservation_object *resv = dmabuf->resv;
> +	long ret;
> +
> +	/* Wait on any implicit rendering fences */
> +	ret = reservation_object_wait_timeout_rcu(resv, write, true,
> +						  MAX_SCHEDULE_TIMEOUT);
> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
>  
>  /**
>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>  	if (dmabuf->ops->begin_cpu_access)
>  		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
>  
> +	/* Ensure that all fences are waited upon - but we first allow
> +	 * the native handler the chance to do so more efficiently if it
> +	 * chooses. A double invocation here will be a reasonably cheap no-op.
> +	 */
> +	if (ret == 0)
> +		ret = __dma_buf_begin_cpu_access(dmabuf, direction);

Not sure whether we should wait first and then flush, or the other way
round. But I don't think it'll matter for any current dma-buf exporter, so meh.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Sumits, can you pls pick this one up and put into drm-misc?
-Daniel

> +
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
> -- 
> 2.8.1
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


* Re: [PATCH v2] dma-buf: Wait on the reservation object when sync'ing before CPU access
  2016-08-15 16:02   ` Daniel Vetter
@ 2016-12-19  1:40     ` Inki Dae
  2016-12-19 10:23       ` Chris Wilson
  0 siblings, 1 reply; 7+ messages in thread
From: Inki Dae @ 2016-12-19  1:40 UTC (permalink / raw)
  To: Chris Wilson, dri-devel, intel-gfx, Sumit Semwal, Eric Anholt,
	linux-media, linaro-mm-sig, linux-kernel



On 2016-08-16 01:02, Daniel Vetter wrote:
> On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
>> Rendering operations to the dma-buf are tracked implicitly via the
>> reservation_object (dmabuf->resv). This is used to allow poll() to
>> wait upon outstanding rendering (or just query the current status of
>> rendering). The dma-buf sync ioctl allows userspace to prepare the
>> dma-buf for CPU access, which should include waiting upon rendering.
>> (Some drivers may need to do more work to ensure that the dma-buf mmap
>> is coherent as well as complete.)
>>
>> v2: Always wait upon the reservation object implicitly. We choose to do
>> it after the native handler in case it can do so more efficiently.
>>
>> Testcase: igt/prime_vgem
>> Testcase: igt/gem_concurrent_blit # *vgem*
>> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Cc: Eric Anholt <eric@anholt.net>
>> Cc: linux-media@vger.kernel.org
>> Cc: dri-devel@lists.freedesktop.org
>> Cc: linaro-mm-sig@lists.linaro.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>  drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
>>  1 file changed, 23 insertions(+)
>>
>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>> index ddaee60ae52a..cf04d249a6a4 100644
>> --- a/drivers/dma-buf/dma-buf.c
>> +++ b/drivers/dma-buf/dma-buf.c
>> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
>>  }
>>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>>  
>> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>> +				      enum dma_data_direction direction)
>> +{
>> +	bool write = (direction == DMA_BIDIRECTIONAL ||
>> +		      direction == DMA_TO_DEVICE);
>> +	struct reservation_object *resv = dmabuf->resv;
>> +	long ret;
>> +
>> +	/* Wait on any implicit rendering fences */
>> +	ret = reservation_object_wait_timeout_rcu(resv, write, true,
>> +						  MAX_SCHEDULE_TIMEOUT);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	return 0;
>> +}
>>  
>>  /**
>>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
>> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>  	if (dmabuf->ops->begin_cpu_access)
>>  		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
>>  
>> +	/* Ensure that all fences are waited upon - but we first allow
>> +	 * the native handler the chance to do so more efficiently if it
>> +	 * chooses. A double invocation here will be a reasonably cheap no-op.
>> +	 */
>> +	if (ret == 0)
>> +		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
> 
> Not sure we should wait first and the flush or the other way round. But I
> don't think it'll matter for any current dma-buf exporter, so meh.
> 

Sorry for the late comment. I wonder whether there is a problem if the GPU or another DMA device tries to access this dma buffer after the dma_buf_begin_cpu_access call.
I think in that case they - the GPU or DMA devices - would make a mess of the dma buffer while the CPU is accessing it.

This patch is already in mainline, so if this is a real problem then I think we should choose one of:
1. revert this patch from mainline
2. make sure other DMA devices are prevented from accessing the buffer while the CPU is accessing it.

Thanks.

> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> 
> Sumits, can you pls pick this one up and put into drm-misc?
> -Daniel
> 
>> +
>>  	return ret;
>>  }
>>  EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
>> -- 
>> 2.8.1
>>
> 


* Re: [PATCH v2] dma-buf: Wait on the reservation object when sync'ing before CPU access
  2016-12-19  1:40     ` Inki Dae
@ 2016-12-19 10:23       ` Chris Wilson
  0 siblings, 0 replies; 7+ messages in thread
From: Chris Wilson @ 2016-12-19 10:23 UTC (permalink / raw)
  To: Inki Dae
  Cc: dri-devel, intel-gfx, Sumit Semwal, Eric Anholt, linux-media,
	linaro-mm-sig, linux-kernel

On Mon, Dec 19, 2016 at 10:40:41AM +0900, Inki Dae wrote:
> 
> 
> On 2016-08-16 01:02, Daniel Vetter wrote:
> > On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
> >> Rendering operations to the dma-buf are tracked implicitly via the
> >> reservation_object (dmabuf->resv). This is used to allow poll() to
> >> wait upon outstanding rendering (or just query the current status of
> >> rendering). The dma-buf sync ioctl allows userspace to prepare the
> >> dma-buf for CPU access, which should include waiting upon rendering.
> >> (Some drivers may need to do more work to ensure that the dma-buf mmap
> >> is coherent as well as complete.)
> >>
> >> v2: Always wait upon the reservation object implicitly. We choose to do
> >> it after the native handler in case it can do so more efficiently.
> >>
> >> Testcase: igt/prime_vgem
> >> Testcase: igt/gem_concurrent_blit # *vgem*
> >> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> >> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> >> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> >> Cc: Eric Anholt <eric@anholt.net>
> >> Cc: linux-media@vger.kernel.org
> >> Cc: dri-devel@lists.freedesktop.org
> >> Cc: linaro-mm-sig@lists.linaro.org
> >> Cc: linux-kernel@vger.kernel.org
> >> ---
> >>  drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
> >>  1 file changed, 23 insertions(+)
> >>
> >> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >> index ddaee60ae52a..cf04d249a6a4 100644
> >> --- a/drivers/dma-buf/dma-buf.c
> >> +++ b/drivers/dma-buf/dma-buf.c
> >> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> >>  }
> >>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
> >>  
> >> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >> +				      enum dma_data_direction direction)
> >> +{
> >> +	bool write = (direction == DMA_BIDIRECTIONAL ||
> >> +		      direction == DMA_TO_DEVICE);
> >> +	struct reservation_object *resv = dmabuf->resv;
> >> +	long ret;
> >> +
> >> +	/* Wait on any implicit rendering fences */
> >> +	ret = reservation_object_wait_timeout_rcu(resv, write, true,
> >> +						  MAX_SCHEDULE_TIMEOUT);
> >> +	if (ret < 0)
> >> +		return ret;
> >> +
> >> +	return 0;
> >> +}
> >>  
> >>  /**
> >>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> >> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >>  	if (dmabuf->ops->begin_cpu_access)
> >>  		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
> >>  
> >> +	/* Ensure that all fences are waited upon - but we first allow
> >> +	 * the native handler the chance to do so more efficiently if it
> >> +	 * chooses. A double invocation here will be a reasonably cheap no-op.
> >> +	 */
> >> +	if (ret == 0)
> >> +		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
> > 
> > Not sure we should wait first and the flush or the other way round. But I
> > don't think it'll matter for any current dma-buf exporter, so meh.
> > 
> 
> Sorry for the late comment. I wonder whether there is a problem if the GPU or another DMA device tries to access this dma buffer after the dma_buf_begin_cpu_access call.
> I think in that case they - the GPU or DMA devices - would make a mess of the dma buffer while the CPU is accessing it.
> 
> This patch is already in mainline, so if this is a real problem then I think we should choose one of:
> 1. revert this patch from mainline

That scenario exists irrespective of this patch. It just depends on there
being concurrent CPU access alongside destructive DMA access (or vice versa).

> 2. make sure other DMA devices are prevented from accessing the buffer while the CPU is accessing it.

That is the safeguard you want, and the one employed elsewhere; you could
accomplish it by adding a fence to the reservation object for the CPU access
in begin_access and signaling it from end_access. It would need to be an
autosignaled fence, because userspace may forget to end its access (or
otherwise be terminated whilst holding the fence).
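
To make that concrete, a rough sketch of such a CPU-access fence using the
fence/reservation_object API of this era might look like the following. The
cpu_access_fence type and helpers are invented for illustration, the access
is published as the exclusive fence for simplicity (a read-only access could
use a shared slot instead), and the autosignal fallback - e.g. a timer, as
vgem's fences use - is only noted, not implemented:

#include <linux/fence.h>
#include <linux/reservation.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct cpu_access_fence {
	struct fence base;
	spinlock_t lock;
};

static const char *cpu_fence_driver_name(struct fence *f)
{
	return "cpu-access";
}

static const char *cpu_fence_timeline_name(struct fence *f)
{
	return "unbound";
}

static bool cpu_fence_enable_signaling(struct fence *f)
{
	return true;	/* software-only fence, signaled from end_access */
}

static const struct fence_ops cpu_fence_ops = {
	.get_driver_name = cpu_fence_driver_name,
	.get_timeline_name = cpu_fence_timeline_name,
	.enable_signaling = cpu_fence_enable_signaling,
	.wait = fence_default_wait,
};

/* begin_cpu_access: publish a fence covering the CPU access */
static struct fence *cpu_access_begin(struct reservation_object *resv)
{
	struct cpu_access_fence *cf = kzalloc(sizeof(*cf), GFP_KERNEL);

	if (!cf)
		return NULL;

	spin_lock_init(&cf->lock);
	fence_init(&cf->base, &cpu_fence_ops, &cf->lock,
		   fence_context_alloc(1), 1);

	ww_mutex_lock(&resv->lock, NULL);
	reservation_object_add_excl_fence(resv, &cf->base);
	ww_mutex_unlock(&resv->lock);

	/* A real version would also arm a timeout here to autosignal. */
	return &cf->base;
}

/* end_cpu_access (or the autosignal timeout): release the devices again */
static void cpu_access_end(struct fence *fence)
{
	fence_signal(fence);
	fence_put(fence);
}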

Everyone using the mmap without begin/end can of course still wreak havoc
on the buffer.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

