From: Laura Abbott <labbott@redhat.com>
To: Qing Xia <saberlily.xia@hisilicon.com>,
	sumit.semwal@linaro.org, gregkh@linuxfoundation.org,
	arve@android.com, tkjos@android.com, maco@android.com
Cc: linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, devel@driverdev.osuosl.org,
	daniel.liuyi@hisilicon.com, yudongbin@hisilicon.com,
	kongfei@hisilicon.com
Subject: Re: [PATCH] staging: android: ion: move map_kernel to ion_dma_buf_kmap
Date: Wed, 2 Jan 2019 14:36:07 -0800
Message-ID: <1f507556-890e-e7f4-f80e-49cf7084d78a@redhat.com>
In-Reply-To: <1545639585-24785-1-git-send-email-saberlily.xia@hisilicon.com>

On 12/24/18 12:19 AM, Qing Xia wrote:
> Per Google's user guide, when userspace needs to clean an ION
> buffer's cache it should call ioctl(fd, DMA_BUF_IOCTL_SYNC, sync).
> We found that ion_dma_buf_begin_cpu_access/ion_dma_buf_end_cpu_access
> also do the ION buffer's map_kernel, which is not necessary. And if
> userspace only needs to clean the cache, it will call
> ion_dma_buf_end_cpu_access alone via the dma-buf ioctl, leaving the
> ION buffer's kmap_cnt at the wrong value of -1, after which drivers
> can no longer get the right kernel vaddr from dma_buf_kmap.
> 
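
For reference, the userspace sequence described above is roughly the
following, against the dma-buf sync UAPI in <linux/dma-buf.h> (fd
setup, mmap and error handling omitted; the helper names are made up
for illustration):

#include <linux/dma-buf.h>	/* struct dma_buf_sync, DMA_BUF_IOCTL_SYNC */
#include <sys/ioctl.h>

/* The documented pattern: bracket CPU access with START and END. */
static void cpu_access(int dmabuf_fd)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* begin_cpu_access */
	/* ... CPU reads/writes the mmap'ed buffer here ... */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* end_cpu_access */
}

/*
 * The problem case: a client that only wants to clean the cache and
 * issues just the END half.  Today that reaches ion_buffer_kmap_put()
 * with no matching get, so the buffer's kmap_cnt ends up at -1.
 */
static void clean_cache_only(int dmabuf_fd)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* end_cpu_access only */
}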

The problem is that this subtly violates how dma_buf is supposed
to work: all setup is supposed to happen in begin_cpu_access.
I agree that calling map_kernel each time isn't great, but I
think this needs to be worked out with dma_buf.
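
Concretely, an in-kernel importer is expected to follow roughly this
pattern (dma_buf_begin_cpu_access/dma_buf_kmap/dma_buf_kunmap/
dma_buf_end_cpu_access are the actual dma-buf entry points; the
importer function itself is only a sketch):

#include <linux/dma-buf.h>

/* Hypothetical importer; "dmabuf" would come from dma_buf_get(). */
static int touch_first_page(struct dma_buf *dmabuf)
{
	void *vaddr;
	int ret;

	/* Any setup, including a kernel mapping, belongs here... */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_BIDIRECTIONAL);
	if (ret)
		return ret;

	/* ...so that per-page kmap calls in between stay cheap. */
	vaddr = dma_buf_kmap(dmabuf, 0);
	if (vaddr) {
		/* CPU access to the first page of the buffer */
		dma_buf_kunmap(dmabuf, 0, vaddr);
	}

	/* ...and end_cpu_access undoes exactly what begin set up. */
	return dma_buf_end_cpu_access(dmabuf, DMA_BIDIRECTIONAL);
}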

Thanks,
Laura

> Signed-off-by: Qing Xia <saberlily.xia@hisilicon.com>
> ---
>   drivers/staging/android/ion/ion.c | 46 ++++++++++++++++++---------------------
>   1 file changed, 21 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> index 9907332..f7e2812 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -303,45 +303,47 @@ static void ion_dma_buf_release(struct dma_buf *dmabuf)
>   static void *ion_dma_buf_kmap(struct dma_buf *dmabuf, unsigned long offset)
>   {
>   	struct ion_buffer *buffer = dmabuf->priv;
> +	void *vaddr;
>   
> -	return buffer->vaddr + offset * PAGE_SIZE;
> +	if (buffer->heap->ops->map_kernel) {
> +		mutex_lock(&buffer->lock);
> +		vaddr = ion_buffer_kmap_get(buffer);
> +		mutex_unlock(&buffer->lock);
> +		if (IS_ERR(vaddr))
> +			return vaddr;
> +
> +		return vaddr + offset * PAGE_SIZE;
> +	}
> +
> +	return NULL;
>   }
>   
>   static void ion_dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long offset,
>   			       void *ptr)
>   {
> +	struct ion_buffer *buffer = dmabuf->priv;
> +
> +	if (buffer->heap->ops->map_kernel) {
> +		mutex_lock(&buffer->lock);
> +		ion_buffer_kmap_put(buffer);
> +		mutex_unlock(&buffer->lock);
> +	}
>   }
>   
>   static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>   					enum dma_data_direction direction)
>   {
>   	struct ion_buffer *buffer = dmabuf->priv;
> -	void *vaddr;
>   	struct ion_dma_buf_attachment *a;
> -	int ret = 0;
> -
> -	/*
> -	 * TODO: Move this elsewhere because we don't always need a vaddr
> -	 */
> -	if (buffer->heap->ops->map_kernel) {
> -		mutex_lock(&buffer->lock);
> -		vaddr = ion_buffer_kmap_get(buffer);
> -		if (IS_ERR(vaddr)) {
> -			ret = PTR_ERR(vaddr);
> -			goto unlock;
> -		}
> -		mutex_unlock(&buffer->lock);
> -	}
>   
>   	mutex_lock(&buffer->lock);
>   	list_for_each_entry(a, &buffer->attachments, list) {
>   		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
>   				    direction);
>   	}
> -
> -unlock:
>   	mutex_unlock(&buffer->lock);
> -	return ret;
> +
> +	return 0;
>   }
>   
>   static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> @@ -350,12 +352,6 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>   	struct ion_buffer *buffer = dmabuf->priv;
>   	struct ion_dma_buf_attachment *a;
>   
> -	if (buffer->heap->ops->map_kernel) {
> -		mutex_lock(&buffer->lock);
> -		ion_buffer_kmap_put(buffer);
> -		mutex_unlock(&buffer->lock);
> -	}
> -
>   	mutex_lock(&buffer->lock);
>   	list_for_each_entry(a, &buffer->attachments, list) {
>   		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> 
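
The -1 described in the commit message follows from how the kmap
refcount in ion.c works; roughly (a simplified sketch of
ion_buffer_kmap_get/ion_buffer_kmap_put, not the verbatim helpers):

/* Mapping is created on the 0 -> 1 transition of kmap_cnt
 * and torn down again on the 1 -> 0 transition. */
static void *ion_buffer_kmap_get(struct ion_buffer *buffer)
{
	void *vaddr;

	if (buffer->kmap_cnt) {
		buffer->kmap_cnt++;
		return buffer->vaddr;	/* reuse the existing mapping */
	}

	vaddr = buffer->heap->ops->map_kernel(buffer->heap, buffer);
	if (IS_ERR(vaddr))
		return vaddr;

	buffer->vaddr = vaddr;
	buffer->kmap_cnt++;
	return vaddr;
}

static void ion_buffer_kmap_put(struct ion_buffer *buffer)
{
	buffer->kmap_cnt--;
	if (!buffer->kmap_cnt) {
		buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
		buffer->vaddr = NULL;
	}
}

An unmatched put (an END-only sync with no prior START) skips the
teardown branch and leaves kmap_cnt at -1, so the next get sees a
non-zero count and hands back a stale or NULL buffer->vaddr instead
of mapping the buffer again.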


Thread overview: 5+ messages
2018-12-24  8:19 [PATCH] staging: android: ion: move map_kernel to ion_dma_buf_kmap Qing Xia
2019-01-02 22:36 ` Laura Abbott [this message]
2019-01-18  1:51   ` Xiaqing (A)
