From: Daniel Vetter <daniel@ffwll.ch>
To: Gurchetan Singh <gurchetansingh@chromium.org>
Cc: linux-arm-kernel@lists.infradead.org, daniel.vetter@intel.com,
	thierry.reding@gmail.com, laurent.pinchart@ideasonboard.com,
	dri-devel@lists.freedesktop.org
Subject: Re: [PATCH 2/5] drm: add ARM flush implementation
Date: Tue, 30 Jan 2018 11:14:36 +0100	[thread overview]
Message-ID: <20180130101436.GK25930@phenom.ffwll.local> (raw)
In-Reply-To: <20180124025606.3020-2-gurchetansingh@chromium.org>

On Tue, Jan 23, 2018 at 06:56:03PM -0800, Gurchetan Singh wrote:
> The dma_cache_maint_page function is important for cache maintenance on
> ARM32 (this was determined via testing).
> 
> Since we desire direct control of the caches in drm_cache.c, let's make
> a copy of the function, rename it and use it.
> 
> v2: Don't use DMA API, call functions directly (Daniel)
> 
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>

FWIW, in principle this approach has my ack from the drm side.

But if we can't get any agreement from the arch side, then I guess we'll
just have to suck it up and mandate that any dma-buf on ARM32 must be
mapped write-combined (wc), always. Not sure that's a good idea either,
but it should at least get things moving.
-Daniel
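
[The quoted patch below copies dma_cache_maint_page's page-walking logic
into drm_cache.c. The way that loop splits a (pfn, offset, size) range
into per-page chunks -- needed because highmem pages must be kmapped one
at a time -- can be modeled with a small userspace sketch. This is
illustrative Python, not kernel code, and the 4096-byte PAGE_SIZE is an
assumption:]

```python
PAGE_SIZE = 4096

def split_into_pages(pfn, offset, size):
    """Model the per-page walk in drm_cache_maint_page.

    Note: the real kernel loop only clamps the chunk to a page
    boundary for highmem pages (lowmem is virtually contiguous and
    gets the whole remainder in one op()); here every chunk is
    clamped, modeling the highmem path.
    """
    # Normalize: fold whole pages in 'offset' into the pfn.
    pfn += offset // PAGE_SIZE
    offset %= PAGE_SIZE
    chunks = []
    left = size
    while left:
        # Clamp to the end of the current page, mirroring
        # "if (len + offset > PAGE_SIZE) len = PAGE_SIZE - offset".
        length = min(left, PAGE_SIZE - offset)
        chunks.append((pfn, offset, length))
        # Mirror "offset = 0; pfn++; left -= len;" from the patch.
        offset = 0
        pfn += 1
        left -= length
    return chunks
```

[For example, split_into_pages(0, 4000, 200) yields (0, 4000, 96) and
(1, 0, 104): the first chunk runs to the page boundary, then offset
resets to 0 and the pfn advances, exactly as in the kernel loop.]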

> ---
>  drivers/gpu/drm/drm_cache.c | 61 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 61 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
> index 89cdd32fe1f3..5124582451c6 100644
> --- a/drivers/gpu/drm/drm_cache.c
> +++ b/drivers/gpu/drm/drm_cache.c
> @@ -69,6 +69,55 @@ static void drm_cache_flush_clflush(struct page *pages[],
>  }
>  #endif
>  
> +#if defined(CONFIG_ARM)
> +static void drm_cache_maint_page(struct page *page, unsigned long offset,
> +				 size_t size, enum dma_data_direction dir,
> +				 void (*op)(const void *, size_t, int))
> +{
> +	unsigned long pfn;
> +	size_t left = size;
> +
> +	pfn = page_to_pfn(page) + offset / PAGE_SIZE;
> +	offset %= PAGE_SIZE;
> +
> +	/*
> +	 * A single sg entry may refer to multiple physically contiguous
> +	 * pages.  But we still need to process highmem pages individually.
> +	 * If highmem is not configured then the bulk of this loop gets
> +	 * optimized out.
> +	 */
> +	do {
> +		size_t len = left;
> +		void *vaddr;
> +
> +		page = pfn_to_page(pfn);
> +
> +		if (PageHighMem(page)) {
> +			if (len + offset > PAGE_SIZE)
> +				len = PAGE_SIZE - offset;
> +
> +			if (cache_is_vipt_nonaliasing()) {
> +				vaddr = kmap_atomic(page);
> +				op(vaddr + offset, len, dir);
> +				kunmap_atomic(vaddr);
> +			} else {
> +				vaddr = kmap_high_get(page);
> +				if (vaddr) {
> +					op(vaddr + offset, len, dir);
> +					kunmap_high(page);
> +				}
> +			}
> +		} else {
> +			vaddr = page_address(page) + offset;
> +			op(vaddr, len, dir);
> +		}
> +		offset = 0;
> +		pfn++;
> +		left -= len;
> +	} while (left);
> +}
> +#endif
> +
>  /**
>   * drm_flush_pages - Flush dcache lines of a set of pages.
>   * @pages: List of pages to be flushed.
> @@ -104,6 +153,12 @@ drm_flush_pages(struct page *pages[], unsigned long num_pages)
>  				   (unsigned long)page_virtual + PAGE_SIZE);
>  		kunmap_atomic(page_virtual);
>  	}
> +#elif defined(CONFIG_ARM)
> +	unsigned long i;
> +
> +	for (i = 0; i < num_pages; i++)
> +		drm_cache_maint_page(pages[i], 0, PAGE_SIZE, DMA_TO_DEVICE,
> +				     dmac_map_area);
>  #else
>  	pr_err("Architecture has no drm_cache.c support\n");
>  	WARN_ON_ONCE(1);
> @@ -135,6 +190,12 @@ drm_flush_sg(struct sg_table *st)
>  
>  	if (wbinvd_on_all_cpus())
>  		pr_err("Timed out waiting for cache flush\n");
> +#elif defined(CONFIG_ARM)
> +	struct sg_page_iter sg_iter;
> +
> +	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
> +		drm_cache_maint_page(sg_page_iter_page(&sg_iter), 0, PAGE_SIZE,
> +				     DMA_TO_DEVICE, dmac_map_area);
>  #else
>  	pr_err("Architecture has no drm_cache.c support\n");
>  	WARN_ON_ONCE(1);
> -- 
> 2.13.5
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Thread overview: 33+ messages
2018-01-24  2:56 [PATCH 2/5] drm: add ARM flush implementation Gurchetan Singh
2018-01-24  2:56 ` [PATCH 3/5] drm: add ARM64 flush implementations Gurchetan Singh
2018-01-24 12:00   ` Robin Murphy
2018-01-24 12:36     ` Russell King - ARM Linux
2018-01-24 13:14       ` Robin Murphy
2018-01-30  9:53     ` Daniel Vetter
2018-01-24  2:56 ` [PATCH 4/5] drm/vgem: flush page during page fault Gurchetan Singh
2018-01-24  2:56 ` [PATCH 5/5] drm: use drm_flush_sg where possible Gurchetan Singh
2018-01-24 11:50 ` [PATCH 2/5] drm: add ARM flush implementation Lucas Stach
2018-01-24 12:45 ` Russell King - ARM Linux
2018-01-24 18:45   ` Gurchetan Singh
2018-01-24 19:26     ` Russell King - ARM Linux
2018-01-24 23:43       ` Gurchetan Singh
2018-01-25 14:06       ` Thierry Reding
2018-01-30 10:14 ` Daniel Vetter [this message]
2018-01-30 11:31   ` Russell King - ARM Linux
2018-01-30 12:27     ` Daniel Vetter