From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Daniel Vetter <daniel@ffwll.ch>,
	christian.koenig@amd.com, airlied@linux.ie,
	sumit.semwal@linaro.org, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, kraxel@redhat.com, hdegoede@redhat.com,
	sean@poorly.run, eric@anholt.net, sam@ravnborg.org,
	dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v3 2/8] drm/ast: Only map cursor BOs during updates
Date: Fri, 11 Dec 2020 14:57:03 +0100	[thread overview]
Message-ID: <20201211135703.GR401619@phenom.ffwll.local> (raw)
In-Reply-To: <0c628bab-e4e0-170b-e695-abf3cde13c01@suse.de>

On Fri, Dec 11, 2020 at 11:49:48AM +0100, Thomas Zimmermann wrote:
> 
> 
> Am 11.12.20 um 11:18 schrieb Daniel Vetter:
> > On Wed, Dec 09, 2020 at 03:25:21PM +0100, Thomas Zimmermann wrote:
> > > The HW cursor's BO used to be mapped permanently into the kernel's
> > > address space. GEM's vmap operation will be protected by locks, and
> > > we don't want to lock the BOs for an indefinite period of time.
> > > 
> > > Change the cursor code to map the HW BOs only during updates. The
> > > vmap operation in VRAM helpers is cheap, as a once-established mapping
> > > is reused until the BO actually moves. As the HW cursor BOs are
> > > permanently pinned, they never move at all.
> > > 
> > > v2:
> > > 	* fix typos in commit description
> > > 
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > Acked-by: Christian König <christian.koenig@amd.com>
> > 
> > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > 
> > Now there's a pretty big issue here though: We can't take dma_resv_lock in
> > commit_tail, because of possible deadlocks on at least gpus that do real
> > async rendering because of the dma_fences. Unfortunately my annotations
> > patches got stuck a bit, I need to refresh them.
> > 
> > Rules are you can pin and unpin stuff in prepare/cleanup_plane, and also
> > take dma_resv_lock there, but not in commit_tail in-between. So I think
> > our vmap_local needs to lose the unconditional assert_locked and require
> > either that or a pin count.
> 
> I guess my commit description is misleading when it speaks of updates.
> ast_cursor_blit() is actually called from the cursor plane's prepare_fb
> function. [1] The vmap code in ast_cursor_show() could be moved into blit()
> as well, I think.

Oh I failed to check this properly. Even better.

> I guess the clean solution is to integrate the cursor code with the
> modesetting code in ast_mode. From there, locks and mappings can be
> established in prepare_fb and the HW state can be updated in atomic_commit.

Yup. I'll still refresh my series with lockdep annotations, keeps paranoid
me at peace :-)
-Daniel

> 
> Best regards
> Thomas
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/ast/ast_mode.c#n646
> 
> > -Daniel
> > 
> > > ---
> > >   drivers/gpu/drm/ast/ast_cursor.c | 51 ++++++++++++++++++--------------
> > >   drivers/gpu/drm/ast/ast_drv.h    |  2 --
> > >   2 files changed, 28 insertions(+), 25 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> > > index 68bf3d33f1ed..fac1ee79c372 100644
> > > --- a/drivers/gpu/drm/ast/ast_cursor.c
> > > +++ b/drivers/gpu/drm/ast/ast_cursor.c
> > > @@ -39,7 +39,6 @@ static void ast_cursor_fini(struct ast_private *ast)
> > >   	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
> > >   		gbo = ast->cursor.gbo[i];
> > > -		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> > >   		drm_gem_vram_unpin(gbo);
> > >   		drm_gem_vram_put(gbo);
> > >   	}
> > > @@ -53,14 +52,13 @@ static void ast_cursor_release(struct drm_device *dev, void *ptr)
> > >   }
> > >   /*
> > > - * Allocate cursor BOs and pins them at the end of VRAM.
> > > + * Allocate cursor BOs and pin them at the end of VRAM.
> > >    */
> > >   int ast_cursor_init(struct ast_private *ast)
> > >   {
> > >   	struct drm_device *dev = &ast->base;
> > >   	size_t size, i;
> > >   	struct drm_gem_vram_object *gbo;
> > > -	struct dma_buf_map map;
> > >   	int ret;
> > >   	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> > > @@ -77,15 +75,7 @@ int ast_cursor_init(struct ast_private *ast)
> > >   			drm_gem_vram_put(gbo);
> > >   			goto err_drm_gem_vram_put;
> > >   		}
> > > -		ret = drm_gem_vram_vmap(gbo, &map);
> > > -		if (ret) {
> > > -			drm_gem_vram_unpin(gbo);
> > > -			drm_gem_vram_put(gbo);
> > > -			goto err_drm_gem_vram_put;
> > > -		}
> > > -
> > >   		ast->cursor.gbo[i] = gbo;
> > > -		ast->cursor.map[i] = map;
> > >   	}
> > >   	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> > > @@ -94,7 +84,6 @@ int ast_cursor_init(struct ast_private *ast)
> > >   	while (i) {
> > >   		--i;
> > >   		gbo = ast->cursor.gbo[i];
> > > -		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
> > >   		drm_gem_vram_unpin(gbo);
> > >   		drm_gem_vram_put(gbo);
> > >   	}
> > > @@ -168,31 +157,38 @@ static void update_cursor_image(u8 __iomem *dst, const u8 *src, int width, int h
> > >   int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
> > >   {
> > >   	struct drm_device *dev = &ast->base;
> > > -	struct drm_gem_vram_object *gbo;
> > > -	struct dma_buf_map map;
> > > -	int ret;
> > > -	void *src;
> > > +	struct drm_gem_vram_object *dst_gbo = ast->cursor.gbo[ast->cursor.next_index];
> > > +	struct drm_gem_vram_object *src_gbo = drm_gem_vram_of_gem(fb->obj[0]);
> > > +	struct dma_buf_map src_map, dst_map;
> > >   	void __iomem *dst;
> > > +	void *src;
> > > +	int ret;
> > >   	if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) ||
> > >   	    drm_WARN_ON_ONCE(dev, fb->height > AST_MAX_HWC_HEIGHT))
> > >   		return -EINVAL;
> > > -	gbo = drm_gem_vram_of_gem(fb->obj[0]);
> > > -
> > > -	ret = drm_gem_vram_vmap(gbo, &map);
> > > +	ret = drm_gem_vram_vmap(src_gbo, &src_map);
> > >   	if (ret)
> > >   		return ret;
> > > -	src = map.vaddr; /* TODO: Use mapping abstraction properly */
> > > +	src = src_map.vaddr; /* TODO: Use mapping abstraction properly */
> > > -	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
> > > +	ret = drm_gem_vram_vmap(dst_gbo, &dst_map);
> > > +	if (ret)
> > > +		goto err_drm_gem_vram_vunmap;
> > > +	dst = dst_map.vaddr_iomem; /* TODO: Use mapping abstraction properly */
> > >   	/* do data transfer to cursor BO */
> > >   	update_cursor_image(dst, src, fb->width, fb->height);
> > > -	drm_gem_vram_vunmap(gbo, &map);
> > > +	drm_gem_vram_vunmap(dst_gbo, &dst_map);
> > > +	drm_gem_vram_vunmap(src_gbo, &src_map);
> > >   	return 0;
> > > +
> > > +err_drm_gem_vram_vunmap:
> > > +	drm_gem_vram_vunmap(src_gbo, &src_map);
> > > +	return ret;
> > >   }
> > >   static void ast_cursor_set_base(struct ast_private *ast, u64 address)
> > > @@ -243,17 +239,26 @@ static void ast_cursor_set_location(struct ast_private *ast, u16 x, u16 y,
> > >   void ast_cursor_show(struct ast_private *ast, int x, int y,
> > >   		     unsigned int offset_x, unsigned int offset_y)
> > >   {
> > > +	struct drm_device *dev = &ast->base;
> > > +	struct drm_gem_vram_object *gbo = ast->cursor.gbo[ast->cursor.next_index];
> > > +	struct dma_buf_map map;
> > >   	u8 x_offset, y_offset;
> > >   	u8 __iomem *dst;
> > >   	u8 __iomem *sig;
> > >   	u8 jreg;
> > > +	int ret;
> > > -	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
> > > +	ret = drm_gem_vram_vmap(gbo, &map);
> > > +	if (drm_WARN_ONCE(dev, ret, "drm_gem_vram_vmap() failed, ret=%d\n", ret))
> > > +		return;
> > > +	dst = map.vaddr_iomem; /* TODO: Use mapping abstraction properly */
> > >   	sig = dst + AST_HWC_SIZE;
> > >   	writel(x, sig + AST_HWC_SIGNATURE_X);
> > >   	writel(y, sig + AST_HWC_SIGNATURE_Y);
> > > +	drm_gem_vram_vunmap(gbo, &map);
> > > +
> > >   	if (x < 0) {
> > >   		x_offset = (-x) + offset_x;
> > >   		x = 0;
> > > diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> > > index ccaff81924ee..f871fc36c2f7 100644
> > > --- a/drivers/gpu/drm/ast/ast_drv.h
> > > +++ b/drivers/gpu/drm/ast/ast_drv.h
> > > @@ -28,7 +28,6 @@
> > >   #ifndef __AST_DRV_H__
> > >   #define __AST_DRV_H__
> > > -#include <linux/dma-buf-map.h>
> > >   #include <linux/i2c.h>
> > >   #include <linux/i2c-algo-bit.h>
> > >   #include <linux/io.h>
> > > @@ -133,7 +132,6 @@ struct ast_private {
> > >   	struct {
> > >   		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> > > -		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
> > >   		unsigned int next_index;
> > >   	} cursor;
> > > -- 
> > > 2.29.2
> > > 
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
> 




-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
