* [PATCH 00/10] drm/fb-helper: Various fixes and cleanups
@ 2020-11-16 20:04 Thomas Zimmermann
  2020-11-16 20:04   ` Thomas Zimmermann
                   ` (9 more replies)
  0 siblings, 10 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

Here's a number of fb-helper patches that have been piling up recently.

Patches 1 to 3 fix bugs that I spotted while going through the code.
Because of the way the fbdev code works, these bugs have not been
triggered in practice so far.

Patches 4 to 7 clean up the damage handling for fbdev's shadow buffer
and fix a few issues.

Specifically, the final patch adds locking to the code that flushes the
shadow framebuffer into BO memory. During the conversion of radeon to
the generic fbdev emulation, the question came up of how this interferes
with concurrent modesets. [1] If fbdev has the BO pinned in system memory
for flushing while a modeset wants to pin it to VRAM for scanout, the
modeset would most likely fail. We haven't seen this happen so far, but
it is at least possible. Acquiring the modeset locks during the flush
operation prevents concurrent modesets from taking place.
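
The final patch implements this roughly as sketched below. This is only an
illustration: the function name is made up, drm_modeset_lock_all() is a
simplification, and the actual patch locks only the planes that scan out
the fbdev framebuffer (plus the mode-config mutex on non-atomic drivers).

static int flush_shadow_fb_locked(struct drm_fb_helper *fb_helper,
				  struct drm_clip_rect *clip)
{
	struct dma_buf_map map;
	int ret;

	/* Keep concurrent modesets out while the BO is pinned for vmap. */
	drm_modeset_lock_all(fb_helper->dev);

	ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
	if (!ret) {
		/* Blit the shadow framebuffer into the vmap'ed BO memory. */
		drm_fb_helper_damage_blit_real(fb_helper, clip, &map);
		drm_client_buffer_vunmap(fb_helper->buffer);
	}

	drm_modeset_unlock_all(fb_helper->dev);

	return ret;
}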

The code has been tested with SHMEM and TTM BOs, with both atomic and
non-atomic modesetting.

[1] https://patchwork.freedesktop.org/patch/400054/?series=83765&rev=1

Thomas Zimmermann (10):
  drm/fb-helper: Call dirty helper after writing to fbdev
  drm/fb-helper: Unmap client buffer during shutdown
  drm/client: Depend on GEM object kmap ref-counting
  drm/fb-helper: Rename dirty worker to damage worker
  drm/fb-helper: Return early in dirty worker
  drm/fb-helper: Separate shadow-buffer flushing and calling dirty
    callback
  drm/fb-helper: Move damage blit code and its setup into separate
    routine
  drm/fb-helper: Restore damage area upon errors
  drm/fb-helper: Copy dma-buf map before flushing shadow fb
  drm/fb-helper: Acquire modeset lock around shadow-buffer flushing

 drivers/gpu/drm/drm_client.c    |   4 -
 drivers/gpu/drm/drm_fb_helper.c | 177 ++++++++++++++++++++++----------
 include/drm/drm_fb_helper.h     |  14 +--
 3 files changed, 130 insertions(+), 65 deletions(-)

--
2.29.2

* [PATCH 01/10] drm/fb-helper: Call dirty helper after writing to fbdev
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
@ 2020-11-16 20:04   ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 02/10] drm/fb-helper: Unmap client buffer during shutdown Thomas Zimmermann
                     ` (8 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Daniel Vetter, virtualization, Thomas Zimmermann, dri-devel

If fbdev uses a shadow framebuffer, call the damage handler. Otherwise
the update might not make it to the screen.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 222ec45f4c69 ("drm/fb_helper: Support framebuffers in I/O memory")
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: dri-devel@lists.freedesktop.org
Cc: virtualization@lists.linux-foundation.org
---
 drivers/gpu/drm/drm_fb_helper.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 25edf670867c..ee1a19e22df2 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -2189,6 +2189,9 @@ static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
 	if (ret > 0)
 		*ppos += ret;
 
+	if (ret > 0)
+		drm_fb_helper_dirty(info, 0, 0, info->var.xres, info->var.yres);
+
 	return ret ? ret : err;
 }
 
-- 
2.29.2

* [PATCH 02/10] drm/fb-helper: Unmap client buffer during shutdown
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
  2020-11-16 20:04   ` Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 03/10] drm/client: Depend on GEM object kmap ref-counting Thomas Zimmermann
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

The fbdev helper's generic probe function establishes a mapping for
framebuffers that do not use a shadow buffer. The clean-up function did
not unmap the buffer object. Add the missing unmap operation.

As fbdev devices are usually released during system shutdown, this has
not been a problem in practice.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index ee1a19e22df2..bdc6c9382f8c 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -1988,14 +1988,19 @@ static void drm_fbdev_cleanup(struct drm_fb_helper *fb_helper)
 	if (!fb_helper->dev)
 		return;
 
-	if (fbi && fbi->fbdefio) {
-		fb_deferred_io_cleanup(fbi);
-		shadow = fbi->screen_buffer;
+	if (fbi) {
+		if (fbi->fbdefio)
+			fb_deferred_io_cleanup(fbi);
+		if (drm_fbdev_use_shadow_fb(fb_helper))
+			shadow = fbi->screen_buffer;
 	}
 
 	drm_fb_helper_fini(fb_helper);
 
-	vfree(shadow);
+	if (shadow)
+		vfree(shadow);
+	else
+		drm_client_buffer_vunmap(fb_helper->buffer);
 
 	drm_client_framebuffer_delete(fb_helper->buffer);
 }
-- 
2.29.2

* [PATCH 03/10] drm/client: Depend on GEM object kmap ref-counting
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
  2020-11-16 20:04   ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 02/10] drm/fb-helper: Unmap client buffer during shutdown Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 04/10] drm/fb-helper: Rename dirty worker to damage worker Thomas Zimmermann
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

The DRM client's vmap/vunmap functions don't allow for multiple vmap
operations. Calling drm_client_buffer_vmap() a second time returns the
same, cached mapping, while calling drm_client_buffer_vunmap() twice
already unmaps the buffer on the first call. This leads to unbalanced
vmap refcounts. Fix this by calling drm_gem_vmap() unconditionally in
drm_client_buffer_vmap().

All drivers that support DRM clients have to implement correct ref-
counting for their vmap operations, or not vunmap at all. This is the
case for drivers that use CMA, SHMEM and VRAM helpers, and QXL. Other
drivers are not affected.
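
To illustrate the problem with a hypothetical caller (function name made
up, return values ignored for brevity), the old behavior amounted to:

void example_flush(struct drm_client_buffer *buffer)
{
	struct dma_buf_map map;

	drm_client_buffer_vmap(buffer, &map);	/* maps the buffer */
	drm_client_buffer_vmap(buffer, &map);	/* returns the cached mapping */
	drm_client_buffer_vunmap(buffer);	/* already unmaps the buffer */
	drm_client_buffer_vunmap(buffer);	/* vmap refcount is now unbalanced */
}

With this patch, every call to drm_client_buffer_vmap() is forwarded to
drm_gem_vmap(), so the driver's own ref-counting keeps vmap and vunmap
balanced.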

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_client.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index fe573acf1067..ce45e380f4a2 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -314,9 +314,6 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map
 	struct dma_buf_map *map = &buffer->map;
 	int ret;
 
-	if (dma_buf_map_is_set(map))
-		goto out;
-
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
 	 * convert the driver handle to a dma-buf instead and use the
@@ -329,7 +326,6 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map
 	if (ret)
 		return ret;
 
-out:
 	*map_copy = *map;
 
 	return 0;
-- 
2.29.2

* [PATCH 04/10] drm/fb-helper: Rename dirty worker to damage worker
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (2 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 03/10] drm/client: Depend on GEM object kmap ref-counting Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 05/10] drm/fb-helper: Return early in dirty worker Thomas Zimmermann
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

The dirty worker handles all damage updates, not just calling the
framebuffer's dirty callback. Rename it to damage worker, and rename
related variables accordingly. No functional changes are made.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 65 +++++++++++++++------------------
 include/drm/drm_fb_helper.h     | 14 +++----
 2 files changed, 36 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index bdc6c9382f8c..58db64cdbfa1 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -371,9 +371,9 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
 	console_unlock();
 }
 
-static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
-					  struct drm_clip_rect *clip,
-					  struct dma_buf_map *dst)
+static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
+					   struct drm_clip_rect *clip,
+					   struct dma_buf_map *dst)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
 	unsigned int cpp = fb->format->cpp[0];
@@ -391,21 +391,21 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	}
 }
 
-static void drm_fb_helper_dirty_work(struct work_struct *work)
+static void drm_fb_helper_damage_work(struct work_struct *work)
 {
 	struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper,
-						    dirty_work);
-	struct drm_clip_rect *clip = &helper->dirty_clip;
+						    damage_work);
+	struct drm_clip_rect *clip = &helper->damage_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
 	struct dma_buf_map map;
 	int ret;
 
-	spin_lock_irqsave(&helper->dirty_lock, flags);
+	spin_lock_irqsave(&helper->damage_lock, flags);
 	clip_copy = *clip;
 	clip->x1 = clip->y1 = ~0;
 	clip->x2 = clip->y2 = 0;
-	spin_unlock_irqrestore(&helper->dirty_lock, flags);
+	spin_unlock_irqrestore(&helper->damage_lock, flags);
 
 	/* call dirty callback only when it has been really touched */
 	if (clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2) {
@@ -415,7 +415,7 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 			ret = drm_client_buffer_vmap(helper->buffer, &map);
 			if (ret)
 				return;
-			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
+			drm_fb_helper_damage_blit_real(helper, &clip_copy, &map);
 		}
 
 		if (helper->fb->funcs->dirty)
@@ -440,10 +440,10 @@ void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper,
 			   const struct drm_fb_helper_funcs *funcs)
 {
 	INIT_LIST_HEAD(&helper->kernel_fb_list);
-	spin_lock_init(&helper->dirty_lock);
+	spin_lock_init(&helper->damage_lock);
 	INIT_WORK(&helper->resume_work, drm_fb_helper_resume_worker);
-	INIT_WORK(&helper->dirty_work, drm_fb_helper_dirty_work);
-	helper->dirty_clip.x1 = helper->dirty_clip.y1 = ~0;
+	INIT_WORK(&helper->damage_work, drm_fb_helper_damage_work);
+	helper->damage_clip.x1 = helper->damage_clip.y1 = ~0;
 	mutex_init(&helper->lock);
 	helper->funcs = funcs;
 	helper->dev = dev;
@@ -579,7 +579,7 @@ void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
 		return;
 
 	cancel_work_sync(&fb_helper->resume_work);
-	cancel_work_sync(&fb_helper->dirty_work);
+	cancel_work_sync(&fb_helper->damage_work);
 
 	info = fb_helper->fbdev;
 	if (info) {
@@ -614,30 +614,30 @@ static bool drm_fbdev_use_shadow_fb(struct drm_fb_helper *fb_helper)
 	       fb->funcs->dirty;
 }
 
-static void drm_fb_helper_dirty(struct fb_info *info, u32 x, u32 y,
-				u32 width, u32 height)
+static void drm_fb_helper_damage(struct fb_info *info, u32 x, u32 y,
+				 u32 width, u32 height)
 {
 	struct drm_fb_helper *helper = info->par;
-	struct drm_clip_rect *clip = &helper->dirty_clip;
+	struct drm_clip_rect *clip = &helper->damage_clip;
 	unsigned long flags;
 
 	if (!drm_fbdev_use_shadow_fb(helper))
 		return;
 
-	spin_lock_irqsave(&helper->dirty_lock, flags);
+	spin_lock_irqsave(&helper->damage_lock, flags);
 	clip->x1 = min_t(u32, clip->x1, x);
 	clip->y1 = min_t(u32, clip->y1, y);
 	clip->x2 = max_t(u32, clip->x2, x + width);
 	clip->y2 = max_t(u32, clip->y2, y + height);
-	spin_unlock_irqrestore(&helper->dirty_lock, flags);
+	spin_unlock_irqrestore(&helper->damage_lock, flags);
 
-	schedule_work(&helper->dirty_work);
+	schedule_work(&helper->damage_work);
 }
 
 /**
  * drm_fb_helper_deferred_io() - fbdev deferred_io callback function
  * @info: fb_info struct pointer
- * @pagelist: list of dirty mmap framebuffer pages
+ * @pagelist: list of mmap framebuffer pages that have to be flushed
  *
  * This function is used as the &fb_deferred_io.deferred_io
  * callback function for flushing the fbdev mmap writes.
@@ -662,7 +662,7 @@ void drm_fb_helper_deferred_io(struct fb_info *info,
 		y1 = min / info->fix.line_length;
 		y2 = min_t(u32, DIV_ROUND_UP(max, info->fix.line_length),
 			   info->var.yres);
-		drm_fb_helper_dirty(info, 0, y1, info->var.xres, y2 - y1);
+		drm_fb_helper_damage(info, 0, y1, info->var.xres, y2 - y1);
 	}
 }
 EXPORT_SYMBOL(drm_fb_helper_deferred_io);
@@ -699,8 +699,7 @@ ssize_t drm_fb_helper_sys_write(struct fb_info *info, const char __user *buf,
 
 	ret = fb_sys_write(info, buf, count, ppos);
 	if (ret > 0)
-		drm_fb_helper_dirty(info, 0, 0, info->var.xres,
-				    info->var.yres);
+		drm_fb_helper_damage(info, 0, 0, info->var.xres, info->var.yres);
 
 	return ret;
 }
@@ -717,8 +716,7 @@ void drm_fb_helper_sys_fillrect(struct fb_info *info,
 				const struct fb_fillrect *rect)
 {
 	sys_fillrect(info, rect);
-	drm_fb_helper_dirty(info, rect->dx, rect->dy,
-			    rect->width, rect->height);
+	drm_fb_helper_damage(info, rect->dx, rect->dy, rect->width, rect->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_fillrect);
 
@@ -733,8 +731,7 @@ void drm_fb_helper_sys_copyarea(struct fb_info *info,
 				const struct fb_copyarea *area)
 {
 	sys_copyarea(info, area);
-	drm_fb_helper_dirty(info, area->dx, area->dy,
-			    area->width, area->height);
+	drm_fb_helper_damage(info, area->dx, area->dy, area->width, area->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_copyarea);
 
@@ -749,8 +746,7 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
 				 const struct fb_image *image)
 {
 	sys_imageblit(info, image);
-	drm_fb_helper_dirty(info, image->dx, image->dy,
-			    image->width, image->height);
+	drm_fb_helper_damage(info, image->dx, image->dy, image->width, image->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
 
@@ -765,8 +761,7 @@ void drm_fb_helper_cfb_fillrect(struct fb_info *info,
 				const struct fb_fillrect *rect)
 {
 	cfb_fillrect(info, rect);
-	drm_fb_helper_dirty(info, rect->dx, rect->dy,
-			    rect->width, rect->height);
+	drm_fb_helper_damage(info, rect->dx, rect->dy, rect->width, rect->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_cfb_fillrect);
 
@@ -781,8 +776,7 @@ void drm_fb_helper_cfb_copyarea(struct fb_info *info,
 				const struct fb_copyarea *area)
 {
 	cfb_copyarea(info, area);
-	drm_fb_helper_dirty(info, area->dx, area->dy,
-			    area->width, area->height);
+	drm_fb_helper_damage(info, area->dx, area->dy, area->width, area->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_cfb_copyarea);
 
@@ -797,8 +791,7 @@ void drm_fb_helper_cfb_imageblit(struct fb_info *info,
 				 const struct fb_image *image)
 {
 	cfb_imageblit(info, image);
-	drm_fb_helper_dirty(info, image->dx, image->dy,
-			    image->width, image->height);
+	drm_fb_helper_damage(info, image->dx, image->dy, image->width, image->height);
 }
 EXPORT_SYMBOL(drm_fb_helper_cfb_imageblit);
 
@@ -2195,7 +2188,7 @@ static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
 		*ppos += ret;
 
 	if (ret > 0)
-		drm_fb_helper_dirty(info, 0, 0, info->var.xres, info->var.yres);
+		drm_fb_helper_damage(info, 0, 0, info->var.xres, info->var.yres);
 
 	return ret ? ret : err;
 }
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index 306aa3a60be9..3b273f9ca39a 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -100,10 +100,10 @@ struct drm_fb_helper_funcs {
  * @funcs: driver callbacks for fb helper
  * @fbdev: emulated fbdev device info struct
  * @pseudo_palette: fake palette of 16 colors
- * @dirty_clip: clip rectangle used with deferred_io to accumulate damage to
- *              the screen buffer
- * @dirty_lock: spinlock protecting @dirty_clip
- * @dirty_work: worker used to flush the framebuffer
+ * @damage_clip: clip rectangle used with deferred_io to accumulate damage to
+ *                the screen buffer
+ * @damage_lock: spinlock protecting @damage_clip
+ * @damage_work: worker used to flush the framebuffer
  * @resume_work: worker used during resume if the console lock is already taken
  *
  * This is the main structure used by the fbdev helpers. Drivers supporting
@@ -131,9 +131,9 @@ struct drm_fb_helper {
 	const struct drm_fb_helper_funcs *funcs;
 	struct fb_info *fbdev;
 	u32 pseudo_palette[17];
-	struct drm_clip_rect dirty_clip;
-	spinlock_t dirty_lock;
-	struct work_struct dirty_work;
+	struct drm_clip_rect damage_clip;
+	spinlock_t damage_lock;
+	struct work_struct damage_work;
 	struct work_struct resume_work;
 
 	/**
-- 
2.29.2

* [PATCH 05/10] drm/fb-helper: Return early in dirty worker
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (3 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 04/10] drm/fb-helper: Rename dirty worker to damage worker Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 06/10] drm/fb-helper: Separate shadow-buffer flushing and calling dirty callback Thomas Zimmermann
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

Returning early in the dirty worker if no update is required makes the
code more readable. No functional changes are made.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 58db64cdbfa1..741ac8254a6e 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -407,24 +407,23 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
 	clip->x2 = clip->y2 = 0;
 	spin_unlock_irqrestore(&helper->damage_lock, flags);
 
-	/* call dirty callback only when it has been really touched */
-	if (clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2) {
-
-		/* Generic fbdev uses a shadow buffer */
-		if (helper->buffer) {
-			ret = drm_client_buffer_vmap(helper->buffer, &map);
-			if (ret)
-				return;
-			drm_fb_helper_damage_blit_real(helper, &clip_copy, &map);
-		}
-
-		if (helper->fb->funcs->dirty)
-			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
-						 &clip_copy, 1);
+	/* Call damage handlers only if necessary */
+	if (!(clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2))
+		return;
 
-		if (helper->buffer)
-			drm_client_buffer_vunmap(helper->buffer);
+	/* Generic fbdev uses a shadow buffer */
+	if (helper->buffer) {
+		ret = drm_client_buffer_vmap(helper->buffer, &map);
+		if (ret)
+			return;
+		drm_fb_helper_damage_blit_real(helper, &clip_copy, &map);
 	}
+
+	if (helper->fb->funcs->dirty)
+		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
+
+	if (helper->buffer)
+		drm_client_buffer_vunmap(helper->buffer);
 }
 
 /**
-- 
2.29.2

* [PATCH 06/10] drm/fb-helper: Separate shadow-buffer flushing and calling dirty callback
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (4 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 05/10] drm/fb-helper: Return early in dirty worker Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 07/10] drm/fb-helper: Move damage blit code and its setup into separate routine Thomas Zimmermann
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

Flushing the shadow framebuffer and invoking the dirty callback are two
separate operations, so do them separately. The flush operation is paired
with calls to vmap and vunmap. They are not needed for the dirty callback,
which performs its own invocations if necessary.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 741ac8254a6e..213c6aa4bfa4 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -417,13 +417,11 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
 		if (ret)
 			return;
 		drm_fb_helper_damage_blit_real(helper, &clip_copy, &map);
+		drm_client_buffer_vunmap(helper->buffer);
 	}
 
 	if (helper->fb->funcs->dirty)
 		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
-
-	if (helper->buffer)
-		drm_client_buffer_vunmap(helper->buffer);
 }
 
 /**
-- 
2.29.2

* [PATCH 07/10] drm/fb-helper: Move damage blit code and its setup into separate routine
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (5 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 06/10] drm/fb-helper: Separate shadow-buffer flushing and calling dirty callback Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 08/10] drm/fb-helper: Restore damage area upon errors Thomas Zimmermann
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

Introduce a separate function for the blit code and its vmap setup. This
is done in preparation for additional changes. No functional changes are
made.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 213c6aa4bfa4..2e1a335bafd2 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -391,6 +391,24 @@ static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
 	}
 }
 
+static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
+				     struct drm_clip_rect *clip)
+{
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+	struct dma_buf_map map;
+	int ret;
+
+	ret = drm_client_buffer_vmap(buffer, &map);
+	if (ret)
+		return ret;
+
+	drm_fb_helper_damage_blit_real(fb_helper, clip, &map);
+
+	drm_client_buffer_vunmap(buffer);
+
+	return 0;
+}
+
 static void drm_fb_helper_damage_work(struct work_struct *work)
 {
 	struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper,
@@ -398,7 +416,6 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->damage_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	struct dma_buf_map map;
 	int ret;
 
 	spin_lock_irqsave(&helper->damage_lock, flags);
@@ -411,13 +428,10 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
 	if (!(clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2))
 		return;
 
-	/* Generic fbdev uses a shadow buffer */
 	if (helper->buffer) {
-		ret = drm_client_buffer_vmap(helper->buffer, &map);
+		ret = drm_fb_helper_damage_blit(helper, &clip_copy);
 		if (ret)
 			return;
-		drm_fb_helper_damage_blit_real(helper, &clip_copy, &map);
-		drm_client_buffer_vunmap(helper->buffer);
 	}
 
 	if (helper->fb->funcs->dirty)
-- 
2.29.2

* [PATCH 08/10] drm/fb-helper: Restore damage area upon errors
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (6 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 07/10] drm/fb-helper: Move damage blit code and its setup into separate routine Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:52   ` Daniel Vetter
  2020-11-16 20:04 ` [PATCH 09/10] drm/fb-helper: Copy dma-buf map before flushing shadow fb Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing Thomas Zimmermann
  9 siblings, 1 reply; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

If the damage handling fails, restore the damage area. The next invocation
of the damage worker will then perform the update.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 2e1a335bafd2..13b65dad2ca8 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -431,11 +431,28 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
 	if (helper->buffer) {
 		ret = drm_fb_helper_damage_blit(helper, &clip_copy);
 		if (ret)
-			return;
+			goto err;
 	}
 
-	if (helper->fb->funcs->dirty)
-		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
+	if (helper->fb->funcs->dirty) {
+		ret = helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
+		if (ret)
+			goto err;
+	}
+
+	return;
+
+err:
+	/*
+	 * Restore damage clip rectangle on errors. Next instance of damage
+	 * worker will perform the update.
+	 */
+	spin_lock_irqsave(&helper->damage_lock, flags);
+	clip->x1 = min_t(u32, clip->x1, clip_copy.x1);
+	clip->y1 = min_t(u32, clip->y1, clip_copy.y1);
+	clip->x2 = max_t(u32, clip->x2, clip_copy.x2);
+	clip->y2 = max_t(u32, clip->y2, clip_copy.y2);
+	spin_unlock_irqrestore(&helper->damage_lock, flags);
 }
 
 /**
-- 
2.29.2

* [PATCH 09/10] drm/fb-helper: Copy dma-buf map before flushing shadow fb
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (7 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 08/10] drm/fb-helper: Restore damage area upon errors Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:04 ` [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing Thomas Zimmermann
  9 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

Copy the vmap()'ed instance of struct dma_buf_map before modifying it,
in case the implementation of vunmap() depends on the exact address.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 13b65dad2ca8..5a22c744378c 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -395,14 +395,15 @@ static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
 				     struct drm_clip_rect *clip)
 {
 	struct drm_client_buffer *buffer = fb_helper->buffer;
-	struct dma_buf_map map;
+	struct dma_buf_map map, dst;
 	int ret;
 
 	ret = drm_client_buffer_vmap(buffer, &map);
 	if (ret)
 		return ret;
 
-	drm_fb_helper_damage_blit_real(fb_helper, clip, &map);
+	dst = map;
+	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
 
 	drm_client_buffer_vunmap(buffer);
 
-- 
2.29.2

* [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing
  2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
                   ` (8 preceding siblings ...)
  2020-11-16 20:04 ` [PATCH 09/10] drm/fb-helper: Copy dma-buf map before flushing shadow fb Thomas Zimmermann
@ 2020-11-16 20:04 ` Thomas Zimmermann
  2020-11-16 20:48   ` Daniel Vetter
  9 siblings, 1 reply; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-16 20:04 UTC (permalink / raw)
  To: daniel, airlied, sam, mripard, maarten.lankhorst, christian.koenig
  Cc: Thomas Zimmermann, dri-devel

Flushing the fbdev's shadow buffer requires vmap'ing the BO memory, which
in turn requires pinning the BO. While being pinned, the BO cannot be moved
into VRAM for scanout. Consequently, a concurrent modeset operation that
involves the fbdev framebuffer would likely fail.

Resolve this problem by acquiring the modeset lock of the planes that use
the fbdev framebuffer. On non-atomic drivers, also acquire the mode-config
lock. This serializes the flushing of the framebuffer with concurrent
modeset operations.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_fb_helper.c | 43 +++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 5a22c744378c..af485c71a42a 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -394,20 +394,59 @@ static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
 static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
 				     struct drm_clip_rect *clip)
 {
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_framebuffer *fb = fb_helper->fb;
 	struct drm_client_buffer *buffer = fb_helper->buffer;
+	struct drm_modeset_acquire_ctx ctx;
+	struct drm_framebuffer *plane_fb;
+	struct drm_plane *plane;
 	struct dma_buf_map map, dst;
 	int ret;
 
+	if (!drm_drv_uses_atomic_modeset(dev))
+		mutex_lock(&dev->mode_config.mutex);
+
+	drm_modeset_acquire_init(&ctx, 0);
+
+retry:
+	drm_for_each_plane(plane, dev) {
+		ret = drm_modeset_lock(&plane->mutex, &ctx);
+		if (ret == -EDEADLK) {
+			ret = drm_modeset_backoff(&ctx);
+			if (!ret)
+				goto retry;
+		} else if (ret) {
+			goto out;
+		}
+
+		if (drm_drv_uses_atomic_modeset(dev))
+			plane_fb = plane->state->fb;
+		else
+			plane_fb = plane->fb;
+
+		if (plane_fb != fb) {
+			drm_modeset_unlock(&plane->mutex);
+			continue;
+		}
+	}
+
 	ret = drm_client_buffer_vmap(buffer, &map);
 	if (ret)
-		return ret;
+		goto out;
 
 	dst = map;
 	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
 
 	drm_client_buffer_vunmap(buffer);
 
-	return 0;
+out:
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+
+	if (!drm_drv_uses_atomic_modeset(dev))
+		mutex_unlock(&dev->mode_config.mutex);
+
+	return ret;
 }
 
 static void drm_fb_helper_damage_work(struct work_struct *work)
-- 
2.29.2

* Re: [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing
  2020-11-16 20:04 ` [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing Thomas Zimmermann
@ 2020-11-16 20:48   ` Daniel Vetter
  2020-11-17  8:05     ` Christian König
  0 siblings, 1 reply; 21+ messages in thread
From: Daniel Vetter @ 2020-11-16 20:48 UTC (permalink / raw)
  To: Thomas Zimmermann; +Cc: airlied, dri-devel, sam, christian.koenig

On Mon, Nov 16, 2020 at 09:04:37PM +0100, Thomas Zimmermann wrote:
> Flushing the fbdev's shadow buffer requires vmap'ing the BO memory, which
> in turn requires pinning the BO. While being pinned, the BO cannot be moved
> into VRAM for scanout. Consequently, a concurrent modeset operation that
> involves the fbdev framebuffer would likely fail.
> 
> Resolve this problem be acquiring the modeset lock of the planes that use
> the fbdev framebuffer. On non-atomic drivers, also acquire the mode-config
> lock. This serializes the flushing of the framebuffer with concurrent
> modeset operations.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

I think this is the wrong lock. What you want is the buffer lock, the
dma_resv lock of the underlying gem_bo underneath the fb we have. And hold
that from _vmap to the end of _vunmap.

Unfortunately that's going to be one nasty refactoring adventure :-/

So I think for plan B we need something nasty like this, but this approach
has the disadvantage that fbdev activity in the background can seriously
hurt the native kms client (despite your attempt to filter a bit, you're
creating a big lock across all planes, and we've really worked hard to
make those stand-alone as much as possible).

I think we can do better here, since we're only worried about modesets
from fbdev itself. Nothing else needs to be able to pull the buffer from
system memory into vram while we have it pinned here.

Holding mutex_lock(fb_helper->lock) here instead, with a big comment
explaining why that's enough, and that the clean fix would be holding the
dma_resv_lock, should do the trick.
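
I.e. something like this (rough sketch, untested, reusing the locals from
drm_fb_helper_damage_blit() below):

	mutex_lock(&fb_helper->lock);

	ret = drm_client_buffer_vmap(buffer, &map);
	if (!ret) {
		dst = map;
		drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
		drm_client_buffer_vunmap(buffer);
	}

	mutex_unlock(&fb_helper->lock);
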
-Daniel

> ---
>  drivers/gpu/drm/drm_fb_helper.c | 43 +++++++++++++++++++++++++++++++--
>  1 file changed, 41 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 5a22c744378c..af485c71a42a 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -394,20 +394,59 @@ static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
>  static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
>  				     struct drm_clip_rect *clip)
>  {
> +	struct drm_device *dev = fb_helper->dev;
> +	struct drm_framebuffer *fb = fb_helper->fb;
>  	struct drm_client_buffer *buffer = fb_helper->buffer;
> +	struct drm_modeset_acquire_ctx ctx;
> +	struct drm_framebuffer *plane_fb;
> +	struct drm_plane *plane;
>  	struct dma_buf_map map, dst;
>  	int ret;
>  
> +	if (!drm_drv_uses_atomic_modeset(dev))
> +		mutex_lock(&dev->mode_config.mutex);
> +
> +	drm_modeset_acquire_init(&ctx, 0);
> +
> +retry:
> +	drm_for_each_plane(plane, dev) {
> +		ret = drm_modeset_lock(&plane->mutex, &ctx);
> +		if (ret == -EDEADLK) {
> +			ret = drm_modeset_backoff(&ctx);
> +			if (!ret)
> +				goto retry;
> +		} else if (ret) {
> +			goto out;
> +		}
> +
> +		if (drm_drv_uses_atomic_modeset(dev))
> +			plane_fb = plane->state->fb;
> +		else
> +			plane_fb = plane->fb;
> +
> +		if (plane_fb != fb) {
> +			drm_modeset_unlock(&plane->mutex);
> +			continue;
> +		}
> +	}
> +
>  	ret = drm_client_buffer_vmap(buffer, &map);
>  	if (ret)
> -		return ret;
> +		goto out;
>  
>  	dst = map;
>  	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
>  
>  	drm_client_buffer_vunmap(buffer);
>  
> -	return 0;
> +out:
> +	drm_modeset_drop_locks(&ctx);
> +	drm_modeset_acquire_fini(&ctx);
> +
> +	if (!drm_drv_uses_atomic_modeset(dev))
> +		mutex_unlock(&dev->mode_config.mutex);
> +
> +	return ret;
>  }
>  
>  static void drm_fb_helper_damage_work(struct work_struct *work)
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 08/10] drm/fb-helper: Restore damage area upon errors
  2020-11-16 20:04 ` [PATCH 08/10] drm/fb-helper: Restore damage area upon errors Thomas Zimmermann
@ 2020-11-16 20:52   ` Daniel Vetter
  2020-11-17 15:14     ` Sebastian Reichel
  0 siblings, 1 reply; 21+ messages in thread
From: Daniel Vetter @ 2020-11-16 20:52 UTC (permalink / raw)
  To: Thomas Zimmermann; +Cc: airlied, dri-devel, sam, christian.koenig

On Mon, Nov 16, 2020 at 09:04:35PM +0100, Thomas Zimmermann wrote:
> If the damage handling fails, restore the damage area. The next invocation
> of the damage worker will then perform the update.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_fb_helper.c | 23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 2e1a335bafd2..13b65dad2ca8 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -431,11 +431,28 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
>  	if (helper->buffer) {
>  		ret = drm_fb_helper_damage_blit(helper, &clip_copy);
>  		if (ret)
> -			return;
> +			goto err;
>  	}
>  
> -	if (helper->fb->funcs->dirty)
> -		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> +	if (helper->fb->funcs->dirty) {
> +		ret = helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> +		if (ret)
> +			goto err;

I think this is unexpected enough that we should put some drm error
printing here.
-Daniel

> +	}
> +
> +	return;
> +
> +err:
> +	/*
> +	 * Restore damage clip rectangle on errors. Next instance of damage
> +	 * worker will perform the update.
> +	 */
> +	spin_lock_irqsave(&helper->damage_lock, flags);
> +	clip->x1 = min_t(u32, clip->x1, clip_copy.x1);
> +	clip->y1 = min_t(u32, clip->y1, clip_copy.y1);
> +	clip->x2 = max_t(u32, clip->x2, clip_copy.x2);
> +	clip->y2 = max_t(u32, clip->y2, clip_copy.y2);
> +	spin_unlock_irqrestore(&helper->damage_lock, flags);
>  }
>  
>  /**
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

* Re: [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing
  2020-11-16 20:48   ` Daniel Vetter
@ 2020-11-17  8:05     ` Christian König
  0 siblings, 0 replies; 21+ messages in thread
From: Christian König @ 2020-11-17  8:05 UTC (permalink / raw)
  To: Daniel Vetter, Thomas Zimmermann; +Cc: airlied, sam, dri-devel

On 16.11.20 at 21:48, Daniel Vetter wrote:
> On Mon, Nov 16, 2020 at 09:04:37PM +0100, Thomas Zimmermann wrote:
>> Flushing the fbdev's shadow buffer requires vmap'ing the BO memory, which
>> in turn requires pinning the BO. While being pinned, the BO cannot be moved
>> into VRAM for scanout. Consequently, a concurrent modeset operation that
>> involves the fbdev framebuffer would likely fail.
>>
>> Resolve this problem be acquiring the modeset lock of the planes that use
>> the fbdev framebuffer. On non-atomic drivers, also acquire the mode-config
>> lock. This serializes the flushing of the framebuffer with concurrent
>> modeset operations.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> I think this is the wrong lock. What you want is the buffer lock, the
> dma_resv lock of the underlying gem_bo underneath the fb we have. And hold
> that from _vmap to the end of _vunmap.

+1. Yes, exactly that came to my mind as well while reading this.

If you want to prevent a BO from moving while inside the kernel, taking
the reservation lock is usually the right thing to do.

Only when you need to return to userspace AND keep the BO in the same
place is it valid to pin it.

Regards,
Christian.

>
> Unfortuantely that's going to be one nasty refactoring adventure :-/
>
> So I think for plan B we need something nasty like this, but this here has
> disadvantage that fbdev activity in the background can seriously hurt the
> native kms client (despite that you're trying to filter a bit, you're
> creating a big lock across all planes and we've really worked hard to make
> those stand-alone as much as possible).
>
> I think we can do better here, since we're only worried about modesets
> from fbdev itself. Nothing else needs to be able to pull the buffer from
> system memory into vram while we have it pinned here.
>
> Holding mutex_lock(fb_helper->lock) here instead, with a big comment
> explaining why that's enough, and that the clean fix would be holding the
> dma_resv_lock, should  do the trick.
> -Daniel
>
>> ---
>>   drivers/gpu/drm/drm_fb_helper.c | 43 +++++++++++++++++++++++++++++++--
>>   1 file changed, 41 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>> index 5a22c744378c..af485c71a42a 100644
>> --- a/drivers/gpu/drm/drm_fb_helper.c
>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>> @@ -394,20 +394,59 @@ static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
>>   static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
>>   				     struct drm_clip_rect *clip)
>>   {
>> +	struct drm_device *dev = fb_helper->dev;
>> +	struct drm_framebuffer *fb = fb_helper->fb;
>>   	struct drm_client_buffer *buffer = fb_helper->buffer;
>> +	struct drm_modeset_acquire_ctx ctx;
>> +	struct drm_framebuffer *plane_fb;
>> +	struct drm_plane *plane;
>>   	struct dma_buf_map map, dst;
>>   	int ret;
>>   
>> +	if (!drm_drv_uses_atomic_modeset(dev))
>> +		mutex_lock(&dev->mode_config.mutex);
>> +
>> +	drm_modeset_acquire_init(&ctx, 0);
>> +
>> +retry:
>> +	drm_for_each_plane(plane, dev) {
>> +		ret = drm_modeset_lock(&plane->mutex, &ctx);
>> +		if (ret == -EDEADLK) {
>> +			ret = drm_modeset_backoff(&ctx);
>> +			if (!ret)
>> +				goto retry;
>> +		} else if (ret) {
>> +			goto out;
>> +		}
>> +
>> +		if (drm_drv_uses_atomic_modeset(dev))
>> +			plane_fb = plane->state->fb;
>> +		else
>> +			plane_fb = plane->fb;
>> +
>> +		if (plane_fb != fb) {
>> +			drm_modeset_unlock(&plane->mutex);
>> +			continue;
>> +		}
>> +	}
>> +
>>   	ret = drm_client_buffer_vmap(buffer, &map);
>>   	if (ret)
>> -		return ret;
>> +		goto out;
>>   
>>   	dst = map;
>>   	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
>>   
>>   	drm_client_buffer_vunmap(buffer);
>>   
>> -	return 0;
>> +out:
>> +	drm_modeset_drop_locks(&ctx);
>> +	drm_modeset_acquire_fini(&ctx);
>> +
>> +	if (!drm_drv_uses_atomic_modeset(dev))
>> +		mutex_unlock(&dev->mode_config.mutex);
>> +
>> +	return ret;
>>   }
>>   
>>   static void drm_fb_helper_damage_work(struct work_struct *work)
>> -- 
>> 2.29.2
>>

* Re: [PATCH 08/10] drm/fb-helper: Restore damage area upon errors
  2020-11-16 20:52   ` Daniel Vetter
@ 2020-11-17 15:14     ` Sebastian Reichel
  2020-11-17 16:54       ` Daniel Vetter
  0 siblings, 1 reply; 21+ messages in thread
From: Sebastian Reichel @ 2020-11-17 15:14 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: airlied, sam, dri-devel, Thomas Zimmermann, christian.koenig


Hi,

On Mon, Nov 16, 2020 at 09:52:16PM +0100, Daniel Vetter wrote:
> On Mon, Nov 16, 2020 at 09:04:35PM +0100, Thomas Zimmermann wrote:
> > If the damage handling fails, restore the damage area. The next invocation
> > of the damage worker will then perform the update.
> > 
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> >  drivers/gpu/drm/drm_fb_helper.c | 23 ++++++++++++++++++++---
> >  1 file changed, 20 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 2e1a335bafd2..13b65dad2ca8 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -431,11 +431,28 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
> >  	if (helper->buffer) {
> >  		ret = drm_fb_helper_damage_blit(helper, &clip_copy);
> >  		if (ret)
> > -			return;
> > +			goto err;
> >  	}
> >  
> > -	if (helper->fb->funcs->dirty)
> > -		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> > +	if (helper->fb->funcs->dirty) {
> > +		ret = helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> > +		if (ret)
> > +			goto err;
> 
> I think this is unexpected enough that we should put some drm error
> printing here I think.

Note that the dirty framebuffer routines are used by the framebuffer
console. Printing warnings means another line in the framebuffer console
and thus another call to the dirty routines. Assuming the problem does
not get solved magically, the system will be busy printing warnings
forever. I have been through that fun while working on the OMAP DSI
command mode patches, and I suggest doing only ratelimited logging in
the code related to damage tracking.
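
Something along these lines would avoid that feedback loop (sketch only;
pick whatever print helper you prefer):

	if (ret)
		pr_err_ratelimited("drm_fb_helper: damage flush failed (ret=%d)\n",
				   ret);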

-- Sebastian

* Re: [PATCH 01/10] drm/fb-helper: Call dirty helper after writing to fbdev
  2020-11-16 20:04   ` Thomas Zimmermann
@ 2020-11-17 16:22     ` Ville Syrjälä
  -1 siblings, 0 replies; 21+ messages in thread
From: Ville Syrjälä @ 2020-11-17 16:22 UTC (permalink / raw)
  To: Thomas Zimmermann
  Cc: airlied, Daniel Vetter, maarten.lankhorst, mripard,
	virtualization, dri-devel, daniel, sam, christian.koenig

On Mon, Nov 16, 2020 at 09:04:28PM +0100, Thomas Zimmermann wrote:
> If fbdev uses a shadow framebuffer, call the damage handler. Otherwise
> the update might not make it to the screen.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Fixes: 222ec45f4c69 ("drm/fb_helper: Support framebuffers in I/O memory")
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Sam Ravnborg <sam@ravnborg.org>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: dri-devel@lists.freedesktop.org
> Cc: virtualization@lists.linux-foundation.org
> ---
>  drivers/gpu/drm/drm_fb_helper.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 25edf670867c..ee1a19e22df2 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -2189,6 +2189,9 @@ static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
>  	if (ret > 0)
>  		*ppos += ret;
>  
> +	if (ret > 0)
> +		drm_fb_helper_dirty(info, 0, 0, info->var.xres, info->var.yres);

Should that rather be 0->{x,y}res_virtual or {x,y}offset->{x,y}res?

> +
>  	return ret ? ret : err;
>  }
>  
> -- 
> 2.29.2
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Ville Syrjälä
Intel

* Re: [PATCH 08/10] drm/fb-helper: Restore damage area upon errors
  2020-11-17 15:14     ` Sebastian Reichel
@ 2020-11-17 16:54       ` Daniel Vetter
  0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vetter @ 2020-11-17 16:54 UTC (permalink / raw)
  To: Sebastian Reichel
  Cc: airlied, dri-devel, Thomas Zimmermann, sam, christian.koenig

On Tue, Nov 17, 2020 at 04:14:46PM +0100, Sebastian Reichel wrote:
> Hi,
> 
> On Mon, Nov 16, 2020 at 09:52:16PM +0100, Daniel Vetter wrote:
> > On Mon, Nov 16, 2020 at 09:04:35PM +0100, Thomas Zimmermann wrote:
> > > If the damage handling fails, restore the damage area. The next invocation
> > > of the damage worker will then perform the update.
> > > 
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > ---
> > >  drivers/gpu/drm/drm_fb_helper.c | 23 ++++++++++++++++++++---
> > >  1 file changed, 20 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > > index 2e1a335bafd2..13b65dad2ca8 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -431,11 +431,28 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
> > >  	if (helper->buffer) {
> > >  		ret = drm_fb_helper_damage_blit(helper, &clip_copy);
> > >  		if (ret)
> > > -			return;
> > > +			goto err;
> > >  	}
> > >  
> > > -	if (helper->fb->funcs->dirty)
> > > -		helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> > > +	if (helper->fb->funcs->dirty) {
> > > +		ret = helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1);
> > > +		if (ret)
> > > +			goto err;
> > 
> > I think this is unexpected enough that we should put some drm error
> > printing here I think.
> 
> Note that the dirty framebuffer routines are used by the
> framebuffer console. Printing a warning adds another line to
> the framebuffer console and thus triggers another call to the
> dirty routines. Unless the problem magically resolves itself,
> the system will be busy printing warnings forever. I went
> through that fun while working on the OMAP DSI command-mode
> patches, and I suggest doing only ratelimited logging in the
> code related to damage tracking.

Excellent point. Maybe _once is even good enough, perhaps with a dump of
the entire atomic state included for debugging (in case it's transient or
something like that). Past that there's nothing we can do anyway, and most
likely the extra output just hurts.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 01/10] drm/fb-helper: Call dirty helper after writing to fbdev
  2020-11-17 16:22     ` Ville Syrjälä
@ 2020-11-18  7:56       ` Thomas Zimmermann
  -1 siblings, 0 replies; 21+ messages in thread
From: Thomas Zimmermann @ 2020-11-18  7:56 UTC (permalink / raw)
  To: Ville Syrjälä
  Cc: airlied, Daniel Vetter, maarten.lankhorst, mripard,
	virtualization, dri-devel, daniel, sam, christian.koenig

Hi

On 17.11.20 at 17:22, Ville Syrjälä wrote:
> On Mon, Nov 16, 2020 at 09:04:28PM +0100, Thomas Zimmermann wrote:
>> If fbdev uses a shadow framebuffer, call the damage handler. Otherwise
>> the update might not make it to the screen.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> Fixes: 222ec45f4c69 ("drm/fb_helper: Support framebuffers in I/O memory")
>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Cc: Sam Ravnborg <sam@ravnborg.org>
>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>> Cc: Maxime Ripard <mripard@kernel.org>
>> Cc: David Airlie <airlied@linux.ie>
>> Cc: Daniel Vetter <daniel@ffwll.ch>
>> Cc: Gerd Hoffmann <kraxel@redhat.com>
>> Cc: dri-devel@lists.freedesktop.org
>> Cc: virtualization@lists.linux-foundation.org
>> ---
>>  drivers/gpu/drm/drm_fb_helper.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>> index 25edf670867c..ee1a19e22df2 100644
>> --- a/drivers/gpu/drm/drm_fb_helper.c
>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>> @@ -2189,6 +2189,9 @@ static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
>>  	if (ret > 0)
>>  		*ppos += ret;
>>  
>> +	if (ret > 0)
>> +		drm_fb_helper_dirty(info, 0, 0, info->var.xres, info->var.yres);
> 
> Should that rather be 0->{x,y}res_virtual or {x,y}offset->{x,y}res?

The same code snippet is used in drm_fb_helper_sys_write().

However, these are framebuffer-relative coordinates, so *res_virtual
would be more correct. I'll update the patch accordingly. Thanks for
pointing this out.
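
As an illustration only (not necessarily the exact form the updated
patch will take), the revised call could look roughly like this:

	/*
	 * Sketch: mark the whole virtual framebuffer dirty, since
	 * drm_fb_helper_dirty() takes framebuffer-relative coordinates.
	 */
	if (ret > 0)
		drm_fb_helper_dirty(info, 0, 0, info->var.xres_virtual,
				    info->var.yres_virtual);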

Best regards
Thomas

> 
>> +
>>  	return ret ? ret : err;
>>  }
>>  
>> -- 
>> 2.29.2
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2020-11-18  7:56 UTC | newest]

Thread overview: 21+ messages
2020-11-16 20:04 [PATCH 00/10] drm/fb-helper: Various fixes and cleanups Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 01/10] drm/fb-helper: Call dirty helper after writing to fbdev Thomas Zimmermann
2020-11-16 20:04   ` Thomas Zimmermann
2020-11-17 16:22   ` Ville Syrjälä
2020-11-17 16:22     ` Ville Syrjälä
2020-11-18  7:56     ` Thomas Zimmermann
2020-11-18  7:56       ` Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 02/10] drm/fb-helper: Unmap client buffer during shutdown Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 03/10] drm/client: Depend on GEM object kmap ref-counting Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 04/10] drm/fb-helper: Rename dirty worker to damage worker Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 05/10] drm/fb-helper: Return early in dirty worker Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 06/10] drm/fb-helper: Separate shadow-buffer flushing and calling dirty callback Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 07/10] drm/fb-helper: Move damage blit code and its setup into separate routine Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 08/10] drm/fb-helper: Restore damage area upon errors Thomas Zimmermann
2020-11-16 20:52   ` Daniel Vetter
2020-11-17 15:14     ` Sebastian Reichel
2020-11-17 16:54       ` Daniel Vetter
2020-11-16 20:04 ` [PATCH 09/10] drm/fb-helper: Copy dma-buf map before flushing shadow fb Thomas Zimmermann
2020-11-16 20:04 ` [PATCH 10/10] drm/fb-helper: Acquire modeset lock around shadow-buffer flushing Thomas Zimmermann
2020-11-16 20:48   ` Daniel Vetter
2020-11-17  8:05     ` Christian König
