dri-devel.lists.freedesktop.org archive mirror
* [PATCH v2 0/6] drm/gud: Use the shadow plane helper
@ 2022-11-30 19:26 Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 1/6] drm/gud: Fix UBSAN warning Noralf Trønnes via B4 Submission Endpoint
                   ` (7 more replies)
  0 siblings, 8 replies; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

Hi,

I have started to look at igt for testing and want to use CRC tests. To
implement support for this I need to move away from the simple kms
helper.

When looking around for examples I came across Thomas' nice shadow
helper and thought, yes this is perfect for drm/gud. So I'll switch to
that before I move away from the simple kms helper.

The async framebuffer flushing code path now uses a shadow buffer and
doesn't touch the framebuffer when it shouldn't. I have also taken the
opportunity to inline the synchronous flush code path and make this the
default flushing strategy.

Noralf.

Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>

---
Changes in v2:
- Drop patch (Thomas):
  drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
- Use src as variable name for iosys_map (Thomas)
- Prepare imported buffer for CPU access in the driver (Thomas)
- New patch: make sync flushing the default (Thomas)
- Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org

---
Noralf Trønnes (6):
      drm/gud: Fix UBSAN warning
      drm/gud: Don't retry a failed framebuffer flush
      drm/gud: Split up gud_flush_work()
      drm/gud: Prepare buffer for CPU access in gud_flush_work()
      drm/gud: Use the shadow plane helper
      drm/gud: Enable synchronous flushing by default

 drivers/gpu/drm/gud/gud_drv.c      |   1 +
 drivers/gpu/drm/gud/gud_internal.h |   1 +
 drivers/gpu/drm/gud/gud_pipe.c     | 222 ++++++++++++++++++-------------------
 3 files changed, 112 insertions(+), 112 deletions(-)
---
base-commit: 7257702951305b1f0259c3468c39fc59d1ad4d8b
change-id: 20221122-gud-shadow-plane-ae37a95d4d8d

Best regards,
-- 
Noralf Trønnes <noralf@tronnes.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2 1/6] drm/gud: Fix UBSAN warning
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
@ 2022-11-30 19:26 ` Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 2/6] drm/gud: Don't retry a failed framebuffer flush Noralf Trønnes via B4 Submission Endpoint
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

From: Noralf Trønnes <noralf@tronnes.org>

UBSAN complains about invalid value for bool:

[  101.165172] [drm] Initialized gud 1.0.0 20200422 for 2-3.2:1.0 on minor 1
[  101.213360] gud 2-3.2:1.0: [drm] fb1: guddrmfb frame buffer device
[  101.213426] usbcore: registered new interface driver gud
[  101.989431] ================================================================================
[  101.989441] UBSAN: invalid-load in linux/include/linux/iosys-map.h:253:9
[  101.989447] load of value 121 is not a valid value for type '_Bool'
[  101.989451] CPU: 1 PID: 455 Comm: kworker/1:6 Not tainted 5.18.0-rc5-gud-5.18-rc5 #3
[  101.989456] Hardware name: Hewlett-Packard HP EliteBook 820 G1/1991, BIOS L71 Ver. 01.44 04/12/2018
[  101.989459] Workqueue: events_long gud_flush_work [gud]
[  101.989471] Call Trace:
[  101.989474]  <TASK>
[  101.989479]  dump_stack_lvl+0x49/0x5f
[  101.989488]  dump_stack+0x10/0x12
[  101.989493]  ubsan_epilogue+0x9/0x3b
[  101.989498]  __ubsan_handle_load_invalid_value.cold+0x44/0x49
[  101.989504]  dma_buf_vmap.cold+0x38/0x3d
[  101.989511]  ? find_busiest_group+0x48/0x300
[  101.989520]  drm_gem_shmem_vmap+0x76/0x1b0 [drm_shmem_helper]
[  101.989528]  drm_gem_shmem_object_vmap+0x9/0xb [drm_shmem_helper]
[  101.989535]  drm_gem_vmap+0x26/0x60 [drm]
[  101.989594]  drm_gem_fb_vmap+0x47/0x150 [drm_kms_helper]
[  101.989630]  gud_prep_flush+0xc1/0x710 [gud]
[  101.989639]  ? _raw_spin_lock+0x17/0x40
[  101.989648]  gud_flush_work+0x1e0/0x430 [gud]
[  101.989653]  ? __switch_to+0x11d/0x470
[  101.989664]  process_one_work+0x21f/0x3f0
[  101.989673]  worker_thread+0x200/0x3e0
[  101.989679]  ? rescuer_thread+0x390/0x390
[  101.989684]  kthread+0xfd/0x130
[  101.989690]  ? kthread_complete_and_exit+0x20/0x20
[  101.989696]  ret_from_fork+0x22/0x30
[  101.989706]  </TASK>
[  101.989708] ================================================================================

The source of this warning is iosys_map_clear(), called from
dma_buf_vmap(). It conditionally sets values based on map->is_iomem. The
iosys_map variables are allocated uninitialized on the stack, so
->is_iomem can hold all kinds of values and not only 0/1.

Fix this by zeroing the iosys_map variables.
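
For reference, a condensed sketch of iosys_map_clear() (paraphrased from
include/linux/iosys-map.h) shows the conditional load that UBSAN trips
over; once the maps are zero-initialized, ->is_iomem always holds a valid
value before the first use:

	static inline void iosys_map_clear(struct iosys_map *map)
	{
		if (map->is_iomem) {	/* loads a possibly uninitialized bool */
			map->vaddr_iomem = NULL;
			map->is_iomem = false;
		} else {
			map->vaddr = NULL;
		}
	}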

Fixes: 40e1a70b4aed ("drm: Add GUD USB Display driver")
Cc: <stable@vger.kernel.org> # v5.18+
Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
---
 drivers/gpu/drm/gud/gud_pipe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
index 7c6dc2bcd14a..61f4abaf1811 100644
--- a/drivers/gpu/drm/gud/gud_pipe.c
+++ b/drivers/gpu/drm/gud/gud_pipe.c
@@ -157,8 +157,8 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
 {
 	struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach;
 	u8 compression = gdrm->compression;
-	struct iosys_map map[DRM_FORMAT_MAX_PLANES];
-	struct iosys_map map_data[DRM_FORMAT_MAX_PLANES];
+	struct iosys_map map[DRM_FORMAT_MAX_PLANES] = { };
+	struct iosys_map map_data[DRM_FORMAT_MAX_PLANES] = { };
 	struct iosys_map dst;
 	void *vaddr, *buf;
 	size_t pitch, len;

-- 
2.34.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 2/6] drm/gud: Don't retry a failed framebuffer flush
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 1/6] drm/gud: Fix UBSAN warning Noralf Trønnes via B4 Submission Endpoint
@ 2022-11-30 19:26 ` Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 3/6] drm/gud: Split up gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

From: Noralf Trønnes <noralf@tronnes.org>

If a framebuffer flush fails, the driver will do one retry by requeueing
the worker. Currently the worker is used even for synchronous flushing,
but a later patch will inline it, so this needs to change. Thinking about
how to solve this, I came to the conclusion that this retry mechanism
addressed a problem that existed only in the mind of the developer (me),
not a real one.

So let's remove this for now and revisit later should it become necessary.
gud_add_damage() now has only one caller, so it can be inlined.

Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
---
 drivers/gpu/drm/gud/gud_pipe.c | 48 +++++++-----------------------------------
 1 file changed, 8 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
index 61f4abaf1811..ff1358815af5 100644
--- a/drivers/gpu/drm/gud/gud_pipe.c
+++ b/drivers/gpu/drm/gud/gud_pipe.c
@@ -333,37 +333,6 @@ void gud_clear_damage(struct gud_device *gdrm)
 	gdrm->damage.y2 = 0;
 }
 
-static void gud_add_damage(struct gud_device *gdrm, struct drm_rect *damage)
-{
-	gdrm->damage.x1 = min(gdrm->damage.x1, damage->x1);
-	gdrm->damage.y1 = min(gdrm->damage.y1, damage->y1);
-	gdrm->damage.x2 = max(gdrm->damage.x2, damage->x2);
-	gdrm->damage.y2 = max(gdrm->damage.y2, damage->y2);
-}
-
-static void gud_retry_failed_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
-				   struct drm_rect *damage)
-{
-	/*
-	 * pipe_update waits for the worker when the display mode is going to change.
-	 * This ensures that the width and height is still the same making it safe to
-	 * add back the damage.
-	 */
-
-	mutex_lock(&gdrm->damage_lock);
-	if (!gdrm->fb) {
-		drm_framebuffer_get(fb);
-		gdrm->fb = fb;
-	}
-	gud_add_damage(gdrm, damage);
-	mutex_unlock(&gdrm->damage_lock);
-
-	/* Retry only once to avoid a possible storm in case of continues errors. */
-	if (!gdrm->prev_flush_failed)
-		queue_work(system_long_wq, &gdrm->work);
-	gdrm->prev_flush_failed = true;
-}
-
 void gud_flush_work(struct work_struct *work)
 {
 	struct gud_device *gdrm = container_of(work, struct gud_device, work);
@@ -407,14 +376,10 @@ void gud_flush_work(struct work_struct *work)
 		ret = gud_flush_rect(gdrm, fb, format, &rect);
 		if (ret) {
 			if (ret != -ENODEV && ret != -ECONNRESET &&
-			    ret != -ESHUTDOWN && ret != -EPROTO) {
-				bool prev_flush_failed = gdrm->prev_flush_failed;
-
-				gud_retry_failed_flush(gdrm, fb, &damage);
-				if (!prev_flush_failed)
-					dev_err_ratelimited(fb->dev->dev,
-							    "Failed to flush framebuffer: error=%d\n", ret);
-			}
+			    ret != -ESHUTDOWN && ret != -EPROTO)
+				dev_err_ratelimited(fb->dev->dev,
+						    "Failed to flush framebuffer: error=%d\n", ret);
+			gdrm->prev_flush_failed = true;
 			break;
 		}
 
@@ -439,7 +404,10 @@ static void gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer
 		gdrm->fb = fb;
 	}
 
-	gud_add_damage(gdrm, damage);
+	gdrm->damage.x1 = min(gdrm->damage.x1, damage->x1);
+	gdrm->damage.y1 = min(gdrm->damage.y1, damage->y1);
+	gdrm->damage.x2 = max(gdrm->damage.x2, damage->x2);
+	gdrm->damage.y2 = max(gdrm->damage.y2, damage->y2);
 
 	mutex_unlock(&gdrm->damage_lock);
 

-- 
2.34.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 3/6] drm/gud: Split up gud_flush_work()
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 1/6] drm/gud: Fix UBSAN warning Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 2/6] drm/gud: Don't retry a failed framebuffer flush Noralf Trønnes via B4 Submission Endpoint
@ 2022-11-30 19:26 ` Noralf Trønnes via B4 Submission Endpoint
  2022-11-30 19:26 ` [PATCH v2 4/6] drm/gud: Prepare buffer for CPU access in gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

From: Noralf Trønnes <noralf@tronnes.org>

In preparation for inlining synchronous flushing, split out the part of
gud_flush_work() that can be shared by the sync and async code paths.

Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
---
 drivers/gpu/drm/gud/gud_pipe.c | 72 +++++++++++++++++++++++-------------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
index ff1358815af5..d2af9947494f 100644
--- a/drivers/gpu/drm/gud/gud_pipe.c
+++ b/drivers/gpu/drm/gud/gud_pipe.c
@@ -333,15 +333,49 @@ void gud_clear_damage(struct gud_device *gdrm)
 	gdrm->damage.y2 = 0;
 }
 
+static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
+			     struct drm_rect *damage)
+{
+	const struct drm_format_info *format;
+	unsigned int i, lines;
+	size_t pitch;
+	int ret;
+
+	format = fb->format;
+	if (format->format == DRM_FORMAT_XRGB8888 && gdrm->xrgb8888_emulation_format)
+		format = gdrm->xrgb8888_emulation_format;
+
+	/* Split update if it's too big */
+	pitch = drm_format_info_min_pitch(format, 0, drm_rect_width(damage));
+	lines = drm_rect_height(damage);
+
+	if (gdrm->bulk_len < lines * pitch)
+		lines = gdrm->bulk_len / pitch;
+
+	for (i = 0; i < DIV_ROUND_UP(drm_rect_height(damage), lines); i++) {
+		struct drm_rect rect = *damage;
+
+		rect.y1 += i * lines;
+		rect.y2 = min_t(u32, rect.y1 + lines, damage->y2);
+
+		ret = gud_flush_rect(gdrm, fb, format, &rect);
+		if (ret) {
+			if (ret != -ENODEV && ret != -ECONNRESET &&
+			    ret != -ESHUTDOWN && ret != -EPROTO)
+				dev_err_ratelimited(fb->dev->dev,
+						    "Failed to flush framebuffer: error=%d\n", ret);
+			gdrm->prev_flush_failed = true;
+			break;
+		}
+	}
+}
+
 void gud_flush_work(struct work_struct *work)
 {
 	struct gud_device *gdrm = container_of(work, struct gud_device, work);
-	const struct drm_format_info *format;
 	struct drm_framebuffer *fb;
 	struct drm_rect damage;
-	unsigned int i, lines;
-	int idx, ret = 0;
-	size_t pitch;
+	int idx;
 
 	if (!drm_dev_enter(&gdrm->drm, &idx))
 		return;
@@ -356,35 +390,7 @@ void gud_flush_work(struct work_struct *work)
 	if (!fb)
 		goto out;
 
-	format = fb->format;
-	if (format->format == DRM_FORMAT_XRGB8888 && gdrm->xrgb8888_emulation_format)
-		format = gdrm->xrgb8888_emulation_format;
-
-	/* Split update if it's too big */
-	pitch = drm_format_info_min_pitch(format, 0, drm_rect_width(&damage));
-	lines = drm_rect_height(&damage);
-
-	if (gdrm->bulk_len < lines * pitch)
-		lines = gdrm->bulk_len / pitch;
-
-	for (i = 0; i < DIV_ROUND_UP(drm_rect_height(&damage), lines); i++) {
-		struct drm_rect rect = damage;
-
-		rect.y1 += i * lines;
-		rect.y2 = min_t(u32, rect.y1 + lines, damage.y2);
-
-		ret = gud_flush_rect(gdrm, fb, format, &rect);
-		if (ret) {
-			if (ret != -ENODEV && ret != -ECONNRESET &&
-			    ret != -ESHUTDOWN && ret != -EPROTO)
-				dev_err_ratelimited(fb->dev->dev,
-						    "Failed to flush framebuffer: error=%d\n", ret);
-			gdrm->prev_flush_failed = true;
-			break;
-		}
-
-		gdrm->prev_flush_failed = false;
-	}
+	gud_flush_damage(gdrm, fb, &damage);
 
 	drm_framebuffer_put(fb);
 out:

-- 
2.34.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 4/6] drm/gud: Prepare buffer for CPU access in gud_flush_work()
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
                   ` (2 preceding siblings ...)
  2022-11-30 19:26 ` [PATCH v2 3/6] drm/gud: Split up gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
@ 2022-11-30 19:26 ` Noralf Trønnes via B4 Submission Endpoint
  2022-12-01  8:51   ` Thomas Zimmermann
  2022-11-30 19:26 ` [PATCH v2 5/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

From: Noralf Trønnes <noralf@tronnes.org>

In preparation for moving to the shadow plane helper, prepare the
framebuffer for CPU access as early as possible.

v2:
- Use src as variable name for iosys_map (Thomas)

Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
---
 drivers/gpu/drm/gud/gud_pipe.c | 67 +++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
index d2af9947494f..98fe8efda4a9 100644
--- a/drivers/gpu/drm/gud/gud_pipe.c
+++ b/drivers/gpu/drm/gud/gud_pipe.c
@@ -15,6 +15,7 @@
 #include <drm/drm_fourcc.h>
 #include <drm/drm_framebuffer.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_gem_atomic_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_print.h>
 #include <drm/drm_rect.h>
@@ -152,32 +153,21 @@ static size_t gud_xrgb8888_to_color(u8 *dst, const struct drm_format_info *forma
 }
 
 static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
+			  const struct iosys_map *src, bool cached_reads,
 			  const struct drm_format_info *format, struct drm_rect *rect,
 			  struct gud_set_buffer_req *req)
 {
-	struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach;
 	u8 compression = gdrm->compression;
-	struct iosys_map map[DRM_FORMAT_MAX_PLANES] = { };
-	struct iosys_map map_data[DRM_FORMAT_MAX_PLANES] = { };
 	struct iosys_map dst;
 	void *vaddr, *buf;
 	size_t pitch, len;
-	int ret = 0;
 
 	pitch = drm_format_info_min_pitch(format, 0, drm_rect_width(rect));
 	len = pitch * drm_rect_height(rect);
 	if (len > gdrm->bulk_len)
 		return -E2BIG;
 
-	ret = drm_gem_fb_vmap(fb, map, map_data);
-	if (ret)
-		return ret;
-
-	vaddr = map_data[0].vaddr;
-
-	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
-	if (ret)
-		goto vunmap;
+	vaddr = src[0].vaddr;
 retry:
 	if (compression)
 		buf = gdrm->compress_buf;
@@ -192,29 +182,27 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
 	if (format != fb->format) {
 		if (format->format == GUD_DRM_FORMAT_R1) {
 			len = gud_xrgb8888_to_r124(buf, format, vaddr, fb, rect);
-			if (!len) {
-				ret = -ENOMEM;
-				goto end_cpu_access;
-			}
+			if (!len)
+				return -ENOMEM;
 		} else if (format->format == DRM_FORMAT_R8) {
-			drm_fb_xrgb8888_to_gray8(&dst, NULL, map_data, fb, rect);
+			drm_fb_xrgb8888_to_gray8(&dst, NULL, src, fb, rect);
 		} else if (format->format == DRM_FORMAT_RGB332) {
-			drm_fb_xrgb8888_to_rgb332(&dst, NULL, map_data, fb, rect);
+			drm_fb_xrgb8888_to_rgb332(&dst, NULL, src, fb, rect);
 		} else if (format->format == DRM_FORMAT_RGB565) {
-			drm_fb_xrgb8888_to_rgb565(&dst, NULL, map_data, fb, rect,
+			drm_fb_xrgb8888_to_rgb565(&dst, NULL, src, fb, rect,
 						  gud_is_big_endian());
 		} else if (format->format == DRM_FORMAT_RGB888) {
-			drm_fb_xrgb8888_to_rgb888(&dst, NULL, map_data, fb, rect);
+			drm_fb_xrgb8888_to_rgb888(&dst, NULL, src, fb, rect);
 		} else {
 			len = gud_xrgb8888_to_color(buf, format, vaddr, fb, rect);
 		}
 	} else if (gud_is_big_endian() && format->cpp[0] > 1) {
-		drm_fb_swab(&dst, NULL, map_data, fb, rect, !import_attach);
-	} else if (compression && !import_attach && pitch == fb->pitches[0]) {
+		drm_fb_swab(&dst, NULL, src, fb, rect, cached_reads);
+	} else if (compression && cached_reads && pitch == fb->pitches[0]) {
 		/* can compress directly from the framebuffer */
 		buf = vaddr + rect->y1 * pitch;
 	} else {
-		drm_fb_memcpy(&dst, NULL, map_data, fb, rect);
+		drm_fb_memcpy(&dst, NULL, src, fb, rect);
 	}
 
 	memset(req, 0, sizeof(*req));
@@ -237,12 +225,7 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
 		req->compressed_length = cpu_to_le32(complen);
 	}
 
-end_cpu_access:
-	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
-vunmap:
-	drm_gem_fb_vunmap(fb, map);
-
-	return ret;
+	return 0;
 }
 
 struct gud_usb_bulk_context {
@@ -285,6 +268,7 @@ static int gud_usb_bulk(struct gud_device *gdrm, size_t len)
 }
 
 static int gud_flush_rect(struct gud_device *gdrm, struct drm_framebuffer *fb,
+			  const struct iosys_map *src, bool cached_reads,
 			  const struct drm_format_info *format, struct drm_rect *rect)
 {
 	struct gud_set_buffer_req req;
@@ -293,7 +277,7 @@ static int gud_flush_rect(struct gud_device *gdrm, struct drm_framebuffer *fb,
 
 	drm_dbg(&gdrm->drm, "Flushing [FB:%d] " DRM_RECT_FMT "\n", fb->base.id, DRM_RECT_ARG(rect));
 
-	ret = gud_prep_flush(gdrm, fb, format, rect, &req);
+	ret = gud_prep_flush(gdrm, fb, src, cached_reads, format, rect, &req);
 	if (ret)
 		return ret;
 
@@ -334,6 +318,7 @@ void gud_clear_damage(struct gud_device *gdrm)
 }
 
 static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
+			     const struct iosys_map *src, bool cached_reads,
 			     struct drm_rect *damage)
 {
 	const struct drm_format_info *format;
@@ -358,7 +343,7 @@ static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb
 		rect.y1 += i * lines;
 		rect.y2 = min_t(u32, rect.y1 + lines, damage->y2);
 
-		ret = gud_flush_rect(gdrm, fb, format, &rect);
+		ret = gud_flush_rect(gdrm, fb, src, cached_reads, format, &rect);
 		if (ret) {
 			if (ret != -ENODEV && ret != -ECONNRESET &&
 			    ret != -ESHUTDOWN && ret != -EPROTO)
@@ -373,9 +358,10 @@ static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb
 void gud_flush_work(struct work_struct *work)
 {
 	struct gud_device *gdrm = container_of(work, struct gud_device, work);
+	struct iosys_map gem_map = { }, fb_map = { };
 	struct drm_framebuffer *fb;
 	struct drm_rect damage;
-	int idx;
+	int idx, ret;
 
 	if (!drm_dev_enter(&gdrm->drm, &idx))
 		return;
@@ -390,8 +376,21 @@ void gud_flush_work(struct work_struct *work)
 	if (!fb)
 		goto out;
 
-	gud_flush_damage(gdrm, fb, &damage);
+	ret = drm_gem_fb_vmap(fb, &gem_map, &fb_map);
+	if (ret)
+		goto fb_put;
 
+	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
+	if (ret)
+		goto vunmap;
+
+	/* Imported buffers are assumed to be WriteCombined with uncached reads */
+	gud_flush_damage(gdrm, fb, &fb_map, !fb->obj[0]->import_attach, &damage);
+
+	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
+vunmap:
+	drm_gem_fb_vunmap(fb, &gem_map);
+fb_put:
 	drm_framebuffer_put(fb);
 out:
 	drm_dev_exit(idx);

-- 
2.34.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 5/6] drm/gud: Use the shadow plane helper
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
                   ` (3 preceding siblings ...)
  2022-11-30 19:26 ` [PATCH v2 4/6] drm/gud: Prepare buffer for CPU access in gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
@ 2022-11-30 19:26 ` Noralf Trønnes via B4 Submission Endpoint
  2022-12-01  8:55   ` Thomas Zimmermann
  2022-11-30 19:26 ` [PATCH v2 6/6] drm/gud: Enable synchronous flushing by default Noralf Trønnes via B4 Submission Endpoint
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

From: Noralf Trønnes <noralf@tronnes.org>

Use the shadow plane helper to take care of mapping the framebuffer for
CPU access. The synchronous flushing is now done inline without the use of
a worker. The async path now uses a shadow buffer to hold framebuffer
changes and it doesn't read the framebuffer behind userspace's back
anymore.

v2:
- Use src as variable name for iosys_map (Thomas)
- Prepare imported buffer for CPU access in the driver (Thomas)

Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
---
 drivers/gpu/drm/gud/gud_drv.c      |  1 +
 drivers/gpu/drm/gud/gud_internal.h |  1 +
 drivers/gpu/drm/gud/gud_pipe.c     | 81 ++++++++++++++++++++++++++------------
 3 files changed, 57 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/gud/gud_drv.c b/drivers/gpu/drm/gud/gud_drv.c
index d57dab104358..5aac7cda0505 100644
--- a/drivers/gpu/drm/gud/gud_drv.c
+++ b/drivers/gpu/drm/gud/gud_drv.c
@@ -365,6 +365,7 @@ static void gud_debugfs_init(struct drm_minor *minor)
 static const struct drm_simple_display_pipe_funcs gud_pipe_funcs = {
 	.check      = gud_pipe_check,
 	.update	    = gud_pipe_update,
+	DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS
 };
 
 static const struct drm_mode_config_funcs gud_mode_config_funcs = {
diff --git a/drivers/gpu/drm/gud/gud_internal.h b/drivers/gpu/drm/gud/gud_internal.h
index e351a1f1420d..0d148a6f27aa 100644
--- a/drivers/gpu/drm/gud/gud_internal.h
+++ b/drivers/gpu/drm/gud/gud_internal.h
@@ -43,6 +43,7 @@ struct gud_device {
 	struct drm_framebuffer *fb;
 	struct drm_rect damage;
 	bool prev_flush_failed;
+	void *shadow_buf;
 };
 
 static inline struct gud_device *to_gud_device(struct drm_device *drm)
diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
index 98fe8efda4a9..92189474a7ed 100644
--- a/drivers/gpu/drm/gud/gud_pipe.c
+++ b/drivers/gpu/drm/gud/gud_pipe.c
@@ -358,10 +358,10 @@ static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb
 void gud_flush_work(struct work_struct *work)
 {
 	struct gud_device *gdrm = container_of(work, struct gud_device, work);
-	struct iosys_map gem_map = { }, fb_map = { };
+	struct iosys_map shadow_map;
 	struct drm_framebuffer *fb;
 	struct drm_rect damage;
-	int idx, ret;
+	int idx;
 
 	if (!drm_dev_enter(&gdrm->drm, &idx))
 		return;
@@ -369,6 +369,7 @@ void gud_flush_work(struct work_struct *work)
 	mutex_lock(&gdrm->damage_lock);
 	fb = gdrm->fb;
 	gdrm->fb = NULL;
+	iosys_map_set_vaddr(&shadow_map, gdrm->shadow_buf);
 	damage = gdrm->damage;
 	gud_clear_damage(gdrm);
 	mutex_unlock(&gdrm->damage_lock);
@@ -376,33 +377,33 @@ void gud_flush_work(struct work_struct *work)
 	if (!fb)
 		goto out;
 
-	ret = drm_gem_fb_vmap(fb, &gem_map, &fb_map);
-	if (ret)
-		goto fb_put;
+	gud_flush_damage(gdrm, fb, &shadow_map, true, &damage);
 
-	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
-	if (ret)
-		goto vunmap;
-
-	/* Imported buffers are assumed to be WriteCombined with uncached reads */
-	gud_flush_damage(gdrm, fb, &fb_map, !fb->obj[0]->import_attach, &damage);
-
-	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
-vunmap:
-	drm_gem_fb_vunmap(fb, &gem_map);
-fb_put:
 	drm_framebuffer_put(fb);
 out:
 	drm_dev_exit(idx);
 }
 
-static void gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
-				struct drm_rect *damage)
+static int gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
+			       const struct iosys_map *src, struct drm_rect *damage)
 {
 	struct drm_framebuffer *old_fb = NULL;
+	struct iosys_map shadow_map;
 
 	mutex_lock(&gdrm->damage_lock);
 
+	if (!gdrm->shadow_buf) {
+		gdrm->shadow_buf = vzalloc(fb->pitches[0] * fb->height);
+		if (!gdrm->shadow_buf) {
+			mutex_unlock(&gdrm->damage_lock);
+			return -ENOMEM;
+		}
+	}
+
+	iosys_map_set_vaddr(&shadow_map, gdrm->shadow_buf);
+	iosys_map_incr(&shadow_map, drm_fb_clip_offset(fb->pitches[0], fb->format, damage));
+	drm_fb_memcpy(&shadow_map, fb->pitches, src, fb, damage);
+
 	if (fb != gdrm->fb) {
 		old_fb = gdrm->fb;
 		drm_framebuffer_get(fb);
@@ -420,6 +421,26 @@ static void gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer
 
 	if (old_fb)
 		drm_framebuffer_put(old_fb);
+
+	return 0;
+}
+
+static void gud_fb_handle_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
+				 const struct iosys_map *src, struct drm_rect *damage)
+{
+	int ret;
+
+	if (gdrm->flags & GUD_DISPLAY_FLAG_FULL_UPDATE)
+		drm_rect_init(damage, 0, 0, fb->width, fb->height);
+
+	if (gud_async_flush) {
+		ret = gud_fb_queue_damage(gdrm, fb, src, damage);
+		if (ret != -ENOMEM)
+			return;
+	}
+
+	/* Imported buffers are assumed to be WriteCombined with uncached reads */
+	gud_flush_damage(gdrm, fb, src, !fb->obj[0]->import_attach, damage);
 }
 
 int gud_pipe_check(struct drm_simple_display_pipe *pipe,
@@ -544,10 +565,11 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
 	struct drm_device *drm = pipe->crtc.dev;
 	struct gud_device *gdrm = to_gud_device(drm);
 	struct drm_plane_state *state = pipe->plane.state;
+	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state);
 	struct drm_framebuffer *fb = state->fb;
 	struct drm_crtc *crtc = &pipe->crtc;
 	struct drm_rect damage;
-	int idx;
+	int ret, idx;
 
 	if (crtc->state->mode_changed || !crtc->state->enable) {
 		cancel_work_sync(&gdrm->work);
@@ -557,6 +579,8 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
 			gdrm->fb = NULL;
 		}
 		gud_clear_damage(gdrm);
+		vfree(gdrm->shadow_buf);
+		gdrm->shadow_buf = NULL;
 		mutex_unlock(&gdrm->damage_lock);
 	}
 
@@ -572,14 +596,19 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
 	if (crtc->state->active_changed)
 		gud_usb_set_u8(gdrm, GUD_REQ_SET_DISPLAY_ENABLE, crtc->state->active);
 
-	if (drm_atomic_helper_damage_merged(old_state, state, &damage)) {
-		if (gdrm->flags & GUD_DISPLAY_FLAG_FULL_UPDATE)
-			drm_rect_init(&damage, 0, 0, fb->width, fb->height);
-		gud_fb_queue_damage(gdrm, fb, &damage);
-		if (!gud_async_flush)
-			flush_work(&gdrm->work);
-	}
+	if (!fb)
+		goto ctrl_disable;
 
+	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
+	if (ret)
+		goto ctrl_disable;
+
+	if (drm_atomic_helper_damage_merged(old_state, state, &damage))
+		gud_fb_handle_damage(gdrm, fb, &shadow_plane_state->data[0], &damage);
+
+	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
+
+ctrl_disable:
 	if (!crtc->state->enable)
 		gud_usb_set_u8(gdrm, GUD_REQ_SET_CONTROLLER_ENABLE, 0);
 

-- 
2.34.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 6/6] drm/gud: Enable synchronous flushing by default
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
                   ` (4 preceding siblings ...)
  2022-11-30 19:26 ` [PATCH v2 5/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
@ 2022-11-30 19:26 ` Noralf Trønnes via B4 Submission Endpoint
  2022-12-01  8:57   ` Thomas Zimmermann
  2022-12-01  5:55 ` [PATCH v2 0/6] drm/gud: Use the shadow plane helper Greg KH
  2022-12-06 15:57 ` Noralf Trønnes
  7 siblings, 1 reply; 24+ messages in thread
From: Noralf Trønnes via B4 Submission Endpoint @ 2022-11-30 19:26 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel,
	Maxime Ripard, stable, Noralf Trønnes

From: Noralf Trønnes <noralf@tronnes.org>

gud has a module parameter that controls whether framebuffer flushing
happens synchronously during the commit or asynchronously in a worker.

GNOME before version 3.38 handled all displays in the same rendering loop.
This led to gud slowing down the refresh rate for a faster monitor. This
has now been fixed, so let's change the default.

The plan is to remove async flushing in the future. The code is now
structured in a way that makes it easy to do this.
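
As a usage note (a sketch, assuming gud is built as a module as usual):
the old asynchronous behaviour can still be selected at load time with
e.g. "modprobe gud async_flush=1", or toggled at runtime by writing to
/sys/module/gud/parameters/async_flush, since the parameter is registered
with mode 0644.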

Link: https://blogs.gnome.org/shell-dev/2020/07/02/splitting-up-the-frame-clock/
Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
---
 drivers/gpu/drm/gud/gud_pipe.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
index 92189474a7ed..62c43d3632d4 100644
--- a/drivers/gpu/drm/gud/gud_pipe.c
+++ b/drivers/gpu/drm/gud/gud_pipe.c
@@ -25,17 +25,13 @@
 #include "gud_internal.h"
 
 /*
- * Some userspace rendering loops runs all displays in the same loop.
+ * Some userspace rendering loops run all displays in the same loop.
  * This means that a fast display will have to wait for a slow one.
- * For this reason gud does flushing asynchronous by default.
- * The down side is that in e.g. a single display setup userspace thinks
- * the display is insanely fast since the driver reports back immediately
- * that the flush/pageflip is done. This wastes CPU and power.
- * Such users might want to set this module parameter to false.
+ * Such users might want to enable this module parameter.
  */
-static bool gud_async_flush = true;
+static bool gud_async_flush;
 module_param_named(async_flush, gud_async_flush, bool, 0644);
-MODULE_PARM_DESC(async_flush, "Enable asynchronous flushing [default=true]");
+MODULE_PARM_DESC(async_flush, "Enable asynchronous flushing [default=0]");
 
 /*
  * FIXME: The driver is probably broken on Big Endian machines.

-- 
2.34.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
                   ` (5 preceding siblings ...)
  2022-11-30 19:26 ` [PATCH v2 6/6] drm/gud: Enable synchronous flushing by default Noralf Trønnes via B4 Submission Endpoint
@ 2022-12-01  5:55 ` Greg KH
  2022-12-01 10:00   ` Noralf Trønnes
  2022-12-06 15:57 ` Noralf Trønnes
  7 siblings, 1 reply; 24+ messages in thread
From: Greg KH @ 2022-12-01  5:55 UTC (permalink / raw)
  To: noralf; +Cc: dri-devel, Javier Martinez Canillas, Thomas Zimmermann, stable

On Wed, Nov 30, 2022 at 08:26:48PM +0100, Noralf Trønnes via B4 Submission Endpoint wrote:
> Hi,
> 
> I have started to look at igt for testing and want to use CRC tests. To
> implement support for this I need to move away from the simple kms
> helper.
> 
> When looking around for examples I came across Thomas' nice shadow
> helper and thought, yes this is perfect for drm/gud. So I'll switch to
> that before I move away from the simple kms helper.
> 
> The async framebuffer flushing code path now uses a shadow buffer and
> doesn't touch the framebuffer when it shouldn't. I have also taken the
> opportunity to inline the synchronous flush code path and make this the
> default flushing strategy.
> 
> Noralf.
> 
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
> 
> ---
> Changes in v2:
> - Drop patch (Thomas):
>   drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
> - Use src as variable name for iosys_map (Thomas)
> - Prepare imported buffer for CPU access in the driver (Thomas)
> - New patch: make sync flushing the default (Thomas)
> - Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 4/6] drm/gud: Prepare buffer for CPU access in gud_flush_work()
  2022-11-30 19:26 ` [PATCH v2 4/6] drm/gud: Prepare buffer for CPU access in gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
@ 2022-12-01  8:51   ` Thomas Zimmermann
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Zimmermann @ 2022-12-01  8:51 UTC (permalink / raw)
  To: noralf, Javier Martinez Canillas, dri-devel, Maxime Ripard, stable


[-- Attachment #1.1: Type: text/plain, Size: 7357 bytes --]



Am 30.11.22 um 20:26 schrieb Noralf Trønnes via B4 Submission Endpoint:
> From: Noralf Trønnes <noralf@tronnes.org>
> 
> In preparation for moving to the shadow plane helper prepare the
> framebuffer for CPU access as early as possible.
> 
> v2:
> - Use src as variable name for iosys_map (Thomas)
> 
> Reviewed-by: Javier Martinez Canillas <javierm@redhat.com>
> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

> ---
>   drivers/gpu/drm/gud/gud_pipe.c | 67 +++++++++++++++++++++---------------------
>   1 file changed, 33 insertions(+), 34 deletions(-)
> 
> diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
> index d2af9947494f..98fe8efda4a9 100644
> --- a/drivers/gpu/drm/gud/gud_pipe.c
> +++ b/drivers/gpu/drm/gud/gud_pipe.c
> @@ -15,6 +15,7 @@
>   #include <drm/drm_fourcc.h>
>   #include <drm/drm_framebuffer.h>
>   #include <drm/drm_gem.h>
> +#include <drm/drm_gem_atomic_helper.h>
>   #include <drm/drm_gem_framebuffer_helper.h>
>   #include <drm/drm_print.h>
>   #include <drm/drm_rect.h>
> @@ -152,32 +153,21 @@ static size_t gud_xrgb8888_to_color(u8 *dst, const struct drm_format_info *forma
>   }
>   
>   static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
> +			  const struct iosys_map *src, bool cached_reads,
>   			  const struct drm_format_info *format, struct drm_rect *rect,
>   			  struct gud_set_buffer_req *req)
>   {
> -	struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach;
>   	u8 compression = gdrm->compression;
> -	struct iosys_map map[DRM_FORMAT_MAX_PLANES] = { };
> -	struct iosys_map map_data[DRM_FORMAT_MAX_PLANES] = { };
>   	struct iosys_map dst;
>   	void *vaddr, *buf;
>   	size_t pitch, len;
> -	int ret = 0;
>   
>   	pitch = drm_format_info_min_pitch(format, 0, drm_rect_width(rect));
>   	len = pitch * drm_rect_height(rect);
>   	if (len > gdrm->bulk_len)
>   		return -E2BIG;
>   
> -	ret = drm_gem_fb_vmap(fb, map, map_data);
> -	if (ret)
> -		return ret;
> -
> -	vaddr = map_data[0].vaddr;
> -
> -	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
> -	if (ret)
> -		goto vunmap;
> +	vaddr = src[0].vaddr;
>   retry:
>   	if (compression)
>   		buf = gdrm->compress_buf;
> @@ -192,29 +182,27 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
>   	if (format != fb->format) {
>   		if (format->format == GUD_DRM_FORMAT_R1) {
>   			len = gud_xrgb8888_to_r124(buf, format, vaddr, fb, rect);
> -			if (!len) {
> -				ret = -ENOMEM;
> -				goto end_cpu_access;
> -			}
> +			if (!len)
> +				return -ENOMEM;
>   		} else if (format->format == DRM_FORMAT_R8) {
> -			drm_fb_xrgb8888_to_gray8(&dst, NULL, map_data, fb, rect);
> +			drm_fb_xrgb8888_to_gray8(&dst, NULL, src, fb, rect);
>   		} else if (format->format == DRM_FORMAT_RGB332) {
> -			drm_fb_xrgb8888_to_rgb332(&dst, NULL, map_data, fb, rect);
> +			drm_fb_xrgb8888_to_rgb332(&dst, NULL, src, fb, rect);
>   		} else if (format->format == DRM_FORMAT_RGB565) {
> -			drm_fb_xrgb8888_to_rgb565(&dst, NULL, map_data, fb, rect,
> +			drm_fb_xrgb8888_to_rgb565(&dst, NULL, src, fb, rect,
>   						  gud_is_big_endian());
>   		} else if (format->format == DRM_FORMAT_RGB888) {
> -			drm_fb_xrgb8888_to_rgb888(&dst, NULL, map_data, fb, rect);
> +			drm_fb_xrgb8888_to_rgb888(&dst, NULL, src, fb, rect);
>   		} else {
>   			len = gud_xrgb8888_to_color(buf, format, vaddr, fb, rect);
>   		}
>   	} else if (gud_is_big_endian() && format->cpp[0] > 1) {
> -		drm_fb_swab(&dst, NULL, map_data, fb, rect, !import_attach);
> -	} else if (compression && !import_attach && pitch == fb->pitches[0]) {
> +		drm_fb_swab(&dst, NULL, src, fb, rect, cached_reads);
> +	} else if (compression && cached_reads && pitch == fb->pitches[0]) {
>   		/* can compress directly from the framebuffer */
>   		buf = vaddr + rect->y1 * pitch;
>   	} else {
> -		drm_fb_memcpy(&dst, NULL, map_data, fb, rect);
> +		drm_fb_memcpy(&dst, NULL, src, fb, rect);
>   	}
>   
>   	memset(req, 0, sizeof(*req));
> @@ -237,12 +225,7 @@ static int gud_prep_flush(struct gud_device *gdrm, struct drm_framebuffer *fb,
>   		req->compressed_length = cpu_to_le32(complen);
>   	}
>   
> -end_cpu_access:
> -	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
> -vunmap:
> -	drm_gem_fb_vunmap(fb, map);
> -
> -	return ret;
> +	return 0;
>   }
>   
>   struct gud_usb_bulk_context {
> @@ -285,6 +268,7 @@ static int gud_usb_bulk(struct gud_device *gdrm, size_t len)
>   }
>   
>   static int gud_flush_rect(struct gud_device *gdrm, struct drm_framebuffer *fb,
> +			  const struct iosys_map *src, bool cached_reads,
>   			  const struct drm_format_info *format, struct drm_rect *rect)
>   {
>   	struct gud_set_buffer_req req;
> @@ -293,7 +277,7 @@ static int gud_flush_rect(struct gud_device *gdrm, struct drm_framebuffer *fb,
>   
>   	drm_dbg(&gdrm->drm, "Flushing [FB:%d] " DRM_RECT_FMT "\n", fb->base.id, DRM_RECT_ARG(rect));
>   
> -	ret = gud_prep_flush(gdrm, fb, format, rect, &req);
> +	ret = gud_prep_flush(gdrm, fb, src, cached_reads, format, rect, &req);
>   	if (ret)
>   		return ret;
>   
> @@ -334,6 +318,7 @@ void gud_clear_damage(struct gud_device *gdrm)
>   }
>   
>   static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
> +			     const struct iosys_map *src, bool cached_reads,
>   			     struct drm_rect *damage)
>   {
>   	const struct drm_format_info *format;
> @@ -358,7 +343,7 @@ static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb
>   		rect.y1 += i * lines;
>   		rect.y2 = min_t(u32, rect.y1 + lines, damage->y2);
>   
> -		ret = gud_flush_rect(gdrm, fb, format, &rect);
> +		ret = gud_flush_rect(gdrm, fb, src, cached_reads, format, &rect);
>   		if (ret) {
>   			if (ret != -ENODEV && ret != -ECONNRESET &&
>   			    ret != -ESHUTDOWN && ret != -EPROTO)
> @@ -373,9 +358,10 @@ static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb
>   void gud_flush_work(struct work_struct *work)
>   {
>   	struct gud_device *gdrm = container_of(work, struct gud_device, work);
> +	struct iosys_map gem_map = { }, fb_map = { };
>   	struct drm_framebuffer *fb;
>   	struct drm_rect damage;
> -	int idx;
> +	int idx, ret;
>   
>   	if (!drm_dev_enter(&gdrm->drm, &idx))
>   		return;
> @@ -390,8 +376,21 @@ void gud_flush_work(struct work_struct *work)
>   	if (!fb)
>   		goto out;
>   
> -	gud_flush_damage(gdrm, fb, &damage);
> +	ret = drm_gem_fb_vmap(fb, &gem_map, &fb_map);
> +	if (ret)
> +		goto fb_put;
>   
> +	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
> +	if (ret)
> +		goto vunmap;
> +
> +	/* Imported buffers are assumed to be WriteCombined with uncached reads */
> +	gud_flush_damage(gdrm, fb, &fb_map, !fb->obj[0]->import_attach, &damage);
> +
> +	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
> +vunmap:
> +	drm_gem_fb_vunmap(fb, &gem_map);
> +fb_put:
>   	drm_framebuffer_put(fb);
>   out:
>   	drm_dev_exit(idx);
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/6] drm/gud: Use the shadow plane helper
  2022-11-30 19:26 ` [PATCH v2 5/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
@ 2022-12-01  8:55   ` Thomas Zimmermann
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Zimmermann @ 2022-12-01  8:55 UTC (permalink / raw)
  To: noralf, Javier Martinez Canillas, dri-devel, Maxime Ripard, stable


[-- Attachment #1.1: Type: text/plain, Size: 7482 bytes --]



Am 30.11.22 um 20:26 schrieb Noralf Trønnes via B4 Submission Endpoint:
> From: Noralf Trønnes <noralf@tronnes.org>
> 
> Use the shadow plane helper to take care of mapping the framebuffer for
> CPU access. The synchronous flushing is now done inline without the use of
> a worker. The async path now uses a shadow buffer to hold framebuffer
> changes and it doesn't read the framebuffer behind userspace's back
> anymore.
> 
> v2:
> - Use src as variable name for iosys_map (Thomas)
> - Prepare imported buffer for CPU access in the driver (Thomas)
> 
> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

> ---
>   drivers/gpu/drm/gud/gud_drv.c      |  1 +
>   drivers/gpu/drm/gud/gud_internal.h |  1 +
>   drivers/gpu/drm/gud/gud_pipe.c     | 81 ++++++++++++++++++++++++++------------
>   3 files changed, 57 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/gud/gud_drv.c b/drivers/gpu/drm/gud/gud_drv.c
> index d57dab104358..5aac7cda0505 100644
> --- a/drivers/gpu/drm/gud/gud_drv.c
> +++ b/drivers/gpu/drm/gud/gud_drv.c
> @@ -365,6 +365,7 @@ static void gud_debugfs_init(struct drm_minor *minor)
>   static const struct drm_simple_display_pipe_funcs gud_pipe_funcs = {
>   	.check      = gud_pipe_check,
>   	.update	    = gud_pipe_update,
> +	DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS
>   };
>   
>   static const struct drm_mode_config_funcs gud_mode_config_funcs = {
> diff --git a/drivers/gpu/drm/gud/gud_internal.h b/drivers/gpu/drm/gud/gud_internal.h
> index e351a1f1420d..0d148a6f27aa 100644
> --- a/drivers/gpu/drm/gud/gud_internal.h
> +++ b/drivers/gpu/drm/gud/gud_internal.h
> @@ -43,6 +43,7 @@ struct gud_device {
>   	struct drm_framebuffer *fb;
>   	struct drm_rect damage;
>   	bool prev_flush_failed;
> +	void *shadow_buf;
>   };
>   
>   static inline struct gud_device *to_gud_device(struct drm_device *drm)
> diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
> index 98fe8efda4a9..92189474a7ed 100644
> --- a/drivers/gpu/drm/gud/gud_pipe.c
> +++ b/drivers/gpu/drm/gud/gud_pipe.c
> @@ -358,10 +358,10 @@ static void gud_flush_damage(struct gud_device *gdrm, struct drm_framebuffer *fb
>   void gud_flush_work(struct work_struct *work)
>   {
>   	struct gud_device *gdrm = container_of(work, struct gud_device, work);
> -	struct iosys_map gem_map = { }, fb_map = { };
> +	struct iosys_map shadow_map;
>   	struct drm_framebuffer *fb;
>   	struct drm_rect damage;
> -	int idx, ret;
> +	int idx;
>   
>   	if (!drm_dev_enter(&gdrm->drm, &idx))
>   		return;
> @@ -369,6 +369,7 @@ void gud_flush_work(struct work_struct *work)
>   	mutex_lock(&gdrm->damage_lock);
>   	fb = gdrm->fb;
>   	gdrm->fb = NULL;
> +	iosys_map_set_vaddr(&shadow_map, gdrm->shadow_buf);
>   	damage = gdrm->damage;
>   	gud_clear_damage(gdrm);
>   	mutex_unlock(&gdrm->damage_lock);
> @@ -376,33 +377,33 @@ void gud_flush_work(struct work_struct *work)
>   	if (!fb)
>   		goto out;
>   
> -	ret = drm_gem_fb_vmap(fb, &gem_map, &fb_map);
> -	if (ret)
> -		goto fb_put;
> +	gud_flush_damage(gdrm, fb, &shadow_map, true, &damage);
>   
> -	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
> -	if (ret)
> -		goto vunmap;
> -
> -	/* Imported buffers are assumed to be WriteCombined with uncached reads */
> -	gud_flush_damage(gdrm, fb, &fb_map, !fb->obj[0]->import_attach, &damage);
> -
> -	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
> -vunmap:
> -	drm_gem_fb_vunmap(fb, &gem_map);
> -fb_put:
>   	drm_framebuffer_put(fb);
>   out:
>   	drm_dev_exit(idx);
>   }
>   
> -static void gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
> -				struct drm_rect *damage)
> +static int gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
> +			       const struct iosys_map *src, struct drm_rect *damage)
>   {
>   	struct drm_framebuffer *old_fb = NULL;
> +	struct iosys_map shadow_map;
>   
>   	mutex_lock(&gdrm->damage_lock);
>   
> +	if (!gdrm->shadow_buf) {
> +		gdrm->shadow_buf = vzalloc(fb->pitches[0] * fb->height);
> +		if (!gdrm->shadow_buf) {
> +			mutex_unlock(&gdrm->damage_lock);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	iosys_map_set_vaddr(&shadow_map, gdrm->shadow_buf);
> +	iosys_map_incr(&shadow_map, drm_fb_clip_offset(fb->pitches[0], fb->format, damage));
> +	drm_fb_memcpy(&shadow_map, fb->pitches, src, fb, damage);
> +
>   	if (fb != gdrm->fb) {
>   		old_fb = gdrm->fb;
>   		drm_framebuffer_get(fb);
> @@ -420,6 +421,26 @@ static void gud_fb_queue_damage(struct gud_device *gdrm, struct drm_framebuffer
>   
>   	if (old_fb)
>   		drm_framebuffer_put(old_fb);
> +
> +	return 0;
> +}
> +
> +static void gud_fb_handle_damage(struct gud_device *gdrm, struct drm_framebuffer *fb,
> +				 const struct iosys_map *src, struct drm_rect *damage)
> +{
> +	int ret;
> +
> +	if (gdrm->flags & GUD_DISPLAY_FLAG_FULL_UPDATE)
> +		drm_rect_init(damage, 0, 0, fb->width, fb->height);
> +
> +	if (gud_async_flush) {
> +		ret = gud_fb_queue_damage(gdrm, fb, src, damage);
> +		if (ret != -ENOMEM)
> +			return;
> +	}
> +
> +	/* Imported buffers are assumed to be WriteCombined with uncached reads */
> +	gud_flush_damage(gdrm, fb, src, !fb->obj[0]->import_attach, damage);
>   }
>   
>   int gud_pipe_check(struct drm_simple_display_pipe *pipe,
> @@ -544,10 +565,11 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
>   	struct drm_device *drm = pipe->crtc.dev;
>   	struct gud_device *gdrm = to_gud_device(drm);
>   	struct drm_plane_state *state = pipe->plane.state;
> +	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(state);
>   	struct drm_framebuffer *fb = state->fb;
>   	struct drm_crtc *crtc = &pipe->crtc;
>   	struct drm_rect damage;
> -	int idx;
> +	int ret, idx;
>   
>   	if (crtc->state->mode_changed || !crtc->state->enable) {
>   		cancel_work_sync(&gdrm->work);
> @@ -557,6 +579,8 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
>   			gdrm->fb = NULL;
>   		}
>   		gud_clear_damage(gdrm);
> +		vfree(gdrm->shadow_buf);
> +		gdrm->shadow_buf = NULL;
>   		mutex_unlock(&gdrm->damage_lock);
>   	}
>   
> @@ -572,14 +596,19 @@ void gud_pipe_update(struct drm_simple_display_pipe *pipe,
>   	if (crtc->state->active_changed)
>   		gud_usb_set_u8(gdrm, GUD_REQ_SET_DISPLAY_ENABLE, crtc->state->active);
>   
> -	if (drm_atomic_helper_damage_merged(old_state, state, &damage)) {
> -		if (gdrm->flags & GUD_DISPLAY_FLAG_FULL_UPDATE)
> -			drm_rect_init(&damage, 0, 0, fb->width, fb->height);
> -		gud_fb_queue_damage(gdrm, fb, &damage);
> -		if (!gud_async_flush)
> -			flush_work(&gdrm->work);
> -	}
> +	if (!fb)
> +		goto ctrl_disable;
>   
> +	ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
> +	if (ret)
> +		goto ctrl_disable;
> +
> +	if (drm_atomic_helper_damage_merged(old_state, state, &damage))
> +		gud_fb_handle_damage(gdrm, fb, &shadow_plane_state->data[0], &damage);
> +
> +	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
> +
> +ctrl_disable:
>   	if (!crtc->state->enable)
>   		gud_usb_set_u8(gdrm, GUD_REQ_SET_CONTROLLER_ENABLE, 0);
>   
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 6/6] drm/gud: Enable synchronous flushing by default
  2022-11-30 19:26 ` [PATCH v2 6/6] drm/gud: Enable synchronous flushing by default Noralf Trønnes via B4 Submission Endpoint
@ 2022-12-01  8:57   ` Thomas Zimmermann
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Zimmermann @ 2022-12-01  8:57 UTC (permalink / raw)
  To: noralf, Javier Martinez Canillas, dri-devel, Maxime Ripard, stable


[-- Attachment #1.1: Type: text/plain, Size: 2484 bytes --]



Am 30.11.22 um 20:26 schrieb Noralf Trønnes via B4 Submission Endpoint:
> From: Noralf Trønnes <noralf@tronnes.org>
> 
> gud has a module parameter that controls whether framebuffer flushing
> happens synchronously during the commit or asynchronously in a worker.
> 
> GNOME before version 3.38 handled all displays in the same rendering loop.
> This led to gud slowing down the refresh rate for a faster monitor. This
> has now been fixed, so let's change the default.
> 
> The plan is to remove async flushing in the future. The code is now
> structured in a way that makes it easy to do this.
> 
> Link: https://blogs.gnome.org/shell-dev/2020/07/02/splitting-up-the-frame-clock/
> Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>

Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de>

> ---
>   drivers/gpu/drm/gud/gud_pipe.c | 12 ++++--------
>   1 file changed, 4 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/gud/gud_pipe.c b/drivers/gpu/drm/gud/gud_pipe.c
> index 92189474a7ed..62c43d3632d4 100644
> --- a/drivers/gpu/drm/gud/gud_pipe.c
> +++ b/drivers/gpu/drm/gud/gud_pipe.c
> @@ -25,17 +25,13 @@
>   #include "gud_internal.h"
>   
>   /*
> - * Some userspace rendering loops runs all displays in the same loop.
> + * Some userspace rendering loops run all displays in the same loop.
>    * This means that a fast display will have to wait for a slow one.
> - * For this reason gud does flushing asynchronous by default.
> - * The down side is that in e.g. a single display setup userspace thinks
> - * the display is insanely fast since the driver reports back immediately
> - * that the flush/pageflip is done. This wastes CPU and power.
> - * Such users might want to set this module parameter to false.
> + * Such users might want to enable this module parameter.
>    */
> -static bool gud_async_flush = true;
> +static bool gud_async_flush;
>   module_param_named(async_flush, gud_async_flush, bool, 0644);
> -MODULE_PARM_DESC(async_flush, "Enable asynchronous flushing [default=true]");
> +MODULE_PARM_DESC(async_flush, "Enable asynchronous flushing [default=0]");
>   
>   /*
>    * FIXME: The driver is probably broken on Big Endian machines.
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 840 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01  5:55 ` [PATCH v2 0/6] drm/gud: Use the shadow plane helper Greg KH
@ 2022-12-01 10:00   ` Noralf Trønnes
  2022-12-01 12:12     ` Greg KH
  0 siblings, 1 reply; 24+ messages in thread
From: Noralf Trønnes @ 2022-12-01 10:00 UTC (permalink / raw)
  To: Greg KH, Konstantin Ryabitsev
  Cc: Javier Martinez Canillas, dri-devel, Thomas Zimmermann, stable, tools



Den 01.12.2022 06.55, skrev Greg KH:
> On Wed, Nov 30, 2022 at 08:26:48PM +0100, Noralf Trønnes via B4 Submission Endpoint wrote:
>> Hi,
>>
>> I have started to look at igt for testing and want to use CRC tests. To
>> implement support for this I need to move away from the simple kms
>> helper.
>>
>> When looking around for examples I came across Thomas' nice shadow
>> helper and thought, yes this is perfect for drm/gud. So I'll switch to
>> that before I move away from the simple kms helper.
>>
>> The async framebuffer flushing code path now uses a shadow buffer and
>> doesn't touch the framebuffer when it shouldn't. I have also taken the
>> opportunity to inline the synchronous flush code path and make this the
>> default flushing strategy.
>>
>> Noralf.
>>
>> Cc: Maxime Ripard <mripard@kernel.org>
>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>> Cc: dri-devel@lists.freedesktop.org
>> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
>>
>> ---
>> Changes in v2:
>> - Drop patch (Thomas):
>>   drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
>> - Use src as variable name for iosys_map (Thomas)
>> - Prepare imported buffer for CPU access in the driver (Thomas)
>> - New patch: make sync flushing the default (Thomas)
>> - Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org
> 
> <formletter>
> 
> This is not the correct way to submit patches for inclusion in the
> stable kernel tree.  Please read:
>     https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> for how to do this properly.
> 
> </formletter>

Care to elaborate?
Is it because stable got the whole patchset and not just the one fix
patch that cc'ed stable?

This patchset was sent using the b4 tool and I can't control this
aspect. Everyone mentioned in the patches gets the whole set.

Noralf.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 10:00   ` Noralf Trønnes
@ 2022-12-01 12:12     ` Greg KH
  2022-12-01 13:14       ` Noralf Trønnes
  0 siblings, 1 reply; 24+ messages in thread
From: Greg KH @ 2022-12-01 12:12 UTC (permalink / raw)
  To: Noralf Trønnes
  Cc: tools, Javier Martinez Canillas, dri-devel, Thomas Zimmermann,
	stable, Konstantin Ryabitsev

On Thu, Dec 01, 2022 at 11:00:44AM +0100, Noralf Trønnes wrote:
> 
> 
> Den 01.12.2022 06.55, skrev Greg KH:
> > On Wed, Nov 30, 2022 at 08:26:48PM +0100, Noralf Trønnes via B4 Submission Endpoint wrote:
> >> Hi,
> >>
> >> I have started to look at igt for testing and want to use CRC tests. To
> >> implement support for this I need to move away from the simple kms
> >> helper.
> >>
> >> When looking around for examples I came across Thomas' nice shadow
> >> helper and thought, yes this is perfect for drm/gud. So I'll switch to
> >> that before I move away from the simple kms helper.
> >>
> >> The async framebuffer flushing code path now uses a shadow buffer and
> >> doesn't touch the framebuffer when it shouldn't. I have also taken the
> >> opportunity to inline the synchronous flush code path and make this the
> >> default flushing stategy.
> >>
> >> Noralf.
> >>
> >> Cc: Maxime Ripard <mripard@kernel.org>
> >> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> >> Cc: dri-devel@lists.freedesktop.org
> >> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
> >>
> >> ---
> >> Changes in v2:
> >> - Drop patch (Thomas):
> >>   drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
> >> - Use src as variable name for iosys_map (Thomas)
> >> - Prepare imported buffer for CPU access in the driver (Thomas)
> >> - New patch: make sync flushing the default (Thomas)
> >> - Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org
> > 
> > <formletter>
> > 
> > This is not the correct way to submit patches for inclusion in the
> > stable kernel tree.  Please read:
> >     https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> > for how to do this properly.
> > 
> > </formletter>
> 
> Care to elaborate?
> Is it because stable got the whole patchset and not just the one fix
> patch that cc'ed stable?

That is what triggered this, yes.

> This patchset was sent using the b4 tool and I can't control this
> aspect. Everyone mentioned in the patches gets the whole set.

Fair enough, but watch out, bots will report this as being a problem as
they can't always read through all patches in a series to notice this...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 12:12     ` Greg KH
@ 2022-12-01 13:14       ` Noralf Trønnes
  2022-12-01 13:21         ` Greg KH
  0 siblings, 1 reply; 24+ messages in thread
From: Noralf Trønnes @ 2022-12-01 13:14 UTC (permalink / raw)
  To: Greg KH, Konstantin Ryabitsev
  Cc: Javier Martinez Canillas, Noralf Trønnes, dri-devel,
	Thomas Zimmermann, tools



On 01.12.2022 13.12, Greg KH wrote:
> On Thu, Dec 01, 2022 at 11:00:44AM +0100, Noralf Trønnes wrote:
>>
>>
>> On 01.12.2022 06.55, Greg KH wrote:
>>> On Wed, Nov 30, 2022 at 08:26:48PM +0100, Noralf Trønnes via B4 Submission Endpoint wrote:
>>>> Hi,
>>>>
>>>> I have started to look at igt for testing and want to use CRC tests. To
>>>> implement support for this I need to move away from the simple kms
>>>> helper.
>>>>
>>>> When looking around for examples I came across Thomas' nice shadow
>>>> helper and thought, yes this is perfect for drm/gud. So I'll switch to
>>>> that before I move away from the simple kms helper.
>>>>
>>>> The async framebuffer flushing code path now uses a shadow buffer and
>>>> doesn't touch the framebuffer when it shouldn't. I have also taken the
>>>> opportunity to inline the synchronous flush code path and make this the
>>>> default flushing stategy.
>>>>
>>>> Noralf.
>>>>
>>>> Cc: Maxime Ripard <mripard@kernel.org>
>>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
>>>> Cc: dri-devel@lists.freedesktop.org
>>>> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
>>>>
>>>> ---
>>>> Changes in v2:
>>>> - Drop patch (Thomas):
>>>>   drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
>>>> - Use src as variable name for iosys_map (Thomas)
>>>> - Prepare imported buffer for CPU access in the driver (Thomas)
>>>> - New patch: make sync flushing the default (Thomas)
>>>> - Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org
>>>
>>> <formletter>
>>>
>>> This is not the correct way to submit patches for inclusion in the
>>> stable kernel tree.  Please read:
>>>     https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
>>> for how to do this properly.
>>>
>>> </formletter>
>>
>> Care to elaborate?
>> Is it because stable got the whole patchset and not just the one fix
>> patch that cc'ed stable?
> 
> That is what triggered this, yes.
> 
>> This patchset was sent using the b4 tool and I can't control this
>> aspect. Everyone mentioned in the patches gets the whole set.
> 
> Fair enough, but watch out, bots will report this as being a problem as
> they can't always read through all patches in a series to notice this...
> 

Konstantin,

Can you add a rule in b4 to exclude stable@vger.kernel.org
(stable@kernel.org as well?) from getting the whole patchset?

Noralf.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 13:14       ` Noralf Trønnes
@ 2022-12-01 13:21         ` Greg KH
  2022-12-01 13:34           ` Javier Martinez Canillas
  0 siblings, 1 reply; 24+ messages in thread
From: Greg KH @ 2022-12-01 13:21 UTC (permalink / raw)
  To: Noralf Trønnes
  Cc: Konstantin Ryabitsev, Javier Martinez Canillas, dri-devel,
	Thomas Zimmermann, tools

On Thu, Dec 01, 2022 at 02:14:42PM +0100, Noralf Trønnes wrote:
> 
> 
> On 01.12.2022 13.12, Greg KH wrote:
> > On Thu, Dec 01, 2022 at 11:00:44AM +0100, Noralf Trønnes wrote:
> >>
> >>
> >> On 01.12.2022 06.55, Greg KH wrote:
> >>> On Wed, Nov 30, 2022 at 08:26:48PM +0100, Noralf Trønnes via B4 Submission Endpoint wrote:
> >>>> Hi,
> >>>>
> >>>> I have started to look at igt for testing and want to use CRC tests. To
> >>>> implement support for this I need to move away from the simple kms
> >>>> helper.
> >>>>
> >>>> When looking around for examples I came across Thomas' nice shadow
> >>>> helper and thought, yes this is perfect for drm/gud. So I'll switch to
> >>>> that before I move away from the simple kms helper.
> >>>>
> >>>> The async framebuffer flushing code path now uses a shadow buffer and
> >>>> doesn't touch the framebuffer when it shouldn't. I have also taken the
> >>>> opportunity to inline the synchronous flush code path and make this the
> >>>> default flushing stategy.
> >>>>
> >>>> Noralf.
> >>>>
> >>>> Cc: Maxime Ripard <mripard@kernel.org>
> >>>> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> >>>> Cc: dri-devel@lists.freedesktop.org
> >>>> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
> >>>>
> >>>> ---
> >>>> Changes in v2:
> >>>> - Drop patch (Thomas):
> >>>>   drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
> >>>> - Use src as variable name for iosys_map (Thomas)
> >>>> - Prepare imported buffer for CPU access in the driver (Thomas)
> >>>> - New patch: make sync flushing the default (Thomas)
> >>>> - Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org
> >>>
> >>> <formletter>
> >>>
> >>> This is not the correct way to submit patches for inclusion in the
> >>> stable kernel tree.  Please read:
> >>>     https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> >>> for how to do this properly.
> >>>
> >>> </formletter>
> >>
> >> Care to elaborate?
> >> Is it because stable got the whole patchset and not just the one fix
> >> patch that cc'ed stable?
> > 
> > That is what triggered this, yes.
> > 
> >> This patchset was sent using the b4 tool and I can't control this
> >> aspect. Everyone mentioned in the patches gets the whole set.
> > 
> > Fair enough, but watch out, bots will report this as being a problem as
> > they can't always read through all patches in a series to notice this...
> > 
> 
> Konstantin,
> 
> Can you add a rule in b4 to exclude stable@vger.kernel.org
> (stable@kernel.org as well?) from getting the whole patchset?

stable@kernel.org is a pipe to /dev/null so that's not needed to be
messed with.

As for this needing special casing in b4, it's rare that you send out a
patch series and only want 1 or 2 of them in stable, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 13:21         ` Greg KH
@ 2022-12-01 13:34           ` Javier Martinez Canillas
  2022-12-01 14:16             ` Konstantin Ryabitsev
  0 siblings, 1 reply; 24+ messages in thread
From: Javier Martinez Canillas @ 2022-12-01 13:34 UTC (permalink / raw)
  To: Greg KH, Noralf Trønnes
  Cc: tools, dri-devel, Thomas Zimmermann, Konstantin Ryabitsev

Hello Greg,

On 12/1/22 14:21, Greg KH wrote:

[...]

>>>> This patchset was sent using the b4 tool and I can't control this
>>>> aspect. Everyone mentioned in the patches gets the whole set.
>>>
>>> Fair enough, but watch out, bots will report this as being a problem as
>>> they can't always read through all patches in a series to notice this...
>>>
>>
>> Konstantin,
>>
>> Can you add a rule in b4 to exclude stable@vger.kernel.org
>> (stable@kernel.org as well?) from getting the whole patchset?
> 
> stable@kernel.org is a pipe to /dev/null so that's not needed to be
> messed with.
> 
> As for this needing special casing in b4, it's rare that you send out a
> patch series and only want 1 or 2 of them in stable, right?
>

Not really, it's very common for a patch series to contain fixes (that
could go to stable if applicable) and changes that are not suitable for
stable. The problem, as Noralf mentioned, is that the b4 tool doesn't
seem to allow Cc'ing individual patches to different recipients, and
everyone gets the whole set.

So either b4 needs to gain this support or exclude stable@vger.kernel.org
when sending a set, or stable@vger.kernel.org needs to ignore patches
without a Fixes: tag.
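
For illustration, and assuming I'm reading stable-kernel-rules right: the
tag lives in the commit message of the one fix patch, e.g. hypothetically
for patch 1/6 here:

    drm/gud: Fix UBSAN warning

    <description of the fix>

    Cc: stable@vger.kernel.org
    Signed-off-by: Noralf Trønnes <noralf@tronnes.org>

and it is that trailer, not the mail Cc of the whole series, that marks
the patch for stable once it lands in mainline.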

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 13:34           ` Javier Martinez Canillas
@ 2022-12-01 14:16             ` Konstantin Ryabitsev
  2022-12-01 14:20               ` Javier Martinez Canillas
                                 ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Konstantin Ryabitsev @ 2022-12-01 14:16 UTC (permalink / raw)
  To: Javier Martinez Canillas
  Cc: Greg KH, Noralf Trønnes, dri-devel, Thomas Zimmermann, tools

On Thu, Dec 01, 2022 at 02:34:41PM +0100, Javier Martinez Canillas wrote:
> >> Konstantin,
> >>
> >> Can you add a rule in b4 to exclude stable@vger.kernel.org
> >> (stable@kernel.org as well?) from getting the whole patchset?
> > 
> > stable@kernel.org is a pipe to /dev/null so that's not needed to be
> > messed with.
> > 
> > As for this needing special casing in b4, it's rare that you send out a
> > patch series and only want 1 or 2 of them in stable, right?
> >
> 
> Not really, it's very common for a patch series to contain fixes (that
> could go to stable if applicable) and changes that are not suitable for
> stable. The problem, as Noralf mentioned, is that the b4 tool doesn't
> seem to allow Cc'ing individual patches to different recipients, and
> everyone gets the whole set.
> 
> So either b4 needs to gain this support or exclude stable@vger.kernel.org
> when sending a set, or stable@vger.kernel.org needs to ignore patches
> without a Fixes: tag.

I think what I can do is add special logic for Cc: trailers:

- Any Cc: trailers we find in the cover letter receive the entire series
- Any Cc: trailers in individual patches only receive these individual patches
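
Concretely, for a series like this one that would mean something like
(hypothetical trailers, just to show the routing):

    Cc: dri-devel@lists.freedesktop.org in the cover letter -> gets the whole series
    Cc: stable@vger.kernel.org only on patch 1/6            -> gets just that patch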

Thank you for being patient -- we'll get this right, I promise.

-K

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 14:16             ` Konstantin Ryabitsev
@ 2022-12-01 14:20               ` Javier Martinez Canillas
  2022-12-01 14:27               ` Vlastimil Babka
  2022-12-01 20:52               ` Noralf Trønnes
  2 siblings, 0 replies; 24+ messages in thread
From: Javier Martinez Canillas @ 2022-12-01 14:20 UTC (permalink / raw)
  To: Konstantin Ryabitsev
  Cc: Greg KH, Noralf Trønnes, dri-devel, Thomas Zimmermann, tools


On 12/1/22 15:16, Konstantin Ryabitsev wrote:
> On Thu, Dec 01, 2022 at 02:34:41PM +0100, Javier Martinez Canillas wrote:
>>>> Konstantin,
>>>>
>>>> Can you add a rule in b4 to exclude stable@vger.kernel.org
>>>> (stable@kernel.org as well?) from getting the whole patchset?
>>>
>>> stable@kernel.org is a pipe to /dev/null so that's not needed to be
>>> messed with.
>>>
>>> As for this needing special casing in b4, it's rare that you send out a
>>> patch series and only want 1 or 2 of them in stable, right?
>>>
>>
>> Not really, it's very common for a patch series to contain fixes (that
>> could go to stable if applicable) and changes that are not suitable for
>> stable. The problem, as Noralf mentioned, is that the b4 tool doesn't
>> seem to allow Cc'ing individual patches to different recipients, and
>> everyone gets the whole set.
>> 
>> So either b4 needs to gain this support or exclude stable@vger.kernel.org
>> when sending a set, or stable@vger.kernel.org needs to ignore patches
>> without a Fixes: tag.
> 
> I think what I can do is add special logic for Cc: trailers:
> 
> - Any Cc: trailers we find in the cover letter receive the entire series
> - Any Cc: trailers in individual patches only receive these individual patches
>

I think that makes sense. It's similar to how patman works, for example.
 
> Thank you for being patient -- we'll get this right, I promise.
> 

On the contrary, thanks a lot for working on this tool and for being
that responsive to the feedback.

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 14:16             ` Konstantin Ryabitsev
  2022-12-01 14:20               ` Javier Martinez Canillas
@ 2022-12-01 14:27               ` Vlastimil Babka
  2022-12-01 14:32                 ` Mark Brown
  2022-12-01 20:52               ` Noralf Trønnes
  2 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2022-12-01 14:27 UTC (permalink / raw)
  To: Konstantin Ryabitsev, Javier Martinez Canillas
  Cc: Greg KH, Noralf Trønnes, dri-devel, Thomas Zimmermann, tools

On 12/1/22 15:16, Konstantin Ryabitsev wrote:
> On Thu, Dec 01, 2022 at 02:34:41PM +0100, Javier Martinez Canillas wrote:
>>>> Konstantin,
>>>>
>>>> Can you add a rule in b4 to exclude stable@vger.kernel.org
>>>> (stable@kernel.org as well?) from getting the whole patchset?
>>>
>>> stable@kernel.org is a pipe to /dev/null so that's not needed to be
>>> messed with.
>>>
>>> As for this needing special casing in b4, it's rare that you send out a
>>> patch series and only want 1 or 2 of them in stable, right?
>>>
>>
>> Not really, it's very common for a patch series to contain fixes (that
>> could go to stable if applicable) and changes that are not suitable for
>> stable. The problem, as Noralf mentioned, is that the b4 tool doesn't
>> seem to allow Cc'ing individual patches to different recipients, and
>> everyone gets the whole set.
>> 
>> So either b4 needs to gain this support or exclude stable@vger.kernel.org
>> when sending a set, or stable@vger.kernel.org needs to ignore patches
>> without a Fixes: tag.
> 
> I think what I can do is add special logic for Cc: trailers:
> 
> - Any Cc: trailers we find in the cover letter receive the entire series
> - Any Cc: trailers in individual patches only receive these individual patches

I usually do that with git send-email and a custom --cc-cmd script, but 
additionally it sends the cover letter also to everyone that's on any 
individual patch's Cc, so everyone gets at least the cover letter + 
their specific patch(es).

But that extra logic could be optional.
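
A minimal sketch of that kind of --cc-cmd (not my actual script; git
send-email runs it once per patch file and expects one address per line
on stdout):

    #!/bin/sh
    # Emit the Cc: trailers found in the patch file given as $1, so each
    # address only gets the patches that actually name it.
    sed -n -e 's/^[Cc][Cc]: *//p' "$1"

used roughly as:

    git send-email --cc-cmd=./cc-cmd.sh --to=... *.patch

The cover-letter-to-everyone part needs a bit of extra glue on top of that.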

> Thank you for being patient -- we'll get this right, I promise.
> 
> -K
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 14:27               ` Vlastimil Babka
@ 2022-12-01 14:32                 ` Mark Brown
  2023-01-05 12:35                   ` Daniel Vetter
  0 siblings, 1 reply; 24+ messages in thread
From: Mark Brown @ 2022-12-01 14:32 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: tools, Greg KH, Javier Martinez Canillas, dri-devel,
	Noralf Trønnes, Thomas Zimmermann, Konstantin Ryabitsev

[-- Attachment #1: Type: text/plain, Size: 518 bytes --]

On Thu, Dec 01, 2022 at 03:27:32PM +0100, Vlastimil Babka wrote:

> I usually do that with git send-email and a custom --cc-cmd script, but
> additionally it sends the cover letter also to everyone that's on any
> individual patch's Cc, so everyone gets at least the cover letter + their
> specific patch(es).

> But that extra logic could be optional.

Yeah, without the cover letter if you've just got an individual patch it
can be unclear what's going on with dependencies and so on for getting
the patches merged.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 14:16             ` Konstantin Ryabitsev
  2022-12-01 14:20               ` Javier Martinez Canillas
  2022-12-01 14:27               ` Vlastimil Babka
@ 2022-12-01 20:52               ` Noralf Trønnes
  2 siblings, 0 replies; 24+ messages in thread
From: Noralf Trønnes @ 2022-12-01 20:52 UTC (permalink / raw)
  To: Konstantin Ryabitsev, Javier Martinez Canillas
  Cc: Greg KH, Noralf Trønnes, dri-devel, Thomas Zimmermann, tools



On 01.12.2022 15.16, Konstantin Ryabitsev wrote:
> On Thu, Dec 01, 2022 at 02:34:41PM +0100, Javier Martinez Canillas wrote:
>>>> Konstantin,
>>>>
>>>> Can you add a rule in b4 to exclude stable@vger.kernel.org
>>>> (stable@kernel.org as well?) from getting the whole patchset?
>>>
>>> stable@kernel.org is a pipe to /dev/null so that's not needed to be
>>> messed with.
>>>
>>> As for this needing special casing in b4, it's rare that you send out a
>>> patch series and only want 1 or 2 of them in stable, right?
>>>
>>
>> Not really, it's very common for a patch series to contain fixes (that
>> could go to stable if applicable) and changes that are not suitable for
>> stable. The problem, as Noralf mentioned, is that the b4 tool doesn't
>> seem to allow Cc'ing individual patches to different recipients, and
>> everyone gets the whole set.
>> 
>> So either b4 needs to gain this support or exclude stable@vger.kernel.org
>> when sending a set, or stable@vger.kernel.org needs to ignore patches
>> without a Fixes: tag.
> 
> I think what I can do is add special logic for Cc: trailers:
> 
> - Any Cc: trailers we find in the cover letter receive the entire series
> - Any Cc: trailers in individual patches only receive these individual patches
> 

That should cover my use cases. I can now do 'b4 prep --auto-to-cc' and
then trim down the cc list in the cover letter if necessary.
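
Roughly this flow (flag names from memory, so they may differ between b4
versions):

    b4 prep --auto-to-cc    # collect To:/Cc: from trailers and MAINTAINERS
    b4 prep --edit-cover    # trim the Cc: list in the cover letter if needed
    b4 send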

> Thank you for being patient -- we'll get this right, I promise.
> 

Thanks for getting it right. b4 can replace parts of my own tooling and
do it more smoothly, so I think I'll continue to use it.

Noralf.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
                   ` (6 preceding siblings ...)
  2022-12-01  5:55 ` [PATCH v2 0/6] drm/gud: Use the shadow plane helper Greg KH
@ 2022-12-06 15:57 ` Noralf Trønnes
  7 siblings, 0 replies; 24+ messages in thread
From: Noralf Trønnes @ 2022-12-06 15:57 UTC (permalink / raw)
  To: Thomas Zimmermann, Javier Martinez Canillas, dri-devel, Maxime Ripard
  Cc: Noralf Trønnes



On 30.11.2022 20.26, Noralf Trønnes via B4 Submission Endpoint wrote:
> Hi,
> 
> I have started to look at igt for testing and want to use CRC tests. To
> implement support for this I need to move away from the simple kms
> helper.
> 
> When looking around for examples I came across Thomas' nice shadow
> helper and thought, yes this is perfect for drm/gud. So I'll switch to
> that before I move away from the simple kms helper.
> 
> The async framebuffer flushing code path now uses a shadow buffer and
> doesn't touch the framebuffer when it shouldn't. I have also taken the
> opportunity to inline the synchronous flush code path and make this the
> default flushing stategy.
> 
> Noralf.
> 
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
> 
> ---
> Changes in v2:
> - Drop patch (Thomas):
>   drm/gem: shadow_fb_access: Prepare imported buffers for CPU access
> - Use src as variable name for iosys_map (Thomas)
> - Prepare imported buffer for CPU access in the driver (Thomas)
> - New patch: make sync flushing the default (Thomas)
> - Link to v1: https://lore.kernel.org/r/20221122-gud-shadow-plane-v1-0-9de3afa3383e@tronnes.org
> 
> ---
> Noralf Trønnes (6):
>       drm/gud: Fix UBSAN warning
>       drm/gud: Don't retry a failed framebuffer flush
>       drm/gud: Split up gud_flush_work()
>       drm/gud: Prepare buffer for CPU access in gud_flush_work()
>       drm/gud: Use the shadow plane helper
>       drm/gud: Enable synchronous flushing by default
> 

Applied to drm-misc-next, thanks for reviewing!

Noralf.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2022-12-01 14:32                 ` Mark Brown
@ 2023-01-05 12:35                   ` Daniel Vetter
  2023-01-05 16:00                     ` Konstantin Ryabitsev
  0 siblings, 1 reply; 24+ messages in thread
From: Daniel Vetter @ 2023-01-05 12:35 UTC (permalink / raw)
  To: Mark Brown
  Cc: Konstantin Ryabitsev, Greg KH, Javier Martinez Canillas,
	dri-devel, Noralf Trønnes, Vlastimil Babka,
	Thomas Zimmermann, tools

On Thu, Dec 01, 2022 at 02:32:15PM +0000, Mark Brown wrote:
> On Thu, Dec 01, 2022 at 03:27:32PM +0100, Vlastimil Babka wrote:
> 
> > I usually do that with git send-email and a custom --cc-cmd script, but
> > additionally it sends the cover letter also to everyone that's on any
> > individual patch's Cc, so everyone gets at least the cover letter + their
> > specific patch(es).
> 
> > But that extra logic could be optional.
> 
> Yeah, without the cover letter if you've just got an individual patch it
> can be unclear what's going on with dependencies and so on for getting
> the patches merged.

+1 on including the cover letter for any recipient. If b4 can do this
right by default, that would be a really good reason for me to look into
it and switch.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/6] drm/gud: Use the shadow plane helper
  2023-01-05 12:35                   ` Daniel Vetter
@ 2023-01-05 16:00                     ` Konstantin Ryabitsev
  0 siblings, 0 replies; 24+ messages in thread
From: Konstantin Ryabitsev @ 2023-01-05 16:00 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Noralf Trønnes, Greg KH, Javier Martinez Canillas,
	dri-devel, Mark Brown, Vlastimil Babka, Thomas Zimmermann, tools

On Thu, Jan 05, 2023 at 01:35:37PM +0100, Daniel Vetter wrote:
> > Yeah, without the cover letter if you've just got an individual patch it
> > can be unclear what's going on with dependencies and so on for getting
> > the patches merged.
> 
> +1 on including the cover letter for any recipient. If b4 can do this
> right by default, that would be a really good reason for me to look into
> it and switch.

That's the behaviour in the latest releases. I will release 0.11.2 later today
with some hotfixes, so if you want to try it out, that's the version I'd
recommend.

-K

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2023-01-05 16:00 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-30 19:26 [PATCH v2 0/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
2022-11-30 19:26 ` [PATCH v2 1/6] drm/gud: Fix UBSAN warning Noralf Trønnes via B4 Submission Endpoint
2022-11-30 19:26 ` [PATCH v2 2/6] drm/gud: Don't retry a failed framebuffer flush Noralf Trønnes via B4 Submission Endpoint
2022-11-30 19:26 ` [PATCH v2 3/6] drm/gud: Split up gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
2022-11-30 19:26 ` [PATCH v2 4/6] drm/gud: Prepare buffer for CPU access in gud_flush_work() Noralf Trønnes via B4 Submission Endpoint
2022-12-01  8:51   ` Thomas Zimmermann
2022-11-30 19:26 ` [PATCH v2 5/6] drm/gud: Use the shadow plane helper Noralf Trønnes via B4 Submission Endpoint
2022-12-01  8:55   ` Thomas Zimmermann
2022-11-30 19:26 ` [PATCH v2 6/6] drm/gud: Enable synchronous flushing by default Noralf Trønnes via B4 Submission Endpoint
2022-12-01  8:57   ` Thomas Zimmermann
2022-12-01  5:55 ` [PATCH v2 0/6] drm/gud: Use the shadow plane helper Greg KH
2022-12-01 10:00   ` Noralf Trønnes
2022-12-01 12:12     ` Greg KH
2022-12-01 13:14       ` Noralf Trønnes
2022-12-01 13:21         ` Greg KH
2022-12-01 13:34           ` Javier Martinez Canillas
2022-12-01 14:16             ` Konstantin Ryabitsev
2022-12-01 14:20               ` Javier Martinez Canillas
2022-12-01 14:27               ` Vlastimil Babka
2022-12-01 14:32                 ` Mark Brown
2023-01-05 12:35                   ` Daniel Vetter
2023-01-05 16:00                     ` Konstantin Ryabitsev
2022-12-01 20:52               ` Noralf Trønnes
2022-12-06 15:57 ` Noralf Trønnes

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).