dri-devel.lists.freedesktop.org archive mirror
* [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
@ 2023-07-21  7:09 Javier Martinez Canillas
  2023-07-21  7:33 ` Maxime Ripard
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-21  7:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Zimmermann, Daniel Vetter, Javier Martinez Canillas,
	Maxime Ripard, Geert Uytterhoeven, dri-devel

Geert reports that the following NULL pointer dereference happens for him
after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
plane update"):

    [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
    ssd130x-i2c 0-003c: [drm] surface width(128), height(32), bpp(1)
    and format(R1   little-endian (0x20203152))
    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    Oops [#1]
    CPU: 0 PID: 1 Comm: swapper Not tainted
    6.5.0-rc1-orangecrab-02219-g0a529a1e4bf4 #565
    epc : ssd130x_update_rect.isra.0+0x13c/0x340
     ra : ssd130x_update_rect.isra.0+0x2bc/0x340
    ...
    status: 00000120 badaddr: 00000000 cause: 0000000f
    [<c0303d90>] ssd130x_update_rect.isra.0+0x13c/0x340
    [<c0304200>] ssd130x_primary_plane_helper_atomic_update+0x26c/0x284
    [<c02f8d54>] drm_atomic_helper_commit_planes+0xfc/0x27c
    [<c02f9314>] drm_atomic_helper_commit_tail+0x5c/0xb4
    [<c02f94fc>] commit_tail+0x190/0x1b8
    [<c02f99fc>] drm_atomic_helper_commit+0x194/0x1c0
    [<c02c5d00>] drm_atomic_commit+0xa4/0xe4
    [<c02cce40>] drm_client_modeset_commit_atomic+0x244/0x278
    [<c02ccef0>] drm_client_modeset_commit_locked+0x7c/0x1bc
    [<c02cd064>] drm_client_modeset_commit+0x34/0x64
    [<c0301a78>] __drm_fb_helper_restore_fbdev_mode_unlocked+0xc4/0xe8
    [<c0303424>] drm_fb_helper_set_par+0x38/0x58
    [<c027c410>] fbcon_init+0x294/0x534
    ...

The problem is that fbcon calls fbcon_init() which triggers a DRM modeset
and this leads to drm_atomic_helper_commit_planes() attempting to commit
the atomic state for all planes, even the ones whose CRTC is not enabled.

Since the primary plane buffer is allocated in the encoder .atomic_enable
callback, this happens after that initial modeset commit and leads to the
mentioned NULL pointer dereference.

Fix this by having custom helpers to allocate, duplicate and destroy the
plane state, that will take care of allocating and freeing these buffers.

Do this instead of allocating in the encoder .atomic_enable and freeing
in .atomic_disable callbacks, since allocations must not be done in an
.atomic_enable handler: drivers are not allowed to fail after
drm_atomic_helper_swap_state() has been called and the new atomic state
has been stored into the current software state.

Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Suggested-by: Maxime Ripard <mripard@kernel.org>
Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
---

Changes in v4:
- Move buffers allocation/free to plane .reset/.destroy helpers (Maxime Ripard).

Changes in v3:
- Move the buffers allocation to the plane helper funcs .begin_fb_access
  and the free to .end_fb_access callbacks.
- Always allocate the intermediate buffer because it is used in
  ssd130x_clear_screen().
- Don't allocate the buffers as device managed resources.

Changes in v2:
- Move the buffers allocation to .fb_create instead of preventing atomic
  updates for disable planes.
- Don't allocate the intermediate buffer when using the native R1 format.
- Allocate buffers as device managed resources.

 drivers/gpu/drm/solomon/ssd130x.c | 134 ++++++++++++++++++++++++------
 drivers/gpu/drm/solomon/ssd130x.h |   3 -
 2 files changed, 108 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
index afb08a8aa9fc..21b2afe40b13 100644
--- a/drivers/gpu/drm/solomon/ssd130x.c
+++ b/drivers/gpu/drm/solomon/ssd130x.c
@@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
 };
 EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
 
+struct ssd130x_plane_state {
+	struct drm_plane_state base;
+	/* Intermediate buffer to convert pixels from XRGB8888 to R1 */
+	u8 *buffer;
+	/* Buffer that contains R1 pixels to be written to the panel */
+	u8 *data_array;
+};
+
+static inline struct ssd130x_plane_state *to_ssd130x_plane_state(struct drm_plane_state *state)
+{
+	return container_of(state, struct ssd130x_plane_state, base);
+}
+
 static inline struct ssd130x_device *drm_to_ssd130x(struct drm_device *drm)
 {
 	return container_of(drm, struct ssd130x_device, drm);
 }
 
-static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
+static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x,
+			     struct ssd130x_plane_state *ssd130x_state)
 {
 	unsigned int page_height = ssd130x->device_info->page_height;
 	unsigned int pages = DIV_ROUND_UP(ssd130x->height, page_height);
@@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
 
 	pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
 
-	ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
-	if (!ssd130x->buffer)
+	ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
+	if (!ssd130x_state->buffer)
 		return -ENOMEM;
 
-	ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
-	if (!ssd130x->data_array) {
-		kfree(ssd130x->buffer);
+	ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
+	if (!ssd130x_state->data_array) {
+		kfree(ssd130x_state->buffer);
 		return -ENOMEM;
 	}
 
 	return 0;
 }
 
-static void ssd130x_buf_free(struct ssd130x_device *ssd130x)
+static void ssd130x_buf_free(struct ssd130x_plane_state *ssd130x_state)
 {
-	kfree(ssd130x->data_array);
-	kfree(ssd130x->buffer);
+	kfree(ssd130x_state->data_array);
+	kfree(ssd130x_state->buffer);
 }
 
 /*
@@ -466,12 +480,14 @@ static int ssd130x_init(struct ssd130x_device *ssd130x)
 				 SSD130X_SET_ADDRESS_MODE_HORIZONTAL);
 }
 
-static int ssd130x_update_rect(struct ssd130x_device *ssd130x, struct drm_rect *rect)
+static int ssd130x_update_rect(struct ssd130x_device *ssd130x,
+			       struct ssd130x_plane_state *ssd130x_state,
+			       struct drm_rect *rect)
 {
 	unsigned int x = rect->x1;
 	unsigned int y = rect->y1;
-	u8 *buf = ssd130x->buffer;
-	u8 *data_array = ssd130x->data_array;
+	u8 *buf = ssd130x_state->buffer;
+	u8 *data_array = ssd130x_state->data_array;
 	unsigned int width = drm_rect_width(rect);
 	unsigned int height = drm_rect_height(rect);
 	unsigned int line_length = DIV_ROUND_UP(width, 8);
@@ -567,7 +583,8 @@ static int ssd130x_update_rect(struct ssd130x_device *ssd130x, struct drm_rect *
 	return ret;
 }
 
-static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
+static void ssd130x_clear_screen(struct ssd130x_device *ssd130x,
+				 struct ssd130x_plane_state *ssd130x_state)
 {
 	struct drm_rect fullscreen = {
 		.x1 = 0,
@@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
 		.y2 = ssd130x->height,
 	};
 
-	ssd130x_update_rect(ssd130x, &fullscreen);
+	ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
 }
 
-static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
+static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
+				const struct iosys_map *vmap,
 				struct drm_rect *rect)
 {
+	struct drm_framebuffer *fb = state->fb;
 	struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
+	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
 	unsigned int page_height = ssd130x->device_info->page_height;
 	struct iosys_map dst;
 	unsigned int dst_pitch;
 	int ret = 0;
-	u8 *buf = ssd130x->buffer;
+	u8 *buf = ssd130x_state->buffer;
 
 	if (!buf)
 		return 0;
@@ -607,11 +627,27 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
 
 	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
 
-	ssd130x_update_rect(ssd130x, rect);
+	ssd130x_update_rect(ssd130x, ssd130x_state, rect);
 
 	return ret;
 }
 
+static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
+						     struct drm_atomic_state *state)
+{
+	struct drm_device *drm = plane->dev;
+	struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
+	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
+	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
+	int ret;
+
+	ret = drm_plane_helper_atomic_check(plane, state);
+	if (ret)
+		return ret;
+
+	return ssd130x_buf_alloc(ssd130x, ssd130x_state);
+}
+
 static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
 						       struct drm_atomic_state *state)
 {
@@ -634,7 +670,7 @@ static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
 		if (!drm_rect_intersect(&dst_clip, &damage))
 			continue;
 
-		ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &dst_clip);
+		ssd130x_fb_blit_rect(plane_state, &shadow_plane_state->data[0], &dst_clip);
 	}
 
 	drm_dev_exit(idx);
@@ -645,19 +681,68 @@ static void ssd130x_primary_plane_helper_atomic_disable(struct drm_plane *plane,
 {
 	struct drm_device *drm = plane->dev;
 	struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
+	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane->state);
 	int idx;
 
 	if (!drm_dev_enter(drm, &idx))
 		return;
 
-	ssd130x_clear_screen(ssd130x);
+	ssd130x_clear_screen(ssd130x, ssd130x_state);
 
 	drm_dev_exit(idx);
 }
 
+/* Called during init to allocate the plane's atomic state. */
+static void ssd130x_primary_plane_reset(struct drm_plane *plane)
+{
+	struct ssd130x_plane_state *ssd130x_state;
+
+	WARN_ON(plane->state);
+
+	ssd130x_state = kzalloc(sizeof(*ssd130x_state), GFP_KERNEL);
+	if (!ssd130x_state)
+		return;
+
+	__drm_atomic_helper_plane_reset(plane, &ssd130x_state->base);
+}
+
+static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
+{
+	struct ssd130x_plane_state *old_ssd130x_state;
+	struct ssd130x_plane_state *ssd130x_state;
+
+	if (WARN_ON(!plane->state))
+		return NULL;
+
+	old_ssd130x_state = to_ssd130x_plane_state(plane->state);
+	ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
+	if (!ssd130x_state)
+		return NULL;
+
+	/* The buffers are not duplicated and are allocated in .atomic_check */
+	ssd130x_state->buffer = NULL;
+	ssd130x_state->data_array = NULL;
+
+	__drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);
+
+	return &ssd130x_state->base;
+}
+
+static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
+						struct drm_plane_state *state)
+{
+	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
+
+	ssd130x_buf_free(ssd130x_state);
+
+	__drm_atomic_helper_plane_destroy_state(&ssd130x_state->base);
+
+	kfree(ssd130x_state);
+}
+
 static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = {
 	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
-	.atomic_check = drm_plane_helper_atomic_check,
+	.atomic_check = ssd130x_primary_plane_helper_atomic_check,
 	.atomic_update = ssd130x_primary_plane_helper_atomic_update,
 	.atomic_disable = ssd130x_primary_plane_helper_atomic_disable,
 };
@@ -665,6 +750,9 @@ static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs =
 static const struct drm_plane_funcs ssd130x_primary_plane_funcs = {
 	.update_plane = drm_atomic_helper_update_plane,
 	.disable_plane = drm_atomic_helper_disable_plane,
+	.reset = ssd130x_primary_plane_reset,
+	.atomic_duplicate_state = ssd130x_primary_plane_duplicate_state,
+	.atomic_destroy_state = ssd130x_primary_plane_destroy_state,
 	.destroy = drm_plane_cleanup,
 	DRM_GEM_SHADOW_PLANE_FUNCS,
 };
@@ -719,10 +807,6 @@ static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder,
 	if (ret)
 		goto power_off;
 
-	ret = ssd130x_buf_alloc(ssd130x);
-	if (ret)
-		goto power_off;
-
 	ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON);
 
 	backlight_enable(ssd130x->bl_dev);
@@ -744,8 +828,6 @@ static void ssd130x_encoder_helper_atomic_disable(struct drm_encoder *encoder,
 
 	ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF);
 
-	ssd130x_buf_free(ssd130x);
-
 	ssd130x_power_off(ssd130x);
 }
 
diff --git a/drivers/gpu/drm/solomon/ssd130x.h b/drivers/gpu/drm/solomon/ssd130x.h
index 161588b1cc4d..87968b3e7fb8 100644
--- a/drivers/gpu/drm/solomon/ssd130x.h
+++ b/drivers/gpu/drm/solomon/ssd130x.h
@@ -89,9 +89,6 @@ struct ssd130x_device {
 	u8 col_end;
 	u8 page_start;
 	u8 page_end;
-
-	u8 *buffer;
-	u8 *data_array;
 };
 
 extern const struct ssd130x_deviceinfo ssd130x_variants[];
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-21  7:09 [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback Javier Martinez Canillas
@ 2023-07-21  7:33 ` Maxime Ripard
  2023-07-25 19:21 ` Geert Uytterhoeven
  2023-07-27 12:52 ` Thomas Zimmermann
  2 siblings, 0 replies; 16+ messages in thread
From: Maxime Ripard @ 2023-07-21  7:33 UTC (permalink / raw)
  To: Javier Martinez Canillas
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, dri-devel,
	Geert Uytterhoeven, Maxime Ripard

On Fri, 21 Jul 2023 09:09:50 +0200, Javier Martinez Canillas wrote:
> Geert reports that the following NULL pointer dereference happens for him
> after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
> plane update"):
> 
>     [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
> 
> [ ... ]

Acked-by: Maxime Ripard <mripard@kernel.org>

Thanks!
Maxime


* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-21  7:09 [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback Javier Martinez Canillas
  2023-07-21  7:33 ` Maxime Ripard
@ 2023-07-25 19:21 ` Geert Uytterhoeven
  2023-07-25 20:49   ` Javier Martinez Canillas
  2023-07-26  9:59   ` Maxime Ripard
  2023-07-27 12:52 ` Thomas Zimmermann
  2 siblings, 2 replies; 16+ messages in thread
From: Geert Uytterhoeven @ 2023-07-25 19:21 UTC (permalink / raw)
  To: Javier Martinez Canillas
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, Maxime Ripard, dri-devel

Hi Javier,

Thanks for your patch!

On Fri, Jul 21, 2023 at 9:10 AM Javier Martinez Canillas
<javierm@redhat.com> wrote:
> Geert reports that the following NULL pointer dereference happens for him
> after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
> plane update"):
>
>     [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
>     ssd130x-i2c 0-003c: [drm] surface width(128), height(32), bpp(1)
>     and format(R1   little-endian (0x20203152))
>     Unable to handle kernel NULL pointer dereference at virtual address 00000000
>     Oops [#1]
>     CPU: 0 PID: 1 Comm: swapper Not tainted
>     6.5.0-rc1-orangecrab-02219-g0a529a1e4bf4 #565
>     epc : ssd130x_update_rect.isra.0+0x13c/0x340
>      ra : ssd130x_update_rect.isra.0+0x2bc/0x340
>     ...
>     status: 00000120 badaddr: 00000000 cause: 0000000f
>     [<c0303d90>] ssd130x_update_rect.isra.0+0x13c/0x340
>     [<c0304200>] ssd130x_primary_plane_helper_atomic_update+0x26c/0x284
>     [<c02f8d54>] drm_atomic_helper_commit_planes+0xfc/0x27c
>     [<c02f9314>] drm_atomic_helper_commit_tail+0x5c/0xb4
>     [<c02f94fc>] commit_tail+0x190/0x1b8
>     [<c02f99fc>] drm_atomic_helper_commit+0x194/0x1c0
>     [<c02c5d00>] drm_atomic_commit+0xa4/0xe4
>     [<c02cce40>] drm_client_modeset_commit_atomic+0x244/0x278
>     [<c02ccef0>] drm_client_modeset_commit_locked+0x7c/0x1bc
>     [<c02cd064>] drm_client_modeset_commit+0x34/0x64
>     [<c0301a78>] __drm_fb_helper_restore_fbdev_mode_unlocked+0xc4/0xe8
>     [<c0303424>] drm_fb_helper_set_par+0x38/0x58
>     [<c027c410>] fbcon_init+0x294/0x534
>     ...
>
> The problem is that fbcon calls fbcon_init() which triggers a DRM modeset
> and this leads to drm_atomic_helper_commit_planes() attempting to commit
> the atomic state for all planes, even the ones whose CRTC is not enabled.
>
> Since the primary plane buffer is allocated in the encoder .atomic_enable
> callback, this happens after that initial modeset commit and leads to the
> mentioned NULL pointer dereference.

Upon further investigation, the NULL pointer dereference does not
happen with the current and accepted code (only with my patches to
add support for R1), because of the "superfluous" NULL buffer check
in ssd130x_fb_blit_rect():
https://cgit.freedesktop.org/drm-misc/tree/drivers/gpu/drm/solomon/ssd130x.c#n592

So you probably want to drop the crash report...

> Fix this by having custom helpers to allocate, duplicate and destroy the
> plane state, that will take care of allocating and freeing these buffers.
>
> Do this instead of allocating in the encoder .atomic_enable and freeing
> in .atomic_disable callbacks, since allocations must not be done in an
> .atomic_enable handler: drivers are not allowed to fail after
> drm_atomic_helper_swap_state() has been called and the new atomic state
> has been stored into the current software state.
>
> Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")

... and the Fixes tag.

> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Suggested-by: Maxime Ripard <mripard@kernel.org>
> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>

Regardless, avoiding calls to ssd130x_fb_blit_rect() when the buffer
is not yet allocated is worthwhile.  And this patch achieves that.

Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>

Still, some comments below.

> --- a/drivers/gpu/drm/solomon/ssd130x.c
> +++ b/drivers/gpu/drm/solomon/ssd130x.c
> @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>  };
>  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>
> +struct ssd130x_plane_state {
> +       struct drm_plane_state base;
> +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> +       u8 *buffer;
> +       /* Buffer that contains R1 pixels to be written to the panel */
> +       u8 *data_array;

The second buffer actually contains pixels in hardware format.
For now that is a transposed buffer of R1 pixels, but that will change
when you will add support for greyscale displays.

So I'd write "hardware format" instead of R1 for both.

BTW, I still think data_array should be allocated during probing,
as it is related to the hardware, not to a plane.

> +};

> @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>
>         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>
> -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> -       if (!ssd130x->buffer)
> +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> +       if (!ssd130x_state->buffer)
>                 return -ENOMEM;
>
> -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> -       if (!ssd130x->data_array) {
> -               kfree(ssd130x->buffer);
> +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> +       if (!ssd130x_state->data_array) {
> +               kfree(ssd130x_state->buffer);

Should ssd130x_state->buffer be reset to NULL here?
I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
leading to a double free?
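
A minimal userspace sketch of the hazard described here, with plain
calloc()/free() and hypothetical names standing in for the driver's
kcalloc()/kfree() code: after the second allocation fails, the first
pointer must be reset to NULL so that a later destroy path's free
degenerates to a safe no-op instead of a double free.

```c
#include <assert.h>
#include <stdlib.h>

struct plane_bufs {
	unsigned char *buffer;
	unsigned char *data_array;
};

/* Mirrors the alloc pattern above, with the NULL reset Geert asks for */
static int bufs_alloc(struct plane_bufs *s, size_t buf_sz, size_t array_sz)
{
	s->buffer = calloc(1, buf_sz);
	if (!s->buffer)
		return -1;

	s->data_array = calloc(1, array_sz);
	if (!s->data_array) {
		free(s->buffer);
		s->buffer = NULL;	/* avoid a double free in destroy */
		return -1;
	}

	return 0;
}

/* Stand-in for the .atomic_destroy_state path freeing the buffers */
static void bufs_destroy(struct plane_bufs *s)
{
	free(s->data_array);	/* free(NULL) is a safe no-op */
	free(s->buffer);
	s->data_array = NULL;
	s->buffer = NULL;
}
```

Without the reset, a failed bufs_alloc() followed by bufs_destroy()
would free s->buffer twice.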

>                 return -ENOMEM;
>         }
>
>         return 0;
>  }

> @@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
>                 .y2 = ssd130x->height,
>         };
>
> -       ssd130x_update_rect(ssd130x, &fullscreen);
> +       ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
>  }
>
> -static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
> +static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
> +                               const struct iosys_map *vmap,
>                                 struct drm_rect *rect)
>  {
> +       struct drm_framebuffer *fb = state->fb;
>         struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
> +       struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>         unsigned int page_height = ssd130x->device_info->page_height;
>         struct iosys_map dst;
>         unsigned int dst_pitch;
>         int ret = 0;
> -       u8 *buf = ssd130x->buffer;
> +       u8 *buf = ssd130x_state->buffer;
>
>         if (!buf)

This check should no longer be needed (and prevented you from seeing
the issue before).

>                 return 0;
> @@ -607,11 +627,27 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
>
>         drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
>
> -       ssd130x_update_rect(ssd130x, rect);
> +       ssd130x_update_rect(ssd130x, ssd130x_state, rect);
>
>         return ret;
>  }
>
> +static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
> +                                                    struct drm_atomic_state *state)
> +{
> +       struct drm_device *drm = plane->dev;
> +       struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
> +       struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
> +       struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
> +       int ret;
> +
> +       ret = drm_plane_helper_atomic_check(plane, state);
> +       if (ret)
> +               return ret;
> +
> +       return ssd130x_buf_alloc(ssd130x, ssd130x_state);

I think the code would become easier to read by inlining
ssd130x_buf_alloc() here.

> +}
> +

> +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> +{
> +       struct ssd130x_plane_state *old_ssd130x_state;
> +       struct ssd130x_plane_state *ssd130x_state;
> +
> +       if (WARN_ON(!plane->state))
> +               return NULL;
> +
> +       old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> +       ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);

I know this is the standard pattern, but the "dup" part is a bit
silly here:
  - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
  - The base part is copied again in
    __drm_atomic_helper_plane_duplicate_state).
So (for now) you might as well just call kzalloc() ;-)
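
The redundancy can be illustrated with a simplified userspace model
(hypothetical stand-ins for the DRM types and for
__drm_atomic_helper_plane_duplicate_state(), not the real API): a
zeroing allocation already leaves the subclass members NULL, and the
base is copied by the helper, so nothing from a kmemdup() copy would
survive.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct base_state { int mode_id; };	/* stand-in for drm_plane_state */

struct plane_state {
	struct base_state base;
	unsigned char *buffer;		/* allocated later, in .atomic_check */
	unsigned char *data_array;
};

/* Stand-in for __drm_atomic_helper_plane_duplicate_state() */
static void helper_duplicate_base(const struct base_state *old,
				  struct base_state *dst)
{
	memcpy(dst, old, sizeof(*dst));
}

static struct plane_state *duplicate_state(const struct plane_state *old)
{
	/* calloc() plays kzalloc()'s role: the buffer pointers start NULL */
	struct plane_state *copy = calloc(1, sizeof(*copy));

	if (!copy)
		return NULL;

	/* Only the base part needs copying from the old state */
	helper_duplicate_base(&old->base, &copy->base);

	return copy;
}
```

The duplicated state carries over the base but starts with fresh NULL
buffers, exactly what the explicit NULL assignments achieve after
kmemdup().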

> +       if (!ssd130x_state)
> +               return NULL;
> +
> +       /* The buffers are not duplicated and are allocated in .atomic_check */
> +       ssd130x_state->buffer = NULL;
> +       ssd130x_state->data_array = NULL;
> +
> +       __drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);
> +
> +       return &ssd130x_state->base;
> +}
> +
> +static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
> +                                               struct drm_plane_state *state)
> +{
> +       struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
> +
> +       ssd130x_buf_free(ssd130x_state);

I think the code would become easier to read by inlining
ssd130x_buf_free() here.

> +
> +       __drm_atomic_helper_plane_destroy_state(&ssd130x_state->base);
> +
> +       kfree(ssd130x_state);
> +}

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-25 19:21 ` Geert Uytterhoeven
@ 2023-07-25 20:49   ` Javier Martinez Canillas
  2023-07-26  9:59   ` Maxime Ripard
  1 sibling, 0 replies; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-25 20:49 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, Maxime Ripard, dri-devel

Geert Uytterhoeven <geert@linux-m68k.org> writes:

Hello Geert,

> Hi Javier,
>
> Thanks for your patch!
>

Thanks a lot for your feedback.

> On Fri, Jul 21, 2023 at 9:10 AM Javier Martinez Canillas
> <javierm@redhat.com> wrote:
>> Geert reports that the following NULL pointer dereference happens for him

[...]

>
>> Since the primary plane buffer is allocated in the encoder .atomic_enable
>> callback, this happens after that initial modeset commit and leads to the
>> mentioned NULL pointer dereference.
>
> Upon further investigation, the NULL pointer dereference does not
> happen with the current and accepted code (only with my patches to
> add support for R1), because of the "superfluous" NULL buffer check
> in ssd130x_fb_blit_rect():
> https://cgit.freedesktop.org/drm-misc/tree/drivers/gpu/drm/solomon/ssd130x.c#n592
>
> So you probably want to drop the crash report...
>
>> Fix this by having custom helpers to allocate, duplicate and destroy the
>> plane state, that will take care of allocating and freeing these buffers.
>>
>> Do this instead of allocating in the encoder .atomic_enable and freeing
>> in .atomic_disable callbacks, since allocations must not be done in an
>> .atomic_enable handler: drivers are not allowed to fail after
>> drm_atomic_helper_swap_state() has been called and the new atomic state
>> has been stored into the current software state.
>>
>> Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")
>
> ... and the Fixes tag.
>

Ah, I see. Thanks a lot for that information.

>> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>

Will drop your Reported-by tag too then.

>> Suggested-by: Maxime Ripard <mripard@kernel.org>
>> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
>
> Regardless, avoiding calls to ssd130x_fb_blit_rect() when the buffer
> is not yet allocated is worthwhile.  And this patch achieves that.
>

Agreed, and as Maxime mentioned, it is the correct thing to do.

> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
>

Thanks for testing!

> Still, some comments below.
>
>> --- a/drivers/gpu/drm/solomon/ssd130x.c
>> +++ b/drivers/gpu/drm/solomon/ssd130x.c
>> @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>>  };
>>  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>>
>> +struct ssd130x_plane_state {
>> +       struct drm_plane_state base;
>> +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
>> +       u8 *buffer;
>> +       /* Buffer that contains R1 pixels to be written to the panel */
>> +       u8 *data_array;
>
> The second buffer actually contains pixels in hardware format.
> For now that is a transposed buffer of R1 pixels, but that will change
> when you will add support for greyscale displays.
>
> So I'd write "hardware format" instead of R1 for both.
>

Agreed.

> BTW, I still think data_array should be allocated during probing,
> as it is related to the hardware, not to a plane.
>

Ack. I'll do that as a separate patch on v5.

>> +};
>
>> @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>>
>>         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>>
>> -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> -       if (!ssd130x->buffer)
>> +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> +       if (!ssd130x_state->buffer)
>>                 return -ENOMEM;
>>
>> -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> -       if (!ssd130x->data_array) {
>> -               kfree(ssd130x->buffer);
>> +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> +       if (!ssd130x_state->data_array) {
>> +               kfree(ssd130x_state->buffer);
>
> Should ssd130x_state->buffer be reset to NULL here?
> I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
> leading to a double free?
>

Yeah. I'll add it.

>>                 return -ENOMEM;
>>         }
>>
>>         return 0;
>>  }
>
>> @@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
>>                 .y2 = ssd130x->height,
>>         };
>>
>> -       ssd130x_update_rect(ssd130x, &fullscreen);
>> +       ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
>>  }
>>
>> -static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
>> +static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
>> +                               const struct iosys_map *vmap,
>>                                 struct drm_rect *rect)
>>  {
>> +       struct drm_framebuffer *fb = state->fb;
>>         struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
>> +       struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>>         unsigned int page_height = ssd130x->device_info->page_height;
>>         struct iosys_map dst;
>>         unsigned int dst_pitch;
>>         int ret = 0;
>> -       u8 *buf = ssd130x->buffer;
>> +       u8 *buf = ssd130x_state->buffer;
>>
>>         if (!buf)
>
> This check should no longer be needed (and prevented you from seeing
> the issue before).
>

Ack.

>>                 return 0;

[...]

>> +static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
>> +                                                    struct drm_atomic_state *state)
>> +{
>> +       struct drm_device *drm = plane->dev;
>> +       struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
>> +       struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
>> +       struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
>> +       int ret;
>> +
>> +       ret = drm_plane_helper_atomic_check(plane, state);
>> +       if (ret)
>> +               return ret;
>> +
>> +       return ssd130x_buf_alloc(ssd130x, ssd130x_state);
>
> I think the code would become easier to read by inlining
> ssd130x_buf_alloc() here.
>

Agreed. I'll do that.

>> +}
>> +
>
>> +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
>> +{
>> +       struct ssd130x_plane_state *old_ssd130x_state;
>> +       struct ssd130x_plane_state *ssd130x_state;
>> +
>> +       if (WARN_ON(!plane->state))
>> +               return NULL;
>> +
>> +       old_ssd130x_state = to_ssd130x_plane_state(plane->state);
>> +       ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
>
> I know this is the standard pattern, but the "dup" part is a bit
> silly here:
>   - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
>   - The base part is copied again in
>     __drm_atomic_helper_plane_duplicate_state).
> So (for now) you might as well just call kzalloc() ;-)
>

Indeed. You are correct.

>> +       if (!ssd130x_state)
>> +               return NULL;
>> +
>> +       /* The buffers are not duplicated and are allocated in .atomic_check */
>> +       ssd130x_state->buffer = NULL;
>> +       ssd130x_state->data_array = NULL;
>> +
>> +       __drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);
>> +
>> +       return &ssd130x_state->base;
>> +}
>> +
>> +static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
>> +                                               struct drm_plane_state *state)
>> +{
>> +       struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>> +
>> +       ssd130x_buf_free(ssd130x_state);
>
> I think the code would become easier to read by inlining
> ssd130x_buf_free() here.
>

Ack.

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat



* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-25 19:21 ` Geert Uytterhoeven
  2023-07-25 20:49   ` Javier Martinez Canillas
@ 2023-07-26  9:59   ` Maxime Ripard
  2023-07-26 10:39     ` Javier Martinez Canillas
  2023-07-26 11:52     ` Geert Uytterhoeven
  1 sibling, 2 replies; 16+ messages in thread
From: Maxime Ripard @ 2023-07-26  9:59 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Thomas Zimmermann, Daniel Vetter, Javier Martinez Canillas,
	dri-devel, linux-kernel

On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
> > --- a/drivers/gpu/drm/solomon/ssd130x.c
> > +++ b/drivers/gpu/drm/solomon/ssd130x.c
> > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
> >  };
> >  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
> >
> > +struct ssd130x_plane_state {
> > +       struct drm_plane_state base;
> > +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> > +       u8 *buffer;
> > +       /* Buffer that contains R1 pixels to be written to the panel */
> > +       u8 *data_array;
> 
> The second buffer actually contains pixels in hardware format.
> For now that is a transposed buffer of R1 pixels, but that will change
> when you will add support for greyscale displays.
> 
> So I'd write "hardware format" instead of R1 for both.
>
> BTW, I still think data_array should be allocated during probing,
> as it is related to the hardware, not to a plane.

I somewhat disagree.

If I understood right during our discussion with Javier, the buffer size
derives from the mode size (height and width).

In KMS, the mode is tied to the KMS state, and thus you can expect the
mode to change every state commit. So the more logical thing to do is to
tie the buffer size (and thus the buffer pointer) to the state since
it's only valid for that particular state for all we know.

Of course, our case allows us to simplify things since it's a fixed
mode, but one of Javier's goals with this driver was to provide a good
example we can refer people to, so I think it's worth keeping.

> > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
> >
> >         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
> >
> > -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> > -       if (!ssd130x->buffer)
> > +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> > +       if (!ssd130x_state->buffer)
> >                 return -ENOMEM;
> >
> > -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> > -       if (!ssd130x->data_array) {
> > -               kfree(ssd130x->buffer);
> > +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> > +       if (!ssd130x_state->data_array) {
> > +               kfree(ssd130x_state->buffer);
> 
> Should ssd130x_state->buffer be reset to NULL here?
> I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
> leading to a double free?

That's a good question.

I never really thought of that, but yeah, AFAIK atomic_destroy_state()
will be called when atomic_check() fails.

Which means that it's probably broken in a lot of drivers.

Also, Javier pointed me to a discussion you had on IRC about using devm
allocation here. We can't really do that. KMS devices are only freed
once the last userspace application closes its fd to the device file, so
you have an unbounded window during which the driver is still callable
by userspace (and thus can still trigger an atomic commit) but the
buffer would have been freed for a while.

The driver could use a bit more work on that area (by adding
drm_dev_enter/drm_dev_exit) but it still creates unnecessary risks to
use devm there.

> > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> > +{
> > +       struct ssd130x_plane_state *old_ssd130x_state;
> > +       struct ssd130x_plane_state *ssd130x_state;
> > +
> > +       if (WARN_ON(!plane->state))
> > +               return NULL;
> > +
> > +       old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> > +       ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
> 
> I know this is the standard pattern, but the "dup" part is a bit
> silly here:
>   - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
>   - The base part is copied again in
>     __drm_atomic_helper_plane_duplicate_state).
> So (for now) you might as well just call kzalloc() ;-)

Still if we ever add a field in the state structure, it will be
surprising that it's not copied over.

The code is there and looks good to me, so I'd rather keep it.


* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26  9:59   ` Maxime Ripard
@ 2023-07-26 10:39     ` Javier Martinez Canillas
  2023-07-26 11:52     ` Geert Uytterhoeven
  1 sibling, 0 replies; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-26 10:39 UTC (permalink / raw)
  To: Maxime Ripard, Geert Uytterhoeven
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, dri-devel

Maxime Ripard <mripard@kernel.org> writes:

Hello Maxime,

> On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
>> > --- a/drivers/gpu/drm/solomon/ssd130x.c
>> > +++ b/drivers/gpu/drm/solomon/ssd130x.c
>> > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>> >  };
>> >  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>> >
>> > +struct ssd130x_plane_state {
>> > +       struct drm_plane_state base;
>> > +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
>> > +       u8 *buffer;
>> > +       /* Buffer that contains R1 pixels to be written to the panel */
>> > +       u8 *data_array;
>> 
>> The second buffer actually contains pixels in hardware format.
>> For now that is a transposed buffer of R1 pixels, but that will change
>> when you will add support for greyscale displays.
>> 
>> So I'd write "hardware format" instead of R1 for both.
>>
>> BTW, I still think data_array should be allocated during probing,
>> as it is related to the hardware, not to a plane.
>
> I somewhat disagree.
>
> If I understood right during our discussion with Javier, the buffer size
> derives from the mode size (height and width).
>
> In KMS, the mode is tied to the KMS state, and thus you can expect the
> mode to change every state commit. So the more logical thing to do is to
> tie the buffer size (and thus the buffer pointer) to the state since
> it's only valid for that particular state for all we know.
>
> Of course, our case allows us to simplify things since it's a fixed
> mode, but one of Javier's goals with this driver was to provide a good
> example we can refer people to, so I think it's worth keeping.
>

Yes, that's certainly one of my goals. So I'll just keep it as-is then.

>> > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>> >
>> >         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>> >
>> > -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> > -       if (!ssd130x->buffer)
>> > +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> > +       if (!ssd130x_state->buffer)
>> >                 return -ENOMEM;
>> >
>> > -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> > -       if (!ssd130x->data_array) {
>> > -               kfree(ssd130x->buffer);
>> > +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> > +       if (!ssd130x_state->data_array) {
>> > +               kfree(ssd130x_state->buffer);
>> 
>> Should ssd130x_state->buffer be reset to NULL here?
>> I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
>> leading to a double free?
>
> That's a good question.
>
> I never really thought of that, but yeah, AFAIK atomic_destroy_state()
> will be called when atomic_check() fails.
>

Interesting. I didn't know that. I'll set it to NULL and add a comment.
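To illustrate why the reset matters, here is a minimal userspace model (hypothetical names, not the actual driver code): kfree(NULL), like free(NULL), is a no-op, so clearing the pointer on the error path turns the would-be double free in .atomic_destroy_state() into nothing.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal model of the plane state discussed above (hypothetical names). */
struct model_plane_state {
	unsigned char *buffer;
	unsigned char *data_array;
};

/* Models ssd130x_buf_alloc() with the fix: when the second allocation
 * fails, free the first buffer AND reset the pointer. */
static int model_buf_alloc(struct model_plane_state *state, size_t n,
			   int fail_second)
{
	state->buffer = calloc(1, n);
	if (!state->buffer)
		return -12;		/* -ENOMEM */

	state->data_array = fail_second ? NULL : calloc(1, n);
	if (!state->data_array) {
		free(state->buffer);
		state->buffer = NULL;	/* the reset under discussion */
		return -12;
	}

	return 0;
}

/* Models .atomic_destroy_state(), which runs even after a failed
 * .atomic_check: free(NULL) is a no-op, just like kfree(NULL). */
static void model_destroy_state(struct model_plane_state *state)
{
	free(state->buffer);
	free(state->data_array);
	state->buffer = NULL;
	state->data_array = NULL;
}
```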

> Which means that it's probably broken in a lot of drivers.
>
> Also, Javier pointed me to a discussion you had on IRC about using devm
> allocation here. We can't really do that. KMS devices are only freed
> once the last userspace application closes its fd to the device file, so
> you have an unbounded window during which the driver is still callable
> by userspace (and thus can still trigger an atomic commit) but the
> buffer would have been freed for a while.
>
> The driver could use a bit more work on that area (by adding
> drm_dev_enter/drm_dev_exit) but it still creates unnecessary risks to
> use devm there.
>

Yes, but that discussion is not relevant anymore anyway if we keep the
.data_array allocation in the plane's .atomic_check handler.

>> > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
>> > +{
>> > +       struct ssd130x_plane_state *old_ssd130x_state;
>> > +       struct ssd130x_plane_state *ssd130x_state;
>> > +
>> > +       if (WARN_ON(!plane->state))
>> > +               return NULL;
>> > +
>> > +       old_ssd130x_state = to_ssd130x_plane_state(plane->state);
>> > +       ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
>> 
>> I know this is the standard pattern, but the "dup" part is a bit
>> silly here:
>>   - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
>>   - The base part is copied again in
>>     __drm_atomic_helper_plane_duplicate_state).
>> So (for now) you might as well just call kzalloc() ;-)
>
> Still if we ever add a field in the state structure, it will be
> surprising that it's not copied over.
>
> The code is there and looks good to me, so I'd rather keep it.

Yes, it's unlikely that this state structure will get other fields, but
since it is the standard pattern, we can keep it for others to use as a
reference.
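As an aside, the pattern being kept can be modeled like this (userspace sketch with hypothetical names): duplicate the whole struct so any field added later is carried over automatically, then reset the members that must not be shared between states.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy state modeling the duplicate pattern (hypothetical names). */
struct dup_state {
	int base_field;		/* stands in for the drm_plane_state base */
	unsigned char *buffer;	/* per-state; must NOT be shared */
	int future_field;	/* a field someone might add later */
};

/* Models .atomic_duplicate_state(): copy the whole struct (kmemdup()),
 * so a future field is duplicated for free, then reset the per-state
 * pointers that are allocated later, in .atomic_check. */
static struct dup_state *dup_duplicate(const struct dup_state *old)
{
	struct dup_state *new_state = malloc(sizeof(*new_state));

	if (!new_state)
		return NULL;

	memcpy(new_state, old, sizeof(*new_state));	/* kmemdup() */
	new_state->buffer = NULL;

	return new_state;
}
```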

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat



* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26  9:59   ` Maxime Ripard
  2023-07-26 10:39     ` Javier Martinez Canillas
@ 2023-07-26 11:52     ` Geert Uytterhoeven
  2023-07-26 12:22       ` Javier Martinez Canillas
  2023-07-26 13:49       ` Maxime Ripard
  1 sibling, 2 replies; 16+ messages in thread
From: Geert Uytterhoeven @ 2023-07-26 11:52 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Thomas Zimmermann, Daniel Vetter, Javier Martinez Canillas,
	dri-devel, linux-kernel

Hi Maxime,

On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote:
> On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
> > > --- a/drivers/gpu/drm/solomon/ssd130x.c
> > > +++ b/drivers/gpu/drm/solomon/ssd130x.c
> > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
> > >  };
> > >  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
> > >
> > > +struct ssd130x_plane_state {
> > > +       struct drm_plane_state base;
> > > +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> > > +       u8 *buffer;
> > > +       /* Buffer that contains R1 pixels to be written to the panel */
> > > +       u8 *data_array;
> >
> > The second buffer actually contains pixels in hardware format.
> > For now that is a transposed buffer of R1 pixels, but that will change
> > when you will add support for greyscale displays.
> >
> > So I'd write "hardware format" instead of R1 for both.
> >
> > BTW, I still think data_array should be allocated during probing,
> > as it is related to the hardware, not to a plane.
>
> I somewhat disagree.
>
> If I understood right during our discussion with Javier, the buffer size
> derives from the mode size (height and width).
>
> In KMS, the mode is tied to the KMS state, and thus you can expect the
> mode to change every state commit. So the more logical thing to do is to
> tie the buffer size (and thus the buffer pointer) to the state since
> it's only valid for that particular state for all we know.
>
> Of course, our case allows us to simplify things since it's a fixed
> mode, but one of Javier's goals with this driver was to provide a good
> example we can refer people to, so I think it's worth keeping.

The second buffer (containing the hardware format) has a size that
depends on the full screen size, not the current mode (I believe that's
also the size of the frame buffer backing the plane?).  So its size is
fixed.

Given the allocations are now done based on plane state, I think the
first buffer should be sized according to the frame buffer backing
the plane? Currently it uses the full screen size, too (cfr. below).
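Concretely, for the fixed 128x32 R1 panel from the oops in the cover text, the two sizes work out as follows (a sketch of the arithmetic only, using the same round-up that drm_format_info_min_pitch() applies for a 1 bpp format; not driver code):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Intermediate buffer: R1 is one bit per pixel, so the per-line pitch
 * is width/8 rounded up, times the number of lines. */
static unsigned int r1_buffer_size(unsigned int width, unsigned int height)
{
	unsigned int pitch = DIV_ROUND_UP(width, 8);

	return pitch * height;
}

/* Hardware buffer: the controller groups 8 rows into a "page", and the
 * data array stores one byte per column per page. */
static unsigned int hw_data_array_size(unsigned int width, unsigned int height)
{
	unsigned int pages = DIV_ROUND_UP(height, 8);

	return width * pages;
}
```

For 128x32 both happen to be 512 bytes (16-byte pitch times 32 lines, and 128 columns times 4 pages), but the two formulas differ as soon as the width is not a multiple of 8.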

> > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
> > >
> > >         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
> > >
> > > -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> > > -       if (!ssd130x->buffer)
> > > +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);

Based on full screen width and height.

> > > +       if (!ssd130x_state->buffer)
> > >                 return -ENOMEM;
> > >
> > > -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> > > -       if (!ssd130x->data_array) {
> > > -               kfree(ssd130x->buffer);
> > > +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);

Based on full screen width and height (hardware page size).

> > > +       if (!ssd130x_state->data_array) {
> > > +               kfree(ssd130x_state->buffer);
> >
> > Should ssd130x_state->buffer be reset to NULL here?
> > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
> > leading to a double free?
>
> That's a good question.
>
> I never really thought of that, but yeah, AFAIK atomic_destroy_state()
> will be called when atomic_check() fails.
>
> Which means that it's probably broken in a lot of drivers.
>
> Also, Javier pointed me to a discussion you had on IRC about using devm
> allocation here. We can't really do that. KMS devices are only freed
> once the last userspace application closes its fd to the device file, so
> you have an unbounded window during which the driver is still callable
> by userspace (and thus can still trigger an atomic commit) but the
> buffer would have been freed for a while.

It should still be safe for (at least) the data_array buffer. That
buffer is only used to store pixels in hardware format, and immediately
send them to the hardware.  If this can be called that late, it will
fail horribly, as you can no longer talk to the hardware at that point
(as ssd130x is an i2c driver, it might actually work; but a DRM driver
 that calls devm_platform_ioremap_resource() will crash when writing
 to its MMIO registers)?!?

> > > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> > > +{
> > > +       struct ssd130x_plane_state *old_ssd130x_state;
> > > +       struct ssd130x_plane_state *ssd130x_state;
> > > +
> > > +       if (WARN_ON(!plane->state))
> > > +               return NULL;
> > > +
> > > +       old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> > > +       ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
> >
> > I know this is the standard pattern, but the "dup" part is a bit
> > silly here:
> >   - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
> >   - The base part is copied again in
> >     __drm_atomic_helper_plane_duplicate_state).
> > So (for now) you might as well just call kzalloc() ;-)
>
> Still if we ever add a field in the state structure, it will be
> surprising that it's not copied over.
>
> The code is there and looks good to me, so I'd rather keep it.

Fair enough, let's keep it.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 11:52     ` Geert Uytterhoeven
@ 2023-07-26 12:22       ` Javier Martinez Canillas
  2023-07-26 12:33         ` Geert Uytterhoeven
  2023-07-26 13:49       ` Maxime Ripard
  1 sibling, 1 reply; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-26 12:22 UTC (permalink / raw)
  To: Geert Uytterhoeven, Maxime Ripard
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, dri-devel

Geert Uytterhoeven <geert@linux-m68k.org> writes:

Hello Geert,

> Hi Maxime,
>
> On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote:
>> On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
>> > > --- a/drivers/gpu/drm/solomon/ssd130x.c
>> > > +++ b/drivers/gpu/drm/solomon/ssd130x.c
>> > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>> > >  };
>> > >  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>> > >
>> > > +struct ssd130x_plane_state {
>> > > +       struct drm_plane_state base;
>> > > +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
>> > > +       u8 *buffer;
>> > > +       /* Buffer that contains R1 pixels to be written to the panel */
>> > > +       u8 *data_array;
>> >
>> > The second buffer actually contains pixels in hardware format.
>> > For now that is a transposed buffer of R1 pixels, but that will change
>> > when you will add support for greyscale displays.
>> >
>> > So I'd write "hardware format" instead of R1 for both.
>> >
>> > BTW, I still think data_array should be allocated during probing,
>> > as it is related to the hardware, not to a plane.
>>
>> I somewhat disagree.
>>
>> If I understood right during our discussion with Javier, the buffer size
>> derives from the mode size (height and width).
>>
>> In KMS, the mode is tied to the KMS state, and thus you can expect the
>> mode to change every state commit. So the more logical thing to do is to
>> tie the buffer size (and thus the buffer pointer) to the state since
>> it's only valid for that particular state for all we know.
>>
>> Of course, our case allows us to simplify things since it's a fixed
>> mode, but one of Javier's goals with this driver was to provide a good
>> example we can refer people to, so I think it's worth keeping.
>
> The second buffer (containing the hardware format) has a size that
> depends on the full screen size, not the current mode (I believe that's
> also the size of the frame buffer backing the plane?).  So its size is
> fixed.
>

Yes, it is fixed. But Maxime's point is that this is a characteristic of
this particular device, and even though the display resolution can't be
changed, the correct thing to do is to keep all state related to the mode
(even the buffer used to store the hardware pixels that are written to
the display).

> Given the allocations are now done based on plane state, I think the
> first buffer should be sized according to the frame buffer backing
> the plane? Currently it uses the full screen size, too (cfr. below).
>

But can the mode even be changed, if ssd130x_connector_helper_get_modes()
just adds a single display mode with mode->hdisplay == ssd130x->width and
mode->vdisplay == ssd130x->height?

>> > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>> > >
>> > >         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>> > >
>> > > -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> > > -       if (!ssd130x->buffer)
>> > > +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>
> Based on full screen width and height.
>

Do you think that using ssd130x->mode->vdisplay instead would be clearer here?

>> > > +       if (!ssd130x_state->buffer)
>> > >                 return -ENOMEM;
>> > >
>> > > -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> > > -       if (!ssd130x->data_array) {
>> > > -               kfree(ssd130x->buffer);
>> > > +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>
> Based on full screen width and height (hardware page size).
>

Yes, this depends on the panel attached to the OLED controller, and that
resolution is fixed and taken from the Device Tree (or ACPI table).

>> > > +       if (!ssd130x_state->data_array) {
>> > > +               kfree(ssd130x_state->buffer);
>> >
>> > Should ssd130x_state->buffer be reset to NULL here?
>> > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
>> > leading to a double free?
>>
>> That's a good question.
>>
>> I never really thought of that, but yeah, AFAIK atomic_destroy_state()
>> will be called when atomic_check() fails.
>>
>> Which means that it's probably broken in a lot of drivers.
>>
>> Also, Javier pointed me to a discussion you had on IRC about using devm
>> allocation here. We can't really do that. KMS devices are only freed
>> once the last userspace application closes its fd to the device file, so
>> you have an unbounded window during which the driver is still callable
>> by userspace (and thus can still trigger an atomic commit) but the
>> buffer would have been freed for a while.
>
> It should still be safe for (at least) the data_array buffer. That
> buffer is only used to store pixels in hardware format, and immediately
> send them to the hardware.  If this can be called that late, it will
> fail horribly, as you can no longer talk to the hardware at that point
> (as ssd130x is an i2c driver, it might actually work; but a DRM driver
>  that calls devm_platform_ioremap_resource() will crash when writing
>  to its MMIO registers)?!?
>

At the very least the SPI driver will fail, since the GPIO that is used to
toggle the D/C pin is allocated with devm_gpiod_get_optional(), as are
the regulator, backlight device, etc.

But in any case, as mentioned it is only relevant if the data_array buffer
is allocated at probe time, and from Maxime's explanation it is more
correct to do it in the .atomic_check handler.

>> > > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
>> > > +{
>> > > +       struct ssd130x_plane_state *old_ssd130x_state;
>> > > +       struct ssd130x_plane_state *ssd130x_state;
>> > > +
>> > > +       if (WARN_ON(!plane->state))
>> > > +               return NULL;
>> > > +
>> > > +       old_ssd130x_state = to_ssd130x_plane_state(plane->state);
>> > > +       ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
>> >
>> > I know this is the standard pattern, but the "dup" part is a bit
>> > silly here:
>> >   - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
>> >   - The base part is copied again in
>> >     __drm_atomic_helper_plane_duplicate_state).
>> > So (for now) you might as well just call kzalloc() ;-)
>>
>> Still if we ever add a field in the state structure, it will be
>> surprising that it's not copied over.
>>
>> The code is there and looks good to me, so I'd rather keep it.
>
> Fair enough, let's keep it.
>

Yeah. At the very least it helps, since it will be consistent with other drivers.

> Gr{oetje,eeting}s,
>
>                         Geert

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat



* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 12:22       ` Javier Martinez Canillas
@ 2023-07-26 12:33         ` Geert Uytterhoeven
  2023-07-26 13:54           ` Maxime Ripard
  0 siblings, 1 reply; 16+ messages in thread
From: Geert Uytterhoeven @ 2023-07-26 12:33 UTC (permalink / raw)
  To: Javier Martinez Canillas
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, Maxime Ripard, dri-devel

Hi Javier,

On Wed, Jul 26, 2023 at 2:22 PM Javier Martinez Canillas
<javierm@redhat.com> wrote:
> Geert Uytterhoeven <geert@linux-m68k.org> writes:
> > On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote:
> >> On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
> >> > > --- a/drivers/gpu/drm/solomon/ssd130x.c
> >> > > +++ b/drivers/gpu/drm/solomon/ssd130x.c
> >> > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
> >> > >  };
> >> > >  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
> >> > >
> >> > > +struct ssd130x_plane_state {
> >> > > +       struct drm_plane_state base;
> >> > > +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> >> > > +       u8 *buffer;
> >> > > +       /* Buffer that contains R1 pixels to be written to the panel */
> >> > > +       u8 *data_array;
> >> >
> >> > The second buffer actually contains pixels in hardware format.
> >> > For now that is a transposed buffer of R1 pixels, but that will change
> >> > when you will add support for greyscale displays.
> >> >
> >> > So I'd write "hardware format" instead of R1 for both.
> >> >
> >> > BTW, I still think data_array should be allocated during probing,
> >> > as it is related to the hardware, not to a plane.
> >>
> >> I somewhat disagree.
> >>
> >> If I understood right during our discussion with Javier, the buffer size
> >> derives from the mode size (height and width).
> >>
> >> In KMS, the mode is tied to the KMS state, and thus you can expect the
> >> mode to change every state commit. So the more logical thing to do is to
> >> tie the buffer size (and thus the buffer pointer) to the state since
> >> it's only valid for that particular state for all we know.
> >>
> >> Of course, our case allows us to simplify things since it's a fixed
> >> mode, but one of Javier's goals with this driver was to provide a good
> >> example we can refer people to, so I think it's worth keeping.
> >
> > The second buffer (containing the hardware format) has a size that
> > depends on the full screen size, not the current mode (I believe that's
> > also the size of the frame buffer backing the plane?).  So its size is
> > fixed.
> >
>
> Yes, it is fixed. But Maxime's point is that this is a characteristic of
> this particular device, and even though the display resolution can't be
> changed, the correct thing to do is to keep all state related to the mode
> (even the buffer used to store the hardware pixels that are written to
> the display).
>
> > Given the allocations are now done based on plane state, I think the
> > first buffer should be sized according to the frame buffer backing
> > the plane? Currently it uses the full screen size, too (cfr. below).
> >
>
> But can the mode even be changed, if ssd130x_connector_helper_get_modes()
> just adds a single display mode with mode->hdisplay == ssd130x->width and
> mode->vdisplay == ssd130x->height?

No, the mode cannot be changed.

At first, I thought you could still create a smaller frame buffer,
and attach that to the (single, thus primary) plane, but it turns out
I was wrong[*], so you can ignore that.

[*] ssd130x_primary_plane_helper_atomic_check() calls
    drm_plane_helper_atomic_check(), which calls
    drm_atomic_helper_check_plane_state() with can_position = false.
    As the position of planes is actually a software thing on ssd130x,
    positioning support could be added later, though...

> >> Also, Javier pointed me to a discussion you had on IRC about using devm
> >> allocation here. We can't really do that. KMS devices are only freed
> >> once the last userspace application closes its fd to the device file, so
> >> you have an unbounded window during which the driver is still callable
> >> by userspace (and thus can still trigger an atomic commit) but the
> >> buffer would have been freed for a while.
> >
> > It should still be safe for (at least) the data_array buffer. That
> > buffer is only used to store pixels in hardware format, and immediately
> > send them to the hardware.  If this can be called that late, it will
> > fail horribly, as you can no longer talk to the hardware at that point
> > (as ssd130x is an i2c driver, it might actually work; but a DRM driver
> >  that calls devm_platform_ioremap_resource() will crash when writing
> >  to its MMIO registers)?!?
>
> At the very least the SPI driver will fail, since the GPIO that is used to
> toggle the D/C pin is allocated with devm_gpiod_get_optional(), as are
> the regulator, backlight device, etc.
>
> But in any case, as mentioned it is only relevant if the data_array buffer
> is allocated at probe time, and from Maxime's explanation it is more
> correct to do it in the .atomic_check handler.

You need (at least) data_array for clear_screen, too, which is called
from .atomic_disable().

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 11:52     ` Geert Uytterhoeven
  2023-07-26 12:22       ` Javier Martinez Canillas
@ 2023-07-26 13:49       ` Maxime Ripard
  2023-07-26 14:16         ` Javier Martinez Canillas
  1 sibling, 1 reply; 16+ messages in thread
From: Maxime Ripard @ 2023-07-26 13:49 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Thomas Zimmermann, Daniel Vetter, Javier Martinez Canillas,
	dri-devel, linux-kernel

Hi,

On Wed, Jul 26, 2023 at 01:52:37PM +0200, Geert Uytterhoeven wrote:
> On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote:
> > On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
> > > > --- a/drivers/gpu/drm/solomon/ssd130x.c
> > > > +++ b/drivers/gpu/drm/solomon/ssd130x.c
> > > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
> > > >  };
> > > >  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
> > > >
> > > > +struct ssd130x_plane_state {
> > > > +       struct drm_plane_state base;
> > > > +       /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> > > > +       u8 *buffer;
> > > > +       /* Buffer that contains R1 pixels to be written to the panel */
> > > > +       u8 *data_array;
> > >
> > > The second buffer actually contains pixels in hardware format.
> > > For now that is a transposed buffer of R1 pixels, but that will change
> > > when you will add support for greyscale displays.
> > >
> > > So I'd write "hardware format" instead of R1 for both.
> > >
> > > BTW, I still think data_array should be allocated during probing,
> > > as it is related to the hardware, not to a plane.
> >
> > I somewhat disagree.
> >
> > If I understood right during our discussion with Javier, the buffer size
> > derives from the mode size (height and width).
> >
> > In KMS, the mode is tied to the KMS state, and thus you can expect the
> > mode to change every state commit. So the more logical thing to do is to
> > tie the buffer size (and thus the buffer pointer) to the state since
> > it's only valid for that particular state for all we know.
> >
> > Of course, our case allows us to simplify things since it's a fixed
> > mode, but one of Javier's goals with this driver was to provide a good
> > example we can refer people to, so I think it's worth keeping.
> 
> The second buffer (containing the hardware format) has a size that
> depends on the full screen size, not the current mode (I believe that's
> also the size of the frame buffer backing the plane?).  So its size is
> fixed.

In KMS in general, no. For that particular case, yes.

And about the framebuffer size == full screen size, there are multiple
sizes involved. I guess what you call full screen size is the CRTC size.

You can't make the assumption in KMS that CRTC size == (primary) plane
size == framebuffer size.

If you're using scaling, for example, you will have a framebuffer size
smaller or larger than its plane size. If you're using cropping, then the
plane size or framebuffer size will be different from the CRTC size.
Another way to work around overscan is to use a smaller plane size
than the CRTC size.

You don't have to have the CRTC size == primary plane size, and then you
don't have to have plane size == framebuffer size.

You can't really make that assumption in the general case either:
with scaling or cropping, the framebuffer size will differ from the CRTC
or primary plane size (which doesn't have to be "full screen" either).

> Given the allocations are now done based on plane state, I think the
> first buffer should be sized according to the frame buffer backing
> the plane? Currently it uses the full screen size, too (cfr. below).

Yeah, probably.

> > > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
> > > >
> > > >         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
> > > >
> > > > -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> > > > -       if (!ssd130x->buffer)
> > > > +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> 
> Based on full screen width and height.
> 
> > > > +       if (!ssd130x_state->buffer)
> > > >                 return -ENOMEM;
> > > >
> > > > -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> > > > -       if (!ssd130x->data_array) {
> > > > -               kfree(ssd130x->buffer);
> > > > +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> 
> Based on full screen width and height (hardware page size).
> 
> > > > +       if (!ssd130x_state->data_array) {
> > > > +               kfree(ssd130x_state->buffer);
> > >
> > > Should ssd130x_state->buffer be reset to NULL here?
> > > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
> > > leading to a double free?
> >
> > That's a good question.
> >
> > I never really thought of that, but yeah, AFAIK atomic_destroy_state()
> > will be called when atomic_check() fails.
> >
> > Which means that it's probably broken in a lot of drivers.
> >
> > Also, Javier pointed me to a discussion you had on IRC about using devm
> > allocation here. We can't really do that. KMS devices are only freed
> > once the last userspace application closes its fd to the device file, so
> > you have an unbounded window during which the driver is still callable
> > by userspace (and thus can still trigger an atomic commit) but the
> > buffer would have been freed for a while.
> 
> It should still be safe for (at least) the data_array buffer. That
> buffer is only used to store pixels in hardware format, and immediately
> send them to the hardware.  If this can be called that late, it will
> fail horribly, as you can no longer talk to the hardware at that point
> (as ssd130x is an i2c driver, it might actually work; but a DRM driver
>  that calls devm_platform_ioremap_resource() will crash when writing
>  to its MMIO registers)?!?

Yep, that's exactly why we have drm_dev_enter/drm_dev_exit :)

Not a lot of drivers are using it, but all should. vc4, for example,
does exactly what you are describing.

Maxime

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 12:33         ` Geert Uytterhoeven
@ 2023-07-26 13:54           ` Maxime Ripard
  2023-07-26 14:14             ` Geert Uytterhoeven
  0 siblings, 1 reply; 16+ messages in thread
From: Maxime Ripard @ 2023-07-26 13:54 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Thomas Zimmermann, Daniel Vetter, Javier Martinez Canillas,
	dri-devel, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1743 bytes --]

On Wed, Jul 26, 2023 at 02:33:06PM +0200, Geert Uytterhoeven wrote:
> > >> Also, Javier pointed me to a discussion you had on IRC about using devm
> > >> allocation here. We can't really do that. KMS devices are only freed
> > >> once the last userspace application closes its fd to the device file, so
> > >> you have an unbounded window during which the driver is still callable
> > >> by userspace (and thus can still trigger an atomic commit) but the
> > >> buffer would have been freed for a while.
> > >
> > > It should still be safe for (at least) the data_array buffer. That
> > > buffer is only used to store pixels in hardware format, and immediately
> > > send them to the hardware.  If this can be called that late, it will
> > > fail horribly, as you can no longer talk to the hardware at that point
> > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver
> > >  that calls devm_platform_ioremap_resource() will crash when writing
> > >  to its MMIO registers)?!?
> >
> > At the very least the SPI driver will fail since the GPIO that is used to
> > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also
> > the regulator, backlight device, etc.
> >
> > But in any case, as mentioned it is only relevant if the data_array buffer
> > is allocated at probe time, and from Maxime's explanation is more correct
> > to do it in the .atomic_check handler.
> 
> You need (at least) data_array for clear_screen, too, which is called
> from .atomic_disable().

I'm not sure I get what your concern is?

Even if we entirely disable the plane, the state will not have been
destroyed yet so you still have at least access to the data_array from
the old state.

Maxime

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 13:54           ` Maxime Ripard
@ 2023-07-26 14:14             ` Geert Uytterhoeven
  2023-07-26 14:22               ` Javier Martinez Canillas
  0 siblings, 1 reply; 16+ messages in thread
From: Geert Uytterhoeven @ 2023-07-26 14:14 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Thomas Zimmermann, Daniel Vetter, Javier Martinez Canillas,
	dri-devel, linux-kernel

Hi Maxime,

On Wed, Jul 26, 2023 at 3:54 PM Maxime Ripard <mripard@kernel.org> wrote:
> On Wed, Jul 26, 2023 at 02:33:06PM +0200, Geert Uytterhoeven wrote:
> > > >> Also, Javier pointed me to a discussion you had on IRC about using devm
> > > >> allocation here. We can't really do that. KMS devices are only freed
> > > >> once the last userspace application closes its fd to the device file, so
> > > >> you have an unbounded window during which the driver is still callable
> > > >> by userspace (and thus can still trigger an atomic commit) but the
> > > >> buffer would have been freed for a while.
> > > >
> > > > It should still be safe for (at least) the data_array buffer. That
> > > > buffer is only used to store pixels in hardware format, and immediately
> > > > send them to the hardware.  If this can be called that late, it will
> > > > fail horribly, as you can no longer talk to the hardware at that point
> > > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver
> > > >  that calls devm_platform_ioremap_resource() will crash when writing
> > > >  to its MMIO registers)?!?
> > >
> > > At the very least the SPI driver will fail since the GPIO that is used to
> > > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also
> > > the regulator, backlight device, etc.
> > >
> > > But in any case, as mentioned it is only relevant if the data_array buffer
> > > is allocated at probe time, and from Maxime's explanation is more correct
> > > to do it in the .atomic_check handler.
> >
> > You need (at least) data_array for clear_screen, too, which is called
> > from .atomic_disable().
>
> I'm not sure I get what your concern is?
>
> Even if we entirely disable the plane, the state will not have been
> destroyed yet so you still have at least access to the data_array from
> the old state.

Currently, clearing the screen is done from the plane's .atomic_disable()
callback, so the plane's state should be fine.

But IIUIC, DRM allows the user to enable/disable the display without
creating any plane first, so clearing should be handled in the CRTC's
.atomic_disable() callback instead? Then, would we still have access
to valid plane state?

Thanks!

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 13:49       ` Maxime Ripard
@ 2023-07-26 14:16         ` Javier Martinez Canillas
  0 siblings, 0 replies; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-26 14:16 UTC (permalink / raw)
  To: Maxime Ripard, Geert Uytterhoeven
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, dri-devel

Maxime Ripard <mripard@kernel.org> writes:

Hello Maxime,

> Hi,
>
> On Wed, Jul 26, 2023 at 01:52:37PM +0200, Geert Uytterhoeven wrote:
>> On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote:
>> > On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:

[...]

>> The second buffer (containing the hardware format) has a size that
>> depends on the full screen size, not the current mode (I believe that's
>> also the size of the frame buffer backing the plane?).  So its size is
>> fixed.
>
> In KMS in general, no. For that particular case, yes.
>
> And about the framebuffer size == full screen size, there are multiple
> sizes involved. I guess what you call full screen size is the CRTC size.
>
> You can't make the assumption in KMS that CRTC size == (primary) plane
> size == framebuffer size.
>
> If you're using scaling, for example, you will have a framebuffer size
> smaller or larger than its plane size. If you're using cropping, then the
> plane size or framebuffer size will be different from the CRTC size.
> Another way to work around overscan is to use a smaller plane size
> than the CRTC size.
>
> You don't have to have the CRTC size == primary plane size, and then you
> don't have to have plane size == framebuffer size.
>
> You can't really make that assumption in the general case either:
> with scaling or cropping, the framebuffer size will differ from the CRTC
> or primary plane size (which doesn't have to be "full screen" either).
>

Yes, this particular driver uses drm_plane_helper_atomic_check()
as the primary plane's .atomic_check, and this helper function calls:

drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state,
						   DRM_PLANE_NO_SCALING,
						   DRM_PLANE_NO_SCALING,
						   false, false);

so it does not support scaling or positioning.

>> Given the allocations are now done based on plane state, I think the
>> first buffer should be sized according to the frame buffer backing
>> the plane? Currently it uses the full screen size, too (cfr. below).
>
> Yeah, probably.
>

Right, that could be fixed in another patch, if anything to make it more
readable, since it won't have any functional change.

>> > > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>> > > >
>> > > >         pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>> > > >
>> > > > -       ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> > > > -       if (!ssd130x->buffer)
>> > > > +       ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> 
>> Based on full screen width and height.
>> 
>> > > > +       if (!ssd130x_state->buffer)
>> > > >                 return -ENOMEM;
>> > > >
>> > > > -       ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> > > > -       if (!ssd130x->data_array) {
>> > > > -               kfree(ssd130x->buffer);
>> > > > +       ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> 
>> Based on full screen width and height (hardware page size).
>> 
>> > > > +       if (!ssd130x_state->data_array) {
>> > > > +               kfree(ssd130x_state->buffer);
>> > >
>> > > Should ssd130x_state->buffer be reset to NULL here?
>> > > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
>> > > leading to a double free?
>> >
>> > That's a good question.
>> >
>> > I never really thought of that, but yeah, AFAIK atomic_destroy_state()
>> > will be called when atomic_check() fails.
>> >
>> > Which means that it's probably broken in a lot of drivers.
>> >
>> > Also, Javier pointed me to a discussion you had on IRC about using devm
>> > allocation here. We can't really do that. KMS devices are only freed
>> > once the last userspace application closes its fd to the device file, so
>> > you have an unbounded window during which the driver is still callable
>> > by userspace (and thus can still trigger an atomic commit) but the
>> > buffer would have been freed for a while.
>> 
>> It should still be safe for (at least) the data_array buffer. That
>> buffer is only used to store pixels in hardware format, and immediately
>> send them to the hardware.  If this can be called that late, it will
>> fail horribly, as you can no longer talk to the hardware at that point
>> (as ssd130x is an i2c driver, it might actually work; but a DRM driver
>>  that calls devm_platform_ioremap_resource() will crash when writing
>>  to its MMIO registers)?!?
>
> Yep, that's exactly why we have drm_dev_enter/drm_dev_exit :)
>

Thanks. As a follow-up I can also do that in another patch.

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-26 14:14             ` Geert Uytterhoeven
@ 2023-07-26 14:22               ` Javier Martinez Canillas
  0 siblings, 0 replies; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-26 14:22 UTC (permalink / raw)
  To: Geert Uytterhoeven, Maxime Ripard
  Cc: Thomas Zimmermann, Daniel Vetter, linux-kernel, dri-devel

Geert Uytterhoeven <geert@linux-m68k.org> writes:

Hello Geert,

> Hi Maxime,
>
> On Wed, Jul 26, 2023 at 3:54 PM Maxime Ripard <mripard@kernel.org> wrote:
>> On Wed, Jul 26, 2023 at 02:33:06PM +0200, Geert Uytterhoeven wrote:
>> > > >> Also, Javier pointed me to a discussion you had on IRC about using devm
>> > > >> allocation here. We can't really do that. KMS devices are only freed
>> > > >> once the last userspace application closes its fd to the device file, so
>> > > >> you have an unbounded window during which the driver is still callable
>> > > >> by userspace (and thus can still trigger an atomic commit) but the
>> > > >> buffer would have been freed for a while.
>> > > >
>> > > > It should still be safe for (at least) the data_array buffer. That
>> > > > buffer is only used to store pixels in hardware format, and immediately
>> > > > send them to the hardware.  If this can be called that late, it will
>> > > > fail horribly, as you can no longer talk to the hardware at that point
>> > > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver
>> > > >  that calls devm_platform_ioremap_resource() will crash when writing
>> > > >  to its MMIO registers)?!?
>> > >
>> > > At the very least the SPI driver will fail since the GPIO that is used to
>> > > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also
>> > > the regulator, backlight device, etc.
>> > >
>> > > But in any case, as mentioned it is only relevant if the data_array buffer
>> > > is allocated at probe time, and from Maxime's explanation is more correct
>> > > to do it in the .atomic_check handler.
>> >
>> > You need (at least) data_array for clear_screen, too, which is called
>> > from .atomic_disable().
>>
>> I'm not sure I get what your concern is?
>>
>> Even if we entirely disable the plane, the state will not have been
>> destroyed yet so you still have at least access to the data_array from
>> the old state.
>
> Currently, clearing the screen is done from the plane's .atomic_disable()
> callback, so the plane's state should be fine.
>
> But IIUIC, DRM allows the user to enable/disable the display without
> creating any plane first, so clearing should be handled in the CRTC's

But does it need to be cleared in this case? Currently the power on/off
is done in the encoder's .atomic_{en,dis}able() callbacks, but the actual
panel clear is only done on plane disable, as you mentioned.

> .atomic_disable() callback instead? Then, would we still have access
> to valid plane state?
>

If the clear has to be moved to the CRTC atomic enable/disable, then
the driver should be reworked to not use the data_array and instead
just allocate a zeroed buffer and pass that, as you proposed.

But again that's something that could be done later as another patch.

> Thanks!
>
> Gr{oetje,eeting}s,
>

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-21  7:09 [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback Javier Martinez Canillas
  2023-07-21  7:33 ` Maxime Ripard
  2023-07-25 19:21 ` Geert Uytterhoeven
@ 2023-07-27 12:52 ` Thomas Zimmermann
  2023-07-27 13:43   ` Javier Martinez Canillas
  2 siblings, 1 reply; 16+ messages in thread
From: Thomas Zimmermann @ 2023-07-27 12:52 UTC (permalink / raw)
  To: Javier Martinez Canillas, linux-kernel
  Cc: Daniel Vetter, dri-devel, Geert Uytterhoeven, Maxime Ripard


[-- Attachment #1.1: Type: text/plain, Size: 14339 bytes --]

Hi Javier,

this patch is completely broken. It's easy to fix though.

Am 21.07.23 um 09:09 schrieb Javier Martinez Canillas:
> Geert reports that the following NULL pointer dereference happens for him
> after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
> plane update"):
> 
>      [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
>      ssd130x-i2c 0-003c: [drm] surface width(128), height(32), bpp(1)
>      and format(R1   little-endian (0x20203152))
>      Unable to handle kernel NULL pointer dereference at virtual address 00000000
>      Oops [#1]
>      CPU: 0 PID: 1 Comm: swapper Not tainted
>      6.5.0-rc1-orangecrab-02219-g0a529a1e4bf4 #565
>      epc : ssd130x_update_rect.isra.0+0x13c/0x340
>       ra : ssd130x_update_rect.isra.0+0x2bc/0x340
>      ...
>      status: 00000120 badaddr: 00000000 cause: 0000000f
>      [<c0303d90>] ssd130x_update_rect.isra.0+0x13c/0x340
>      [<c0304200>] ssd130x_primary_plane_helper_atomic_update+0x26c/0x284
>      [<c02f8d54>] drm_atomic_helper_commit_planes+0xfc/0x27c
>      [<c02f9314>] drm_atomic_helper_commit_tail+0x5c/0xb4
>      [<c02f94fc>] commit_tail+0x190/0x1b8
>      [<c02f99fc>] drm_atomic_helper_commit+0x194/0x1c0
>      [<c02c5d00>] drm_atomic_commit+0xa4/0xe4
>      [<c02cce40>] drm_client_modeset_commit_atomic+0x244/0x278
>      [<c02ccef0>] drm_client_modeset_commit_locked+0x7c/0x1bc
>      [<c02cd064>] drm_client_modeset_commit+0x34/0x64
>      [<c0301a78>] __drm_fb_helper_restore_fbdev_mode_unlocked+0xc4/0xe8
>      [<c0303424>] drm_fb_helper_set_par+0x38/0x58
>      [<c027c410>] fbcon_init+0x294/0x534
>      ...
> 
> The problem is that fbcon calls fbcon_init() which triggers a DRM modeset
> and this leads to drm_atomic_helper_commit_planes() attempting to commit
> the atomic state for all planes, even the ones whose CRTC is not enabled.
> 
> Since the primary plane buffer is allocated in the encoder .atomic_enable
> callback, this happens after that initial modeset commit and leads to the
> mentioned NULL pointer dereference.
> 
> Fix this by having custom helpers to allocate, duplicate and destroy the
> plane state, that will take care of allocating and freeing these buffers.
> 
> Instead of doing it in the encoder atomic enabled and disabled callbacks,
> since allocations must not be done in an .atomic_enable handler. Because
> drivers are not allowed to fail after drm_atomic_helper_swap_state() has
> been called and the new atomic state is stored into the current sw state.
> 
> Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Suggested-by: Maxime Ripard <mripard@kernel.org>
> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
> ---
> 
> Changes in v4:
> - Move buffers allocation/free to plane .reset/.destroy helpers (Maxime Ripard).
> 
> Changes in v3:
> - Move the buffers allocation to the plane helper funcs .begin_fb_access
>    and the free to .end_fb_access callbacks.
> - Always allocate intermediate buffer because is use in ssd130x_clear_screen().
> - Don't allocate the buffers as device managed resources.
> 
> Changes in v2:
> - Move the buffers allocation to .fb_create instead of preventing atomic
>    updates for disable planes.
> - Don't allocate the intermediate buffer when using the native R1 format.
> - Allocate buffers as device managed resources.
> 
>   drivers/gpu/drm/solomon/ssd130x.c | 134 ++++++++++++++++++++++++------
>   drivers/gpu/drm/solomon/ssd130x.h |   3 -
>   2 files changed, 108 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
> index afb08a8aa9fc..21b2afe40b13 100644
> --- a/drivers/gpu/drm/solomon/ssd130x.c
> +++ b/drivers/gpu/drm/solomon/ssd130x.c
> @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>   };
>   EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>   
> +struct ssd130x_plane_state {
> +	struct drm_plane_state base;

You need to use a struct drm_shadow_plane_state here.

> +	/* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> +	u8 *buffer;
> +	/* Buffer that contains R1 pixels to be written to the panel */
> +	u8 *data_array;
> +};
> +
> +static inline struct ssd130x_plane_state *to_ssd130x_plane_state(struct drm_plane_state *state)
> +{
> +	return container_of(state, struct ssd130x_plane_state, base);
> +}
> +
>   static inline struct ssd130x_device *drm_to_ssd130x(struct drm_device *drm)
>   {
>   	return container_of(drm, struct ssd130x_device, drm);
>   }
>   
> -static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
> +static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x,
> +			     struct ssd130x_plane_state *ssd130x_state)
>   {
>   	unsigned int page_height = ssd130x->device_info->page_height;
>   	unsigned int pages = DIV_ROUND_UP(ssd130x->height, page_height);
> @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>   
>   	pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>   
> -	ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> -	if (!ssd130x->buffer)
> +	ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> +	if (!ssd130x_state->buffer)
>   		return -ENOMEM;
>   
> -	ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> -	if (!ssd130x->data_array) {
> -		kfree(ssd130x->buffer);
> +	ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> +	if (!ssd130x_state->data_array) {
> +		kfree(ssd130x_state->buffer);
>   		return -ENOMEM;
>   	}
>   
>   	return 0;
>   }
>   
> -static void ssd130x_buf_free(struct ssd130x_device *ssd130x)
> +static void ssd130x_buf_free(struct ssd130x_plane_state *ssd130x_state)
>   {
> -	kfree(ssd130x->data_array);
> -	kfree(ssd130x->buffer);
> +	kfree(ssd130x_state->data_array);
> +	kfree(ssd130x_state->buffer);
>   }
>   
>   /*
> @@ -466,12 +480,14 @@ static int ssd130x_init(struct ssd130x_device *ssd130x)
>   				 SSD130X_SET_ADDRESS_MODE_HORIZONTAL);
>   }
>   
> -static int ssd130x_update_rect(struct ssd130x_device *ssd130x, struct drm_rect *rect)
> +static int ssd130x_update_rect(struct ssd130x_device *ssd130x,
> +			       struct ssd130x_plane_state *ssd130x_state,
> +			       struct drm_rect *rect)
>   {
>   	unsigned int x = rect->x1;
>   	unsigned int y = rect->y1;
> -	u8 *buf = ssd130x->buffer;
> -	u8 *data_array = ssd130x->data_array;
> +	u8 *buf = ssd130x_state->buffer;
> +	u8 *data_array = ssd130x_state->data_array;
>   	unsigned int width = drm_rect_width(rect);
>   	unsigned int height = drm_rect_height(rect);
>   	unsigned int line_length = DIV_ROUND_UP(width, 8);
> @@ -567,7 +583,8 @@ static int ssd130x_update_rect(struct ssd130x_device *ssd130x, struct drm_rect *
>   	return ret;
>   }
>   
> -static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
> +static void ssd130x_clear_screen(struct ssd130x_device *ssd130x,
> +				 struct ssd130x_plane_state *ssd130x_state)
>   {
>   	struct drm_rect fullscreen = {
>   		.x1 = 0,
> @@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
>   		.y2 = ssd130x->height,
>   	};
>   
> -	ssd130x_update_rect(ssd130x, &fullscreen);
> +	ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
>   }
>   
> -static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
> +static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
> +				const struct iosys_map *vmap,
>   				struct drm_rect *rect)
>   {
> +	struct drm_framebuffer *fb = state->fb;
>   	struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>   	unsigned int page_height = ssd130x->device_info->page_height;
>   	struct iosys_map dst;
>   	unsigned int dst_pitch;
>   	int ret = 0;
> -	u8 *buf = ssd130x->buffer;
> +	u8 *buf = ssd130x_state->buffer;
>   
>   	if (!buf)
>   		return 0;
> @@ -607,11 +627,27 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
>   
>   	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
>   
> -	ssd130x_update_rect(ssd130x, rect);
> +	ssd130x_update_rect(ssd130x, ssd130x_state, rect);
>   
>   	return ret;
>   }
>   
> +static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
> +						     struct drm_atomic_state *state)
> +{
> +	struct drm_device *drm = plane->dev;
> +	struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
> +	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
> +	int ret;
> +
> +	ret = drm_plane_helper_atomic_check(plane, state);
> +	if (ret)
> +		return ret;
> +
> +	return ssd130x_buf_alloc(ssd130x, ssd130x_state);
> +}
> +
>   static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
>   						       struct drm_atomic_state *state)
>   {
> @@ -634,7 +670,7 @@ static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
>   		if (!drm_rect_intersect(&dst_clip, &damage))
>   			continue;
>   
> -		ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &dst_clip);
> +		ssd130x_fb_blit_rect(plane_state, &shadow_plane_state->data[0], &dst_clip);
>   	}
>   
>   	drm_dev_exit(idx);
> @@ -645,19 +681,68 @@ static void ssd130x_primary_plane_helper_atomic_disable(struct drm_plane *plane,
>   {
>   	struct drm_device *drm = plane->dev;
>   	struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane->state);
>   	int idx;
>   
>   	if (!drm_dev_enter(drm, &idx))
>   		return;
>   
> -	ssd130x_clear_screen(ssd130x);
> +	ssd130x_clear_screen(ssd130x, ssd130x_state);
>   
>   	drm_dev_exit(idx);
>   }
>   
> +/* Called during init to allocate the plane's atomic state. */
> +static void ssd130x_primary_plane_reset(struct drm_plane *plane)
> +{
> +	struct ssd130x_plane_state *ssd130x_state;
> +
> +	WARN_ON(plane->state);
> +
> +	ssd130x_state = kzalloc(sizeof(*ssd130x_state), GFP_KERNEL);
> +	if (!ssd130x_state)
> +		return;
> +
> +	__drm_atomic_helper_plane_reset(plane, &ssd130x_state->base);

Use __drm_gem_reset_shadow_plane() here

> +}
> +
> +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> +{
> +	struct ssd130x_plane_state *old_ssd130x_state;
> +	struct ssd130x_plane_state *ssd130x_state;
> +
> +	if (WARN_ON(!plane->state))
> +		return NULL;
> +
> +	old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> +	ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
> +	if (!ssd130x_state)
> +		return NULL;
> +
> +	/* The buffers are not duplicated and are allocated in .atomic_check */
> +	ssd130x_state->buffer = NULL;
> +	ssd130x_state->data_array = NULL;
> +
> +	__drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);

Use __drm_gem_duplicate_shadow_plane_state() here.

> +
> +	return &ssd130x_state->base;
> +}
> +
> +static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
> +						struct drm_plane_state *state)
> +{
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
> +
> +	ssd130x_buf_free(ssd130x_state);
> +
> +	__drm_atomic_helper_plane_destroy_state(&ssd130x_state->base);

Use __drm_gem_destroy_shadow_plane_state() here.

> +
> +	kfree(ssd130x_state);
> +}
> +
>   static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = {
>   	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,

This macro sets the callbacks that vmap/vunmap the GEM buffer during the 
display update. They expect to upcast from drm_plane_state to 
drm_shadow_plane_state.  And hence, your driver's plane state needs to 
inherit from drm_shadow_plane_state.

> -	.atomic_check = drm_plane_helper_atomic_check,
> +	.atomic_check = ssd130x_primary_plane_helper_atomic_check,
>   	.atomic_update = ssd130x_primary_plane_helper_atomic_update,
>   	.atomic_disable = ssd130x_primary_plane_helper_atomic_disable,
>   };
> @@ -665,6 +750,9 @@ static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs =
>   static const struct drm_plane_funcs ssd130x_primary_plane_funcs = {
>   	.update_plane = drm_atomic_helper_update_plane,
>   	.disable_plane = drm_atomic_helper_disable_plane,
> +	.reset = ssd130x_primary_plane_reset,
> +	.atomic_duplicate_state = ssd130x_primary_plane_duplicate_state,
> +	.atomic_destroy_state = ssd130x_primary_plane_destroy_state,
>   	.destroy = drm_plane_cleanup,
>   	DRM_GEM_SHADOW_PLANE_FUNCS,

Arnd already mentioned that this macro now needs to go away.

Best regards
Thomas

>   };
> @@ -719,10 +807,6 @@ static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder,
>   	if (ret)
>   		goto power_off;
>   
> -	ret = ssd130x_buf_alloc(ssd130x);
> -	if (ret)
> -		goto power_off;
> -
>   	ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON);
>   
>   	backlight_enable(ssd130x->bl_dev);
> @@ -744,8 +828,6 @@ static void ssd130x_encoder_helper_atomic_disable(struct drm_encoder *encoder,
>   
>   	ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF);
>   
> -	ssd130x_buf_free(ssd130x);
> -
>   	ssd130x_power_off(ssd130x);
>   }
>   
> diff --git a/drivers/gpu/drm/solomon/ssd130x.h b/drivers/gpu/drm/solomon/ssd130x.h
> index 161588b1cc4d..87968b3e7fb8 100644
> --- a/drivers/gpu/drm/solomon/ssd130x.h
> +++ b/drivers/gpu/drm/solomon/ssd130x.h
> @@ -89,9 +89,6 @@ struct ssd130x_device {
>   	u8 col_end;
>   	u8 page_start;
>   	u8 page_end;
> -
> -	u8 *buffer;
> -	u8 *data_array;
>   };
>   
>   extern const struct ssd130x_deviceinfo ssd130x_variants[];

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)



* Re: [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback
  2023-07-27 12:52 ` Thomas Zimmermann
@ 2023-07-27 13:43   ` Javier Martinez Canillas
  0 siblings, 0 replies; 16+ messages in thread
From: Javier Martinez Canillas @ 2023-07-27 13:43 UTC (permalink / raw)
  To: Thomas Zimmermann, linux-kernel
  Cc: Daniel Vetter, dri-devel, Geert Uytterhoeven, Maxime Ripard

Thomas Zimmermann <tzimmermann@suse.de> writes:

Hello Thomas,

Thanks a lot for the feedback!

> Hi Javier,
>
> this patch is completely broken. It's easy to fix though.
>
> Am 21.07.23 um 09:09 schrieb Javier Martinez Canillas:

[...]

>> +struct ssd130x_plane_state {
>> +	struct drm_plane_state base;
>
> You need to use a struct drm_shadow_plane_state here.
>

Yes, I already noticed this when testing Arnd's patch to drop
DRM_GEM_SHADOW_PLANE_FUNCS. I already have a patch ready.

[...]

>> +
>>   static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = {
>>   	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
>
> This macro sets the callbacks that vmap/vunmap the GEM buffer during the 
> display update. They expect to upcast from drm_plane_state to 
> drm_shadow_plane_state, so your driver's plane state needs to inherit 
> from drm_shadow_plane_state.
>

Yup. I missed that. I'm now testing my patch and will post it.

-- 
Best regards,

Javier Martinez Canillas
Core Platforms
Red Hat



end of thread, other threads:[~2023-07-27 13:43 UTC | newest]

Thread overview: 16+ messages
2023-07-21  7:09 [PATCH v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback Javier Martinez Canillas
2023-07-21  7:33 ` Maxime Ripard
2023-07-25 19:21 ` Geert Uytterhoeven
2023-07-25 20:49   ` Javier Martinez Canillas
2023-07-26  9:59   ` Maxime Ripard
2023-07-26 10:39     ` Javier Martinez Canillas
2023-07-26 11:52     ` Geert Uytterhoeven
2023-07-26 12:22       ` Javier Martinez Canillas
2023-07-26 12:33         ` Geert Uytterhoeven
2023-07-26 13:54           ` Maxime Ripard
2023-07-26 14:14             ` Geert Uytterhoeven
2023-07-26 14:22               ` Javier Martinez Canillas
2023-07-26 13:49       ` Maxime Ripard
2023-07-26 14:16         ` Javier Martinez Canillas
2023-07-27 12:52 ` Thomas Zimmermann
2023-07-27 13:43   ` Javier Martinez Canillas
