* [PATCH] drm: fb-helper: Avoid nesting spinlock_t into raw_spinlock_t
From: Jiri Kosina @ 2022-02-15 15:43 UTC (permalink / raw)
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Daniel Vetter, Sebastian Siewior
Cc: linux-kernel, dri-devel
From: Jiri Kosina <jkosina@suse.cz>
drm_fb_helper_damage() is acquiring spinlock_t (helper->damage_lock),
while it can be called from contexts where raw_spinlock_t is held (e.g.
console_owner lock obtained on vprintk_emit() codepath).
As the critical sections protected by damage_lock are super-tiny, let's
fix this by converting it to raw_spinlock_t in order not to violate
PREEMPT_RT-imposed lock nesting rules.
This fixes the splat below.
=============================
[ BUG: Invalid wait context ]
5.17.0-rc4-00002-gd567f5db412e #1 Not tainted
-----------------------------
swapper/0/0 is trying to lock:
ffff8c5687cc4158 (&helper->damage_lock){....}-{3:3}, at: drm_fb_helper_damage.isra.22+0x4a/0xf0
other info that might help us debug this:
context-{2:2}
3 locks held by swapper/0/0:
#0: ffffffffad776520 (console_lock){+.+.}-{0:0}, at: vprintk_emit+0xb8/0x2a0
#1: ffffffffad696120 (console_owner){-...}-{0:0}, at: console_unlock+0x17f/0x550
#2: ffffffffad926a58 (printing_lock){....}-{3:3}, at: vt_console_print+0x7d/0x3e0
stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.17.0-rc4-00002-gd567f5db412e #1 bed1d5e19e0e7e8c9d97fd8afa1322f7f47a4f38
Hardware name: LENOVO 20UJS2B905/20UJS2B905, BIOS R1CET63W(1.32 ) 04/09/2021
Call Trace:
<IRQ>
dump_stack_lvl+0x58/0x71
__lock_acquire+0x165b/0x1780
? secondary_startup_64_no_verify+0xd5/0xdb
lock_acquire+0x278/0x300
? drm_fb_helper_damage.isra.22+0x4a/0xf0
? save_trace+0x3e/0x340
? __bfs+0x10f/0x240
_raw_spin_lock_irqsave+0x48/0x60
? drm_fb_helper_damage.isra.22+0x4a/0xf0
drm_fb_helper_damage.isra.22+0x4a/0xf0
soft_cursor+0x194/0x240
bit_cursor+0x386/0x630
? get_color+0x29/0x120
? bit_putcs+0x4b0/0x4b0
? console_unlock+0x17f/0x550
hide_cursor+0x2f/0x90
vt_console_print+0x3c5/0x3e0
? console_unlock+0x17f/0x550
console_unlock+0x515/0x550
vprintk_emit+0x1c8/0x2a0
_printk+0x52/0x6e
? sched_clock_tick+0x3d/0x60
collect_cpu_info_amd+0x93/0xd0
collect_cpu_info_local+0x23/0x30
flush_smp_call_function_queue+0x137/0x220
__sysvec_call_function_single+0x43/0x1c0
sysvec_call_function_single+0x43/0x80
</IRQ>
<TASK>
asm_sysvec_call_function_single+0x12/0x20
RIP: 0010:cpuidle_enter_state+0x111/0x4b0
Code: 7c ff 45 84 ff 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 71 03 00 00 31 ff e8 bb 21 86 ff e8 76 2f 8e ff fb 66 0f 1f 44 00 00 <45> 85 f6 0f 88 12 01 00 00 49 63 d6 4c 2b 24 24 48 8d 04 52 48 8d
RSP: 0018:ffffffffad603e48 EFLAGS: 00000206
RAX: 00000000000127c3 RBX: 0000000000000003 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffac32617a
RBP: ffff8c5687ba4400 R08: 0000000000000001 R09: 0000000000000001
R10: ffffffffad603e10 R11: 0000000000000000 R12: 00000000685eb4a0
R13: ffffffffad918f80 R14: 0000000000000003 R15: 0000000000000000
? cpuidle_enter_state+0x10a/0x4b0
? cpuidle_enter_state+0x10a/0x4b0
cpuidle_enter+0x29/0x40
do_idle+0x24d/0x2c0
cpu_startup_entry+0x19/0x20
start_kernel+0x9c2/0x9e9
secondary_startup_64_no_verify+0xd5/0xdb
</TASK>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
---
drivers/gpu/drm/drm_fb_helper.c | 14 +++++++-------
include/drm/drm_fb_helper.h | 2 +-
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index ed43b987d306..7c4ab6e6f865 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -436,11 +436,11 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
unsigned long flags;
int ret;
- spin_lock_irqsave(&helper->damage_lock, flags);
+ raw_spin_lock_irqsave(&helper->damage_lock, flags);
clip_copy = *clip;
clip->x1 = clip->y1 = ~0;
clip->x2 = clip->y2 = 0;
- spin_unlock_irqrestore(&helper->damage_lock, flags);
+ raw_spin_unlock_irqrestore(&helper->damage_lock, flags);
/* Call damage handlers only if necessary */
if (!(clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2))
@@ -465,12 +465,12 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
* Restore damage clip rectangle on errors. The next run
* of the damage worker will perform the update.
*/
- spin_lock_irqsave(&helper->damage_lock, flags);
+ raw_spin_lock_irqsave(&helper->damage_lock, flags);
clip->x1 = min_t(u32, clip->x1, clip_copy.x1);
clip->y1 = min_t(u32, clip->y1, clip_copy.y1);
clip->x2 = max_t(u32, clip->x2, clip_copy.x2);
clip->y2 = max_t(u32, clip->y2, clip_copy.y2);
- spin_unlock_irqrestore(&helper->damage_lock, flags);
+ raw_spin_unlock_irqrestore(&helper->damage_lock, flags);
}
/**
@@ -486,7 +486,7 @@ void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper,
const struct drm_fb_helper_funcs *funcs)
{
INIT_LIST_HEAD(&helper->kernel_fb_list);
- spin_lock_init(&helper->damage_lock);
+ raw_spin_lock_init(&helper->damage_lock);
INIT_WORK(&helper->resume_work, drm_fb_helper_resume_worker);
INIT_WORK(&helper->damage_work, drm_fb_helper_damage_work);
helper->damage_clip.x1 = helper->damage_clip.y1 = ~0;
@@ -670,12 +670,12 @@ static void drm_fb_helper_damage(struct fb_info *info, u32 x, u32 y,
if (!drm_fbdev_use_shadow_fb(helper))
return;
- spin_lock_irqsave(&helper->damage_lock, flags);
+ raw_spin_lock_irqsave(&helper->damage_lock, flags);
clip->x1 = min_t(u32, clip->x1, x);
clip->y1 = min_t(u32, clip->y1, y);
clip->x2 = max_t(u32, clip->x2, x + width);
clip->y2 = max_t(u32, clip->y2, y + height);
- spin_unlock_irqrestore(&helper->damage_lock, flags);
+ raw_spin_unlock_irqrestore(&helper->damage_lock, flags);
schedule_work(&helper->damage_work);
}
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index 3af4624368d8..91178958896e 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -131,7 +131,7 @@ struct drm_fb_helper {
struct fb_info *fbdev;
u32 pseudo_palette[17];
struct drm_clip_rect damage_clip;
- spinlock_t damage_lock;
+ raw_spinlock_t damage_lock;
struct work_struct damage_work;
struct work_struct resume_work;
--
Jiri Kosina
SUSE Labs
* Re: [PATCH] drm: fb-helper: Avoid nesting spinlock_t into raw_spinlock_t
From: Sebastian Siewior @ 2022-02-15 15:49 UTC (permalink / raw)
To: Jiri Kosina
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Daniel Vetter, dri-devel, linux-kernel,
John Ogness
On 2022-02-15 16:43:08 [+0100], Jiri Kosina wrote:
> From: Jiri Kosina <jkosina@suse.cz>
>
> drm_fb_helper_damage() is acquiring spinlock_t (helper->damage_lock),
> while it can be called from contexts where raw_spinlock_t is held (e.g.
> console_owner lock obtained on vprintk_emit() codepath).
>
> As the critical sections protected by damage_lock are super-tiny, let's
> fix this by converting it to raw_spinlock_t in order not to violate
> PREEMPT_RT-imposed lock nesting rules.
>
> This fixes the splat below.
>
> =============================
> [ BUG: Invalid wait context ]
> 5.17.0-rc4-00002-gd567f5db412e #1 Not tainted
rc4. Is this also the case in the RT tree, which includes John's printk
changes?
> -----------------------------
> swapper/0/0 is trying to lock:
> ffff8c5687cc4158 (&helper->damage_lock){....}-{3:3}, at: drm_fb_helper_damage.isra.22+0x4a/0xf0
> other info that might help us debug this:
> context-{2:2}
> 3 locks held by swapper/0/0:
> #0: ffffffffad776520 (console_lock){+.+.}-{0:0}, at: vprintk_emit+0xb8/0x2a0
> #1: ffffffffad696120 (console_owner){-...}-{0:0}, at: console_unlock+0x17f/0x550
> #2: ffffffffad926a58 (printing_lock){....}-{3:3}, at: vt_console_print+0x7d/0x3e0
> stack backtrace:
> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.17.0-rc4-00002-gd567f5db412e #1 bed1d5e19e0e7e8c9d97fd8afa1322f7f47a4f38
> Hardware name: LENOVO 20UJS2B905/20UJS2B905, BIOS R1CET63W(1.32 ) 04/09/2021
> Call Trace:
> <IRQ>
> dump_stack_lvl+0x58/0x71
> __lock_acquire+0x165b/0x1780
> ? secondary_startup_64_no_verify+0xd5/0xdb
> lock_acquire+0x278/0x300
> ? drm_fb_helper_damage.isra.22+0x4a/0xf0
> ? save_trace+0x3e/0x340
> ? __bfs+0x10f/0x240
> _raw_spin_lock_irqsave+0x48/0x60
> ? drm_fb_helper_damage.isra.22+0x4a/0xf0
> drm_fb_helper_damage.isra.22+0x4a/0xf0
> soft_cursor+0x194/0x240
> bit_cursor+0x386/0x630
> ? get_color+0x29/0x120
> ? bit_putcs+0x4b0/0x4b0
> ? console_unlock+0x17f/0x550
> hide_cursor+0x2f/0x90
> vt_console_print+0x3c5/0x3e0
> ? console_unlock+0x17f/0x550
> console_unlock+0x515/0x550
> vprintk_emit+0x1c8/0x2a0
> _printk+0x52/0x6e
> ? sched_clock_tick+0x3d/0x60
> collect_cpu_info_amd+0x93/0xd0
> collect_cpu_info_local+0x23/0x30
> flush_smp_call_function_queue+0x137/0x220
> __sysvec_call_function_single+0x43/0x1c0
> sysvec_call_function_single+0x43/0x80
> </IRQ>
> <TASK>
> asm_sysvec_call_function_single+0x12/0x20
> RIP: 0010:cpuidle_enter_state+0x111/0x4b0
> Code: 7c ff 45 84 ff 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 71 03 00 00 31 ff e8 bb 21 86 ff e8 76 2f 8e ff fb 66 0f 1f 44 00 00 <45> 85 f6 0f 88 12 01 00 00 49 63 d6 4c 2b 24 24 48 8d 04 52 48 8d
> RSP: 0018:ffffffffad603e48 EFLAGS: 00000206
> RAX: 00000000000127c3 RBX: 0000000000000003 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffac32617a
> RBP: ffff8c5687ba4400 R08: 0000000000000001 R09: 0000000000000001
> R10: ffffffffad603e10 R11: 0000000000000000 R12: 00000000685eb4a0
> R13: ffffffffad918f80 R14: 0000000000000003 R15: 0000000000000000
> ? cpuidle_enter_state+0x10a/0x4b0
> ? cpuidle_enter_state+0x10a/0x4b0
> cpuidle_enter+0x29/0x40
> do_idle+0x24d/0x2c0
> cpu_startup_entry+0x19/0x20
> start_kernel+0x9c2/0x9e9
> secondary_startup_64_no_verify+0xd5/0xdb
> </TASK>
>
> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
> ---
> drivers/gpu/drm/drm_fb_helper.c | 14 +++++++-------
> include/drm/drm_fb_helper.h | 2 +-
> 2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index ed43b987d306..7c4ab6e6f865 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -436,11 +436,11 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
> unsigned long flags;
> int ret;
>
> - spin_lock_irqsave(&helper->damage_lock, flags);
> + raw_spin_lock_irqsave(&helper->damage_lock, flags);
> clip_copy = *clip;
> clip->x1 = clip->y1 = ~0;
> clip->x2 = clip->y2 = 0;
> - spin_unlock_irqrestore(&helper->damage_lock, flags);
> + raw_spin_unlock_irqrestore(&helper->damage_lock, flags);
>
> /* Call damage handlers only if necessary */
> if (!(clip_copy.x1 < clip_copy.x2 && clip_copy.y1 < clip_copy.y2))
> @@ -465,12 +465,12 @@ static void drm_fb_helper_damage_work(struct work_struct *work)
> * Restore damage clip rectangle on errors. The next run
> * of the damage worker will perform the update.
> */
> - spin_lock_irqsave(&helper->damage_lock, flags);
> + raw_spin_lock_irqsave(&helper->damage_lock, flags);
> clip->x1 = min_t(u32, clip->x1, clip_copy.x1);
> clip->y1 = min_t(u32, clip->y1, clip_copy.y1);
> clip->x2 = max_t(u32, clip->x2, clip_copy.x2);
> clip->y2 = max_t(u32, clip->y2, clip_copy.y2);
> - spin_unlock_irqrestore(&helper->damage_lock, flags);
> + raw_spin_unlock_irqrestore(&helper->damage_lock, flags);
> }
>
> /**
> @@ -486,7 +486,7 @@ void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper,
> const struct drm_fb_helper_funcs *funcs)
> {
> INIT_LIST_HEAD(&helper->kernel_fb_list);
> - spin_lock_init(&helper->damage_lock);
> + raw_spin_lock_init(&helper->damage_lock);
> INIT_WORK(&helper->resume_work, drm_fb_helper_resume_worker);
> INIT_WORK(&helper->damage_work, drm_fb_helper_damage_work);
> helper->damage_clip.x1 = helper->damage_clip.y1 = ~0;
> @@ -670,12 +670,12 @@ static void drm_fb_helper_damage(struct fb_info *info, u32 x, u32 y,
> if (!drm_fbdev_use_shadow_fb(helper))
> return;
>
> - spin_lock_irqsave(&helper->damage_lock, flags);
> + raw_spin_lock_irqsave(&helper->damage_lock, flags);
> clip->x1 = min_t(u32, clip->x1, x);
> clip->y1 = min_t(u32, clip->y1, y);
> clip->x2 = max_t(u32, clip->x2, x + width);
> clip->y2 = max_t(u32, clip->y2, y + height);
> - spin_unlock_irqrestore(&helper->damage_lock, flags);
> + raw_spin_unlock_irqrestore(&helper->damage_lock, flags);
>
> schedule_work(&helper->damage_work);
> }
> diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
> index 3af4624368d8..91178958896e 100644
> --- a/include/drm/drm_fb_helper.h
> +++ b/include/drm/drm_fb_helper.h
> @@ -131,7 +131,7 @@ struct drm_fb_helper {
> struct fb_info *fbdev;
> u32 pseudo_palette[17];
> struct drm_clip_rect damage_clip;
> - spinlock_t damage_lock;
> + raw_spinlock_t damage_lock;
> struct work_struct damage_work;
> struct work_struct resume_work;
>
>
Sebastian
* Re: [PATCH] drm: fb-helper: Avoid nesting spinlock_t into raw_spinlock_t
From: John Ogness @ 2022-02-15 16:23 UTC (permalink / raw)
To: Sebastian Siewior, Jiri Kosina
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Daniel Vetter, dri-devel, linux-kernel
On 2022-02-15, Sebastian Siewior <bigeasy@linutronix.de> wrote:
>> From: Jiri Kosina <jkosina@suse.cz>
>>
>> drm_fb_helper_damage() is acquiring spinlock_t (helper->damage_lock),
>> while it can be called from contexts where raw_spinlock_t is held (e.g.
>> console_owner lock obtained on vprintk_emit() codepath).
>>
>> As the critical sections protected by damage_lock are super-tiny, let's
>> fix this by converting it to raw_spinlock_t in order not to violate
>> PREEMPT_RT-imposed lock nesting rules.
>>
>> This fixes the splat below.
>>
>> =============================
>> [ BUG: Invalid wait context ]
>> 5.17.0-rc4-00002-gd567f5db412e #1 Not tainted
>
> rc4. Is this also the case in the RT tree which includes John's printk
> changes?
In the RT tree, the fbcon's write() callback is only called in
preemptible() contexts. So this is only a mainline issue.
The series I recently posted to LKML [0] should also address this issue
(if/when it gets accepted).
John
[0] https://lore.kernel.org/lkml/20220207194323.273637-1-john.ogness@linutronix.de
* Re: [PATCH] drm: fb-helper: Avoid nesting spinlock_t into raw_spinlock_t
From: Jiri Kosina @ 2022-02-15 19:59 UTC (permalink / raw)
To: John Ogness
Cc: Sebastian Siewior, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Daniel Vetter, dri-devel,
linux-kernel
On Tue, 15 Feb 2022, John Ogness wrote:
> >> drm_fb_helper_damage() is acquiring spinlock_t (helper->damage_lock),
> >> while it can be called from contexts where raw_spinlock_t is held (e.g.
> >> console_owner lock obtained on vprintk_emit() codepath).
> >>
> >> As the critical sections protected by damage_lock are super-tiny, let's
> >> fix this by converting it to raw_spinlock_t in order not to violate
> >> PREEMPT_RT-imposed lock nesting rules.
> >>
> >> This fixes the splat below.
> >>
> >> =============================
> >> [ BUG: Invalid wait context ]
> >> 5.17.0-rc4-00002-gd567f5db412e #1 Not tainted
> >
> > rc4. Is this also the case in the RT tree which includes John's printk
> > changes?
>
> In the RT tree, the fbcon's write() callback is only called in
> preemptible() contexts. So this is only a mainline issue.
>
> The series I recently posted to LKML [0] should also address this issue
> (if/when it gets accepted).
>
> John
>
> [0] https://lore.kernel.org/lkml/20220207194323.273637-1-john.ogness@linutronix.de
Thanks for the confirmation; it seems this problem is indeed cured by your
series.
I still believe, though, that we shouldn't let 5.17 be released with this
warning being emitted into dmesg, so I'd suggest going with my patch for
mainline and reverting it in your series on top of it.
Thanks,
--
Jiri Kosina
SUSE Labs
* Re: [PATCH] drm: fb-helper: Avoid nesting spinlock_t into raw_spinlock_t
From: Sebastian Siewior @ 2022-02-16 7:40 UTC (permalink / raw)
To: Jiri Kosina
Cc: John Ogness, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Daniel Vetter, dri-devel, linux-kernel
On 2022-02-15 20:59:24 [+0100], Jiri Kosina wrote:
> Thanks for confirmation, seems like this problem is indeed cured by your
> series.
Oki.
> I still believe though that we shouldn't let 5.17 released with this
> warning being emitted into dmesg, so I'd suggest going with my patch for
> mainline, and revert it in your series on top of it.
No. That warning is only visible with CONFIG_PROVE_RAW_LOCK_NESTING,
whose help text contains the following paragraph:
| NOTE: There are known nesting problems. So if you enable this
| option expect lockdep splats until these problems have been fully
| addressed which is work in progress. This config switch allows to
| identify and analyze these problems. It will be removed and the
| check permanently enabled once the main issues have been fixed.
In this call chain, the warning should affect every console driver that
acquires a lock.
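[Editorial note: for readers wanting to reproduce the splat, it only shows up on lockdep debug builds with the raw-nesting check turned on. A minimal debug-config fragment (option names as of v5.17) would be:]

```
CONFIG_DEBUG_KERNEL=y
CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RAW_LOCK_NESTING=y
```

PROVE_RAW_LOCK_NESTING depends on PROVE_LOCKING, so both must be enabled for this check to fire; production configs without them never see the warning.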
> Thanks,
Sebastian