* [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
@ 2021-02-23 5:25 Wanpeng Li
2021-02-23 5:28 ` Wanpeng Li
2021-03-11 3:09 ` Wanpeng Li
0 siblings, 2 replies; 8+ messages in thread
From: Wanpeng Li @ 2021-02-23 5:25 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
Jim Mattson, Joerg Roedel, Mark Rutland, Thomas Gleixner
From: Wanpeng Li <wanpengli@tencent.com>
After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
emits the splat below during boot:
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
Modules linked in: hid_generic usbhid hid
CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
RIP: 0010:warn_bogus_irq_restore+0x26/0x30
Call Trace:
kvm_wait+0x76/0x90
__pv_queued_spin_lock_slowpath+0x285/0x2e0
do_raw_spin_lock+0xc9/0xd0
_raw_spin_lock+0x59/0x70
lockref_get_not_dead+0xf/0x50
__legitimize_path+0x31/0x60
legitimize_root+0x37/0x50
try_to_unlazy_next+0x7f/0x1d0
lookup_fast+0xb0/0x170
path_openat+0x165/0x9b0
do_filp_open+0x99/0x110
do_sys_openat2+0x1f1/0x2e0
do_sys_open+0x5c/0x80
__x64_sys_open+0x21/0x30
do_syscall_64+0x32/0x50
entry_SYSCALL_64_after_hwframe+0x44/0xae
The irqflags handling in kvm_wait() ends up doing:
local_irq_save(flags);
safe_halt();
local_irq_restore(flags);
which trips the new consistency check: local_irq_save() and
local_irq_restore() are expected to be paired and sanely nested,
so local_irq_restore() must be called with irqs disabled.
Fix it by adding a local_irq_disable() after safe_halt() so that
irqs are disabled again before local_irq_restore() runs.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
arch/x86/kernel/kvm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5e78e01..688c84a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
*/
if (arch_irqs_disabled_flags(flags))
halt();
- else
+ else {
safe_halt();
+ local_irq_disable();
+ }
out:
local_irq_restore(flags);
--
2.7.4
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-02-23 5:25 [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait Wanpeng Li
@ 2021-02-23 5:28 ` Wanpeng Li
2021-03-11 15:54 ` Sean Christopherson
2021-03-11 3:09 ` Wanpeng Li
1 sibling, 1 reply; 8+ messages in thread
From: Wanpeng Li @ 2021-02-23 5:28 UTC (permalink / raw)
To: LKML, kvm
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
Jim Mattson, Joerg Roedel, Mark Rutland, Thomas Gleixner
On Tue, 23 Feb 2021 at 13:25, Wanpeng Li <kernellwp@gmail.com> wrote:
>
> From: Wanpeng Li <wanpengli@tencent.com>
>
> After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
> emits the splat below during boot:
>
> raw_local_irq_restore() called with IRQs enabled
> WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
> Modules linked in: hid_generic usbhid hid
> CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
> RIP: 0010:warn_bogus_irq_restore+0x26/0x30
> Call Trace:
> kvm_wait+0x76/0x90
> __pv_queued_spin_lock_slowpath+0x285/0x2e0
> do_raw_spin_lock+0xc9/0xd0
> _raw_spin_lock+0x59/0x70
> lockref_get_not_dead+0xf/0x50
> __legitimize_path+0x31/0x60
> legitimize_root+0x37/0x50
> try_to_unlazy_next+0x7f/0x1d0
> lookup_fast+0xb0/0x170
> path_openat+0x165/0x9b0
> do_filp_open+0x99/0x110
> do_sys_openat2+0x1f1/0x2e0
> do_sys_open+0x5c/0x80
> __x64_sys_open+0x21/0x30
> do_syscall_64+0x32/0x50
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> The irqflags handling in kvm_wait() ends up doing:
>
> local_irq_save(flags);
> safe_halt();
> local_irq_restore(flags);
>
> which trips the new consistency check: local_irq_save() and
> local_irq_restore() are expected to be paired and sanely nested,
> so local_irq_restore() must be called with irqs disabled.
>
> Fix it by adding a local_irq_disable() after safe_halt() so that
> irqs are disabled again before local_irq_restore() runs.
>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
> arch/x86/kernel/kvm.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5e78e01..688c84a 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
> */
> if (arch_irqs_disabled_flags(flags))
> halt();
> - else
> + else {
> safe_halt();
> + local_irq_disable();
> + }
An alternative fix:
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5e78e01..7127aef 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -836,12 +836,13 @@ static void kvm_kick_cpu(int cpu)
static void kvm_wait(u8 *ptr, u8 val)
{
- unsigned long flags;
+ bool disabled = irqs_disabled();
if (in_nmi())
return;
- local_irq_save(flags);
+ if (!disabled)
+ local_irq_disable();
if (READ_ONCE(*ptr) != val)
goto out;
@@ -851,13 +852,14 @@ static void kvm_wait(u8 *ptr, u8 val)
* for irq enabled case to avoid hang when lock info is overwritten
* in irq spinlock slowpath and no spurious interrupt occur to save us.
*/
- if (arch_irqs_disabled_flags(flags))
+ if (disabled)
halt();
else
safe_halt();
out:
- local_irq_restore(flags);
+ if (!disabled)
+ local_irq_enable();
}
#ifdef CONFIG_X86_32
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-02-23 5:25 [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait Wanpeng Li
2021-02-23 5:28 ` Wanpeng Li
@ 2021-03-11 3:09 ` Wanpeng Li
1 sibling, 0 replies; 8+ messages in thread
From: Wanpeng Li @ 2021-03-11 3:09 UTC (permalink / raw)
To: LKML, kvm
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
Jim Mattson, Joerg Roedel, Mark Rutland, Thomas Gleixner
ping,
On Tue, 23 Feb 2021 at 13:25, Wanpeng Li <kernellwp@gmail.com> wrote:
>
> From: Wanpeng Li <wanpengli@tencent.com>
>
> After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
> emits the splat below during boot:
>
> raw_local_irq_restore() called with IRQs enabled
> WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
> Modules linked in: hid_generic usbhid hid
> CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
> RIP: 0010:warn_bogus_irq_restore+0x26/0x30
> Call Trace:
> kvm_wait+0x76/0x90
> __pv_queued_spin_lock_slowpath+0x285/0x2e0
> do_raw_spin_lock+0xc9/0xd0
> _raw_spin_lock+0x59/0x70
> lockref_get_not_dead+0xf/0x50
> __legitimize_path+0x31/0x60
> legitimize_root+0x37/0x50
> try_to_unlazy_next+0x7f/0x1d0
> lookup_fast+0xb0/0x170
> path_openat+0x165/0x9b0
> do_filp_open+0x99/0x110
> do_sys_openat2+0x1f1/0x2e0
> do_sys_open+0x5c/0x80
> __x64_sys_open+0x21/0x30
> do_syscall_64+0x32/0x50
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> The irqflags handling in kvm_wait() ends up doing:
>
> local_irq_save(flags);
> safe_halt();
> local_irq_restore(flags);
>
> which trips the new consistency check: local_irq_save() and
> local_irq_restore() are expected to be paired and sanely nested,
> so local_irq_restore() must be called with irqs disabled.
>
> Fix it by adding a local_irq_disable() after safe_halt() so that
> irqs are disabled again before local_irq_restore() runs.
>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
> arch/x86/kernel/kvm.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5e78e01..688c84a 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
> */
> if (arch_irqs_disabled_flags(flags))
> halt();
> - else
> + else {
> safe_halt();
> + local_irq_disable();
> + }
>
> out:
> local_irq_restore(flags);
> --
> 2.7.4
>
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-02-23 5:28 ` Wanpeng Li
@ 2021-03-11 15:54 ` Sean Christopherson
2021-03-11 18:09 ` Paolo Bonzini
2021-03-13 0:57 ` Wanpeng Li
0 siblings, 2 replies; 8+ messages in thread
From: Sean Christopherson @ 2021-03-11 15:54 UTC (permalink / raw)
To: Wanpeng Li
Cc: LKML, kvm, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
Jim Mattson, Joerg Roedel, Mark Rutland, Thomas Gleixner
On Tue, Feb 23, 2021, Wanpeng Li wrote:
> On Tue, 23 Feb 2021 at 13:25, Wanpeng Li <kernellwp@gmail.com> wrote:
> >
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
> > emits the splat below during boot:
> >
> > raw_local_irq_restore() called with IRQs enabled
> > WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
> > Modules linked in: hid_generic usbhid hid
> > CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
> > RIP: 0010:warn_bogus_irq_restore+0x26/0x30
> > Call Trace:
> > kvm_wait+0x76/0x90
> > __pv_queued_spin_lock_slowpath+0x285/0x2e0
> > do_raw_spin_lock+0xc9/0xd0
> > _raw_spin_lock+0x59/0x70
> > lockref_get_not_dead+0xf/0x50
> > __legitimize_path+0x31/0x60
> > legitimize_root+0x37/0x50
> > try_to_unlazy_next+0x7f/0x1d0
> > lookup_fast+0xb0/0x170
> > path_openat+0x165/0x9b0
> > do_filp_open+0x99/0x110
> > do_sys_openat2+0x1f1/0x2e0
> > do_sys_open+0x5c/0x80
> > __x64_sys_open+0x21/0x30
> > do_syscall_64+0x32/0x50
> > entry_SYSCALL_64_after_hwframe+0x44/0xae
> >
> > The irqflags handling in kvm_wait() ends up doing:
> >
> > local_irq_save(flags);
> > safe_halt();
> > local_irq_restore(flags);
> >
> > which trips the new consistency check: local_irq_save() and
> > local_irq_restore() are expected to be paired and sanely nested,
> > so local_irq_restore() must be called with irqs disabled.
> >
> > Fix it by adding a local_irq_disable() after safe_halt() so that
> > irqs are disabled again before local_irq_restore() runs.
> >
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > ---
> > arch/x86/kernel/kvm.c | 4 +++-
> > 1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 5e78e01..688c84a 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
> > */
> > if (arch_irqs_disabled_flags(flags))
> > halt();
> > - else
> > + else {
> > safe_halt();
> > + local_irq_disable();
> > + }
>
> An alternative fix:
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5e78e01..7127aef 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -836,12 +836,13 @@ static void kvm_kick_cpu(int cpu)
>
> static void kvm_wait(u8 *ptr, u8 val)
> {
> - unsigned long flags;
> + bool disabled = irqs_disabled();
>
> if (in_nmi())
> return;
>
> - local_irq_save(flags);
> + if (!disabled)
> + local_irq_disable();
>
> if (READ_ONCE(*ptr) != val)
> goto out;
> @@ -851,13 +852,14 @@ static void kvm_wait(u8 *ptr, u8 val)
> * for irq enabled case to avoid hang when lock info is overwritten
> * in irq spinlock slowpath and no spurious interrupt occur to save us.
> */
> - if (arch_irqs_disabled_flags(flags))
> + if (disabled)
> halt();
> else
> safe_halt();
>
> out:
> - local_irq_restore(flags);
> + if (!disabled)
> + local_irq_enable();
> }
>
> #ifdef CONFIG_X86_32
A third option would be to split the paths. In the end, it's only the ptr/val
line that's shared.
---
arch/x86/kernel/kvm.c | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5e78e01ca3b4..78bb0fae3982 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -836,28 +836,25 @@ static void kvm_kick_cpu(int cpu)
static void kvm_wait(u8 *ptr, u8 val)
{
- unsigned long flags;
-
if (in_nmi())
return;
- local_irq_save(flags);
-
- if (READ_ONCE(*ptr) != val)
- goto out;
-
/*
* halt until it's our turn and kicked. Note that we do safe halt
* for irq enabled case to avoid hang when lock info is overwritten
* in irq spinlock slowpath and no spurious interrupt occur to save us.
*/
- if (arch_irqs_disabled_flags(flags))
- halt();
- else
- safe_halt();
+ if (irqs_disabled()) {
+ if (READ_ONCE(*ptr) == val)
+ halt();
+ } else {
+ local_irq_disable();
-out:
- local_irq_restore(flags);
+ if (READ_ONCE(*ptr) == val)
+ safe_halt();
+
+ local_irq_enable();
+ }
}
#ifdef CONFIG_X86_32
--
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-03-11 15:54 ` Sean Christopherson
@ 2021-03-11 18:09 ` Paolo Bonzini
2021-03-13 0:57 ` Wanpeng Li
1 sibling, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2021-03-11 18:09 UTC (permalink / raw)
To: Sean Christopherson, Wanpeng Li
Cc: LKML, kvm, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
Joerg Roedel, Mark Rutland, Thomas Gleixner
On 11/03/21 16:54, Sean Christopherson wrote:
> On Tue, Feb 23, 2021, Wanpeng Li wrote:
>> On Tue, 23 Feb 2021 at 13:25, Wanpeng Li <kernellwp@gmail.com> wrote:
>>>
>>> From: Wanpeng Li <wanpengli@tencent.com>
>>>
>>> After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
>>> emits the splat below during boot:
>>>
>>> raw_local_irq_restore() called with IRQs enabled
>>> WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
>>> Modules linked in: hid_generic usbhid hid
>>> CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
>>> RIP: 0010:warn_bogus_irq_restore+0x26/0x30
>>> Call Trace:
>>> kvm_wait+0x76/0x90
>>> __pv_queued_spin_lock_slowpath+0x285/0x2e0
>>> do_raw_spin_lock+0xc9/0xd0
>>> _raw_spin_lock+0x59/0x70
>>> lockref_get_not_dead+0xf/0x50
>>> __legitimize_path+0x31/0x60
>>> legitimize_root+0x37/0x50
>>> try_to_unlazy_next+0x7f/0x1d0
>>> lookup_fast+0xb0/0x170
>>> path_openat+0x165/0x9b0
>>> do_filp_open+0x99/0x110
>>> do_sys_openat2+0x1f1/0x2e0
>>> do_sys_open+0x5c/0x80
>>> __x64_sys_open+0x21/0x30
>>> do_syscall_64+0x32/0x50
>>> entry_SYSCALL_64_after_hwframe+0x44/0xae
>>>
>>> The irqflags handling in kvm_wait() ends up doing:
>>>
>>> local_irq_save(flags);
>>> safe_halt();
>>> local_irq_restore(flags);
>>>
>>> which trips the new consistency check: local_irq_save() and
>>> local_irq_restore() are expected to be paired and sanely nested,
>>> so local_irq_restore() must be called with irqs disabled.
>>>
>>> Fix it by adding a local_irq_disable() after safe_halt() so that
>>> irqs are disabled again before local_irq_restore() runs.
>>>
>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>> Cc: Thomas Gleixner <tglx@linutronix.de>
>>> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
>>> ---
>>> arch/x86/kernel/kvm.c | 4 +++-
>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> index 5e78e01..688c84a 100644
>>> --- a/arch/x86/kernel/kvm.c
>>> +++ b/arch/x86/kernel/kvm.c
>>> @@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
>>> */
>>> if (arch_irqs_disabled_flags(flags))
>>> halt();
>>> - else
>>> + else {
>>> safe_halt();
>>> + local_irq_disable();
>>> + }
>>
>> An alternative fix:
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 5e78e01..7127aef 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -836,12 +836,13 @@ static void kvm_kick_cpu(int cpu)
>>
>> static void kvm_wait(u8 *ptr, u8 val)
>> {
>> - unsigned long flags;
>> + bool disabled = irqs_disabled();
>>
>> if (in_nmi())
>> return;
>>
>> - local_irq_save(flags);
>> + if (!disabled)
>> + local_irq_disable();
>>
>> if (READ_ONCE(*ptr) != val)
>> goto out;
>> @@ -851,13 +852,14 @@ static void kvm_wait(u8 *ptr, u8 val)
>> * for irq enabled case to avoid hang when lock info is overwritten
>> * in irq spinlock slowpath and no spurious interrupt occur to save us.
>> */
>> - if (arch_irqs_disabled_flags(flags))
>> + if (disabled)
>> halt();
>> else
>> safe_halt();
>>
>> out:
>> - local_irq_restore(flags);
>> + if (!disabled)
>> + local_irq_enable();
>> }
>>
>> #ifdef CONFIG_X86_32
>
> A third option would be to split the paths. In the end, it's only the ptr/val
> line that's shared.
>
> ---
> arch/x86/kernel/kvm.c | 23 ++++++++++-------------
> 1 file changed, 10 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5e78e01ca3b4..78bb0fae3982 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -836,28 +836,25 @@ static void kvm_kick_cpu(int cpu)
>
> static void kvm_wait(u8 *ptr, u8 val)
> {
> - unsigned long flags;
> -
> if (in_nmi())
> return;
>
> - local_irq_save(flags);
> -
> - if (READ_ONCE(*ptr) != val)
> - goto out;
> -
> /*
> * halt until it's our turn and kicked. Note that we do safe halt
> * for irq enabled case to avoid hang when lock info is overwritten
> * in irq spinlock slowpath and no spurious interrupt occur to save us.
> */
> - if (arch_irqs_disabled_flags(flags))
> - halt();
> - else
> - safe_halt();
> + if (irqs_disabled()) {
> + if (READ_ONCE(*ptr) == val)
> + halt();
> + } else {
> + local_irq_disable();
>
> -out:
> - local_irq_restore(flags);
> + if (READ_ONCE(*ptr) == val)
> + safe_halt();
> +
> + local_irq_enable();
> + }
> }
>
> #ifdef CONFIG_X86_32
> --
>
I'll send this one tomorrow.
Paolo
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-03-11 15:54 ` Sean Christopherson
2021-03-11 18:09 ` Paolo Bonzini
@ 2021-03-13 0:57 ` Wanpeng Li
2021-03-13 9:29 ` Paolo Bonzini
1 sibling, 1 reply; 8+ messages in thread
From: Wanpeng Li @ 2021-03-13 0:57 UTC (permalink / raw)
To: Sean Christopherson
Cc: LKML, kvm, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li,
Jim Mattson, Joerg Roedel, Mark Rutland, Thomas Gleixner
On Thu, 11 Mar 2021 at 23:54, Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Feb 23, 2021, Wanpeng Li wrote:
> > On Tue, 23 Feb 2021 at 13:25, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >
> > > From: Wanpeng Li <wanpengli@tencent.com>
> > >
> > > After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
> > > emits the splat below during boot:
> > >
> > > raw_local_irq_restore() called with IRQs enabled
> > > WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
> > > Modules linked in: hid_generic usbhid hid
> > > CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
> > > RIP: 0010:warn_bogus_irq_restore+0x26/0x30
> > > Call Trace:
> > > kvm_wait+0x76/0x90
> > > __pv_queued_spin_lock_slowpath+0x285/0x2e0
> > > do_raw_spin_lock+0xc9/0xd0
> > > _raw_spin_lock+0x59/0x70
> > > lockref_get_not_dead+0xf/0x50
> > > __legitimize_path+0x31/0x60
> > > legitimize_root+0x37/0x50
> > > try_to_unlazy_next+0x7f/0x1d0
> > > lookup_fast+0xb0/0x170
> > > path_openat+0x165/0x9b0
> > > do_filp_open+0x99/0x110
> > > do_sys_openat2+0x1f1/0x2e0
> > > do_sys_open+0x5c/0x80
> > > __x64_sys_open+0x21/0x30
> > > do_syscall_64+0x32/0x50
> > > entry_SYSCALL_64_after_hwframe+0x44/0xae
> > >
> > > The irqflags handling in kvm_wait() ends up doing:
> > >
> > > local_irq_save(flags);
> > > safe_halt();
> > > local_irq_restore(flags);
> > >
> > > which trips the new consistency check: local_irq_save() and
> > > local_irq_restore() are expected to be paired and sanely nested,
> > > so local_irq_restore() must be called with irqs disabled.
> > >
> > > Fix it by adding a local_irq_disable() after safe_halt() so that
> > > irqs are disabled again before local_irq_restore() runs.
> > >
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>
> > > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > > ---
> > > arch/x86/kernel/kvm.c | 4 +++-
> > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > > index 5e78e01..688c84a 100644
> > > --- a/arch/x86/kernel/kvm.c
> > > +++ b/arch/x86/kernel/kvm.c
> > > @@ -853,8 +853,10 @@ static void kvm_wait(u8 *ptr, u8 val)
> > > */
> > > if (arch_irqs_disabled_flags(flags))
> > > halt();
> > > - else
> > > + else {
> > > safe_halt();
> > > + local_irq_disable();
> > > + }
> >
> > An alternative fix:
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 5e78e01..7127aef 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -836,12 +836,13 @@ static void kvm_kick_cpu(int cpu)
> >
> > static void kvm_wait(u8 *ptr, u8 val)
> > {
> > - unsigned long flags;
> > + bool disabled = irqs_disabled();
> >
> > if (in_nmi())
> > return;
> >
> > - local_irq_save(flags);
> > + if (!disabled)
> > + local_irq_disable();
> >
> > if (READ_ONCE(*ptr) != val)
> > goto out;
> > @@ -851,13 +852,14 @@ static void kvm_wait(u8 *ptr, u8 val)
> > * for irq enabled case to avoid hang when lock info is overwritten
> > * in irq spinlock slowpath and no spurious interrupt occur to save us.
> > */
> > - if (arch_irqs_disabled_flags(flags))
> > + if (disabled)
> > halt();
> > else
> > safe_halt();
> >
> > out:
> > - local_irq_restore(flags);
> > + if (!disabled)
> > + local_irq_enable();
> > }
> >
> > #ifdef CONFIG_X86_32
>
> A third option would be to split the paths. In the end, it's only the ptr/val
> line that's shared.
I just sent out a formal patch for my alternative fix; I think the
whole logic in kvm_wait() is clearer with my version.
>
> ---
> arch/x86/kernel/kvm.c | 23 ++++++++++-------------
> 1 file changed, 10 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5e78e01ca3b4..78bb0fae3982 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -836,28 +836,25 @@ static void kvm_kick_cpu(int cpu)
>
> static void kvm_wait(u8 *ptr, u8 val)
> {
> - unsigned long flags;
> -
> if (in_nmi())
> return;
>
> - local_irq_save(flags);
> -
> - if (READ_ONCE(*ptr) != val)
> - goto out;
> -
> /*
> * halt until it's our turn and kicked. Note that we do safe halt
> * for irq enabled case to avoid hang when lock info is overwritten
> * in irq spinlock slowpath and no spurious interrupt occur to save us.
> */
> - if (arch_irqs_disabled_flags(flags))
> - halt();
> - else
> - safe_halt();
> + if (irqs_disabled()) {
> + if (READ_ONCE(*ptr) == val)
> + halt();
> + } else {
> + local_irq_disable();
>
> -out:
> - local_irq_restore(flags);
> + if (READ_ONCE(*ptr) == val)
> + safe_halt();
> +
> + local_irq_enable();
> + }
> }
>
> #ifdef CONFIG_X86_32
> --
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-03-13 0:57 ` Wanpeng Li
@ 2021-03-13 9:29 ` Paolo Bonzini
2021-03-15 6:56 ` Wanpeng Li
0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2021-03-13 9:29 UTC (permalink / raw)
To: Wanpeng Li, Sean Christopherson
Cc: LKML, kvm, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
Joerg Roedel, Mark Rutland, Thomas Gleixner
On 13/03/21 01:57, Wanpeng Li wrote:
>> A third option would be to split the paths. In the end, it's only the ptr/val
>> line that's shared.
> I just sent out a formal patch for my alternative fix; I think the
> whole logic in kvm_wait() is clearer with my version.
>
I don't know, having three "if"s in 10 lines of code is a bit daunting.
Paolo
* Re: [PATCH] x86/kvm: Fix broken irq restoration in kvm_wait
2021-03-13 9:29 ` Paolo Bonzini
@ 2021-03-15 6:56 ` Wanpeng Li
0 siblings, 0 replies; 8+ messages in thread
From: Wanpeng Li @ 2021-03-15 6:56 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Sean Christopherson, LKML, kvm, Vitaly Kuznetsov, Wanpeng Li,
Jim Mattson, Joerg Roedel, Mark Rutland, Thomas Gleixner
On Sat, 13 Mar 2021 at 17:33, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 13/03/21 01:57, Wanpeng Li wrote:
> >> A third option would be to split the paths. In the end, it's only the ptr/val
> >> line that's shared.
> > I just sent out a formal patch for my alternative fix, I think the
> > whole logic in kvm_wait is more clear w/ my version.
> >
>
> I don't know, having three "if"s in 10 lines of code is a bit daunting.
Fair enough, just sent out v3 per Sean's suggestion.
Wanpeng