linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH] x86: call smp vmxoff in smp stop
@ 2017-01-04 10:11 Xishi Qiu
  2017-01-05  1:45 ` [RFC PATCH V2] " Xishi Qiu
  2017-01-14  1:36 ` [PATCH] " Xishi Qiu
  0 siblings, 2 replies; 15+ messages in thread
From: Xishi Qiu @ 2017-01-04 10:11 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini
  Cc: LKML, Fengtiantian, Xiexiuqi

From: f00186668 <fengtiantian@huawei.com>

We need to disable VMX on all CPUs before stop cpu when OS panic, otherwisewe
risk hanging up the machine, because the CPU ignore INIT signals when VMX is enabled.
In kernel mainline this issue existence.

Signed-off-by: f00186668 <fengtiantian@huawei.com>
---
 arch/x86/kernel/smp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 68f8cc2..6b64c6b 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -162,6 +162,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
 		return NMI_HANDLED;
 
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 
 	return NMI_HANDLED;
@@ -174,6 +175,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 asmlinkage __visible void smp_reboot_interrupt(void)
 {
 	ipi_entering_ack_irq();
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 	irq_exit();
 }
-- 
1.8.3.1 

^ permalink raw reply related	[flat|nested] 15+ messages in thread
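For context on the patch above: `cpu_emergency_vmxoff()` (declared in `<asm/virtext.h>` in kernels of this era) is written to be safe to call from NMI/panic context on any CPU — it only leaves VMX operation if CR4.VMXE indicates VMX may be enabled. The control flow can be sketched as a user-space simulation; this is an illustrative model, not kernel source, and `fake_cr4` is a stand-in for the real CR4 control register (the real helper issues the VMXOFF instruction):

```c
#include <assert.h>
#include <stdint.h>

#define X86_CR4_VMXE (1UL << 13)   /* CR4 bit 13: VMX-enable */

/* Stand-in for the CR4 control register so the flow runs in user space. */
static uint64_t fake_cr4;

/* Models cpu_vmx_enabled(): VMXOFF may only be executed if CR4.VMXE is set,
 * otherwise the instruction would fault. */
static int cpu_vmx_enabled(void)
{
	return (fake_cr4 & X86_CR4_VMXE) != 0;
}

/* In the kernel this wraps the VMXOFF instruction; here we only model its
 * visible effect: the CPU leaves VMX operation and VMXE is cleared. */
static void cpu_vmxoff(void)
{
	fake_cr4 &= ~X86_CR4_VMXE;
}

/* Models cpu_emergency_vmxoff(): a no-op when VMX is already off, so it is
 * safe to call unconditionally before stop_this_cpu(). */
static void cpu_emergency_vmxoff(void)
{
	if (cpu_vmx_enabled())
		cpu_vmxoff();
}
```

This conditional check is what makes the one-line additions in the patch safe even on CPUs that never entered VMX operation.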

* [RFC PATCH V2] x86: call smp vmxoff in smp stop
  2017-01-04 10:11 [RFC PATCH] x86: call smp vmxoff in smp stop Xishi Qiu
@ 2017-01-05  1:45 ` Xishi Qiu
  2017-01-12 13:55   ` Paolo Bonzini
  2017-01-14  1:42   ` [PATCH V3] " Xishi Qiu
  2017-01-14  1:36 ` [PATCH] " Xishi Qiu
  1 sibling, 2 replies; 15+ messages in thread
From: Xishi Qiu @ 2017-01-05  1:45 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini
  Cc: LKML, Fengtiantian, Xiexiuqi

From: f00186668 <fengtiantian@huawei.com>

We need to disable VMX on all CPUs before stop cpu when OS panic,
otherwisewe risk hanging up the machine, because the CPU ignore INIT
signals when VMX is enabled. In kernel mainline this issue existence.

Signed-off-by: f00186668 <fengtiantian@huawei.com>
---
 arch/x86/kernel/smp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 68f8cc2..b574d55 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -33,6 +33,7 @@
 #include <asm/mce.h>
 #include <asm/trace/irq_vectors.h>
 #include <asm/kexec.h>
+#include <asm/virtext.h>
 
 /*
  *	Some notes on x86 processor bugs affecting SMP operation:
@@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
 		return NMI_HANDLED;
 
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 
 	return NMI_HANDLED;
@@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 asmlinkage __visible void smp_reboot_interrupt(void)
 {
 	ipi_entering_ack_irq();
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 	irq_exit();
 }
-- 
1.8.3.1


* Re: [RFC PATCH V2] x86: call smp vmxoff in smp stop
  2017-01-05  1:45 ` [RFC PATCH V2] " Xishi Qiu
@ 2017-01-12 13:55   ` Paolo Bonzini
  2017-01-14  1:42   ` [PATCH V3] " Xishi Qiu
  1 sibling, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2017-01-12 13:55 UTC (permalink / raw)
  To: Xishi Qiu, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez
  Cc: LKML, Fengtiantian, Xiexiuqi



On 05/01/2017 02:45, Xishi Qiu wrote:
> From: f00186668 <fengtiantian@huawei.com>
> 
> We need to disable VMX on all CPUs before stop cpu when OS panic,
> otherwisewe risk hanging up the machine, because the CPU ignore INIT
> signals when VMX is enabled. In kernel mainline this issue existence.
> 
> Signed-off-by: f00186668 <fengtiantian@huawei.com>

Looks good, but you need to put your colleague's real name (Tiantian
Feng?) in the Signed-off-by line, and you need another Signed-off-by
line for yourself.

Thanks,

Paolo

> ---
>  arch/x86/kernel/smp.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 68f8cc2..b574d55 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -33,6 +33,7 @@
>  #include <asm/mce.h>
>  #include <asm/trace/irq_vectors.h>
>  #include <asm/kexec.h>
> +#include <asm/virtext.h>
>  
>  /*
>   *	Some notes on x86 processor bugs affecting SMP operation:
> @@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>  		return NMI_HANDLED;
>  
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  
>  	return NMI_HANDLED;
> @@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  asmlinkage __visible void smp_reboot_interrupt(void)
>  {
>  	ipi_entering_ack_irq();
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  	irq_exit();
>  }
> 


* [PATCH] x86: call smp vmxoff in smp stop
  2017-01-04 10:11 [RFC PATCH] x86: call smp vmxoff in smp stop Xishi Qiu
  2017-01-05  1:45 ` [RFC PATCH V2] " Xishi Qiu
@ 2017-01-14  1:36 ` Xishi Qiu
  2017-01-14  1:41   ` Xishi Qiu
  2017-01-15  0:45   ` kbuild test robot
  1 sibling, 2 replies; 15+ messages in thread
From: Xishi Qiu @ 2017-01-14  1:36 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini
  Cc: LKML, Fengtiantian, Xiexiuqi

From: Tiantian Feng <fengtiantian@huawei.com>

We need to disable VMX on all CPUs before stop cpu when OS panic, otherwisewe
risk hanging up the machine, because the CPU ignore INIT signals when VMX is enabled.
In kernel mainline this issue existence.

Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
---
 arch/x86/kernel/smp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 68f8cc2..6b64c6b 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -162,6 +162,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
 		return NMI_HANDLED;
 
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 
 	return NMI_HANDLED;
@@ -174,6 +175,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 asmlinkage __visible void smp_reboot_interrupt(void)
 {
 	ipi_entering_ack_irq();
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 	irq_exit();
 }
-- 
1.8.3.1


* Re: [PATCH] x86: call smp vmxoff in smp stop
  2017-01-14  1:36 ` [PATCH] " Xishi Qiu
@ 2017-01-14  1:41   ` Xishi Qiu
  2017-01-15  0:45   ` kbuild test robot
  1 sibling, 0 replies; 15+ messages in thread
From: Xishi Qiu @ 2017-01-14  1:41 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini
  Cc: LKML, Fengtiantian, Xiexiuqi

On 2017/1/14 9:36, Xishi Qiu wrote:

> From: Tiantian Feng <fengtiantian@huawei.com>
> 
> We need to disable VMX on all CPUs before stop cpu when OS panic, otherwisewe
> risk hanging up the machine, because the CPU ignore INIT signals when VMX is enabled.
> In kernel mainline this issue existence.
> 
> Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
> ---

Sorry, I missed something, please ignore this one, thanks.

>  arch/x86/kernel/smp.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 68f8cc2..6b64c6b 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -162,6 +162,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>  		return NMI_HANDLED;
>  
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  
>  	return NMI_HANDLED;
> @@ -174,6 +175,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  asmlinkage __visible void smp_reboot_interrupt(void)
>  {
>  	ipi_entering_ack_irq();
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  	irq_exit();
>  }


* [PATCH V3] x86: call smp vmxoff in smp stop
  2017-01-05  1:45 ` [RFC PATCH V2] " Xishi Qiu
  2017-01-12 13:55   ` Paolo Bonzini
@ 2017-01-14  1:42   ` Xishi Qiu
  2017-01-17 15:18     ` Paolo Bonzini
  2017-01-18 11:32     ` [PATCH V4] " Xishi Qiu
  1 sibling, 2 replies; 15+ messages in thread
From: Xishi Qiu @ 2017-01-14  1:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini
  Cc: LKML, Fengtiantian, Xiexiuqi

From: Tiantian Feng <fengtiantian@huawei.com>

We need to disable VMX on all CPUs before stop cpu when OS panic,
otherwisewe risk hanging up the machine, because the CPU ignore INIT
signals when VMX is enabled. In kernel mainline this issue existence.

Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
---
 arch/x86/kernel/smp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 68f8cc2..b574d55 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -33,6 +33,7 @@
 #include <asm/mce.h>
 #include <asm/trace/irq_vectors.h>
 #include <asm/kexec.h>
+#include <asm/virtext.h>
 
 /*
  *	Some notes on x86 processor bugs affecting SMP operation:
@@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
 		return NMI_HANDLED;
 
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 
 	return NMI_HANDLED;
@@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 asmlinkage __visible void smp_reboot_interrupt(void)
 {
 	ipi_entering_ack_irq();
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 	irq_exit();
 }
-- 
1.8.3.1 


* Re: [PATCH] x86: call smp vmxoff in smp stop
  2017-01-14  1:36 ` [PATCH] " Xishi Qiu
  2017-01-14  1:41   ` Xishi Qiu
@ 2017-01-15  0:45   ` kbuild test robot
  1 sibling, 0 replies; 15+ messages in thread
From: kbuild test robot @ 2017-01-15  0:45 UTC (permalink / raw)
  To: Xishi Qiu
  Cc: kbuild-all, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini, LKML, Fengtiantian, Xiexiuqi


Hi Tiantian,

[auto build test ERROR on tip/auto-latest]
[also build test ERROR on v4.10-rc3 next-20170113]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Xishi-Qiu/x86-call-smp-vmxoff-in-smp-stop/20170115-075446
config: x86_64-randconfig-x017-201703 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   arch/x86/kernel/smp.c: In function 'smp_stop_nmi_callback':
>> arch/x86/kernel/smp.c:165:2: error: implicit declaration of function 'cpu_emergency_vmxoff' [-Werror=implicit-function-declaration]
     cpu_emergency_vmxoff();
     ^~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors

vim +/cpu_emergency_vmxoff +165 arch/x86/kernel/smp.c

   159	static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
   160	{
   161		/* We are registered on stopping cpu too, avoid spurious NMI */
   162		if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
   163			return NMI_HANDLED;
   164	
 > 165		cpu_emergency_vmxoff();
   166		stop_this_cpu(NULL);
   167	
   168		return NMI_HANDLED;

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 27500 bytes --]

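The robot's failure is consistent with this [PATCH] being a resend of the original RFC body rather than of V2: it lacks the header include that V2 added, so the declaration of `cpu_emergency_vmxoff()` is not visible in `smp.c` and GCC reports an implicit declaration. The one-line fix is the same hunk V2 and V3 already carry:

```diff
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -33,6 +33,7 @@
 #include <asm/mce.h>
 #include <asm/trace/irq_vectors.h>
 #include <asm/kexec.h>
+#include <asm/virtext.h>
```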

* Re: [PATCH V3] x86: call smp vmxoff in smp stop
  2017-01-14  1:42   ` [PATCH V3] " Xishi Qiu
@ 2017-01-17 15:18     ` Paolo Bonzini
  2017-01-18  2:19       ` Xishi Qiu
  2017-01-18 11:32     ` [PATCH V4] " Xishi Qiu
  1 sibling, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2017-01-17 15:18 UTC (permalink / raw)
  To: Xishi Qiu, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez
  Cc: LKML, Fengtiantian, Xiexiuqi



On 14/01/2017 02:42, Xishi Qiu wrote:
> From: Tiantian Feng <fengtiantian@huawei.com>
> 
> We need to disable VMX on all CPUs before stop cpu when OS panic,
> otherwisewe risk hanging up the machine, because the CPU ignore INIT
> signals when VMX is enabled. In kernel mainline this issue existence.
> 
> Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>

Xishi,

it's still missing your Signed-off-by.

Paolo

> ---
>  arch/x86/kernel/smp.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 68f8cc2..b574d55 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -33,6 +33,7 @@
>  #include <asm/mce.h>
>  #include <asm/trace/irq_vectors.h>
>  #include <asm/kexec.h>
> +#include <asm/virtext.h>
>  
>  /*
>   *	Some notes on x86 processor bugs affecting SMP operation:
> @@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>  		return NMI_HANDLED;
>  
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  
>  	return NMI_HANDLED;
> @@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  asmlinkage __visible void smp_reboot_interrupt(void)
>  {
>  	ipi_entering_ack_irq();
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  	irq_exit();
>  }
> 


* Re: [PATCH V3] x86: call smp vmxoff in smp stop
  2017-01-17 15:18     ` Paolo Bonzini
@ 2017-01-18  2:19       ` Xishi Qiu
  2017-01-18  9:30         ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Xishi Qiu @ 2017-01-18  2:19 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, LKML, Fengtiantian, Xiexiuqi

On 2017/1/17 23:18, Paolo Bonzini wrote:

> 
> 
> On 14/01/2017 02:42, Xishi Qiu wrote:
>> From: Tiantian Feng <fengtiantian@huawei.com>
>>
>> We need to disable VMX on all CPUs before stop cpu when OS panic,
>> otherwisewe risk hanging up the machine, because the CPU ignore INIT
>> signals when VMX is enabled. In kernel mainline this issue existence.
>>
>> Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
> 
> Xishi,
> 
> it's still missing your Signed-off-by.
> 

Hi Paolo,

This patch is from fengtiantian, and I just send it for him,
so still should add my SOB?

Thanks,
Xishi Qiu

> Paolo
> 
>> ---
>>  arch/x86/kernel/smp.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
>> index 68f8cc2..b574d55 100644
>> --- a/arch/x86/kernel/smp.c
>> +++ b/arch/x86/kernel/smp.c
>> @@ -33,6 +33,7 @@
>>  #include <asm/mce.h>
>>  #include <asm/trace/irq_vectors.h>
>>  #include <asm/kexec.h>
>> +#include <asm/virtext.h>
>>  
>>  /*
>>   *	Some notes on x86 processor bugs affecting SMP operation:
>> @@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>>  		return NMI_HANDLED;
>>  
>> +	cpu_emergency_vmxoff();
>>  	stop_this_cpu(NULL);
>>  
>>  	return NMI_HANDLED;
>> @@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>>  asmlinkage __visible void smp_reboot_interrupt(void)
>>  {
>>  	ipi_entering_ack_irq();
>> +	cpu_emergency_vmxoff();
>>  	stop_this_cpu(NULL);
>>  	irq_exit();
>>  }
>>
> 
> .
> 


* Re: [PATCH V3] x86: call smp vmxoff in smp stop
  2017-01-18  2:19       ` Xishi Qiu
@ 2017-01-18  9:30         ` Paolo Bonzini
  0 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2017-01-18  9:30 UTC (permalink / raw)
  To: Xishi Qiu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, LKML, Fengtiantian, Xiexiuqi



On 18/01/2017 03:19, Xishi Qiu wrote:
> On 2017/1/17 23:18, Paolo Bonzini wrote:
> 
>>
>>
>> On 14/01/2017 02:42, Xishi Qiu wrote:
>>> From: Tiantian Feng <fengtiantian@huawei.com>
>>>
>>> We need to disable VMX on all CPUs before stop cpu when OS panic,
>>> otherwisewe risk hanging up the machine, because the CPU ignore INIT
>>> signals when VMX is enabled. In kernel mainline this issue existence.
>>>
>>> Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
>>
>> Xishi,
>>
>> it's still missing your Signed-off-by.
>>
> 
> Hi Paolo,
> 
> This patch is from fengtiantian, and I just send it for him,
> so still should add my SOB?

Yes, both of them should be there.  The "signed-off-by" is a sequence of
all people that managed the patch---so that would be Tiantian first,
then you, then an x86 maintainer.

Paolo

> Thanks,
> Xishi Qiu
> 
>> Paolo
>>
>>> ---
>>>  arch/x86/kernel/smp.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
>>> index 68f8cc2..b574d55 100644
>>> --- a/arch/x86/kernel/smp.c
>>> +++ b/arch/x86/kernel/smp.c
>>> @@ -33,6 +33,7 @@
>>>  #include <asm/mce.h>
>>>  #include <asm/trace/irq_vectors.h>
>>>  #include <asm/kexec.h>
>>> +#include <asm/virtext.h>
>>>  
>>>  /*
>>>   *	Some notes on x86 processor bugs affecting SMP operation:
>>> @@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>>>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>>>  		return NMI_HANDLED;
>>>  
>>> +	cpu_emergency_vmxoff();
>>>  	stop_this_cpu(NULL);
>>>  
>>>  	return NMI_HANDLED;
>>> @@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>>>  asmlinkage __visible void smp_reboot_interrupt(void)
>>>  {
>>>  	ipi_entering_ack_irq();
>>> +	cpu_emergency_vmxoff();
>>>  	stop_this_cpu(NULL);
>>>  	irq_exit();
>>>  }
>>>
>>
>> .
>>
> 
> 
> 


* [PATCH V4] x86: call smp vmxoff in smp stop
  2017-01-14  1:42   ` [PATCH V3] " Xishi Qiu
  2017-01-17 15:18     ` Paolo Bonzini
@ 2017-01-18 11:32     ` Xishi Qiu
  2017-04-18 13:34       ` Paolo Bonzini
  1 sibling, 1 reply; 15+ messages in thread
From: Xishi Qiu @ 2017-01-18 11:32 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, pbonzini
  Cc: LKML, Fengtiantian, Xiexiuqi

From: Tiantian Feng <fengtiantian@huawei.com>

We need to disable VMX on all CPUs before stop cpu when OS panic,
otherwisewe risk hanging up the machine, because the CPU ignore INIT
signals when VMX is enabled. In kernel mainline this issue existence.

Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
---
 arch/x86/kernel/smp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 68f8cc2..b574d55 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -33,6 +33,7 @@
 #include <asm/mce.h>
 #include <asm/trace/irq_vectors.h>
 #include <asm/kexec.h>
+#include <asm/virtext.h>
 
 /*
  *	Some notes on x86 processor bugs affecting SMP operation:
@@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
 		return NMI_HANDLED;
 
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 
 	return NMI_HANDLED;
@@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
 asmlinkage __visible void smp_reboot_interrupt(void)
 {
 	ipi_entering_ack_irq();
+	cpu_emergency_vmxoff();
 	stop_this_cpu(NULL);
 	irq_exit();
 }
-- 
1.8.3.1 



* Re: [PATCH V4] x86: call smp vmxoff in smp stop
  2017-01-18 11:32     ` [PATCH V4] " Xishi Qiu
@ 2017-04-18 13:34       ` Paolo Bonzini
  2017-04-19  8:02         ` Ingo Molnar
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2017-04-18 13:34 UTC (permalink / raw)
  To: Xishi Qiu, Ingo Molnar, H. Peter Anvin
  Cc: Thomas Gleixner, the arch/x86 maintainers, wanpeng.li,
	Andrew Morton, hidehiro.kawai.ez, LKML, Fengtiantian, Xiexiuqi

Ingo, can you put this in tip?

Thanks,

Paolo

On 18/01/2017 12:32, Xishi Qiu wrote:
> From: Tiantian Feng <fengtiantian@huawei.com>
> 
> We need to disable VMX on all CPUs before stop cpu when OS panic,
> otherwisewe risk hanging up the machine, because the CPU ignore INIT
> signals when VMX is enabled. In kernel mainline this issue existence.
> 
> Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
> Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
> ---
>  arch/x86/kernel/smp.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 68f8cc2..b574d55 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -33,6 +33,7 @@
>  #include <asm/mce.h>
>  #include <asm/trace/irq_vectors.h>
>  #include <asm/kexec.h>
> +#include <asm/virtext.h>
>  
>  /*
>   *	Some notes on x86 processor bugs affecting SMP operation:
> @@ -162,6 +163,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  	if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
>  		return NMI_HANDLED;
>  
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  
>  	return NMI_HANDLED;
> @@ -174,6 +176,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
>  asmlinkage __visible void smp_reboot_interrupt(void)
>  {
>  	ipi_entering_ack_irq();
> +	cpu_emergency_vmxoff();
>  	stop_this_cpu(NULL);
>  	irq_exit();
>  }
> 


* Re: [PATCH V4] x86: call smp vmxoff in smp stop
  2017-04-18 13:34       ` Paolo Bonzini
@ 2017-04-19  8:02         ` Ingo Molnar
  2017-04-19  8:22           ` Paolo Bonzini
  0 siblings, 1 reply; 15+ messages in thread
From: Ingo Molnar @ 2017-04-19  8:02 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Xishi Qiu, Ingo Molnar, H. Peter Anvin, Thomas Gleixner,
	the arch/x86 maintainers, wanpeng.li, Andrew Morton,
	hidehiro.kawai.ez, LKML, Fengtiantian, Xiexiuqi


* Paolo Bonzini <pbonzini@redhat.com> wrote:

> Ingo, can you put this in tip?
> 
> Thanks,
> 
> Paolo
> 
> On 18/01/2017 12:32, Xishi Qiu wrote:
> > From: Tiantian Feng <fengtiantian@huawei.com>
> > 
> > We need to disable VMX on all CPUs before stop cpu when OS panic,
> > otherwisewe risk hanging up the machine, because the CPU ignore INIT
> > signals when VMX is enabled. In kernel mainline this issue existence.

Yes, but the changelog is atrocious:

 - title should describe the purpose, not the implementation

 - CPU is spelled 'CPU' once, then 'cpu' _in the same sentence_!

 - typos

 - spelling

 - the last sentence doesn't even parse ...

Still it's already at V4 and comes with two signoffs and what amounts to a 
maintainer Ack??

Thanks,

	Ingo


* Re: [PATCH V4] x86: call smp vmxoff in smp stop
  2017-04-19  8:02         ` Ingo Molnar
@ 2017-04-19  8:22           ` Paolo Bonzini
  2017-04-19  9:50             ` Ingo Molnar
  0 siblings, 1 reply; 15+ messages in thread
From: Paolo Bonzini @ 2017-04-19  8:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Xishi Qiu, Ingo Molnar, H. Peter Anvin, Thomas Gleixner,
	the arch/x86 maintainers, wanpeng li, Andrew Morton,
	hidehiro kawai ez, LKML, Fengtiantian, Xiexiuqi


> > On 18/01/2017 12:32, Xishi Qiu wrote:
> > > From: Tiantian Feng <fengtiantian@huawei.com>
> > > 
> > > We need to disable VMX on all CPUs before stop cpu when OS panic,
> > > otherwisewe risk hanging up the machine, because the CPU ignore INIT
> > > signals when VMX is enabled. In kernel mainline this issue existence.
> 
> Yes, but the changelog is atrocious:
> 
>  - title should describe the purpose, not the implementation
> 
>  - CPU is spelled 'CPU' once, then 'cpu' _in the same sentence_!
> 
>  - typos
> 
>  - spelling
> 
>  - the last sentence doesn't even parse ...
> 
> Still it's already at V4 and comes with two signoffs and what amounts to a
> maintainer Ack??

Well, the v2-v4 were really just about getting the signoffs right.  At some
point you just get desensitized about the changelog. :(

I'll post v5 with a rewritten commit message.

Paolo
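
[Editorially, a changelog addressing Ingo's points — purpose-oriented title, consistent "CPU", no typos — might read along these lines; this wording is hypothetical, not the actual v5:]

```
x86/smp: Disable VMX on all CPUs before stopping them on panic/reboot

A CPU that is in VMX root operation ignores INIT signals, so stopping
the other CPUs with an NMI or the reboot IPI while a hypervisor has VMX
enabled risks hanging the machine. Call cpu_emergency_vmxoff() before
stop_this_cpu() so every CPU leaves VMX operation first.

Signed-off-by: Tiantian Feng <fengtiantian@huawei.com>
Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
```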


* Re: [PATCH V4] x86: call smp vmxoff in smp stop
  2017-04-19  8:22           ` Paolo Bonzini
@ 2017-04-19  9:50             ` Ingo Molnar
  0 siblings, 0 replies; 15+ messages in thread
From: Ingo Molnar @ 2017-04-19  9:50 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Xishi Qiu, Ingo Molnar, H. Peter Anvin, Thomas Gleixner,
	the arch/x86 maintainers, wanpeng li, Andrew Morton,
	hidehiro kawai ez, LKML, Fengtiantian, Xiexiuqi


* Paolo Bonzini <pbonzini@redhat.com> wrote:

> 
> > > On 18/01/2017 12:32, Xishi Qiu wrote:
> > > > From: Tiantian Feng <fengtiantian@huawei.com>
> > > > 
> > > > We need to disable VMX on all CPUs before stop cpu when OS panic,
> > > > otherwisewe risk hanging up the machine, because the CPU ignore INIT
> > > > signals when VMX is enabled. In kernel mainline this issue existence.
> > 
> > Yes, but the changelog is atrocious:
> > 
> >  - title should describe the purpose, not the implementation
> > 
> >  - CPU is spelled 'CPU' once, then 'cpu' _in the same sentence_!
> > 
> >  - typos
> > 
> >  - spelling
> > 
> >  - the last sentence doesn't even parse ...
> > 
> > Still it's already at V4 and comes with two signoffs and what amounts to a
> > maintainer Ack??
> 
> Well, the v2-v4 were really just about getting the signoffs right.  At some
> point you just get desensitized about the changelog. :(
> 
> I'll post v5 with a rewritten commit message.

Thanks!

	Ingo
