From: "Jan Beulich" <JBeulich@suse.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [PATCH v2 1/3] x86/idle: re-arrange dead-idle handling
Date: Fri, 17 May 2019 04:11:54 -0600
Message-ID: <5CDE88EA0200007800230031@prv1-mh.provo.novell.com>
In-Reply-To: <5CDE88900200007800230027@prv1-mh.provo.novell.com>

In order to be able to wake parked CPUs from default_dead_idle() (for
them to then enter a different dead-idle routine), the function should
not itself loop. Move the loop into play_dead(), and use play_dead()
as well on the AP boot error path.

Furthermore, not least considering the comment in play_dead(), make
sure an NMI raised against a parked or fully offline CPU (for now this
would be a bug elsewhere, but that's about to change) won't invoke the
actual, full-blown NMI handler. Note however that this doesn't make
#MC any safer for fully offline CPUs.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Add spec_ctrl_exit_idle() to default_dead_idle(). Add #MC related
    remark to description.
---
Note: I had to drop the discussed acpi_dead_idle() adjustment again,
as it breaks booting with "smt=0" and "maxcpus=" on at least one of my
systems. I've not yet managed to understand why that would be.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -100,14 +100,20 @@ void default_dead_idle(void)
      */
     spec_ctrl_enter_idle(get_cpu_info());
     wbinvd();
-    for ( ; ; )
-        halt();
+    halt();
+    spec_ctrl_exit_idle(get_cpu_info());
 }
 
-static void play_dead(void)
+void play_dead(void)
 {
+    unsigned int cpu = smp_processor_id();
+
     local_irq_disable();
 
+    /* Change the NMI handler to a nop (see comment below). */
+    _set_gate_lower(&idt_tables[cpu][TRAP_nmi], SYS_DESC_irq_gate, 0,
+                    &trap_nop);
+
     /*
      * NOTE: After cpu_exit_clear, per-cpu variables may no longer accessible,
      * as they may be freed at any time if offline CPUs don't get parked. In
@@ -118,9 +124,10 @@ static void play_dead(void)
      * Consider very carefully when adding code to *dead_idle. Most hypervisor
      * subsystems are unsafe to call.
      */
-    cpu_exit_clear(smp_processor_id());
+    cpu_exit_clear(cpu);
 
-    (*dead_idle)();
+    for ( ; ; )
+        dead_idle();
 }
 
 static void idle_loop(void)
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -33,6 +33,7 @@
 #include <xen/serial.h>
 #include <xen/numa.h>
 #include <xen/cpu.h>
+#include <asm/cpuidle.h>
 #include <asm/current.h>
 #include <asm/mc146818rtc.h>
 #include <asm/desc.h>
@@ -209,8 +210,7 @@ static void smp_callin(void)
     halt:
         clear_local_APIC();
         spin_debug_enable();
-        cpu_exit_clear(cpu);
-        (*dead_idle)();
+        play_dead();
     }
 
     /* Allow the master to continue. */
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -20,6 +20,7 @@ int mwait_idle_init(struct notifier_bloc
 int cpuidle_init_cpu(unsigned int cpu);
 void default_dead_idle(void);
 void acpi_dead_idle(void);
+void play_dead(void);
 void trace_exit_reason(u32 *irq_traced);
 void update_idle_stats(struct acpi_processor_power *,
                        struct acpi_processor_cx *, uint64_t, uint64_t);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 33+ messages

2018-08-01 14:22 [PATCH 0/5] x86: more power-efficient CPU parking Jan Beulich
2018-08-01 14:31 ` [PATCH 1/5] x86/cpuidle: replace a pointless NULL check Jan Beulich
2018-08-01 14:33   ` Andrew Cooper
2018-08-01 15:12     ` Jan Beulich
2018-08-01 14:31 ` [PATCH 2/5] x86/idle: re-arrange dead-idle handling Jan Beulich
2018-09-07 17:08   ` Andrew Cooper
2018-09-10 10:13     ` Jan Beulich
2018-10-26 10:55   ` Ping: " Jan Beulich
2018-12-05 20:33     ` Andrew Cooper
2018-12-06  8:16       ` Jan Beulich
2018-08-01 14:32 ` [PATCH 3/5] x86/cpuidle: push parked CPUs into deeper sleep states when possible Jan Beulich
2018-10-26 10:56   ` Ping: " Jan Beulich
2018-08-01 14:33 ` [PATCH 4/5] x86/cpuidle: clean up Cx dumping Jan Beulich
2018-08-01 14:40   ` Andrew Cooper
2018-08-01 14:33 ` [PATCH 5/5] x86: place non-parked CPUs into wait-for-SIPI state after offlining Jan Beulich
2018-08-29  7:08 ` Ping: [PATCH 0/5] x86: more power-efficient CPU parking Jan Beulich
2018-08-29 17:01   ` Andrew Cooper
2018-08-30  7:29     ` Jan Beulich
[not found] ` <5B61C21202000000000FC1F1@prv1-mh.provo.novell.com>
[not found]   ` <5B61C21202000078001F8805@prv1-mh.provo.novell.com>
[not found]     ` <5B61C21202000000000FC6BD@prv1-mh.provo.novell.com>
[not found]       ` <5B61C212020000780020B6D8@prv1-mh.provo.novell.com>
[not found]         ` <5B61C21202000000000FF27E@prv1-mh.provo.novell.com>
[not found]           ` <5B61C2120200007800224310@prv1-mh.provo.novell.com>
2019-04-03 10:12             ` Jan Beulich
2019-04-03 11:14               ` Andrew Cooper
2019-04-03 12:43                 ` Jan Beulich
2019-04-03 14:44                   ` Andrew Cooper
2019-04-03 15:20                     ` Jan Beulich
[not found] ` <5B61C2120200000000101EDC@prv1-mh.provo.novell.com>
[not found]   ` <5B61C212020000780022FF0D@prv1-mh.provo.novell.com>
2019-05-17 10:10     ` [PATCH v2 0/3] " Jan Beulich
2019-05-17 10:10       ` [Xen-devel] " Jan Beulich
2019-05-17 10:11       ` Jan Beulich [this message]
2019-05-17 10:11         ` [Xen-devel] [PATCH v2 1/3] x86/idle: re-arrange dead-idle handling Jan Beulich
2019-05-20 14:25           ` Andrew Cooper
2019-05-20 14:25             ` [Xen-devel] " Andrew Cooper
2019-05-17 10:12       ` [PATCH v2 2/3] x86/cpuidle: push parked CPUs into deeper sleep states when possible Jan Beulich
2019-05-17 10:12         ` [Xen-devel] " Jan Beulich
2019-05-17 10:12       ` [PATCH v2 3/3] x86/cpuidle: clean up Cx dumping Jan Beulich
2019-05-17 10:12         ` [Xen-devel] " Jan Beulich
Reply instructions:

You may reply publicly to this message via plain-text email using any
one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there. Avoid top-posting and favor interleaved
  quoting: https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to switches of
  git-send-email(1):

  git send-email \
    --in-reply-to=5CDE88EA0200007800230031@prv1-mh.provo.novell.com \
    --to=jbeulich@suse.com \
    --cc=andrew.cooper3@citrix.com \
    --cc=roger.pau@citrix.com \
    --cc=wei.liu2@citrix.com \
    --cc=xen-devel@lists.xenproject.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header via
  mailto: links, try the mailto: link.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.