From: Luwei Kang <luwei.kang@intel.com>
To: xen-devel@lists.xen.org
Cc: andrew.cooper3@citrix.com, boris.ostrovsky@oracle.com,
	Luwei Kang <luwei.kang@intel.com>,
	jbeulich@suse.com
Subject: [PATCH v3] x86/vpmu: add cpu hot unplug notifier for vpmu
Date: Sun, 21 May 2017 21:09:42 +0800
Message-ID: <1495372182-12928-1-git-send-email-luwei.kang@intel.com>

Currently, hot-unplugging a physical CPU with vpmu enabled may cause
the system to hang, because a remote call is sent to an offlined pCPU.
This patch adds a CPU hot-unplug notifier that saves the vpmu context
before the CPU goes offline.

Consider the following scenario: pCPU N is hot-unplugged while vpmu is
enabled. The vCPU that was running on this pCPU is migrated to another
online pCPU. Before the vpmu context is loaded on the new pCPU, a
remote call is sent to pCPU N to save the old context. The system then
hangs in on_selected_cpus(), because pCPU N is offline and cannot
respond.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
---
v3:
 1. add cpu_online() check in vpmu_load() and vpmu_arch_destroy();
 2. add vpmu_ prefix: rename cpu_callback() to vpmu_cpu_callback();
v2:
 1. fix some typos and coding style issues;
 2. change "switch" to "if" in cpu_callback() because there is only one case;
 3. add a VPMU_CONTEXT_LOADED check before sending the remote call in vpmu_arch_destroy();
---
 xen/arch/x86/cpu/vpmu.c | 52 +++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 03401fd..486af12 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/xenoprof.h>
 #include <xen/event.h>
 #include <xen/guest_access.h>
+#include <xen/cpu.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -394,8 +395,11 @@ int vpmu_load(struct vcpu *v, bool_t from_guest)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return 0;
 
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
+    /*
+     * The last pCPU is still online and this is the first time this vCPU
+     * is running here.
+     */
+    if ( cpu_online(vpmu->last_pcpu) && vpmu->last_pcpu != pcpu )
     {
         /*
          * Get the context from last pcpu that we ran on. Note that if another
@@ -575,15 +579,21 @@ static void vpmu_arch_destroy(struct vcpu *v)
      * We will test it again in vpmu_clear_last() with interrupts
      * disabled to make sure we don't clear someone else.
      */
-    if ( per_cpu(last_vcpu, vpmu->last_pcpu) == v )
+    if ( cpu_online(vpmu->last_pcpu) &&
+            per_cpu(last_vcpu, vpmu->last_pcpu) == v )
         on_selected_cpus(cpumask_of(vpmu->last_pcpu),
                          vpmu_clear_last, v, 1);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
     {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, v, 1);
+        /*
+         * Unload the VPMU first if VPMU_CONTEXT_LOADED is set.
+         * This will stop the counters.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                             vpmu_save_force, v, 1);
+
          vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
     }
 }
@@ -835,6 +845,33 @@ long do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
     return ret;
 }
 
+static int vpmu_cpu_callback(
+    struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+    unsigned int cpu = (unsigned long)hcpu;
+    struct vcpu *vcpu = per_cpu(last_vcpu, cpu);
+    struct vpmu_struct *vpmu;
+
+    if ( !vcpu )
+        return NOTIFY_DONE;
+
+    vpmu = vcpu_vpmu(vcpu);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return NOTIFY_DONE;
+
+    if ( action == CPU_DYING )
+    {
+        vpmu_save_force(vcpu);
+        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+    }
+
+    return NOTIFY_DONE;
+}
+
+static struct notifier_block vpmu_cpu_nfb = {
+    .notifier_call = vpmu_cpu_callback
+};
+
 static int __init vpmu_init(void)
 {
     int vendor = current_cpu_data.x86_vendor;
@@ -872,8 +909,11 @@ static int __init vpmu_init(void)
     }
 
     if ( vpmu_mode != XENPMU_MODE_OFF )
+    {
+        register_cpu_notifier(&vpmu_cpu_nfb);
         printk(XENLOG_INFO "VPMU: version " __stringify(XENPMU_VER_MAJ) "."
                __stringify(XENPMU_VER_MIN) "\n");
+    }
     else
         opt_vpmu_enabled = 0;
 
-- 
1.8.3.1

