From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>,
	george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [PATCH v5 7/7] VMX: Fixup PI descriptor when cpu is offline
Date: Tue, 11 Oct 2016 08:57:53 +0800	[thread overview]
Message-ID: <1476147473-30970-8-git-send-email-feng.wu@intel.com> (raw)
In-Reply-To: <1476147473-30970-1-git-send-email-feng.wu@intel.com>

When a cpu goes offline, all the vcpus on its blocking list need to be
moved to another online cpu. This patch handles that.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
v5:
- Add comments to explain why the locking here does not trigger the
ABBA deadlock scenario (see the illustration below).
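
A rough, hypothetical illustration of the ABBA scenario those comments rule
out (this interleaving is for explanation only; it presumes two CPUs could be
offlined in parallel, which CPU hotplug does not allow):

    /*
     * Hypothetical interleaving if two CPU_DEAD / CPU_UP_CANCELED
     * notifications could run concurrently:
     *
     *   vmx_pi_desc_fixup(A)                vmx_pi_desc_fixup(B)
     *   --------------------                --------------------
     *   lock per_cpu(vmx_pi_blocking, A)    lock per_cpu(vmx_pi_blocking, B)
     *   new_cpu = B                         new_cpu = A
     *   lock per_cpu(vmx_pi_blocking, B)    lock per_cpu(vmx_pi_blocking, A)
     *        ... waits forever ...               ... waits forever ...
     *
     * Because CPU hotplug operations are serialized, only one instance of
     * vmx_pi_desc_fixup() runs at a time, so the nested old lock -> new
     * lock acquisition cannot deadlock.
     */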

 xen/arch/x86/hvm/vmx/vmcs.c       |  1 +
 xen/arch/x86/hvm/vmx/vmx.c        | 48 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmx.h |  1 +
 3 files changed, 50 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 10976bd..5dd68ca 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -578,6 +578,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_free_vmcs(per_cpu(vmxon_region, cpu));
     per_cpu(vmxon_region, cpu) = 0;
     nvmx_cpu_dead(cpu);
+    vmx_pi_desc_fixup(cpu);
 }
 
 int vmx_cpu_up(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b14c84e..c71d496 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -208,6 +208,54 @@ static void vmx_pi_do_resume(struct vcpu *v)
     vmx_pi_list_remove(v);
 }
 
+void vmx_pi_desc_fixup(int cpu)
+{
+    unsigned int new_cpu, dest;
+    unsigned long flags;
+    struct arch_vmx_struct *vmx, *tmp;
+    spinlock_t *new_lock, *old_lock = &per_cpu(vmx_pi_blocking, cpu).lock;
+    struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
+
+    if ( !iommu_intpost )
+        return;
+
+    /*
+     * We are in the context of a CPU_DEAD or CPU_UP_CANCELED notification,
+     * and it is impossible for a second CPU to go down in parallel. So we
+     * can safely acquire the old cpu's lock and then acquire the new_cpu's
+     * lock after that.
+     */
+    spin_lock_irqsave(old_lock, flags);
+
+    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
+    {
+        /*
+         * We need to find an online cpu as the NDST of the PI descriptor; it
+         * doesn't matter whether it is within the cpupool of the domain or
+         * not. As long as it is online, the vCPU will be woken up once the
+         * notification event arrives.
+         */
+        new_cpu = cpumask_any(&cpu_online_map);
+        new_lock = &per_cpu(vmx_pi_blocking, new_cpu).lock;
+
+        spin_lock(new_lock);
+
+        ASSERT(vmx->pi_blocking.lock == old_lock);
+
+        dest = cpu_physical_id(new_cpu);
+        write_atomic(&vmx->pi_desc.ndst,
+                     x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
+
+        list_move(&vmx->pi_blocking.list,
+                  &per_cpu(vmx_pi_blocking, new_cpu).list);
+        vmx->pi_blocking.lock = new_lock;
+
+        spin_unlock(new_lock);
+    }
+
+    spin_unlock_irqrestore(old_lock, flags);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 4cdd9b1..9783c70 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -569,6 +569,7 @@ void free_p2m_hap_data(struct p2m_domain *p2m);
 void p2m_init_hap_data(struct p2m_domain *p2m);
 
 void vmx_pi_per_cpu_init(unsigned int cpu);
+void vmx_pi_desc_fixup(int cpu);
 
 void vmx_pi_hooks_assign(struct domain *d);
 void vmx_pi_hooks_deassign(struct domain *d);
-- 
2.1.0



