From: Feng Wu <feng.wu@intel.com>
To: xen-devel@lists.xen.org
Cc: kevin.tian@intel.com, Feng Wu <feng.wu@intel.com>,
george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [PATCH v4 2/6] VMX: Properly handle pi when all the assigned devices are removed
Date: Wed, 21 Sep 2016 10:37:46 +0800 [thread overview]
Message-ID: <1474425470-3629-3-git-send-email-feng.wu@intel.com> (raw)
In-Reply-To: <1474425470-3629-1-git-send-email-feng.wu@intel.com>
This patch handles some corner cases that arise when the last assigned device
is removed from the domain. In that case we need to handle the
PI descriptor and the per-cpu blocking list carefully, to make sure:
- all the PI descriptors are in the right state the next time a
device is assigned to the domain again.
- no vCPUs of the domain remain on the per-cpu blocking list.
Basically, we pause the domain before zapping the PI hooks and
removing the vCPUs from the blocking list, then unpause it
afterwards.
Signed-off-by: Feng Wu <feng.wu@intel.com>
---
v4:
- Rename some functions:
vmx_pi_remove_vcpu_from_blocking_list() -> vmx_pi_list_remove()
vmx_pi_blocking_cleanup() -> vmx_pi_list_cleanup()
- Remove the check in vmx_pi_list_cleanup()
- Comments adjustment
xen/arch/x86/hvm/vmx/vmx.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 355936a..7305f40 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -158,14 +158,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
pi_clear_sn(pi_desc);
}
-static void vmx_pi_do_resume(struct vcpu *v)
+static void vmx_pi_list_remove(struct vcpu *v)
{
unsigned long flags;
spinlock_t *pi_blocking_list_lock;
struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
- ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
-
/*
* Set 'NV' field back to posted_intr_vector, so the
* Posted-Interrupts can be delivered to the vCPU when
@@ -173,12 +171,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
*/
write_atomic(&pi_desc->nv, posted_intr_vector);
- /* The vCPU is not on any blocking list. */
pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
/* Prevent the compiler from eliminating the local variable.*/
smp_rmb();
+ /* The vCPU is not on any blocking list. */
if ( pi_blocking_list_lock == NULL )
return;
@@ -198,6 +196,18 @@ static void vmx_pi_do_resume(struct vcpu *v)
spin_unlock_irqrestore(pi_blocking_list_lock, flags);
}
+static void vmx_pi_do_resume(struct vcpu *v)
+{
+ ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
+
+ vmx_pi_list_remove(v);
+}
+
+static void vmx_pi_list_cleanup(struct vcpu *v)
+{
+ vmx_pi_list_remove(v);
+}
+
/* This function is called when pcidevs_lock is held */
void vmx_pi_hooks_assign(struct domain *d)
{
@@ -215,13 +225,28 @@ void vmx_pi_hooks_assign(struct domain *d)
/* This function is called when pcidevs_lock is held */
void vmx_pi_hooks_deassign(struct domain *d)
{
+ struct vcpu *v;
+
if ( !iommu_intpost || !has_hvm_container_domain(d) )
return;
ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
+ /*
+ * Pausing the domain ensures that no vCPU is running,
+ * and hence that none of them can be calling the hooks
+ * while we deassign the PI hooks and remove the vCPUs
+ * from the blocking list.
+ */
+ domain_pause(d);
+
d->arch.hvm_domain.vmx.vcpu_block = NULL;
d->arch.hvm_domain.vmx.pi_do_resume = NULL;
+
+ for_each_vcpu ( d, v )
+ vmx_pi_list_cleanup(v);
+
+ domain_unpause(d);
}
static int vmx_domain_initialise(struct domain *d)
--
2.1.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 27+ messages
2016-09-21 2:37 [PATCH v4 0/6] VMX: Properly handle pi descriptor and per-cpu blocking list Feng Wu
2016-09-21 2:37 ` [PATCH v4 1/6] VMX: Statically assign two PI hooks Feng Wu
2016-09-26 11:37 ` Jan Beulich
2016-09-26 12:09 ` Jan Beulich
2016-09-28 6:48 ` Wu, Feng
2016-09-28 9:38 ` Jan Beulich
2016-10-09 8:30 ` Wu, Feng
2016-10-10 7:26 ` Jan Beulich
2016-09-21 2:37 ` Feng Wu [this message]
2016-09-26 11:46 ` [PATCH v4 2/6] VMX: Properly handle pi when all the assigned devices are removed Jan Beulich
2016-09-28 6:50 ` Wu, Feng
2016-09-28 9:52 ` Jan Beulich
2016-09-29 3:08 ` Wu, Feng
2016-09-21 2:37 ` [PATCH v4 3/6] VMX: Cleanup PI per-cpu blocking list when vcpu is destroyed Feng Wu
2016-09-21 2:37 ` [PATCH v4 4/6] VMX: Make sure PI is in proper state before install the hooks Feng Wu
2016-09-26 12:45 ` Jan Beulich
2016-09-28 6:50 ` Wu, Feng
2016-09-21 2:37 ` [PATCH v4 5/6] VT-d: No need to set irq affinity for posted format IRTE Feng Wu
2016-09-26 12:58 ` Jan Beulich
2016-09-28 6:51 ` Wu, Feng
2016-09-28 9:58 ` Jan Beulich
2016-10-09 5:35 ` Wu, Feng
2016-10-10 7:02 ` Jan Beulich
2016-10-10 22:55 ` Wu, Feng
2016-09-21 2:37 ` [PATCH v4 6/6] VMX: Fixup PI descritpor when cpu is offline Feng Wu
2016-09-26 13:03 ` Jan Beulich
2016-09-28 6:53 ` Wu, Feng