From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] xen/pvhvm: Support more than 32 VCPUs when migrating (v3).
Date: Thu, 21 Jul 2016 11:05:43 -0400	[thread overview]
Message-ID: <a7a6a26a-796b-edc3-380b-fa830f907aa7__31066.3923075937$1469113639$gmane$org@oracle.com> (raw)
In-Reply-To: <20160721141403.GA12188@char.us.oracle.com>

On 07/21/2016 10:14 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jul 10, 2015 at 02:57:51PM -0400, Konrad Rzeszutek Wilk wrote:
>> On Fri, Jul 10, 2015 at 02:37:46PM -0400, Konrad Rzeszutek Wilk wrote:
>>> When Xen migrates an HVM guest, by default its shared_info can
>>> only hold up to 32 CPUs. The VCPUOP_register_vcpu_info hypercall
>>> was therefore introduced, which allows us to set up per-page
>>> areas for VCPUs. This means we can boot a PVHVM guest with more
>>> than 32 VCPUs. During migration the per-cpu structure is
>>> allocated afresh by the hypervisor (vcpu_info_mfn is set to
>>> INVALID_MFN) so that the newly migrated guest can make a
>>> VCPUOP_register_vcpu_info hypercall.
>>>
>>> Unfortunately we end up triggering this condition in Xen:
>>> /* Run this command on yourself or on other offline VCPUS. */
>>>  if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )
>>>
>>> which means we are unable to set up the per-cpu VCPU structures
>>> for running vCPUs. The Linux PV code paths make this work by
>>> iterating over every vCPU and doing the following:
>>>
>>>  1) is target CPU up (VCPUOP_is_up hypercall?)
>>>  2) if yes, then VCPUOP_down to pause it.
>>>  3) VCPUOP_register_vcpu_info
>>>  4) if it was down, then VCPUOP_up to bring it back up
>>>
>>> But since VCPUOP_down, VCPUOP_is_up, and VCPUOP_up are
>>> not allowed for HVM guests, we can't do this. However, with the
>>> Xen git commit f80c5623a126afc31e6bb9382268d579f0324a7a
>>> ("xen/x86: allow HVM guests to use hypercalls to bring up vCPUs"")
>> <sigh> I was in my local tree which was Roger's 'hvm_without_dm_v3'
>> looking at patches and spotted this - and thought it was already in!
>>
>> Sorry about this patch - and please ignore it until the VCPU_op*
>> can be used by HVM guests.
> The corresponding patch is in the Xen tree (192df6f9122ddebc21d0a632c10da3453aeee1c2)
>
> Could folks take a look at the patch pls?
>
> Without it you can't migrate a Linux guest with more than 32 vCPUs.
>
>>> we can do this. As such, first check whether VCPUOP_is_up is
>>> actually available before attempting this dance.
>>>
>>> As most of this dance code is already done in 'xen_vcpu_setup',
>>> let's make it callable on both PV and HVM. This means moving one
>>> of the checks out to 'xen_setup_runstate_info'.
>>>
>>> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>> ---
>>>  arch/x86/xen/enlighten.c | 23 +++++++++++++++++------
>>>  arch/x86/xen/suspend.c   |  7 +------
>>>  arch/x86/xen/time.c      |  3 +++
>>>  3 files changed, 21 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
>>> index 46957ea..187dec6 100644
>>> --- a/arch/x86/xen/enlighten.c
>>> +++ b/arch/x86/xen/enlighten.c
>>> @@ -238,12 +238,23 @@ static void xen_vcpu_setup(int cpu)
>>>  void xen_vcpu_restore(void)
>>>  {
>>>  	int cpu;
>>> +	bool vcpuops = true;
>>> +	const struct cpumask *mask;
>>>  
>>> -	for_each_possible_cpu(cpu) {
>>> +	mask = xen_pv_domain() ? cpu_possible_mask : cpu_online_mask;
>>> +
>>> +	/* Only Xen 4.5 and higher supports this. */
>>> +	if (HYPERVISOR_vcpu_op(VCPUOP_is_up, smp_processor_id(), NULL) == -ENOSYS)
>>> +		vcpuops = false;
>>> +
>>> +	for_each_cpu(cpu, mask) {
>>>  		bool other_cpu = (cpu != smp_processor_id());
>>> -		bool is_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL);
>>> +		bool is_up = false;
>>>  
>>> -		if (other_cpu && is_up &&
>>> +		if (vcpuops)
>>> +			is_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL);


You can just say
   is_up = (HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL) > 0);

and then you won't need the vcpuops bool.

>>> +
>>> +		if (vcpuops && other_cpu && is_up &&
>>>  		    HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL))
>>>  			BUG();
>>>  
>>> @@ -252,7 +263,7 @@ void xen_vcpu_restore(void)
>>>  		if (have_vcpu_info_placement)
>>>  			xen_vcpu_setup(cpu);
>>>  
>>> -		if (other_cpu && is_up &&
>>> +		if (vcpuops && other_cpu && is_up &&
>>>  		    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL))
>>>  			BUG();
>>>  	}
>>> @@ -1704,8 +1715,8 @@ void __ref xen_hvm_init_shared_info(void)
>>>  	 * in that case multiple vcpus might be online. */
>>>  	for_each_online_cpu(cpu) {
>>>  		/* Leave it to be NULL. */
>>> -		if (cpu >= MAX_VIRT_CPUS)
>>> -			continue;
>>> +		if (cpu >= MAX_VIRT_CPUS && cpu <= NR_CPUS)
>>> +			per_cpu(xen_vcpu, cpu) = NULL; /* Triggers xen_vcpu_setup.*/
>>>  		per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];

I don't think I understand this change.

Can you have cpu > NR_CPUS? And isn't per_cpu(xen_vcpu, cpu) NULL
already (as the comment at the top suggests)?


-boris



Thread overview: 12+ messages
2015-07-10 18:37 [PATCH] xen/pvhvm: Support more than 32 VCPUs when migrating (v3) Konrad Rzeszutek Wilk
2015-07-10 18:57 ` Konrad Rzeszutek Wilk
2015-11-12 16:40   ` Ian Campbell
2015-11-13 14:42     ` Konrad Rzeszutek Wilk
2015-11-13 14:49       ` Ian Campbell
2015-12-01 21:53         ` Konrad Rzeszutek Wilk
2016-07-21 14:14   ` Konrad Rzeszutek Wilk
2016-07-21 15:05     ` Boris Ostrovsky [this message]
2015-07-10 18:57 ` Konrad Rzeszutek Wilk
