From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Chao Gao <chao.gao@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [RFC Patch v4 4/8] hvmloader: boot cpu through broadcast
Date: Mon, 26 Feb 2018 14:19:10 +0000	[thread overview]
Message-ID: <20180226141910.iy3zjgajwyizkm6o@MacBook-Pro-de-Roger.local> (raw)
In-Reply-To: <20180226123322.GA140947@skl-4s-chao.sh.intel.com>

On Mon, Feb 26, 2018 at 08:33:23PM +0800, Chao Gao wrote:
> On Mon, Feb 26, 2018 at 01:28:07AM -0700, Jan Beulich wrote:
> >>>> On 24.02.18 at 06:49, <chao.gao@intel.com> wrote:
> >> On Fri, Feb 23, 2018 at 04:42:10PM +0000, Roger Pau Monné wrote:
> >>>On Wed, Dec 06, 2017 at 03:50:10PM +0800, Chao Gao wrote:
> >>>> Intel SDM Extended XAPIC (X2APIC) -> "Initialization by System Software"
> >>>> has the following description:
> >>>> 
> >>>> "The ACPI interfaces for the x2APIC are described in Section 5.2, “ACPI System
> >>>> Description Tables,” of the Advanced Configuration and Power Interface
> >>>> Specification, Revision 4.0a (http://www.acpi.info/spec.htm). The default
> >>>> behavior for BIOS is to pass the control to the operating system with the
> >>>> local x2APICs in xAPIC mode if all APIC IDs reported by CPUID.0BH:EDX are less
> >>>> than 255, and in x2APIC mode if there are any logical processor reporting an
> >>>> APIC ID of 255 or greater."
> >>>> 
> >>>> In this patch, hvmloader enables x2apic mode for all vcpus if there are
> >>>> cpus with APIC ID > 255. To wake up processors whose APIC IDs are greater
> >>>> than 255, the SIPI is broadcast to all APs, which is how SeaBIOS wakes up
> >>>> APs. Since the APs may compete for the shared boot stack, a lock is
> >>>> introduced to protect it.
> >>>
> >>>Hm, how are we going to deal with this on PVH? hvmloader doesn't run
> >>>for PVH guests, hence it seems like switching to x2APIC mode should be
> >>>done somewhere else that is shared between HVM and PVH.
> >>>
> >>>Maybe the hypercall that sets the number of vCPUs should change the
> >>>APIC mode?
> >> 
> >> Yes. A flag can be passed when setting the maximum number of vCPUs. Xen
> >> will switch all local APICs to x2APIC mode or xAPIC mode according to
> >> the flag.
> >
> >A flag? Where? Why isn't 257+ vCPU-s on its own sufficient to tell
> >that the mode needs to be switched?
> 
> In struct xen_domctl_max_vcpus, a flag, like SWITCH_TO_X2APIC_MODE, can
> be used to instruct Xen to initialize vlapic and do this switch.
> 
> Yes, that is another option: Xen can do this switch when needed. This
> solution leads to a smaller code change compared with introducing a new
> flag when setting the maximum number of vCPUs.

Since the APIC ID is currently hardcoded in guest_cpuid as vcpu_id * 2,
IMO Xen should switch to x2APIC mode when it detects that vCPUs > 128,
as Jan has suggested. Then you won't need to modify hvmloader at all,
and the same would work for PVH, I assume?

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 56+ messages
2017-12-06  7:50 [RFC Patch v4 0/8] Extend resources to support more vcpus in single VM Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 1/8] ioreq: remove most 'buf' parameter from static functions Chao Gao
2017-12-06 14:44   ` Paul Durrant
2017-12-06  8:37     ` Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages Chao Gao
2017-12-06 15:04   ` Paul Durrant
2017-12-06  9:02     ` Chao Gao
2017-12-06 16:10       ` Paul Durrant
2017-12-07  8:41         ` Paul Durrant
2017-12-07  6:56           ` Chao Gao
2017-12-08 11:06             ` Paul Durrant
2017-12-12  1:03               ` Chao Gao
2017-12-12  9:07                 ` Paul Durrant
2017-12-12 23:39                   ` Chao Gao
2017-12-13 10:49                     ` Paul Durrant
2017-12-13 17:50                       ` Paul Durrant
2017-12-14 14:50                         ` Paul Durrant
2017-12-15  0:35                           ` Chao Gao
2017-12-15  9:40                             ` Paul Durrant
2018-04-18  8:19   ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 3/8] xl/acpi: unify the computation of lapic_id Chao Gao
2018-02-22 18:05   ` Wei Liu
2017-12-06  7:50 ` [RFC Patch v4 4/8] hvmloader: boot cpu through broadcast Chao Gao
2018-02-22 18:44   ` Wei Liu
2018-02-23  8:41     ` Jan Beulich
2018-02-23 16:42   ` Roger Pau Monné
2018-02-24  5:49     ` Chao Gao
2018-02-26  8:28       ` Jan Beulich
2018-02-26 12:33         ` Chao Gao
2018-02-26 14:19           ` Roger Pau Monné [this message]
2018-04-18  8:38   ` Jan Beulich
2018-04-18 11:20     ` Chao Gao
2018-04-18 11:50       ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 5/8] Tool/ACPI: DSDT extension to support more vcpus Chao Gao
2017-12-06  7:50 ` [RFC Patch v4 6/8] hvmload: Add x2apic entry support in the MADT and SRAT build Chao Gao
2018-04-18  8:48   ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory Chao Gao
2018-02-27 14:17   ` George Dunlap
2018-04-18  8:53   ` Jan Beulich
2018-04-18 11:39     ` Chao Gao
2018-04-18 11:50       ` Andrew Cooper
2018-04-18 11:59       ` Jan Beulich
2017-12-06  7:50 ` [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512 Chao Gao
2018-02-22 18:46   ` Wei Liu
2018-02-23  8:50     ` Jan Beulich
2018-02-23 17:18       ` Wei Liu
2018-02-23 18:11   ` Roger Pau Monné
2018-02-24  6:26     ` Chao Gao
2018-02-26  8:26     ` Jan Beulich
2018-02-26 13:11       ` Chao Gao
2018-02-26 16:10         ` Jan Beulich
2018-03-01  5:21           ` Chao Gao
2018-03-01  7:17             ` Juergen Gross
2018-03-01  7:37             ` Jan Beulich
2018-03-01  7:11               ` Chao Gao
2018-02-27 14:59         ` George Dunlap
