From: "Jan Beulich" <JBeulich@suse.com>
To: Lan Tianyu <tianyu.lan@intel.com>
Cc: kevin.tian@intel.com, Wei Liu <wei.liu2@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Meng Xu <xumengpanda@gmail.com>,
	chao.gao@intel.com
Subject: Re: [RFC PATCH 0/5] Extend resources to support more vcpus in single VM
Date: Wed, 30 Aug 2017 01:12:39 -0600	[thread overview]
Message-ID: <59A6818702000078001755D7@prv-mh.provo.novell.com> (raw)
In-Reply-To: <bd84adf1-dff8-5a79-0f7e-78eba480edfd@intel.com>

>>> On 30.08.17 at 07:33, <tianyu.lan@intel.com> wrote:
> On 2017年08月29日 16:49, Jan Beulich wrote:
>>>>> On 29.08.17 at 06:38, <tianyu.lan@intel.com> wrote:
>>> On 2017年08月25日 22:10, Meng Xu wrote:
>>>> How many VCPUs for a single VM do you want to support with this patch set?
>>>
>>> Hi Meng:
>>> 	Sorry for the late response. We hope to increase the max vcpu number
>>> to 512. This also has dependencies on other work (e.g., cpu topology,
>>> multi-page support for ioreq server, and virtual IOMMU).
>> 
>> I'm sorry for repeating this, but your first and foremost goal ought
>> to be to address the known issues with VMs having up to 128
>> vCPU-s; Andrew has been pointing this out in the past. I see no
>> point in pushing up the limit if even the current limit doesn't work
>> reliably in all cases.
>> 
> 
> Hi Jan & Andrew:
> 	We ran some HPC benchmarks (e.g., HPLinpack, dgemm, sgemm, igemm and
> so on) in a huge VM with 128 vcpus (even >255 vcpus with non-upstreamed
> patches) and didn't encounter any reliability issues. These benchmarks run
> heavy workloads in the VM and some of them last several hours.

I guess it heavily depends on which portions of the hypervisor code
those benchmarks exercise. Compute-intensive ones (which
seems a likely case for HPC) aren't that interesting. Ones putting
high pressure on e.g. the p2m lock, or ones causing high IPI rates
(inside the guest), are likely to be more problematic.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 31+ messages
2017-08-25  2:52 [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Lan Tianyu
2017-08-25  2:52 ` [RFC PATCH 1/5] xen/hap: Increase hap size for more vcpus support Lan Tianyu
2017-08-25  9:14   ` Wei Liu
2017-08-28  8:53     ` Lan, Tianyu
2017-08-25  2:52 ` [RFC PATCH 2/5] XL: Increase event channels to support more vcpus Lan Tianyu
2017-08-25  9:18   ` Wei Liu
2017-08-25  9:57     ` Roger Pau Monné
2017-08-25 10:04       ` Wei Liu
2017-08-28  9:11         ` Lan, Tianyu
2017-08-28  9:21           ` Wei Liu
2017-08-28  9:22           ` Jan Beulich
2017-08-25  2:52 ` [RFC PATCH 3/5] Tool/ACPI: DSDT extension " Lan Tianyu
2017-08-25  9:25   ` Wei Liu
2017-08-28  9:12     ` Lan, Tianyu
2017-08-25 10:36   ` Roger Pau Monné
2017-08-25 12:01     ` Jan Beulich
2017-08-29  4:58       ` Lan Tianyu
2017-08-29  5:01     ` Lan Tianyu
2017-08-25  2:52 ` [RFC PATCH 4/5] hvmload: Add x2apic entry support in the MADT build Lan Tianyu
2017-08-25  9:26   ` Wei Liu
2017-08-25  9:43     ` Jan Beulich
2017-08-25 10:11   ` Roger Pau Monné
2017-08-29  3:14     ` Lan Tianyu
2017-08-25  2:52 ` [RFC PATCH 5/5] xl/libacpi: extend lapic_id() to uint32_t Lan Tianyu
2017-08-25  9:22   ` Wei Liu
2017-08-25 14:10 ` [RFC PATCH 0/5] Extend resources to support more vcpus in single VM Meng Xu
2017-08-29  4:38   ` Lan Tianyu
2017-08-29  8:49     ` Jan Beulich
2017-08-30  5:33       ` Lan Tianyu
2017-08-30  7:12         ` Jan Beulich [this message]
2017-08-30  9:18           ` George Dunlap
