From: "Jan Beulich" <JBeulich@suse.com>
To: George Dunlap <george.dunlap@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Feng Wu <feng.wu@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Keir Fraser <keir@xen.org>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Dario Faggioli <dario.faggioli@citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Ideas Re: [PATCH v14 1/2] vmx: VT-d posted-interrupt core logic handling
Date: Tue, 08 Mar 2016 10:26:33 -0700 [thread overview]
Message-ID: <56DF195902000078000DA8C1@prv-mh.provo.novell.com> (raw)
In-Reply-To: <56DF066B.3090106@citrix.com>
>>> On 08.03.16 at 18:05, <george.dunlap@citrix.com> wrote:
> On 08/03/16 15:42, Jan Beulich wrote:
>>>>> On 08.03.16 at 15:42, <George.Dunlap@eu.citrix.com> wrote:
>>> On Tue, Mar 8, 2016 at 1:10 PM, Wu, Feng <feng.wu@intel.com> wrote:
>>>>> -----Original Message-----
>>>>> From: George Dunlap [mailto:george.dunlap@citrix.com]
>>>>>
>>>>> 2. Try to test engineered situations where we expect this to be a
>>>>> problem, to see how big of a problem it is (proving the theory to be
>>>>> accurate or inaccurate in this case)
>>>>
>>>> Maybe we can run an SMP guest with all the vcpus pinned to a dedicated
>>>> pCPU, we can run some benchmark in the guest with VT-d PI and without
>>>> VT-d PI, then see the performance difference between these two scenarios.
>>>
>>> This would give us an idea what the worst-case scenario would be.
>>
>> How would a single VM ever give us an idea about the worst
>> case? Something getting close to worst case is a ton of single
>> vCPU guests all temporarily pinned to one and the same pCPU
>> (could be multi-vCPU ones, but the more vCPU-s the more
>> artificial this pinning would become) right before they go into
>> blocked state (i.e. through one of the two callers of
>> arch_vcpu_block()), the pinning removed while blocked, and
>> then all getting woken at once.
>
> Why would removing the pinning be important?
It's not important by itself, other than to avoid all vCPU-s then
waking up on the one pCPU.
> And I guess it's actually the case that it doesn't need all VMs to
> actually be *receiving* interrupts; it just requires them to be
> *capable* of receiving interrupts, for there to be a long chain all
> blocked on the same physical cpu.
Yes.
>>> But
>>> pinning all vcpus to a single pcpu isn't really a sensible use case we
>>> want to support -- if you have to do something stupid to get a
>>> performance regression, then as far as I'm concerned it's not a
>>> problem.
>>>
>>> Or to put it a different way: If we pin 10 vcpus to a single pcpu and
>>> then pound them all with posted interrupts, and there is *no*
>>> significant performance regression, then that will conclusively prove
>>> that the theoretical performance regression is of no concern, and we
>>> can enable PI by default.
>>
>> The point isn't the pinning. The point is what pCPU they're on when
>> going to sleep. And that could involve quite a few more than just
>> 10 vCPU-s, provided they all sleep long enough.
>>
>> And the "theoretical performance regression is of no concern" is
>> also not a proper way of looking at it, I would say: Even if such
>> a situation would happen extremely rarely, if it can happen at all,
>> it would still be a security issue.
>
> What I'm trying to get at is -- exactly what situation? What actually
> constitutes a problematic interrupt latency / interrupt processing
> workload, how many vcpus must be sleeping on the same pcpu to actually
> risk triggering that latency / workload, and how feasible is it that
> such a situation would arise in a reasonable scenario?
>
> If 200us is too long, and it only takes 3 sleeping vcpus to get there,
> then yes, there is a genuine problem we need to try to address before we
> turn it on by default. If we say that up to 500us is tolerable, and it
> takes 100 sleeping vcpus to reach that latency, then this is something I
> don't really think we need to worry about.
>
> "I think something bad may happen" is really difficult to work with.
I understand that, but coming up with proper numbers here isn't
easy. Fact is - it cannot be excluded that on a system with
hundreds of pCPU-s and thousands of vCPU-s, all vCPU-s would
at some point pile up on one pCPU's list.

How many would be tolerable on a single list depends upon host
characteristics, so a fixed number won't do anyway. Hence I
think the better approach, instead of improving lookup, is to
distribute vCPU-s evenly across lists. That in turn would likely
require those lists to no longer be tied to pCPU-s, an aspect I
had already suggested during review. As soon as distribution
was reasonably even, the security concern would vanish:
Someone placing more vCPU-s on a host than that host can
handle is responsible for the consequences. That's quite
different from someone placing more vCPU-s on a host than a
single pCPU can reasonably handle in an interrupt handler.
Jan