xen-devel.lists.xenproject.org archive mirror
From: "Jan Beulich" <JBeulich@suse.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>, Feng Wu <feng.wu@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Keir Fraser <keir@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Ideas Re: [PATCH v14 1/2] vmx: VT-d posted-interrupt core logic handling
Date: Tue, 08 Mar 2016 08:42:01 -0700	[thread overview]
Message-ID: <56DF00D902000078000DA7C1@prv-mh.provo.novell.com> (raw)
In-Reply-To: <CAFLBxZZbtiQ1noM6np1h4Q2TUnErDLkT=MskODdJw0L-jNhKEg@mail.gmail.com>

>>> On 08.03.16 at 15:42, <George.Dunlap@eu.citrix.com> wrote:
> On Tue, Mar 8, 2016 at 1:10 PM, Wu, Feng <feng.wu@intel.com> wrote:
>>> -----Original Message-----
>>> From: George Dunlap [mailto:george.dunlap@citrix.com]
> [snip]
>>> It seems like there are a couple of ways we could approach this:
>>>
>>> 1. Try to optimize the reverse look-up code so that it's not a linear
>>> linked list (getting rid of the theoretical fear)
>>
>> Good point.
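For illustration only -- this is not Xen code, and every name below is
invented -- such a reverse map could be bucketed by a hash of the
posted-interrupt descriptor address instead of being kept as one flat
per-pCPU list, so a lookup touches only one (expectedly short) chain:

```c
#include <assert.h>
#include <stddef.h>

#define PI_BUCKETS 64

struct pi_vcpu {
    unsigned long pi_desc_addr;   /* key: PI descriptor address */
    struct pi_vcpu *next;         /* chain within one bucket */
};

static struct pi_vcpu *pi_buckets[PI_BUCKETS];

static unsigned int pi_hash(unsigned long key)
{
    /* Descriptors are cache-line aligned, so drop the low bits. */
    return (key >> 6) % PI_BUCKETS;
}

static void pi_block(struct pi_vcpu *v)
{
    unsigned int b = pi_hash(v->pi_desc_addr);

    v->next = pi_buckets[b];
    pi_buckets[b] = v;
}

static struct pi_vcpu *pi_lookup(unsigned long key)
{
    struct pi_vcpu *v = pi_buckets[pi_hash(key)];

    while ( v && v->pi_desc_addr != key )
        v = v->next;
    return v;
}
```

With a reasonable hash this turns the wakeup-time scan from O(length of
the whole blocked list) into O(length of one bucket), at the cost of a
fixed array per pCPU.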
>>
>>>
>>> 2. Try to test engineered situations where we expect this to be a
>>> problem, to see how big of a problem it is (proving the theory to be
>>> accurate or inaccurate in this case)
>>
>> Maybe we can run an SMP guest with all the vCPUs pinned to a dedicated
>> pCPU, run some benchmark in the guest with VT-d PI and without
>> VT-d PI, and then see the performance difference between the two scenarios.
> 
> This would give us an idea what the worst-case scenario would be.

How would a single VM ever give us an idea about the worst
case? Something getting close to worst case is a ton of single
vCPU guests all temporarily pinned to one and the same pCPU
(could be multi-vCPU ones, but the more vCPU-s the more
artificial this pinning would become) right before they go into
blocked state (i.e. through one of the two callers of
arch_vcpu_block()), the pinning removed while blocked, and
then all getting woken at once.
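To make the shape of that concrete, here is a toy model (invented names,
not Xen code) of what the wakeup path has to do in that scenario: every
vCPU that blocked while on pCPU 0 sits on that pCPU's list, and one
wakeup interrupt has to examine all of them with interrupts off, no
matter where the vCPUs were pinned afterwards:

```c
#include <assert.h>

#define NR_VCPUS 1024

static int blocked[NR_VCPUS];   /* vCPU ids queued on pCPU 0 */
static int blocked_cnt;

/* Stand-in for arch_vcpu_block() queueing the vCPU on pCPU 0's list. */
static void toy_vcpu_block(int vcpu)
{
    blocked[blocked_cnt++] = vcpu;
}

/* Returns how many entries one wakeup interrupt has to examine. */
static int toy_wakeup_scan(void)
{
    int scanned = 0;

    for ( int i = 0; i < blocked_cnt; i++ )
        scanned++;            /* stand-in for inspecting a PI descriptor */
    blocked_cnt = 0;
    return scanned;
}
```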

>  But
> pinning all vcpus to a single pcpu isn't really a sensible use case we
> want to support -- if you have to do something stupid to get a
> performance regression, then as far as I'm concerned it's not a
> problem.
> 
> Or to put it a different way: If we pin 10 vcpus to a single pcpu and
> then pound them all with posted interrupts, and there is *no*
> significant performance regression, then that will conclusively prove
> that the theoretical performance regression is of no concern, and we
> can enable PI by default.

The point isn't the pinning. The point is what pCPU they're on when
going to sleep. And that could involve quite a few more than just
10 vCPU-s, provided they all sleep long enough.

And the "theoretical performance regression is of no concern" is
also not a proper way of looking at it, I would say: even if such
a situation happened extremely rarely, as long as it can happen at
all, it would still be a security issue.

> On the other hand, if we pin 10 vcpus to a single pcpu, pound them all
> with posted interrupts, and then there *is* a significant performance
> regression, then it would still not convince me there is a real
> problem to be solved.  There is only actually a problem if the "long
> chain of vcpus" can happen in the course of a semi-realistic use-case.
> 
> Suppose we had a set of SRIOV NICs with 10-20 virtual functions total,
> assigned to 10-20 VMs, and those VMs in a cpupool confined to a single
> socket of about 4 cores; and then we do a really network-intensive
> benchmark. That's a *bit* far-fetched, but it's something that might
> conceivably happen in the real world without any deliberate stupidity.
> If there's no significant performance issues in that case, I would
> think we can say that posted interrupts are robust enough to be
> enabled by default.
> 
>>> 3. Turn the feature on by default as soon as the 4.8 window opens up,
>>> perhaps with some sort of a check that runs when in debug mode that
>>> looks for the condition we're afraid of happening and BUG()s.  If we run
>>> a full development cycle without anyone hitting the bug in testing, then
>>> we just leave the feature on.
>>
>> Maybe we can pre-define a maximum acceptable length for the list; if it
>> really reaches that number, print a warning or something like that.
>> However, how to decide on the maximum length is a problem and may need
>> more thought.
> 
> I think we want to measure the amount of time spent in the interrupt
> handler (or with interrupts disabled).  It doesn't matter if the list
> is 100 items long, if it can be handled in 500us.  On the other hand,
> if a list of 4 elements takes 20ms, there's a pretty massive problem.
> :-)

Spending on the order of 500us in an interrupt handler would
already seem pretty long to me, especially when the interrupt
may get raised at a high frequency. Even more so if, when in
that state, _each_ invocation of the interrupt handler would
take that long: With an (imo not unrealistic) interrupt rate of
1kHz we would spend half of the available CPU time in that
handler.
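To spell out that arithmetic (a sketch, not Xen code): at 1 kHz a
handler running for 500us consumes 1000 * 500us = 500ms out of every
second, i.e. 50% of the CPU:

```c
#include <assert.h>

/* Fraction (in percent) of CPU time consumed by a handler that runs
 * for handler_us microseconds, rate_hz times per second:
 * rate_hz * handler_us / 1000000 * 100 == rate_hz * handler_us / 10000. */
static unsigned int duty_cycle_percent(unsigned int rate_hz,
                                       unsigned int handler_us)
{
    return rate_hz * handler_us / 10000;
}
```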

> I don't have a good idea what an unreasonably large number would be here -- 
> Jan?

Neither do I, unfortunately.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Thread overview: 53+ messages
2016-02-29  3:00 [PATCH v14 0/2] Add VT-d Posted-Interrupts support Feng Wu
2016-02-29  3:00 ` [PATCH v14 1/2] vmx: VT-d posted-interrupt core logic handling Feng Wu
2016-02-29 13:33   ` Jan Beulich
2016-02-29 13:52     ` Dario Faggioli
2016-03-01  5:39       ` Wu, Feng
2016-03-01  9:24         ` Jan Beulich
2016-03-01 10:16     ` George Dunlap
2016-03-01 13:06       ` Wu, Feng
2016-03-01  5:24   ` Tian, Kevin
2016-03-01  5:39     ` Wu, Feng
2016-03-04 22:00   ` Ideas " Konrad Rzeszutek Wilk
2016-03-07 11:21     ` George Dunlap
2016-03-07 15:53       ` Konrad Rzeszutek Wilk
2016-03-07 16:19         ` Dario Faggioli
2016-03-07 20:23           ` Konrad Rzeszutek Wilk
2016-03-08 12:02         ` George Dunlap
2016-03-08 13:10           ` Wu, Feng
2016-03-08 14:42             ` George Dunlap
2016-03-08 15:42               ` Jan Beulich [this message]
2016-03-08 17:05                 ` George Dunlap
2016-03-08 17:26                   ` Jan Beulich
2016-03-08 18:38                     ` George Dunlap
2016-03-09  5:06                       ` Wu, Feng
2016-03-09 13:39                       ` Jan Beulich
2016-03-09 16:01                         ` George Dunlap
2016-03-09 16:31                           ` Jan Beulich
2016-03-09 16:23                         ` On setting clear criteria for declaring a feature acceptable (was "vmx: VT-d posted-interrupt core logic handling") George Dunlap
2016-03-09 16:58                           ` On setting clear criteria for declaring a feature acceptable Jan Beulich
2016-03-09 18:02                           ` On setting clear criteria for declaring a feature acceptable (was "vmx: VT-d posted-interrupt core logic handling") David Vrabel
2016-03-10  1:15                             ` Wu, Feng
2016-03-10  9:30                             ` George Dunlap
2016-03-10  5:09                           ` Tian, Kevin
2016-03-10  8:07                             ` vmx: VT-d posted-interrupt core logic handling Jan Beulich
2016-03-10  8:43                               ` Tian, Kevin
2016-03-10  9:05                                 ` Jan Beulich
2016-03-10  9:20                                   ` Tian, Kevin
2016-03-10 10:05                                   ` Tian, Kevin
2016-03-10 10:18                                     ` Jan Beulich
2016-03-10 10:35                                       ` David Vrabel
2016-03-10 10:46                                         ` George Dunlap
2016-03-10 11:16                                           ` David Vrabel
2016-03-10 11:49                                             ` George Dunlap
2016-03-10 13:24                                             ` Jan Beulich
2016-03-10 11:00                                       ` George Dunlap
2016-03-10 11:21                                         ` Dario Faggioli
2016-03-10 13:36                                     ` Wu, Feng
2016-05-17 13:27                                       ` Konrad Rzeszutek Wilk
2016-05-19  7:22                                         ` Wu, Feng
2016-03-10 10:41                               ` George Dunlap
2016-03-09  5:22                   ` Ideas Re: [PATCH v14 1/2] " Wu, Feng
2016-03-09 11:25                     ` George Dunlap
2016-03-09 12:06                       ` Wu, Feng
2016-02-29  3:00 ` [PATCH v14 2/2] Add a command line parameter for VT-d posted-interrupts Feng Wu