From: George Dunlap <george.dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Kevin Tian <kevin.tian@intel.com>
Cc: Lars Kurth <lars.kurth@citrix.com>, Feng Wu <feng.wu@intel.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Dario Faggioli <dario.faggioli@citrix.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
David Vrabel <david.vrabel@citrix.com>
Subject: Re: vmx: VT-d posted-interrupt core logic handling
Date: Thu, 10 Mar 2016 10:41:02 +0000 [thread overview]
Message-ID: <56E14F3E.8060804@citrix.com> (raw)
In-Reply-To: <56E1394802000078000DB155@prv-mh.provo.novell.com>
On 10/03/16 08:07, Jan Beulich wrote:
>>>> On 10.03.16 at 06:09, <kevin.tian@intel.com> wrote:
>> It's always good to have a clear definition of the extent to which a
>> performance issue becomes a security risk. I saw 200us/500us used as
>> examples in this thread, yet no one can give an actual criterion. In
>> that case, how can we call it a problem even once Feng has collected
>> some data? Based on the mindset of all the maintainers?
>
> I think I've already made clear in previous comments that such
> measurements won't lead anywhere. What we need is a
> guarantee (by way of enforcement in source code) that the
> lists can't grow overly large, compared to the total load placed
> on the system.
>
>> I think a good way of looking at this is based on which capability is
>> impacted. In this specific case the directly impacted metric is
>> interrupt delivery latency. However, Xen today is not RT-capable: it
>> does not commit to a worst-case 10us interrupt latency. The whole
>> interrupt delivery path (from Xen into the guest) has not been
>> optimized yet, so other factors besides this list walk could affect
>> latency too. There is no baseline worst-case data without PI, no final
>> goal to hit, and no test case to measure against.
>>
>> Then why block this feature over an unmeasurable concern? Why not
>> enable it now and improve it later, once Xen commits to a clear
>> interrupt latency goal and the concern becomes measurable? (At that
>> point, the people working on that effort will have to identify all
>> the factors impacting interrupt latency and can optimize them
>> together.) People should understand that interrupt latency may be bad
>> in extreme cases like the one discussed in this thread (with or
>> without PI), since Xen commits to nothing here.
>
> I've never made any reference to this being an interrupt latency
> issue; I think it was George who somehow implied this from earlier
> comments. Interrupt latency generally isn't a security concern
> (only "generally" because latency can of course get so high that
> it does become one). All my previous remarks regarding the
> issue are solely from the common perspective of long running
> operations (which we've been dealing with outside of interrupt
> context in a variety of cases, as you may recall). Hence the purely
> theoretical basis for any measurement would be to determine
> how long a worst-case list traversal would take, with "worst
> case" derived from the theoretical limits the hypervisor
> implementation so far implies: 128 vCPUs per domain (a limit
> which we will sooner or later need to lift, i.e. taking a
> larger value - like the 8k for PV guests - into consideration
> wouldn't hurt) times 32k domains per host, totaling 4M
> possible list entries.
> Yes, it is obvious that this limit won't be reachable in practice, but
> no, any lower limit can't be guaranteed to be good enough.
Can I suggest we suspend the discussion of what would or would not be
reasonable and come back to it next week? I definitely feel myself
digging my heels in here, so it might be good to go away and come back
to the discussion with a bit of distance.
(Potential technical solutions are still fair game, I think.)
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel