From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Xen Platform QoS design discussion
Date: Wed, 7 May 2014 02:08:27 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A9119FF569@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <5368B418.9000307@citrix.com>

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Tuesday, May 06, 2014 6:06 PM
> To: Xu, Dongxiao
> Cc: Jan Beulich; Ian Campbell; xen-devel@lists.xen.org
> Subject: Re: Xen Platform QoS design discussion
> 
> On 06/05/14 02:40, Xu, Dongxiao wrote:
> >> -----Original Message-----
> >> From: Xu, Dongxiao
> >> Sent: Sunday, May 04, 2014 8:46 AM
> >> To: Jan Beulich
> >> Cc: Andrew Cooper(andrew.cooper3@citrix.com); Ian Campbell;
> >> xen-devel@lists.xen.org
> >> Subject: RE: Xen Platform QoS design discussion
> >>
> >>> -----Original Message-----
> >>> From: Jan Beulich [mailto:JBeulich@suse.com]
> >>> Sent: Friday, May 02, 2014 8:40 PM
> >>> To: Xu, Dongxiao
> >>> Cc: Andrew Cooper(andrew.cooper3@citrix.com); Ian Campbell;
> >>> xen-devel@lists.xen.org
> >>> Subject: RE: Xen Platform QoS design discussion
> >>>
> >>>>>> On 02.05.14 at 14:30, <dongxiao.xu@intel.com> wrote:
> >>>>>  -----Original Message-----
> >>>>> From: Jan Beulich [mailto:JBeulich@suse.com]
> >>>>> Sent: Friday, May 02, 2014 5:24 PM
> >>>>> To: Xu, Dongxiao
> >>>>> Cc: Andrew Cooper(andrew.cooper3@citrix.com); Ian Campbell;
> >>>>> xen-devel@lists.xen.org
> >>>>> Subject: RE: Xen Platform QoS design discussion
> >>>>>
> >>>>>>>> On 01.05.14 at 02:56, <dongxiao.xu@intel.com> wrote:
> >>>>>>> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> >>>>>>> Have you asked yourself whether this information even needs to be
> >>>>>>> exposed all the way up to libxl? Who are the expected consumers of
> >>>>>>> this
> >>>>>>> interface? Are they low-level CLI tools (i.e. like xenpm is) or are you
> >>>>>>> expecting toolstacks to plumb this information all the way up to their
> >>>>>>> GUI or CLI (e.g. xl or virsh)?
> >>>>>> The information returned to libxl users is the cache utilization for a
> >>>>>> certain domain in a certain socket, and the main consumers are cloud
> >>>>>> users like OpenStack, etc. Of course, we will also provide an xl command
> >>>>>> to present such information.
> >>>>> To me this doesn't really address the question Ian asked, yet knowing
> >>>>> who's going to be the consumer of the data is also quite relevant for
> >>>>> answering your original question on the method to obtain that data.
> >>>>> Obviously, if the main use of it is per-domain, a domctl would seem like
> >>>>> a suitable approach despite the data being more of a sysctl kind. But if
> >>>>> a global view would be more important, that model would seem to make
> >>>>> life needlessly hard for the consumers. In turn, if using a domctl, I tend
> >>>>> to agree that not using shared pages would be preferable; iirc their use
> >>>>> was mainly suggested because of the size of the data.
> >>>> From the discussion with OpenStack developers, on a given cloud host, all
> >>>> running VMs' information (e.g., domain ID) will be stored in a database, and
> >>>> the OpenStack software will use libvirt/XenAPI to query specific domain
> >>>> information. That libvirt/XenAPI interface basically accepts the domain
> >>>> ID as an input parameter and gets the domain information, including the
> >>>> platform QoS data.
> >>>>
> >>>> Based on the above information, I think we'd better design the QoS hypercall
> >>>> per-domain.
> >>> If you think that this is going to be the only (or at least prevalent)
> >>> usage model, that's probably okay then. But I'm a little puzzled that
> >>> all this effort is just for a single, rather specific consumer. I thought
> >>> that if this is so important to Intel there would be a wider interested
> >>> audience.
> > Since there are no further comments, I suppose we all agreed on making the
> > hypercall per-domain and using a data-copying mechanism between the hypervisor
> > and the Dom0 toolstack?
> >
> 

Replying to Ian's and Andrew's previous comments in this mail.

> No - the onus is very much on you to prove that your API will *not* be
> used in the following way:
> 
> every $TIMEPERIOD
>   for each domain
>     for each type of information
>       get-$TYPE-information-for-$DOMAIN

The "for loop" mentioned here does exist in certain software levels, and there are several options:
1. For loop in libvirt/openstack layer (likely):
In this case, domctl would be better which returns per-domain's QoS info. Otherwise it will repeatedly call sysctl hypercall to get the entire data structure but only returns one domain's info to user space.

2. For loop within libxl API function and returns whole QoS data (unlikely):
If we return such entire PQoS info to Dom0 user space via libxl API, then this API will be changing once new PQoS feature comes out. As Ian mentioned, we need certain compatibility for libxl API.
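
To make the contrast concrete, below is a rough C sketch of the two structure shapes under discussion. All struct, field, and constant names here are illustrative only, not an actual or proposed Xen interface:

#include <stdint.h>

/* Option 1 (hypothetical names): a per-domain domctl sub-op.  Each
 * call copies one domain's QoS value for one socket out of Xen. */
struct xen_domctl_qos_op {
    uint32_t cmd;      /* IN: e.g. QOS_OP_GET_L3_CACHE_OCCUPANCY */
    uint32_t socket;   /* IN: which physical socket to query     */
    uint64_t data;     /* OUT: monitored value for this domain   */
};

/* Option 2 (hypothetical names): a sysctl-style bulk query.  One
 * call copies the whole per-domain table, so this structure (and
 * any libxl API wrapping it) must change whenever a new PQoS event
 * type is added. */
struct xen_sysctl_qos_readall {
    uint32_t first_domid;  /* IN: where to start enumerating           */
    uint32_t nr_entries;   /* IN: buffer capacity; OUT: entries filled */
    uint64_t *data;        /* OUT: one value per (domain, socket)      */
};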

> 
> Which is the source of my concerns regarding overhead.
> 
> As far as I can see, as soon as you provide access to this QoS
> information, higher-level toolstacks are going to want all information
> for all domains.  Given your proposed domctl, they will have exactly one
> (bad) way of getting this information.

I understand your point.
I think the final decision should be a compromise among the following (a rough cost sketch for point 1 follows this list):
1) Overhead.
2) Compatibility.
3) Code flexibility and simplicity.
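
Regarding point 1), with a per-domain interface, a toolstack that wants the global view pays nr_domains * nr_sockets hypercalls per sampling period. A minimal sketch of that consumer loop, assuming hypothetical names (libxl_qos_get_cache_occupancy() and record_sample() are made up for illustration):

#include <stdint.h>

/* Hypothetical consumer loop: with a per-domain domctl, every
 * sampling period costs nr_domains * nr_sockets hypercalls. */
static void sample_all(libxl_ctx *ctx, const uint32_t *domids,
                       unsigned int nr_domains, unsigned int nr_sockets)
{
    unsigned int i, s;
    uint64_t occupancy;

    for (i = 0; i < nr_domains; i++)
        for (s = 0; s < nr_sockets; s++)
            /* Following the libxl convention of returning 0 on success. */
            if (!libxl_qos_get_cache_occupancy(ctx, domids[i], s,
                                               &occupancy))
                record_sample(domids[i], s, occupancy);
}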

Thanks,
Dongxiao

> 
> ~Andrew
