From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Xen Platform QoS design discussion
Date: Fri, 16 May 2014 05:11:22 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A911A150FC@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: 536B69AB.7010005@citrix.com

> -----Original Message-----
> From: Xu, Dongxiao
> Sent: Tuesday, May 13, 2014 9:53 AM
> To: Andrew Cooper
> Cc: George Dunlap; Ian Campbell; Jan Beulich; xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] Xen Platform QoS design discussion
> 
> > -----Original Message-----
> > From: Xu, Dongxiao
> > Sent: Friday, May 09, 2014 10:41 AM
> > To: Andrew Cooper
> > Cc: George Dunlap; Ian Campbell; Jan Beulich; xen-devel@lists.xen.org
> > Subject: RE: [Xen-devel] Xen Platform QoS design discussion
> >
> > > -----Original Message-----
> > > From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> > > Sent: Thursday, May 08, 2014 7:26 PM
> > > To: Xu, Dongxiao
> > > Cc: George Dunlap; Ian Campbell; Jan Beulich; xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] Xen Platform QoS design discussion
> > >
> > > On 08/05/14 06:21, Xu, Dongxiao wrote:
> > >
> > > <massive snip>
> > >
> > > >>
> > > >>> We have two different hypercalls right now for getting "dominfo": a
> > > >>> domctl and a sysctl.  You use the domctl if you want information about
> > > >>> a single domain, you use sysctl if you want information about all
> > > >>> domains.  The sysctl implementation calls the domctl implementation
> > > >>> internally.
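
For concreteness, the dominfo pattern George describes looks roughly like
this from a libxc caller's side (a minimal sketch; signatures follow libxc
of this era, error handling omitted, and xch is an already-opened
xc_interface handle):

    #include <xenctrl.h>

    /* One domain: backed by the XEN_DOMCTL_getdomaininfo domctl. */
    xc_dominfo_t info;
    if ( xc_domain_getinfo(xch, domid, 1, &info) == 1 &&
         info.domid == domid )
        /* consume info.nr_pages, info.cpu_time, ... */;

    /* All domains: backed by a sysctl which iterates over domains
     * inside the hypervisor, reusing the domctl's fill logic. */
    xen_domctl_getdomaininfo_t list[64];
    int n = xc_domain_getinfolist(xch, 0, 64, list);
    for ( int i = 0; i < n; i++ )
        /* consume list[i].domain, list[i].tot_pages, ... */;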
> > > >> It is not a fair comparison, given the completely different nature of
> > > >> the domctls in question.  XEN_DOMCTL_getdomaininfo is doing very little
> > > >> more than reading specific bits of data out of the appropriate struct
> > > >> domain and its struct vcpu's which can trivially be done by the cpu
> > > >> handling the hypercall.
> > > >>
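
To illustrate why such a handler is cheap: it only copies a few fields out
of structures the CPU handling the hypercall can already read.  This sketch
is illustrative only, not the actual Xen source:

    /* Sketch of a getdomaininfo-style handler: direct field reads,
     * no allocation, no IPIs, no waiting on other CPUs. */
    static void fill_dominfo(const struct domain *d,
                             struct xen_domctl_getdomaininfo *info)
    {
        info->domain    = d->domain_id;
        info->tot_pages = d->tot_pages;
        info->max_pages = d->max_pages;
        /* ... a handful more direct reads from d and its vcpus ... */
    }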
> > > >>> Is there a problem with doing the same thing here?  Or with starting
> > > >>> with a domctl, and then creating a sysctl that iterates over all
> > > >>> domains (calling the domctl internally) if we measure the domctl
> > > >>> to be too slow for many callers?
> > > >>>
> > > >>>  -George
> > > >> My problem is not with the domctl per-se.
> > > >>
> > > >> My problem is that this is not a QoS design discussion;  this is an
> > > >> email thread about a specific QoS implementation which is not answering
> > > >> the concerns raised against it to the satisfaction of people raising the
> > > >> concerns.
> > > >>
> > > >> The core argument here is that a statement of "OpenStack want to get a
> > > >> piece of QoS data back from libvirt/xenapi when querying a specific
> > > >> domain" is being used to justify implementing the hypercall in an
> > > >> identical fashion.
> > > >>
> > > >> This is not a libxl design; this is a single user story forming part of
> > > >> the requirement "I as a cloud service provider would like QoS
> > > >> information for each VM to be available to my
> > > >> $CHOSEN_ORCHESTRATION_SOFTWARE so I can {differentially charge
> > > >> customers, balance my load more evenly, etc}".
> > > >>
> > > >> The only valid justification for implementing a brand new hypercall in a
> > > >> certain way is "Because $THIS_CERTAIN_WAY is the $MOST_SENSIBLE way to
> > > >> perform the actions I need to perform", for appropriate
> > > >> substitutions.  Not "because it is the same way I want to hand this
> > > >> information off at the higher level".
> > > >>
> > > >> As part of this design discussion, I have raised a concern saying "I
> > > >> believe the use case of having a stats gathering daemon in dom0 has not
> > > >> been appropriately considered", qualified with "If you were to use the
> > > >> domctl as currently designed from a stats gathering daemon, you would
> > > >> cripple Xen with the overhead".
> > > >>
> > > >> Going back to the original use, xenapi has a stats daemon for these
> > > >> things.  It has an rpc interface so a query given a specific domain can
> > > >> return some or all data for that domain, but it very definitely does not
> > > >> translate each request into a hypercall for the requested information.
> > > >> I have no real experience with libvirt, so can't comment on stats
> > > >> gathering in that context.
> > > >>
> > > >> I have proposed an alternative Xen->libxc interface designed with a
> > > >> stats daemon in mind, explaining why I believe it has lower overheads
> > > >> for Xen and why it is more in line with what I expect ${VENDOR}Stack
> > > >> to actually want.
> > > >>
> > > >> I am now waiting for a reasoned rebuttal which has more content than
> > > >> "because there are a set of patches which already implement it in this
> > > >> way".
> > > > No, I don't have a patch for the domctl implementation.
> > > >
> > > > In the past half year, all of the previous v1-v10 patches were
> > > > implemented the sysctl way, and based on those, people raised a lot
> > > > of comments (large memory size, runtime non-zero-order memory
> > > > allocation, page sharing with user space, special CPU online/offline
> > > > logic, etc.), which made the platform QoS implementation more and
> > > > more complex in Xen.  That's why I am proposing the domctl method,
> > > > which makes things easier.
> > > >
> > > > I don't have anything more to argue or rebut, and if you prefer the
> > > > sysctl, I can continue to work out a v11, v12 or more, presenting
> > > > the big two-dimensional array to the end user and letting them
> > > > extract the data they actually need, still including the extra CPU
> > > > online/offline logic to handle the runtime allocation of the QoS
> > > > resources.
> > > >
> > > > Thanks,
> > > > Dongxiao
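
For reference, the sysctl variant Dongxiao describes would hand back
something of roughly this shape (all names here are invented for
illustration; the point is the single bulk snapshot that the consumer
then filters):

    /* Hypothetical layout of the "big two-dimensional array". */
    struct pqos_snapshot {
        uint32_t nr_sockets;
        uint32_t nr_doms;
        uint64_t occupancy[];   /* nr_sockets * nr_doms entries */
    };

    /* Caller-side extraction of one (socket, domain) cell. */
    static uint64_t occupancy_for(const struct pqos_snapshot *s,
                                  uint32_t socket, uint32_t dom_idx)
    {
        return s->occupancy[socket * s->nr_doms + dom_idx];
    }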
> > >
> > > I am sorry - I was not trying to make an argument for one of the
> > > proposed mechanisms over the other.  The point I was trying to make
> > > (which on further consideration wasn't as clear as I had hoped) is
> > > that you cannot possibly design the hypercall interface before knowing
> > > the library use cases, and there is a clear lack of understanding (or
> > > at least communication) in this regard.
> > >
> > >
> > > So, starting from the top. OpenStack want QoS information, and want to
> > > get it from libvirt/XenAPI.  I think libvirt/XenAPI is the correct level
> > > to do this at, and think exactly the same would apply to CloudStack as
> > > well.  The relevant part of this is the question "how does
> > > libvirt/XenAPI collect stats".
> > >
> > > XenAPI collects stats with the RRD Daemon, running in dom0.  It has an
> > > internal database of statistics, and hands data from this database out
> > > upon RPC requests.  It also has threads whose purpose is to periodically
> > > refresh the data in the database.  This provides a disconnect between
> > > ${FOO}Stack requesting stats for a domain and the logic to obtain stats
> > > for that domain.
> > >
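
The disconnect Andrew describes is the classic cache-and-refresh split.  A
minimal sketch of the pattern (names invented; the real rrdd is part of the
OCaml xapi toolstack, this just shows the shape):

    #include <pthread.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Cache refreshed on a timer; RPC queries never hypercall.
     * struct qos_cache, collect_all_domains() and cache_lookup()
     * are placeholders. */
    static struct qos_cache cache;
    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *refresh_thread(void *arg)
    {
        for ( ;; )
        {
            /* One bulk collection pass (hypercalls live here only). */
            struct qos_cache fresh = collect_all_domains();
            pthread_mutex_lock(&cache_lock);
            cache = fresh;
            pthread_mutex_unlock(&cache_lock);
            sleep(5);                     /* refresh period */
        }
    }

    /* RPC handler: a lookup in dom0 memory, nothing more. */
    uint64_t handle_query(uint32_t domid)
    {
        pthread_mutex_lock(&cache_lock);
        uint64_t v = cache_lookup(&cache, domid);
        pthread_mutex_unlock(&cache_lock);
        return v;
    }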
> > > I am however unfamiliar with libvirt in this regard.  Could you please
> > > explain how the libvirt daemon deals with stats?
> >
> > I am not a libvirt expert either.
> > From consulting others who work on libvirt: libvirt doesn't maintain
> > the domain status itself, but just exposes APIs for the upper
> > cloud/OpenStack layer to query, and these APIs accept the domain id as
> > an input parameter.
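
That matches the shape of libvirt's public API: callers look a domain up
by id and query it directly.  A sketch using real libvirt entry points
(error handling and a concrete domid omitted):

    #include <libvirt/libvirt.h>

    virConnectPtr conn = virConnectOpenReadOnly("xen:///");
    virDomainPtr dom = virDomainLookupByID(conn, domid);

    /* Each such call is a per-domain query forwarded towards the
     * hypervisor layer, not an answer from a libvirt-side database. */
    virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
    int n = virDomainMemoryStats(dom, stats,
                                 VIR_DOMAIN_MEMORY_STAT_NR, 0);

    virDomainFree(dom);
    virConnectClose(conn);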
> 
> Hi Andrew,
> 
> Do you have any further thoughts on this libvirt usage?

Ping...

Thanks,
Dongxiao

> 
> Thanks,
> Dongxiao
> 
> >
> > Thanks,
> > Dongxiao
> >
> > >
> > > ~Andrew
