From: "Xia, Hongyan" <hongyxia@amazon.com>
To: "wl@xen.org" <wl@xen.org>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Grall, Julien" <jgrall@amazon.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"roger.pau@citrix.com" <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] x86/domain_page: implement pure per-vCPU mapping infrastructure
Date: Fri, 21 Feb 2020 14:40:14 +0000	[thread overview]
Message-ID: <8a4e4fa48aafa565da6eb7f2905d0a21be65901c.camel@amazon.com> (raw)
In-Reply-To: <5a80693d-87e3-26a5-0c80-fba7d0212260@citrix.com>

On Fri, 2020-02-21 at 13:02 +0000, Andrew Cooper wrote:
> On 21/02/2020 12:52, Xia, Hongyan wrote:
> > On Fri, 2020-02-21 at 11:50 +0000, Wei Liu wrote:
> > > Given that:
> > > 
> > > 1. mapcache_domain is now a structure with only one member.
> > > 2. ents is a constant throughout the domain's lifecycle.
> > > 
> > > You can replace mapcache_domain with a boolean --
> > > mapcache_mapping_populated (?) in arch.pv.
> > > 
> > > If I'm not mistaken, the size of the mapping is derived from the vcpu
> > > being initialised, so a further improvement is to lift the mapping
> > > creation out of mapcache_vcpu_init.
> > 
> > But can't you call XEN_DOMCTL_max_vcpus on a running domain to
> > increase its max_vcpus count, so that ents is not constant?
> 
> The comments suggest that, but it has never been implemented, and I'm in
> the process of purging the ability.
> 
> Even now, max is passed into domain_create(), and the
> XEN_DOMCTL_max_vcpus call has to match exactly what was passed at
> create.  As soon as the {domain,vcpu}_destroy() functions become
> properly idempotent (so we can unwind from midway through after an
> -ENOMEM/etc), XEN_DOMCTL_max_vcpus will be dropped completely.
> 
> d->max_vcpus is set early during construction, and remains constant for
> the lifetime of the domain.

Thanks for the clarification. This simplifies things quite a bit.
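
Just to check I have the right shape in mind, here is a rough toy sketch
of where this seems to lead. To be clear, this is not the actual patch:
mapcache_mapping_populated is the name Wei floated, and everything else
below (the *_model types, populate_mapping, MAPCACHE_VCPU_ENTRIES) is a
placeholder made up for illustration. The point is just that, with
d->max_vcpus fixed at domain creation, the per-domain state collapses to
a single boolean and the mapping only ever needs to be populated once.

/*
 * Toy model only, not Xen code.  All names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAPCACHE_VCPU_ENTRIES 16        /* made-up per-vCPU slot count */

struct pv_domain_model {
    /* Replaces the old one-member mapcache_domain structure. */
    bool mapcache_mapping_populated;
};

struct domain_model {
    unsigned int max_vcpus;             /* fixed at creation, never grows */
    struct pv_domain_model pv;
};

/* Stand-in for the real page-table population; just reports the size. */
static int populate_mapping(unsigned int ents)
{
    printf("populating mapcache for %u entries\n", ents);
    return 0;
}

/*
 * Per-vCPU init.  Because max_vcpus is constant for the lifetime of the
 * domain, the total number of entries is known up front, so the mapping
 * is created once, for the first vCPU, and skipped afterwards.
 */
static int mapcache_vcpu_init_model(struct domain_model *d)
{
    unsigned int ents = d->max_vcpus * MAPCACHE_VCPU_ENTRIES;

    if (d->pv.mapcache_mapping_populated)
        return 0;

    if (populate_mapping(ents))
        return -1;

    d->pv.mapcache_mapping_populated = true;
    return 0;
}

int main(void)
{
    struct domain_model d = { .max_vcpus = 4 };

    /* Initialising every vCPU still populates the mapping exactly once. */
    for (unsigned int i = 0; i < d.max_vcpus; i++)
        mapcache_vcpu_init_model(&d);

    return 0;
}

(The main() is only there to show the flag behaviour; in the real code the
check would sit in mapcache_vcpu_init or, per Wei's suggestion, the
population could be lifted out of vCPU init entirely and done once at
domain creation.)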

Hongyan


Thread overview: 13+ messages
2020-02-06 18:58 [Xen-devel] [PATCH v2] x86/domain_page: implement pure per-vCPU mapping infrastructure Hongyan Xia
2020-02-21 11:50 ` Wei Liu
2020-02-21 12:52   ` Xia, Hongyan
2020-02-21 13:02     ` Andrew Cooper
2020-02-21 14:40       ` Xia, Hongyan [this message]
2020-02-21 13:31     ` Jan Beulich
2020-02-21 14:36       ` Wei Liu
2020-02-21 14:55         ` Jan Beulich
2020-02-21 14:58           ` Wei Liu
2020-02-21 15:08             ` Jan Beulich
2020-02-21 14:52       ` Xia, Hongyan
2020-02-21 14:59         ` Jan Beulich
2020-02-21 14:39     ` Wei Liu
