xen-devel.lists.xenproject.org archive mirror
From: Dario Faggioli <dfaggioli@suse.com>
To: "Jürgen Groß" <jgross@suse.com>,
	"Sergey Dyasli" <sergey.dyasli@citrix.com>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode
Date: Tue, 24 Mar 2020 15:55:32 +0100	[thread overview]
Message-ID: <52ae93739b1176f535fabff8932230edbfa6ce7d.camel@suse.com> (raw)
In-Reply-To: <af97b12c-e1f5-0278-8599-96264dc57101@suse.com>


On Tue, 2020-03-24 at 15:22 +0100, Jürgen Groß wrote:
> On 24.03.20 14:34, Sergey Dyasli wrote:
> > I did some experiments and noticed the following inconsistencies:
> > 
> >    1. xl vcpu-pin 5 0 0
> >       Windows 10 (64-bit) (1)    5     0    0   -b-   1644.0  0 / all
> >       Windows 10 (64-bit) (1)    5     1    1   -b-   1650.1  0 / all
> >                                             ^
> >       CPU 1 doesn't match the reported hard-affinity of 0. Should this
> >       command set the hard-affinity of vCPU 1 to 1? Or should it be
> >       0-1 for both vCPUs instead?
> > 
> > 
I think this is fine. As for improving how this is reported back to the
user, I'd go for solution no. 3 proposed by Juergen (below).

> >    2. xl vcpu-pin 5 0 1
> >       libxl: error: libxl_sched.c:62:libxl__set_vcpuaffinity: Domain 5:
> >       Setting vcpu affinity: Invalid argument
> > 
> >       This is expected, but perhaps it needs documenting somewhere?
> > 
I'm not against clearer error reporting. It would mean that libxl must
have a way to tell that the pinning failed because it was not being
done to a "master CPU".

I guess it's doable, but perhaps it's not the top priority, assuming we
have (and put in place, if we still don't) good documentation on how
pinning works in this operational mode.

That would make a good article/blog post, I think.

> >    3. xl vcpu-pin 5 0 1-2
> >       Windows 10 (64-bit) (1)    5     0    2   -b-   1646.7  1-2 / all
> >       Windows 10 (64-bit) (1)    5     1    3   -b-   1651.6  1-2 / all
> >                                             ^
> >       Here is a CPU / affinity mismatch again, but the more interesting
> >       fact is that setting 1-2 is allowed at all; I'd expect CPU would
> >       never be set to 1 with such settings.
> > 
This is the situation I'm most concerned about. Mostly because I think
a user might be surprised to see the command (1) not failing and (2)
having the effect that it has.

I think that, in this case, we should either fail or adjust the
affinity to 2-3. If we do the latter, we should inform the user about
it. There's something similar in libxl already (related to soft and
hard affinity), where we set a mask, then check what's actually been
set up by Xen and act accordingly.
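
For illustration, here's a minimal, hypothetical sketch (NOT actual
libxl/Xen code; the helper name and sibling layout are my assumptions)
of how a requested hard-affinity mask could be adjusted to core
granularity, assuming 2 threads per core, threads 2k and 2k+1 being
siblings, and the even thread acting as the "master CPU":

```c
#include <stdint.h>

/* Hypothetical helper, not actual Xen/libxl code: compute the
 * core-granular effective mask for a requested hard-affinity mask.
 * Assumptions: 2 threads per core, siblings are (2k, 2k+1), and the
 * even thread is the "master CPU".  Each master present in the
 * requested mask drags its sibling in; a stray sibling whose master
 * is absent drops out entirely. */
static uint64_t effective_core_mask(uint64_t requested)
{
    /* Keep only the even-numbered (master) threads... */
    uint64_t masters = requested & 0x5555555555555555ULL;
    /* ...then add each master's odd sibling back in. */
    return masters | (masters << 1);
}
```

Under these assumptions, requesting CPU 0 yields 0-1 (matching case 1
above), requesting 1-2 yields 2-3 (the adjustment discussed here), and
requesting CPU 1 alone yields an empty mask, which would explain the
EINVAL of case 2.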

Thoughts?

I'd go for a mix of 1 and 3, i.e., I'd do:

> 1. As today, documenting the output.
>     Not very nice IMO, but the least effort.
> 
This, i.e., we definitely need more documentation and we need to make
sure it's visible enough.

> 2. Just print one line for each virtual cpu/core/socket, like:
>     Windows 10 (64-bit) (1)    5     0-1   0-1   -b-    1646.7  0-1 / all
>     This has the disadvantage of dropping the per-vcpu time in favor
>     of per-vcore time; OTOH this is reflecting reality.
> 
> 3. Print the effective pinnings:
>     Windows 10 (64-bit) (1)    5     0     0     -b-    1646.7  0   / all
>     Windows 10 (64-bit) (1)    5     1     1     -b-    1646.7  1   / all
>     Should be rather easy to do.
> 
And this: i.e., I'd always report the effective mapping.

I would actually go as far as changing the mapping we've been given and
storing the effective one(s) in `cpu_hard_affinity`, etc., in Xen. Of
course, as said above, we'd need to inform the user that this has
happened.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



Thread overview: 3+ messages
2020-03-24 13:34 [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode Sergey Dyasli
2020-03-24 14:22 ` Jürgen Groß
2020-03-24 14:55   ` Dario Faggioli [this message]
