* [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode
From: Sergey Dyasli @ 2020-03-24 13:34 UTC
  To: Juergen Gross
  Cc: Sergey Dyasli, Wei Liu, Andrew Cooper, George Dunlap,
	Dario Faggioli, Jan Beulich, xen-devel, Ian Jackson,
	Roger Pau Monne

Hi Juergen,

I've noticed that there is no documentation about how vcpu-pin is supposed to
work with core scheduling enabled. I did some experiments and observed the
following inconsistencies:

  1. xl vcpu-pin 5 0 0
     Windows 10 (64-bit) (1)              5     0    0   -b-    1644.0  0 / all
     Windows 10 (64-bit) (1)              5     1    1   -b-    1650.1  0 / all
                                                     ^                  ^
     CPU 1 doesn't match the reported hard-affinity of 0. Should this command
     set the hard-affinity of vCPU 1 to 1? Or should it be 0-1 for both vCPUs
     instead?


  2. xl vcpu-pin 5 0 1
     libxl: error: libxl_sched.c:62:libxl__set_vcpuaffinity: Domain 5:Setting vcpu affinity: Invalid argument
     This is expected but perhaps needs documenting somewhere?


  3. xl vcpu-pin 5 0 1-2
     Windows 10 (64-bit) (1)              5     0    2   -b-    1646.7  1-2 / all
     Windows 10 (64-bit) (1)              5     1    3   -b-    1651.6  1-2 / all
                                                     ^                  ^^^
     Here is a CPU / affinity mismatch again, but the more interesting fact
     is that setting 1-2 is allowed at all; I'd expect the CPU to never be
     set to 1 with such a setting.

Please let me know what you think about the above cases.

--
Thanks,
Sergey



* Re: [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode
From: Jürgen Groß @ 2020-03-24 14:22 UTC
  To: Sergey Dyasli
  Cc: Wei Liu, Andrew Cooper, George Dunlap, Dario Faggioli,
	Jan Beulich, xen-devel, Ian Jackson, Roger Pau Monne

On 24.03.20 14:34, Sergey Dyasli wrote:
> Hi Juergen,
> 
> I've noticed that there is no documentation about how vcpu-pin is supposed to
> work with core scheduling enabled. I did some experiments and observed the
> following inconsistencies:
> 
>    1. xl vcpu-pin 5 0 0
>       Windows 10 (64-bit) (1)              5     0    0   -b-    1644.0  0 / all
>       Windows 10 (64-bit) (1)              5     1    1   -b-    1650.1  0 / all
>                                                       ^                  ^
>       CPU 1 doesn't match the reported hard-affinity of 0. Should this
>       command set the hard-affinity of vCPU 1 to 1? Or should it be 0-1
>       for both vCPUs instead?
> 
> 
>    2. xl vcpu-pin 5 0 1
>       libxl: error: libxl_sched.c:62:libxl__set_vcpuaffinity: Domain 5:Setting vcpu affinity: Invalid argument
>       This is expected but perhaps needs documenting somewhere?
> 
> 
>    3. xl vcpu-pin 5 0 1-2
>       Windows 10 (64-bit) (1)              5     0    2   -b-    1646.7  1-2 / all
>       Windows 10 (64-bit) (1)              5     1    3   -b-    1651.6  1-2 / all
>                                                       ^                  ^^^
>       Here is a CPU / affinity mismatch again, but the more interesting
>       fact is that setting 1-2 is allowed at all; I'd expect the CPU to
>       never be set to 1 with such a setting.
> 
> Please let me know what you think about the above cases.

I think all of the effects can be explained by the way pinning with core
scheduling is implemented. This does not mean that the information presented
to the user shouldn't be adapted.

Basically, pinning any vcpu will just affect the "master" vcpu of the
virtual core (sibling 0). Any setting will happily be accepted as long as
the "master" cpu of at least one core is in the resulting set of cpus.

All vcpus of a virtual core share the same pinnings.

I think this explains all of the above scenarios.
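
To illustrate, here is a minimal sketch of the semantics described above
(not the actual Xen code; the cpumask is simplified to a plain bitmask and
is_master_cpu() is a made-up helper):

    #include <stdbool.h>

    #define NR_CPUS 8
    typedef unsigned long cpumask_t;     /* bit i set == cpu i in the mask */

    /* Sibling 0 of each core acts as the "master" cpu. */
    static bool is_master_cpu(unsigned int cpu, unsigned int cpus_per_core)
    {
        return (cpu % cpus_per_core) == 0;
    }

    /* A pinning request is accepted iff the mask contains at least one
     * "master" cpu: with 2 cpus per core, {1} is rejected (case 2 above)
     * while {1,2} is accepted because of cpu 2 (case 3 above). */
    static bool pin_request_valid(cpumask_t mask, unsigned int cpus_per_core)
    {
        for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
            if ((mask & (1UL << cpu)) && is_master_cpu(cpu, cpus_per_core))
                return true;
        return false;
    }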

IMO there are the following possibilities for reporting those pinnings
to the user:

1. As today, documenting the output.
    Not very nice IMO, but the least effort.

2. Just print one line for each virtual cpu/core/socket, like:
    Windows 10 (64-bit) (1)    5     0-1   0-1   -b-    1646.7  0-1 / all
    This has the disadvantage of dropping the per-vcpu time in favor of
    per-vcore time, OTOH this is reflecting reality.

3. Print the effective pinnings:
    Windows 10 (64-bit) (1)    5     0     0     -b-    1646.7  0   / all
    Windows 10 (64-bit) (1)    5     1     1     -b-    1646.7  1   / all
    Should be rather easy to do.
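
    A rough illustration of deriving those effective pinnings, reusing the
    simplified bitmask representation from the sketch above (illustrative
    only, not the real implementation):

        /* Move each selected "master" bit to the corresponding sibling's
         * cpu: a stored mask of {0} yields {0} for sibling 0 and {1} for
         * sibling 1, matching the output shown in option 3. */
        static cpumask_t effective_pinning(cpumask_t stored,
                                           unsigned int sibling,
                                           unsigned int cpus_per_core)
        {
            cpumask_t eff = 0;

            for (unsigned int cpu = 0; cpu < NR_CPUS; cpu += cpus_per_core)
                if (stored & (1UL << cpu))
                    eff |= 1UL << (cpu + sibling);
            return eff;
        }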

Thoughts?


Juergen



* Re: [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode
From: Dario Faggioli @ 2020-03-24 14:55 UTC
  To: Jürgen Groß, Sergey Dyasli
  Cc: Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, xen-devel,
	Ian Jackson, Roger Pau Monne


On Tue, 2020-03-24 at 15:22 +0100, Jürgen Groß wrote:
> On 24.03.20 14:34, Sergey Dyasli wrote:
> > I did some experiments and observed the following inconsistencies:
> > 
> >    1. xl vcpu-pin 5 0 0
> >       Windows 10 (64-bit) (1)              5     0    0   -b-    1644.0  0 / all
> >       Windows 10 (64-bit) (1)              5     1    1   -b-    1650.1  0 / all
> >                                                       ^                  ^
> >       CPU 1 doesn't match the reported hard-affinity of 0. Should this
> >       command set the hard-affinity of vCPU 1 to 1? Or should it be 0-1
> >       for both vCPUs instead?
> > 
I think this is fine. For improving how this is reported back to users, I'd
go for option 3 as proposed by Juergen (below).

> >    2. xl vcpu-pin 5 0 1
> >       libxl: error: libxl_sched.c:62:libxl__set_vcpuaffinity: Domain 5:Setting vcpu affinity: Invalid argument
> >       This is expected but perhaps needs documenting somewhere?
> > 
I'm not against clearer error reporting. It would mean that libxl must have
a way to tell that pinning failed because it was not being done to a
"master" CPU.

I guess it's doable, but perhaps it's not the top priority, assuming we have
(or put in place, if we don't yet) good documentation on how pinning works
in this operational mode.

That would make a good article/blog post, I think.

> >    3. xl vcpu-pin 5 0 1-2
> >       Windows 10 (64-bit) (1)              5     0    2   -b-    1646.7  1-2 / all
> >       Windows 10 (64-bit) (1)              5     1    3   -b-    1651.6  1-2 / all
> >                                                       ^                  ^^^
> >       Here is a CPU / affinity mismatch again, but the more interesting
> >       fact is that setting 1-2 is allowed at all; I'd expect the CPU to
> >       never be set to 1 with such a setting.
> > 
This is the situation I'm most concerned about. Mostly because I think a
user might be surprised to see the command (1) not failing and (2) having
the effect that it has.

I think that, in this case, we should either fail or adjust the affinity to
2-3. If we do the latter, we should inform the user about that. There's
something similar in libxl already (related to soft and hard affinity, where
we set a mask, then check what's actually been set up by Xen and act
accordingly).
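
To make the adjustment concrete, here's a sketch of what I have in mind,
using the same simplified bitmask representation as Juergen's sketch (again,
purely illustrative and not the actual libxl/Xen code): a core is selected
iff its "master" cpu is in the requested mask, and selected cores are
widened to all of their siblings, so {1,2} becomes {2,3}.

    #define NR_CPUS 8
    typedef unsigned long cpumask_t;    /* bit i set == cpu i in the mask */

    /* Hypothetical adjustment: with 2 cpus per core, a request of {1,2}
     * selects only core 1 (master cpu 2) and is widened to {2,3}. */
    static cpumask_t adjust_to_core_granularity(cpumask_t requested,
                                                unsigned int cpus_per_core)
    {
        cpumask_t full_core = (1UL << cpus_per_core) - 1;
        cpumask_t adjusted = 0;

        for (unsigned int cpu = 0; cpu < NR_CPUS; cpu += cpus_per_core)
            if (requested & (1UL << cpu))         /* master selected? */
                adjusted |= full_core << cpu;     /* take the whole core */
        return adjusted;
    }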

Thoughts?

I'd go for a mix of 1 and 3, i.e., I'd do:

> 1. As today, documenting the output.
>     Not very nice IMO, but the least effort.
> 
This, i.e., we definitely need more documentation and we need to make
sure it's visible enough.

> 2. Just print one line for each virtual cpu/core/socket, like:
>     Windows 10 (64-bit) (1)    5     0-1   0-1   -b-    1646.7  0-1 / all
>     This has the disadvantage of dropping the per-vcpu time in favor of
>     per-vcore time, OTOH this is reflecting reality.
> 
> 3. Print the effective pinnings:
>     Windows 10 (64-bit) (1)    5     0     0     -b-    1646.7  0   / all
>     Windows 10 (64-bit) (1)    5     1     1     -b-    1646.7  1   / all
>     Should be rather easy to do.
> 
And this: i.e., I'd always report the effective mapping.

I'd actually go as far as changing the mapping we've been given and storing
the effective one(s) in `cpu_hard_affinity`, etc., in Xen. Of course, as
said above, we'd need to inform the user that this has happened.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

