* [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
@ 2018-09-06 14:01 Andrew Cooper
  2018-09-07  8:33 ` Jan Beulich
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Andrew Cooper @ 2018-09-06 14:01 UTC (permalink / raw)
  To: Xen-devel
  Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Dario Faggioli,
	Julien Grall, Jan Beulich, Roger Pau Monné

alloc_vcpu()'s call to domain_update_node_affinity() has existed for a decade,
but its effort is mostly wasted.

alloc_vcpu() is called in a loop for each vcpu, bringing them into existence.
The affinity masks still hold their default values, which is all CPUs in
general, or a single-processor mask for pinned domains.

Furthermore, domain_update_node_affinity() itself loops over all vcpus
accumulating the masks, making it a scalability concern with large numbers of
vcpus.

Move it to be called once after all vcpus are constructed, which has the same
net effect, but with fewer intermediate memory allocations and less cpumask
arithmetic.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Dario Faggioli <dfaggioli@suse.com>

This perhaps wants backporting to the maintenance trees, which is why I've
rebased it backwards over my other construction changes.
---
 xen/arch/arm/domain_build.c   | 2 ++
 xen/arch/x86/hvm/dom0_build.c | 2 ++
 xen/arch/x86/pv/dom0_build.c  | 1 +
 xen/common/domain.c           | 3 ---
 xen/common/domctl.c           | 1 +
 5 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2a383c8..5389217 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2242,6 +2242,8 @@ int __init construct_dom0(struct domain *d)
             vcpu_switch_to_aarch64_mode(d->vcpu[i]);
     }
 
+    domain_update_node_affinity(d);
+
     v->is_initialised = 1;
     clear_bit(_VPF_down, &v->pause_flags);
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 22e335f..c63d7f0 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -600,6 +600,8 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
             cpu = p->processor;
     }
 
+    domain_update_node_affinity(d);
+
     rc = arch_set_info_hvm_guest(v, &cpu_ctx);
     if ( rc )
     {
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 96ff0ee..44418b2 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -709,6 +709,7 @@ int __init dom0_construct_pv(struct domain *d,
             cpu = p->processor;
     }
 
+    domain_update_node_affinity(d);
     d->arch.paging.mode = 0;
 
     /* Set up CR3 value for write_ptbase */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 78c450e..6229ba7 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -193,9 +193,6 @@ struct vcpu *alloc_vcpu(
     /* Must be called after making new vcpu visible to for_each_vcpu(). */
     vcpu_check_shutdown(v);
 
-    if ( !is_idle_domain(d) )
-        domain_update_node_affinity(d);
-
     return v;
 }
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ee0983d..faf26e7 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -590,6 +590,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                 goto maxvcpu_out;
         }
 
+        domain_update_node_affinity(d);
         ret = 0;
 
     maxvcpu_out:
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
  2018-09-06 14:01 [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction Andrew Cooper
@ 2018-09-07  8:33 ` Jan Beulich
  2018-09-07  8:40 ` Wei Liu
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Jan Beulich @ 2018-09-07  8:33 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, Dario Faggioli, Julien Grall,
	Xen-devel, Roger Pau Monne

>>> On 06.09.18 at 16:01, <andrew.cooper3@citrix.com> wrote:
> alloc_vcpu()'s call to domain_update_node_affinity() has existed for a decade,
> but its effort is mostly wasted.
> 
> alloc_vcpu() is called in a loop for each vcpu, bringing them into existence.
> The values of the affinity masks are still default, which is allcpus in
> general, or a processor singleton for pinned domains.
> 
> Furthermore, domain_update_node_affinity() itself loops over all vcpus
> accumulating the masks, making it a scalability concern with large numbers of
> vcpus.
> 
> Move it to be called once after all vcpus are constructed, which has the same
> net effect, but with fewer intermediate memory allocations and less cpumask
> arithmetic.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>





* Re: [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
  2018-09-06 14:01 [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction Andrew Cooper
  2018-09-07  8:33 ` Jan Beulich
@ 2018-09-07  8:40 ` Wei Liu
  2018-09-10 11:27 ` Julien Grall
  2018-09-11 16:14 ` Dario Faggioli
  3 siblings, 0 replies; 5+ messages in thread
From: Wei Liu @ 2018-09-07  8:40 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, Xen-devel, Julien Grall,
	Jan Beulich, Dario Faggioli, Roger Pau Monné

On Thu, Sep 06, 2018 at 03:01:35PM +0100, Andrew Cooper wrote:
> alloc_vcpu()'s call to domain_update_node_affinity() has existed for a decade,
> but its effort is mostly wasted.
> 
> alloc_vcpu() is called in a loop for each vcpu, bringing them into existence.
> The values of the affinity masks are still default, which is allcpus in
> general, or a processor singleton for pinned domains.
> 
> Furthermore, domain_update_node_affinity() itself loops over all vcpus
> accumulating the masks, making it a scalability concern with large numbers of
> vcpus.
> 
> Move it to be called once after all vcpus are constructed, which has the same
> net effect, but with fewer intermediate memory allocations and less cpumask
> arithmetic.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>



* Re: [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
  2018-09-06 14:01 [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction Andrew Cooper
  2018-09-07  8:33 ` Jan Beulich
  2018-09-07  8:40 ` Wei Liu
@ 2018-09-10 11:27 ` Julien Grall
  2018-09-11 16:14 ` Dario Faggioli
  3 siblings, 0 replies; 5+ messages in thread
From: Julien Grall @ 2018-09-10 11:27 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Dario Faggioli, Stefano Stabellini, Wei Liu, Jan Beulich,
	Roger Pau Monné

Hi Andrew,

On 06/09/18 15:01, Andrew Cooper wrote:
> alloc_vcpu()'s call to domain_update_node_affinity() has existed for a decade,
> but its effort is mostly wasted.
> 
> alloc_vcpu() is called in a loop for each vcpu, bringing them into existence.
> The values of the affinity masks are still default, which is allcpus in
> general, or a processor singleton for pinned domains.
> 
> Furthermore, domain_update_node_affinity() itself loops over all vcpus
> accumulating the masks, making it a scalability concern with large numbers of
> vcpus.
> 
> Move it to be called once after all vcpus are constructed, which has the same
> net effect, but with fewer intermediate memory allocations and less cpumask
> arithmetic.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

For Arm bits:

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

-- 
Julien Grall



* Re: [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
  2018-09-06 14:01 [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-09-10 11:27 ` Julien Grall
@ 2018-09-11 16:14 ` Dario Faggioli
  3 siblings, 0 replies; 5+ messages in thread
From: Dario Faggioli @ 2018-09-11 16:14 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Jan Beulich,
	Roger Pau Monné



On Thu, 2018-09-06 at 15:01 +0100, Andrew Cooper wrote:
> alloc_vcpu()'s call to domain_update_node_affinity() has existed for a decade,
> but its effort is mostly wasted.
> 
> alloc_vcpu() is called in a loop for each vcpu, bringing them into existence.
> The values of the affinity masks are still default, which is allcpus in
> general, or a processor singleton for pinned domains.
> 
> Furthermore, domain_update_node_affinity() itself loops over all vcpus
> accumulating the masks, making it a scalability concern with large numbers of
> vcpus.
> 
> Move it to be called once after all vcpus are constructed, which has the same
> net effect, but with fewer intermediate memory allocations and less cpumask
> arithmetic.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


