* [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups
From: Juergen Gross @ 2020-01-08 15:23 UTC
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich,
	Stewart Hildebrand, Volodymyr Babchuk, Roger Pau Monné

Move all scheduler-related hypervisor code to xen/common/sched/ and
apply a number of cleanups on top.
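
For orientation, the resulting layout (matching the diffstat below) is:

  xen/common/sched/
      Kconfig  Makefile
      arinc653.c  compat.c  core.c  cpupool.c  credit.c  credit2.c
      null.c  private.h  rt.c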

Juergen Gross (9):
  xen/sched: move schedulers and cpupool coding to dedicated directory
  xen/sched: make sched-if.h really scheduler private
  xen/sched: cleanup sched.h
  xen/sched: remove special cases for free cpus in schedulers
  xen/sched: use scratch cpumask instead of allocating it on the stack
  xen/sched: replace null scheduler percpu-variable with pdata hook
  xen/sched: switch scheduling to bool where appropriate
  xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
  xen/sched: add const qualifier where appropriate

 MAINTAINERS                                        |   8 +-
 xen/arch/arm/domain.c                              |   6 +-
 xen/arch/x86/acpi/cpu_idle.c                       |  15 +-
 xen/arch/x86/cpu/mwait-idle.c                      |   8 +-
 xen/arch/x86/dom0_build.c                          |   5 +-
 xen/common/Kconfig                                 |  66 +-----
 xen/common/Makefile                                |   8 +-
 xen/common/domain.c                                |  70 ------
 xen/common/domctl.c                                | 135 +----------
 xen/common/rcupdate.c                              |   7 +-
 xen/common/sched/Kconfig                           |  65 ++++++
 xen/common/sched/Makefile                          |   7 +
 xen/common/{sched_arinc653.c => sched/arinc653.c}  |  15 +-
 xen/common/{compat/schedule.c => sched/compat.c}   |   2 +-
 xen/common/{schedule.c => sched/core.c}            | 246 ++++++++++++++++++---
 xen/common/{ => sched}/cpupool.c                   |  23 +-
 xen/common/{sched_credit.c => sched/credit.c}      |  65 +++---
 xen/common/{sched_credit2.c => sched/credit2.c}    |  85 +++----
 xen/common/{sched_null.c => sched/null.c}          | 105 ++++++---
 .../xen/sched-if.h => common/sched/private.h}      |  18 +-
 xen/common/{sched_rt.c => sched/rt.c}              | 109 +++++----
 xen/include/xen/domain.h                           |   3 +
 xen/include/xen/rcupdate.h                         |   3 -
 xen/include/xen/sched.h                            |  39 ++--
 24 files changed, 568 insertions(+), 545 deletions(-)
 create mode 100644 xen/common/sched/Kconfig
 create mode 100644 xen/common/sched/Makefile
 rename xen/common/{sched_arinc653.c => sched/arinc653.c} (99%)
 rename xen/common/{compat/schedule.c => sched/compat.c} (97%)
 rename xen/common/{schedule.c => sched/core.c} (92%)
 rename xen/common/{ => sched}/cpupool.c (97%)
 rename xen/common/{sched_credit.c => sched/credit.c} (97%)
 rename xen/common/{sched_credit2.c => sched/credit2.c} (98%)
 rename xen/common/{sched_null.c => sched/null.c} (92%)
 rename xen/{include/xen/sched-if.h => common/sched/private.h} (96%)
 rename xen/common/{sched_rt.c => sched/rt.c} (94%)

-- 
2.16.4



* [Xen-devel] [PATCH v2 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory
From: Juergen Gross @ 2020-01-08 15:23 UTC
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich,
	Stewart Hildebrand

Move sched*.c and cpupool.c to the new directory common/sched.
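
Note the COMPAT build wiring: core.c and compat.c include each other in
order to generate the compat variants of the hypercall handlers, so both
relative includes have to follow the files into the new directory.
Abridged from the hunks below:

  /* xen/common/sched/compat.c */
  #define do_poll    compat_poll
  #define sched_poll compat_sched_poll

  #include "core.c"        /* was "../schedule.c" */

  /* xen/common/sched/core.c */
  #ifdef CONFIG_COMPAT
  #include "compat.c"      /* was "compat/schedule.c" */
  #endif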

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- renamed sources (Dario Faggioli, Andrew Cooper)
---
 MAINTAINERS                                       |  8 +--
 xen/common/Kconfig                                | 66 +----------------------
 xen/common/Makefile                               |  8 +--
 xen/common/sched/Kconfig                          | 65 ++++++++++++++++++++++
 xen/common/sched/Makefile                         |  7 +++
 xen/common/{sched_arinc653.c => sched/arinc653.c} |  0
 xen/common/{compat/schedule.c => sched/compat.c}  |  2 +-
 xen/common/{schedule.c => sched/core.c}           |  2 +-
 xen/common/{ => sched}/cpupool.c                  |  0
 xen/common/{sched_credit.c => sched/credit.c}     |  0
 xen/common/{sched_credit2.c => sched/credit2.c}   |  0
 xen/common/{sched_null.c => sched/null.c}         |  0
 xen/common/{sched_rt.c => sched/rt.c}             |  0
 13 files changed, 80 insertions(+), 78 deletions(-)
 create mode 100644 xen/common/sched/Kconfig
 create mode 100644 xen/common/sched/Makefile
 rename xen/common/{sched_arinc653.c => sched/arinc653.c} (100%)
 rename xen/common/{compat/schedule.c => sched/compat.c} (97%)
 rename xen/common/{schedule.c => sched/core.c} (99%)
 rename xen/common/{ => sched}/cpupool.c (100%)
 rename xen/common/{sched_credit.c => sched/credit.c} (100%)
 rename xen/common/{sched_credit2.c => sched/credit2.c} (100%)
 rename xen/common/{sched_null.c => sched/null.c} (100%)
 rename xen/common/{sched_rt.c => sched/rt.c} (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index eaea4620e2..9d2ac631ba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -174,7 +174,7 @@ M:	Josh Whitehead <josh.whitehead@dornerworks.com>
 M:	Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
 S:	Supported
 L:	DornerWorks Xen-Devel <xen-devel@dornerworks.com>
-F:	xen/common/sched_arinc653.c
+F:	xen/common/sched/arinc653.c
 F:	tools/libxc/xc_arinc653.c
 
 ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE
@@ -212,7 +212,7 @@ CPU POOLS
 M:	Juergen Gross <jgross@suse.com>
 M:	Dario Faggioli <dfaggioli@suse.com>
 S:	Supported
-F:	xen/common/cpupool.c
+F:	xen/common/sched/cpupool.c
 
 DEVICE TREE
 M:	Stefano Stabellini <sstabellini@kernel.org>
@@ -378,13 +378,13 @@ RTDS SCHEDULER
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Meng Xu <mengxu@cis.upenn.edu>
 S:	Supported
-F:	xen/common/sched_rt.c
+F:	xen/common/sched/rt.c
 
 SCHEDULING
 M:	George Dunlap <george.dunlap@eu.citrix.com>
 M:	Dario Faggioli <dfaggioli@suse.com>
 S:	Supported
-F:	xen/common/sched*
+F:	xen/common/sched/
 
 SEABIOS UPSTREAM
 M:	Wei Liu <wl@xen.org>
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index b3d161d057..9d6d09eb37 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -275,71 +275,7 @@ config ARGO
 
 	  If unsure, say N.
 
-menu "Schedulers"
-	visible if EXPERT = "y"
-
-config SCHED_CREDIT
-	bool "Credit scheduler support"
-	default y
-	---help---
-	  The traditional credit scheduler is a general purpose scheduler.
-
-config SCHED_CREDIT2
-	bool "Credit2 scheduler support"
-	default y
-	---help---
-	  The credit2 scheduler is a general purpose scheduler that is
-	  optimized for lower latency and higher VM density.
-
-config SCHED_RTDS
-	bool "RTDS scheduler support (EXPERIMENTAL)"
-	default y
-	---help---
-	  The RTDS scheduler is a soft and firm real-time scheduler for
-	  multicore, targeted for embedded, automotive, graphics and gaming
-	  in the cloud, and general low-latency workloads.
-
-config SCHED_ARINC653
-	bool "ARINC653 scheduler support (EXPERIMENTAL)"
-	default DEBUG
-	---help---
-	  The ARINC653 scheduler is a hard real-time scheduler for single
-	  cores, targeted for avionics, drones, and medical devices.
-
-config SCHED_NULL
-	bool "Null scheduler support (EXPERIMENTAL)"
-	default y
-	---help---
-	  The null scheduler is a static, zero overhead scheduler,
-	  for when there always are less vCPUs than pCPUs, typically
-	  in embedded or HPC scenarios.
-
-choice
-	prompt "Default Scheduler?"
-	default SCHED_CREDIT2_DEFAULT
-
-	config SCHED_CREDIT_DEFAULT
-		bool "Credit Scheduler" if SCHED_CREDIT
-	config SCHED_CREDIT2_DEFAULT
-		bool "Credit2 Scheduler" if SCHED_CREDIT2
-	config SCHED_RTDS_DEFAULT
-		bool "RT Scheduler" if SCHED_RTDS
-	config SCHED_ARINC653_DEFAULT
-		bool "ARINC653 Scheduler" if SCHED_ARINC653
-	config SCHED_NULL_DEFAULT
-		bool "Null Scheduler" if SCHED_NULL
-endchoice
-
-config SCHED_DEFAULT
-	string
-	default "credit" if SCHED_CREDIT_DEFAULT
-	default "credit2" if SCHED_CREDIT2_DEFAULT
-	default "rtds" if SCHED_RTDS_DEFAULT
-	default "arinc653" if SCHED_ARINC653_DEFAULT
-	default "null" if SCHED_NULL_DEFAULT
-	default "credit2"
-
-endmenu
+source "common/sched/Kconfig"
 
 config CRYPTO
 	bool
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 62b34e69e9..2abb8250b0 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -3,7 +3,6 @@ obj-y += bitmap.o
 obj-y += bsearch.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
-obj-y += cpupool.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-y += domctl.o
@@ -38,12 +37,6 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
-obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o
-obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o
-obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o
-obj-$(CONFIG_SCHED_RTDS) += sched_rt.o
-obj-$(CONFIG_SCHED_NULL) += sched_null.o
-obj-y += schedule.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
@@ -74,6 +67,7 @@ obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall
 extra-y := symbols-dummy.o
 
 subdir-$(CONFIG_COVERAGE) += coverage
+subdir-y += sched
 subdir-$(CONFIG_UBSAN) += ubsan
 
 subdir-$(CONFIG_NEEDS_LIBELF) += libelf
diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
new file mode 100644
index 0000000000..883ac87cab
--- /dev/null
+++ b/xen/common/sched/Kconfig
@@ -0,0 +1,65 @@
+menu "Schedulers"
+	visible if EXPERT = "y"
+
+config SCHED_CREDIT
+	bool "Credit scheduler support"
+	default y
+	---help---
+	  The traditional credit scheduler is a general purpose scheduler.
+
+config SCHED_CREDIT2
+	bool "Credit2 scheduler support"
+	default y
+	---help---
+	  The credit2 scheduler is a general purpose scheduler that is
+	  optimized for lower latency and higher VM density.
+
+config SCHED_RTDS
+	bool "RTDS scheduler support (EXPERIMENTAL)"
+	default y
+	---help---
+	  The RTDS scheduler is a soft and firm real-time scheduler for
+	  multicore, targeted for embedded, automotive, graphics and gaming
+	  in the cloud, and general low-latency workloads.
+
+config SCHED_ARINC653
+	bool "ARINC653 scheduler support (EXPERIMENTAL)"
+	default DEBUG
+	---help---
+	  The ARINC653 scheduler is a hard real-time scheduler for single
+	  cores, targeted for avionics, drones, and medical devices.
+
+config SCHED_NULL
+	bool "Null scheduler support (EXPERIMENTAL)"
+	default y
+	---help---
+	  The null scheduler is a static, zero overhead scheduler,
+	  for when there always are less vCPUs than pCPUs, typically
+	  in embedded or HPC scenarios.
+
+choice
+	prompt "Default Scheduler?"
+	default SCHED_CREDIT2_DEFAULT
+
+	config SCHED_CREDIT_DEFAULT
+		bool "Credit Scheduler" if SCHED_CREDIT
+	config SCHED_CREDIT2_DEFAULT
+		bool "Credit2 Scheduler" if SCHED_CREDIT2
+	config SCHED_RTDS_DEFAULT
+		bool "RT Scheduler" if SCHED_RTDS
+	config SCHED_ARINC653_DEFAULT
+		bool "ARINC653 Scheduler" if SCHED_ARINC653
+	config SCHED_NULL_DEFAULT
+		bool "Null Scheduler" if SCHED_NULL
+endchoice
+
+config SCHED_DEFAULT
+	string
+	default "credit" if SCHED_CREDIT_DEFAULT
+	default "credit2" if SCHED_CREDIT2_DEFAULT
+	default "rtds" if SCHED_RTDS_DEFAULT
+	default "arinc653" if SCHED_ARINC653_DEFAULT
+	default "null" if SCHED_NULL_DEFAULT
+	default "credit2"
+
+endmenu
diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile
new file mode 100644
index 0000000000..3537f2a68d
--- /dev/null
+++ b/xen/common/sched/Makefile
@@ -0,0 +1,7 @@
+obj-y += cpupool.o
+obj-$(CONFIG_SCHED_ARINC653) += arinc653.o
+obj-$(CONFIG_SCHED_CREDIT) += credit.o
+obj-$(CONFIG_SCHED_CREDIT2) += credit2.o
+obj-$(CONFIG_SCHED_RTDS) += rt.o
+obj-$(CONFIG_SCHED_NULL) += null.o
+obj-y += core.o
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched/arinc653.c
similarity index 100%
rename from xen/common/sched_arinc653.c
rename to xen/common/sched/arinc653.c
diff --git a/xen/common/compat/schedule.c b/xen/common/sched/compat.c
similarity index 97%
rename from xen/common/compat/schedule.c
rename to xen/common/sched/compat.c
index 8b6e6f107d..040b4caca2 100644
--- a/xen/common/compat/schedule.c
+++ b/xen/common/sched/compat.c
@@ -37,7 +37,7 @@ static int compat_poll(struct compat_sched_poll *compat)
 #define do_poll compat_poll
 #define sched_poll compat_sched_poll
 
-#include "../schedule.c"
+#include "core.c"
 
 int compat_set_timer_op(u32 lo, s32 hi)
 {
diff --git a/xen/common/schedule.c b/xen/common/sched/core.c
similarity index 99%
rename from xen/common/schedule.c
rename to xen/common/sched/core.c
index 54a07ff9e8..4d8eb4c617 100644
--- a/xen/common/schedule.c
+++ b/xen/common/sched/core.c
@@ -3128,7 +3128,7 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
 #endif
 
 #ifdef CONFIG_COMPAT
-#include "compat/schedule.c"
+#include "compat.c"
 #endif
 
 #endif /* !COMPAT */
diff --git a/xen/common/cpupool.c b/xen/common/sched/cpupool.c
similarity index 100%
rename from xen/common/cpupool.c
rename to xen/common/sched/cpupool.c
diff --git a/xen/common/sched_credit.c b/xen/common/sched/credit.c
similarity index 100%
rename from xen/common/sched_credit.c
rename to xen/common/sched/credit.c
diff --git a/xen/common/sched_credit2.c b/xen/common/sched/credit2.c
similarity index 100%
rename from xen/common/sched_credit2.c
rename to xen/common/sched/credit2.c
diff --git a/xen/common/sched_null.c b/xen/common/sched/null.c
similarity index 100%
rename from xen/common/sched_null.c
rename to xen/common/sched/null.c
diff --git a/xen/common/sched_rt.c b/xen/common/sched/rt.c
similarity index 100%
rename from xen/common/sched_rt.c
rename to xen/common/sched/rt.c
-- 
2.16.4



* [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private
From: Juergen Gross @ 2020-01-08 15:23 UTC
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich,
	Stewart Hildebrand, Roger Pau Monné

include/xen/sched-if.h should be private to scheduler code, so move it
to common/sched/private.h and move the code of the remaining outside
users into cpupool.c and core.c.
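
Code outside the scheduler that touched struct cpupool internals is
switched to small accessors, keeping the struct itself private. A
minimal sketch of the pattern, taken from the hunks below:

  /* common/sched/cpupool.c */
  int cpupool_get_id(const struct domain *d)
  {
      return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
  }

  /* callers, e.g. dom0_build.c, no longer dereference the struct: */
  cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0));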

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
V2:
- rename to private.h (Andrew Cooper)
---
 xen/arch/x86/dom0_build.c                          |   5 +-
 xen/common/domain.c                                |  70 --------
 xen/common/domctl.c                                | 135 +--------------
 xen/common/sched/arinc653.c                        |   3 +-
 xen/common/sched/core.c                            | 191 ++++++++++++++++++++-
 xen/common/sched/cpupool.c                         |  13 +-
 xen/common/sched/credit.c                          |   2 +-
 xen/common/sched/credit2.c                         |   3 +-
 xen/common/sched/null.c                            |   3 +-
 .../xen/sched-if.h => common/sched/private.h}      |   3 -
 xen/common/sched/rt.c                              |   3 +-
 xen/include/xen/domain.h                           |   3 +
 xen/include/xen/sched.h                            |   7 +
 13 files changed, 228 insertions(+), 213 deletions(-)
 rename xen/{include/xen/sched-if.h => common/sched/private.h} (99%)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 28b964e018..56c2dee0fc 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -9,7 +9,6 @@
 #include <xen/libelf.h>
 #include <xen/pfn.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 
 #include <asm/amd.h>
@@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void)
         dom0_nodes = node_online_map;
     for_each_node_mask ( node, dom0_nodes )
         cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node));
-    cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid);
+    cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0));
     if ( cpumask_empty(&dom0_cpus) )
-        cpumask_copy(&dom0_cpus, cpupool0->cpu_valid);
+        cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0));
 
     max_vcpus = cpumask_weight(&dom0_cpus);
     if ( opt_dom0_max_vcpus_min > max_vcpus )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0b1103fdb2..71a7c2776f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -10,7 +10,6 @@
 #include <xen/ctype.h>
 #include <xen/err.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
@@ -565,75 +564,6 @@ void __init setup_system_domains(void)
 #endif
 }
 
-void domain_update_node_affinity(struct domain *d)
-{
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
-    cpumask_t *dom_affinity;
-    const cpumask_t *online;
-    struct sched_unit *unit;
-    unsigned int cpu;
-
-    /* Do we have vcpus already? If not, no need to update node-affinity. */
-    if ( !d->vcpu || !d->vcpu[0] )
-        return;
-
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-    {
-        free_cpumask_var(dom_cpumask);
-        return;
-    }
-
-    online = cpupool_domain_master_cpumask(d);
-
-    spin_lock(&d->node_affinity_lock);
-
-    /*
-     * If d->auto_node_affinity is true, let's compute the domain's
-     * node-affinity and update d->node_affinity accordingly. if false,
-     * just leave d->auto_node_affinity alone.
-     */
-    if ( d->auto_node_affinity )
-    {
-        /*
-         * We want the narrowest possible set of pcpus (to get the narowest
-         * possible set of nodes). What we need is the cpumask of where the
-         * domain can run (the union of the hard affinity of all its vcpus),
-         * and the full mask of where it would prefer to run (the union of
-         * the soft affinity of all its various vcpus). Let's build them.
-         */
-        for_each_sched_unit ( d, unit )
-        {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
-        }
-        /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
-        /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
-
-        /*
-         * If not empty, the intersection of hard, soft and online is the
-         * narrowest set we want. If empty, we fall back to hard&online.
-         */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
-
-        nodes_clear(d->node_affinity);
-        for_each_cpu ( cpu, dom_affinity )
-            node_set(cpu_to_node(cpu), d->node_affinity);
-    }
-
-    spin_unlock(&d->node_affinity_lock);
-
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
-}
-
-
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
 {
     /* Being disjoint with the system is just wrong. */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 650310e874..8b819f56e5 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -11,7 +11,6 @@
 #include <xen/err.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/domain.h>
 #include <xen/event.h>
 #include <xen/grant_table.h>
@@ -65,9 +64,9 @@ static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
     return err;
 }
 
-static int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
-                                   const struct xenctl_bitmap *xenctl_bitmap,
-                                   unsigned int nbits)
+int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
+                            const struct xenctl_bitmap *xenctl_bitmap,
+                            unsigned int nbits)
 {
     unsigned int guest_bytes, copy_bytes;
     int err = 0;
@@ -200,7 +199,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
     info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
 
-    info->cpupool = d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
+    info->cpupool = cpupool_get_id(d);
 
     memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t));
 
@@ -234,16 +233,6 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-static inline
-int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
-{
-    return vcpuaff->flags == 0 ||
-           ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
-            guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
-           ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
-            guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
-}
-
 void vnuma_destroy(struct vnuma_info *vnuma)
 {
     if ( vnuma )
@@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_setvcpuaffinity:
     case XEN_DOMCTL_getvcpuaffinity:
-    {
-        struct vcpu *v;
-        const struct sched_unit *unit;
-        struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity;
-
-        ret = -EINVAL;
-        if ( vcpuaff->vcpu >= d->max_vcpus )
-            break;
-
-        ret = -ESRCH;
-        if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
-            break;
-
-        unit = v->sched_unit;
-        ret = -EINVAL;
-        if ( vcpuaffinity_params_invalid(vcpuaff) )
-            break;
-
-        if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
-        {
-            cpumask_var_t new_affinity, old_affinity;
-            cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
-
-            /*
-             * We want to be able to restore hard affinity if we are trying
-             * setting both and changing soft affinity (which happens later,
-             * when hard affinity has been succesfully chaged already) fails.
-             */
-            if ( !alloc_cpumask_var(&old_affinity) )
-            {
-                ret = -ENOMEM;
-                break;
-            }
-            cpumask_copy(old_affinity, unit->cpu_hard_affinity);
-
-            if ( !alloc_cpumask_var(&new_affinity) )
-            {
-                free_cpumask_var(old_affinity);
-                ret = -ENOMEM;
-                break;
-            }
-
-            /* Undo a stuck SCHED_pin_override? */
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE )
-                vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
-
-            ret = 0;
-
-            /*
-             * We both set a new affinity and report back to the caller what
-             * the scheduler will be effectively using.
-             */
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
-            {
-                ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
-                                              &vcpuaff->cpumap_hard,
-                                              nr_cpu_ids);
-                if ( !ret )
-                    ret = vcpu_set_hard_affinity(v, new_affinity);
-                if ( ret )
-                    goto setvcpuaffinity_out;
-
-                /*
-                 * For hard affinity, what we return is the intersection of
-                 * cpupool's online mask and the new hard affinity.
-                 */
-                cpumask_and(new_affinity, online, unit->cpu_hard_affinity);
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
-                                               new_affinity);
-            }
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
-            {
-                ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
-                                              &vcpuaff->cpumap_soft,
-                                              nr_cpu_ids);
-                if ( !ret)
-                    ret = vcpu_set_soft_affinity(v, new_affinity);
-                if ( ret )
-                {
-                    /*
-                     * Since we're returning error, the caller expects nothing
-                     * happened, so we rollback the changes to hard affinity
-                     * (if any).
-                     */
-                    if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
-                        vcpu_set_hard_affinity(v, old_affinity);
-                    goto setvcpuaffinity_out;
-                }
-
-                /*
-                 * For soft affinity, we return the intersection between the
-                 * new soft affinity, the cpupool's online map and the (new)
-                 * hard affinity.
-                 */
-                cpumask_and(new_affinity, new_affinity, online);
-                cpumask_and(new_affinity, new_affinity,
-                            unit->cpu_hard_affinity);
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
-                                               new_affinity);
-            }
-
- setvcpuaffinity_out:
-            free_cpumask_var(new_affinity);
-            free_cpumask_var(old_affinity);
-        }
-        else
-        {
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
-                                               unit->cpu_hard_affinity);
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
-                                               unit->cpu_soft_affinity);
-        }
+        ret = vcpu_affinity_domctl(d, op->cmd, &op->u.vcpuaffinity);
         break;
-    }
 
     case XEN_DOMCTL_scheduler_op:
         ret = sched_adjust(d, &op->u.scheduler_op);
diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index 565575c326..8895d92b5e 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -26,7 +26,6 @@
 
 #include <xen/lib.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/timer.h>
 #include <xen/softirq.h>
 #include <xen/time.h>
@@ -35,6 +34,8 @@
 #include <xen/guest_access.h>
 #include <public/sysctl.h>
 
+#include "private.h"
+
 /**************************************************************************
  * Private Macros                                                         *
  **************************************************************************/
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 4d8eb4c617..2fae959e90 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -23,7 +23,6 @@
 #include <xen/time.h>
 #include <xen/timer.h>
 #include <xen/perfc.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <xen/trace.h>
 #include <xen/mm.h>
@@ -38,6 +37,8 @@
 #include <xsm/xsm.h>
 #include <xen/err.h>
 
+#include "private.h"
+
 #ifdef CONFIG_XEN_GUEST
 #include <asm/guest.h>
 #else
@@ -1607,6 +1608,194 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     return ret;
 }
 
+static inline
+int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
+{
+    return vcpuaff->flags == 0 ||
+           ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
+            guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
+           ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
+            guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
+}
+
+int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
+                         struct xen_domctl_vcpuaffinity *vcpuaff)
+{
+    struct vcpu *v;
+    const struct sched_unit *unit;
+    int ret = 0;
+
+    if ( vcpuaff->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
+        return -ESRCH;
+
+    if ( vcpuaffinity_params_invalid(vcpuaff) )
+        return -EINVAL;
+
+    unit = v->sched_unit;
+
+    if ( cmd == XEN_DOMCTL_setvcpuaffinity )
+    {
+        cpumask_var_t new_affinity, old_affinity;
+        cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
+
+        /*
+         * We want to be able to restore hard affinity if we are trying
+         * to set both and changing soft affinity (which happens later,
+         * when hard affinity has been successfully changed already) fails.
+         */
+        if ( !alloc_cpumask_var(&old_affinity) )
+            return -ENOMEM;
+
+        cpumask_copy(old_affinity, unit->cpu_hard_affinity);
+
+        if ( !alloc_cpumask_var(&new_affinity) )
+        {
+            free_cpumask_var(old_affinity);
+            return -ENOMEM;
+        }
+
+        /* Undo a stuck SCHED_pin_override? */
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE )
+            vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
+
+        ret = 0;
+
+        /*
+         * We both set a new affinity and report back to the caller what
+         * the scheduler will be effectively using.
+         */
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+        {
+            ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
+                                          &vcpuaff->cpumap_hard, nr_cpu_ids);
+            if ( !ret )
+                ret = vcpu_set_hard_affinity(v, new_affinity);
+            if ( ret )
+                goto setvcpuaffinity_out;
+
+            /*
+             * For hard affinity, what we return is the intersection of
+             * cpupool's online mask and the new hard affinity.
+             */
+            cpumask_and(new_affinity, online, unit->cpu_hard_affinity);
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, new_affinity);
+        }
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
+        {
+            ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
+                                          &vcpuaff->cpumap_soft, nr_cpu_ids);
+            if ( !ret )
+                ret = vcpu_set_soft_affinity(v, new_affinity);
+            if ( ret )
+            {
+                /*
+                 * Since we're returning error, the caller expects nothing
+                 * happened, so we rollback the changes to hard affinity
+                 * (if any).
+                 */
+                if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+                    vcpu_set_hard_affinity(v, old_affinity);
+                goto setvcpuaffinity_out;
+            }
+
+            /*
+             * For soft affinity, we return the intersection between the
+             * new soft affinity, the cpupool's online map and the (new)
+             * hard affinity.
+             */
+            cpumask_and(new_affinity, new_affinity, online);
+            cpumask_and(new_affinity, new_affinity, unit->cpu_hard_affinity);
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, new_affinity);
+        }
+
+ setvcpuaffinity_out:
+        free_cpumask_var(new_affinity);
+        free_cpumask_var(old_affinity);
+    }
+    else
+    {
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
+                                           unit->cpu_hard_affinity);
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
+                                           unit->cpu_soft_affinity);
+    }
+
+    return ret;
+}
+
+void domain_update_node_affinity(struct domain *d)
+{
+    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    cpumask_t *dom_affinity;
+    const cpumask_t *online;
+    struct sched_unit *unit;
+    unsigned int cpu;
+
+    /* Do we have vcpus already? If not, no need to update node-affinity. */
+    if ( !d->vcpu || !d->vcpu[0] )
+        return;
+
+    if ( !zalloc_cpumask_var(&dom_cpumask) )
+        return;
+    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    {
+        free_cpumask_var(dom_cpumask);
+        return;
+    }
+
+    online = cpupool_domain_master_cpumask(d);
+
+    spin_lock(&d->node_affinity_lock);
+
+    /*
+     * If d->auto_node_affinity is true, let's compute the domain's
+     * node-affinity and update d->node_affinity accordingly. If false,
+     * just leave d->auto_node_affinity alone.
+     */
+    if ( d->auto_node_affinity )
+    {
+        /*
+         * We want the narrowest possible set of pcpus (to get the narrowest
+         * possible set of nodes). What we need is the cpumask of where the
+         * domain can run (the union of the hard affinity of all its vcpus),
+         * and the full mask of where it would prefer to run (the union of
+         * the soft affinity of all its various vcpus). Let's build them.
+         */
+        for_each_sched_unit ( d, unit )
+        {
+            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
+            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
+                       unit->cpu_soft_affinity);
+        }
+        /* Filter out non-online cpus */
+        cpumask_and(dom_cpumask, dom_cpumask, online);
+        ASSERT(!cpumask_empty(dom_cpumask));
+        /* And compute the intersection between hard, online and soft */
+        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+
+        /*
+         * If not empty, the intersection of hard, soft and online is the
+         * narrowest set we want. If empty, we fall back to hard&online.
+         */
+        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
+                           dom_cpumask : dom_cpumask_soft;
+
+        nodes_clear(d->node_affinity);
+        for_each_cpu ( cpu, dom_affinity )
+            node_set(cpu_to_node(cpu), d->node_affinity);
+    }
+
+    spin_unlock(&d->node_affinity_lock);
+
+    free_cpumask_var(dom_cpumask_soft);
+    free_cpumask_var(dom_cpumask);
+}
+
 typedef long ret_t;
 
 #endif /* !COMPAT */
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d66b541a94..7b31ab0d61 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -16,11 +16,12 @@
 #include <xen/cpumask.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/warning.h>
 #include <xen/keyhandler.h>
 #include <xen/cpu.h>
 
+#include "private.h"
+
 #define for_each_cpupool(ptr)    \
     for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next))
 
@@ -875,6 +876,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
     return ret;
 }
 
+int cpupool_get_id(const struct domain *d)
+{
+    return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
+}
+
+cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
+{
+    return pool->cpu_valid;
+}
+
 void dump_runq(unsigned char key)
 {
     unsigned long    flags;
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index aa41a3301b..4329d9df56 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -15,7 +15,6 @@
 #include <xen/delay.h>
 #include <xen/event.h>
 #include <xen/time.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <asm/atomic.h>
 #include <asm/div64.h>
@@ -24,6 +23,7 @@
 #include <xen/trace.h>
 #include <xen/err.h>
 
+#include "private.h"
 
 /*
  * Locking:
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index f7c477053c..65e8ab052e 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -18,7 +18,6 @@
 #include <xen/event.h>
 #include <xen/time.h>
 #include <xen/perfc.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <asm/div64.h>
 #include <xen/errno.h>
@@ -26,6 +25,8 @@
 #include <xen/cpu.h>
 #include <xen/keyhandler.h>
 
+#include "private.h"
+
 /* Meant only for helping developers during debugging. */
 /* #define d2printk printk */
 #define d2printk(x...)
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 3f3418c9b1..b99f1e3c65 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -29,10 +29,11 @@
  */
 
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <xen/trace.h>
 
+#include "private.h"
+
 /*
  * null tracing events. Check include/public/trace.h for more details.
  */
diff --git a/xen/include/xen/sched-if.h b/xen/common/sched/private.h
similarity index 99%
rename from xen/include/xen/sched-if.h
rename to xen/common/sched/private.h
index b0ac54e63d..a702fd23b1 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/common/sched/private.h
@@ -12,9 +12,6 @@
 #include <xen/err.h>
 #include <xen/rcupdate.h>
 
-/* A global pointer to the initial cpupool (POOL0). */
-extern struct cpupool *cpupool0;
-
 /* cpus currently in no cpupool */
 extern cpumask_t cpupool_free_cpus;
 
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index b2b29481f3..8203b63a9d 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -20,7 +20,6 @@
 #include <xen/time.h>
 #include <xen/timer.h>
 #include <xen/perfc.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <asm/atomic.h>
 #include <xen/errno.h>
@@ -31,6 +30,8 @@
 #include <xen/err.h>
 #include <xen/guest_access.h>
 
+#include "private.h"
+
 /*
  * TODO:
  *
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1cb205d977..7e51d361de 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -27,6 +27,9 @@ struct xen_domctl_getdomaininfo;
 void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info);
 void arch_get_domain_info(const struct domain *d,
                           struct xen_domctl_getdomaininfo *info);
+int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
+                            const struct xenctl_bitmap *xenctl_bitmap,
+                            unsigned int nbits);
 
 /*
  * Arch-specifics.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cc942a3621..d3adc69ab9 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -50,6 +50,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
 /* A global pointer to the hardware domain (usually DOM0). */
 extern struct domain *hardware_domain;
 
+/* A global pointer to the initial cpupool (POOL0). */
+extern struct cpupool *cpupool0;
+
 #ifdef CONFIG_LATE_HWDOM
 extern domid_t hardware_domid;
 #else
@@ -931,6 +934,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
 int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
+int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
+                         struct xen_domctl_vcpuaffinity *vcpuaff);
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
@@ -1068,6 +1073,8 @@ int cpupool_add_domain(struct domain *d, int poolid);
 void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
+int cpupool_get_id(const struct domain *d);
+cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
 void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
 
-- 
2.16.4



* [Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h
From: Juergen Gross @ 2020-01-08 15:23 UTC
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Jan Beulich

Some items in include/xen/sched.h can be moved to private.h, as they
are scheduler-private.
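
The effect is that scheduler-internal interfaces become unreachable
from outside common/sched/. A sketch of the resulting split, based on
the hunks below:

  /* common/sched/private.h - visible to scheduler code only */
  struct scheduler *scheduler_get_default(void);
  int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
  int cpupool_add_domain(struct domain *d, int poolid);

  /* include/xen/sched.h keeps what the rest of the hypervisor needs,
     e.g. cpupool_move_domain() and cpupool_do_sysctl() stay public. */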

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
 xen/common/sched/core.c    |  2 +-
 xen/common/sched/private.h | 13 +++++++++++++
 xen/include/xen/sched.h    | 17 -----------------
 3 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2fae959e90..4153d110be 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1346,7 +1346,7 @@ int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity)
     return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_hard_affinity);
 }
 
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
+static int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
 {
     return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_soft_affinity);
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a702fd23b1..edce354dc7 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
 struct cpupool
 {
     int              cpupool_id;
+#define CPUPOOLID_NONE    -1
     unsigned int     n_dom;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t    res_valid;      /* all scheduling resources of pool */
@@ -618,5 +619,17 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
 
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
+void schedule_dump(struct cpupool *c);
+struct scheduler *scheduler_get_default(void);
+struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+void scheduler_free(struct scheduler *sched);
+int cpu_disable_scheduler(unsigned int cpu);
+int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+int schedule_cpu_rm(unsigned int cpu);
+int sched_move_domain(struct domain *d, struct cpupool *c);
+struct cpupool *cpupool_get_by_id(int poolid);
+void cpupool_put(struct cpupool *pool);
+int cpupool_add_domain(struct domain *d, int poolid);
+void cpupool_rm_domain(struct domain *d);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d3adc69ab9..b4c2e4f7c2 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -687,7 +687,6 @@ int  sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
 int  sched_init_domain(struct domain *d, int poolid);
 void sched_destroy_domain(struct domain *d);
-int sched_move_domain(struct domain *d, struct cpupool *c);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
@@ -920,19 +919,10 @@ static inline bool sched_has_urgent_vcpu(void)
     return atomic_read(&this_cpu(sched_urgent_count));
 }
 
-struct scheduler;
-
-struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
-void scheduler_free(struct scheduler *sched);
-int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-int schedule_cpu_rm(unsigned int cpu);
 void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
-int cpu_disable_scheduler(unsigned int cpu);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
@@ -1065,17 +1055,10 @@ extern enum cpufreq_controller {
     FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
 } cpufreq_controller;
 
-#define CPUPOOLID_NONE    -1
-
-struct cpupool *cpupool_get_by_id(int poolid);
-void cpupool_put(struct cpupool *pool);
-int cpupool_add_domain(struct domain *d, int poolid);
-void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
 cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
-void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4



* [Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers
From: Juergen Gross @ 2020-01-08 15:23 UTC
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

With the idle scheduler now taking care of all cpus not in any cpupool,
the special cases for cpus without an associated cpupool can be removed
from the other schedulers.
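
As an illustration (see the first credit hunk below): a scheduler
callback can now assume its cpu belongs to a cpupool, so

  if ( unlikely(!cpumask_test_cpu(cpu, online) || c == NULL) )

reduces to

  if ( unlikely(!cpumask_test_cpu(cpu, online)) )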

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/credit.c  |  7 ++-----
 xen/common/sched/credit2.c | 30 ------------------------------
 2 files changed, 2 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 4329d9df56..6b04f8f71c 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -1690,11 +1690,8 @@ csched_load_balance(struct csched_private *prv, int cpu,
 
     BUG_ON(get_sched_res(cpu) != snext->unit->res);
 
-    /*
-     * If this CPU is going offline, or is not (yet) part of any cpupool
-     * (as it happens, e.g., during cpu bringup), we shouldn't steal work.
-     */
-    if ( unlikely(!cpumask_test_cpu(cpu, online) || c == NULL) )
+    /* If this CPU is going offline, we shouldn't steal work.  */
+    if ( unlikely(!cpumask_test_cpu(cpu, online)) )
         goto out;
 
     if ( snext->pri == CSCHED_PRI_IDLE )
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 65e8ab052e..849d254e04 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -2744,40 +2744,10 @@ static void
 csched2_unit_migrate(
     const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu)
 {
-    struct domain *d = unit->domain;
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();
 
-    /*
-     * Being passed a target pCPU which is outside of our cpupool is only
-     * valid if we are shutting down (or doing ACPI suspend), and we are
-     * moving everyone to BSP, no matter whether or not BSP is inside our
-     * cpupool.
-     *
-     * And since there indeed is the chance that it is not part of it, all
-     * we must do is remove _and_ unassign the unit from any runqueue, as
-     * well as updating v->processor with the target, so that the suspend
-     * process can continue.
-     *
-     * It will then be during resume that a new, meaningful, value for
-     * v->processor will be chosen, and during actual domain unpause that
-     * the unit will be assigned to and added to the proper runqueue.
-     */
-    if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_master_cpumask(d))) )
-    {
-        ASSERT(system_state == SYS_STATE_suspend);
-        if ( unit_on_runq(svc) )
-        {
-            runq_remove(svc);
-            update_load(ops, svc->rqd, NULL, -1, now);
-        }
-        _runq_deassign(svc);
-        sched_set_res(unit, get_sched_res(new_cpu));
-        return;
-    }
-
-    /* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity. */
     ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized));
     ASSERT(cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity));
 
-- 
2.16.4



* [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack
From: Juergen Gross @ 2020-01-08 15:23 UTC
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Meng Xu, Dario Faggioli

The rt scheduler has three instances of cpumasks allocated on the
stack. Replace them with the per-cpu cpumask_scratch.
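
Since cpumask_t scales with NR_CPUS, on-stack instances can get large
on big hosts. A minimal sketch of the replacement pattern, assuming the
caller holds the schedule lock of the cpu owning the scratch mask:

  cpumask_t *cpus = cpumask_scratch_cpu(cpu);   /* percpu scratch space */

  /* Operate in place on the scratch mask - no on-stack copy needed. */
  cpumask_and(cpus, online, unit->cpu_hard_affinity);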

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/rt.c | 56 ++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 8203b63a9d..d26f77f554 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
  * and available resources
  */
 static struct sched_resource *
-rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
-    cpumask_t cpus;
+    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
     cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
-    cpumask_and(&cpus, online, unit->cpu_hard_affinity);
+    cpumask_and(cpus, online, unit->cpu_hard_affinity);
 
-    cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
+    cpu = cpumask_test_cpu(sched_unit_master(unit), cpus)
             ? sched_unit_master(unit)
-            : cpumask_cycle(sched_unit_master(unit), &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+            : cpumask_cycle(sched_unit_master(unit), cpus);
+    ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) );
 
     return get_sched_res(cpu);
 }
 
+/*
+ * Pick a valid resource for the unit
+ * The valid resource of a unit is the intersection of the unit's
+ * affinity and the available resources
+ */
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+{
+    struct sched_resource *res;
+
+    res = rt_res_pick_locked(unit, unit->res->master_cpu);
+
+    return res;
+}
+
 /*
  * Init/Free related code
  */
@@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_unit *svc = rt_unit(unit);
     s_time_t now;
     spinlock_t *lock;
+    unsigned int cpu = smp_processor_id();
 
     BUG_ON( is_idle_unit(unit) );
 
     /* This is safe because unit isn't yet being scheduled */
-    sched_set_res(unit, rt_res_pick(ops, unit));
+    lock = pcpu_schedule_lock_irq(cpu);
+    sched_set_res(unit, rt_res_pick_locked(unit, cpu));
+    pcpu_schedule_unlock_irq(lock, cpu);
 
     lock = unit_schedule_lock_irq(unit);
 
@@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_unit *
-runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
-    cpumask_t cpu_common;
+    cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
     cpumask_t *online;
 
     list_for_each ( iter, runq )
@@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
-        cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
-        cpumask_and(&cpu_common, mask, &cpu_common);
-        if ( cpumask_empty(&cpu_common) )
+        cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity);
+        cpumask_and(cpu_common, mask, cpu_common);
+        if ( cpumask_empty(cpu_common) )
             continue;
 
         ASSERT( iter_svc->cur_budget > 0 );
@@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
     }
     else
     {
-        snext = runq_pick(ops, cpumask_of(sched_cpu));
+        snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu);
 
         if ( snext == NULL )
             snext = rt_unit(sched_idle_unit(sched_cpu));
@@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
     struct rt_unit *iter_svc;
     struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
-    cpumask_t not_tickled;
+    cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
     cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
 
     online = cpupool_domain_master_cpumask(new->unit->domain);
-    cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
-    cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
+    cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity);
+    cpumask_andnot(not_tickled, not_tickled, &prv->tickled);
 
     /*
      * 1) If there are any idle CPUs, kick one.
      *    For cache benefit,we first search new->cpu.
      *    The same loop also find the one with lowest priority.
      */
-    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled);
+    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickled);
     while ( cpu != nr_cpu_ids )
     {
         iter_unit = curr_on_cpu(cpu);
@@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
              compare_unit_priority(iter_svc, latest_deadline_unit) < 0 )
             latest_deadline_unit = iter_svc;
 
-        cpumask_clear_cpu(cpu, &not_tickled);
-        cpu = cpumask_cycle(cpu, &not_tickled);
+        cpumask_clear_cpu(cpu, not_tickled);
+        cpu = cpumask_cycle(cpu, not_tickled);
     }
 
     /* 2) candidate has higher priority, kick out the lowest priority unit */
-- 
2.16.4
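
The pattern these hunks rely on is worth spelling out: cpumask_scratch_cpu()
returns a preallocated per-CPU mask, so intermediate affinity computations
such as cpu_common and not_tickled above no longer need an on-stack
cpumask_t, whose size grows with the number of supported CPUs. A minimal
standalone sketch of the idea (simplified hypothetical types, not the
actual Xen definitions):

    #include <stdint.h>

    #define NR_CPUS 512

    /* Stand-in for Xen's cpumask_t: one bit per CPU.  At NR_CPUS = 512
     * this is already 64 bytes, which is why Xen prefers not to place
     * one on the limited hypervisor stack. */
    typedef struct { uint64_t bits[NR_CPUS / 64]; } cpumask_t;

    /* One preallocated scratch mask per CPU.  In Xen it is only valid
     * while that CPU's scheduler lock is held, which is why runq_pick()
     * above now takes the cpu whose lock the caller holds. */
    static cpumask_t scratch_mask[NR_CPUS];
    #define cpumask_scratch_cpu(cpu) (&scratch_mask[cpu])

    static void cpumask_and(cpumask_t *dst, const cpumask_t *a,
                            const cpumask_t *b)
    {
        unsigned int i;

        for ( i = 0; i < NR_CPUS / 64; i++ )
            dst->bits[i] = a->bits[i] & b->bits[i];
    }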


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook
  2020-01-08 15:23 [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Juergen Gross
                   ` (4 preceding siblings ...)
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack Juergen Gross
@ 2020-01-08 15:23 ` Juergen Gross
  2020-01-22 13:04   ` Dario Faggioli
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate Juergen Gross
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2020-01-08 15:23 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, George Dunlap, Dario Faggioli

Instead of using a dedicated percpu variable for its private per-cpu
data, the null scheduler should use the generic scheduler interface
meant for that purpose.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
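
Note: for context, this is roughly how the common code is expected to
drive the new hooks. The sketch below is illustrative only (the exact
core.c flow may differ); most names are taken from the diff, while
IS_ERR()/PTR_ERR() are assumed counterparts of the ERR_PTR() used there:

    /*
     * alloc_pdata runs first and may fail with ERR_PTR(); the result
     * is stored as the resource's sched_priv (which the npc pointers
     * below read back), and init_pdata then consumes it.
     */
    static int cpu_schedule_init_sketch(const struct scheduler *ops,
                                        unsigned int cpu)
    {
        void *pdata = ops->alloc_pdata ? ops->alloc_pdata(ops, cpu) : NULL;

        if ( IS_ERR(pdata) )
            return PTR_ERR(pdata);

        get_sched_res(cpu)->sched_priv = pdata;
        if ( ops->init_pdata )
            ops->init_pdata(ops, pdata, cpu);

        return 0;
    }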
 xen/common/sched/null.c | 89 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 60 insertions(+), 29 deletions(-)

diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index b99f1e3c65..3161ac2e62 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -89,7 +89,6 @@ struct null_private {
 struct null_pcpu {
     struct sched_unit *unit;
 };
-DEFINE_PER_CPU(struct null_pcpu, npc);
 
 /*
  * Schedule unit
@@ -159,32 +158,48 @@ static void null_deinit(struct scheduler *ops)
     ops->sched_data = NULL;
 }
 
-static void init_pdata(struct null_private *prv, unsigned int cpu)
+static void init_pdata(struct null_private *prv, struct null_pcpu *npc,
+                       unsigned int cpu)
 {
     /* Mark the pCPU as free, and with no unit assigned */
     cpumask_set_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
 }
 
 static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     struct null_private *prv = null_priv(ops);
 
-    /* alloc_pdata is not implemented, so we want this to be NULL. */
-    ASSERT(!pdata);
+    ASSERT(pdata);
 
-    init_pdata(prv, cpu);
+    init_pdata(prv, pdata, cpu);
 }
 
 static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
     struct null_private *prv = null_priv(ops);
+    struct null_pcpu *npc = pcpu;
 
-    /* alloc_pdata not implemented, so this must have stayed NULL */
-    ASSERT(!pcpu);
+    ASSERT(npc);
 
     cpumask_clear_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
+}
+
+static void *null_alloc_pdata(const struct scheduler *ops, int cpu)
+{
+    struct null_pcpu *npc;
+
+    npc = xzalloc(struct null_pcpu);
+    if ( npc == NULL )
+        return ERR_PTR(-ENOMEM);
+
+    return npc;
+}
+
+static void null_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
+{
+    xfree(pcpu);
 }
 
 static void *null_alloc_udata(const struct scheduler *ops,
@@ -268,6 +283,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
     cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -286,8 +302,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
          * don't, so we get to keep in the scratch cpumask what we have just
          * put in it.)
          */
-        if ( likely((per_cpu(npc, cpu).unit == NULL ||
-                     per_cpu(npc, cpu).unit == unit)
+        if ( likely((npc->unit == NULL || npc->unit == unit)
                     && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
         {
             new_cpu = cpu;
@@ -336,9 +351,11 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
 static void unit_assign(struct null_private *prv, struct sched_unit *unit,
                         unsigned int cpu)
 {
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+
     ASSERT(is_unit_online(unit));
 
-    per_cpu(npc, cpu).unit = unit;
+    npc->unit = unit;
     sched_set_res(unit, get_sched_res(cpu));
     cpumask_clear_cpu(cpu, &prv->cpus_free);
 
@@ -363,12 +380,13 @@ static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
     struct null_unit *wvc;
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(list_empty(&null_unit(unit)->waitq_elem));
-    ASSERT(per_cpu(npc, cpu).unit == unit);
+    ASSERT(npc->unit == unit);
     ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free));
 
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
     dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
@@ -436,7 +454,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
      */
     ASSERT(!local_irq_is_enabled());
 
-    init_pdata(prv, cpu);
+    init_pdata(prv, pdata, cpu);
 
     return &sr->_lock;
 }
@@ -446,6 +464,7 @@ static void null_unit_insert(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
     unsigned int cpu;
     spinlock_t *lock;
 
@@ -462,6 +481,7 @@ static void null_unit_insert(const struct scheduler *ops,
  retry:
     sched_set_res(unit, pick_res(prv, unit));
     cpu = sched_unit_master(unit);
+    npc = get_sched_res(cpu)->sched_priv;
 
     spin_unlock(lock);
 
@@ -471,7 +491,7 @@ static void null_unit_insert(const struct scheduler *ops,
                 cpupool_domain_master_cpumask(unit->domain));
 
     /* If the pCPU is free, we assign unit to it */
-    if ( likely(per_cpu(npc, cpu).unit == NULL) )
+    if ( likely(npc->unit == NULL) )
     {
         /*
          * Insert is followed by vcpu_wake(), so there's no need to poke
@@ -519,7 +539,10 @@ static void null_unit_remove(const struct scheduler *ops,
     /* If offline, the unit shouldn't be assigned, nor in the waitqueue */
     if ( unlikely(!is_unit_online(unit)) )
     {
-        ASSERT(per_cpu(npc, sched_unit_master(unit)).unit != unit);
+        struct null_pcpu *npc;
+
+        npc = unit->res->sched_priv;
+        ASSERT(npc->unit != unit);
         ASSERT(list_empty(&nvc->waitq_elem));
         goto out;
     }
@@ -548,6 +571,7 @@ static void null_unit_wake(const struct scheduler *ops,
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
     unsigned int cpu = sched_unit_master(unit);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(!is_idle_unit(unit));
 
@@ -569,7 +593,7 @@ static void null_unit_wake(const struct scheduler *ops,
     else
         SCHED_STAT_CRANK(unit_wake_not_runnable);
 
-    if ( likely(per_cpu(npc, cpu).unit == unit) )
+    if ( likely(npc->unit == unit) )
     {
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         return;
@@ -581,7 +605,7 @@ static void null_unit_wake(const struct scheduler *ops,
      * and its previous resource is free (and affinities match), we can just
      * assign the unit to it (we own the proper lock already) and be done.
      */
-    if ( per_cpu(npc, cpu).unit == NULL &&
+    if ( npc->unit == NULL &&
          unit_check_affinity(unit, cpu, BALANCE_HARD_AFFINITY) )
     {
         if ( !has_soft_affinity(unit) ||
@@ -622,6 +646,7 @@ static void null_unit_sleep(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     unsigned int cpu = sched_unit_master(unit);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     bool tickled = false;
 
     ASSERT(!is_idle_unit(unit));
@@ -640,7 +665,7 @@ static void null_unit_sleep(const struct scheduler *ops,
             list_del_init(&nvc->waitq_elem);
             spin_unlock(&prv->waitq_lock);
         }
-        else if ( per_cpu(npc, cpu).unit == unit )
+        else if ( npc->unit == unit )
             tickled = unit_deassign(prv, unit);
     }
 
@@ -663,6 +688,7 @@ static void null_unit_migrate(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
 
     ASSERT(!is_idle_unit(unit));
 
@@ -686,7 +712,8 @@ static void null_unit_migrate(const struct scheduler *ops,
      * If unit is assigned to a pCPU, then such pCPU becomes free, and we
      * should look in the waitqueue if anyone else can be assigned to it.
      */
-    if ( likely(per_cpu(npc, sched_unit_master(unit)).unit == unit) )
+    npc = unit->res->sched_priv;
+    if ( likely(npc->unit == unit) )
     {
         unit_deassign(prv, unit);
         SCHED_STAT_CRANK(migrate_running);
@@ -720,7 +747,8 @@ static void null_unit_migrate(const struct scheduler *ops,
      *
      * In latter, all we can do is to park unit in the waitqueue.
      */
-    if ( per_cpu(npc, new_cpu).unit == NULL &&
+    npc = get_sched_res(new_cpu)->sched_priv;
+    if ( npc->unit == NULL &&
          unit_check_affinity(unit, new_cpu, BALANCE_HARD_AFFINITY) )
     {
         /* unit might have been in the waitqueue, so remove it */
@@ -788,6 +816,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
     unsigned int bs;
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
+    struct null_pcpu *npc = get_sched_res(sched_cpu)->sched_priv;
     struct null_private *prv = null_priv(ops);
     struct null_unit *wvc;
 
@@ -802,14 +831,14 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         } d;
         d.cpu = cur_cpu;
         d.tasklet = tasklet_work_scheduled;
-        if ( per_cpu(npc, sched_cpu).unit == NULL )
+        if ( npc->unit == NULL )
         {
             d.unit = d.dom = -1;
         }
         else
         {
-            d.unit = per_cpu(npc, sched_cpu).unit->unit_id;
-            d.dom = per_cpu(npc, sched_cpu).unit->domain->domain_id;
+            d.unit = npc->unit->unit_id;
+            d.dom = npc->unit->domain->domain_id;
         }
         __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d);
     }
@@ -820,7 +849,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         prev->next_task = sched_idle_unit(sched_cpu);
     }
     else
-        prev->next_task = per_cpu(npc, sched_cpu).unit;
+        prev->next_task = npc->unit;
     prev->next_time = -1;
 
     /*
@@ -921,6 +950,7 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
@@ -930,9 +960,8 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
     printk("CPU[%02d] sibling={%*pbl}, core={%*pbl}",
            cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)),
            CPUMASK_PR(per_cpu(cpu_core_mask, cpu)));
-    if ( per_cpu(npc, cpu).unit != NULL )
-        printk(", unit=%pdv%d", per_cpu(npc, cpu).unit->domain,
-               per_cpu(npc, cpu).unit->unit_id);
+    if ( npc->unit != NULL )
+        printk(", unit=%pdv%d", npc->unit->domain, npc->unit->unit_id);
     printk("\n");
 
     /* current unit (nothing to say if that's the idle unit) */
@@ -1010,6 +1039,8 @@ static const struct scheduler sched_null_def = {
 
     .init           = null_init,
     .deinit         = null_deinit,
+    .alloc_pdata    = null_alloc_pdata,
+    .free_pdata     = null_free_pdata,
     .init_pdata     = null_init_pdata,
     .switch_sched   = null_switch_sched,
     .deinit_pdata   = null_deinit_pdata,
-- 
2.16.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate
  2020-01-08 15:23 [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Juergen Gross
                   ` (5 preceding siblings ...)
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook Juergen Gross
@ 2020-01-08 15:23 ` Juergen Gross
  2020-01-09  6:23   ` Meng Xu
  2020-01-22 12:56   ` Dario Faggioli
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume() Juergen Gross
                   ` (2 subsequent siblings)
  9 siblings, 2 replies; 24+ messages in thread
From: Juergen Gross @ 2020-01-08 15:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich,
	Stewart Hildebrand

Scheduling code has several places using int or bool_t instead of bool.
Switch those.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- rename bool "pos" to "first" (Dario Faggioli)
---
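
The change is not purely cosmetic: besides documenting intent, bool has
normalising semantics that the old char-based bool_t lacks. A standalone
illustration (not part of the patch; bool_t is emulated with char here):

    #include <stdbool.h>
    #include <stdio.h>

    typedef char bool_t;   /* emulating the historical typedef */

    static bool has_flag(unsigned int flags, unsigned int mask)
    {
        return flags & mask;    /* any nonzero value converts to 1 */
    }

    static bool_t has_flag_old(unsigned int flags, unsigned int mask)
    {
        return flags & mask;    /* 0x100 truncates to 0 in a char */
    }

    int main(void)
    {
        /* Prints "bool: 1  bool_t: 0". */
        printf("bool: %d  bool_t: %d\n",
               (int)has_flag(0x100, 0x100),
               (int)has_flag_old(0x100, 0x100));
        return 0;
    }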
 xen/common/sched/arinc653.c |  8 ++++----
 xen/common/sched/core.c     | 14 +++++++-------
 xen/common/sched/cpupool.c  | 10 +++++-----
 xen/common/sched/credit.c   | 12 ++++++------
 xen/common/sched/private.h  |  2 +-
 xen/common/sched/rt.c       | 18 +++++++++---------
 xen/include/xen/sched.h     |  6 +++---
 7 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index 8895d92b5e..bce8021e3f 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -75,7 +75,7 @@ typedef struct arinc653_unit_s
      * arinc653_unit_t pointer. */
     struct sched_unit * unit;
     /* awake holds whether the UNIT has been woken with vcpu_wake() */
-    bool_t              awake;
+    bool                awake;
     /* list holds the linked list information for the list this UNIT
      * is stored in */
     struct list_head    list;
@@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
      * will mark the UNIT awake.
      */
     svc->unit = unit;
-    svc->awake = 0;
+    svc->awake = false;
     if ( !is_idle_unit(unit) )
         list_add(&svc->list, &SCHED_PRIV(ops)->unit_list);
     update_schedule_units(ops);
@@ -473,7 +473,7 @@ static void
 a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     if ( AUNIT(unit) != NULL )
-        AUNIT(unit)->awake = 0;
+        AUNIT(unit)->awake = false;
 
     /*
      * If the UNIT being put to sleep is the same one that is currently
@@ -493,7 +493,7 @@ static void
 a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     if ( AUNIT(unit) != NULL )
-        AUNIT(unit)->awake = 1;
+        AUNIT(unit)->awake = true;
 
     cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ);
 }
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 4153d110be..896f82f4d2 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -53,7 +53,7 @@ string_param("sched", opt_sched);
  * scheduler will give preference to a partially idle package over
  * a fully idle package, when picking a pCPU to schedule a vCPU.
  */
-bool_t sched_smt_power_savings = 0;
+bool sched_smt_power_savings;
 boolean_param("sched_smt_power_savings", sched_smt_power_savings);
 
 /* Default scheduling rate limit: 1ms
@@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v)
     {
         get_sched_res(v->processor)->curr = unit;
         get_sched_res(v->processor)->sched_unit_idle = unit;
-        v->is_running = 1;
+        v->is_running = true;
         unit->is_running = true;
         unit->state_entry_time = NOW();
     }
@@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
     unsigned long flags;
     unsigned int old_cpu, new_cpu;
     spinlock_t *old_lock, *new_lock;
-    bool_t pick_called = 0;
+    bool pick_called = false;
     struct vcpu *v;
 
     /*
@@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
             if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
                  cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
-            pick_called = 1;
+            pick_called = true;
         }
         else
         {
@@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
              * We do not hold the scheduler lock appropriate for this vCPU.
              * Thus we cannot select a new CPU on this iteration. Try again.
              */
-            pick_called = 0;
+            pick_called = false;
         }
 
         sched_spin_unlock_double(old_lock, new_lock, flags);
@@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource *sr,
             vcpu_runstate_change(vnext, vnext->new_state, now);
         }
 
-        vnext->is_running = 1;
+        vnext->is_running = true;
 
         if ( is_idle_vcpu(vnext) )
             vnext->sched_unit = next;
@@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext)
     smp_wmb();
 
     if ( vprev != vnext )
-        vprev->is_running = 0;
+        vprev->is_running = false;
 }
 
 static void unit_context_saved(struct sched_resource *sr)
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 7b31ab0d61..c1396cfff4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void)
  * the searched id is returned
  * returns NULL if not found.
  */
-static struct cpupool *__cpupool_find_by_id(int id, int exact)
+static struct cpupool *__cpupool_find_by_id(int id, bool exact)
 {
     struct cpupool **q;
 
@@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, int exact)
 
 static struct cpupool *cpupool_find_by_id(int poolid)
 {
-    return __cpupool_find_by_id(poolid, 1);
+    return __cpupool_find_by_id(poolid, true);
 }
 
-static struct cpupool *__cpupool_get_by_id(int poolid, int exact)
+static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
 {
     struct cpupool *c;
     spin_lock(&cpupool_lock);
@@ -185,12 +185,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, int exact)
 
 struct cpupool *cpupool_get_by_id(int poolid)
 {
-    return __cpupool_get_by_id(poolid, 1);
+    return __cpupool_get_by_id(poolid, true);
 }
 
 static struct cpupool *cpupool_get_next_by_id(int poolid)
 {
-    return __cpupool_get_by_id(poolid, 0);
+    return __cpupool_get_by_id(poolid, false);
 }
 
 void cpupool_put(struct cpupool *pool)
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 6b04f8f71c..a75efbd43d 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -245,7 +245,7 @@ __runq_elem(struct list_head *elem)
 }
 
 /* Is the first element of cpu's runq (if any) cpu's idle unit? */
-static inline bool_t is_runq_idle(unsigned int cpu)
+static inline bool is_runq_idle(unsigned int cpu)
 {
     /*
      * We're peeking at cpu's runq, we must hold the proper lock.
@@ -344,7 +344,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
     svc->start_time += (credits * MILLISECS(1)) / CSCHED_CREDITS_PER_MSEC;
 }
 
-static bool_t __read_mostly opt_tickle_one_idle = 1;
+static bool __read_mostly opt_tickle_one_idle = true;
 boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
 DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
@@ -719,7 +719,7 @@ __csched_unit_is_migrateable(const struct csched_private *prv,
 
 static int
 _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit,
-                 bool_t commit)
+                 bool commit)
 {
     int cpu = sched_unit_master(unit);
     /* We must always use cpu's scratch space */
@@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
      * get boosted, which we don't deserve as we are "only" migrating.
      */
     set_bit(CSCHED_FLAG_UNIT_MIGRATING, &svc->flags);
-    return get_sched_res(_csched_cpu_pick(ops, unit, 1));
+    return get_sched_res(_csched_cpu_pick(ops, unit, true));
 }
 
 static inline void
@@ -975,7 +975,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
          * migrating it to run elsewhere (see multi-core and multi-thread
          * support in csched_res_pick()).
          */
-        new_cpu = _csched_cpu_pick(ops, currunit, 0);
+        new_cpu = _csched_cpu_pick(ops, currunit, false);
 
         unit_schedule_unlock_irqrestore(lock, flags, currunit);
 
@@ -1108,7 +1108,7 @@ static void
 csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
-    bool_t migrating;
+    bool migrating;
 
     BUG_ON( is_idle_unit(unit) );
 
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index edce354dc7..9d0db75cbb 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -589,7 +589,7 @@ unsigned int cpupool_get_granularity(const struct cpupool *c);
  * * The hard affinity is not a subset of soft affinity
  * * There is an overlap between the soft and hard affinity masks
  */
-static inline int has_soft_affinity(const struct sched_unit *unit)
+static inline bool has_soft_affinity(const struct sched_unit *unit)
 {
     return unit->soft_aff_effective &&
            !cpumask_subset(cpupool_domain_master_cpumask(unit->domain),
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index d26f77f554..6dc4b6a6e5 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -490,13 +490,13 @@ rt_update_deadline(s_time_t now, struct rt_unit *svc)
 static inline bool
 deadline_queue_remove(struct list_head *queue, struct list_head *elem)
 {
-    int pos = 0;
+    bool first = true;
 
     if ( queue->next != elem )
-        pos = 1;
+        first = false;
 
     list_del_init(elem);
-    return !pos;
+    return first;
 }
 
 static inline bool
@@ -505,17 +505,17 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
                       struct list_head *queue)
 {
     struct list_head *iter;
-    int pos = 0;
+    bool first = true;
 
     list_for_each ( iter, queue )
     {
         struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
-        pos++;
+        first = false;
     }
     list_add_tail(elem, iter);
-    return !pos;
+    return first;
 }
 #define deadline_runq_insert(...) \
   deadline_queue_insert(&q_elem, ##__VA_ARGS__)
@@ -605,7 +605,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
     struct rt_unit *rearm_svc = svc;
-    bool_t rearm = 0;
+    bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
 
@@ -622,7 +622,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
     {
         deadline_replq_insert(svc, &svc->replq_elem, replq);
         rearm_svc = replq_elem(replq->next);
-        rearm = 1;
+        rearm = true;
     }
     else
         rearm = deadline_replq_insert(svc, &svc->replq_elem, replq);
@@ -1279,7 +1279,7 @@ rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct rt_unit * const svc = rt_unit(unit);
     s_time_t now;
-    bool_t missed;
+    bool missed;
 
     BUG_ON( is_idle_unit(unit) );
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index b4c2e4f7c2..d8e961095f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -560,18 +560,18 @@ static inline bool is_system_domain(const struct domain *d)
  * Use this when you don't have an existing reference to @d. It returns
  * FALSE if @d is being destroyed.
  */
-static always_inline int get_domain(struct domain *d)
+static always_inline bool get_domain(struct domain *d)
 {
     int old, seen = atomic_read(&d->refcnt);
     do
     {
         old = seen;
         if ( unlikely(old & DOMAIN_DESTROYED) )
-            return 0;
+            return false;
         seen = atomic_cmpxchg(&d->refcnt, old, old + 1);
     }
     while ( unlikely(seen != old) );
-    return 1;
+    return true;
 }
 
 /*
-- 
2.16.4
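
The get_domain() hunk is the most instructive of these conversions: the
loop publishes old + 1 only if no other CPU changed the refcount in the
meantime, and it refuses to revive a domain already flagged as destroyed,
so a bool return ("did we get a reference?") states the contract far
better than 0/1 did. A standalone model of the same pattern with C11
atomics (simplified types and an arbitrary flag bit, not Xen's atomic_t
API):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define DOMAIN_DESTROYED (1 << 30)   /* bit choice arbitrary here */

    static bool get_ref(atomic_int *refcnt)
    {
        int seen = atomic_load(refcnt);

        do {
            if ( seen & DOMAIN_DESTROYED )
                return false;            /* never revive a dying object */
            /* On failure, seen is refreshed with the value found. */
        } while ( !atomic_compare_exchange_weak(refcnt, &seen, seen + 1) );

        return true;
    }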


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Xen-devel] [PATCH v2 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
  2020-01-08 15:23 [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Juergen Gross
                   ` (6 preceding siblings ...)
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate Juergen Gross
@ 2020-01-08 15:23 ` Juergen Gross
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate Juergen Gross
  2020-01-22 13:11 ` [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Dario Faggioli
  9 siblings, 0 replies; 24+ messages in thread
From: Juergen Gross @ 2020-01-08 15:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Jan Beulich, Volodymyr Babchuk,
	Roger Pau Monné

sched_tick_suspend() and sched_tick_resume() only call rcu-related
functions, so eliminate them and move the rcu_idle_timer*() calls into
rcu_idle_[enter|exit]().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
Acked-by: Julien Grall <julien@xen.org>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
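
Condensed from the arm do_idle() hunk below, every idle path now follows
the same calling pattern (wait_for_interrupt() is a hypothetical
placeholder for the arch-specific halt):

    static void do_idle_sketch(unsigned int cpu)
    {
        rcu_idle_enter(cpu);          /* now also arms the RCU idle timer */
        /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
        process_pending_softirqs();

        local_irq_disable();
        if ( cpu_is_haltable(cpu) )
            wait_for_interrupt();     /* hypothetical arch-specific halt */
        local_irq_enable();

        rcu_idle_exit(cpu);           /* stops the idle timer before
                                         clearing the CPU from idle_cpumask */
    }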
 xen/arch/arm/domain.c         |  6 +++---
 xen/arch/x86/acpi/cpu_idle.c  | 15 ++++++++-------
 xen/arch/x86/cpu/mwait-idle.c |  8 ++++----
 xen/common/rcupdate.c         |  7 +++++--
 xen/common/sched/core.c       | 12 ------------
 xen/include/xen/rcupdate.h    |  3 ---
 xen/include/xen/sched.h       |  2 --
 7 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c0a13aa0ab..aa3df3b3ba 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -46,8 +46,8 @@ static void do_idle(void)
 {
     unsigned int cpu = smp_processor_id();
 
-    sched_tick_suspend();
-    /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+    rcu_idle_enter(cpu);
+    /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
     process_pending_softirqs();
 
     local_irq_disable();
@@ -58,7 +58,7 @@ static void do_idle(void)
     }
     local_irq_enable();
 
-    sched_tick_resume();
+    rcu_idle_exit(cpu);
 }
 
 void idle_loop(void)
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 5edd1844f4..2676f0d7da 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -599,7 +599,8 @@ void update_idle_stats(struct acpi_processor_power *power,
 
 static void acpi_processor_idle(void)
 {
-    struct acpi_processor_power *power = processor_powers[smp_processor_id()];
+    unsigned int cpu = smp_processor_id();
+    struct acpi_processor_power *power = processor_powers[cpu];
     struct acpi_processor_cx *cx = NULL;
     int next_state;
     uint64_t t1, t2 = 0;
@@ -648,8 +649,8 @@ static void acpi_processor_idle(void)
 
     cpufreq_dbs_timer_suspend();
 
-    sched_tick_suspend();
-    /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+    rcu_idle_enter(cpu);
+    /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
     process_pending_softirqs();
 
     /*
@@ -658,10 +659,10 @@ static void acpi_processor_idle(void)
      */
     local_irq_disable();
 
-    if ( !cpu_is_haltable(smp_processor_id()) )
+    if ( !cpu_is_haltable(cpu) )
     {
         local_irq_enable();
-        sched_tick_resume();
+        rcu_idle_exit(cpu);
         cpufreq_dbs_timer_resume();
         return;
     }
@@ -786,7 +787,7 @@ static void acpi_processor_idle(void)
         /* Now in C0 */
         power->last_state = &power->states[0];
         local_irq_enable();
-        sched_tick_resume();
+        rcu_idle_exit(cpu);
         cpufreq_dbs_timer_resume();
         return;
     }
@@ -794,7 +795,7 @@ static void acpi_processor_idle(void)
     /* Now in C0 */
     power->last_state = &power->states[0];
 
-    sched_tick_resume();
+    rcu_idle_exit(cpu);
     cpufreq_dbs_timer_resume();
 
     if ( cpuidle_current_governor->reflect )
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 52413e6da1..f49b04c45b 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -755,8 +755,8 @@ static void mwait_idle(void)
 
 	cpufreq_dbs_timer_suspend();
 
-	sched_tick_suspend();
-	/* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+	rcu_idle_enter(cpu);
+	/* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
 	process_pending_softirqs();
 
 	/* Interrupts must be disabled for C2 and higher transitions. */
@@ -764,7 +764,7 @@ static void mwait_idle(void)
 
 	if (!cpu_is_haltable(cpu)) {
 		local_irq_enable();
-		sched_tick_resume();
+		rcu_idle_exit(cpu);
 		cpufreq_dbs_timer_resume();
 		return;
 	}
@@ -806,7 +806,7 @@ static void mwait_idle(void)
 	if (!(lapic_timer_reliable_states & (1 << cstate)))
 		lapic_timer_on();
 
-	sched_tick_resume();
+	rcu_idle_exit(cpu);
 	cpufreq_dbs_timer_resume();
 
 	if ( cpuidle_current_governor->reflect )
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index a56103c6f7..cb712c8690 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -459,7 +459,7 @@ int rcu_needs_cpu(int cpu)
  * periodically poke rcu_pending(), so that it will invoke the callback
  * not too late after the end of the grace period.
  */
-void rcu_idle_timer_start()
+static void rcu_idle_timer_start(void)
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -475,7 +475,7 @@ void rcu_idle_timer_start()
     rdp->idle_timer_active = true;
 }
 
-void rcu_idle_timer_stop()
+static void rcu_idle_timer_stop(void)
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -633,10 +633,13 @@ void rcu_idle_enter(unsigned int cpu)
      * See the comment before cpumask_andnot() in rcu_start_batch().
      */
     smp_mb();
+
+    rcu_idle_timer_start();
 }
 
 void rcu_idle_exit(unsigned int cpu)
 {
+    rcu_idle_timer_stop();
     ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask));
     cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask);
 }
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 896f82f4d2..d32b9b1baa 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3268,18 +3268,6 @@ void schedule_dump(struct cpupool *c)
     rcu_read_unlock(&sched_res_rculock);
 }
 
-void sched_tick_suspend(void)
-{
-    rcu_idle_enter(smp_processor_id());
-    rcu_idle_timer_start();
-}
-
-void sched_tick_resume(void)
-{
-    rcu_idle_timer_stop();
-    rcu_idle_exit(smp_processor_id());
-}
-
 void wait(void)
 {
     schedule();
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 13850865ed..174d058113 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -148,7 +148,4 @@ int rcu_barrier(void);
 void rcu_idle_enter(unsigned int cpu);
 void rcu_idle_exit(unsigned int cpu);
 
-void rcu_idle_timer_start(void);
-void rcu_idle_timer_stop(void);
-
 #endif /* __XEN_RCUPDATE_H */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8e961095f..cf7aa39844 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -690,8 +690,6 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
-void sched_tick_suspend(void);
-void sched_tick_resume(void);
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
-- 
2.16.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate
  2020-01-08 15:23 [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Juergen Gross
                   ` (7 preceding siblings ...)
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume() Juergen Gross
@ 2020-01-08 15:23 ` Juergen Gross
  2020-01-09  6:00   ` Meng Xu
  2020-01-22 13:11 ` [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Dario Faggioli
  9 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2020-01-08 15:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Dario Faggioli, Josh Whitehead, Meng Xu, Jan Beulich,
	Stewart Hildebrand

Make use of the const qualifier more often in scheduling code.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
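
A standalone illustration of what the qualifier buys (not taken from the
patch): a const pointer parameter documents that the callee only reads
the structure, and turns an accidental write into a compile error.

    struct unit_info { unsigned int weight; };   /* hypothetical type */

    static unsigned int read_weight(const struct unit_info *ui)
    {
        /* ui->weight = 0;   <-- would now fail to compile */
        return ui->weight;
    }

This is also what allows several hunks below to propagate const through
dump and pick paths that never modify their units.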
 xen/common/sched/arinc653.c |  4 ++--
 xen/common/sched/core.c     | 25 +++++++++++-----------
 xen/common/sched/cpupool.c  |  2 +-
 xen/common/sched/credit.c   | 44 ++++++++++++++++++++------------------
 xen/common/sched/credit2.c  | 52 +++++++++++++++++++++++----------------------
 xen/common/sched/null.c     | 17 ++++++++-------
 xen/common/sched/rt.c       | 32 ++++++++++++++--------------
 xen/include/xen/sched.h     |  9 ++++----
 8 files changed, 96 insertions(+), 89 deletions(-)

diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index bce8021e3f..5421918221 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -608,7 +608,7 @@ static struct sched_resource *
 a653sched_pick_resource(const struct scheduler *ops,
                         const struct sched_unit *unit)
 {
-    cpumask_t *online;
+    const cpumask_t *online;
     unsigned int cpu;
 
     /*
@@ -639,7 +639,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                   void *pdata, void *vdata)
 {
     struct sched_resource *sr = get_sched_res(cpu);
-    arinc653_unit_t *svc = vdata;
+    const arinc653_unit_t *svc = vdata;
 
     ASSERT(!pdata && svc && is_idle_unit(svc->unit));
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d32b9b1baa..944164d78a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -175,7 +175,7 @@ static inline struct scheduler *dom_scheduler(const struct domain *d)
 
 static inline struct scheduler *unit_scheduler(const struct sched_unit *unit)
 {
-    struct domain *d = unit->domain;
+    const struct domain *d = unit->domain;
 
     if ( likely(d->cpupool != NULL) )
         return d->cpupool->sched;
@@ -202,7 +202,7 @@ static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
 }
 #define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain)
 
-static inline void trace_runstate_change(struct vcpu *v, int new_state)
+static inline void trace_runstate_change(const struct vcpu *v, int new_state)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
     uint32_t event;
@@ -220,7 +220,7 @@ static inline void trace_runstate_change(struct vcpu *v, int new_state)
     __trace_var(event, 1/*tsc*/, sizeof(d), &d);
 }
 
-static inline void trace_continue_running(struct vcpu *v)
+static inline void trace_continue_running(const struct vcpu *v)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
 
@@ -302,7 +302,8 @@ void sched_guest_idle(void (*idle) (void), unsigned int cpu)
     atomic_dec(&per_cpu(sched_urgent_count, cpu));
 }
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate)
 {
     spinlock_t *lock;
     s_time_t delta;
@@ -324,7 +325,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 uint64_t get_cpu_idle_time(unsigned int cpu)
 {
     struct vcpu_runstate_info state = { 0 };
-    struct vcpu *v = idle_vcpu[cpu];
+    const struct vcpu *v = idle_vcpu[cpu];
 
     if ( cpu_online(cpu) && v )
         vcpu_runstate_get(v, &state);
@@ -392,7 +393,7 @@ static void sched_free_unit_mem(struct sched_unit *unit)
 
 static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
 {
-    struct vcpu *vunit;
+    const struct vcpu *vunit;
     unsigned int cnt = 0;
 
     /* Don't count to be released vcpu, might be not in vcpu list yet. */
@@ -522,7 +523,7 @@ static unsigned int sched_select_initial_cpu(const struct vcpu *v)
 
 int sched_init_vcpu(struct vcpu *v)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
     struct sched_unit *unit;
     unsigned int processor;
 
@@ -913,7 +914,7 @@ static void sched_unit_move_locked(struct sched_unit *unit,
                                    unsigned int new_cpu)
 {
     unsigned int old_cpu = unit->res->master_cpu;
-    struct vcpu *v;
+    const struct vcpu *v;
 
     rcu_read_lock(&sched_res_rculock);
 
@@ -1090,7 +1091,7 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(struct sched_unit *unit)
+static void sched_reset_affinity_broken(const struct sched_unit *unit)
 {
     struct vcpu *v;
 
@@ -1176,7 +1177,7 @@ void restore_vcpu_affinity(struct domain *d)
 int cpu_disable_scheduler(unsigned int cpu)
 {
     struct domain *d;
-    struct cpupool *c;
+    const struct cpupool *c;
     cpumask_t online_affinity;
     int ret = 0;
 
@@ -1251,8 +1252,8 @@ out:
 static int cpu_disable_scheduler_check(unsigned int cpu)
 {
     struct domain *d;
-    struct vcpu *v;
-    struct cpupool *c;
+    const struct vcpu *v;
+    const struct cpupool *c;
 
     c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index c1396cfff4..28d5143e37 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -881,7 +881,7 @@ int cpupool_get_id(const struct domain *d)
     return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
 }
 
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool)
 {
     return pool->cpu_valid;
 }
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index a75efbd43d..cdda6fa09b 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -233,7 +233,7 @@ static void csched_tick(void *_cpu);
 static void csched_acct(void *dummy);
 
 static inline int
-__unit_on_runq(struct csched_unit *svc)
+__unit_on_runq(const struct csched_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -349,11 +349,11 @@ boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
 DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
 
-static inline void __runq_tickle(struct csched_unit *new)
+static inline void __runq_tickle(const struct csched_unit *new)
 {
     unsigned int cpu = sched_unit_master(new->unit);
-    struct sched_resource *sr = get_sched_res(cpu);
-    struct sched_unit *unit = new->unit;
+    const struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_unit *unit = new->unit;
     struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
     cpumask_t mask, idle_mask, *online;
@@ -509,7 +509,7 @@ static inline void __runq_tickle(struct csched_unit *new)
 static void
 csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
-    struct csched_private *prv = CSCHED_PRIV(ops);
+    const struct csched_private *prv = CSCHED_PRIV(ops);
 
     /*
      * pcpu either points to a valid struct csched_pcpu, or is NULL, if we're
@@ -652,7 +652,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
 #ifndef NDEBUG
 static inline void
-__csched_unit_check(struct sched_unit *unit)
+__csched_unit_check(const struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     struct csched_dom * const sdom = svc->sdom;
@@ -700,8 +700,8 @@ __csched_vcpu_is_cache_hot(const struct csched_private *prv,
 
 static inline int
 __csched_unit_is_migrateable(const struct csched_private *prv,
-                             struct sched_unit *unit,
-                             int dest_cpu, cpumask_t *mask)
+                             const struct sched_unit *unit,
+                             int dest_cpu, const cpumask_t *mask)
 {
     const struct csched_unit *svc = CSCHED_UNIT(unit);
     /*
@@ -725,7 +725,7 @@ _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit,
     /* We must always use cpu's scratch space */
     cpumask_t *cpus = cpumask_scratch_cpu(cpu);
     cpumask_t idlers;
-    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
+    const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     struct csched_pcpu *spc = NULL;
     int balance_step;
 
@@ -932,7 +932,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
 {
     struct sched_unit *currunit = current->sched_unit;
     struct csched_unit * const svc = CSCHED_UNIT(currunit);
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     const struct scheduler *ops = sr->scheduler;
 
     ASSERT( sched_unit_master(currunit) == cpu );
@@ -1084,7 +1084,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     unsigned int cpu = sched_unit_master(unit);
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
 
     SCHED_STAT_CRANK(unit_sleep);
 
@@ -1577,7 +1577,7 @@ static void
 csched_tick(void *_cpu)
 {
     unsigned int cpu = (unsigned long)_cpu;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched_pcpu *spc = CSCHED_PCPU(cpu);
     struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
 
@@ -1604,7 +1604,7 @@ csched_tick(void *_cpu)
 static struct csched_unit *
 csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     const struct csched_private * const prv = CSCHED_PRIV(sr->scheduler);
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
     struct csched_unit *speer;
@@ -1681,10 +1681,10 @@ static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
     struct csched_unit *snext, bool *stolen)
 {
-    struct cpupool *c = get_sched_res(cpu)->cpupool;
+    const struct cpupool *c = get_sched_res(cpu)->cpupool;
     struct csched_unit *speer;
     cpumask_t workers;
-    cpumask_t *online = c->res_valid;
+    const cpumask_t *online = c->res_valid;
     int peer_cpu, first_cpu, peer_node, bstep;
     int node = cpu_to_node(cpu);
 
@@ -2008,7 +2008,7 @@ out:
 }
 
 static void
-csched_dump_unit(struct csched_unit *svc)
+csched_dump_unit(const struct csched_unit *svc)
 {
     struct csched_dom * const sdom = svc->sdom;
 
@@ -2041,10 +2041,11 @@ csched_dump_unit(struct csched_unit *svc)
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct list_head *runq, *iter;
+    const struct list_head *runq;
+    struct list_head *iter;
     struct csched_private *prv = CSCHED_PRIV(ops);
-    struct csched_pcpu *spc;
-    struct csched_unit *svc;
+    const struct csched_pcpu *spc;
+    const struct csched_unit *svc;
     spinlock_t *lock;
     unsigned long flags;
     int loop;
@@ -2132,12 +2133,13 @@ csched_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->active_sdom )
     {
-        struct csched_dom *sdom;
+        const struct csched_dom *sdom;
+
         sdom = list_entry(iter_sdom, struct csched_dom, active_sdom_elem);
 
         list_for_each( iter_svc, &sdom->active_unit )
         {
-            struct csched_unit *svc;
+            const struct csched_unit *svc;
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_unit, active_unit_elem);
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 849d254e04..256c1c01fc 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -692,7 +692,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
  */
 static int get_fallback_cpu(struct csched2_unit *svc)
 {
-    struct sched_unit *unit = svc->unit;
+    const struct sched_unit *unit = svc->unit;
     unsigned int bs;
 
     SCHED_STAT_CRANK(need_fallback_cpu);
@@ -774,7 +774,7 @@ static int get_fallback_cpu(struct csched2_unit *svc)
  *
  * FIXME: Do pre-calculated division?
  */
-static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
+static void t2c_update(const struct csched2_runqueue_data *rqd, s_time_t time,
                           struct csched2_unit *svc)
 {
     uint64_t val = time * rqd->max_weight + svc->residual;
@@ -783,7 +783,8 @@ static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
     svc->credit -= val;
 }
 
-static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_unit *svc)
+static s_time_t c2t(const struct csched2_runqueue_data *rqd, s_time_t credit,
+                    const struct csched2_unit *svc)
 {
     return credit * svc->weight / rqd->max_weight;
 }
@@ -792,7 +793,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
  * Runqueue related code.
  */
 
-static inline int unit_on_runq(struct csched2_unit *svc)
+static inline int unit_on_runq(const struct csched2_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -849,9 +850,9 @@ static inline bool same_core(unsigned int cpua, unsigned int cpub)
 }
 
 static unsigned int
-cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+cpu_to_runqueue(const struct csched2_private *prv, unsigned int cpu)
 {
-    struct csched2_runqueue_data *rqd;
+    const struct csched2_runqueue_data *rqd;
     unsigned int rqi;
 
     for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
@@ -917,7 +918,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
 
         list_for_each( iter, &rqd->svc )
         {
-            struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
+            const struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
 
             if ( svc->weight > max_weight )
                 max_weight = svc->weight;
@@ -970,7 +971,7 @@ _runq_assign(struct csched2_unit *svc, struct csched2_runqueue_data *rqd)
 }
 
 static void
-runq_assign(const struct scheduler *ops, struct sched_unit *unit)
+runq_assign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -997,7 +998,7 @@ _runq_deassign(struct csched2_unit *svc)
 }
 
 static void
-runq_deassign(const struct scheduler *ops, struct sched_unit *unit)
+runq_deassign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -1203,7 +1204,7 @@ static void
 update_svc_load(const struct scheduler *ops,
                 struct csched2_unit *svc, int change, s_time_t now)
 {
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, unit_load;
     unsigned int P, W;
 
@@ -1362,11 +1363,11 @@ static inline bool is_preemptable(const struct csched2_unit *svc,
  * Within the same class, the highest difference of credit.
  */
 static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
-                             struct csched2_unit *new, unsigned int cpu)
+                             const struct csched2_unit *new, unsigned int cpu)
 {
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct csched2_unit * cur = csched2_unit(curr_on_cpu(cpu));
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t score;
 
     /*
@@ -1441,7 +1442,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now)
     struct sched_unit *unit = new->unit;
     unsigned int bs, cpu = sched_unit_master(unit);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
+    const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -2005,7 +2006,7 @@ static void replenish_domain_budget(void* data)
 
 #ifndef NDEBUG
 static inline void
-csched2_unit_check(struct sched_unit *unit)
+csched2_unit_check(const struct sched_unit *unit)
 {
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_dom * const sdom = svc->sdom;
@@ -2541,8 +2542,8 @@ static void migrate(const struct scheduler *ops,
  *  - svc is not already flagged to migrate,
  *  - if svc is allowed to run on at least one of the pcpus of rqd.
  */
-static bool unit_is_migrateable(struct csched2_unit *svc,
-                                  struct csched2_runqueue_data *rqd)
+static bool unit_is_migrateable(const struct csched2_unit *svc,
+                                const struct csched2_runqueue_data *rqd)
 {
     struct sched_unit *unit = svc->unit;
     int cpu = sched_unit_master(unit);
@@ -3076,7 +3077,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct csched2_unit *svc = unit->priv;
+    const struct csched2_unit *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
@@ -3142,7 +3143,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     int rt_credit; /* Proposed runtime measured in credits */
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *runq = &rqd->runq;
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
 
     /*
      * If we're idle, just stay so. Others (or external events)
@@ -3239,7 +3240,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
     struct csched2_private *prv = csched2_priv(sr->scheduler);
     bool yield = false, soft_aff_preempt = false;
@@ -3603,7 +3604,8 @@ static void csched2_schedule(
 }
 
 static void
-csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
+csched2_dump_unit(const struct csched2_private *prv,
+                  const struct csched2_unit *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
             svc->unit->domain->domain_id,
@@ -3626,8 +3628,8 @@ csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
 static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = csched2_priv(ops);
-    struct csched2_unit *svc;
+    const struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_unit *svc;
 
     printk("CPU[%02d] runq=%d, sibling={%*pbl}, core={%*pbl}\n",
            cpu, c2r(cpu),
@@ -3695,8 +3697,8 @@ csched2_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->sdom )
     {
-        struct csched2_dom *sdom;
-        struct sched_unit *unit;
+        const struct csched2_dom *sdom;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3737,7 +3739,7 @@ csched2_dump(const struct scheduler *ops)
         printk("RUNQ:\n");
         list_for_each( iter, runq )
         {
-            struct csched2_unit *svc = runq_elem(iter);
+            const struct csched2_unit *svc = runq_elem(iter);
 
             if ( svc )
             {
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 3161ac2e62..8c3101649d 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -278,12 +278,12 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  * So this is not part of any hot path.
  */
 static struct sched_resource *
-pick_res(struct null_private *prv, const struct sched_unit *unit)
+pick_res(const struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
-    cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -375,7 +375,7 @@ static void unit_assign(struct null_private *prv, struct sched_unit *unit,
 }
 
 /* Returns true if a cpu was tickled */
-static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
+static bool unit_deassign(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
@@ -441,7 +441,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 {
     struct sched_resource *sr = get_sched_res(cpu);
     struct null_private *prv = null_priv(new_ops);
-    struct null_unit *nvc = vdata;
+    const struct null_unit *nvc = vdata;
 
     ASSERT(nvc && is_idle_unit(nvc->unit));
 
@@ -940,7 +940,8 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
     prev->next_task->migrated = false;
 }
 
-static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
+static inline void dump_unit(const struct null_private *prv,
+                             const struct null_unit *nvc)
 {
     printk("[%i.%i] pcpu=%d", nvc->unit->domain->domain_id,
             nvc->unit->unit_id, list_empty(&nvc->waitq_elem) ?
@@ -950,8 +951,8 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
-    struct null_unit *nvc;
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
 
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 6dc4b6a6e5..f90a605643 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -352,7 +352,7 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
+    const struct rt_unit *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -371,8 +371,8 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *runq, *depletedq, *replq, *iter;
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
-    struct rt_dom *sdom;
+    const struct rt_unit *svc;
+    const struct rt_dom *sdom;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -408,7 +408,7 @@ rt_dump(const struct scheduler *ops)
     printk("Domain info:\n");
     list_for_each ( iter, &prv->sdom )
     {
-        struct sched_unit *unit;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter, struct rt_dom, sdom_elem);
         printk("\tdomain: %d\n", sdom->dom->domain_id);
@@ -509,7 +509,7 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
 
     list_for_each ( iter, queue )
     {
-        struct rt_unit * iter_svc = (*qelem)(iter);
+        const struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
         first = false;
@@ -547,7 +547,7 @@ replq_remove(const struct scheduler *ops, struct rt_unit *svc)
          */
         if ( !list_empty(replq) )
         {
-            struct rt_unit *svc_next = replq_elem(replq->next);
+            const struct rt_unit *svc_next = replq_elem(replq->next);
             set_timer(&prv->repl_timer, svc_next->cur_deadline);
         }
         else
@@ -604,7 +604,7 @@ static void
 replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
-    struct rt_unit *rearm_svc = svc;
+    const struct rt_unit *rearm_svc = svc;
     bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
@@ -640,7 +640,7 @@ static struct sched_resource *
 rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
     cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
@@ -1028,7 +1028,7 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
     cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
 
     list_for_each ( iter, runq )
     {
@@ -1197,15 +1197,15 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
  * lock is grabbed before calling this function
  */
 static void
-runq_tickle(const struct scheduler *ops, struct rt_unit *new)
+runq_tickle(const struct scheduler *ops, const struct rt_unit *new)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
-    struct rt_unit *iter_svc;
-    struct sched_unit *iter_unit;
+    const struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
+    const struct rt_unit *iter_svc;
+    const struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
     cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
-    cpumask_t *online;
+    const cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
@@ -1379,7 +1379,7 @@ rt_dom_cntl(
 {
     struct rt_private *prv = rt_priv(ops);
     struct rt_unit *svc;
-    struct sched_unit *unit;
+    const struct sched_unit *unit;
     unsigned long flags;
     int rc = 0;
     struct xen_domctl_schedparam_vcpu local_sched;
@@ -1484,7 +1484,7 @@ rt_dom_cntl(
  */
 static void repl_timer_handler(void *data){
     s_time_t now;
-    struct scheduler *ops = data;
+    const struct scheduler *ops = data;
     struct rt_private *prv = rt_priv(ops);
     struct list_head *replq = rt_replq(ops);
     struct list_head *runq = rt_runq(ops);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cf7aa39844..7c5c437247 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -773,7 +773,7 @@ static inline void hypercall_cancel_continuation(struct vcpu *v)
 extern struct domain *domain_list;
 
 /* Caller must hold the domlist_read_lock or domlist_update_lock. */
-static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
+static inline struct domain *first_domain_in_cpupool(const struct cpupool *c)
 {
     struct domain *d;
     for (d = rcu_dereference(domain_list); d && d->cpupool != c;
@@ -781,7 +781,7 @@ static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
     return d;
 }
 static inline struct domain *next_domain_in_cpupool(
-    struct domain *d, struct cpupool *c)
+    struct domain *d, const struct cpupool *c)
 {
     for (d = rcu_dereference(d->next_in_list); d && d->cpupool != c;
          d = rcu_dereference(d->next_in_list));
@@ -925,7 +925,8 @@ void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
 void scheduler_enable(void);
@@ -1056,7 +1057,7 @@ extern enum cpufreq_controller {
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4



* Re: [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate Juergen Gross
@ 2020-01-09  6:00   ` Meng Xu
  0 siblings, 0 replies; 24+ messages in thread
From: Meng Xu @ 2020-01-09  6:00 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	Josh Whitehead, Jan Beulich, Stewart Hildebrand, xen-devel

On Wed, Jan 8, 2020 at 7:23 AM Juergen Gross <jgross@suse.com> wrote:
>
> Make use of the const qualifier more often in scheduling code.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
> ---
>  xen/common/sched/arinc653.c |  4 ++--
>  xen/common/sched/core.c     | 25 +++++++++++-----------
>  xen/common/sched/cpupool.c  |  2 +-
>  xen/common/sched/credit.c   | 44 ++++++++++++++++++++------------------
>  xen/common/sched/credit2.c  | 52 +++++++++++++++++++++++----------------------
>  xen/common/sched/null.c     | 17 ++++++++-------
>  xen/common/sched/rt.c       | 32 ++++++++++++++--------------
>  xen/include/xen/sched.h     |  9 ++++----
>  8 files changed, 96 insertions(+), 89 deletions(-)
>

As to xen/common/sched/rt.c,

Acked-by: Meng Xu <mengxu@cis.upenn.edu>

Meng


* Re: [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack Juergen Gross
@ 2020-01-09  6:21   ` Meng Xu
  0 siblings, 0 replies; 24+ messages in thread
From: Meng Xu @ 2020-01-09  6:21 UTC (permalink / raw)
  To: Juergen Gross; +Cc: George Dunlap, xen-devel, Dario Faggioli

On Wed, Jan 8, 2020 at 7:24 AM Juergen Gross <jgross@suse.com> wrote:
>
> In rt scheduler there are three instances of cpumasks allocated on the
> stack. Replace them by using cpumask_scratch.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/common/sched/rt.c | 56 ++++++++++++++++++++++++++++++++++-----------------
>  1 file changed, 37 insertions(+), 19 deletions(-)
>
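
For context, the pattern being replaced looks schematically like this
(a sketch with placeholder masks a/b, not one of the actual hunks):

    /* before: a full cpumask_t on the hypervisor stack (can be large) */
    cpumask_t tmp;
    cpumask_and(&tmp, a, b);

    /* after: borrow this cpu's scratch mask while the lock is held */
    cpumask_t *tmp = cpumask_scratch_cpu(smp_processor_id());
    cpumask_and(tmp, a, b);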

Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

Meng


* Re: [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate Juergen Gross
@ 2020-01-09  6:23   ` Meng Xu
  2020-01-22 12:56   ` Dario Faggioli
  1 sibling, 0 replies; 24+ messages in thread
From: Meng Xu @ 2020-01-09  6:23 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	Josh Whitehead, Jan Beulich, Stewart Hildebrand, xen-devel

On Wed, Jan 8, 2020 at 7:24 AM Juergen Gross <jgross@suse.com> wrote:
>
> Scheduling code has several places using int or bool_t instead of bool.
> Switch those.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - rename bool "pos" to "first" (Dario Faggioli)
> ---
>  xen/common/sched/arinc653.c |  8 ++++----
>  xen/common/sched/core.c     | 14 +++++++-------
>  xen/common/sched/cpupool.c  | 10 +++++-----
>  xen/common/sched/credit.c   | 12 ++++++------
>  xen/common/sched/private.h  |  2 +-
>  xen/common/sched/rt.c       | 18 +++++++++---------
>  xen/include/xen/sched.h     |  6 +++---
>  7 files changed, 35 insertions(+), 35 deletions(-)
>
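
The conversions are of the usual form, e.g. schematically (not an
actual hunk):

    -    int is_active = 1;
    +    bool is_active = true;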

As to  xen/common/sched/rt.c,

Reviewed-by: Meng Xu <mengxu@cis.upenn.edu>

Cheers,

Meng


* Re: [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private Juergen Gross
@ 2020-01-14 14:27   ` Jan Beulich
  2020-01-14 14:33     ` Jürgen Groß
  0 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2020-01-14 14:27 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	Josh Whitehead, Meng Xu, Stewart Hildebrand, xen-devel,
	Roger Pau Monné

On 08.01.2020 16:23, Juergen Gross wrote:
> @@ -234,16 +233,6 @@ void domctl_lock_release(void)
>      spin_unlock(&current->domain->hypercall_deadlock_mutex);
>  }
>  
> -static inline
> -int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
> -{
> -    return vcpuaff->flags == 0 ||
> -           ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
> -            guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
> -           ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
> -            guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
> -}

I'd like to suggest keeping this and ...

> @@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  
>      case XEN_DOMCTL_setvcpuaffinity:
>      case XEN_DOMCTL_getvcpuaffinity:
> -    {
> -        struct vcpu *v;
> -        const struct sched_unit *unit;
> -        struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity;
> -
> -        ret = -EINVAL;
> -        if ( vcpuaff->vcpu >= d->max_vcpus )
> -            break;
> -
> -        ret = -ESRCH;
> -        if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
> -            break;
> -
> -        unit = v->sched_unit;
> -        ret = -EINVAL;
> -        if ( vcpuaffinity_params_invalid(vcpuaff) )
> -            break;

... everything up to here (except the [too early] unit assignment),
as not being scheduler specific at all. The remainder then would
better become two distinct functions, eliminating the need to pass
op->cmd (and presumably passing "v" instead of "d"). If, otoh, the
decision (supported by others) is to move everything, then I think
it would be appropriate to make at least some adjustments: The code
above should be converted to use domain_vcpu(), and e.g. ...

> -        if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
> -        {
> -            cpumask_var_t new_affinity, old_affinity;
> -            cpumask_t *online = cpupool_domain_master_cpumask(v->domain);

... this should use "d".
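
(To illustrate -- a rough sketch only, with hypothetical helper names,
of the split I'm suggesting:

    case XEN_DOMCTL_setvcpuaffinity:
    case XEN_DOMCTL_getvcpuaffinity:
    {
        /* domain_vcpu() folds the range and NULL checks together. */
        struct vcpu *v = domain_vcpu(d, op->u.vcpuaffinity.vcpu);

        ret = -ESRCH;
        if ( v == NULL )
            break;

        ret = -EINVAL;
        if ( vcpuaffinity_params_invalid(&op->u.vcpuaffinity) )
            break;

        /* vcpu_{set,get}_affinity_domctl() are hypothetical names. */
        if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
            ret = vcpu_set_affinity_domctl(v, &op->u.vcpuaffinity);
        else
            ret = vcpu_get_affinity_domctl(v, &op->u.vcpuaffinity);
        break;
    }

with the two helpers living in the scheduler code.)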

> @@ -875,6 +876,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
>      return ret;
>  }
>  
> +int cpupool_get_id(const struct domain *d)

I find plain int odd for something like an ID, but I can see why
this is.

> +cpumask_t *cpupool_valid_cpus(struct cpupool *pool)

const twice?

Jan


* Re: [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private
  2020-01-14 14:27   ` Jan Beulich
@ 2020-01-14 14:33     ` Jürgen Groß
  2020-01-14 14:39       ` Jan Beulich
  0 siblings, 1 reply; 24+ messages in thread
From: Jürgen Groß @ 2020-01-14 14:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	Josh Whitehead, Meng Xu, Stewart Hildebrand, xen-devel,
	Roger Pau Monné

On 14.01.20 15:27, Jan Beulich wrote:
> On 08.01.2020 16:23, Juergen Gross wrote:
>> @@ -234,16 +233,6 @@ void domctl_lock_release(void)
>>       spin_unlock(&current->domain->hypercall_deadlock_mutex);
>>   }
>>   
>> -static inline
>> -int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
>> -{
>> -    return vcpuaff->flags == 0 ||
>> -           ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
>> -            guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
>> -           ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
>> -            guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
>> -}
> 
> I'd like to suggest keeping this and ...
> 
>> @@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>   
>>       case XEN_DOMCTL_setvcpuaffinity:
>>       case XEN_DOMCTL_getvcpuaffinity:
>> -    {
>> -        struct vcpu *v;
>> -        const struct sched_unit *unit;
>> -        struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity;
>> -
>> -        ret = -EINVAL;
>> -        if ( vcpuaff->vcpu >= d->max_vcpus )
>> -            break;
>> -
>> -        ret = -ESRCH;
>> -        if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
>> -            break;
>> -
>> -        unit = v->sched_unit;
>> -        ret = -EINVAL;
>> -        if ( vcpuaffinity_params_invalid(vcpuaff) )
>> -            break;
> 
> ... everything up to here (except the [too early] unit assignment),
> as not being scheduler specific at all. The remainder then would
> better become two distinct functions, eliminating the need to pass
> op->cmd (and presumably passing "v" instead of "d"). If, otoh, the
> decision (supported by others) is to move everything, then I think
> it would be appropriate to make at least some adjustments: The code
> above should be converted to use domain_vcpu(), and e.g. ...

Either would be fine with me.

> 
>> -        if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
>> -        {
>> -            cpumask_var_t new_affinity, old_affinity;
>> -            cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
> 
> ... this should use "d".

Yes.

> 
>> @@ -875,6 +876,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
>>       return ret;
>>   }
>>   
>> +int cpupool_get_id(const struct domain *d)
> 
> I find plain int odd for something like an ID, but I can see why
> this is.
> 
>> +cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
> 
> const twice?

See patch 9.


Juergen


* Re: [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private
  2020-01-14 14:33     ` Jürgen Groß
@ 2020-01-14 14:39       ` Jan Beulich
  2020-01-14 14:50         ` Jürgen Groß
  0 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2020-01-14 14:39 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	Josh Whitehead, Meng Xu, Stewart Hildebrand, xen-devel,
	Roger Pau Monné

On 14.01.2020 15:33, Jürgen Groß wrote:
> On 14.01.20 15:27, Jan Beulich wrote:
>> On 08.01.2020 16:23, Juergen Gross wrote:
>>> +cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
>>
>> const twice?
> 
> See patch 9.

Well, in such a case either justify the omission in the description,
or introduce the function with const here and drop them there. As
things are, no reviewer should really let this pass uncommented.

Jan


* Re: [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private
  2020-01-14 14:39       ` Jan Beulich
@ 2020-01-14 14:50         ` Jürgen Groß
  0 siblings, 0 replies; 24+ messages in thread
From: Jürgen Groß @ 2020-01-14 14:50 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	Josh Whitehead, Meng Xu, Stewart Hildebrand, xen-devel,
	Roger Pau Monné

On 14.01.20 15:39, Jan Beulich wrote:
> On 14.01.2020 15:33, Jürgen Groß wrote:
>> On 14.01.20 15:27, Jan Beulich wrote:
>>> On 08.01.2020 16:23, Juergen Gross wrote:
>>>> +cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
>>>
>>> const twice?
>>
>> See patch 9.
> 
> Well, in such a case either justify the omission in the description,
> or introduce the function with const here and drop them there. As
> things are, no reviewer should really let this pass uncommented.

Oh, sorry, you are right. When writing my reply I believed I just moved
those functions. The introduction should have the const qualifiers
already, of course.
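
I.e. the function should be introduced right away as it ends up looking
after patch 9:

    const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);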


Juergen


* Re: [Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h Juergen Gross
@ 2020-01-14 15:38   ` Jan Beulich
  2020-01-14 15:42     ` Jürgen Groß
  0 siblings, 1 reply; 24+ messages in thread
From: Jan Beulich @ 2020-01-14 15:38 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	xen-devel

On 08.01.2020 16:23, Juergen Gross wrote:
> --- a/xen/common/sched/private.h
> +++ b/xen/common/sched/private.h
> @@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>  struct cpupool
>  {
>      int              cpupool_id;
> +#define CPUPOOLID_NONE    -1

Would you mind taking the opportunity and giving this the
parentheses it has been lacking?
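
I.e.:

    #define CPUPOOLID_NONE    (-1)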

Jan


* Re: [Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h
  2020-01-14 15:38   ` Jan Beulich
@ 2020-01-14 15:42     ` Jürgen Groß
  0 siblings, 0 replies; 24+ messages in thread
From: Jürgen Groß @ 2020-01-14 15:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Dario Faggioli,
	xen-devel

On 14.01.20 16:38, Jan Beulich wrote:
> On 08.01.2020 16:23, Juergen Gross wrote:
>> --- a/xen/common/sched/private.h
>> +++ b/xen/common/sched/private.h
>> @@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
>>   struct cpupool
>>   {
>>       int              cpupool_id;
>> +#define CPUPOOLID_NONE    -1
> 
> Would you mind taking the opportunity and giving this the
> parentheses it has been lacking?

Yes, of course.


Juergen



* Re: [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate Juergen Gross
  2020-01-09  6:23   ` Meng Xu
@ 2020-01-22 12:56   ` Dario Faggioli
  1 sibling, 0 replies; 24+ messages in thread
From: Dario Faggioli @ 2020-01-22 12:56 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Josh Whitehead,
	Meng Xu, Stewart Hildebrand



On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> Scheduling code has several places using int or bool_t instead of bool.
> Switch those.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - rename bool "pos" to "first" (Dario Faggioli)
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [Xen-devel] [PATCH v2 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory Juergen Gross
@ 2020-01-22 13:03   ` Dario Faggioli
  0 siblings, 0 replies; 24+ messages in thread
From: Dario Faggioli @ 2020-01-22 13:03 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Josh Whitehead,
	Meng Xu, Stewart Hildebrand



On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> Move sched*.c and cpupool.c to a new directory common/sched.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - renamed sources (Dario Faggioli, Andrew Cooper)
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook Juergen Gross
@ 2020-01-22 13:04   ` Dario Faggioli
  0 siblings, 0 replies; 24+ messages in thread
From: Dario Faggioli @ 2020-01-22 13:04 UTC (permalink / raw)
  To: Juergen Gross, xen-devel; +Cc: George Dunlap



On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> Instead of having its own percpu-variable for private data per cpu, the
> generic scheduler interface for that purpose should be used.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
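And indeed, schematically (not the actual hunks), this turns:

    /* before: scheduler-local per-cpu variable */
    static DEFINE_PER_CPU(struct null_pcpu, npc);

into reaching the same data through the generic pdata hook:

    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
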
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers Juergen Gross
@ 2020-01-22 13:06   ` Dario Faggioli
  0 siblings, 0 replies; 24+ messages in thread
From: Dario Faggioli @ 2020-01-22 13:06 UTC (permalink / raw)
  To: Juergen Gross, xen-devel; +Cc: George Dunlap



On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> With the idle scheduler now taking care of all cpus not in any cpupool
> the special cases in the other schedulers for no cpupool associated
> can be removed.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups
  2020-01-08 15:23 [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Juergen Gross
                   ` (8 preceding siblings ...)
  2020-01-08 15:23 ` [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate Juergen Gross
@ 2020-01-22 13:11 ` Dario Faggioli
  9 siblings, 0 replies; 24+ messages in thread
From: Dario Faggioli @ 2020-01-22 13:11 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Josh Whitehead,
	Meng Xu, Stewart Hildebrand, Volodymyr Babchuk,
	Roger Pau Monné



On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> Move all scheduler related hypervisor code to xen/common/sched/ and
> do a lot of cleanups.
> 
> Juergen Gross (9):
>   xen/sched: move schedulers and cpupool coding to dedicated directory
>   xen/sched: make sched-if.h really scheduler private
>   xen/sched: cleanup sched.h
>   xen/sched: remove special cases for free cpus in schedulers
>   xen/sched: use scratch cpumask instead of allocating it on the stack
>   xen/sched: replace null scheduler percpu-variable with pdata hook
>   xen/sched: switch scheduling to bool where appropriate
>   xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
>   xen/sched: add const qualifier where appropriate
>
Ok, unless I'm missing something, I think that "scheduling-wise" this
series is fully Rev/Acked-by.

Thanks, Juergen, for the cleanups. The code looks a lot better with these
patches applied! :-)

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



end of thread

Thread overview: 24+ messages
2020-01-08 15:23 [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Juergen Gross
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory Juergen Gross
2020-01-22 13:03   ` Dario Faggioli
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private Juergen Gross
2020-01-14 14:27   ` Jan Beulich
2020-01-14 14:33     ` Jürgen Groß
2020-01-14 14:39       ` Jan Beulich
2020-01-14 14:50         ` Jürgen Groß
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h Juergen Gross
2020-01-14 15:38   ` Jan Beulich
2020-01-14 15:42     ` Jürgen Groß
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers Juergen Gross
2020-01-22 13:06   ` Dario Faggioli
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack Juergen Gross
2020-01-09  6:21   ` Meng Xu
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook Juergen Gross
2020-01-22 13:04   ` Dario Faggioli
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate Juergen Gross
2020-01-09  6:23   ` Meng Xu
2020-01-22 12:56   ` Dario Faggioli
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume() Juergen Gross
2020-01-08 15:23 ` [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate Juergen Gross
2020-01-09  6:00   ` Meng Xu
2020-01-22 13:11 ` [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups Dario Faggioli
