* [PATCH v2] sched: print information about scheduling granularity
@ 2020-04-20 13:06 Sergey Dyasli
2020-04-20 13:13 ` Jan Beulich
2020-04-20 13:45 ` Jürgen Groß
0 siblings, 2 replies; 4+ messages in thread
From: Sergey Dyasli @ 2020-04-20 13:06 UTC (permalink / raw)
To: xen-devel
Cc: Juergen Gross, Sergey Dyasli, George Dunlap, Jan Beulich, Dario Faggioli
Currently it might not be obvious which scheduling mode (e.g. core-
scheduling) is being used by the scheduler. Alleviate this by printing
additional information about the selected granularity per cpupool.
Note: per-cpupool granularity selection is not implemented yet.
The single global value is being used for each cpupool.
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v2:
- print information on a separate line
- use per-cpupool granularity
- updated commit message
CC: Juergen Gross <jgross@suse.com>
CC: Dario Faggioli <dfaggioli@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
xen/common/sched/cpupool.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d40345b585..68106f6c15 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -40,6 +40,30 @@ static DEFINE_SPINLOCK(cpupool_lock);
static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
static unsigned int __read_mostly sched_granularity = 1;
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
+ char *str = "";
+
+ switch ( mode )
+ {
+ case SCHED_GRAN_cpu:
+ str = "cpu";
+ break;
+ case SCHED_GRAN_core:
+ str = "core";
+ break;
+ case SCHED_GRAN_socket:
+ str = "socket";
+ break;
+ default:
+ ASSERT_UNREACHABLE();
+ break;
+ }
+
+ printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
+ str, gran, gran == 1 ? "" : "s");
+}
+
#ifdef CONFIG_HAS_SCHED_GRANULARITY
static int __init sched_select_granularity(const char *str)
{
@@ -115,6 +139,7 @@ static void __init cpupool_gran_init(void)
warning_add(fallback);
sched_granularity = gran;
+ sched_gran_print(opt_sched_granularity, sched_granularity);
}
unsigned int cpupool_get_granularity(const struct cpupool *c)
@@ -911,6 +936,7 @@ void dump_runq(unsigned char key)
{
printk("Cpupool %d:\n", (*c)->cpupool_id);
printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
+ sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
schedule_dump(*c);
}
--
2.17.1
* Re: [PATCH v2] sched: print information about scheduling granularity
2020-04-20 13:06 [PATCH v2] sched: print information about scheduling granularity Sergey Dyasli
@ 2020-04-20 13:13 ` Jan Beulich
2020-04-20 13:45 ` Jürgen Groß
1 sibling, 0 replies; 4+ messages in thread
From: Jan Beulich @ 2020-04-20 13:13 UTC (permalink / raw)
To: Sergey Dyasli; +Cc: Juergen Gross, xen-devel, George Dunlap, Dario Faggioli
On 20.04.2020 15:06, Sergey Dyasli wrote:
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -40,6 +40,30 @@ static DEFINE_SPINLOCK(cpupool_lock);
> static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
> static unsigned int __read_mostly sched_granularity = 1;
>
> +static void sched_gran_print(enum sched_gran mode, unsigned int gran)
> +{
> + char *str = "";
const please (could easily be added while committing of course)
Jan
* Re: [PATCH v2] sched: print information about scheduling granularity
2020-04-20 13:06 [PATCH v2] sched: print information about scheduling granularity Sergey Dyasli
2020-04-20 13:13 ` Jan Beulich
@ 2020-04-20 13:45 ` Jürgen Groß
2020-04-21 7:08 ` Sergey Dyasli
1 sibling, 1 reply; 4+ messages in thread
From: Jürgen Groß @ 2020-04-20 13:45 UTC (permalink / raw)
To: Sergey Dyasli, xen-devel; +Cc: George Dunlap, Jan Beulich, Dario Faggioli
On 20.04.20 15:06, Sergey Dyasli wrote:
> Currently it might not be obvious which scheduling mode (e.g. core-
> scheduling) is being used by the scheduler. Alleviate this by printing
> additional information about the selected granularity per-cpupool.
>
> Note: per-cpupool granularity selection is not implemented yet.
> The single global value is being used for each cpupool.
This is misleading. You are using the per-cpupool values, but they
are all the same right now.
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> ---
> v2:
> - print information on a separate line
> - use per-cpupool granularity
> - updated commit message
>
> CC: Juergen Gross <jgross@suse.com>
> CC: Dario Faggioli <dfaggioli@suse.com>
> CC: George Dunlap <george.dunlap@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> ---
> xen/common/sched/cpupool.c | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index d40345b585..68106f6c15 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -40,6 +40,30 @@ static DEFINE_SPINLOCK(cpupool_lock);
> static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
> static unsigned int __read_mostly sched_granularity = 1;
>
> +static void sched_gran_print(enum sched_gran mode, unsigned int gran)
> +{
> + char *str = "";
> +
> + switch ( mode )
> + {
> + case SCHED_GRAN_cpu:
> + str = "cpu";
> + break;
> + case SCHED_GRAN_core:
> + str = "core";
> + break;
> + case SCHED_GRAN_socket:
> + str = "socket";
> + break;
> + default:
> + ASSERT_UNREACHABLE();
> + break;
> + }
With this addition it might make sense to have an array indexed by
mode to get the string. This array could then be used in
sched_select_granularity(), too.
> +
> + printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
> + str, gran, gran == 1 ? "" : "s");
> +}
> +
> #ifdef CONFIG_HAS_SCHED_GRANULARITY
> static int __init sched_select_granularity(const char *str)
> {
> @@ -115,6 +139,7 @@ static void __init cpupool_gran_init(void)
> warning_add(fallback);
>
> sched_granularity = gran;
> + sched_gran_print(opt_sched_granularity, sched_granularity);
> }
>
> unsigned int cpupool_get_granularity(const struct cpupool *c)
> @@ -911,6 +936,7 @@ void dump_runq(unsigned char key)
> {
> printk("Cpupool %d:\n", (*c)->cpupool_id);
> printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
> + sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
> schedule_dump(*c);
> }
Juergen
* Re: [PATCH v2] sched: print information about scheduling granularity
2020-04-20 13:45 ` Jürgen Groß
@ 2020-04-21 7:08 ` Sergey Dyasli
0 siblings, 0 replies; 4+ messages in thread
From: Sergey Dyasli @ 2020-04-21 7:08 UTC (permalink / raw)
To: Jürgen Groß, xen-devel
Cc: Sergey Dyasli, George Dunlap, Jan Beulich, Dario Faggioli
On 20/04/2020 14:45, Jürgen Groß wrote:
> On 20.04.20 15:06, Sergey Dyasli wrote:
>> Currently it might not be obvious which scheduling mode (e.g. core-
>> scheduling) is being used by the scheduler. Alleviate this by printing
>> additional information about the selected granularity per-cpupool.
>>
>> Note: per-cpupool granularity selection is not implemented yet.
>> The single global value is being used for each cpupool.
>
> This is misleading. You are using the per-cpupool values, but they
> are all the same right now.
That is what I meant by the note, but I may need to improve the wording
since the current version reads ambiguously to you.
>
>>
>> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> ---
>> v2:
>> - print information on a separate line
>> - use per-cpupool granularity
>> - updated commit message
>>
>> CC: Juergen Gross <jgross@suse.com>
>> CC: Dario Faggioli <dfaggioli@suse.com>
>> CC: George Dunlap <george.dunlap@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> ---
>> xen/common/sched/cpupool.c | 26 ++++++++++++++++++++++++++
>> 1 file changed, 26 insertions(+)
>>
>> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
>> index d40345b585..68106f6c15 100644
>> --- a/xen/common/sched/cpupool.c
>> +++ b/xen/common/sched/cpupool.c
>> @@ -40,6 +40,30 @@ static DEFINE_SPINLOCK(cpupool_lock);
>> static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
>> static unsigned int __read_mostly sched_granularity = 1;
>> +static void sched_gran_print(enum sched_gran mode, unsigned int gran)
>> +{
>> + char *str = "";
>> +
>> + switch ( mode )
>> + {
>> + case SCHED_GRAN_cpu:
>> + str = "cpu";
>> + break;
>> + case SCHED_GRAN_core:
>> + str = "core";
>> + break;
>> + case SCHED_GRAN_socket:
>> + str = "socket";
>> + break;
>> + default:
>> + ASSERT_UNREACHABLE();
>> + break;
>> + }
>
> With this addition it might make sense to have an array indexed by
> mode to get the string. This array could then be used in
> sched_select_granularity(), too.
I had been thinking about that, and given your suggestion it looks like
I should go ahead and do it.
>
>> +
>> + printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
>> + str, gran, gran == 1 ? "" : "s");
>> +}
>> +
>> #ifdef CONFIG_HAS_SCHED_GRANULARITY
>> static int __init sched_select_granularity(const char *str)
>> {
>> @@ -115,6 +139,7 @@ static void __init cpupool_gran_init(void)
>> warning_add(fallback);
>> sched_granularity = gran;
>> + sched_gran_print(opt_sched_granularity, sched_granularity);
>> }
>> unsigned int cpupool_get_granularity(const struct cpupool *c)
>> @@ -911,6 +936,7 @@ void dump_runq(unsigned char key)
>> {
>> printk("Cpupool %d:\n", (*c)->cpupool_id);
>> printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
>> + sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
>> schedule_dump(*c);
>> }
>
--
Thanks,
Sergey