From: Andrii Anisov <andrii.anisov@gmail.com>
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [PATCH 1/2] xen: credit2: avoid using cpumask_weight() in hot-paths
Date: Tue, 23 Apr 2019 12:48:21 +0300
Message-ID: <1c6052e6-bf76-e4c9-e7b7-779ee218818e@gmail.com>
In-Reply-To: <155577388014.25746.13361382203794112287.stgit@wayrath>

Hello Dario,

On 20.04.19 18:24, Dario Faggioli wrote:
> cpumask_weight() is known to be expensive. In Credit2, we use it in
> load-balancing, but only for knowing how many CPUs are active in a
> runqueue.
>
> Keeping such count in an integer field of the per-runqueue data
> structure we have, completely avoids the need for cpumask_weight().
>
> While there, remove as much other uses of it as we can, even if not in
> hot-paths.
>
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  xen/common/sched_credit2.c | 21 ++++++++++++++++-----
>  1 file changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 6958b265fc..7034325243 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -466,6 +466,7 @@ struct csched2_runqueue_data {
>      spinlock_t lock;       /* Lock for this runqueue                     */
>
>      struct list_head runq; /* Ordered list of runnable vms               */
> +    int nr_cpus;           /* How many CPUs are sharing this runqueue    */
>      int id;                /* ID of this runqueue (-1 if invalid)        */
>
>      int load;              /* Instantaneous load (num of non-idle vcpus) */
> @@ -2613,8 +2614,8 @@ retry:
>      if ( st.orqd->b_avgload > load_max )
>          load_max = st.orqd->b_avgload;
>
> -    cpus_max = cpumask_weight(&st.lrqd->active);
> -    i = cpumask_weight(&st.orqd->active);
> +    cpus_max = st.lrqd->nr_cpus;
> +    i = st.orqd->nr_cpus;
>      if ( i > cpus_max )
>          cpus_max = i;
>
> @@ -3697,7 +3698,7 @@ csched2_dump(const struct scheduler *ops)
>                 "\tinstload           = %d\n"
>                 "\taveload            = %"PRI_stime" (~%"PRI_stime"%%)\n",
>                 i,
> -               cpumask_weight(&prv->rqd[i].active),
> +               prv->rqd[i].nr_cpus,
>                 nr_cpu_ids, cpumask_bits(&prv->rqd[i].active),
>                 prv->rqd[i].max_weight,
>                 prv->rqd[i].pick_bias,
> @@ -3818,6 +3819,9 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
>      __cpumask_set_cpu(cpu, &prv->initialized);
>      __cpumask_set_cpu(cpu, &rqd->smt_idle);
>
> +    rqd->nr_cpus++;
> +    ASSERT(cpumask_weight(&rqd->active) == rqd->nr_cpus);
> +
>      /* On the boot cpu we are called before cpu_sibling_mask has been set up. */
>      if ( cpu == 0 && system_state < SYS_STATE_active )
>          __cpumask_set_cpu(cpu, &csched2_pcpu(cpu)->sibling_mask);
> @@ -3829,8 +3833,11 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
>              __cpumask_set_cpu(rcpu, &csched2_pcpu(cpu)->sibling_mask);
>      }
>
> -    if ( cpumask_weight(&rqd->active) == 1 )
> +    if ( rqd->nr_cpus == 1 )
> +    {
> +        ASSERT(cpumask_weight(&rqd->active) == 1);

Please do not use hard tabs.

>          rqd->pick_bias = cpu;
> +    }
>
>      return spc->runq_id;
>  }
> @@ -3944,8 +3951,12 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
>      __cpumask_clear_cpu(cpu, &rqd->smt_idle);
>      __cpumask_clear_cpu(cpu, &rqd->active);
>
> -    if ( cpumask_empty(&rqd->active) )
> +    rqd->nr_cpus--;
> +    ASSERT(cpumask_weight(&rqd->active) == rqd->nr_cpus);
> +
> +    if ( rqd->nr_cpus == 0 )
>      {
> +        ASSERT(cpumask_empty(&rqd->active));

Ditto.

>          printk(XENLOG_INFO " No cpus left on runqueue, disabling\n");
>          deactivate_runqueue(prv, spc->runq_id);
>      }
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

With nits fixed:

Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>

--
Sincerely,
Andrii Anisov.