xen-devel.lists.xenproject.org archive mirror
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v3 1/8] xen/cpupool: support moving domain between cpupools with different granularity
Date: Wed, 16 Dec 2020 18:52:59 +0100
Message-ID: <a22954117d8dd36fc0e1b9470efb72c5b80ad393.camel@suse.com>
In-Reply-To: <20201209160956.32456-2-jgross@suse.com>


On Wed, 2020-12-09 at 17:09 +0100, Juergen Gross wrote:
> When moving a domain between cpupools with different scheduling
> granularity the sched_units of the domain need to be adjusted.
> 
> Do that by allocating new sched_units and throwing away the old ones
> in sched_move_domain().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
This looks fine, and can have:

Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

I would only have one request. It's not a huge deal, and probably not
worth a resend just for that, but if either you or the committer are up
for addressing it in whatever way you find most suitable, that would be
great.

I.e., can we...
> ---
>  xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
>  1 file changed, 90 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index a429fc7640..2a61c879b3 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> 
> [...]
> -    old_ops = dom_scheduler(d);
>      old_domdata = d->sched_priv;
> 
Move *here* (i.e., above this new call to cpumask_first()) the comment
that is currently inside the loop?
>  
> +    new_p = cpumask_first(d->cpupool->cpu_valid);
>      for_each_sched_unit ( d, unit )
>      {
> +        spinlock_t *lock;
> +
> +        /*
> +         * Temporarily move all units to same processor to make locking
> +         * easier when moving the new units to the new processors.
> +         */
>
This one here, basically ^^^

> +        lock = unit_schedule_lock_irq(unit);
> +        sched_set_res(unit, get_sched_res(new_p));
> +        spin_unlock_irq(lock);
> +
>          sched_remove_unit(old_ops, unit);
>      }
>  
Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


Thread overview: 33+ messages
2020-12-09 16:09 [PATCH v3 0/8] xen: support per-cpupool scheduling granularity Juergen Gross
2020-12-09 16:09 ` [PATCH v3 1/8] xen/cpupool: support moving domain between cpupools with different granularity Juergen Gross
2020-12-16 17:52   ` Dario Faggioli [this message]
2020-12-17  7:49     ` Jan Beulich
2020-12-17  7:54       ` Jürgen Groß
2020-12-09 16:09 ` [PATCH v3 2/8] xen/hypfs: switch write function handles to const Juergen Gross
2020-12-16 16:08   ` Jan Beulich
2020-12-16 16:17     ` Jürgen Groß
2020-12-16 16:35       ` Jan Beulich
2020-12-09 16:09 ` [PATCH v3 3/8] xen/hypfs: add new enter() and exit() per node callbacks Juergen Gross
2020-12-16 16:16   ` Jan Beulich
2020-12-16 16:24     ` Jürgen Groß
2020-12-16 16:36       ` Jan Beulich
2020-12-16 17:12         ` Jürgen Groß
2020-12-09 16:09 ` [PATCH v3 4/8] xen/hypfs: support dynamic hypfs nodes Juergen Gross
2020-12-17 11:01   ` Jan Beulich
2020-12-17 11:24     ` Jürgen Groß
2020-12-09 16:09 ` [PATCH v3 5/8] xen/hypfs: add support for id-based dynamic directories Juergen Gross
2020-12-17 11:28   ` Jan Beulich
2020-12-17 11:32     ` Jürgen Groß
2020-12-17 12:14       ` Jan Beulich
2020-12-18  8:57         ` Jürgen Groß
2020-12-18  9:09           ` Jan Beulich
2020-12-18 12:41             ` Jürgen Groß
2020-12-21  8:26               ` Jan Beulich
2021-01-18  7:25             ` Jürgen Groß
2021-01-18  7:59               ` Jan Beulich
2020-12-09 16:09 ` [PATCH v3 6/8] xen/cpupool: add cpupool directories Juergen Gross
2020-12-17 15:54   ` Jan Beulich
2020-12-17 16:10     ` Dario Faggioli
2020-12-09 16:09 ` [PATCH v3 7/8] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
2020-12-17 15:57   ` Jan Beulich
2020-12-09 16:09 ` [PATCH v3 8/8] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
