From: Jan Beulich <jbeulich@suse.com>
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable
Date: Thu, 21 Jan 2021 16:55:38 +0100
Message-ID: <d2d99ac9-07e4-68fc-bfaa-51edaff7055b@suse.com>
In-Reply-To: <20210118115516.11001-6-jgross@suse.com>

On 18.01.2021 12:55, Juergen Gross wrote:
> Make /cpupool/<id>/sched-gran in hypfs writable. This will allow the
> scheduling granularity to be selected per cpupool.
> 
> Writing this node is allowed only when no cpu is assigned to the
> cpupool. Allowed values are "cpu", "core" and "socket".
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
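
(For context, a usage sketch, assuming dom0's xenhypfs tool and an existing,
empty cpupool 1; the exact invocation is illustrative only:

    # xenhypfs write /cpupool/1/sched-gran core
    # xenhypfs cat /cpupool/1/sched-gran
    core

Per the description and the code below, the write would fail with -EBUSY
while any cpu is still assigned to the pool.)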

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two small adjustment requests:

> @@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
>      {
>          if ( strcmp(sg_name[i].name, str) == 0 )
>          {
> -            opt_sched_granularity = sg_name[i].mode;
> +            *mode = sg_name[i].mode;
>              return 0;
>          }
>      }
>  
>      return -EINVAL;
>  }
> +
> +static int __init sched_select_granularity(const char *str)
> +{
> +    return sched_gran_get(str, &opt_sched_granularity);
> +}
>  custom_param("sched-gran", sched_select_granularity);
> +#elif CONFIG_HYPFS

Missing defined().
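I.e. this presumably wants to be

    #elif defined(CONFIG_HYPFS)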

> @@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
>      return strlen(gran) + 1;
>  }
>  
> +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
> +                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
> +                              unsigned int ulen)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    struct cpupool *cpupool;
> +    enum sched_gran gran;
> +    unsigned int sched_gran = 0;
> +    char name[SCHED_GRAN_NAME_LEN];
> +    int ret = 0;
> +
> +    if ( ulen > SCHED_GRAN_NAME_LEN )
> +        return -ENOSPC;
> +
> +    if ( copy_from_guest(name, uaddr, ulen) )
> +        return -EFAULT;
> +
> +    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
> +        sched_gran = sched_gran_get(name, &gran) ?
> +                     0 : cpupool_check_granularity(gran);
> +    if ( sched_gran == 0 )
> +        return -EINVAL;
> +
> +    data = hypfs_get_dyndata();
> +    cpupool = data->data;
> +    ASSERT(cpupool);
> +
> +    if ( !cpumask_empty(cpupool->cpu_valid) )
> +        ret = -EBUSY;
> +    else
> +    {
> +        cpupool->gran = gran;
> +        cpupool->sched_gran = sched_gran;
> +    }

I think this could do with a comment clarifying what lock this
is protected by, as the function itself has no sign of any
locking, not even an assertion that a certain lock is being held.
If you were to suggest some text, this as well as the minor issue
above could likely be taken care of while committing.
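
If it helps, something along these lines would seem plausible to me (a sketch
only; I'm assuming the serialization relied upon is the hypfs lock, which the
hypfs core holds around every write request):

    /*
     * Writes are serialized by the hypfs lock; <state what prevents cpus
     * from being assigned to the pool in parallel>.
     */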

Jan


Thread overview: 13+ messages
2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
2021-01-18 11:55 ` [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes Juergen Gross
2021-01-21 15:47   ` Jan Beulich
2021-01-18 11:55 ` [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories Juergen Gross
2021-01-21 15:49   ` Jan Beulich
2021-01-18 11:55 ` [PATCH v4 3/5] xen/cpupool: add cpupool directories Juergen Gross
2021-01-18 11:55 ` [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
2021-01-21 15:50   ` Jan Beulich
2021-01-21 17:27   ` Dario Faggioli
2021-01-18 11:55 ` [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
2021-01-21 15:55   ` Jan Beulich [this message]
2021-01-21 16:10     ` Jürgen Groß
2021-01-21 17:27   ` Dario Faggioli
