* [PATCH v4 0/5] xen: support per-cpupool scheduling granularity
@ 2021-01-18 11:55 Juergen Gross
  2021-01-18 11:55 ` [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes Juergen Gross
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Juergen Gross @ 2021-01-18 11:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Dario Faggioli

Support a per-cpupool scheduling granularity. Setting the granularity is
done via hypfs, which needed to gain support for dynamic entries for
that purpose.

Apart from the additional hypfs functionality, the main change for
cpupools was the support for moving a domain to a new granularity, as
this requires modifying the scheduling unit/vcpu relationship.

I have tried to keep the hypfs modifications rather generic, in order
to be able to use the same infrastructure in other cases, too (e.g. for
per-domain entries).

The complete series has been tested by creating cpupools with different
granularities and moving busy and idle domains between them.

Changes in V4:
- Patches 1-3 of V3 have been committed
- Addressed all comments

Changes in V3:
- Patches 1-6 and 8-11 of V2 have been committed already
- New patch 2
- Addressed all comments
- Added a data pointer to struct hypfs_dyndir

Changes in V2:
- Added several new patches, especially for some further cleanups in
  cpupool.c.
- Completely reworked the locking scheme with dynamic directories:
  locking of resources (cpupools in this series) is now done via new
  callbacks which are called when traversing the hypfs tree. This
  removes the need to add locking to each hypfs-related cpupool
  function, and it ensures data integrity across multiple callbacks.
- Reordered the first few patches in order to have the already acked
  pure cleanup patches first.
- Addressed several comments.

Juergen Gross (5):
  xen/hypfs: support dynamic hypfs nodes
  xen/hypfs: add support for id-based dynamic directories
  xen/cpupool: add cpupool directories
  xen/cpupool: add scheduling granularity entry to cpupool entries
  xen/cpupool: make per-cpupool sched-gran hypfs node writable

 docs/misc/hypfs-paths.pandoc |  16 +++
 xen/common/hypfs.c           | 141 ++++++++++++++++++++
 xen/common/sched/cpupool.c   | 240 +++++++++++++++++++++++++++++++++--
 xen/include/xen/hypfs.h      |  47 +++++--
 4 files changed, 422 insertions(+), 22 deletions(-)

-- 
2.26.2




* [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes
  2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
@ 2021-01-18 11:55 ` Juergen Gross
  2021-01-21 15:47   ` Jan Beulich
  2021-01-18 11:55 ` [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories Juergen Gross
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2021-01-18 11:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Add a HYPFS_DIR_INIT_FUNC() macro for statically initializing a
directory, taking a struct hypfs_funcs pointer as a parameter in
addition to those of HYPFS_DIR_INIT().

Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
additional parameter, as this will be needed for dynamic entries.

To let the generic hypfs code continue to work on normal struct
hypfs_entry entities even for dynamic nodes, add some infrastructure
for allocating a working area for the current hypfs request, used to
store the information needed for traversing the tree. This area is
anchored in a percpu pointer and can be retrieved at any level of the
dynamic entries. The normal way to handle allocation and freeing is to
allocate the data in the enter() callback of a node and to free it in
the related exit() callback.
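
As an illustration (a sketch only, with the "my_" names being
placeholders; it is modeled on the cpupool directory added later in
this series), such an enter()/exit() pair will typically look like:

  struct my_dyndata {
      unsigned int id;    /* Whatever per-request state the node needs. */
  };

  static DEFINE_SPINLOCK(my_lock);    /* Protects the backing resource. */

  static const struct hypfs_entry *my_dir_enter(
      const struct hypfs_entry *entry)
  {
      struct my_dyndata *data;

      /* Type-safe allocation; the area is anchored in a percpu pointer. */
      data = hypfs_alloc_dyndata(struct my_dyndata);
      if ( !data )
          return ERR_PTR(-ENOMEM);
      data->id = 0;    /* Initialize the request state as needed. */

      /* Lock the backing resource for the duration of the request. */
      spin_lock(&my_lock);

      return entry;
  }

  static void my_dir_exit(const struct hypfs_entry *entry)
  {
      spin_unlock(&my_lock);

      hypfs_free_dyndata();    /* Frees the area and clears the pointer. */
  }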

Add a hypfs_add_dyndir() function for adding a dynamic directory
template to the tree, which is needed to give the template the correct
reference to its position in hypfs.
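
Registration then amounts to (again a sketch with placeholder names,
mirroring what the cpupool patches of this series do; my_dir_funcs is
assumed to be a suitable function vector):

  static HYPFS_DIR_INIT(my_template, "%u");    /* Name is a format string. */
  static HYPFS_DIR_INIT_FUNC(my_dir, "mydir", &my_dir_funcs);

  static void my_hypfs_init(void)
  {
      hypfs_add_dir(&hypfs_root, &my_dir, true);
      /* Only records the parent; real children are found via findentry(). */
      hypfs_add_dyndir(&my_dir, &my_template);
  }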

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
- carved out from previous patch
- use enter() and exit() callbacks for allocating and freeing
  dyndata memory
- add hypfs_add_dyndir()

V3:
- switch hypfs_alloc_dyndata() to be type safe (Jan Beulich)
- rename HYPFS_VARDIR_INIT() to HYPFS_DIR_INIT_FUNC() (Jan Beulich)

V4:
- use temporary variables for avoiding multiple per_cpu() uses
  (Jan Beulich)
- add comment (Jan Beulich)
- hide hypfs_alloc_dyndata() type unsafe backing function (Jan Beulich)
---
 xen/common/hypfs.c      | 43 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 29 +++++++++++++++++----------
 2 files changed, 62 insertions(+), 10 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 73497ea1d7..6c0e59dedd 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -72,6 +72,7 @@ enum hypfs_lock_state {
     hypfs_write_locked
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+static DEFINE_PER_CPU(struct hypfs_dyndata *, hypfs_dyndata);
 
 static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
 
@@ -155,6 +156,36 @@ static void node_exit_all(void)
         node_exit(*last);
 }
 
+#undef hypfs_alloc_dyndata
+void *hypfs_alloc_dyndata(unsigned long size)
+{
+    unsigned int cpu = smp_processor_id();
+    struct hypfs_dyndata **dyndata = &per_cpu(hypfs_dyndata, cpu);
+
+    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
+    ASSERT(*dyndata == NULL);
+
+    *dyndata = xzalloc_bytes(size);
+
+    return *dyndata;
+}
+
+void *hypfs_get_dyndata(void)
+{
+    struct hypfs_dyndata *dyndata = this_cpu(hypfs_dyndata);
+
+    ASSERT(dyndata);
+
+    return dyndata;
+}
+
+void hypfs_free_dyndata(void)
+{
+    struct hypfs_dyndata **dyndata = &this_cpu(hypfs_dyndata);
+
+    XFREE(*dyndata);
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
@@ -216,6 +247,18 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
     return ret;
 }
 
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template)
+{
+    /*
+     * As the template is only a placeholder for possibly multiple dynamically
+     * generated directories, the link up to its parent can be static, while
+     * the "real" children of the parent are to be found via the parent's
+     * findentry function only.
+     */
+    template->e.parent = &parent->e;
+}
+
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index a6dfdb7d8e..d028c01283 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,7 +76,7 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
-#define HYPFS_DIR_INIT(var, nam)                  \
+#define HYPFS_DIR_INIT_FUNC(var, nam, fn)         \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
         .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
@@ -84,22 +84,25 @@ struct hypfs_entry_dir {
         .e.size = 0,                              \
         .e.max_size = 0,                          \
         .e.list = LIST_HEAD_INIT(var.e.list),     \
-        .e.funcs = &hypfs_dir_funcs,              \
+        .e.funcs = (fn),                          \
         .dirlist = LIST_HEAD_INIT(var.dirlist),   \
     }
 
-#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
-    struct hypfs_entry_leaf __read_mostly var = { \
-        .e.type = (typ),                          \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
-        .e.name = (nam),                          \
-        .e.max_size = (msz),                      \
-        .e.funcs = &hypfs_leaf_ro_funcs,          \
+#define HYPFS_DIR_INIT(var, nam)                  \
+    HYPFS_DIR_INIT_FUNC(var, nam, &hypfs_dir_funcs)
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn) \
+    struct hypfs_entry_leaf __read_mostly var = {  \
+        .e.type = (typ),                           \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,         \
+        .e.name = (nam),                           \
+        .e.max_size = (msz),                       \
+        .e.funcs = (fn),                           \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
 #define HYPFS_STRING_INIT(var, nam)               \
-    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs)
 
 /*
  * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
@@ -150,6 +153,8 @@ extern struct hypfs_entry_dir hypfs_root;
 
 int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
 const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
@@ -177,6 +182,10 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
 struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
                                         const char *name,
                                         unsigned int name_len);
+void *hypfs_alloc_dyndata(unsigned long size);
+#define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))
+void *hypfs_get_dyndata(void);
+void hypfs_free_dyndata(void);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2




* [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories
  2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
  2021-01-18 11:55 ` [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes Juergen Gross
@ 2021-01-18 11:55 ` Juergen Gross
  2021-01-21 15:49   ` Jan Beulich
  2021-01-18 11:55 ` [PATCH v4 3/5] xen/cpupool: add cpupool directories Juergen Gross
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2021-01-18 11:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Add some helpers to hypfs.c to support dynamic directories with a
numerical id as name.

The dynamic directory is based on a template specified by the user,
allowing specific access functions to be used and providing a
predefined set of entries in the directory.
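
As a sketch of the intended use (with "my_" names being placeholders;
the cpupool patches later in this series follow exactly this pattern),
the parent directory's findentry() callback parses the id and generates
the child directory from the template, while hypfs_read_dyndir_id_entry()
and hypfs_dynid_entry_size() serve the directory read and size requests:

  static struct hypfs_entry *my_dir_findentry(
      const struct hypfs_entry_dir *dir, const char *name,
      unsigned int name_len)
  {
      unsigned long id;
      const char *end;
      void *obj;

      id = simple_strtoul(name, &end, 10);
      if ( end != name + name_len || id > UINT_MAX )
          return ERR_PTR(-ENOENT);

      obj = my_find_by_id(id);    /* Placeholder lookup of the resource. */
      if ( !obj )
          return ERR_PTR(-ENOENT);

      /* Fill the percpu dyndata with a copy of the template named "<id>". */
      return hypfs_gen_dyndir_id_entry(&my_template, id, obj);
  }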

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use macro for length of entry name (Jan Beulich)
- const attributes (Jan Beulich)
- use template name as format string (Jan Beulich)
- add hypfs_dynid_entry_size() helper (Jan Beulich)
- expect dyndir data having been allocated by enter() callback

V3:
- add a specific enter() callback returning the template pointer
- add data field to struct hypfs_dyndir_id
- rename hypfs_gen_dyndir_entry_id() (Jan Beulich)
- add comments regarding generated names to be kept in sync (Jan Beulich)

V4:
- correct comments (Jan Beulich)
---
 xen/common/hypfs.c      | 98 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 18 ++++++++
 2 files changed, 116 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 6c0e59dedd..5468497404 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -365,6 +365,104 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
     return entry->size;
 }
 
+/*
+ * Fill the direntry for a dynamically generated entry. In particular, the
+ * generated name needs to be kept in sync with hypfs_gen_dyndir_id_entry().
+ */
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
+{
+    struct xen_hypfs_dirlistentry direntry;
+    char name[HYPFS_DYNDIR_ID_NAMELEN];
+    unsigned int e_namelen, e_len;
+
+    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
+    e_len = DIRENTRY_SIZE(e_namelen);
+    direntry.e.pad = 0;
+    direntry.e.type = template->e.type;
+    direntry.e.encoding = template->e.encoding;
+    direntry.e.content_len = template->e.funcs->getsize(&template->e);
+    direntry.e.max_write_len = template->e.max_size;
+    direntry.off_next = is_last ? 0 : e_len;
+    if ( copy_to_guest(*uaddr, &direntry, 1) )
+        return -EFAULT;
+    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
+                              e_namelen + 1) )
+        return -EFAULT;
+
+    guest_handle_add_offset(*uaddr, e_len);
+
+    return 0;
+}
+
+static const struct hypfs_entry *hypfs_dyndir_enter(
+    const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original enter function. */
+    return data->template->e.funcs->enter(&data->template->e);
+}
+
+static struct hypfs_entry *hypfs_dyndir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original findentry function. */
+    return data->template->e.funcs->findentry(data->template, name, name_len);
+}
+
+static int hypfs_read_dyndir(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original read function. */
+    return data->template->e.funcs->read(&data->template->e, uaddr);
+}
+
+/*
+ * Fill dyndata with a dynamically generated entry based on a template
+ * and a numerical id.
+ * Needs to be kept in sync with hypfs_read_dyndir_id_entry() regarding the
+ * name generated.
+ */
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data)
+{
+    struct hypfs_dyndir_id *dyndata;
+
+    dyndata = hypfs_get_dyndata();
+
+    dyndata->template = template;
+    dyndata->id = id;
+    dyndata->data = data;
+    snprintf(dyndata->name, sizeof(dyndata->name), template->e.name, id);
+    dyndata->dir = *template;
+    dyndata->dir.e.name = dyndata->name;
+    dyndata->dir.e.funcs = &dyndata->funcs;
+    dyndata->funcs = *template->e.funcs;
+    dyndata->funcs.enter = hypfs_dyndir_enter;
+    dyndata->funcs.findentry = hypfs_dyndir_findentry;
+    dyndata->funcs.read = hypfs_read_dyndir;
+
+    return &dyndata->dir.e;
+}
+
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id)
+{
+    return DIRENTRY_SIZE(snprintf(NULL, 0, template->name, id));
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index d028c01283..e9d4c2555b 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,6 +76,17 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
+struct hypfs_dyndir_id {
+    struct hypfs_entry_dir dir;             /* Modified copy of template. */
+    struct hypfs_funcs funcs;               /* Dynamic functions. */
+    const struct hypfs_entry_dir *template; /* Template used. */
+#define HYPFS_DYNDIR_ID_NAMELEN 12
+    char name[HYPFS_DYNDIR_ID_NAMELEN];     /* Name of hypfs entry. */
+
+    unsigned int id;                        /* Numerical id. */
+    void *data;                             /* Data associated with id. */
+};
+
 #define HYPFS_DIR_INIT_FUNC(var, nam, fn)         \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
@@ -186,6 +197,13 @@ void *hypfs_alloc_dyndata(unsigned long size);
 #define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))
 void *hypfs_get_dyndata(void);
 void hypfs_free_dyndata(void);
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data);
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2




* [PATCH v4 3/5] xen/cpupool: add cpupool directories
  2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
  2021-01-18 11:55 ` [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes Juergen Gross
  2021-01-18 11:55 ` [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories Juergen Gross
@ 2021-01-18 11:55 ` Juergen Gross
  2021-01-18 11:55 ` [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
  2021-01-18 11:55 ` [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
  4 siblings, 0 replies; 13+ messages in thread
From: Juergen Gross @ 2021-01-18 11:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Dario Faggioli

Add /cpupool/<cpupool-id> directories to hypfs. Those are completely
dynamic, so the related hypfs access functions need to be implemented.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
---
V2:
- added const (Jan Beulich)
- call hypfs_add_dir() in helper (Dario Faggioli)
- switch locking to enter/exit callbacks

V3:
- use generic dyndirid enter function
- const for hypfs function vector (Jan Beulich)
- drop size calculation from cpupool_dir_read() (Jan Beulich)
- check cpupool id to not exceed UINT_MAX (Jan Beulich)
- coding style (#if/#else/#endif) (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/common/sched/cpupool.c   | 104 +++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0db7d77219..f293ba0cc4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/guest_access.h>
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
 static cpumask_t cpupool_locked_cpus;
 
+/* This lock nests inside sysctl or hypfs lock. */
 static DEFINE_SPINLOCK(cpupool_lock);
 
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
@@ -1003,12 +1006,113 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+
+static HYPFS_DIR_INIT(cpupool_pooldir, "%u");
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct cpupool *c;
+
+    list_for_each_entry(c, &cpupool_list, list)
+    {
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
+                                         list_is_last(&c->list, &cpupool_list),
+                                         &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
+
+    return size;
+}
+
+static const struct hypfs_entry *cpupool_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(struct hypfs_dyndir_id);
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = CPUPOOLID_NONE;
+
+    spin_lock(&cpupool_lock);
+
+    return entry;
+}
+
+static void cpupool_dir_exit(const struct hypfs_entry *entry)
+{
+    spin_unlock(&cpupool_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len || id > UINT_MAX )
+        return ERR_PTR(-ENOENT);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
+}
+
+static const struct hypfs_funcs cpupool_dir_funcs = {
+    .enter = cpupool_dir_enter,
+    .exit = cpupool_dir_exit,
+    .read = cpupool_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_DIR_INIT_FUNC(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+
+static void cpupool_hypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+}
+
+#else /* CONFIG_HYPFS */
+
+static void cpupool_hypfs_init(void)
+{
+}
+
+#endif /* CONFIG_HYPFS */
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
 
     cpupool_gran_init();
 
+    cpupool_hypfs_init();
+
     cpupool0 = cpupool_create(0, 0);
     BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
-- 
2.26.2




* [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries
  2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
                   ` (2 preceding siblings ...)
  2021-01-18 11:55 ` [PATCH v4 3/5] xen/cpupool: add cpupool directories Juergen Gross
@ 2021-01-18 11:55 ` Juergen Gross
  2021-01-21 15:50   ` Jan Beulich
  2021-01-21 17:27   ` Dario Faggioli
  2021-01-18 11:55 ` [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
  4 siblings, 2 replies; 13+ messages in thread
From: Juergen Gross @ 2021-01-18 11:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Dario Faggioli

Add a "sched-gran" entry to the per-cpupool hypfs directories.

For now make this entry read-only and let it contain one of the
strings "cpu", "core" or "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added const (Jan Beulich)
- modify test in cpupool_gran_read() (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |  4 ++
 xen/common/sched/cpupool.c   | 72 ++++++++++++++++++++++++++++++++++--
 2 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index aaca1cdf92..f1ce24d7fe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,6 +184,10 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+
+The scheduling granularity of a cpupool.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index f293ba0cc4..e2011367bd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -41,9 +41,10 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+#define SCHED_GRAN_NAME_LEN  8
 struct sched_gran_name {
     enum sched_gran mode;
-    char name[8];
+    char name[SCHED_GRAN_NAME_LEN];
 };
 
 static const struct sched_gran_name sg_name[] = {
@@ -52,7 +53,7 @@ static const struct sched_gran_name sg_name[] = {
     {SCHED_GRAN_socket, "socket"},
 };
 
-static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+static const char *sched_gran_get_name(enum sched_gran mode)
 {
     const char *name = "";
     unsigned int i;
@@ -66,8 +67,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
         }
     }
 
+    return name;
+}
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
     printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
-           name, gran, gran == 1 ? "" : "s");
+           sched_gran_get_name(mode), gran, gran == 1 ? "" : "s");
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
@@ -1014,10 +1020,16 @@ static int cpupool_dir_read(const struct hypfs_entry *entry,
                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     int ret = 0;
-    const struct cpupool *c;
+    struct cpupool *c;
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
 
     list_for_each_entry(c, &cpupool_list, list)
     {
+        data->id = c->cpupool_id;
+        data->data = c;
+
         ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
                                          list_is_last(&c->list, &cpupool_list),
                                          &uaddr);
@@ -1080,6 +1092,56 @@ static struct hypfs_entry *cpupool_dir_findentry(
     return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
 }
 
+static int cpupool_gran_read(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    if ( !*gran )
+        return -ENOENT;
+
+    return copy_to_guest(uaddr, gran, strlen(gran) + 1) ? -EFAULT : 0;
+}
+
+static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    return strlen(gran) + 1;
+}
+
+static const struct hypfs_funcs cpupool_gran_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
+    .read = cpupool_gran_read,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_gran_getsize,
+    .findentry = hypfs_leaf_findentry,
+};
+
+static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
+                          0, &cpupool_gran_funcs);
+static char granstr[SCHED_GRAN_NAME_LEN] = {
+    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
+    [SCHED_GRAN_NAME_LEN - 1] = 0
+};
+
 static const struct hypfs_funcs cpupool_dir_funcs = {
     .enter = cpupool_dir_enter,
     .exit = cpupool_dir_exit,
@@ -1095,6 +1157,8 @@ static void cpupool_hypfs_init(void)
 {
     hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
     hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+    hypfs_string_set_reference(&cpupool_gran, granstr);
+    hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true);
 }
 
 #else /* CONFIG_HYPFS */
-- 
2.26.2




* [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable
  2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
                   ` (3 preceding siblings ...)
  2021-01-18 11:55 ` [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
@ 2021-01-18 11:55 ` Juergen Gross
  2021-01-21 15:55   ` Jan Beulich
  2021-01-21 17:27   ` Dario Faggioli
  4 siblings, 2 replies; 13+ messages in thread
From: Juergen Gross @ 2021-01-18 11:55 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
	Dario Faggioli

Make /cpupool/<id>/sched-gran in hypfs writable. This enables selecting
the scheduling granularity per cpupool.

Writing this node is allowed only when no cpu is assigned to the cpupool.
Allowed values are "cpu", "core" and "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- test user parameters earlier (Jan Beulich)

V3:
- fix build without CONFIG_HYPFS on Arm (Andrew Cooper)
---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 70 ++++++++++++++++++++++++++++++------
 2 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and if the
+architecture supports different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index e2011367bd..acd26f9449 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#elif CONFIG_HYPFS
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
     {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
     return strlen(gran) + 1;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran = 0;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
+        sched_gran = sched_gran_get(name, &gran) ?
+                     0 : cpupool_check_granularity(gran);
+    if ( sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    return ret;
+}
+
 static const struct hypfs_funcs cpupool_gran_funcs = {
     .enter = hypfs_node_enter,
     .exit = hypfs_node_exit,
     .read = cpupool_gran_read,
-    .write = hypfs_write_deny,
+    .write = cpupool_gran_write,
     .getsize = hypfs_gran_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
     [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
     [SCHED_GRAN_NAME_LEN - 1] = 0
-- 
2.26.2




* Re: [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes
  2021-01-18 11:55 ` [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes Juergen Gross
@ 2021-01-21 15:47   ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2021-01-21 15:47 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 18.01.2021 12:55, Juergen Gross wrote:
> Add a HYPFS_DIR_INIT_FUNC() macro for statically initializing a
> directory, taking a struct hypfs_funcs pointer as a parameter in
> addition to those of HYPFS_DIR_INIT().
> 
> Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
> additional parameter, as this will be needed for dynamic entries.
> 
> To let the generic hypfs code continue to work on normal struct
> hypfs_entry entities even for dynamic nodes, add some infrastructure
> for allocating a working area for the current hypfs request, used to
> store the information needed for traversing the tree. This area is
> anchored in a percpu pointer and can be retrieved at any level of the
> dynamic entries. The normal way to handle allocation and freeing is to
> allocate the data in the enter() callback of a node and to free it in
> the related exit() callback.
> 
> Add a hypfs_add_dyndir() function for adding a dynamic directory
> template to the tree, which is needed to give the template the correct
> reference to its position in hypfs.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories
  2021-01-18 11:55 ` [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories Juergen Gross
@ 2021-01-21 15:49   ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2021-01-21 15:49 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 18.01.2021 12:55, Juergen Gross wrote:
> Add some helpers to hypfs.c to support dynamic directories with a
> numerical id as name.
> 
> The dynamic directory is based on a template specified by the user
> allowing to use specific access functions and having a predefined
> set of entries in the directory.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries
  2021-01-18 11:55 ` [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
@ 2021-01-21 15:50   ` Jan Beulich
  2021-01-21 17:27   ` Dario Faggioli
  1 sibling, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2021-01-21 15:50 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Dario Faggioli, xen-devel

On 18.01.2021 12:55, Juergen Gross wrote:
> Add a "sched-gran" entry to the per-cpupool hypfs directories.
> 
> For now make this entry read-only and let it contain one of the
> strings "cpu", "core" or "socket".
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable
  2021-01-18 11:55 ` [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
@ 2021-01-21 15:55   ` Jan Beulich
  2021-01-21 16:10     ` Jürgen Groß
  2021-01-21 17:27   ` Dario Faggioli
  1 sibling, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2021-01-21 15:55 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Dario Faggioli, xen-devel

On 18.01.2021 12:55, Juergen Gross wrote:
> Make /cpupool/<id>/sched-gran in hypfs writable. This enables selecting
> the scheduling granularity per cpupool.
> 
> Writing this node is allowed only when no cpu is assigned to the cpupool.
> Allowed values are "cpu", "core" and "socket".
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two small adjustment requests:

> @@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
>      {
>          if ( strcmp(sg_name[i].name, str) == 0 )
>          {
> -            opt_sched_granularity = sg_name[i].mode;
> +            *mode = sg_name[i].mode;
>              return 0;
>          }
>      }
>  
>      return -EINVAL;
>  }
> +
> +static int __init sched_select_granularity(const char *str)
> +{
> +    return sched_gran_get(str, &opt_sched_granularity);
> +}
>  custom_param("sched-gran", sched_select_granularity);
> +#elif CONFIG_HYPFS

Missing defined().

> @@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
>      return strlen(gran) + 1;
>  }
>  
> +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
> +                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
> +                              unsigned int ulen)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    struct cpupool *cpupool;
> +    enum sched_gran gran;
> +    unsigned int sched_gran = 0;
> +    char name[SCHED_GRAN_NAME_LEN];
> +    int ret = 0;
> +
> +    if ( ulen > SCHED_GRAN_NAME_LEN )
> +        return -ENOSPC;
> +
> +    if ( copy_from_guest(name, uaddr, ulen) )
> +        return -EFAULT;
> +
> +    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
> +        sched_gran = sched_gran_get(name, &gran) ?
> +                     0 : cpupool_check_granularity(gran);
> +    if ( sched_gran == 0 )
> +        return -EINVAL;
> +
> +    data = hypfs_get_dyndata();
> +    cpupool = data->data;
> +    ASSERT(cpupool);
> +
> +    if ( !cpumask_empty(cpupool->cpu_valid) )
> +        ret = -EBUSY;
> +    else
> +    {
> +        cpupool->gran = gran;
> +        cpupool->sched_gran = sched_gran;
> +    }

I think this could do with a comment clarifying what lock this
is protected by, as the function itself has no sign of any
locking, not even an assertion that a certain lock is being held.
If you were to suggest some text, this as well as the minor issue
above could likely be taken care of while committing.

Jan



* Re: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable
  2021-01-21 15:55   ` Jan Beulich
@ 2021-01-21 16:10     ` Jürgen Groß
  0 siblings, 0 replies; 13+ messages in thread
From: Jürgen Groß @ 2021-01-21 16:10 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Dario Faggioli, xen-devel



On 21.01.21 16:55, Jan Beulich wrote:
> On 18.01.2021 12:55, Juergen Gross wrote:
>> Make /cpupool/<id>/sched-gran in hypfs writable. This enables selecting
>> the scheduling granularity per cpupool.
>>
>> Writing this node is allowed only when no cpu is assigned to the cpupool.
>> Allowed values are "cpu", "core" and "socket".
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with two small adjustment requests:
> 
>> @@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
>>       {
>>           if ( strcmp(sg_name[i].name, str) == 0 )
>>           {
>> -            opt_sched_granularity = sg_name[i].mode;
>> +            *mode = sg_name[i].mode;
>>               return 0;
>>           }
>>       }
>>   
>>       return -EINVAL;
>>   }
>> +
>> +static int __init sched_select_granularity(const char *str)
>> +{
>> +    return sched_gran_get(str, &opt_sched_granularity);
>> +}
>>   custom_param("sched-gran", sched_select_granularity);
>> +#elif CONFIG_HYPFS
> 
> Missing defined().
> 
>> @@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
>>       return strlen(gran) + 1;
>>   }
>>   
>> +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
>> +                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
>> +                              unsigned int ulen)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +    struct cpupool *cpupool;
>> +    enum sched_gran gran;
>> +    unsigned int sched_gran = 0;
>> +    char name[SCHED_GRAN_NAME_LEN];
>> +    int ret = 0;
>> +
>> +    if ( ulen > SCHED_GRAN_NAME_LEN )
>> +        return -ENOSPC;
>> +
>> +    if ( copy_from_guest(name, uaddr, ulen) )
>> +        return -EFAULT;
>> +
>> +    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
>> +        sched_gran = sched_gran_get(name, &gran) ?
>> +                     0 : cpupool_check_granularity(gran);
>> +    if ( sched_gran == 0 )
>> +        return -EINVAL;
>> +
>> +    data = hypfs_get_dyndata();
>> +    cpupool = data->data;
>> +    ASSERT(cpupool);
>> +
>> +    if ( !cpumask_empty(cpupool->cpu_valid) )
>> +        ret = -EBUSY;
>> +    else
>> +    {
>> +        cpupool->gran = gran;
>> +        cpupool->sched_gran = sched_gran;
>> +    }
> 
> I think this could do with a comment clarifying what lock this
> is protected by, as the function itself has no sign of any
> locking, not even an assertion that a certain lock is being held.
> If you were to suggest some text, this as well as the minor issue
> above could likely be taken care of while committing.

cpupool_gran_[read|write]() are both guarded by the cpupool_lock
taken via cpupool_dir_enter().
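
So a comment along the following lines above the cpumask_empty() check
should do (just a suggestion for the text to add while committing):

  /*
   * Guarded by the cpupool_lock, which cpupool_dir_enter() has taken
   * when the request entered the /cpupool directory.
   */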


Juergen



* Re: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable
  2021-01-18 11:55 ` [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
  2021-01-21 15:55   ` Jan Beulich
@ 2021-01-21 17:27   ` Dario Faggioli
  1 sibling, 0 replies; 13+ messages in thread
From: Dario Faggioli @ 2021-01-21 17:27 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu


On Mon, 2021-01-18 at 12:55 +0100, Juergen Gross wrote:
> Make /cpupool/<id>/sched-gran in hypfs writable. This enables selecting
> the scheduling granularity per cpupool.
> 
> Writing this node is allowed only when no cpu is assigned to the
> cpupool. Allowed values are "cpu", "core" and "socket".
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

This holds with Jan's proposed adjustments.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



* Re: [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries
  2021-01-18 11:55 ` [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
  2021-01-21 15:50   ` Jan Beulich
@ 2021-01-21 17:27   ` Dario Faggioli
  1 sibling, 0 replies; 13+ messages in thread
From: Dario Faggioli @ 2021-01-21 17:27 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
	Julien Grall, Stefano Stabellini, Wei Liu


On Mon, 2021-01-18 at 12:55 +0100, Juergen Gross wrote:
> Add a "sched-gran" entry to the per-cpupool hypfs directories.
> 
> For now make this entry read-only and let it contain one of the
> strings "cpu", "core" or "socket".
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



Thread overview: 13+ messages
2021-01-18 11:55 [PATCH v4 0/5] xen: support per-cpupool scheduling granularity Juergen Gross
2021-01-18 11:55 ` [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes Juergen Gross
2021-01-21 15:47   ` Jan Beulich
2021-01-18 11:55 ` [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories Juergen Gross
2021-01-21 15:49   ` Jan Beulich
2021-01-18 11:55 ` [PATCH v4 3/5] xen/cpupool: add cpupool directories Juergen Gross
2021-01-18 11:55 ` [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
2021-01-21 15:50   ` Jan Beulich
2021-01-21 17:27   ` Dario Faggioli
2021-01-18 11:55 ` [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
2021-01-21 15:55   ` Jan Beulich
2021-01-21 16:10     ` Jürgen Groß
2021-01-21 17:27   ` Dario Faggioli
