From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Juergen Gross <jgross@suse.com>, <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity
Date: Fri, 4 Dec 2020 23:53:21 +0000
Message-ID: <a12de6ea-584c-49ca-3a09-f94b65933a62@citrix.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

On 01/12/2020 08:21, Juergen Gross wrote:
> Support scheduling granularity per cpupool. Setting the granularity is
> done via hypfs, which needed to gain dynamic entries for that purpose.
>
> Apart from the additional hypfs-related functionality, the main change
> for cpupools is support for moving a domain to a cpupool with a
> different granularity, as this requires modifying the scheduling
> unit/vcpu relationship.
>
> I have tried to do the hypfs modifications in a rather generic way in
> order to be able to use the same infrastructure in other cases, too
> (e.g. for per-domain entries).
>
> The complete series has been tested by creating cpupools with different
> granularities and moving busy and idle domains between those.
>
> Changes in V2:
> - Added several new patches, especially for some further cleanups in
>   cpupool.c.
> - Completely reworked the locking scheme for dynamic directories:
>   locking of resources (cpupools in this series) is now done via new
>   callbacks which are called when traversing the hypfs tree. This
>   removes the need to add locking to each hypfs-related cpupool
>   function and ensures data integrity across multiple callbacks.
> - Reordered the first few patches so that the already-acked pure
>   cleanup patches come first.
> - Addressed several comments.
>
> Juergen Gross (17):
>   xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
>   xen/cpupool: add missing bits for per-cpupool scheduling granularity
>   xen/cpupool: sort included headers in cpupool.c
>   xen/cpupool: switch cpupool id to unsigned
>   xen/cpupool: switch cpupool list to normal list interface
>   xen/cpupool: use ERR_PTR() for returning error cause from
>     cpupool_create()
>   xen/cpupool: support moving domain between cpupools with different
>     granularity
>   docs: fix hypfs path documentation
>   xen/hypfs: move per-node function pointers into a dedicated struct
>   xen/hypfs: pass real failure reason up from hypfs_get_entry()
>   xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
>   xen/hypfs: add new enter() and exit() per node callbacks
>   xen/hypfs: support dynamic hypfs nodes
>   xen/hypfs: add support for id-based dynamic directories
>   xen/cpupool: add cpupool directories
>   xen/cpupool: add scheduling granularity entry to cpupool entries
>   xen/cpupool: make per-cpupool sched-gran hypfs node writable
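
As an aside for readers following the hypfs rework described in the
changelog above: the idea is that each node carries per-node callbacks,
with enter()/exit() hooks taking and dropping the owning resource's lock
while the tree is traversed.  A rough sketch of that shape follows; the
struct and callback names are inferred from the patch titles and are not
the actual Xen definitions, whose signatures differ.

/*
 * Rough sketch only -- not the real xen/common/hypfs.c definitions.
 */
struct hypfs_entry;

struct hypfs_funcs {
    /*
     * enter()/exit() are invoked while walking the hypfs tree.  A
     * dynamic resource such as a cpupool can take its lock in enter()
     * and drop it in exit(), so the individual read/write/getsize
     * handlers need no locking of their own and the data stays
     * consistent across multiple callbacks for one request.
     */
    int (*enter)(const struct hypfs_entry *e);
    void (*exit)(const struct hypfs_entry *e);

    int (*read)(const struct hypfs_entry *e, void *buf, unsigned int len);
    int (*write)(struct hypfs_entry *e, const void *buf, unsigned int len);
    unsigned int (*getsize)(const struct hypfs_entry *e);
    struct hypfs_entry *(*findentry)(const struct hypfs_entry *dir,
                                     const char *name,
                                     unsigned int name_len);
};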

GitLab CI is fairly (but not completely) reliably hitting a failure in
ARM randconfig against this series only.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/225445864 is one
example.

Error is:

cpupool.c:102:12: error: 'sched_gran_get' defined but not used
[-Werror=unused-function]
  102 | static int sched_gran_get(const char *str, enum sched_gran *mode)
      |            ^~~~~~~~~~~~~~
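
A common way this class of failure arises is when the only caller of a
static helper sits behind a config option that the randconfig happened
to turn off, leaving the helper with no users and letting -Werror turn
the diagnostic into a build failure.  Illustrative sketch only; the
config symbol and caller below are made up, not taken from cpupool.c:

enum sched_gran { SCHED_GRAN_cpu, SCHED_GRAN_core, SCHED_GRAN_socket };

/* Helper with no config guard of its own. */
static int sched_gran_get(const char *str, enum sched_gran *mode)
{
    (void)str;              /* real parsing of "cpu"/"core"/"socket" elided */
    *mode = SCHED_GRAN_cpu;
    return 0;
}

#ifdef CONFIG_EXAMPLE_OPTION   /* hypothetical: the only caller is conditional */
void example_caller(void)
{
    enum sched_gran mode;

    sched_gran_get("core", &mode);
}
#endif

/*
 * With CONFIG_EXAMPLE_OPTION unset, sched_gran_get() has no remaining
 * caller and GCC emits -Wunused-function.  Typical fixes: move the
 * helper under the same #ifdef, or mark it __maybe_unused if it is
 * meant to stay config-independent.
 */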



Weirdly, there is a second diagnostic showing up which appears to be
unrelated and non-fatal, but is concerning nonetheless:

mem_access.c: In function 'p2m_mem_access_check':
mem_access.c:227:6: note: parameter passing for argument of type 'const
struct npfec' changed in GCC 9.1
  227 | bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
npfec npfec)
      |      ^~~~~~~~~~~~~~~~~~~~

It appears to be related to
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88469 and is letting us
know that the ABI changed.  However, Xen is an embedded project with no
external linkage, so we can probably compile with -Wno-psabi and be done
with it.
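
For anyone unfamiliar with -Wpsabi: GCC 9.1 changed how certain small
structs containing bit-fields are passed by value on 32-bit ARM, and the
note (emitted because -Wpsabi is enabled by default) merely flags call
sites whose calling convention differs between pre- and post-9.1
compilers.  A made-up example of the pattern; the struct below is only
loosely modelled on such a type and is not the real struct npfec:

#include <stdbool.h>

enum access_kind { access_kind_with_gla, access_kind_in_gpt };

struct access_flags {
    bool read_access:1;
    bool write_access:1;
    bool insn_fetch:1;
    bool present:1;
    enum access_kind kind;
};

/*
 * By-value passing of a struct of roughly this shape is what the
 * "parameter passing ... changed in GCC 9.1" note is about (exactly
 * which layouts are affected depends on bit-field alignment details).
 * The ABI difference only matters when linking objects built by
 * compilers on both sides of the change, which a self-contained
 * hypervisor build never does -- hence the -Wno-psabi suggestion.
 */
bool check_access(struct access_flags f)
{
    return f.read_access || f.write_access || f.insn_fetch;
}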

~Andrew, in lieu of a real CI robot.


Thread overview: 73+ messages
2020-12-01  8:21 [PATCH v2 00/17] xen: support per-cpupool scheduling granularity Juergen Gross
2020-12-01  8:21 ` [PATCH v2 01/17] xen/cpupool: add cpu to sched_res_mask when removing it from cpupool Juergen Gross
2020-12-01  8:21 ` [PATCH v2 02/17] xen/cpupool: add missing bits for per-cpupool scheduling granularity Juergen Gross
2020-12-01  8:21 ` [PATCH v2 03/17] xen/cpupool: sort included headers in cpupool.c Juergen Gross
2020-12-01  8:21 ` [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned Juergen Gross
2020-12-01  8:55   ` Jan Beulich
2020-12-01  9:01     ` Jürgen Groß
2020-12-01  9:07       ` Jan Beulich
2020-12-07  9:59       ` Jan Beulich
2020-12-07 14:48         ` Jürgen Groß
2020-12-07 15:00           ` Jan Beulich
2020-12-04 15:52   ` Dario Faggioli
2020-12-07  9:58     ` Jan Beulich
2020-12-07 15:21     ` Jan Beulich
2020-12-01  8:21 ` [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list interface Juergen Gross
2020-12-01  9:12   ` Jan Beulich
2020-12-01  9:18     ` Jürgen Groß
2020-12-04 16:13       ` Dario Faggioli
2020-12-04 16:16         ` Jürgen Groß
2020-12-04 16:25           ` Dario Faggioli
2020-12-04 16:56   ` Dario Faggioli
2020-12-01  8:21 ` [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error cause from cpupool_create() Juergen Gross
2020-12-02  8:58   ` Jan Beulich
2020-12-02  9:56     ` Jürgen Groß
2020-12-02 10:46       ` Jan Beulich
2020-12-02 10:58         ` Jürgen Groß
2020-12-04 16:29   ` Dario Faggioli
2020-12-01  8:21 ` [PATCH v2 07/17] xen/cpupool: support moving domain between cpupools with different granularity Juergen Gross
2020-12-01  8:21 ` [PATCH v2 08/17] docs: fix hypfs path documentation Juergen Gross
2020-12-01  8:21 ` [PATCH v2 09/17] xen/hypfs: move per-node function pointers into a dedicated struct Juergen Gross
2020-12-02 15:36   ` Jan Beulich
2020-12-02 15:41     ` Jürgen Groß
2020-12-03  8:47     ` Jürgen Groß
2020-12-03  9:12       ` Jan Beulich
2020-12-03  9:51         ` Jürgen Groß
2020-12-01  8:21 ` [PATCH v2 10/17] xen/hypfs: pass real failure reason up from hypfs_get_entry() Juergen Gross
2020-12-01  8:21 ` [PATCH v2 11/17] xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs Juergen Gross
2020-12-02 15:42   ` Jan Beulich
2020-12-02 15:51     ` Jürgen Groß
2020-12-03  8:12       ` Jan Beulich
2020-12-03  9:39         ` Jürgen Groß
2020-12-04  8:58   ` Jan Beulich
2020-12-04 11:14     ` Jürgen Groß
2020-12-01  8:21 ` [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node callbacks Juergen Gross
2020-12-03 14:59   ` Jan Beulich
2020-12-03 15:14     ` Jürgen Groß
2020-12-03 15:29       ` Jan Beulich
2020-12-04  8:33         ` Jürgen Groß
2020-12-04  8:30   ` Jan Beulich
2020-12-04  8:35     ` Jürgen Groß
2020-12-01  8:21 ` [PATCH v2 13/17] xen/hypfs: support dynamic hypfs nodes Juergen Gross
2020-12-03 15:08   ` Jan Beulich
2020-12-03 15:18     ` Jürgen Groß
2020-12-01  8:21 ` [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic directories Juergen Gross
2020-12-03 15:44   ` Jan Beulich
2020-12-04  8:52     ` Jürgen Groß
2020-12-04  9:16       ` Jan Beulich
2020-12-04 13:08         ` Jürgen Groß
2020-12-07  7:54           ` Jan Beulich
2020-12-01  8:21 ` [PATCH v2 15/17] xen/cpupool: add cpupool directories Juergen Gross
2020-12-01  9:00   ` Jan Beulich
2020-12-01  9:03     ` Jürgen Groß
2020-12-02 15:46   ` Jürgen Groß
2020-12-03 14:46     ` Jan Beulich
2020-12-03 15:11       ` Jürgen Groß
2020-12-04  9:10   ` Jan Beulich
2020-12-04 11:08     ` Jürgen Groß
2020-12-04 11:54       ` Jan Beulich
2020-12-01  8:21 ` [PATCH v2 16/17] xen/cpupool: add scheduling granularity entry to cpupool entries Juergen Gross
2020-12-01  8:21 ` [PATCH v2 17/17] xen/cpupool: make per-cpupool sched-gran hypfs node writable Juergen Gross
2020-12-04 23:53 ` Andrew Cooper [this message]
2020-12-05  7:41   ` [PATCH v2 00/17] xen: support per-cpupool scheduling granularity Jürgen Groß
2020-12-07  9:00   ` Jan Beulich
