From: "Jürgen Groß" <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
George Dunlap <george.dunlap@citrix.com>,
Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
Dario Faggioli <dfaggioli@suse.com>,
xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 15/17] xen/cpupool: add cpupool directories
Date: Fri, 4 Dec 2020 12:08:56 +0100
Message-ID: <72e2300c-6367-5469-d7fd-767dd411dcb8@suse.com>
In-Reply-To: <e14fa4a4-3a3e-ceac-af38-8561baf58aa8@suse.com>
On 04.12.20 10:10, Jan Beulich wrote:
> On 01.12.2020 09:21, Juergen Gross wrote:
>> @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = {
>> .notifier_call = cpu_callback
>> };
>>
>> +#ifdef CONFIG_HYPFS
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> + const struct hypfs_entry *entry);
>> +
>> +static struct hypfs_funcs cpupool_pooldir_funcs = {
>
> Yet one more const missing?
Already fixed locally.
>
>> + .enter = cpupool_pooldir_enter,
>> + .exit = hypfs_node_exit,
>> + .read = hypfs_read_dir,
>> + .write = hypfs_write_deny,
>> + .getsize = hypfs_getsize,
>> + .findentry = hypfs_dir_findentry,
>> +};
>> +
>> +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs);
>> +
>> +static const struct hypfs_entry *cpupool_pooldir_enter(
>> + const struct hypfs_entry *entry)
>> +{
>> + return &cpupool_pooldir.e;
>> +}
>> +
>> +static int cpupool_dir_read(const struct hypfs_entry *entry,
>> + XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> + int ret = 0;
>> + const struct cpupool *c;
>> + unsigned int size = 0;
>> +
>> + list_for_each_entry(c, &cpupool_list, list)
>> + {
>> + size += hypfs_dynid_entry_size(entry, c->cpupool_id);
>
> Why do you maintain size here? I can't spot any use.
Oh, indeed.
This is a remnant of an earlier variant.
>
> With this dropped the function then no longer depends on its
> "entry" parameter, which makes me wonder ...
>
>> + ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
>> + list_is_last(&c->list, &cpupool_list),
>> + &uaddr);
>> + if ( ret )
>> + break;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
>> +{
>> + const struct cpupool *c;
>> + unsigned int size = 0;
>> +
>> + list_for_each_entry(c, &cpupool_list, list)
>> + size += hypfs_dynid_entry_size(entry, c->cpupool_id);
>
> ... why this one does. To be certain their results are consistent
> with one another, I think both should produce their results from
> the same data.
In the end they do. Creating a complete direntry just for obtaining its
size would be overkill, especially as hypfs_read_dyndir_id_entry()
doesn't calculate the size directly, but copies the fixed and the
variable parts in two portions.
>
>> + return size;
>> +}
>> +
>> +static const struct hypfs_entry *cpupool_dir_enter(
>> + const struct hypfs_entry *entry)
>> +{
>> + struct hypfs_dyndir_id *data;
>> +
>> + data = hypfs_alloc_dyndata(sizeof(*data));
>
> I generally like the added type safety of the macro wrappers
> around _xmalloc(). I wonder if it wouldn't be a good idea to have
> such here as well, to avoid random mistakes like
>
> data = hypfs_alloc_dyndata(sizeof(data));
Fine with me.
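A minimal sketch of what such a macro wrapper could look like, in the
style of Xen's xmalloc()/xzalloc() helpers. The names here are
illustrative only (hypfs_alloc_dyndata_raw() stands in for the existing
size-based allocator), not the actual interface:

```c
#include <stdlib.h>

/* Stand-in for the underlying size-based allocator; in Xen this would
 * be the existing hypfs_alloc_dyndata(size) call. */
static void *hypfs_alloc_dyndata_raw(size_t size)
{
    return calloc(1, size);
}

/*
 * Type-safe wrapper: the caller names the type, so sizeof() is always
 * applied to the right object and the "sizeof(data)" vs "sizeof(*data)"
 * mistake mentioned above can't happen.
 */
#define hypfs_alloc_dyndata(type) \
    ((type *)hypfs_alloc_dyndata_raw(sizeof(type)))

/* Minimal stand-in for the dyndata struct introduced in patch 14. */
struct hypfs_dyndir_id {
    unsigned int id;
};
```

The compiler then rejects a call site that names a mismatched type,
instead of silently allocating the wrong number of bytes.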
>
> However I further notice that the struct allocated isn't cpupool
> specific at all. It would seem to me that such an allocation
> therefore doesn't belong here. Therefore I wonder whether ...
>
>> + if ( !data )
>> + return ERR_PTR(-ENOMEM);
>> + data->id = CPUPOOLID_NONE;
>> +
>> + spin_lock(&cpupool_lock);
>
> ... these two properties (initial ID and lock) shouldn't e.g. be
> communicated via the template, allowing the enter/exit hooks to
> become generic for all ID templates.
The problem with the lock is that it is rather user specific. For
domains it will be split: rcu_read_lock(&domlist_read_lock) for the
/domain directory, and get_domain() for the per-domain level. And
memory allocation might need other data as well, so this won't be the
same structure in all cases. A two-level dynamic directory (e.g.
domain/vcpu) might want to allocate the dyndata needed for both levels
already when entering /domain.
>
> Yet in turn I notice that the "id" field only ever gets set, both
> in patch 14 and here. But yes, I've now spotted the consumers in
> patch 16.
>
>> + return entry;
>> +}
>> +
>> +static void cpupool_dir_exit(const struct hypfs_entry *entry)
>> +{
>> + spin_unlock(&cpupool_lock);
>> +
>> + hypfs_free_dyndata();
>> +}
>> +
>> +static struct hypfs_entry *cpupool_dir_findentry(
>> + const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
>> +{
>> + unsigned long id;
>> + const char *end;
>> + const struct cpupool *cpupool;
>> +
>> + id = simple_strtoul(name, &end, 10);
>> + if ( end != name + name_len )
>> + return ERR_PTR(-ENOENT);
>> +
>> + cpupool = __cpupool_find_by_id(id, true);
>
> Silent truncation from unsigned long to unsigned int?
Oh, indeed. Need to check against UINT_MAX.
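The fix being discussed could look roughly like the following sketch.
It uses the standard strtoul() in place of Xen's simple_strtoul(), and
parse_pool_id() is an illustrative helper name, not the actual patch:

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/*
 * strtoul() returns unsigned long, while cpupool ids are unsigned int,
 * so reject values above UINT_MAX before truncating instead of letting
 * the assignment silently wrap.
 */
static int parse_pool_id(const char *name, unsigned int *id)
{
    char *end;
    unsigned long val = strtoul(name, &end, 10);

    if ( end == name || *end != '\0' )
        return -ENOENT;          /* not a (pure) decimal number */
    if ( val > UINT_MAX )
        return -ENOENT;          /* would silently truncate otherwise */

    *id = val;
    return 0;
}
```

With the check in place, a name like "4294967296" fails the lookup
instead of aliasing pool 0.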
>
>> + if ( !cpupool )
>> + return ERR_PTR(-ENOENT);
>> +
>> + return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>> +}
>> +
>> +static struct hypfs_funcs cpupool_dir_funcs = {
>
> Yet another missing const?
Already fixed.
>
>> + .enter = cpupool_dir_enter,
>> + .exit = cpupool_dir_exit,
>> + .read = cpupool_dir_read,
>> + .write = hypfs_write_deny,
>> + .getsize = cpupool_dir_getsize,
>> + .findentry = cpupool_dir_findentry,
>> +};
>> +
>> +static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);
>
> Why VARDIR? This isn't a template, is it? Or does VARDIR really
> serve multiple purposes?
Basically it just takes an additional parameter for the function vector.
Maybe I should rename it to HYPFS_DIR_INIT_FUNC()?
>
>> +static void cpupool_hypfs_init(void)
>> +{
>> + hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
>> + hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
>> +}
>> +#else
>> +
>> +static void cpupool_hypfs_init(void)
>> +{
>> +}
>> +#endif
>
> I think you want to be consistent with the use of blank lines next
> to #if / #else / #endif. In cases when they enclose multiple entities,
> I think it's generally better to have intervening blank lines
> everywhere. I also think in such cases commenting #else and #endif is
> helpful. But you're the maintainer of this code ...
I think I'll change it.
Juergen