From: Jan Beulich <jbeulich@suse.com>
To: Wei Chen <wei.chen@arm.com>
Cc: nd@arm.com, "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Roger Pau Monné" <roger.pau@citrix.com>, "Wei Liu" <wl@xen.org>,
	"George Dunlap" <george.dunlap@citrix.com>,
	"Julien Grall" <julien@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v7 2/6] xen/x86: move generically usable NUMA code from x86 to common
Date: Thu, 3 Nov 2022 15:25:43 +0100
Message-ID: <78036eeb-2585-97e5-9f80-bb84f297cc08@suse.com>
In-Reply-To: <20221020061445.288839-3-wei.chen@arm.com>

On 20.10.2022 08:14, Wei Chen wrote:
> Some code in x86/numa.c can be shared by other architectures to
> implement NUMA support, such as the variables and functions used to
> check and store the NUMA memory map, and those used for NUMA
> initialization.
> 
> In this patch, we move them to common/numa.c and xen/numa.h and use
> CONFIG_NUMA to gate them for architectures without NUMA support. As
> the target header file follows Xen coding style, we also trim
> trailing spaces and replace tabs in the code moved to xen/numa.h.
> 
> As acpi_scan_nodes is now used in a common function, it doesn't make
> sense to keep an acpi_xxx name in common code, so we rename it to
> numa_process_nodes in this patch too. It then no longer makes sense
> to gate numa_process_nodes in numa_initmem_init with
> CONFIG_ACPI_NUMA; since CONFIG_NUMA is selected by CONFIG_ACPI_NUMA
> on x86, we replace CONFIG_ACPI_NUMA with CONFIG_NUMA to gate
> numa_process_nodes.
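
For context, the gating change described here boils down to roughly
the following shape in numa_initmem_init() (a sketch based on the
description above; the exact arguments and surrounding code are
assumed rather than copied from the patch):

    #ifdef CONFIG_NUMA                    /* previously CONFIG_ACPI_NUMA */
        if ( !numa_off && !numa_process_nodes(start, end) )
            return;                       /* was acpi_scan_nodes() */
    #endif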
> 
> As arch_numa_disabled has been implemented for ACPI NUMA, we can
> rename srat_disabled to numa_disabled and move it to common code as
> well.
> 
> The macro node_to_first_cpu(node) hasn't been used anywhere,
> so we drop it in this patch too.
> 
> Some architectures allow all 64 physical address bits to be used,
> while others do not (Arm64, for example, allows 52 or 48 bits). To
> cover both cases, this patch also uses
> min(PADDR_BITS, BITS_PER_LONG - 1) to calculate the shift when only
> one node is present in the system.
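
As a side note, memnode_shift is the divisor for the pdx-to-node
lookup (conceptually memnodemap[pdx >> memnode_shift]), so with only
one node present the whole map degenerates to a single entry. A
minimal sketch of the single-node assignment described above
(illustrative only, not the patch's exact code):

    /* Single-node case: use the widest shift valid for this architecture. */
    memnode_shift = min(PADDR_BITS, BITS_PER_LONG - 1);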
> 
> Signed-off-by: Wei Chen <wei.chen@arm.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one small further request (which could also be taken care of
while committing, if no other need for a v8 arises):

> --- /dev/null
> +++ b/xen/common/numa.c
> @@ -0,0 +1,464 @@
> +/*
> + * Generic VM initialization for NUMA setups.
> + * Copyright 2002,2003 Andi Kleen, SuSE Labs.
> + * Adapted for Xen: Ryan Harper <ryanh@us.ibm.com>
> + */
> +
> +#include <xen/init.h>
> +#include <xen/keyhandler.h>
> +#include <xen/mm.h>
> +#include <xen/nodemask.h>
> +#include <xen/numa.h>
> +#include <xen/param.h>
> +#include <xen/sched.h>
> +#include <xen/softirq.h>
> +
> +struct node_data __ro_after_init node_data[MAX_NUMNODES];
> +
> +/* Mapping from pdx to node id */
> +unsigned int __ro_after_init memnode_shift;
> +unsigned long __ro_after_init memnodemapsize;
> +nodeid_t *__ro_after_init memnodemap;
> +static typeof(*memnodemap) __ro_after_init _memnodemap[64];
> +
> +nodeid_t __read_mostly cpu_to_node[NR_CPUS] = {
> +    [0 ... NR_CPUS-1] = NUMA_NO_NODE
> +};
> +
> +cpumask_t __read_mostly node_to_cpumask[MAX_NUMNODES];
> +
> +nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
> +
> +bool __ro_after_init numa_off;
> +
> +bool numa_disabled(void)
> +{
> +    return numa_off || arch_numa_disabled();
> +}
> +
> +/*
> + * Given a shift value, try to populate memnodemap[]
> + * Returns :
> + * 1 if OK
> + * 0 if memnodmap[] too small (of shift too small)

May I ask that you correct this comment line: "of" (alone) makes no sense
here. Either "or" was meant or it would want to be "because of". Unless
this is a language tweak I'm entirely unaware of ...

Jan


Thread overview: 14+ messages
2022-10-20  6:14 [PATCH v7 0/6] Device tree based NUMA support for Arm - Part#2 Wei Chen
2022-10-20  6:14 ` [PATCH v7 1/6] xen/x86: Provide helpers for common code to access acpi_numa Wei Chen
2022-10-20  6:14 ` [PATCH v7 2/6] xen/x86: move generically usable NUMA code from x86 to common Wei Chen
2022-11-03 14:25   ` Jan Beulich [this message]
2022-11-07 10:09     ` Wei Chen
2022-11-07 10:14       ` Jan Beulich
2022-10-20  6:14 ` [PATCH v7 3/6] xen/x86: Use ASSERT instead of VIRTUAL_BUG_ON for phys_to_nid Wei Chen
2022-10-20  6:14 ` [PATCH v7 4/6] xen/x86: use arch_get_ram_range to get information from E820 map Wei Chen
2022-10-20  6:14 ` [PATCH v7 5/6] xen/x86: move NUMA process nodes nodes code from x86 to common Wei Chen
2022-11-08 16:55   ` Jan Beulich
2022-11-09  8:51     ` Wei Chen
2022-11-09  9:30       ` Jan Beulich
2022-11-09 10:10         ` Wei Chen
2022-10-20  6:14 ` [PATCH v7 6/6] xen: introduce a Kconfig option to configure NUMA nodes number Wei Chen
