* [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
@ 2019-08-01 7:52 Tao Xu
2019-08-02 6:55 ` David Gibson
0 siblings, 1 reply; 6+ messages in thread
From: Tao Xu @ 2019-08-01 7:52 UTC (permalink / raw)
To: ehabkost, imammedo, marcel.apfelbaum, david; +Cc: Tao Xu, qemu-ppc, qemu-devel
Introduce MachineClass::auto_enable_numa for machines that implicitly need
one NUMA node, and enable it for spapr to fix a broken check in
spapr_validate_node_memory(): spapr_populate_memory() creates an implicit
node using only local node info, while the validation still uses the
global nb_numa_nodes, which is 0.
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
---
This patch has a dependency on
https://patchwork.kernel.org/cover/11063235/
---
hw/core/numa.c | 9 +++++++--
hw/ppc/spapr.c | 9 +--------
include/hw/boards.h | 1 +
3 files changed, 9 insertions(+), 10 deletions(-)
diff --git a/hw/core/numa.c b/hw/core/numa.c
index 75db35ac19..756d243d3f 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
* guest tries to use it with that drivers.
*
* Enable NUMA implicitly by adding a new NUMA node automatically.
+ *
+ * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
+ * assume there is just one node with whole RAM.
*/
- if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
- mc->auto_enable_numa_with_memhp) {
+ if (ms->numa_state->num_nodes == 0 &&
+ ((ms->ram_slots > 0 &&
+ mc->auto_enable_numa_with_memhp) ||
+ mc->auto_enable_numa)) {
NumaNodeOptions node = { };
parse_numa_node(ms, &node, &error_abort);
}
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index f607ca567b..e50343f326 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
hwaddr mem_start, node_size;
int i, nb_nodes = machine->numa_state->num_nodes;
NodeInfo *nodes = machine->numa_state->nodes;
- NodeInfo ramnode;
-
- /* No NUMA nodes, assume there is just one node with whole RAM */
- if (!nb_nodes) {
- nb_nodes = 1;
- ramnode.node_mem = machine->ram_size;
- nodes = &ramnode;
- }
for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
if (!nodes[i].node_mem) {
@@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
*/
mc->numa_mem_align_shift = 28;
mc->numa_mem_supported = true;
+ mc->auto_enable_numa = true;
smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 2eb9a0b4e0..4a350b87d2 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -220,6 +220,7 @@ struct MachineClass {
bool smbus_no_migration_support;
bool nvdimm_supported;
bool numa_mem_supported;
+ bool auto_enable_numa;
HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
DeviceState *dev);
--
2.20.1
* Re: [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
2019-08-01 7:52 [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr Tao Xu
@ 2019-08-02 6:55 ` David Gibson
2019-08-05 0:56 ` Tao Xu
0 siblings, 1 reply; 6+ messages in thread
From: David Gibson @ 2019-08-02 6:55 UTC (permalink / raw)
To: Tao Xu; +Cc: imammedo, qemu-ppc, ehabkost, qemu-devel
On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
> Introduce MachineClass::auto_enable_numa for one implicit NUMA node,
> and enable it to fix broken check in spapr_validate_node_memory(), when
> spapr_populate_memory() creates a implicit node and info then use
> nb_numa_nodes which is 0.
>
> Suggested-by: Igor Mammedov <imammedo@redhat.com>
> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> Signed-off-by: Tao Xu <tao3.xu@intel.com>
The change here looks fine so,
Acked-by: David Gibson <david@gibson.dropbear.id.au>
However, I'm not following what check in spapr is broken and why.
> ---
>
> This patch has a dependency on
> https://patchwork.kernel.org/cover/11063235/
> ---
> hw/core/numa.c | 9 +++++++--
> hw/ppc/spapr.c | 9 +--------
> include/hw/boards.h | 1 +
> 3 files changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index 75db35ac19..756d243d3f 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
> * guest tries to use it with that drivers.
> *
> * Enable NUMA implicitly by adding a new NUMA node automatically.
> + *
> + * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
> + * assume there is just one node with whole RAM.
> */
> - if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
> - mc->auto_enable_numa_with_memhp) {
> + if (ms->numa_state->num_nodes == 0 &&
> + ((ms->ram_slots > 0 &&
> + mc->auto_enable_numa_with_memhp) ||
> + mc->auto_enable_numa)) {
> NumaNodeOptions node = { };
> parse_numa_node(ms, &node, &error_abort);
> }
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index f607ca567b..e50343f326 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
> hwaddr mem_start, node_size;
> int i, nb_nodes = machine->numa_state->num_nodes;
> NodeInfo *nodes = machine->numa_state->nodes;
> - NodeInfo ramnode;
> -
> - /* No NUMA nodes, assume there is just one node with whole RAM */
> - if (!nb_nodes) {
> - nb_nodes = 1;
> - ramnode.node_mem = machine->ram_size;
> - nodes = &ramnode;
> - }
>
> for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
> if (!nodes[i].node_mem) {
> @@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
> */
> mc->numa_mem_align_shift = 28;
> mc->numa_mem_supported = true;
> + mc->auto_enable_numa = true;
>
> smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
> smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 2eb9a0b4e0..4a350b87d2 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -220,6 +220,7 @@ struct MachineClass {
> bool smbus_no_migration_support;
> bool nvdimm_supported;
> bool numa_mem_supported;
> + bool auto_enable_numa;
>
> HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
> DeviceState *dev);
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
* Re: [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
2019-08-02 6:55 ` David Gibson
@ 2019-08-05 0:56 ` Tao Xu
2019-08-05 2:58 ` David Gibson
0 siblings, 1 reply; 6+ messages in thread
From: Tao Xu @ 2019-08-05 0:56 UTC (permalink / raw)
To: David Gibson; +Cc: imammedo, qemu-ppc, ehabkost, qemu-devel
On 8/2/2019 2:55 PM, David Gibson wrote:
> On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
>> Introduce MachineClass::auto_enable_numa for one implicit NUMA node,
>> and enable it to fix broken check in spapr_validate_node_memory(), when
>> spapr_populate_memory() creates a implicit node and info then use
>> nb_numa_nodes which is 0.
>>
>> Suggested-by: Igor Mammedov <imammedo@redhat.com>
>> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
>
> The change here looks fine so,
>
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
>
> However, I'm not following what check in spapr is broken and why.
>
Sorry, maybe I should update the commit message.
Because in spapr_populate_memory(), if the NUMA node count is 0:
if (!nb_nodes) {
nb_nodes = 1;
ramnode.node_mem = machine->ram_size;
nodes = &ramnode;
}
it uses a local 'nb_nodes' of 1 and a local 'ramnode', but
spapr_validate_node_memory() uses the global nb_numa_nodes:
for (i = 0; i < nb_numa_nodes; i++) {
if (numa_info[i].node_mem % SPAPR_MEMORY_BLOCK_SIZE) {
so the global count is 0 and the node_mem check is skipped.
>> ---
>>
>> This patch has a dependency on
>> https://patchwork.kernel.org/cover/11063235/
>> ---
>> hw/core/numa.c | 9 +++++++--
>> hw/ppc/spapr.c | 9 +--------
>> include/hw/boards.h | 1 +
>> 3 files changed, 9 insertions(+), 10 deletions(-)
>>
>> diff --git a/hw/core/numa.c b/hw/core/numa.c
>> index 75db35ac19..756d243d3f 100644
>> --- a/hw/core/numa.c
>> +++ b/hw/core/numa.c
>> @@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
>> * guest tries to use it with that drivers.
>> *
>> * Enable NUMA implicitly by adding a new NUMA node automatically.
>> + *
>> + * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
>> + * assume there is just one node with whole RAM.
>> */
>> - if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
>> - mc->auto_enable_numa_with_memhp) {
>> + if (ms->numa_state->num_nodes == 0 &&
>> + ((ms->ram_slots > 0 &&
>> + mc->auto_enable_numa_with_memhp) ||
>> + mc->auto_enable_numa)) {
>> NumaNodeOptions node = { };
>> parse_numa_node(ms, &node, &error_abort);
>> }
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index f607ca567b..e50343f326 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
>> hwaddr mem_start, node_size;
>> int i, nb_nodes = machine->numa_state->num_nodes;
>> NodeInfo *nodes = machine->numa_state->nodes;
>> - NodeInfo ramnode;
>> -
>> - /* No NUMA nodes, assume there is just one node with whole RAM */
>> - if (!nb_nodes) {
>> - nb_nodes = 1;
>> - ramnode.node_mem = machine->ram_size;
>> - nodes = &ramnode;
>> - }
>>
>> for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
>> if (!nodes[i].node_mem) {
>> @@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
>> */
>> mc->numa_mem_align_shift = 28;
>> mc->numa_mem_supported = true;
>> + mc->auto_enable_numa = true;
>>
>> smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
>> smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>> index 2eb9a0b4e0..4a350b87d2 100644
>> --- a/include/hw/boards.h
>> +++ b/include/hw/boards.h
>> @@ -220,6 +220,7 @@ struct MachineClass {
>> bool smbus_no_migration_support;
>> bool nvdimm_supported;
>> bool numa_mem_supported;
>> + bool auto_enable_numa;
>>
>> HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
>> DeviceState *dev);
>
* Re: [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
2019-08-05 0:56 ` Tao Xu
@ 2019-08-05 2:58 ` David Gibson
2019-08-05 3:37 ` Tao Xu
0 siblings, 1 reply; 6+ messages in thread
From: David Gibson @ 2019-08-05 2:58 UTC (permalink / raw)
To: Tao Xu; +Cc: imammedo, qemu-ppc, ehabkost, qemu-devel
On Mon, Aug 05, 2019 at 08:56:40AM +0800, Tao Xu wrote:
> On 8/2/2019 2:55 PM, David Gibson wrote:
> > On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
> > > Introduce MachineClass::auto_enable_numa for one implicit NUMA node,
> > > and enable it to fix broken check in spapr_validate_node_memory(), when
> > > spapr_populate_memory() creates a implicit node and info then use
> > > nb_numa_nodes which is 0.
> > >
> > > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > > Signed-off-by: Tao Xu <tao3.xu@intel.com>
> >
> > The change here looks fine so,
> >
> > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> >
> > However, I'm not following what check in spapr is broken and why.
> >
> Sorry, may be I should update the commit message.
>
> Because in spapr_populate_memory(), if numa node is 0
>
> if (!nb_nodes) {
> nb_nodes = 1;
> ramnode.node_mem = machine->ram_size;
> nodes = &ramnode;
> }
>
> it use a local 'nb_nodes' as 1 and update global nodes info, but
> inpapr_validate_node_memory(), use the global nb_numa_nodes
>
> for (i = 0; i < nb_numa_nodes; i++) {
> if (numa_info[i].node_mem % SPAPR_MEMORY_BLOCK_SIZE) {
>
> so the global is 0 and skip the node_mem check.
Well, not really. That loop checks that each node's memory size is a
multiple of 256MiB. But we've already checked that the whole memory
size is a multiple of 256MiB, so in the case of one NUMA node the
per-node check doesn't actually do anything extra.
And in the "non-NUMA" case, nb_numa_nodes == 0, I don't believe
numa_info[] is populated anyway, so we couldn't do the check like
this.
> > > ---
> > >
> > > This patch has a dependency on
> > > https://patchwork.kernel.org/cover/11063235/
> > > ---
> > > hw/core/numa.c | 9 +++++++--
> > > hw/ppc/spapr.c | 9 +--------
> > > include/hw/boards.h | 1 +
> > > 3 files changed, 9 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/hw/core/numa.c b/hw/core/numa.c
> > > index 75db35ac19..756d243d3f 100644
> > > --- a/hw/core/numa.c
> > > +++ b/hw/core/numa.c
> > > @@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
> > > * guest tries to use it with that drivers.
> > > *
> > > * Enable NUMA implicitly by adding a new NUMA node automatically.
> > > + *
> > > + * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
> > > + * assume there is just one node with whole RAM.
> > > */
> > > - if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
> > > - mc->auto_enable_numa_with_memhp) {
> > > + if (ms->numa_state->num_nodes == 0 &&
> > > + ((ms->ram_slots > 0 &&
> > > + mc->auto_enable_numa_with_memhp) ||
> > > + mc->auto_enable_numa)) {
> > > NumaNodeOptions node = { };
> > > parse_numa_node(ms, &node, &error_abort);
> > > }
> > > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > > index f607ca567b..e50343f326 100644
> > > --- a/hw/ppc/spapr.c
> > > +++ b/hw/ppc/spapr.c
> > > @@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
> > > hwaddr mem_start, node_size;
> > > int i, nb_nodes = machine->numa_state->num_nodes;
> > > NodeInfo *nodes = machine->numa_state->nodes;
> > > - NodeInfo ramnode;
> > > -
> > > - /* No NUMA nodes, assume there is just one node with whole RAM */
> > > - if (!nb_nodes) {
> > > - nb_nodes = 1;
> > > - ramnode.node_mem = machine->ram_size;
> > > - nodes = &ramnode;
> > > - }
> > > for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
> > > if (!nodes[i].node_mem) {
> > > @@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
> > > */
> > > mc->numa_mem_align_shift = 28;
> > > mc->numa_mem_supported = true;
> > > + mc->auto_enable_numa = true;
> > > smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
> > > smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> > > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > > index 2eb9a0b4e0..4a350b87d2 100644
> > > --- a/include/hw/boards.h
> > > +++ b/include/hw/boards.h
> > > @@ -220,6 +220,7 @@ struct MachineClass {
> > > bool smbus_no_migration_support;
> > > bool nvdimm_supported;
> > > bool numa_mem_supported;
> > > + bool auto_enable_numa;
> > > HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
> > > DeviceState *dev);
> >
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
* Re: [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
2019-08-05 2:58 ` David Gibson
@ 2019-08-05 3:37 ` Tao Xu
2019-08-05 6:40 ` David Gibson
0 siblings, 1 reply; 6+ messages in thread
From: Tao Xu @ 2019-08-05 3:37 UTC (permalink / raw)
To: David Gibson; +Cc: imammedo, qemu-ppc, ehabkost, qemu-devel
On 8/5/2019 10:58 AM, David Gibson wrote:
> On Mon, Aug 05, 2019 at 08:56:40AM +0800, Tao Xu wrote:
>> On 8/2/2019 2:55 PM, David Gibson wrote:
>>> On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
>>>> Introduce MachineClass::auto_enable_numa for one implicit NUMA node,
>>>> and enable it to fix broken check in spapr_validate_node_memory(), when
>>>> spapr_populate_memory() creates a implicit node and info then use
>>>> nb_numa_nodes which is 0.
>>>>
>>>> Suggested-by: Igor Mammedov <imammedo@redhat.com>
>>>> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
>>>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
>>>
>>> The change here looks fine so,
>>>
>>> Acked-by: David Gibson <david@gibson.dropbear.id.au>
>>>
>>> However, I'm not following what check in spapr is broken and why.
>>>
>> Sorry, may be I should update the commit message.
>>
>> Because in spapr_populate_memory(), if numa node is 0
>>
>> if (!nb_nodes) {
>> nb_nodes = 1;
>> ramnode.node_mem = machine->ram_size;
>> nodes = &ramnode;
>> }
>>
>> it use a local 'nb_nodes' as 1 and update global nodes info, but
>> inpapr_validate_node_memory(), use the global nb_numa_nodes
>>
>> for (i = 0; i < nb_numa_nodes; i++) {
>> if (numa_info[i].node_mem % SPAPR_MEMORY_BLOCK_SIZE) {
>>
>> so the global is 0 and skip the node_mem check.
>
> Well, not really. That loop is that each node has memory size a
> multiple of 256MiB. But we've already checked that the whole memory
> size is a multiple of 256MiB, so in the case of one NUMA node, the
> per-node check doesn't actually do anything extra.
>
> And in the "non-NUMA" case, nb_numa_nodes == 0, then I don't believe
> numa_info[] is populated anyway, so we couldn't do the check like
> this.
>
Thank you, David, I understand; I will modify the commit message. Can I
then keep this patch as a feature, since it lets spapr reuse the generic
NUMA code?
* Re: [Qemu-devel] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
2019-08-05 3:37 ` Tao Xu
@ 2019-08-05 6:40 ` David Gibson
0 siblings, 0 replies; 6+ messages in thread
From: David Gibson @ 2019-08-05 6:40 UTC (permalink / raw)
To: Tao Xu; +Cc: imammedo, qemu-ppc, ehabkost, qemu-devel
On Mon, Aug 05, 2019 at 11:37:14AM +0800, Tao Xu wrote:
> On 8/5/2019 10:58 AM, David Gibson wrote:
> > On Mon, Aug 05, 2019 at 08:56:40AM +0800, Tao Xu wrote:
> > > On 8/2/2019 2:55 PM, David Gibson wrote:
> > > > On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
> > > > > Introduce MachineClass::auto_enable_numa for one implicit NUMA node,
> > > > > and enable it to fix broken check in spapr_validate_node_memory(), when
> > > > > spapr_populate_memory() creates a implicit node and info then use
> > > > > nb_numa_nodes which is 0.
> > > > >
> > > > > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > > > > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > > > > Signed-off-by: Tao Xu <tao3.xu@intel.com>
> > > >
> > > > The change here looks fine so,
> > > >
> > > > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> > > >
> > > > However, I'm not following what check in spapr is broken and why.
> > > >
> > > Sorry, may be I should update the commit message.
> > >
> > > Because in spapr_populate_memory(), if numa node is 0
> > >
> > > if (!nb_nodes) {
> > > nb_nodes = 1;
> > > ramnode.node_mem = machine->ram_size;
> > > nodes = &ramnode;
> > > }
> > >
> > > it use a local 'nb_nodes' as 1 and update global nodes info, but
> > > inpapr_validate_node_memory(), use the global nb_numa_nodes
> > >
> > > for (i = 0; i < nb_numa_nodes; i++) {
> > > if (numa_info[i].node_mem % SPAPR_MEMORY_BLOCK_SIZE) {
> > >
> > > so the global is 0 and skip the node_mem check.
> >
> > Well, not really. That loop is that each node has memory size a
> > multiple of 256MiB. But we've already checked that the whole memory
> > size is a multiple of 256MiB, so in the case of one NUMA node, the
> > per-node check doesn't actually do anything extra.
> >
> > And in the "non-NUMA" case, nb_numa_nodes == 0, then I don't believe
> > numa_info[] is populated anyway, so we couldn't do the check like
> > this.
> >
> Thank you David. I understand. I will modify the commit message. So can I
> modify and keep this patch as a feature? Because it can reuse the generic
> numa code.
Yes, the patch itself looks fine, just the comment is misleading.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson