* [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
@ 2019-08-05  7:13 Tao Xu
  2019-08-06 12:50 ` Igor Mammedov
  2019-09-03 17:52 ` Eduardo Habkost
  0 siblings, 2 replies; 11+ messages in thread
From: Tao Xu @ 2019-08-05  7:13 UTC (permalink / raw)
  To: david, ehabkost, imammedo, marcel.apfelbaum; +Cc: Tao Xu, qemu-ppc, qemu-devel

Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
is expected to be created implicitly.

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
---

This patch has a dependency on
https://patchwork.kernel.org/cover/11063235/
---
 hw/core/numa.c      | 9 +++++++--
 hw/ppc/spapr.c      | 9 +--------
 include/hw/boards.h | 1 +
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/hw/core/numa.c b/hw/core/numa.c
index 75db35ac19..756d243d3f 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
      *   guest tries to use it with that drivers.
      *
      * Enable NUMA implicitly by adding a new NUMA node automatically.
+     *
+     * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
+     * assume there is just one node with whole RAM.
      */
-    if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
-        mc->auto_enable_numa_with_memhp) {
+    if (ms->numa_state->num_nodes == 0 &&
+        ((ms->ram_slots > 0 &&
+        mc->auto_enable_numa_with_memhp) ||
+        mc->auto_enable_numa)) {
             NumaNodeOptions node = { };
             parse_numa_node(ms, &node, &error_abort);
     }
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index f607ca567b..e50343f326 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
     hwaddr mem_start, node_size;
     int i, nb_nodes = machine->numa_state->num_nodes;
     NodeInfo *nodes = machine->numa_state->nodes;
-    NodeInfo ramnode;
-
-    /* No NUMA nodes, assume there is just one node with whole RAM */
-    if (!nb_nodes) {
-        nb_nodes = 1;
-        ramnode.node_mem = machine->ram_size;
-        nodes = &ramnode;
-    }
 
     for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
         if (!nodes[i].node_mem) {
@@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
      */
     mc->numa_mem_align_shift = 28;
     mc->numa_mem_supported = true;
+    mc->auto_enable_numa = true;
 
     smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
     smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 2eb9a0b4e0..4a350b87d2 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -220,6 +220,7 @@ struct MachineClass {
     bool smbus_no_migration_support;
     bool nvdimm_supported;
     bool numa_mem_supported;
+    bool auto_enable_numa;
 
     HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
                                            DeviceState *dev);
-- 
2.20.1
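For readers following along, the gating logic the patch adds to numa_complete_configuration() can be sketched in isolation. The structs below are pared-down stand-ins with only the fields the condition touches, not QEMU's real MachineState/MachineClass/NumaState definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Pared-down stand-ins for QEMU's structures; only the fields used by
 * the new condition are kept, everything else is omitted. */
typedef struct {
    bool auto_enable_numa_with_memhp;
    bool auto_enable_numa;            /* field added by this patch */
} MachineClass;

typedef struct {
    int num_nodes;
} NumaState;

typedef struct {
    unsigned int ram_slots;           /* memory hotplug slots */
    NumaState *numa_state;
} MachineState;

/* Mirrors the if() in numa_complete_configuration(): an implicit node is
 * created only when none was configured on the command line, and either
 * memory hotplug asks for one (auto_enable_numa_with_memhp) or the
 * machine class unconditionally asks for one (auto_enable_numa, set for
 * pseries by this patch). */
static bool implicit_node_created(const MachineState *ms,
                                  const MachineClass *mc)
{
    return ms->numa_state->num_nodes == 0 &&
           ((ms->ram_slots > 0 && mc->auto_enable_numa_with_memhp) ||
            mc->auto_enable_numa);
}
```

With auto_enable_numa set, the condition holds for every command line that does not configure NUMA explicitly, which is the behavior change discussed in the replies.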



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-05  7:13 [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node Tao Xu
@ 2019-08-06 12:50 ` Igor Mammedov
  2019-08-07 17:52   ` Eduardo Habkost
  2019-09-03 17:52 ` Eduardo Habkost
  1 sibling, 1 reply; 11+ messages in thread
From: Igor Mammedov @ 2019-08-06 12:50 UTC (permalink / raw)
  To: Tao Xu; +Cc: qemu-devel, qemu-ppc, ehabkost, david

On Mon,  5 Aug 2019 15:13:02 +0800
Tao Xu <tao3.xu@intel.com> wrote:

> Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> is expected to be created implicitly.
> 
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
> Suggested-by: Igor Mammedov <imammedo@redhat.com>
> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> Signed-off-by: Tao Xu <tao3.xu@intel.com>
> ---
> 
> This patch has a dependency on
> https://patchwork.kernel.org/cover/11063235/
> ---
>  hw/core/numa.c      | 9 +++++++--
>  hw/ppc/spapr.c      | 9 +--------
>  include/hw/boards.h | 1 +
>  3 files changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index 75db35ac19..756d243d3f 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
>       *   guest tries to use it with that drivers.
>       *
>       * Enable NUMA implicitly by adding a new NUMA node automatically.
> +     *
> +     * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
> +     * assume there is just one node with whole RAM.
>       */
> -    if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
> -        mc->auto_enable_numa_with_memhp) {
> +    if (ms->numa_state->num_nodes == 0 &&
> +        ((ms->ram_slots > 0 &&
> +        mc->auto_enable_numa_with_memhp) ||
> +        mc->auto_enable_numa)) {
>              NumaNodeOptions node = { };
>              parse_numa_node(ms, &node, &error_abort);
>      }
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index f607ca567b..e50343f326 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
>      hwaddr mem_start, node_size;
>      int i, nb_nodes = machine->numa_state->num_nodes;
>      NodeInfo *nodes = machine->numa_state->nodes;
> -    NodeInfo ramnode;
> -
> -    /* No NUMA nodes, assume there is just one node with whole RAM */
> -    if (!nb_nodes) {
> -        nb_nodes = 1;
> -        ramnode.node_mem = machine->ram_size;
> -        nodes = &ramnode;
> -    }
>  
>      for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
>          if (!nodes[i].node_mem) {
> @@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
>       */
>      mc->numa_mem_align_shift = 28;
>      mc->numa_mem_supported = true;
> +    mc->auto_enable_numa = true;

this will always create a numa node (that will affect not only RAM but
also all other components that depend on numa state, like CPUs),
whereas spapr_populate_memory() was only faking a numa node in the DT for RAM.
It makes a non-numa configuration impossible.
Seeing David's ACK on the patch it might be fine, but I believe the
commit message should capture that and explain why the change in
behavior is fine.

>      smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
>      smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 2eb9a0b4e0..4a350b87d2 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -220,6 +220,7 @@ struct MachineClass {
>      bool smbus_no_migration_support;
>      bool nvdimm_supported;
>      bool numa_mem_supported;
> +    bool auto_enable_numa;
>  
>      HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
>                                             DeviceState *dev);




* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-06 12:50 ` Igor Mammedov
@ 2019-08-07 17:52   ` Eduardo Habkost
  2019-08-08  6:35     ` David Gibson
  2019-08-08  8:17     ` Tao Xu
  0 siblings, 2 replies; 11+ messages in thread
From: Eduardo Habkost @ 2019-08-07 17:52 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: Tao Xu, qemu-ppc, qemu-devel, david

On Tue, Aug 06, 2019 at 02:50:55PM +0200, Igor Mammedov wrote:
> On Mon,  5 Aug 2019 15:13:02 +0800
> Tao Xu <tao3.xu@intel.com> wrote:
> 
> > Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> > is expected to be created implicitly.
> > 
> > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > Signed-off-by: Tao Xu <tao3.xu@intel.com>
[...]
> > +    mc->auto_enable_numa = true;
> 
> this will always create a numa node (that will affect not only RAM but
> also all other components that depend on numa state, like CPUs),
> whereas spapr_populate_memory() was only faking a numa node in the DT for RAM.
> It makes a non-numa configuration impossible.
> Seeing David's ACK on the patch it might be fine, but I believe the
> commit message should capture that and explain why the change in
> behavior is fine.

After a quick look, all spapr code seems to have the same
behavior when nb_numa_nodes==0 and nb_numa_nodes==1, but I'd like
to be sure.

David and/or Tao Xu: do you confirm there's no ABI change at all
on spapr after implicitly creating a NUMA node?

> 
> >      smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
> >      smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > index 2eb9a0b4e0..4a350b87d2 100644
> > --- a/include/hw/boards.h
> > +++ b/include/hw/boards.h
> > @@ -220,6 +220,7 @@ struct MachineClass {
> >      bool smbus_no_migration_support;
> >      bool nvdimm_supported;
> >      bool numa_mem_supported;
> > +    bool auto_enable_numa;
> >  
> >      HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
> >                                             DeviceState *dev);
> 

-- 
Eduardo



* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-07 17:52   ` Eduardo Habkost
@ 2019-08-08  6:35     ` David Gibson
  2019-08-08  6:35       ` David Gibson
  2019-08-09  9:29       ` Igor Mammedov
  2019-08-08  8:17     ` Tao Xu
  1 sibling, 2 replies; 11+ messages in thread
From: David Gibson @ 2019-08-08  6:35 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: Igor Mammedov, Tao Xu, qemu-ppc, qemu-devel


On Wed, Aug 07, 2019 at 02:52:56PM -0300, Eduardo Habkost wrote:
> On Tue, Aug 06, 2019 at 02:50:55PM +0200, Igor Mammedov wrote:
> > On Mon,  5 Aug 2019 15:13:02 +0800
> > Tao Xu <tao3.xu@intel.com> wrote:
> > 
> > > Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> > > is expected to be created implicitly.
> > > 
> > > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> > > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > > Signed-off-by: Tao Xu <tao3.xu@intel.com>
> [...]
> > > +    mc->auto_enable_numa = true;
> > 
> > this will always create a numa node (that will affect not only RAM but
> > also all other components that depend on numa state, like CPUs),
> > whereas spapr_populate_memory() was only faking a numa node in the DT for RAM.
> > It makes a non-numa configuration impossible.
> > Seeing David's ACK on the patch it might be fine, but I believe the
> > commit message should capture that and explain why the change in
> > behavior is fine.
> 
> After a quick look, all spapr code seems to have the same
> behavior when nb_numa_nodes==0 and nb_numa_nodes==1, but I'd like
> to be sure.

That's certainly the intention.  If there are cases where it doesn't
behave that way, it's a bug - although possibly one we have to
maintain for machine compatibility.

> David and/or Tao Xu: do you confirm there's no ABI change at all
> on spapr after implicitly creating a NUMA node?

I don't believe there is, no.

> 
> > 
> > >      smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
> > >      smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> > > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > > index 2eb9a0b4e0..4a350b87d2 100644
> > > --- a/include/hw/boards.h
> > > +++ b/include/hw/boards.h
> > > @@ -220,6 +220,7 @@ struct MachineClass {
> > >      bool smbus_no_migration_support;
> > >      bool nvdimm_supported;
> > >      bool numa_mem_supported;
> > > +    bool auto_enable_numa;
> > >  
> > >      HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
> > >                                             DeviceState *dev);
> > 
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-08  6:35     ` David Gibson
@ 2019-08-08  6:35       ` David Gibson
  2019-08-09  9:29       ` Igor Mammedov
  1 sibling, 0 replies; 11+ messages in thread
From: David Gibson @ 2019-08-08  6:35 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: Igor Mammedov, Tao Xu, qemu-ppc, qemu-devel


On Thu, Aug 08, 2019 at 04:35:00PM +1000, David Gibson wrote:
> On Wed, Aug 07, 2019 at 02:52:56PM -0300, Eduardo Habkost wrote:
> > On Tue, Aug 06, 2019 at 02:50:55PM +0200, Igor Mammedov wrote:
> > > On Mon,  5 Aug 2019 15:13:02 +0800
> > > Tao Xu <tao3.xu@intel.com> wrote:
> > > 
> > > > Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> > > > is expected to be created implicitly.
> > > > 
> > > > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> > > > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > > > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > > > Signed-off-by: Tao Xu <tao3.xu@intel.com>
> > [...]
> > > > +    mc->auto_enable_numa = true;
> > > 
> > > this will always create a numa node (that will affect not only RAM but
> > > also all other components that depend on numa state, like CPUs),
> > > whereas spapr_populate_memory() was only faking a numa node in the DT for RAM.
> > > It makes a non-numa configuration impossible.
> > > Seeing David's ACK on the patch it might be fine, but I believe the
> > > commit message should capture that and explain why the change in
> > > behavior is fine.
> > 
> > After a quick look, all spapr code seems to have the same
> > behavior when nb_numa_nodes==0 and nb_numa_nodes==1, but I'd like
> > to be sure.
> 
> That's certainly the intention.  If there are cases where it doesn't
> behave that way, it's a bug - although possibly one we have to
> maintain for machine compatibility.
> 
> > David and/or Tao Xu: do you confirm there's no ABI change at all
> > on spapr after implicitly creating a NUMA node?
> 
> I don't believe there is, no.

Oh, FWIW, the PAPR interface, which is what defines the guest
environment, has no notion of "non NUMA" except in the sense of a
system with exactly one NUMA node.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-07 17:52   ` Eduardo Habkost
  2019-08-08  6:35     ` David Gibson
@ 2019-08-08  8:17     ` Tao Xu
  1 sibling, 0 replies; 11+ messages in thread
From: Tao Xu @ 2019-08-08  8:17 UTC (permalink / raw)
  To: Eduardo Habkost, Igor Mammedov; +Cc: qemu-ppc, qemu-devel, david

On 8/8/2019 1:52 AM, Eduardo Habkost wrote:
> On Tue, Aug 06, 2019 at 02:50:55PM +0200, Igor Mammedov wrote:
>> On Mon,  5 Aug 2019 15:13:02 +0800
>> Tao Xu <tao3.xu@intel.com> wrote:
>>
>>> Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
>>> is expected to be created implicitly.
>>>
>>> Acked-by: David Gibson <david@gibson.dropbear.id.au>
>>> Suggested-by: Igor Mammedov <imammedo@redhat.com>
>>> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
>>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
> [...]
>>> +    mc->auto_enable_numa = true;
>>
>> this will always create a numa node (that will affect not only RAM but
>> also all other components that depend on numa state, like CPUs),
>> whereas spapr_populate_memory() was only faking a numa node in the DT for RAM.
>> It makes a non-numa configuration impossible.
>> Seeing David's ACK on the patch it might be fine, but I believe the
>> commit message should capture that and explain why the change in
>> behavior is fine.
> 
> After a quick look, all spapr code seems to have the same
> behavior when nb_numa_nodes==0 and nb_numa_nodes==1, but I'd like
> to be sure.
> 
> David and/or Tao Xu: do you confirm there's no ABI change at all
> on spapr after implicitly creating a NUMA node?
> 
Even without this patch and the HMAT patches, if there is no numa
configuration, the global nb_numa_nodes always exists and defaults to 0,
so nb_nodes will be auto-set to 1. So from my point of view, this patch
will not change the ABI.

And I would also like to hear David's opinion.
>>
>>>       smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
>>>       smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
>>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>>> index 2eb9a0b4e0..4a350b87d2 100644
>>> --- a/include/hw/boards.h
>>> +++ b/include/hw/boards.h
>>> @@ -220,6 +220,7 @@ struct MachineClass {
>>>       bool smbus_no_migration_support;
>>>       bool nvdimm_supported;
>>>       bool numa_mem_supported;
>>> +    bool auto_enable_numa;
>>>   
>>>       HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
>>>                                              DeviceState *dev);
>>
> 




* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-08  6:35     ` David Gibson
  2019-08-08  6:35       ` David Gibson
@ 2019-08-09  9:29       ` Igor Mammedov
  1 sibling, 0 replies; 11+ messages in thread
From: Igor Mammedov @ 2019-08-09  9:29 UTC (permalink / raw)
  To: David Gibson; +Cc: Tao Xu, qemu-ppc, Eduardo Habkost, qemu-devel

On Thu, 8 Aug 2019 16:35:00 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Wed, Aug 07, 2019 at 02:52:56PM -0300, Eduardo Habkost wrote:
> > On Tue, Aug 06, 2019 at 02:50:55PM +0200, Igor Mammedov wrote:  
> > > On Mon,  5 Aug 2019 15:13:02 +0800
> > > Tao Xu <tao3.xu@intel.com> wrote:
> > >   
> > > > Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> > > > is expected to be created implicitly.
> > > > 
> > > > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> > > > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > > > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > > > Signed-off-by: Tao Xu <tao3.xu@intel.com>  
> > [...]  
> > > > +    mc->auto_enable_numa = true;  
> > > 
> > > this will always create a numa node (that will affect not only RAM but
> > > also all other components that depend on numa state, like CPUs),
> > > whereas spapr_populate_memory() was only faking a numa node in the DT for RAM.
> > > It makes a non-numa configuration impossible.
> > > Seeing David's ACK on the patch it might be fine, but I believe the
> > > commit message should capture that and explain why the change in
> > > behavior is fine.
> > 
> > After a quick look, all spapr code seems to have the same
> > behavior when nb_numa_nodes==0 and nb_numa_nodes==1, but I'd like
> > to be sure.  
> 
> That's certainly the intention.  If there are cases where it doesn't
> behave that way, it's a bug - although possibly one we have to
> maintain for machine compatibility.

considering the DT is firmware, we typically do not add any compat
code for the latter.

> 
> > David and/or Tao Xu: do you confirm there's no ABI change at all
> > on spapr after implicitly creating a NUMA node?  
> 
> I don't believe there is, no.

Also, seeing your next reply, it seems there is no non-numa
use case in the spec, so it would be a bug to begin with, hence:

Reviewed-by: Igor Mammedov <imammedo@redhat.com>


> >   
> > >   
> > > >      smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
> > > >      smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> > > > diff --git a/include/hw/boards.h b/include/hw/boards.h
> > > > index 2eb9a0b4e0..4a350b87d2 100644
> > > > --- a/include/hw/boards.h
> > > > +++ b/include/hw/boards.h
> > > > @@ -220,6 +220,7 @@ struct MachineClass {
> > > >      bool smbus_no_migration_support;
> > > >      bool nvdimm_supported;
> > > >      bool numa_mem_supported;
> > > > +    bool auto_enable_numa;
> > > >  
> > > >      HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
> > > >                                             DeviceState *dev);  
> > >   
> >   
> 




* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-08-05  7:13 [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node Tao Xu
  2019-08-06 12:50 ` Igor Mammedov
@ 2019-09-03 17:52 ` Eduardo Habkost
  2019-09-04  6:22   ` Tao Xu
  1 sibling, 1 reply; 11+ messages in thread
From: Eduardo Habkost @ 2019-09-03 17:52 UTC (permalink / raw)
  To: Tao Xu; +Cc: imammedo, qemu-ppc, qemu-devel, david

On Mon, Aug 05, 2019 at 03:13:02PM +0800, Tao Xu wrote:
> Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> is expected to be created implicitly.
> 
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
> Suggested-by: Igor Mammedov <imammedo@redhat.com>
> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> Signed-off-by: Tao Xu <tao3.xu@intel.com>

This introduces spurious warnings when running qemu-system-ppc64.
See: https://lore.kernel.org/qemu-devel/CAFEAcA-AvFS2cbDH-t5SxgY9hA=LGL81_8dn-vh193vtV9W1Lg@mail.gmail.com/

To reproduce it, just run 'qemu-system-ppc64 -machine pseries'
without any -numa arguments.

I have removed this patch from machine-next so it won't block the
existing pull request.

-- 
Eduardo



* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-09-03 17:52 ` Eduardo Habkost
@ 2019-09-04  6:22   ` Tao Xu
  2019-09-04 20:43     ` Eduardo Habkost
  0 siblings, 1 reply; 11+ messages in thread
From: Tao Xu @ 2019-09-04  6:22 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: imammedo, qemu-ppc, qemu-devel, david

On 9/4/2019 1:52 AM, Eduardo Habkost wrote:
> On Mon, Aug 05, 2019 at 03:13:02PM +0800, Tao Xu wrote:
>> Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
>> is expected to be created implicitly.
>>
>> Acked-by: David Gibson <david@gibson.dropbear.id.au>
>> Suggested-by: Igor Mammedov <imammedo@redhat.com>
>> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
> 
> This introduces spurious warnings when running qemu-system-ppc64.
> See: https://lore.kernel.org/qemu-devel/CAFEAcA-AvFS2cbDH-t5SxgY9hA=LGL81_8dn-vh193vtV9W1Lg@mail.gmail.com/
> 
> To reproduce it, just run 'qemu-system-ppc64 -machine pseries'
> without any -numa arguments.
> 
> I have removed this patch from machine-next so it won't block the
> existing pull request.
> 
I got it. If the default splitting of RAM between nodes is
deprecated, this patch can't reuse the splitting code. I agree with
dropping this patch.



* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-09-04  6:22   ` Tao Xu
@ 2019-09-04 20:43     ` Eduardo Habkost
  2019-09-05  0:57       ` Tao Xu
  0 siblings, 1 reply; 11+ messages in thread
From: Eduardo Habkost @ 2019-09-04 20:43 UTC (permalink / raw)
  To: Tao Xu; +Cc: imammedo, qemu-ppc, qemu-devel, david

On Wed, Sep 04, 2019 at 02:22:39PM +0800, Tao Xu wrote:
> On 9/4/2019 1:52 AM, Eduardo Habkost wrote:
> > On Mon, Aug 05, 2019 at 03:13:02PM +0800, Tao Xu wrote:
> > > Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> > > is expected to be created implicitly.
> > > 
> > > Acked-by: David Gibson <david@gibson.dropbear.id.au>
> > > Suggested-by: Igor Mammedov <imammedo@redhat.com>
> > > Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> > > Signed-off-by: Tao Xu <tao3.xu@intel.com>
> > 
> > This introduces spurious warnings when running qemu-system-ppc64.
> > See: https://lore.kernel.org/qemu-devel/CAFEAcA-AvFS2cbDH-t5SxgY9hA=LGL81_8dn-vh193vtV9W1Lg@mail.gmail.com/
> > 
> > To reproduce it, just run 'qemu-system-ppc64 -machine pseries'
> > without any -numa arguments.
> > 
> > I have removed this patch from machine-next so it won't block the
> > existing pull request.
> > 
> I got it. If the default splitting of RAM between nodes is
> deprecated, this patch can't reuse the splitting code. I agree with
> dropping this patch.

Probably all we need to fix this issue is to replace
  NumaNodeOptions node = { };
with
  NumaNodeOptions node = { .size = ram_size };
in the auto_enable_numa block.
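As a side note on why this one-line change would work: in C, a designated initializer zero-fills every member that is not explicitly named, so `{ }` leaves the size at 0 (falling through to the deprecated default RAM splitting and its warning), while `{ .size = ram_size }` gives the implicit node all guest RAM up front. A toy illustration; NumaNodeOptions here is a pared-down mock, not the real QAPI-generated type:

```c
#include <assert.h>
#include <stdint.h>

/* Pared-down mock of NumaNodeOptions; the real QAPI-generated struct
 * has more fields, but the zero-fill semantics are the same. */
typedef struct {
    uint64_t size;
} NumaNodeOptions;

/* The suggested replacement: size the implicit node to cover all of
 * guest RAM instead of leaving it 0. */
static NumaNodeOptions implicit_node(uint64_t ram_size)
{
    NumaNodeOptions node = { .size = ram_size };
    return node;
}
```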

Do you plan to send v2?

-- 
Eduardo



* Re: [Qemu-devel] [PATCH] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
  2019-09-04 20:43     ` Eduardo Habkost
@ 2019-09-05  0:57       ` Tao Xu
  0 siblings, 0 replies; 11+ messages in thread
From: Tao Xu @ 2019-09-05  0:57 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: imammedo, qemu-ppc, qemu-devel, david

On 9/5/2019 4:43 AM, Eduardo Habkost wrote:
> On Wed, Sep 04, 2019 at 02:22:39PM +0800, Tao Xu wrote:
>> On 9/4/2019 1:52 AM, Eduardo Habkost wrote:
>>> On Mon, Aug 05, 2019 at 03:13:02PM +0800, Tao Xu wrote:
>>>> Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
>>>> is expected to be created implicitly.
>>>>
>>>> Acked-by: David Gibson <david@gibson.dropbear.id.au>
>>>> Suggested-by: Igor Mammedov <imammedo@redhat.com>
>>>> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
>>>> Signed-off-by: Tao Xu <tao3.xu@intel.com>
>>>
>>> This introduces spurious warnings when running qemu-system-ppc64.
>>> See: https://lore.kernel.org/qemu-devel/CAFEAcA-AvFS2cbDH-t5SxgY9hA=LGL81_8dn-vh193vtV9W1Lg@mail.gmail.com/
>>>
>>> To reproduce it, just run 'qemu-system-ppc64 -machine pseries'
>>> without any -numa arguments.
>>>
>>> I have removed this patch from machine-next so it won't block the
>>> existing pull request.
>>>
>> I got it. If default splitting of RAM between nodes is
>> deprecated, this patch can't reuse the splitting code. I agree with droping
>> this patch.
> 
> Probably all we need to fix this issue is to replace
>    NumaNodeOptions node = { };
> with
>    NumaNodeOptions node = { .size = ram_size };
> in the auto_enable_numa block.
> 
> Do you plan to send v2?
> 
OK, thank you for your suggestion. I will fix it and send v2.



