From: Babu Moger <babu.moger@amd.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"ehabkost@redhat.com" <ehabkost@redhat.com>,
	"rth@twiddle.net" <rth@twiddle.net>
Subject: RE: [PATCH v2 1/3] hw/i386: Initialize topo_ids from CpuInstanceProperties
Date: Mon, 13 Jul 2020 14:30:29 -0500
Message-ID: <4003b476-afdb-74a1-4f9f-1387aae6c4e8@amd.com>
In-Reply-To: <20200713193221.27674630@redhat.com>



> -----Original Message-----
> From: Igor Mammedov <imammedo@redhat.com>
> Sent: Monday, July 13, 2020 12:32 PM
> To: Moger, Babu <Babu.Moger@amd.com>
> Cc: pbonzini@redhat.com; rth@twiddle.net; ehabkost@redhat.com; qemu-
> devel@nongnu.org
> Subject: Re: [PATCH v2 1/3] hw/i386: Initialize topo_ids from
> CpuInstanceProperties
> 
> On Mon, 13 Jul 2020 11:43:33 -0500
> Babu Moger <babu.moger@amd.com> wrote:
> 
> > On 7/13/20 11:17 AM, Igor Mammedov wrote:
> > > On Mon, 13 Jul 2020 10:02:22 -0500
> > > Babu Moger <babu.moger@amd.com> wrote:
> > >
> > >>> -----Original Message-----
> > >>> From: Igor Mammedov <imammedo@redhat.com>
> > >>> Sent: Monday, July 13, 2020 4:08 AM
> > >>> To: Moger, Babu <Babu.Moger@amd.com>
> > >>> Cc: pbonzini@redhat.com; rth@twiddle.net; ehabkost@redhat.com;
> > >>> qemu- devel@nongnu.org
> > >>> Subject: Re: [PATCH v2 1/3] hw/i386: Initialize topo_ids from
> > >>> CpuInstanceProperties
> > > [...]
> > >>>> +
> > >>>> +/*
> > >>>> + * Initialize topo_ids from CpuInstanceProperties
> > >>>> + * node_id in CpuInstanceProperties(or in CPU device) is a sequential
> > >>>> + * number, but while building the topology
> > >>>
> > >>>> we need to separate it for
> > >>>> + * each socket(mod nodes_per_pkg).
> > >>> could you clarify a bit more on why this is necessary?
> > >>
> > >> If you have two sockets and 4 numa nodes, node_id in
> > >> CpuInstanceProperties will be numbered sequentially as 0, 1, 2, 3.
> > >> But in the EPYC topology, it will be 0, 1, 0, 1 (basically node_id
> > >> modulo the number of nodes per socket).
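
In other words, as a worked example (illustration only, assuming the
2-socket / 4-node case above, i.e. 2 nodes per socket):

   sequential node_id : 0  1  2  3
   socket             : 0  0  1  1
   node_id % 2        : 0  1  0  1
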
> > >
> > > I'm confused. Let's suppose we have 2 EPYC sockets with 2 nodes per
> > > socket, so the APIC id would be composed like:
> > >
> > >  1st socket
> > >    pkg_id(0) | node_id(0)
> > >    pkg_id(0) | node_id(1)
> > >
> > >  2nd socket
> > >    pkg_id(1) | node_id(0)
> > >    pkg_id(1) | node_id(1)
> > >
> > > if that's the case, then EPYC's node_id here doesn't look like a
> > > NUMA node in the sense it's usually used (above config would have 4
> > > different memory controllers => 4 conventional NUMA nodes).
> >
> > The EPYC model uses a combination of socket id and node id to identify
> > the NUMA nodes. So, internally it uses all of that information.
> 
> well, with the above values EPYC's node_id doesn't look like it's specifying a
> machine NUMA node, but rather a node index within a single socket. In which case,
> it doesn't make much sense to call it NUMA node_id; it's rather some index
> within a socket. (It starts looking like the terminology is all mixed up.)
> 
> If you have access to a multi-socket EPYC machine, can you dump and post here
> its APIC ids, please?

Here is the output from my EPYC machine with 2 sockets and 8 nodes in
total (SMT disabled). CPUs 0-31 are in socket 0 and CPUs 32-63 are in
socket 1.

# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        8
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               1
Model name:          AMD Eng Sample: 1S1901A4VIHF5_30/19_N
Stepping:            2
CPU MHz:             2379.233
CPU max MHz:         1900.0000
CPU min MHz:         1200.0000
BogoMIPS:            3792.81
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           64K
L2 cache:            512K
L3 cache:            8192K
NUMA node0 CPU(s):   0-7
NUMA node1 CPU(s):   8-15
NUMA node2 CPU(s):   16-23
NUMA node3 CPU(s):   24-31
NUMA node4 CPU(s):   32-39
NUMA node5 CPU(s):   40-47
NUMA node6 CPU(s):   48-55
NUMA node7 CPU(s):   56-63

Here is the output of # cpuid -l 0x8000001e -r

You may want to refer to
https://www.amd.com/system/files/TechDocs/54945_3.03_ppr_ZP_B2_pub.zip
(section 2.1.12.2.1.3, ApicId Enumeration Requirements).
Note that this is a general guideline. We tried to generalize it in qemu
as much as possible, but it is a bit complex.

CPU 0:
   0x8000001e 0x00: eax=0x00000000 ebx=0x00000100 ecx=0x00000300 edx=0x00000000
CPU 1:
   0x8000001e 0x00: eax=0x00000002 ebx=0x00000101 ecx=0x00000300 edx=0x00000000
CPU 2:
   0x8000001e 0x00: eax=0x00000004 ebx=0x00000102 ecx=0x00000300 edx=0x00000000
CPU 3:
   0x8000001e 0x00: eax=0x00000006 ebx=0x00000103 ecx=0x00000300 edx=0x00000000
CPU 4:
   0x8000001e 0x00: eax=0x00000008 ebx=0x00000104 ecx=0x00000300 edx=0x00000000
CPU 5:
   0x8000001e 0x00: eax=0x0000000a ebx=0x00000105 ecx=0x00000300 edx=0x00000000
CPU 6:
   0x8000001e 0x00: eax=0x0000000c ebx=0x00000106 ecx=0x00000300 edx=0x00000000
CPU 7:
   0x8000001e 0x00: eax=0x0000000e ebx=0x00000107 ecx=0x00000300 edx=0x00000000
CPU 8:
   0x8000001e 0x00: eax=0x00000010 ebx=0x00000108 ecx=0x00000301 edx=0x00000000
CPU 9:
   0x8000001e 0x00: eax=0x00000012 ebx=0x00000109 ecx=0x00000301 edx=0x00000000
CPU 10:
   0x8000001e 0x00: eax=0x00000014 ebx=0x0000010a ecx=0x00000301 edx=0x00000000
CPU 11:
   0x8000001e 0x00: eax=0x00000016 ebx=0x0000010b ecx=0x00000301 edx=0x00000000
CPU 12:
   0x8000001e 0x00: eax=0x00000018 ebx=0x0000010c ecx=0x00000301 edx=0x00000000
CPU 13:
   0x8000001e 0x00: eax=0x0000001a ebx=0x0000010d ecx=0x00000301 edx=0x00000000
CPU 14:
   0x8000001e 0x00: eax=0x0000001c ebx=0x0000010e ecx=0x00000301 edx=0x00000000
CPU 15:
   0x8000001e 0x00: eax=0x0000001e ebx=0x0000010f ecx=0x00000301 edx=0x00000000
CPU 16:
   0x8000001e 0x00: eax=0x00000020 ebx=0x00000110 ecx=0x00000302 edx=0x00000000
CPU 17:
   0x8000001e 0x00: eax=0x00000022 ebx=0x00000111 ecx=0x00000302 edx=0x00000000
CPU 18:
   0x8000001e 0x00: eax=0x00000024 ebx=0x00000112 ecx=0x00000302 edx=0x00000000
CPU 19:
   0x8000001e 0x00: eax=0x00000026 ebx=0x00000113 ecx=0x00000302 edx=0x00000000
CPU 20:
   0x8000001e 0x00: eax=0x00000028 ebx=0x00000114 ecx=0x00000302 edx=0x00000000
CPU 21:
   0x8000001e 0x00: eax=0x0000002a ebx=0x00000115 ecx=0x00000302 edx=0x00000000
CPU 22:
   0x8000001e 0x00: eax=0x0000002c ebx=0x00000116 ecx=0x00000302 edx=0x00000000
CPU 23:
   0x8000001e 0x00: eax=0x0000002e ebx=0x00000117 ecx=0x00000302 edx=0x00000000
CPU 24:
   0x8000001e 0x00: eax=0x00000030 ebx=0x00000118 ecx=0x00000303 edx=0x00000000
CPU 25:
   0x8000001e 0x00: eax=0x00000032 ebx=0x00000119 ecx=0x00000303 edx=0x00000000
CPU 26:
   0x8000001e 0x00: eax=0x00000034 ebx=0x0000011a ecx=0x00000303 edx=0x00000000
CPU 27:
   0x8000001e 0x00: eax=0x00000036 ebx=0x0000011b ecx=0x00000303 edx=0x00000000
CPU 28:
   0x8000001e 0x00: eax=0x00000038 ebx=0x0000011c ecx=0x00000303 edx=0x00000000
CPU 29:
   0x8000001e 0x00: eax=0x0000003a ebx=0x0000011d ecx=0x00000303 edx=0x00000000
CPU 30:
   0x8000001e 0x00: eax=0x0000003c ebx=0x0000011e ecx=0x00000303 edx=0x00000000
CPU 31:
   0x8000001e 0x00: eax=0x0000003e ebx=0x0000011f ecx=0x00000303 edx=0x00000000
CPU 32:
   0x8000001e 0x00: eax=0x00000040 ebx=0x00000100 ecx=0x00000304 edx=0x00000000
CPU 33:
   0x8000001e 0x00: eax=0x00000042 ebx=0x00000101 ecx=0x00000304 edx=0x00000000
CPU 34:
   0x8000001e 0x00: eax=0x00000044 ebx=0x00000102 ecx=0x00000304 edx=0x00000000
CPU 35:
   0x8000001e 0x00: eax=0x00000046 ebx=0x00000103 ecx=0x00000304 edx=0x00000000
CPU 36:
   0x8000001e 0x00: eax=0x00000048 ebx=0x00000104 ecx=0x00000304 edx=0x00000000
CPU 37:
   0x8000001e 0x00: eax=0x0000004a ebx=0x00000105 ecx=0x00000304 edx=0x00000000
CPU 38:
   0x8000001e 0x00: eax=0x0000004c ebx=0x00000106 ecx=0x00000304 edx=0x00000000
CPU 39:
   0x8000001e 0x00: eax=0x0000004e ebx=0x00000107 ecx=0x00000304 edx=0x00000000
CPU 40:
   0x8000001e 0x00: eax=0x00000050 ebx=0x00000108 ecx=0x00000305 edx=0x00000000
CPU 41:
   0x8000001e 0x00: eax=0x00000052 ebx=0x00000109 ecx=0x00000305 edx=0x00000000
CPU 42:
   0x8000001e 0x00: eax=0x00000054 ebx=0x0000010a ecx=0x00000305 edx=0x00000000
CPU 43:
   0x8000001e 0x00: eax=0x00000056 ebx=0x0000010b ecx=0x00000305 edx=0x00000000
CPU 44:
   0x8000001e 0x00: eax=0x00000058 ebx=0x0000010c ecx=0x00000305 edx=0x00000000
CPU 45:
   0x8000001e 0x00: eax=0x0000005a ebx=0x0000010d ecx=0x00000305 edx=0x00000000
CPU 46:
   0x8000001e 0x00: eax=0x0000005c ebx=0x0000010e ecx=0x00000305 edx=0x00000000
CPU 47:
   0x8000001e 0x00: eax=0x0000005e ebx=0x0000010f ecx=0x00000305 edx=0x00000000
CPU 48:
   0x8000001e 0x00: eax=0x00000060 ebx=0x00000110 ecx=0x00000306 edx=0x00000000
CPU 49:
   0x8000001e 0x00: eax=0x00000062 ebx=0x00000111 ecx=0x00000306 edx=0x00000000
CPU 50:
   0x8000001e 0x00: eax=0x00000064 ebx=0x00000112 ecx=0x00000306 edx=0x00000000
CPU 51:
   0x8000001e 0x00: eax=0x00000066 ebx=0x00000113 ecx=0x00000306 edx=0x00000000
CPU 52:
   0x8000001e 0x00: eax=0x00000068 ebx=0x00000114 ecx=0x00000306 edx=0x00000000
CPU 53:
   0x8000001e 0x00: eax=0x0000006a ebx=0x00000115 ecx=0x00000306 edx=0x00000000
CPU 54:
   0x8000001e 0x00: eax=0x0000006c ebx=0x00000116 ecx=0x00000306 edx=0x00000000
CPU 55:
   0x8000001e 0x00: eax=0x0000006e ebx=0x00000117 ecx=0x00000306 edx=0x00000000
CPU 56:
   0x8000001e 0x00: eax=0x00000070 ebx=0x00000118 ecx=0x00000307 edx=0x00000000
CPU 57:
   0x8000001e 0x00: eax=0x00000072 ebx=0x00000119 ecx=0x00000307 edx=0x00000000
CPU 58:
   0x8000001e 0x00: eax=0x00000074 ebx=0x0000011a ecx=0x00000307 edx=0x00000000
CPU 59:
   0x8000001e 0x00: eax=0x00000076 ebx=0x0000011b ecx=0x00000307 edx=0x00000000
CPU 60:
   0x8000001e 0x00: eax=0x00000078 ebx=0x0000011c ecx=0x00000307 edx=0x00000000
CPU 61:
   0x8000001e 0x00: eax=0x0000007a ebx=0x0000011d ecx=0x00000307 edx=0x00000000
CPU 62:
   0x8000001e 0x00: eax=0x0000007c ebx=0x0000011e ecx=0x00000307 edx=0x00000000
CPU 63:
   0x8000001e 0x00: eax=0x0000007e ebx=0x0000011f ecx=0x00000307 edx=0x00000000

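To make the dump easier to read, here is a minimal decode sketch in C
(the field layout is the family-17h one taken from the PPR linked
above, and the helper name is made up for illustration):

   #include <stdio.h>

   /* Decode CPUID Fn8000_001E output (family 17h layout per the PPR):
    *   EAX[31:0]  extended APIC ID
    *   EBX[7:0]   core id
    *   ECX[7:0]   node id (machine-wide)
    *   ECX[10:8]  nodes per processor - 1
    */
   static void decode_8000001e(unsigned eax, unsigned ebx, unsigned ecx)
   {
       printf("apic id %3u  core id %2u  node id %u  nodes/processor %u\n",
              eax, ebx & 0xff, ecx & 0xff, ((ecx >> 8) & 0x7) + 1);
   }

   int main(void)
   {
       /* Sample values copied from the dump above. */
       decode_8000001e(0x00000000, 0x00000100, 0x00000300); /* CPU 0  */
       decode_8000001e(0x00000010, 0x00000108, 0x00000301); /* CPU 8  */
       decode_8000001e(0x00000040, 0x00000100, 0x00000304); /* CPU 32 */
       return 0;
   }

For example, CPU 32 (ecx=0x00000304) decodes to node id 4 with 4 nodes
per processor, i.e. the first node of the second socket, which matches
the lscpu output above.
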
> 
> >
> > >
> > > I wonder if linux guest actually uses node_id encoded in apic id for
> > > configuring/checking numa structures, or it just uses whatever ACPI
> > > SRAT table provided.
> > >
> > >>>> + */
> > >>>> +static inline void x86_init_topo_ids(X86CPUTopoInfo *topo_info,
> > >>>> +                                     CpuInstanceProperties props,
> > >>>> +                                     X86CPUTopoIDs *topo_ids)
> > >>>> +{
> > >>>> +    topo_ids->smt_id = props.has_thread_id ? props.thread_id : 0;
> > >>>> +    topo_ids->core_id = props.has_core_id ? props.core_id : 0;
> > >>>> +    topo_ids->die_id = props.has_die_id ? props.die_id : 0;
> > >>>> +    topo_ids->node_id = props.has_node_id ?
> > >>>> +                        props.node_id % MAX(topo_info->nodes_per_pkg, 1) : 0;
> > >>>> +    topo_ids->pkg_id = props.has_socket_id ? props.socket_id : 0;
> > >>>> +}
> > >>>>  /*
> > >>>>   * Make APIC ID for the CPU 'cpu_index'
> > >>>>   *
> > >>>>
> > >>
> > >
> >



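For completeness, a self-contained sketch of the node_id handling in the
helper quoted above, applied to the 2-socket / 8-node box from the lscpu
output. The types here are simplified stand-ins, not the real QEMU
structures, and deriving pkg_id from the sequential node_id is purely
for this example (in QEMU it comes from props.socket_id):

   #include <stdio.h>

   /* Simplified stand-ins for the QEMU types used by x86_init_topo_ids. */
   typedef struct {
       unsigned nodes_per_pkg;
   } X86CPUTopoInfo;

   typedef struct {
       unsigned pkg_id;
       unsigned node_id;    /* node index within one socket */
   } X86CPUTopoIDs;

   #define MAX(a, b) ((a) > (b) ? (a) : (b))

   int main(void)
   {
       X86CPUTopoInfo topo_info = { .nodes_per_pkg = 4 };  /* 8 nodes / 2 sockets */
       unsigned seq_node_id;   /* sequential node_id from CpuInstanceProperties */

       for (seq_node_id = 0; seq_node_id < 8; seq_node_id++) {
           X86CPUTopoIDs topo_ids;

           /* For this example only: socket is node_id / nodes_per_pkg. */
           topo_ids.pkg_id = seq_node_id / topo_info.nodes_per_pkg;
           /* Same modulo as in the quoted helper. */
           topo_ids.node_id = seq_node_id % MAX(topo_info.nodes_per_pkg, 1u);

           printf("sequential node_id %u -> pkg_id %u, per-socket node_id %u\n",
                  seq_node_id, topo_ids.pkg_id, topo_ids.node_id);
       }
       return 0;
   }

This prints per-socket node_id 0..3 twice, once per socket, i.e. the
same wrap-around as the 0, 1, 0, 1 example earlier in the thread (which
assumed 2 nodes per socket).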