From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org
Cc: dario.faggioli@citrix.com, keir@xen.org, jbeulich@suse.com
Subject: [PATCH 1 of 2] x86/numa: Correct assumption that each NUMA node has memory
Date: Wed, 11 Jul 2012 13:49:55 +0100
Message-ID: <a2dbed3582e5bedb605a.1342010995@andrewcoop.uk.xensource.com>
In-Reply-To: <patchbomb.1342010994@andrewcoop.uk.xensource.com>

It is now quite easy to buy servers with incorrectly populated DIMMs, especially
AMD Magny-Cours and Interlagos systems, which have two NUMA nodes per socket.

Currently, Xen assigns all CPUs on nodes without memory to node 0. This
produces an incorrect NUMA topology, causing NUMA-aware functionality such as
alloc_domheap_pages() to make very poor placement decisions.

This patch splits the single nodes_parsed mask into separate processor and
memory masks, so that NUMA nodes without memory are still brought online and
CPUs are accounted to their real nodes.
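
For illustration, a minimal self-contained sketch of the new accounting (not
the Xen code itself: the nodemask_t type and helpers below are simplified
stand-ins for Xen's real nodemask API):

/* Track processor and memory affinity in separate masks and online
 * the union, so CPUs on a memoryless node keep their real node. */
#include <stdint.h>
#include <stdio.h>

#define MAX_NUMNODES 8

typedef uint8_t nodemask_t;                     /* one bit per node */
#define node_set(n, m)   ((m) |= (nodemask_t)(1u << (n)))
#define node_isset(n, m) (((m) >> (n)) & 1u)

int main(void)
{
	nodemask_t memory_nodes_parsed = 0, processor_nodes_parsed = 0;

	/* Example SRAT: nodes 0 and 1 both have CPUs, but only node 0
	 * has populated DIMMs (e.g. one dual-node socket with all the
	 * memory attached to one die). */
	node_set(0, memory_nodes_parsed);
	node_set(0, processor_nodes_parsed);
	node_set(1, processor_nodes_parsed);

	/* Registering only memory_nodes_parsed would leave node 1
	 * offline, collapsing its CPUs onto node 0; registering the
	 * union keeps the accounting correct. */
	nodemask_t all_nodes_parsed = memory_nodes_parsed | processor_nodes_parsed;

	for (unsigned int n = 0; n < MAX_NUMNODES; n++)
		if (node_isset(n, all_nodes_parsed))
			printf("node %u online: memory=%u cpus=%u\n", n,
			       node_isset(n, memory_nodes_parsed),
			       node_isset(n, processor_nodes_parsed));
	return 0;
}

In the patch below, the same union is computed with nodes_or() in
acpi_scan_nodes().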

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

diff -r 7b0dc7f3ddfe -r a2dbed3582e5 xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -195,8 +195,10 @@ void __devinit srat_detect_node(int cpu)
     u32 apicid = x86_cpu_to_apicid[cpu];
 
     node = apicid_to_node[apicid];
-    if ( node == NUMA_NO_NODE || !node_online(node) )
+    if ( node == NUMA_NO_NODE )
         node = 0;
+
+    node_set_online(node);
     numa_set_node(cpu, node);
 
     if ( opt_cpu_info && acpi_numa > 0 )
diff -r 7b0dc7f3ddfe -r a2dbed3582e5 xen/arch/x86/srat.c
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -23,7 +23,8 @@
 
 static struct acpi_table_slit *__read_mostly acpi_slit;
 
-static nodemask_t nodes_parsed __initdata;
+static nodemask_t memory_nodes_parsed __initdata;
+static nodemask_t processor_nodes_parsed __initdata;
 static nodemask_t nodes_found __initdata;
 static struct node nodes[MAX_NUMNODES] __initdata;
 static u8 __read_mostly pxm2node[256] = { [0 ... 255] = NUMA_NO_NODE };
@@ -221,6 +222,7 @@ acpi_numa_processor_affinity_init(struct
 		return;
 	}
 	apicid_to_node[pa->apic_id] = node;
+	node_set(node, processor_nodes_parsed);
 	acpi_numa = 1;
 	printk(KERN_INFO "SRAT: PXM %u -> APIC %u -> Node %u\n",
 	       pxm, pa->apic_id, node);
@@ -287,7 +289,7 @@ acpi_numa_memory_affinity_init(struct ac
 		return;
 	}
 	nd = &nodes[node];
-	if (!node_test_and_set(node, nodes_parsed)) {
+	if (!node_test_and_set(node, memory_nodes_parsed)) {
 		nd->start = start;
 		nd->end = end;
 	} else {
@@ -324,7 +326,7 @@ static int nodes_cover_memory(void)
 
 		do {
 			found = 0;
-			for_each_node_mask(j, nodes_parsed)
+			for_each_node_mask(j, memory_nodes_parsed)
 				if (start < nodes[j].end
 				    && end > nodes[j].start) {
 					if (start >= nodes[j].start) {
@@ -418,6 +420,7 @@ void __init srat_parse_regions(u64 addr)
 int __init acpi_scan_nodes(u64 start, u64 end)
 {
 	int i;
+	nodemask_t all_nodes_parsed;
 
 	/* First clean up the node list */
 	for (i = 0; i < MAX_NUMNODES; i++)
@@ -441,17 +444,26 @@ int __init acpi_scan_nodes(u64 start, u6
 		return -1;
 	}
 
+	nodes_or(all_nodes_parsed, memory_nodes_parsed, processor_nodes_parsed);
+
 	/* Finally register nodes */
-	for_each_node_mask(i, nodes_parsed)
+	for_each_node_mask(i, all_nodes_parsed)
 	{
-		if ((nodes[i].end - nodes[i].start) < NODE_MIN_SIZE)
-			continue;
+		u64 size = nodes[i].end - nodes[i].start;
+		if (size == 0)
+			printk(KERN_WARNING "SRAT: Node %u has no memory. "
+			       "BIOS bug or misconfigured hardware?\n", i);
+
+		else if (size < NODE_MIN_SIZE)
+			printk(KERN_WARNING "SRAT: Node %u has only %"PRIu64
+			       " bytes of memory. BIOS bug?\n", i, size);
+
 		setup_node_bootmem(i, nodes[i].start, nodes[i].end);
 	}
 	for (i = 0; i < nr_cpu_ids; i++) {
 		if (cpu_to_node[i] == NUMA_NO_NODE)
 			continue;
-		if (!node_isset(cpu_to_node[i], nodes_parsed))
+		if (!node_isset(cpu_to_node[i], processor_nodes_parsed))
 			numa_set_node(i, NUMA_NO_NODE);
 	}
 	numa_init_array();
