* [PATCH v3 0/3] create sysfs representation of ACPI HMAT
@ 2017-12-14  2:10 Ross Zwisler
  2017-12-14  2:10 ` [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array() Ross Zwisler
                   ` (4 more replies)
  0 siblings, 5 replies; 42+ messages in thread
From: Ross Zwisler @ 2017-12-14  2:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

This is the third revision of my patches adding a sysfs representation
of the ACPI Heterogeneous Memory Attribute Table (HMAT).  These patches
are based on v4.15-rc3 and a working tree can be found here:

https://git.kernel.org/pub/scm/linux/kernel/git/zwisler/linux.git/log/?h=hmat_v3

My goal is to get these patches merged for v4.16.

Changes from previous version (https://lkml.org/lkml/2017/7/6/749):

 - Changed "HMEM" to "HMAT" and "hmem" to "hmat" throughout to make sure
   that this effort doesn't get confused with Jerome's HMM work and to
   make it clear that this enabling is tightly tied to the ACPI HMAT
   table.  (John Hubbard)

 - Moved the link in the initiator (i.e. mem_init0/mem_tgt2) from
   pointing to the "mem_tgt2/local_init" attribute group to instead
   point at the mem_tgt2 target itself.  (Brice Goglin)

 - Simplified the contents of both the initiators and the targets so
   that we just symlink to the NUMA node and don't duplicate
   information.  For initiators this means that we no longer enumerate
   CPUs, and for targets this means that we don't provide physical
   address start and length information.  All of this is already
   available in the NUMA node directory itself (i.e.
   /sys/devices/system/node/node0), and it already accounts for the fact
   that both multiple CPUs and multiple memory regions can be owned by a
   given NUMA node.  Also removed some extra attributes (is_enabled,
   is_isolated) which I don't think are useful at this point in time.

I have tested this against many different configs that I implemented
using qemu.

---

==== Quick Summary ====

Platforms exist today which have multiple types of memory attached to a
single CPU.  These disparate memory ranges have some characteristics in
common, such as CPU cache coherence, but they can have wide ranges of
performance both in terms of latency and bandwidth.

For example, consider a system that contains persistent memory, standard
DDR memory and High Bandwidth Memory (HBM), all attached to the same CPU.
There could potentially be an order of magnitude or more difference in
performance between the slowest and fastest memory attached to that CPU.

With the current Linux code NUMA nodes are CPU-centric, so all the memory
attached to a given CPU will be lumped into the same NUMA node.  This makes
it very difficult for userspace applications to understand the performance
of different memory ranges on a given CPU.

We solve this issue by providing userspace with performance information on
individual memory ranges.  This performance information is exposed via
sysfs:

  # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
  mem_tgt2/firmware_id:1
  mem_tgt2/is_cached:0
  mem_tgt2/local_init/read_bw_MBps:40960
  mem_tgt2/local_init/read_lat_nsec:50
  mem_tgt2/local_init/write_bw_MBps:40960
  mem_tgt2/local_init/write_lat_nsec:50

This allows applications to easily find the memory that they want to use.
We expect that the existing NUMA APIs will be enhanced to use this new
information so that applications can continue to use them to select their
desired memory.
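
As a usage sketch (hypothetical userspace code, not part of this series;
the mem_tgt*/local_init paths simply follow the sysfs layout shown above),
an application could pick the target with the lowest local read latency
roughly like this:

  #include <glob.h>
  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
  	glob_t g;
  	long best = LONG_MAX;
  	const char *best_path = "none";

  	/* one read_lat_nsec file per memory target */
  	if (glob("/sys/devices/system/hmat/mem_tgt*/local_init/read_lat_nsec",
  		 0, NULL, &g))
  		return 1;

  	for (size_t i = 0; i < g.gl_pathc; i++) {
  		FILE *f = fopen(g.gl_pathv[i], "r");
  		long lat;

  		if (f && fscanf(f, "%ld", &lat) == 1 && lat < best) {
  			best = lat;
  			best_path = g.gl_pathv[i];
  		}
  		if (f)
  			fclose(f);
  	}

  	printf("lowest read latency: %ld nsec (%s)\n", best, best_path);
  	globfree(&g);
  	return 0;
  }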

==== Lots of Details ====

This patch set provides a sysfs representation of parts of the
Heterogeneous Memory Attribute Table (HMAT), newly defined in ACPI 6.2.
One major conceptual change in ACPI 6.2 related to this work is that
proximity domains no longer need to contain a processor.  We can now
have memory-only proximity domains, which means that we can now have
memory-only Linux NUMA nodes.

Here is an example configuration where we have a single processor, one
range of regular memory and one range of HBM:

  +---------------+   +----------------+
  | Processor     |   | Memory         |
  | prox domain 0 +---+ prox domain 1  |
  | NUMA node 1   |   | NUMA node 2    |
  +-------+-------+   +----------------+
          |
  +-------+----------+
  | HBM              |
  | prox domain 2    |
  | NUMA node 0      |
  +------------------+

This gives us one initiator (the processor) and two targets (the two memory
ranges).  Each of these three has its own ACPI proximity domain and
associated Linux NUMA node.  Note also that while there is a 1:1 mapping
from each proximity domain to each NUMA node, the numbers don't necessarily
match up.  Additionally we can have extra NUMA nodes that don't map back to
ACPI proximity domains.

The above configuration could also have the processor and one of the two
memory ranges sharing a proximity domain and NUMA node, but for the
purposes of the HMAT the two memory ranges will need to be separated.

The overall goal of this series and of the HMAT is to allow users to
identify memory using its performance characteristics.  This is
complicated by the amount of HMAT data that could be present in very
large systems, so in this series we only surface performance information
for local (initiator,target) pairings.  The changelog for patch 3
discusses this in detail.

Ross Zwisler (3):
  acpi: HMAT support in acpi_parse_entries_array()
  hmat: add heterogeneous memory sysfs support
  hmat: add performance attributes

 MAINTAINERS                         |   6 +
 drivers/acpi/Kconfig                |   1 +
 drivers/acpi/Makefile               |   1 +
 drivers/acpi/hmat/Kconfig           |   7 +
 drivers/acpi/hmat/Makefile          |   2 +
 drivers/acpi/hmat/core.c            | 797 ++++++++++++++++++++++++++++++++++++
 drivers/acpi/hmat/hmat.h            |  64 +++
 drivers/acpi/hmat/initiator.c       |  43 ++
 drivers/acpi/hmat/perf_attributes.c |  56 +++
 drivers/acpi/hmat/target.c          |  55 +++
 drivers/acpi/tables.c               |  52 ++-
 11 files changed, 1073 insertions(+), 11 deletions(-)
 create mode 100644 drivers/acpi/hmat/Kconfig
 create mode 100644 drivers/acpi/hmat/Makefile
 create mode 100644 drivers/acpi/hmat/core.c
 create mode 100644 drivers/acpi/hmat/hmat.h
 create mode 100644 drivers/acpi/hmat/initiator.c
 create mode 100644 drivers/acpi/hmat/perf_attributes.c
 create mode 100644 drivers/acpi/hmat/target.c

-- 
2.14.3


* [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array()
  2017-12-14  2:10 [PATCH v3 0/3] create sysfs representation of ACPI HMAT Ross Zwisler
@ 2017-12-14  2:10 ` Ross Zwisler
  2017-12-15  0:49   ` Rafael J. Wysocki
  2017-12-15  1:10   ` Dan Williams
  2017-12-14  2:10 ` [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support Ross Zwisler
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 42+ messages in thread
From: Ross Zwisler @ 2017-12-14  2:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

The current implementation of acpi_parse_entries_array() assumes that each
subtable has a standard ACPI subtable entry of type struct
acpi_subtable_header.  This standard subtable header has a one byte type
followed by a one byte length.

The HMAT subtables have to allow for a longer length so they have subtable
headers of type struct acpi_hmat_structure which has a 2 byte type and a 4
byte length.
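
For reference, the two subtable header layouts (as defined in the
ACPICA/Linux ACPI headers, using the kernel's u8/u16/u32 types) look
roughly like this:

  /* standard subtable header used by most ACPI tables */
  struct acpi_subtable_header {
  	u8 type;
  	u8 length;
  };

  /* HMAT subtable header, with wider type and length fields */
  struct acpi_hmat_structure {
  	u16 type;
  	u16 reserved;
  	u32 length;
  };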

Enhance the subtable parsing in acpi_parse_entries_array() so that it can
handle these new HMAT subtables.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/acpi/tables.c | 52 ++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 41 insertions(+), 11 deletions(-)

diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
index 80ce2a7d224b..f777b94c234a 100644
--- a/drivers/acpi/tables.c
+++ b/drivers/acpi/tables.c
@@ -218,6 +218,33 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
 	}
 }
 
+static unsigned long __init
+acpi_get_entry_type(char *id, void *entry)
+{
+	if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
+		return ((struct acpi_hmat_structure *)entry)->type;
+	else
+		return ((struct acpi_subtable_header *)entry)->type;
+}
+
+static unsigned long __init
+acpi_get_entry_length(char *id, void *entry)
+{
+	if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
+		return ((struct acpi_hmat_structure *)entry)->length;
+	else
+		return ((struct acpi_subtable_header *)entry)->length;
+}
+
+static unsigned long __init
+acpi_get_subtable_header_length(char *id)
+{
+	if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
+		return sizeof(struct acpi_hmat_structure);
+	else
+		return sizeof(struct acpi_subtable_header);
+}
+
 /**
  * acpi_parse_entries_array - for each proc_num find a suitable subtable
  *
@@ -242,10 +269,10 @@ acpi_parse_entries_array(char *id, unsigned long table_size,
 		struct acpi_subtable_proc *proc, int proc_num,
 		unsigned int max_entries)
 {
-	struct acpi_subtable_header *entry;
-	unsigned long table_end;
+	unsigned long table_end, subtable_header_length;
 	int count = 0;
 	int errs = 0;
+	void *entry;
 	int i;
 
 	if (acpi_disabled)
@@ -263,19 +290,23 @@ acpi_parse_entries_array(char *id, unsigned long table_size,
 	}
 
 	table_end = (unsigned long)table_header + table_header->length;
+	subtable_header_length = acpi_get_subtable_header_length(id);
 
 	/* Parse all entries looking for a match. */
 
-	entry = (struct acpi_subtable_header *)
-	    ((unsigned long)table_header + table_size);
+	entry = (void *)table_header + table_size;
+
+	while (((unsigned long)entry) + subtable_header_length  < table_end) {
+		unsigned long entry_type, entry_length;
 
-	while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) <
-	       table_end) {
 		if (max_entries && count >= max_entries)
 			break;
 
+		entry_type = acpi_get_entry_type(id, entry);
+		entry_length = acpi_get_entry_length(id, entry);
+
 		for (i = 0; i < proc_num; i++) {
-			if (entry->type != proc[i].id)
+			if (entry_type != proc[i].id)
 				continue;
 			if (!proc[i].handler ||
 			     (!errs && proc[i].handler(entry, table_end))) {
@@ -290,16 +321,15 @@ acpi_parse_entries_array(char *id, unsigned long table_size,
 			count++;
 
 		/*
-		 * If entry->length is 0, break from this loop to avoid
+		 * If entry_length is 0, break from this loop to avoid
 		 * infinite loop.
 		 */
-		if (entry->length == 0) {
+		if (entry_length == 0) {
 			pr_err("[%4.4s:0x%02x] Invalid zero length\n", id, proc->id);
 			return -EINVAL;
 		}
 
-		entry = (struct acpi_subtable_header *)
-		    ((unsigned long)entry + entry->length);
+		entry += entry_length;
 	}
 
 	if (max_entries && count > max_entries) {
-- 
2.14.3


* [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support
  2017-12-14  2:10 [PATCH v3 0/3] create sysfs representation of ACPI HMAT Ross Zwisler
  2017-12-14  2:10 ` [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array() Ross Zwisler
@ 2017-12-14  2:10 ` Ross Zwisler
  2017-12-15  0:52   ` Rafael J. Wysocki
  2017-12-14  2:10 ` [PATCH v3 3/3] hmat: add performance attributes Ross Zwisler
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-14  2:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

Add a new sysfs subsystem, /sys/devices/system/hmat, which surfaces
information about memory initiators and memory targets to the user.  These
initiators and targets are described by the ACPI SRAT and HMAT tables.

A "memory initiator" in this case is a NUMA node containing one or more
devices, such as CPUs or separate memory I/O devices, that can initiate
memory requests.  A "memory target" is a NUMA node containing at least
one CPU-accessible physical address range.

The key piece of information surfaced by this patch is the mapping between
the ACPI table "proximity domain" numbers, held in the "firmware_id"
attribute, and Linux NUMA node numbers.  Every ACPI proximity domain will
end up being a unique NUMA node in Linux, but the numbers may get reordered
and Linux can create extra NUMA nodes that don't map back to ACPI proximity
domains.  The firmware_id value is needed if anyone ever wants to look at
the ACPI HMAT and SRAT tables directly and make sense of how they map to
NUMA nodes in Linux.
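
As an illustration (hypothetical userspace code, not part of this series;
it assumes the mem_tgtX directory number matches the Linux NUMA node
number, as in the examples below), the proximity-domain-to-node mapping
could be dumped like this:

  #include <glob.h>
  #include <stdio.h>

  int main(void)
  {
  	glob_t g;

  	if (glob("/sys/devices/system/hmat/mem_tgt*/firmware_id", 0, NULL, &g))
  		return 1;

  	for (size_t i = 0; i < g.gl_pathc; i++) {
  		FILE *f = fopen(g.gl_pathv[i], "r");
  		int node, pxm;

  		/* path is .../mem_tgt<node>/firmware_id */
  		if (sscanf(g.gl_pathv[i],
  			   "/sys/devices/system/hmat/mem_tgt%d", &node) == 1 &&
  		    f && fscanf(f, "%d", &pxm) == 1)
  			printf("NUMA node %d <- ACPI proximity domain %d\n",
  			       node, pxm);
  		if (f)
  			fclose(f);
  	}
  	globfree(&g);
  	return 0;
  }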

Initiators are found at /sys/devices/system/hmat/mem_initX, and the
attributes for a given initiator look like this:

  # tree mem_init0
  mem_init0
  ├── firmware_id
  ├── node0 -> ../../node/node0
  ├── power
  │   ├── async
  │   ...
  ├── subsystem -> ../../../../bus/hmat
  └── uevent

Here "mem_init0" on my system represents the CPU acting as a memory
initiator at NUMA node 0.  Users can discover which CPUs are part of this
memory initiator by following the node0 symlink and looking at cpumap,
cpulist and the cpu* symlinks.
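
For example, a minimal hypothetical snippet that prints those CPUs by
reading cpulist through the symlink (the mem_init0/node0 names are taken
from the example above and will differ on other systems):

  #include <stdio.h>

  int main(void)
  {
  	char buf[4096];
  	FILE *f = fopen("/sys/devices/system/hmat/mem_init0/node0/cpulist", "r");

  	if (f && fgets(buf, sizeof(buf), f))
  		printf("CPUs in mem_init0: %s", buf);
  	if (f)
  		fclose(f);
  	return 0;
  }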

Targets are found at /sys/devices/system/hmat/mem_tgtX, and the attributes
for a given target look like this:

  # tree mem_tgt2
  mem_tgt2
  ├── firmware_id
  ├── is_cached
  ├── node2 -> ../../node/node2
  ├── power
  │   ├── async
  │   ...
  ├── subsystem -> ../../../../bus/hmat
  └── uevent

Users can discover information about the memory owned by this memory target
by following the node2 symlink and looking at meminfo, vmstat and the
memory* memory section symlinks.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 MAINTAINERS                   |   6 +
 drivers/acpi/Kconfig          |   1 +
 drivers/acpi/Makefile         |   1 +
 drivers/acpi/hmat/Kconfig     |   7 +
 drivers/acpi/hmat/Makefile    |   2 +
 drivers/acpi/hmat/core.c      | 536 ++++++++++++++++++++++++++++++++++++++++++
 drivers/acpi/hmat/hmat.h      |  47 ++++
 drivers/acpi/hmat/initiator.c |  43 ++++
 drivers/acpi/hmat/target.c    |  55 +++++
 9 files changed, 698 insertions(+)
 create mode 100644 drivers/acpi/hmat/Kconfig
 create mode 100644 drivers/acpi/hmat/Makefile
 create mode 100644 drivers/acpi/hmat/core.c
 create mode 100644 drivers/acpi/hmat/hmat.h
 create mode 100644 drivers/acpi/hmat/initiator.c
 create mode 100644 drivers/acpi/hmat/target.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 82ad0eabce4f..64ebec0708de 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6366,6 +6366,12 @@ S:	Supported
 F:	drivers/scsi/hisi_sas/
 F:	Documentation/devicetree/bindings/scsi/hisilicon-sas.txt
 
+HMAT - ACPI Heterogeneous Memory Attribute Table Support
+M:	Ross Zwisler <ross.zwisler@linux.intel.com>
+L:	linux-mm@kvack.org
+S:	Supported
+F:	drivers/acpi/hmat/
+
 HMM - Heterogeneous Memory Management
 M:	Jérôme Glisse <jglisse@redhat.com>
 L:	linux-mm@kvack.org
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 46505396869e..21cdd1288430 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -466,6 +466,7 @@ config ACPI_REDUCED_HARDWARE_ONLY
 	  If you are unsure what to do, do not enable this option.
 
 source "drivers/acpi/nfit/Kconfig"
+source "drivers/acpi/hmat/Kconfig"
 
 source "drivers/acpi/apei/Kconfig"
 source "drivers/acpi/dptf/Kconfig"
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index 41954a601989..ed5eab6b0412 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -75,6 +75,7 @@ obj-$(CONFIG_ACPI_PROCESSOR)	+= processor.o
 obj-$(CONFIG_ACPI)		+= container.o
 obj-$(CONFIG_ACPI_THERMAL)	+= thermal.o
 obj-$(CONFIG_ACPI_NFIT)		+= nfit/
+obj-$(CONFIG_ACPI_HMAT)		+= hmat/
 obj-$(CONFIG_ACPI)		+= acpi_memhotplug.o
 obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o
 obj-$(CONFIG_ACPI_BATTERY)	+= battery.o
diff --git a/drivers/acpi/hmat/Kconfig b/drivers/acpi/hmat/Kconfig
new file mode 100644
index 000000000000..954ad4701005
--- /dev/null
+++ b/drivers/acpi/hmat/Kconfig
@@ -0,0 +1,7 @@
+config ACPI_HMAT
+	bool "ACPI Heterogeneous Memory Attribute Table Support"
+	depends on ACPI_NUMA
+	depends on SYSFS
+	help
+	  Exports a sysfs representation of the ACPI Heterogeneous Memory
+	  Attributes Table (HMAT).
diff --git a/drivers/acpi/hmat/Makefile b/drivers/acpi/hmat/Makefile
new file mode 100644
index 000000000000..edf4bcb1c97d
--- /dev/null
+++ b/drivers/acpi/hmat/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_ACPI_HMAT) := hmat.o
+hmat-y := core.o initiator.o target.o
diff --git a/drivers/acpi/hmat/core.c b/drivers/acpi/hmat/core.c
new file mode 100644
index 000000000000..61b90dadf84b
--- /dev/null
+++ b/drivers/acpi/hmat/core.c
@@ -0,0 +1,536 @@
+/*
+ * Heterogeneous Memory Attributes Table (HMAT) representation in sysfs
+ *
+ * Copyright (c) 2017, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <acpi/acpi_numa.h>
+#include <linux/acpi.h>
+#include <linux/cpu.h>
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include "hmat.h"
+
+static LIST_HEAD(target_list);
+static LIST_HEAD(initiator_list);
+
+static bool bad_hmat;
+
+static int link_node_for_kobj(unsigned int node, struct kobject *kobj)
+{
+	if (node_devices[node])
+		return sysfs_create_link(kobj, &node_devices[node]->dev.kobj,
+				kobject_name(&node_devices[node]->dev.kobj));
+	return 0;
+}
+
+static void remove_node_for_kobj(unsigned int node, struct kobject *kobj)
+{
+	if (node_devices[node])
+		sysfs_remove_link(kobj,
+				kobject_name(&node_devices[node]->dev.kobj));
+}
+
+#define HMAT_CLASS_NAME	"hmat"
+
+static struct bus_type hmat_subsys = {
+	/*
+	 * .dev_name is set before device_register() based on the type of
+	 * device we are registering.
+	 */
+	.name = HMAT_CLASS_NAME,
+};
+
+/* memory initiators */
+static void release_memory_initiator(struct device *dev)
+{
+	struct memory_initiator *init = to_memory_initiator(dev);
+
+	list_del(&init->list);
+	kfree(init);
+}
+
+static void __init remove_memory_initiator(struct memory_initiator *init)
+{
+	if (init->is_registered) {
+		remove_node_for_kobj(pxm_to_node(init->pxm), &init->dev.kobj);
+		device_unregister(&init->dev);
+	} else
+		release_memory_initiator(&init->dev);
+}
+
+static int __init register_memory_initiator(struct memory_initiator *init)
+{
+	int ret;
+
+	hmat_subsys.dev_name = "mem_init";
+	init->dev.bus = &hmat_subsys;
+	init->dev.id = pxm_to_node(init->pxm);
+	init->dev.release = release_memory_initiator;
+	init->dev.groups = memory_initiator_attribute_groups;
+
+	ret = device_register(&init->dev);
+	if (ret < 0)
+		return ret;
+
+	init->is_registered = true;
+	return link_node_for_kobj(pxm_to_node(init->pxm), &init->dev.kobj);
+}
+
+static struct memory_initiator * __init add_memory_initiator(int pxm)
+{
+	struct memory_initiator *init;
+
+	if (pxm_to_node(pxm) == NUMA_NO_NODE) {
+		pr_err("HMAT: No NUMA node for PXM %d\n", pxm);
+		bad_hmat = true;
+		return ERR_PTR(-EINVAL);
+	}
+
+	/*
+	 * Make sure we haven't already added an initiator for this proximity
+	 * domain.  We don't care about any other differences in the SRAT
+	 * tables (apic_id, etc), so we just use the data from the first table
+	 * we see for a given proximity domain.
+	 */
+	list_for_each_entry(init, &initiator_list, list)
+		if (init->pxm == pxm)
+			return 0;
+
+	init = kzalloc(sizeof(*init), GFP_KERNEL);
+	if (!init) {
+		bad_hmat = true;
+		return ERR_PTR(-ENOMEM);
+	}
+
+	init->pxm = pxm;
+
+	list_add_tail(&init->list, &initiator_list);
+	return init;
+}
+
+/* memory targets */
+static void release_memory_target(struct device *dev)
+{
+	struct memory_target *tgt = to_memory_target(dev);
+
+	list_del(&tgt->list);
+	kfree(tgt);
+}
+
+static void __init remove_memory_target(struct memory_target *tgt)
+{
+	if (tgt->is_registered) {
+		remove_node_for_kobj(pxm_to_node(tgt->ma->proximity_domain),
+				&tgt->dev.kobj);
+		device_unregister(&tgt->dev);
+	} else
+		release_memory_target(&tgt->dev);
+}
+
+static int __init register_memory_target(struct memory_target *tgt)
+{
+	int ret;
+
+	if (!tgt->ma || !tgt->spa) {
+		pr_err("HMAT: Incomplete memory target found\n");
+		return -EINVAL;
+	}
+
+	hmat_subsys.dev_name = "mem_tgt";
+	tgt->dev.bus = &hmat_subsys;
+	tgt->dev.id = pxm_to_node(tgt->ma->proximity_domain);
+	tgt->dev.release = release_memory_target;
+	tgt->dev.groups = memory_target_attribute_groups;
+
+	ret = device_register(&tgt->dev);
+	if (ret < 0)
+		return ret;
+
+	tgt->is_registered = true;
+
+	return link_node_for_kobj(pxm_to_node(tgt->ma->proximity_domain),
+			&tgt->dev.kobj);
+}
+
+static int __init add_memory_target(struct acpi_srat_mem_affinity *ma)
+{
+	struct memory_target *tgt;
+
+	if (pxm_to_node(ma->proximity_domain) == NUMA_NO_NODE) {
+		pr_err("HMAT: No NUMA node for PXM %d\n", ma->proximity_domain);
+		bad_hmat = true;
+		return -EINVAL;
+	}
+
+	/*
+	 * Make sure we haven't already added a target for this proximity
+	 * domain.  We don't care about any other differences in the SRAT
+	 * tables (base_address, length), so we just use the data from the
+	 * first table we see for a given proximity domain.
+	 */
+	list_for_each_entry(tgt, &target_list, list)
+		if (tgt->ma->proximity_domain == ma->proximity_domain)
+			return 0;
+
+	tgt = kzalloc(sizeof(*tgt), GFP_KERNEL);
+	if (!tgt) {
+		bad_hmat = true;
+		return -ENOMEM;
+	}
+
+	tgt->ma = ma;
+
+	list_add_tail(&tgt->list, &target_list);
+	return 0;
+}
+
+/* ACPI parsing code, starting with the HMAT */
+static int __init hmat_noop_parse(struct acpi_table_header *table)
+{
+	/* real work done by the hmat_parse_* and srat_parse_* routines */
+	return 0;
+}
+
+static bool __init hmat_spa_matches_srat(struct acpi_hmat_address_range *spa,
+		struct acpi_srat_mem_affinity *ma)
+{
+	if (spa->physical_address_base != ma->base_address ||
+	    spa->physical_address_length != ma->length)
+		return false;
+
+	return true;
+}
+
+static void find_local_initiator(struct memory_target *tgt)
+{
+	struct memory_initiator *init;
+
+	if (!(tgt->spa->flags & ACPI_HMAT_PROCESSOR_PD_VALID) ||
+			pxm_to_node(tgt->spa->processor_PD) == NUMA_NO_NODE)
+		return;
+
+	list_for_each_entry(init, &initiator_list, list) {
+		if (init->pxm == tgt->spa->processor_PD) {
+			tgt->local_init = init;
+			return;
+		}
+	}
+}
+
+/* ACPI HMAT parsing routines */
+static int __init
+hmat_parse_address_range(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_hmat_address_range *spa;
+	struct memory_target *tgt;
+
+	if (bad_hmat)
+		return 0;
+
+	spa = (struct acpi_hmat_address_range *)header;
+	if (!spa) {
+		pr_err("HMAT: NULL table entry\n");
+		goto err;
+	}
+
+	if (spa->header.length != sizeof(*spa)) {
+		pr_err("HMAT: Unexpected header length: %d\n",
+				spa->header.length);
+		goto err;
+	}
+
+	list_for_each_entry(tgt, &target_list, list) {
+		if ((spa->flags & ACPI_HMAT_MEMORY_PD_VALID) &&
+				spa->memory_PD == tgt->ma->proximity_domain) {
+			/*
+			 * We only add a single HMAT target per proximity
+			 * domain so we wait for the one that matches the
+			 * single SRAT memory affinity structure per PXM we
+			 * saved in add_memory_target().
+			 */
+			if (hmat_spa_matches_srat(spa, tgt->ma)) {
+				tgt->spa = spa;
+				find_local_initiator(tgt);
+			}
+			return 0;
+		}
+	}
+
+	return 0;
+err:
+	bad_hmat = true;
+	return -EINVAL;
+}
+
+static int __init hmat_parse_cache(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_hmat_cache *cache;
+	struct memory_target *tgt;
+
+	if (bad_hmat)
+		return 0;
+
+	cache = (struct acpi_hmat_cache *)header;
+	if (!cache) {
+		pr_err("HMAT: NULL table entry\n");
+		goto err;
+	}
+
+	if (cache->header.length < sizeof(*cache)) {
+		pr_err("HMAT: Unexpected header length: %d\n",
+				cache->header.length);
+		goto err;
+	}
+
+	list_for_each_entry(tgt, &target_list, list) {
+		if (cache->memory_PD == tgt->ma->proximity_domain) {
+			tgt->is_cached = true;
+			return 0;
+		}
+	}
+
+	pr_err("HMAT: Couldn't find cached target PXM %d\n", cache->memory_PD);
+err:
+	bad_hmat = true;
+	return -EINVAL;
+}
+
+/*
+ * SRAT parsing.  We use srat_disabled() and pxm_to_node() so we don't redo
+ * any of the SRAT sanity checking done in drivers/acpi/numa.c.
+ */
+static int __init
+srat_parse_processor_affinity(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_srat_cpu_affinity *cpu;
+	struct memory_initiator *init;
+	u32 pxm;
+
+	if (bad_hmat)
+		return 0;
+
+	cpu = (struct acpi_srat_cpu_affinity *)header;
+	if (!cpu) {
+		pr_err("HMAT: NULL table entry\n");
+		bad_hmat = true;
+		return -EINVAL;
+	}
+
+	pxm = cpu->proximity_domain_lo;
+	if (acpi_srat_revision >= 2)
+		pxm |= *((unsigned int *)cpu->proximity_domain_hi) << 8;
+
+	if (!(cpu->flags & ACPI_SRAT_CPU_ENABLED))
+		return 0;
+
+	init = add_memory_initiator(pxm);
+	if (IS_ERR_OR_NULL(init))
+		return PTR_ERR(init);
+
+	init->cpu = cpu;
+	return 0;
+}
+
+static int __init
+srat_parse_x2apic_affinity(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_srat_x2apic_cpu_affinity *x2apic;
+	struct memory_initiator *init;
+
+	if (bad_hmat)
+		return 0;
+
+	x2apic = (struct acpi_srat_x2apic_cpu_affinity *)header;
+	if (!x2apic) {
+		pr_err("HMAT: NULL table entry\n");
+		bad_hmat = true;
+		return -EINVAL;
+	}
+
+	if (!(x2apic->flags & ACPI_SRAT_CPU_ENABLED))
+		return 0;
+
+	init = add_memory_initiator(x2apic->proximity_domain);
+	if (IS_ERR_OR_NULL(init))
+		return PTR_ERR(init);
+
+	init->x2apic = x2apic;
+	return 0;
+}
+
+static int __init
+srat_parse_gicc_affinity(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_srat_gicc_affinity *gicc;
+	struct memory_initiator *init;
+
+	if (bad_hmat)
+		return 0;
+
+	gicc = (struct acpi_srat_gicc_affinity *)header;
+	if (!gicc) {
+		pr_err("HMAT: NULL table entry\n");
+		bad_hmat = true;
+		return -EINVAL;
+	}
+
+	if (!(gicc->flags & ACPI_SRAT_GICC_ENABLED))
+		return 0;
+
+	init = add_memory_initiator(gicc->proximity_domain);
+	if (IS_ERR_OR_NULL(init))
+		return PTR_ERR(init);
+
+	init->gicc = gicc;
+	return 0;
+}
+
+static int __init
+srat_parse_memory_affinity(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_srat_mem_affinity *ma;
+
+	if (bad_hmat)
+		return 0;
+
+	ma = (struct acpi_srat_mem_affinity *)header;
+	if (!ma) {
+		pr_err("HMAT: NULL table entry\n");
+		bad_hmat = true;
+		return -EINVAL;
+	}
+
+	if (!(ma->flags & ACPI_SRAT_MEM_ENABLED))
+		return 0;
+
+	return add_memory_target(ma);
+}
+
+/*
+ * Remove our sysfs entries, unregister our devices and free allocated memory.
+ */
+static void hmat_cleanup(void)
+{
+	struct memory_initiator *init, *init_iter;
+	struct memory_target *tgt, *tgt_iter;
+
+	list_for_each_entry_safe(tgt, tgt_iter, &target_list, list)
+		remove_memory_target(tgt);
+
+	list_for_each_entry_safe(init, init_iter, &initiator_list, list)
+		remove_memory_initiator(init);
+}
+
+static int __init hmat_init(void)
+{
+	struct acpi_table_header *tbl;
+	struct memory_initiator *init;
+	struct memory_target *tgt;
+	acpi_status status = AE_OK;
+	int ret;
+
+	if (srat_disabled())
+		return 0;
+
+	/*
+	 * We take a permanent reference to both the HMAT and SRAT in ACPI
+	 * memory so we can keep pointers to their subtables.  These tables
+	 * already had references on them which would never be released, taken
+	 * by acpi_sysfs_init(), so this shouldn't negatively impact anything.
+	 */
+	status = acpi_get_table(ACPI_SIG_SRAT, 0, &tbl);
+	if (ACPI_FAILURE(status))
+		return 0;
+
+	status = acpi_get_table(ACPI_SIG_HMAT, 0, &tbl);
+	if (ACPI_FAILURE(status))
+		return 0;
+
+	ret = subsys_system_register(&hmat_subsys, NULL);
+	if (ret)
+		return ret;
+
+	if (!acpi_table_parse(ACPI_SIG_SRAT, hmat_noop_parse)) {
+		struct acpi_subtable_proc srat_proc[4];
+
+		memset(srat_proc, 0, sizeof(srat_proc));
+		srat_proc[0].id = ACPI_SRAT_TYPE_CPU_AFFINITY;
+		srat_proc[0].handler = srat_parse_processor_affinity;
+		srat_proc[1].id = ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY;
+		srat_proc[1].handler = srat_parse_x2apic_affinity;
+		srat_proc[2].id = ACPI_SRAT_TYPE_GICC_AFFINITY;
+		srat_proc[2].handler = srat_parse_gicc_affinity;
+		srat_proc[3].id = ACPI_SRAT_TYPE_MEMORY_AFFINITY;
+		srat_proc[3].handler = srat_parse_memory_affinity;
+
+		acpi_table_parse_entries_array(ACPI_SIG_SRAT,
+					sizeof(struct acpi_table_srat),
+					srat_proc, ARRAY_SIZE(srat_proc), 0);
+	}
+
+	if (!acpi_table_parse(ACPI_SIG_HMAT, hmat_noop_parse)) {
+		struct acpi_subtable_proc hmat_proc[2];
+
+		memset(hmat_proc, 0, sizeof(hmat_proc));
+		hmat_proc[0].id = ACPI_HMAT_TYPE_ADDRESS_RANGE;
+		hmat_proc[0].handler = hmat_parse_address_range;
+		hmat_proc[1].id = ACPI_HMAT_TYPE_CACHE;
+		hmat_proc[1].handler = hmat_parse_cache;
+
+		acpi_table_parse_entries_array(ACPI_SIG_HMAT,
+					sizeof(struct acpi_table_hmat),
+					hmat_proc, ARRAY_SIZE(hmat_proc), 0);
+	}
+
+	if (bad_hmat) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	list_for_each_entry(init, &initiator_list, list) {
+		ret = register_memory_initiator(init);
+		if (ret)
+			goto err;
+	}
+
+	list_for_each_entry(tgt, &target_list, list) {
+		ret = register_memory_target(tgt);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+err:
+	pr_err("HMAT: Error during initialization\n");
+	hmat_cleanup();
+	return ret;
+}
+
+static __exit void hmat_exit(void)
+{
+	hmat_cleanup();
+}
+
+module_init(hmat_init);
+module_exit(hmat_exit);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Intel Corporation");
diff --git a/drivers/acpi/hmat/hmat.h b/drivers/acpi/hmat/hmat.h
new file mode 100644
index 000000000000..108aad1f8ad7
--- /dev/null
+++ b/drivers/acpi/hmat/hmat.h
@@ -0,0 +1,47 @@
+/*
+ * Heterogeneous Memory Attributes Table (HMAT) representation in sysfs
+ *
+ * Copyright (c) 2017, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#ifndef _ACPI_HMAT_H_
+#define _ACPI_HMAT_H_
+
+struct memory_initiator {
+	struct list_head list;
+	struct device dev;
+
+	/* only one of the following three will be set */
+	struct acpi_srat_cpu_affinity *cpu;
+	struct acpi_srat_x2apic_cpu_affinity *x2apic;
+	struct acpi_srat_gicc_affinity *gicc;
+
+	int pxm;
+	bool is_registered;
+};
+#define to_memory_initiator(d) container_of((d), struct memory_initiator, dev)
+
+struct memory_target {
+	struct list_head list;
+	struct device dev;
+	struct acpi_srat_mem_affinity *ma;
+	struct acpi_hmat_address_range *spa;
+	struct memory_initiator *local_init;
+
+	bool is_cached;
+	bool is_registered;
+};
+#define to_memory_target(d) container_of((d), struct memory_target, dev)
+
+extern const struct attribute_group *memory_initiator_attribute_groups[];
+extern const struct attribute_group *memory_target_attribute_groups[];
+#endif /* _ACPI_HMAT_H_ */
diff --git a/drivers/acpi/hmat/initiator.c b/drivers/acpi/hmat/initiator.c
new file mode 100644
index 000000000000..be2bf2b58940
--- /dev/null
+++ b/drivers/acpi/hmat/initiator.c
@@ -0,0 +1,43 @@
+/*
+ * Heterogeneous Memory Attributes Table (HMAT) sysfs initiator representation
+ *
+ * Copyright (c) 2017, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <acpi/acpi_numa.h>
+#include <linux/acpi.h>
+#include <linux/device.h>
+#include <linux/sysfs.h>
+#include "hmat.h"
+
+static ssize_t firmware_id_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct memory_initiator *init = to_memory_initiator(dev);
+
+	return sprintf(buf, "%d\n", init->pxm);
+}
+static DEVICE_ATTR_RO(firmware_id);
+
+static struct attribute *memory_initiator_attributes[] = {
+	&dev_attr_firmware_id.attr,
+	NULL,
+};
+
+static struct attribute_group memory_initiator_attribute_group = {
+	.attrs = memory_initiator_attributes,
+};
+
+const struct attribute_group *memory_initiator_attribute_groups[] = {
+	&memory_initiator_attribute_group,
+	NULL,
+};
diff --git a/drivers/acpi/hmat/target.c b/drivers/acpi/hmat/target.c
new file mode 100644
index 000000000000..2a9b44d5f44c
--- /dev/null
+++ b/drivers/acpi/hmat/target.c
@@ -0,0 +1,55 @@
+/*
+ * Heterogeneous Memory Attributes Table (HMAT) sysfs target representation
+ *
+ * Copyright (c) 2017, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <acpi/acpi_numa.h>
+#include <linux/acpi.h>
+#include <linux/device.h>
+#include <linux/sysfs.h>
+#include "hmat.h"
+
+/* attributes for memory targets */
+static ssize_t firmware_id_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct memory_target *tgt = to_memory_target(dev);
+
+	return sprintf(buf, "%d\n", tgt->ma->proximity_domain);
+}
+static DEVICE_ATTR_RO(firmware_id);
+
+static ssize_t is_cached_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct memory_target *tgt = to_memory_target(dev);
+
+	return sprintf(buf, "%d\n", tgt->is_cached);
+}
+static DEVICE_ATTR_RO(is_cached);
+
+static struct attribute *memory_target_attributes[] = {
+	&dev_attr_firmware_id.attr,
+	&dev_attr_is_cached.attr,
+	NULL
+};
+
+/* attributes which are present for all memory targets */
+static struct attribute_group memory_target_attribute_group = {
+	.attrs = memory_target_attributes,
+};
+
+const struct attribute_group *memory_target_attribute_groups[] = {
+	&memory_target_attribute_group,
+	NULL,
+};
-- 
2.14.3


* [PATCH v3 3/3] hmat: add performance attributes
  2017-12-14  2:10 [PATCH v3 0/3] create sysfs representation of ACPI HMAT Ross Zwisler
  2017-12-14  2:10 ` [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array() Ross Zwisler
  2017-12-14  2:10 ` [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support Ross Zwisler
@ 2017-12-14  2:10 ` Ross Zwisler
  2017-12-14 13:00 ` [PATCH v3 0/3] create sysfs representation of ACPI HMAT Michal Hocko
  2017-12-22  3:09 ` Anshuman Khandual
  4 siblings, 0 replies; 42+ messages in thread
From: Ross Zwisler @ 2017-12-14  2:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: Ross Zwisler, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

Add performance information found in the HMAT to the sysfs representation.
This information lives as an attribute group named "local_init" in the
memory target:

  # tree mem_tgt2
  mem_tgt2
  ├── firmware_id
  ├── is_cached
  ├── local_init
  │   ├── mem_init0 -> ../../mem_init0
  │   ├── mem_init1 -> ../../mem_init1
  │   ├── read_bw_MBps
  │   ├── read_lat_nsec
  │   ├── write_bw_MBps
  │   └── write_lat_nsec
  ├── node2 -> ../../node/node2
  ├── power
  │   ├── async
  │   ...
  ├── subsystem -> ../../../../bus/hmat
  └── uevent

This attribute group surfaces latency and bandwidth performance for a
memory target and its local initiators.  For example:

  # grep . mem_tgt2/local_init/* 2>/dev/null
  mem_tgt2/local_init/read_bw_MBps:30720
  mem_tgt2/local_init/read_lat_nsec:100
  mem_tgt2/local_init/write_bw_MBps:30720
  mem_tgt2/local_init/write_lat_nsec:100

The initiators also have a symlink to their local targets:

  # ls -l mem_init0/mem_tgt2
  lrwxrwxrwx. 1 root root 0 Dec 13 16:45 mem_init0/mem_tgt2 -> ../mem_tgt2

We create performance attribute groups only for local (initiator,target)
pairings, where the first local initiator for a given target is defined by
the "Processor Proximity Domain" field in the HMAT's Memory Subsystem
Address Range Structure.  Once we have that first local initiator we scan
the performance data and also link any other initiators that have the same
local performance to the given memory target.
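
A hypothetical userspace sketch (not part of this series) that lists the
initiators linked as local to mem_tgt2 by walking its local_init group:

  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
  	const char *dir = "/sys/devices/system/hmat/mem_tgt2/local_init";
  	struct dirent *de;
  	DIR *d = opendir(dir);

  	if (!d)
  		return 1;

  	while ((de = readdir(d)) != NULL) {
  		/* the initiator links are named mem_initX */
  		if (strncmp(de->d_name, "mem_init", 8) == 0)
  			printf("local initiator: %s\n", de->d_name);
  	}
  	closedir(d);
  	return 0;
  }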

A given target only has one set of local performance values, so each target
will have at most one "local_init" attribute group, though that group can
contain links to multiple initiators that all have local performance.  A
given memory initiator may have multiple local memory targets, so multiple
"mem_tgtX" links may exist for a given initiator.

If a given memory target is cached we give performance numbers only for the
media itself, and rely on the "is_cached" attribute to represent the
fact that there is a caching layer.

We only expose a subset of the performance information presented in the
HMAT via sysfs.  This is a compromise, driven by the fact that those local
pairings will be the highest performing and that representing all possible
paths could cause an unmanageable explosion of sysfs entries.

If we dump everything from the HMAT into sysfs we end up with
O(num_targets * num_initiators * num_caching_levels) attributes.  Each of
these attributes only takes up 2 bytes in a System Locality Latency and
Bandwidth Information Structure, but if we have to create a directory entry
for each it becomes much more expensive.

For example, very large systems today can have on the order of thousands of
NUMA nodes.  Say we have a system which used to have 1,000 NUMA nodes that
each had both a CPU and local memory.  The HMAT allows us to separate the
CPUs and memory into separate NUMA nodes, so we can end up with 1,000 CPU
initiator NUMA nodes and 1,000 memory target NUMA nodes.  If we represented
the performance information for each possible CPU/memory pair in sysfs we
would end up with 1,000,000 attribute groups.

This is a lot to pass in a set of packed data tables, but I think we'll
break sysfs if we try to create millions of attributes, regardless of how
we nest them in a directory hierarchy.

By only representing performance information for local (initiator,target)
pairings, we reduce the number of sysfs entries to O(num_targets).

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/acpi/hmat/Makefile          |   2 +-
 drivers/acpi/hmat/core.c            | 263 +++++++++++++++++++++++++++++++++++-
 drivers/acpi/hmat/hmat.h            |  17 +++
 drivers/acpi/hmat/perf_attributes.c |  56 ++++++++
 4 files changed, 336 insertions(+), 2 deletions(-)
 create mode 100644 drivers/acpi/hmat/perf_attributes.c

diff --git a/drivers/acpi/hmat/Makefile b/drivers/acpi/hmat/Makefile
index edf4bcb1c97d..b5d1d83684da 100644
--- a/drivers/acpi/hmat/Makefile
+++ b/drivers/acpi/hmat/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_ACPI_HMAT) := hmat.o
-hmat-y := core.o initiator.o target.o
+hmat-y := core.o initiator.o target.o perf_attributes.o
diff --git a/drivers/acpi/hmat/core.c b/drivers/acpi/hmat/core.c
index 61b90dadf84b..89e84658fc73 100644
--- a/drivers/acpi/hmat/core.c
+++ b/drivers/acpi/hmat/core.c
@@ -23,11 +23,225 @@
 #include <linux/slab.h>
 #include "hmat.h"
 
+#define NO_VALUE	-1
+#define LOCAL_INIT	"local_init"
+
 static LIST_HEAD(target_list);
 static LIST_HEAD(initiator_list);
+LIST_HEAD(locality_list);
 
 static bool bad_hmat;
 
+/* Performance attributes for an initiator/target pair. */
+static int get_performance_data(u32 init_pxm, u32 tgt_pxm,
+		struct acpi_hmat_locality *hmat_loc)
+{
+	int num_init = hmat_loc->number_of_initiator_Pds;
+	int num_tgt = hmat_loc->number_of_target_Pds;
+	int init_idx = NO_VALUE;
+	int tgt_idx = NO_VALUE;
+	u32 *initiators, *targets;
+	u16 *entries, val;
+	int i;
+
+	/* the initiator array is after the struct acpi_hmat_locality fields */
+	initiators = (u32 *)(hmat_loc + 1);
+	targets = &initiators[num_init];
+	entries = (u16 *)&targets[num_tgt];
+
+	for (i = 0; i < num_init; i++) {
+		if (initiators[i] == init_pxm) {
+			init_idx = i;
+			break;
+		}
+	}
+
+	if (init_idx == NO_VALUE)
+		return NO_VALUE;
+
+	for (i = 0; i < num_tgt; i++) {
+		if (targets[i] == tgt_pxm) {
+			tgt_idx = i;
+			break;
+		}
+	}
+
+	if (tgt_idx == NO_VALUE)
+		return NO_VALUE;
+
+	val = entries[init_idx*num_tgt + tgt_idx];
+	if (val < 10 || val == 0xFFFF)
+		return NO_VALUE;
+
+	return (int)(val * hmat_loc->entry_base_unit) / 10;
+}
+
+/*
+ * 'direction' is either READ or WRITE
+ * Latency is reported in nanoseconds and bandwidth is reported in MB/s.
+ */
+static int hmat_get_attribute(int init_pxm, int tgt_pxm, int direction,
+		enum hmat_attr_type type)
+{
+	struct memory_locality *loc;
+	int value;
+
+	list_for_each_entry(loc, &locality_list, list) {
+		struct acpi_hmat_locality *hmat_loc = loc->hmat_loc;
+
+		if (direction == READ && type == LATENCY &&
+		    (hmat_loc->data_type == ACPI_HMAT_ACCESS_LATENCY ||
+		     hmat_loc->data_type == ACPI_HMAT_READ_LATENCY)) {
+			value = get_performance_data(init_pxm, tgt_pxm,
+					hmat_loc);
+			if (value != NO_VALUE)
+				return value;
+		}
+
+		if (direction == WRITE && type == LATENCY &&
+		    (hmat_loc->data_type == ACPI_HMAT_ACCESS_LATENCY ||
+		     hmat_loc->data_type == ACPI_HMAT_WRITE_LATENCY)) {
+			value = get_performance_data(init_pxm, tgt_pxm,
+					hmat_loc);
+			if (value != NO_VALUE)
+				return value;
+		}
+
+		if (direction == READ && type == BANDWIDTH &&
+		    (hmat_loc->data_type == ACPI_HMAT_ACCESS_BANDWIDTH ||
+		     hmat_loc->data_type == ACPI_HMAT_READ_BANDWIDTH)) {
+			value = get_performance_data(init_pxm, tgt_pxm,
+					hmat_loc);
+			if (value != NO_VALUE)
+				return value;
+		}
+
+		if (direction == WRITE && type == BANDWIDTH &&
+		    (hmat_loc->data_type == ACPI_HMAT_ACCESS_BANDWIDTH ||
+		     hmat_loc->data_type == ACPI_HMAT_WRITE_BANDWIDTH)) {
+			value = get_performance_data(init_pxm, tgt_pxm,
+					hmat_loc);
+			if (value != NO_VALUE)
+				return value;
+		}
+	}
+
+	return NO_VALUE;
+}
+
+/*
+ * 'direction' is either READ or WRITE
+ * Latency is reported in nanoseconds and bandwidth is reported in MB/s.
+ */
+int hmat_local_attribute(struct device *tgt_dev, int direction,
+		enum hmat_attr_type type)
+{
+	struct memory_target *tgt = to_memory_target(tgt_dev);
+	int tgt_pxm = tgt->ma->proximity_domain;
+	int init_pxm;
+
+	if (!tgt->local_init)
+		return NO_VALUE;
+
+	init_pxm = tgt->local_init->pxm;
+	return hmat_get_attribute(init_pxm, tgt_pxm, direction, type);
+}
+
+static bool is_local_init(int init_pxm, int tgt_pxm, int read_lat,
+		int write_lat, int read_bw, int write_bw)
+{
+	if (read_lat != hmat_get_attribute(init_pxm, tgt_pxm, READ, LATENCY))
+		return false;
+
+	if (write_lat != hmat_get_attribute(init_pxm, tgt_pxm, WRITE, LATENCY))
+		return false;
+
+	if (read_bw != hmat_get_attribute(init_pxm, tgt_pxm, READ, BANDWIDTH))
+		return false;
+
+	if (write_bw != hmat_get_attribute(init_pxm, tgt_pxm, WRITE, BANDWIDTH))
+		return false;
+
+	return true;
+}
+
+static const struct attribute_group performance_attribute_group = {
+	.attrs = performance_attributes,
+	.name = LOCAL_INIT,
+};
+
+static void remove_performance_attributes(struct memory_target *tgt)
+{
+	if (!tgt->local_init)
+		return;
+
+	/*
+	 * FIXME: Need to enhance the core sysfs code to remove all the links
+	 * in both the attribute group and in the device itself when those are
+	 * removed.
+	 */
+	sysfs_remove_group(&tgt->dev.kobj, &performance_attribute_group);
+}
+
+static int add_performance_attributes(struct memory_target *tgt)
+{
+	int read_lat, write_lat, read_bw, write_bw;
+	int tgt_pxm = tgt->ma->proximity_domain;
+	struct kobject *init_kobj, *tgt_kobj;
+	struct device *init_dev, *tgt_dev;
+	struct memory_initiator *init;
+	int ret;
+
+	if (!tgt->local_init)
+		return 0;
+
+	tgt_dev = &tgt->dev;
+	tgt_kobj = &tgt_dev->kobj;
+
+	/* Create entries for initiator/target pair in the target.  */
+	ret = sysfs_create_group(tgt_kobj, &performance_attribute_group);
+	if (ret < 0)
+		return ret;
+
+	/*
+	 * Iterate through initiators and find all the ones that have the same
+	 * performance as the local initiator.
+	 */
+	read_lat = hmat_local_attribute(tgt_dev, READ, LATENCY);
+	write_lat = hmat_local_attribute(tgt_dev, WRITE, LATENCY);
+	read_bw = hmat_local_attribute(tgt_dev, READ, BANDWIDTH);
+	write_bw = hmat_local_attribute(tgt_dev, WRITE, BANDWIDTH);
+
+	list_for_each_entry(init, &initiator_list, list) {
+		init_dev = &init->dev;
+		init_kobj = &init_dev->kobj;
+
+		if (init == tgt->local_init ||
+			is_local_init(init->pxm, tgt_pxm, read_lat,
+				write_lat, read_bw, write_bw)) {
+			ret = sysfs_add_link_to_group(tgt_kobj, LOCAL_INIT,
+					init_kobj, dev_name(init_dev));
+			if (ret < 0)
+				goto err;
+
+			/*
+			 * Create a link in the local initiator to this
+			 * target.
+			 */
+			ret = sysfs_create_link(init_kobj, tgt_kobj,
+					kobject_name(tgt_kobj));
+			if (ret < 0)
+				goto err;
+		}
+
+	}
+	tgt->has_perf_attributes = true;
+	return 0;
+err:
+	remove_performance_attributes(tgt);
+	return ret;
+}
+
 static int link_node_for_kobj(unsigned int node, struct kobject *kobj)
 {
 	if (node_devices[node])
@@ -132,6 +346,9 @@ static void release_memory_target(struct device *dev)
 
 static void __init remove_memory_target(struct memory_target *tgt)
 {
+	if (tgt->has_perf_attributes)
+		remove_performance_attributes(tgt);
+
 	if (tgt->is_registered) {
 		remove_node_for_kobj(pxm_to_node(tgt->ma->proximity_domain),
 				&tgt->dev.kobj);
@@ -276,6 +493,38 @@ hmat_parse_address_range(struct acpi_subtable_header *header,
 	return -EINVAL;
 }
 
+static int __init hmat_parse_locality(struct acpi_subtable_header *header,
+		const unsigned long end)
+{
+	struct acpi_hmat_locality *hmat_loc;
+	struct memory_locality *loc;
+
+	if (bad_hmat)
+		return 0;
+
+	hmat_loc = (struct acpi_hmat_locality *)header;
+	if (!hmat_loc) {
+		pr_err("HMAT: NULL table entry\n");
+		bad_hmat = true;
+		return -EINVAL;
+	}
+
+	/* We don't report cached performance information in sysfs. */
+	if (hmat_loc->flags == ACPI_HMAT_MEMORY ||
+			hmat_loc->flags == ACPI_HMAT_LAST_LEVEL_CACHE) {
+		loc = kzalloc(sizeof(*loc), GFP_KERNEL);
+		if (!loc) {
+			bad_hmat = true;
+			return -ENOMEM;
+		}
+
+		loc->hmat_loc = hmat_loc;
+		list_add_tail(&loc->list, &locality_list);
+	}
+
+	return 0;
+}
+
 static int __init hmat_parse_cache(struct acpi_subtable_header *header,
 		const unsigned long end)
 {
@@ -431,6 +680,7 @@ srat_parse_memory_affinity(struct acpi_subtable_header *header,
 static void hmat_cleanup(void)
 {
 	struct memory_initiator *init, *init_iter;
+	struct memory_locality *loc, *loc_iter;
 	struct memory_target *tgt, *tgt_iter;
 
 	list_for_each_entry_safe(tgt, tgt_iter, &target_list, list)
@@ -438,6 +688,11 @@ static void hmat_cleanup(void)
 
 	list_for_each_entry_safe(init, init_iter, &initiator_list, list)
 		remove_memory_initiator(init);
+
+	list_for_each_entry_safe(loc, loc_iter, &locality_list, list) {
+		list_del(&loc->list);
+		kfree(loc);
+	}
 }
 
 static int __init hmat_init(void)
@@ -488,13 +743,15 @@ static int __init hmat_init(void)
 	}
 
 	if (!acpi_table_parse(ACPI_SIG_HMAT, hmat_noop_parse)) {
-		struct acpi_subtable_proc hmat_proc[2];
+		struct acpi_subtable_proc hmat_proc[3];
 
 		memset(hmat_proc, 0, sizeof(hmat_proc));
 		hmat_proc[0].id = ACPI_HMAT_TYPE_ADDRESS_RANGE;
 		hmat_proc[0].handler = hmat_parse_address_range;
 		hmat_proc[1].id = ACPI_HMAT_TYPE_CACHE;
 		hmat_proc[1].handler = hmat_parse_cache;
+		hmat_proc[2].id = ACPI_HMAT_TYPE_LOCALITY;
+		hmat_proc[2].handler = hmat_parse_locality;
 
 		acpi_table_parse_entries_array(ACPI_SIG_HMAT,
 					sizeof(struct acpi_table_hmat),
@@ -516,6 +773,10 @@ static int __init hmat_init(void)
 		ret = register_memory_target(tgt);
 		if (ret)
 			goto err;
+
+		ret = add_performance_attributes(tgt);
+		if (ret)
+			goto err;
 	}
 
 	return 0;
diff --git a/drivers/acpi/hmat/hmat.h b/drivers/acpi/hmat/hmat.h
index 108aad1f8ad7..89200f5c4b38 100644
--- a/drivers/acpi/hmat/hmat.h
+++ b/drivers/acpi/hmat/hmat.h
@@ -16,6 +16,11 @@
 #ifndef _ACPI_HMAT_H_
 #define _ACPI_HMAT_H_
 
+enum hmat_attr_type {
+	LATENCY,
+	BANDWIDTH,
+};
+
 struct memory_initiator {
 	struct list_head list;
 	struct device dev;
@@ -39,9 +44,21 @@ struct memory_target {
 
 	bool is_cached;
 	bool is_registered;
+	bool has_perf_attributes;
 };
 #define to_memory_target(d) container_of((d), struct memory_target, dev)
 
+struct memory_locality {
+	struct list_head list;
+	struct acpi_hmat_locality *hmat_loc;
+};
+
 extern const struct attribute_group *memory_initiator_attribute_groups[];
 extern const struct attribute_group *memory_target_attribute_groups[];
+extern struct attribute *performance_attributes[];
+
+extern struct list_head locality_list;
+
+int hmat_local_attribute(struct device *tgt_dev, int direction,
+		enum hmat_attr_type type);
 #endif /* _ACPI_HMAT_H_ */
diff --git a/drivers/acpi/hmat/perf_attributes.c b/drivers/acpi/hmat/perf_attributes.c
new file mode 100644
index 000000000000..60f107b58822
--- /dev/null
+++ b/drivers/acpi/hmat/perf_attributes.c
@@ -0,0 +1,56 @@
+/*
+ * Heterogeneous Memory Attributes Table (HMAT) sysfs performance attributes
+ *
+ * Copyright (c) 2017, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/acpi.h>
+#include <linux/device.h>
+#include <linux/sysfs.h>
+#include "hmat.h"
+
+static ssize_t read_lat_nsec_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", hmat_local_attribute(dev, READ, LATENCY));
+}
+static DEVICE_ATTR_RO(read_lat_nsec);
+
+static ssize_t write_lat_nsec_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", hmat_local_attribute(dev, WRITE, LATENCY));
+}
+static DEVICE_ATTR_RO(write_lat_nsec);
+
+static ssize_t read_bw_MBps_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", hmat_local_attribute(dev, READ, BANDWIDTH));
+}
+static DEVICE_ATTR_RO(read_bw_MBps);
+
+static ssize_t write_bw_MBps_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n",
+			hmat_local_attribute(dev, WRITE, BANDWIDTH));
+}
+static DEVICE_ATTR_RO(write_bw_MBps);
+
+struct attribute *performance_attributes[] = {
+	&dev_attr_read_lat_nsec.attr,
+	&dev_attr_write_lat_nsec.attr,
+	&dev_attr_read_bw_MBps.attr,
+	&dev_attr_write_bw_MBps.attr,
+	NULL
+};
-- 
2.14.3


* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-14  2:10 [PATCH v3 0/3] create sysfs representation of ACPI HMAT Ross Zwisler
                   ` (2 preceding siblings ...)
  2017-12-14  2:10 ` [PATCH v3 3/3] hmat: add performance attributes Ross Zwisler
@ 2017-12-14 13:00 ` Michal Hocko
  2017-12-18 20:35   ` Ross Zwisler
  2017-12-22  3:09 ` Anshuman Khandual
  4 siblings, 1 reply; 42+ messages in thread
From: Michal Hocko @ 2017-12-14 13:00 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm, linux-api

[CC linux-api]

On Wed 13-12-17 19:10:16, Ross Zwisler wrote:
> This is the third revision of my patches adding a sysfs representation
> of the ACPI Heterogeneous Memory Attribute Table (HMAT).  These patches
> are based on v4.15-rc3 and a working tree can be found here:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/zwisler/linux.git/log/?h=hmat_v3
> 
> My goal is to get these patches merged for v4.16.

Has anybody actually reviewed the overall design already for this to be a
4.16 thing? I do not see any acks/reviewed-bys in any of the patches...

> Changes from previous version (https://lkml.org/lkml/2017/7/6/749):

... comments on this last posting are touching the surface rather than
really discussing the overall design.

>  - Changed "HMEM" to "HMAT" and "hmem" to "hmat" throughout to make sure
>    that this effort doesn't get confused with Jerome's HMM work and to
>    make it clear that this enabling is tightly tied to the ACPI HMAT
>    table.  (John Hubbard)
> 
>  - Moved the link in the initiator (i.e. mem_init0/mem_tgt2) from
>    pointing to the "mem_tgt2/local_init" attribute group to instead
>    point at the mem_tgt2 target itself.  (Brice Goglin)
> 
>  - Simplified the contents of both the initiators and the targets so
>    that we just symlink to the NUMA node and don't duplicate
>    information.  For initiators this means that we no longer enumerate
>    CPUs, and for targets this means that we don't provide physical
>    address start and length information.  All of this is already
>    available in the NUMA node directory itself (i.e.
>    /sys/devices/system/node/node0), and it already accounts for the fact
>    that both multiple CPUs and multiple memory regions can be owned by a
>    given NUMA node.  Also removed some extra attributes (is_enabled,
>    is_isolated) which I don't think are useful at this point in time.
> 
> I have tested this against many different configs that I implemented
> using qemu.

What is the testing procedure? How can I set up qemu to simulate such HW?

[Keeping the rest of the email for linux-api reference]

> ---
> 
> ==== Quick Summary ====
> 
> Platforms exist today which have multiple types of memory attached to a
> single CPU.  These disparate memory ranges have some characteristics in
> common, such as CPU cache coherence, but they can have wide ranges of
> performance both in terms of latency and bandwidth.
> 
> For example, consider a system that contains persistent memory, standard
> DDR memory and High Bandwidth Memory (HBM), all attached to the same CPU.
> There could potentially be an order of magnitude or more difference in
> performance between the slowest and fastest memory attached to that CPU.
> 
> With the current Linux code NUMA nodes are CPU-centric, so all the memory
> attached to a given CPU will be lumped into the same NUMA node.  This makes
> it very difficult for userspace applications to understand the performance
> of different memory ranges on a given CPU.
> 
> We solve this issue by providing userspace with performance information on
> individual memory ranges.  This performance information is exposed via
> sysfs:
> 
>   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
>   mem_tgt2/firmware_id:1
>   mem_tgt2/is_cached:0
>   mem_tgt2/local_init/read_bw_MBps:40960
>   mem_tgt2/local_init/read_lat_nsec:50
>   mem_tgt2/local_init/write_bw_MBps:40960
>   mem_tgt2/local_init/write_lat_nsec:50
> 
> This allows applications to easily find the memory that they want to use.
> We expect that the existing NUMA APIs will be enhanced to use this new
> information so that applications can continue to use them to select their
> desired memory.

How? Could you provide some examples?

> ==== Lots of Details ====
> 
> This patch set provides a sysfs representation of parts of the
> Heterogeneous Memory Attribute Table (HMAT), newly defined in ACPI 6.2.
> One major conceptual change in ACPI 6.2 related to this work is that
> proximity domains no longer need to contain a processor.  We can now
> have memory-only proximity domains, which means that we can now have
> memory-only Linux NUMA nodes.
> 
> Here is an example configuration where we have a single processor, one
> range of regular memory and one range of HBM:
> 
>   +---------------+   +----------------+
>   | Processor     |   | Memory         |
>   | prox domain 0 +---+ prox domain 1  |
>   | NUMA node 1   |   | NUMA node 2    |
>   +-------+-------+   +----------------+
>           |
>   +-------+----------+
>   | HBM              |
>   | prox domain 2    |
>   | NUMA node 0      |
>   +------------------+
> 
> This gives us one initiator (the processor) and two targets (the two memory
> ranges).  Each of these three has its own ACPI proximity domain and
> associated Linux NUMA node.  Note also that while there is a 1:1 mapping
> from each proximity domain to each NUMA node, the numbers don't necessarily
> match up.  Additionally we can have extra NUMA nodes that don't map back to
> ACPI proximity domains.
> 
> The above configuration could also have the processor and one of the two
> memory ranges sharing a proximity domain and NUMA node, but for the
> purposes of the HMAT the two memory ranges will need to be separated.
> 
> The overall goal of this series and of the HMAT is to allow users to
> identify memory using its performance characteristics.  This is
> complicated by the amount of HMAT data that could be present in very
> large systems, so in this series we only surface performance information
> for local (initiator,target) pairings.  The changelog for patch 3
> discusses this in detail.
> 
> Ross Zwisler (3):
>   acpi: HMAT support in acpi_parse_entries_array()
>   hmat: add heterogeneous memory sysfs support
>   hmat: add performance attributes
> 
>  MAINTAINERS                         |   6 +
>  drivers/acpi/Kconfig                |   1 +
>  drivers/acpi/Makefile               |   1 +
>  drivers/acpi/hmat/Kconfig           |   7 +
>  drivers/acpi/hmat/Makefile          |   2 +
>  drivers/acpi/hmat/core.c            | 797 ++++++++++++++++++++++++++++++++++++
>  drivers/acpi/hmat/hmat.h            |  64 +++
>  drivers/acpi/hmat/initiator.c       |  43 ++
>  drivers/acpi/hmat/perf_attributes.c |  56 +++
>  drivers/acpi/hmat/target.c          |  55 +++
>  drivers/acpi/tables.c               |  52 ++-
>  11 files changed, 1073 insertions(+), 11 deletions(-)
>  create mode 100644 drivers/acpi/hmat/Kconfig
>  create mode 100644 drivers/acpi/hmat/Makefile
>  create mode 100644 drivers/acpi/hmat/core.c
>  create mode 100644 drivers/acpi/hmat/hmat.h
>  create mode 100644 drivers/acpi/hmat/initiator.c
>  create mode 100644 drivers/acpi/hmat/perf_attributes.c
>  create mode 100644 drivers/acpi/hmat/target.c
> 
> -- 
> 2.14.3
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array()
  2017-12-14  2:10 ` [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array() Ross Zwisler
@ 2017-12-15  0:49   ` Rafael J. Wysocki
  2017-12-15  1:10   ` Dan Williams
  1 sibling, 0 replies; 42+ messages in thread
From: Rafael J. Wysocki @ 2017-12-15  0:49 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Linux Kernel Mailing List, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, ACPI Devel Maling List,
	Linux Memory Management List, linux-nvdimm

On Thu, Dec 14, 2017 at 3:10 AM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> The current implementation of acpi_parse_entries_array() assumes that each
> subtable has a standard ACPI subtable entry of type struct
> acpi_subtable_header.  This standard subtable header has a one byte type
> followed by a one byte length.
>
> The HMAT subtables have to allow for a longer length so they have subtable
> headers of type struct acpi_hmat_structure which has a 2 byte type and a 4
> byte length.
>
> Enhance the subtable parsing in acpi_parse_entries_array() so that it can
> handle these new HMAT subtables.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>

This one is fine by me.
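
For reference, the two subtable header layouts being contrasted in the commit
message look roughly like this.  The field widths follow the description above
and the names follow ACPICA's style, so treat this as an illustrative sketch
rather than the authoritative definitions:

	#include <linux/types.h>

	/* Standard ACPI subtable header: one byte type, one byte length. */
	struct acpi_subtable_header {
		u8 type;
		u8 length;
	};

	/* HMAT subtable header: 2-byte type and 4-byte length, as described
	 * in the commit message.  The reserved field is an assumption based
	 * on ACPICA's definition. */
	struct acpi_hmat_structure {
		u16 type;
		u16 reserved;
		u32 length;
	};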

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support
  2017-12-14  2:10 ` [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support Ross Zwisler
@ 2017-12-15  0:52   ` Rafael J. Wysocki
  2017-12-15 20:53     ` Ross Zwisler
  0 siblings, 1 reply; 42+ messages in thread
From: Rafael J. Wysocki @ 2017-12-15  0:52 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Linux Kernel Mailing List, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, ACPI Devel Maling List,
	Linux Memory Management List, linux-nvdimm

On Thu, Dec 14, 2017 at 3:10 AM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> Add a new sysfs subsystem, /sys/devices/system/hmat, which surfaces
> information about memory initiators and memory targets to the user.  These
> initiators and targets are described by the ACPI SRAT and HMAT tables.
>
> A "memory initiator" in this case is a NUMA node containing one or more
> devices such as a CPU or separate memory I/O devices that can initiate
> memory requests.  A "memory target" is a NUMA node containing at least one
> CPU-accessible physical address range.
>
> The key piece of information surfaced by this patch is the mapping between
> the ACPI table "proximity domain" numbers, held in the "firmware_id"
> attribute, and Linux NUMA node numbers.  Every ACPI proximity domain will
> end up being a unique NUMA node in Linux, but the numbers may get reordered
> and Linux can create extra NUMA nodes that don't map back to ACPI proximity
> domains.  The firmware_id value is needed if anyone ever wants to look at
> the ACPI HMAT and SRAT tables directly and make sense of how they map to
> NUMA nodes in Linux.
>
> Initiators are found at /sys/devices/system/hmat/mem_initX, and the
> attributes for a given initiator look like this:
>
>   # tree mem_init0
>   mem_init0
>   ├── firmware_id
>   ├── node0 -> ../../node/node0
>   ├── power
>   │   ├── async
>   │   ...
>   ├── subsystem -> ../../../../bus/hmat
>   └── uevent
>
> Where "mem_init0" on my system represents the CPU acting as a memory
> initiator at NUMA node 0.  Users can discover which CPUs are part of this
> memory initiator by following the node0 symlink and looking at cpumap,
> cpulist and the cpu* symlinks.
>
> Targets are found at /sys/devices/system/hmat/mem_tgtX, and the attributes
> for a given target look like this:
>
>   # tree mem_tgt2
>   mem_tgt2
>   ├── firmware_id
>   ├── is_cached
>   ├── node2 -> ../../node/node2
>   ├── power
>   │   ├── async
>   │   ...
>   ├── subsystem -> ../../../../bus/hmat
>   └── uevent
>
> Users can discover information about the memory owned by this memory target
> by following the node2 symlink and looking at meminfo, vmstat and at the
> memory* memory section symlinks.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> ---
>  MAINTAINERS                   |   6 +
>  drivers/acpi/Kconfig          |   1 +
>  drivers/acpi/Makefile         |   1 +
>  drivers/acpi/hmat/Kconfig     |   7 +
>  drivers/acpi/hmat/Makefile    |   2 +
>  drivers/acpi/hmat/core.c      | 536 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/acpi/hmat/hmat.h      |  47 ++++
>  drivers/acpi/hmat/initiator.c |  43 ++++
>  drivers/acpi/hmat/target.c    |  55 +++++
>  9 files changed, 698 insertions(+)
>  create mode 100644 drivers/acpi/hmat/Kconfig
>  create mode 100644 drivers/acpi/hmat/Makefile
>  create mode 100644 drivers/acpi/hmat/core.c
>  create mode 100644 drivers/acpi/hmat/hmat.h
>  create mode 100644 drivers/acpi/hmat/initiator.c
>  create mode 100644 drivers/acpi/hmat/target.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 82ad0eabce4f..64ebec0708de 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -6366,6 +6366,12 @@ S:       Supported
>  F:     drivers/scsi/hisi_sas/
>  F:     Documentation/devicetree/bindings/scsi/hisilicon-sas.txt
>
> +HMAT - ACPI Heterogeneous Memory Attribute Table Support
> +M:     Ross Zwisler <ross.zwisler@linux.intel.com>
> +L:     linux-mm@kvack.org
> +S:     Supported
> +F:     drivers/acpi/hmat/
> +
>  HMM - Heterogeneous Memory Management
>  M:     Jérôme Glisse <jglisse@redhat.com>
>  L:     linux-mm@kvack.org
> diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
> index 46505396869e..21cdd1288430 100644
> --- a/drivers/acpi/Kconfig
> +++ b/drivers/acpi/Kconfig
> @@ -466,6 +466,7 @@ config ACPI_REDUCED_HARDWARE_ONLY
>           If you are unsure what to do, do not enable this option.
>
>  source "drivers/acpi/nfit/Kconfig"
> +source "drivers/acpi/hmat/Kconfig"
>
>  source "drivers/acpi/apei/Kconfig"
>  source "drivers/acpi/dptf/Kconfig"
> diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
> index 41954a601989..ed5eab6b0412 100644
> --- a/drivers/acpi/Makefile
> +++ b/drivers/acpi/Makefile
> @@ -75,6 +75,7 @@ obj-$(CONFIG_ACPI_PROCESSOR)  += processor.o
>  obj-$(CONFIG_ACPI)             += container.o
>  obj-$(CONFIG_ACPI_THERMAL)     += thermal.o
>  obj-$(CONFIG_ACPI_NFIT)                += nfit/
> +obj-$(CONFIG_ACPI_HMAT)                += hmat/
>  obj-$(CONFIG_ACPI)             += acpi_memhotplug.o
>  obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o
>  obj-$(CONFIG_ACPI_BATTERY)     += battery.o
> diff --git a/drivers/acpi/hmat/Kconfig b/drivers/acpi/hmat/Kconfig
> new file mode 100644
> index 000000000000..954ad4701005
> --- /dev/null
> +++ b/drivers/acpi/hmat/Kconfig
> @@ -0,0 +1,7 @@
> +config ACPI_HMAT
> +       bool "ACPI Heterogeneous Memory Attribute Table Support"
> +       depends on ACPI_NUMA
> +       depends on SYSFS
> +       help
> +         Exports a sysfs representation of the ACPI Heterogeneous Memory
> +         Attributes Table (HMAT).
> diff --git a/drivers/acpi/hmat/Makefile b/drivers/acpi/hmat/Makefile
> new file mode 100644
> index 000000000000..edf4bcb1c97d
> --- /dev/null
> +++ b/drivers/acpi/hmat/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_ACPI_HMAT) := hmat.o
> +hmat-y := core.o initiator.o target.o
> diff --git a/drivers/acpi/hmat/core.c b/drivers/acpi/hmat/core.c
> new file mode 100644
> index 000000000000..61b90dadf84b
> --- /dev/null
> +++ b/drivers/acpi/hmat/core.c
> @@ -0,0 +1,536 @@
> +/*
> + * Heterogeneous Memory Attributes Table (HMAT) representation in sysfs
> + *
> + * Copyright (c) 2017, Intel Corporation.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + */

Minor nit for starters: you should use SPDX license identifiers in
new files, and if you do so, the license boilerplate is not necessary
anymore.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array()
  2017-12-14  2:10 ` [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array() Ross Zwisler
  2017-12-15  0:49   ` Rafael J. Wysocki
@ 2017-12-15  1:10   ` Dan Williams
  2017-12-16  1:53     ` Rafael J. Wysocki
  1 sibling, 1 reply; 42+ messages in thread
From: Dan Williams @ 2017-12-15  1:10 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dave Hansen,
	Jerome Glisse, John Hubbard, Len Brown, Tim Chen, devel,
	Linux ACPI, Linux MM, linux-nvdimm

On Wed, Dec 13, 2017 at 6:10 PM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> The current implementation of acpi_parse_entries_array() assumes that each
> subtable has a standard ACPI subtable entry of type struct
> acpi_subtable_header.  This standard subtable header has a one byte type
> followed by a one byte length.
>
> The HMAT subtables have to allow for a longer length so they have subtable
> headers of type struct acpi_hmat_structure which has a 2 byte type and a 4
> byte length.

Hmm, NFIT has a 2 byte type and a 2 byte length, so it's one more
permutation. I happened to reinvent sub-table parsing in the NFIT
driver, but it might be nice in the future to refactor that to use the
common parsing.

>
> Enhance the subtable parsing in acpi_parse_entries_array() so that it can
> handle these new HMAT subtables.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> ---
>  drivers/acpi/tables.c | 52 ++++++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 41 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
> index 80ce2a7d224b..f777b94c234a 100644
> --- a/drivers/acpi/tables.c
> +++ b/drivers/acpi/tables.c
> @@ -218,6 +218,33 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
>         }
>  }
>
> +static unsigned long __init
> +acpi_get_entry_type(char *id, void *entry)
> +{
> +       if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
> +               return ((struct acpi_hmat_structure *)entry)->type;
> +       else
> +               return ((struct acpi_subtable_header *)entry)->type;
> +}

It seems inefficient to make all checks keep asking "is HMAT?".
Especially if we want to extend this to other table types, should we
instead set up and pass a pair of function pointers to parse the
sub-table format?
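
One way to read that suggestion: resolve the header format once per table and
hand the parser a small ops structure.  A rough sketch of the idea, with the
names invented purely for illustration:

	/* Illustrative only: per-format accessors chosen once per table. */
	struct acpi_subtable_ops {
		unsigned long (*get_type)(void *entry);
		unsigned long (*get_length)(void *entry);
	};

	static unsigned long hmat_get_type(void *entry)
	{
		return ((struct acpi_hmat_structure *)entry)->type;
	}

	static unsigned long hmat_get_length(void *entry)
	{
		return ((struct acpi_hmat_structure *)entry)->length;
	}

	static const struct acpi_subtable_ops hmat_subtable_ops = {
		.get_type	= hmat_get_type,
		.get_length	= hmat_get_length,
	};

acpi_parse_entries_array() could then pick the ops once, based on the table
signature, instead of re-checking the signature for every entry.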

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support
  2017-12-15  0:52   ` Rafael J. Wysocki
@ 2017-12-15 20:53     ` Ross Zwisler
  0 siblings, 0 replies; 42+ messages in thread
From: Ross Zwisler @ 2017-12-15 20:53 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Ross Zwisler, Linux Kernel Mailing List, Anaczkowski, Lukasz,
	Box, David E, Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur,
	Lahtinen, Joonas, Moore, Robert, Nachimuthu, Murugasamy, Odzioba,
	Lukasz, Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik,
	Verma, Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh,
	Brice Goglin, Dan Williams, Dave Hansen, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, ACPI Devel Maling List,
	Linux Memory Management List, linux-nvdimm

On Fri, Dec 15, 2017 at 01:52:03AM +0100, Rafael J. Wysocki wrote:
<>
> > diff --git a/drivers/acpi/hmat/core.c b/drivers/acpi/hmat/core.c
> > new file mode 100644
> > index 000000000000..61b90dadf84b
> > --- /dev/null
> > +++ b/drivers/acpi/hmat/core.c
> > @@ -0,0 +1,536 @@
> > +/*
> > + * Heterogeneous Memory Attributes Table (HMAT) representation in sysfs
> > + *
> > + * Copyright (c) 2017, Intel Corporation.
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> > + * more details.
> > + */
> 
> Minor nit for starters: you should use SPDX license identifiers in
> new files, and if you do so, the license boilerplate is not necessary
> anymore.

Okay, I'll fix that up.
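
A minimal sketch of what the top of core.c could look like with an SPDX
identifier replacing the boilerplate, assuming the same GPL-2.0 license as the
quoted text:

	// SPDX-License-Identifier: GPL-2.0
	/*
	 * Heterogeneous Memory Attributes Table (HMAT) representation in sysfs
	 *
	 * Copyright (c) 2017, Intel Corporation.
	 */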

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array()
  2017-12-15  1:10   ` Dan Williams
@ 2017-12-16  1:53     ` Rafael J. Wysocki
  2017-12-16  1:57       ` Dan Williams
  0 siblings, 1 reply; 42+ messages in thread
From: Rafael J. Wysocki @ 2017-12-16  1:53 UTC (permalink / raw)
  To: Dan Williams, Linux ACPI
  Cc: Ross Zwisler, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dave Hansen,
	Jerome Glisse, John Hubbard, Len Brown, Tim Chen, Linux MM,
	linux-nvdimm

On Friday, December 15, 2017 2:10:17 AM CET Dan Williams wrote:
> On Wed, Dec 13, 2017 at 6:10 PM, Ross Zwisler
> <ross.zwisler@linux.intel.com> wrote:
> > The current implementation of acpi_parse_entries_array() assumes that each
> > subtable has a standard ACPI subtable entry of type struct
> > acpi_subtable_header.  This standard subtable header has a one byte type
> > followed by a one byte length.
> >
> > The HMAT subtables have to allow for a longer length so they have subtable
> > headers of type struct acpi_hmat_structure which has a 2 byte type and a 4
> > byte length.
> 
> Hmm, NFIT has a 2 byte type and a 2 byte length, so it's one more
> permutation. I happened to reinvent sub-table parsing in the NFIT
> driver, but it might be nice in the future to refactor that to use the
> common parsing.
> 
> >
> > Enhance the subtable parsing in acpi_parse_entries_array() so that it can
> > handle these new HMAT subtables.
> >
> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > ---
> >  drivers/acpi/tables.c | 52 ++++++++++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 41 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
> > index 80ce2a7d224b..f777b94c234a 100644
> > --- a/drivers/acpi/tables.c
> > +++ b/drivers/acpi/tables.c
> > @@ -218,6 +218,33 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
> >         }
> >  }
> >
> > +static unsigned long __init
> > +acpi_get_entry_type(char *id, void *entry)
> > +{
> > +       if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
> > +               return ((struct acpi_hmat_structure *)entry)->type;
> > +       else
> > +               return ((struct acpi_subtable_header *)entry)->type;
> > +}
> 
> It seems inefficient to make all checks keep asking "is HMAT?".

Well, ideally, the signature should be checked once.  I guess that can be
arranged for here.

> Especially if we want to extend this to other table types, should we
> instead set up and pass a pair of function pointers to parse the
> sub-table format?

Function pointers may be too much even. :-)

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array()
  2017-12-16  1:53     ` Rafael J. Wysocki
@ 2017-12-16  1:57       ` Dan Williams
  2017-12-16  2:15         ` Rafael J. Wysocki
  0 siblings, 1 reply; 42+ messages in thread
From: Dan Williams @ 2017-12-16  1:57 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Linux ACPI, Ross Zwisler, linux-kernel, Anaczkowski, Lukasz, Box,
	David E, Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen,
	Joonas, Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dave Hansen,
	Jerome Glisse, John Hubbard, Len Brown, Tim Chen, Linux MM,
	linux-nvdimm

On Fri, Dec 15, 2017 at 5:53 PM, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> On Friday, December 15, 2017 2:10:17 AM CET Dan Williams wrote:
>> On Wed, Dec 13, 2017 at 6:10 PM, Ross Zwisler
>> <ross.zwisler@linux.intel.com> wrote:
>> > The current implementation of acpi_parse_entries_array() assumes that each
>> > subtable has a standard ACPI subtable entry of type struct
>> > acpi_subtable_header.  This standard subtable header has a one byte type
>> > followed by a one byte length.
>> >
>> > The HMAT subtables have to allow for a longer length so they have subtable
>> > headers of type struct acpi_hmat_structure which has a 2 byte type and a 4
>> > byte length.
>>
>> Hmm, NFIT has a 2 byte type and a 2 byte length, so it's one more
>> permutation. I happened to reinvent sub-table parsing in the NFIT
>> driver, but it might be nice in the future to refactor that to use the
>> common parsing.
>>
>> >
>> > Enhance the subtable parsing in acpi_parse_entries_array() so that it can
>> > handle these new HMAT subtables.
>> >
>> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
>> > ---
>> >  drivers/acpi/tables.c | 52 ++++++++++++++++++++++++++++++++++++++++-----------
>> >  1 file changed, 41 insertions(+), 11 deletions(-)
>> >
>> > diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
>> > index 80ce2a7d224b..f777b94c234a 100644
>> > --- a/drivers/acpi/tables.c
>> > +++ b/drivers/acpi/tables.c
>> > @@ -218,6 +218,33 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
>> >         }
>> >  }
>> >
>> > +static unsigned long __init
>> > +acpi_get_entry_type(char *id, void *entry)
>> > +{
>> > +       if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
>> > +               return ((struct acpi_hmat_structure *)entry)->type;
>> > +       else
>> > +               return ((struct acpi_subtable_header *)entry)->type;
>> > +}
>>
>> It seems inefficient to make all checks keep asking "is HMAT?".
>
> Well, ideally, the signature should be checked once.  I guess that can be
> arranged for here.
>
>> Especially if we want to extend this to other table types, should we
>> instead set up and pass a pair of function pointers to parse the
>> sub-table format?
>
> Function pointers may be too much even. :-)

True, how about an enum of acpi sub-table header types?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array()
  2017-12-16  1:57       ` Dan Williams
@ 2017-12-16  2:15         ` Rafael J. Wysocki
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael J. Wysocki @ 2017-12-16  2:15 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael J. Wysocki, Linux ACPI, Ross Zwisler, linux-kernel,
	Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki, Schmauss, Erik,
	Verma, Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh,
	Brice Goglin, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, Linux MM, linux-nvdimm

On Sat, Dec 16, 2017 at 2:57 AM, Dan Williams <dan.j.williams@intel.com> wrote:
> On Fri, Dec 15, 2017 at 5:53 PM, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>> On Friday, December 15, 2017 2:10:17 AM CET Dan Williams wrote:
>>> On Wed, Dec 13, 2017 at 6:10 PM, Ross Zwisler
>>> <ross.zwisler@linux.intel.com> wrote:
>>> > The current implementation of acpi_parse_entries_array() assumes that each
>>> > subtable has a standard ACPI subtable entry of type struct
>>> > acpi_subtable_header.  This standard subtable header has a one byte type
>>> > followed by a one byte length.
>>> >
>>> > The HMAT subtables have to allow for a longer length so they have subtable
>>> > headers of type struct acpi_hmat_structure which has a 2 byte type and a 4
>>> > byte length.
>>>
>>> Hmm, NFIT has a 2 byte type and a 2 byte length, so it's one more
>>> permutation. I happened to reinvent sub-table parsing in the NFIT
>>> driver, but it might be nice in the future to refactor that to use the
>>> common parsing.
>>>
>>> >
>>> > Enhance the subtable parsing in acpi_parse_entries_array() so that it can
>>> > handle these new HMAT subtables.
>>> >
>>> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
>>> > ---
>>> >  drivers/acpi/tables.c | 52 ++++++++++++++++++++++++++++++++++++++++-----------
>>> >  1 file changed, 41 insertions(+), 11 deletions(-)
>>> >
>>> > diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
>>> > index 80ce2a7d224b..f777b94c234a 100644
>>> > --- a/drivers/acpi/tables.c
>>> > +++ b/drivers/acpi/tables.c
>>> > @@ -218,6 +218,33 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
>>> >         }
>>> >  }
>>> >
>>> > +static unsigned long __init
>>> > +acpi_get_entry_type(char *id, void *entry)
>>> > +{
>>> > +       if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
>>> > +               return ((struct acpi_hmat_structure *)entry)->type;
>>> > +       else
>>> > +               return ((struct acpi_subtable_header *)entry)->type;
>>> > +}
>>>
>>> It seems inefficient to make all checks keep asking "is HMAT?".
>>
>> Well, ideally, the signature should be checked once.  I guess that can be
>> arranged for here.
>>
>>> Especially if we want to extend this to other table types, should we
>>> instead set up and pass a pair of function pointers to parse the
>>> sub-table format?
>>
>> Function pointers may be too much even. :-)
>
> True, how about an enum of acpi sub-table header types?

That works.
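
A minimal sketch of that idea, with the signature checked once per table and
an enum passed down to the per-entry helpers.  The names are illustrative and
not taken from a posted patch:

	/* Possible subtable header flavors, selected once per table. */
	enum acpi_subtable_type {
		ACPI_SUBTABLE_COMMON,	/* struct acpi_subtable_header */
		ACPI_SUBTABLE_HMAT,	/* struct acpi_hmat_structure */
	};

	static enum acpi_subtable_type __init acpi_get_subtable_type(char *id)
	{
		if (strncmp(id, ACPI_SIG_HMAT, 4) == 0)
			return ACPI_SUBTABLE_HMAT;
		return ACPI_SUBTABLE_COMMON;
	}

	static unsigned long __init
	acpi_get_entry_type(enum acpi_subtable_type type, void *entry)
	{
		if (type == ACPI_SUBTABLE_HMAT)
			return ((struct acpi_hmat_structure *)entry)->type;
		return ((struct acpi_subtable_header *)entry)->type;
	}

acpi_parse_entries_array() would call acpi_get_subtable_type() once on the
table signature and pass the result to the per-entry accessors, avoiding the
repeated strncmp().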

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-14 13:00 ` [PATCH v3 0/3] create sysfs representation of ACPI HMAT Michal Hocko
@ 2017-12-18 20:35   ` Ross Zwisler
  2017-12-20 16:41     ` Ross Zwisler
  2017-12-20 18:19     ` Matthew Wilcox
  0 siblings, 2 replies; 42+ messages in thread
From: Ross Zwisler @ 2017-12-18 20:35 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Ross Zwisler, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, linux-acpi, linux-mm, linux-nvdimm,
	linux-api

On Thu, Dec 14, 2017 at 02:00:32PM +0100, Michal Hocko wrote:
> [CC linux-api]

Oh, thanks.  I'll add them to my CC list for sysfs related changes in the
future.

> On Wed 13-12-17 19:10:16, Ross Zwisler wrote:
> > This is the third revision of my patches adding a sysfs representation
> > of the ACPI Heterogeneous Memory Attribute Table (HMAT).  These patches
> > are based on v4.15-rc3 and a working tree can be found here:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/zwisler/linux.git/log/?h=hmat_v3
> > 
> > My goal is to get these patches merged for v4.16.
> 
> Has anybody actually reviewed the overall design already for this to be a 4.16
> thing? I do not see any acks/reviewed-bys in any of the patches...
> 
> > Changes from previous version (https://lkml.org/lkml/2017/7/6/749):
> 
> ... comments on this last posting are touching the surface rather than
> really discussing the overall design.

Yep, that's a fair assessment.  I would love a more in-depth review of the
code so far. :)

What I'm hoping to do with this series is to just provide a sysfs
representation of the HMAT so that applications can know which NUMA nodes to
select with existing utilities like numactl.  This series does not currently
alter any kernel behavior, it only provides a sysfs interface.

Say for example you had a system with some high bandwidth memory (HBM), and
you wanted to use it for a specific application.  You could use the sysfs
representation of the HMAT to figure out which memory target held your HBM.
You could do this by looking at the local bandwidth values for the various
memory targets, so:

	# grep . /sys/devices/system/hmat/mem_tgt*/local_init/write_bw_MBps
	/sys/devices/system/hmat/mem_tgt2/local_init/write_bw_MBps:81920
	/sys/devices/system/hmat/mem_tgt3/local_init/write_bw_MBps:40960
	/sys/devices/system/hmat/mem_tgt4/local_init/write_bw_MBps:40960
	/sys/devices/system/hmat/mem_tgt5/local_init/write_bw_MBps:40960

and look for the one that corresponds to your HBM speed. (These numbers are
made up, but you get the idea.)
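
A small userspace program could do the same scan programmatically.  Here is a
sketch, assuming the sysfs layout proposed in this series (which is not merged,
so the paths may change):

	/* Find the memory target with the highest local write bandwidth.
	 * The sysfs paths below come from this proposed series and are not
	 * a stable kernel interface. */
	#include <glob.h>
	#include <stdio.h>

	int main(void)
	{
		glob_t g;
		long best_bw = -1;
		char best[256] = "";
		size_t i;

		if (glob("/sys/devices/system/hmat/mem_tgt*/local_init/write_bw_MBps",
			 0, NULL, &g))
			return 1;

		for (i = 0; i < g.gl_pathc; i++) {
			FILE *f = fopen(g.gl_pathv[i], "r");
			long bw;

			if (!f)
				continue;
			if (fscanf(f, "%ld", &bw) == 1 && bw > best_bw) {
				best_bw = bw;
				snprintf(best, sizeof(best), "%s", g.gl_pathv[i]);
			}
			fclose(f);
		}
		globfree(&g);

		printf("fastest target: %s (%ld MB/s)\n", best, best_bw);
		return 0;
	}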

Alternatively if you knew the physical addresses of your HBM you could look
for it by finding the numa node that owns the appropriate memory sections, so:

	# ls -d /sys/devices/system/hmat/mem_tgt2/node2/memory*
	/sys/devices/system/hmat/mem_tgt2/node2/memory0
	/sys/devices/system/hmat/mem_tgt2/node2/memory1
etc.

Once you know the NUMA node of your HBM, you can figure out the NUMA node of
its local initiator:

	# ls -d /sys/devices/system/hmat/mem_tgt2/local_init/mem_init*
	/sys/devices/system/hmat/mem_tgt2/local_init/mem_init0

So, in our made-up example our HBM is located in numa node 2, and the local
CPU for that HBM is at numa node 0.

You would then use numactl to bind your app to those numa nodes:

	numactl --membind=2 --cpunodebind=0 ./my_application

Does that make sense?

Eventually we can enhance numactl so it can automatically choose memory with
higher bandwidth, etc., but I think just this bit of kernel enabling gets us
started in the right direction.
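
The same binding can also be done from within an application using libnuma.
A minimal sketch, assuming the node numbers from the made-up example above
(memory on node 2, local CPUs on node 0) and linking with -lnuma:

	#include <numa.h>
	#include <stdio.h>

	int main(void)
	{
		struct bitmask *mem_nodes;

		if (numa_available() < 0) {
			fprintf(stderr, "no NUMA support\n");
			return 1;
		}

		/* Allocate only from the HBM target node (node 2 in the example). */
		mem_nodes = numa_allocate_nodemask();
		numa_bitmask_setbit(mem_nodes, 2);
		numa_set_membind(mem_nodes);
		numa_bitmask_free(mem_nodes);

		/* Run on the CPUs of the local initiator node (node 0). */
		numa_run_on_node(0);

		/* ... allocate and touch memory here; it should land in HBM ... */
		return 0;
	}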

> >  - Changed "HMEM" to "HMAT" and "hmem" to "hmat" throughout to make sure
> >    that this effort doesn't get confused with Jerome's HMM work and to
> >    make it clear that this enabling is tightly tied to the ACPI HMAT
> >    table.  (John Hubbard)
> > 
> >  - Moved the link in the initiator (i.e. mem_init0/mem_tgt2) from
> >    pointing to the "mem_tgt2/local_init" attribute group to instead
> >    point at the mem_tgt2 target itself.  (Brice Goglin)
> > 
> >  - Simplified the contents of both the initiators and the targets so
> >    that we just symlink to the NUMA node and don't duplicate
> >    information.  For initiators this means that we no longer enumerate
> >    CPUs, and for targets this means that we don't provide physical
> >    address start and length information.  All of this is already
> >    available in the NUMA node directory itself (i.e.
> >    /sys/devices/system/node/node0), and it already accounts for the fact
> >    that both multiple CPUs and multiple memory regions can be owned by a
> >    given NUMA node.  Also removed some extra attributes (is_enabled,
> >    is_isolated) which I don't think are useful at this point in time.
> > 
> > I have tested this against many different configs that I implemented
> > using qemu.
> 
> What is the testing procedure? How can I set up qemu to simulate such HW?

Well, the QEMU table simulation is gross, so I'd rather not get everyone
testing with that.  Injecting custom HMAT and SRAT tables via initrd/initramfs
is a much better way:

https://www.kernel.org/doc/Documentation/acpi/initrd_table_override.txt

Dan recently posted a patch that lets this happen for the HMAT:

https://lists.01.org/pipermail/linux-nvdimm/2017-December/013545.html

I'm working right now on getting an easier way to generate HMAT tables - I'll
let you know when I have something working.

> [Keeping the rest of the email for linux-api reference]
> 
> > ---
> > 
> > ==== Quick Summary ====
> > 
> > Platforms exist today which have multiple types of memory attached to a
> > single CPU.  These disparate memory ranges have some characteristics in
> > common, such as CPU cache coherence, but they can have wide ranges of
> > performance both in terms of latency and bandwidth.
> > 
> > For example, consider a system that contains persistent memory, standard
> > DDR memory and High Bandwidth Memory (HBM), all attached to the same CPU.
> > There could potentially be an order of magnitude or more difference in
> > performance between the slowest and fastest memory attached to that CPU.
> > 
> > With the current Linux code NUMA nodes are CPU-centric, so all the memory
> > attached to a given CPU will be lumped into the same NUMA node.  This makes
> > it very difficult for userspace applications to understand the performance
> > of different memory ranges on a given CPU.
> > 
> > We solve this issue by providing userspace with performance information on
> > individual memory ranges.  This performance information is exposed via
> > sysfs:
> > 
> >   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
> >   mem_tgt2/firmware_id:1
> >   mem_tgt2/is_cached:0
> >   mem_tgt2/local_init/read_bw_MBps:40960
> >   mem_tgt2/local_init/read_lat_nsec:50
> >   mem_tgt2/local_init/write_bw_MBps:40960
> >   mem_tgt2/local_init/write_lat_nsec:50
> > 
> > This allows applications to easily find the memory that they want to use.
> > We expect that the existing NUMA APIs will be enhanced to use this new
> > information so that applications can continue to use them to select their
> > desired memory.
> 
> How? Could you provide some examples?

I think I answered this above, but please let me know if you still have
questions or have any ideas for improvement.

Thank you for the review!

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-18 20:35   ` Ross Zwisler
@ 2017-12-20 16:41     ` Ross Zwisler
  2017-12-21 13:18       ` Michal Hocko
  2017-12-20 18:19     ` Matthew Wilcox
  1 sibling, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-20 16:41 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Michal Hocko, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, linux-acpi, linux-mm, linux-nvdimm,
	linux-api

On Mon, Dec 18, 2017 at 01:35:47PM -0700, Ross Zwisler wrote:
> On Thu, Dec 14, 2017 at 02:00:32PM +0100, Michal Hocko wrote:
<>
> > What is the testing procedure? How can I set up qemu to simulate such HW?
> 
> Well, the QEMU table simulation is gross, so I'd rather not get everyone
> testing with that.  Injecting custom HMAT and SRAT tables via initrd/initramfs
> is a much better way:
> 
> https://www.kernel.org/doc/Documentation/acpi/initrd_table_override.txt
> 
> Dan recently posted a patch that lets this happen for the HMAT:
> 
> https://lists.01.org/pipermail/linux-nvdimm/2017-December/013545.html
> 
> I'm working right now on getting an easier way to generate HMAT tables - I'll
> let you know when I have something working.

I've posted details on how to set up test configurations using injected HMAT
and SRAT tables here:

https://github.com/rzwisler/hmat_examples

So far I've got two different sample configs, and we can add more as they are
useful.  Having the sample configs in github is also nice because if someone
finds a config that causes a kernel issue, it can be reported and then added to
this list of example configs for future testing.

Please let me know if you have trouble getting this working.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-18 20:35   ` Ross Zwisler
  2017-12-20 16:41     ` Ross Zwisler
@ 2017-12-20 18:19     ` Matthew Wilcox
  2017-12-20 20:22       ` Dave Hansen
                         ` (2 more replies)
  1 sibling, 3 replies; 42+ messages in thread
From: Matthew Wilcox @ 2017-12-20 18:19 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Michal Hocko, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, linux-acpi, linux-mm, linux-nvdimm,
	linux-api, linuxppc-dev

On Mon, Dec 18, 2017 at 01:35:47PM -0700, Ross Zwisler wrote:
> What I'm hoping to do with this series is to just provide a sysfs
> representation of the HMAT so that applications can know which NUMA nodes to
> select with existing utilities like numactl.  This series does not currently
> alter any kernel behavior, it only provides a sysfs interface.
> 
> Say for example you had a system with some high bandwidth memory (HBM), and
> you wanted to use it for a specific application.  You could use the sysfs
> representation of the HMAT to figure out which memory target held your HBM.
> You could do this by looking at the local bandwidth values for the various
> memory targets, so:
> 
> 	# grep . /sys/devices/system/hmat/mem_tgt*/local_init/write_bw_MBps
> 	/sys/devices/system/hmat/mem_tgt2/local_init/write_bw_MBps:81920
> 	/sys/devices/system/hmat/mem_tgt3/local_init/write_bw_MBps:40960
> 	/sys/devices/system/hmat/mem_tgt4/local_init/write_bw_MBps:40960
> 	/sys/devices/system/hmat/mem_tgt5/local_init/write_bw_MBps:40960
> 
> and look for the one that corresponds to your HBM speed. (These numbers are
> made up, but you get the idea.)

Presumably ACPI-based platforms will not be the only ones who have the
ability to expose different bandwidth memories in the future.  I think
we need a platform-agnostic way ... right, PowerPC people?

I don't know what the right interface is, but my laptop has a set of
/sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
right place to expose write_bw (etc).

> Once you know the NUMA node of your HBM, you can figure out the NUMA node of
> its local initiator:
> 
> 	# ls -d /sys/devices/system/hmat/mem_tgt2/local_init/mem_init*
> 	/sys/devices/system/hmat/mem_tgt2/local_init/mem_init0
> 
> So, in our made-up example our HBM is located in numa node 2, and the local
> CPU for that HBM is at numa node 0.

initiator is a CPU?  I'd have expected you to expose a memory controller
abstraction rather than re-use storage terminology.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 18:19     ` Matthew Wilcox
@ 2017-12-20 20:22       ` Dave Hansen
  2017-12-20 21:16         ` Matthew Wilcox
  2017-12-20 21:13       ` Ross Zwisler
  2017-12-21 12:50       ` Michael Ellerman
  2 siblings, 1 reply; 42+ messages in thread
From: Dave Hansen @ 2017-12-20 20:22 UTC (permalink / raw)
  To: Matthew Wilcox, Ross Zwisler
  Cc: Michal Hocko, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm, linux-api,
	linuxppc-dev

On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
> I don't know what the right interface is, but my laptop has a set of
> /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
> right place to expose write_bw (etc).

Those directories are already too redundant and wasteful.  I think we'd
really rather not add to them.  In addition, it's technically possible
to have a memory section span NUMA nodes and have different performance
properties, which make it impossible to represent there.

In any case, ACPI PXM's (Proximity Domains) are guaranteed to have
uniform performance properties in the HMAT, and we just so happen to
always create one NUMA node per PXM.  So, NUMA nodes really are a good fit.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 18:19     ` Matthew Wilcox
  2017-12-20 20:22       ` Dave Hansen
@ 2017-12-20 21:13       ` Ross Zwisler
  2017-12-21  1:41         ` Elliott, Robert (Persistent Memory)
  2017-12-21 12:50       ` Michael Ellerman
  2 siblings, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-20 21:13 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Ross Zwisler, Michal Hocko, linux-kernel, Anaczkowski, Lukasz,
	Box, David E, Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur,
	Lahtinen, Joonas, Moore, Robert, Nachimuthu, Murugasamy, Odzioba,
	Lukasz, Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik,
	Verma, Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh,
	Brice Goglin, Dan Williams, Dave Hansen, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, linux-acpi, linux-mm,
	linux-nvdimm, linux-api, linuxppc-dev

On Wed, Dec 20, 2017 at 10:19:37AM -0800, Matthew Wilcox wrote:
> On Mon, Dec 18, 2017 at 01:35:47PM -0700, Ross Zwisler wrote:
> > What I'm hoping to do with this series is to just provide a sysfs
> > representation of the HMAT so that applications can know which NUMA nodes to
> > select with existing utilities like numactl.  This series does not currently
> > alter any kernel behavior, it only provides a sysfs interface.
> > 
> > Say for example you had a system with some high bandwidth memory (HBM), and
> > you wanted to use it for a specific application.  You could use the sysfs
> > representation of the HMAT to figure out which memory target held your HBM.
> > You could do this by looking at the local bandwidth values for the various
> > memory targets, so:
> > 
> > 	# grep . /sys/devices/system/hmat/mem_tgt*/local_init/write_bw_MBps
> > 	/sys/devices/system/hmat/mem_tgt2/local_init/write_bw_MBps:81920
> > 	/sys/devices/system/hmat/mem_tgt3/local_init/write_bw_MBps:40960
> > 	/sys/devices/system/hmat/mem_tgt4/local_init/write_bw_MBps:40960
> > 	/sys/devices/system/hmat/mem_tgt5/local_init/write_bw_MBps:40960
> > 
> > and look for the one that corresponds to your HBM speed. (These numbers are
> > made up, but you get the idea.)
> 
> Presumably ACPI-based platforms will not be the only ones who have the
> ability to expose different bandwidth memories in the future.  I think
> we need a platform-agnostic way ... right, PowerPC people?

Hey Matthew,

Yep, this is where I started as well.  My plan with my initial implementation
was to try and make the sysfs representation as platform agnostic as possible,
and just have the ACPI HMAT as one of the many places to gather the data
needed to populate sysfs.

However, as I began coding, the implementation became very specific to the
HMAT, probably because I don't know of a way that this type of info is
represented on another platform.  John Hubbard noticed the same thing and
asked me to s/HMEM/HMAT/ everywhere and just make it HMAT specific, and to
prevent it from being confused with the HMM work:

https://lkml.org/lkml/2017/7/7/33
https://lkml.org/lkml/2017/7/7/442

I'm open to making it more platform agnostic if I can get my hands on a
parallel effort in another platform and tease out the commonality, but trying
to do that without a second example hasn't worked out.  If we don't have a
good second example right now I think maybe we should put this in and then
merge it with the second example when it comes along.

> I don't know what the right interface is, but my laptop has a set of
> /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
> right place to expose write_bw (etc).
> 
> > Once you know the NUMA node of your HBM, you can figure out the NUMA node of
> > its local initiator:
> > 
> > 	# ls -d /sys/devices/system/hmat/mem_tgt2/local_init/mem_init*
> > 	/sys/devices/system/hmat/mem_tgt2/local_init/mem_init0
> > 
> > So, in our made-up example our HBM is located in numa node 2, and the local
> > CPU for that HBM is at numa node 0.
> 
> initiator is a CPU?  I'd have expected you to expose a memory controller
> abstraction rather than re-use storage terminology.

Yea, I agree that at first blush it seems weird.  It turns out that looking at
it in sort of a storage initiator/target way is beneficial, though, because it
allows us to cut down on the number of data values we need to represent.

For example, the SLIT, which doesn't differentiate between initiator and target
proximity domains (and thus nodes), always represents a system with N proximity
domains using an NxN distance table.  This makes sense if every node contains
both CPUs and memory.

With the introduction of the HMAT, though, we can have memory-only nodes, and
we can explicitly associate them with their local CPU.  This is
necessary so that we can separate memory with different performance
characteristics (HBM vs normal memory vs persistent memory, for example) that
are all attached to the same CPU.

So, say we now have a system with 4 CPUs, and each of those CPUs has 3
different types of memory attached to it.  We now have 16 total proximity
domains, 4 CPU and 12 memory.

If we represent this with the SLIT, we end up with a 16x16 distance table
(256 entries), most of which don't matter because they are memory-to-memory
distances which don't make sense.

In the HMAT, though, we separate out the initiators and the targets and put
them into separate lists.  (See 5.2.27.4 System Locality Latency and Bandwidth
Information Structure in ACPI 6.2 for details.)  So, this same config in the
HMAT only has 4*12=48 performance values of each type, all of which convey
meaningful information.

The HMAT indeed even uses the storage "initiator" and "target" terminology. :)

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 20:22       ` Dave Hansen
@ 2017-12-20 21:16         ` Matthew Wilcox
  2017-12-20 21:24           ` Ross Zwisler
  0 siblings, 1 reply; 42+ messages in thread
From: Matthew Wilcox @ 2017-12-20 21:16 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Ross Zwisler, Michal Hocko, linux-kernel, Anaczkowski, Lukasz,
	Box, David E, Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur,
	Lahtinen, Joonas, Moore, Robert, Nachimuthu, Murugasamy, Odzioba,
	Lukasz, Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik,
	Verma, Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh,
	Brice Goglin, Dan Williams, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, linux-acpi, linux-mm, linux-nvdimm,
	linux-api, linuxppc-dev

On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen wrote:
> On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
> > I don't know what the right interface is, but my laptop has a set of
> > /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
> > right place to expose write_bw (etc).
> 
> Those directories are already too redundant and wasteful.  I think we'd
> really rather not add to them.  In addition, it's technically possible
> to have a memory section span NUMA nodes and have different performance
> properties, which make it impossible to represent there.
> 
> In any case, ACPI PXM's (Proximity Domains) are guaranteed to have
> uniform performance properties in the HMAT, and we just so happen to
> always create one NUMA node per PXM.  So, NUMA nodes really are a good fit.

I think you're missing my larger point which is that I don't think this
should be exposed to userspace as an ACPI feature.  Because if you do,
then it'll also be exposed to userspace as an openfirmware feature.
And sooner or later a devicetree feature.  And then writing a portable
program becomes an exercise in suffering.

So, what's the right place in sysfs that isn't tied to ACPI?  A new
directory or set of directories under /sys/devices/system/memory/ ?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 21:16         ` Matthew Wilcox
@ 2017-12-20 21:24           ` Ross Zwisler
  2017-12-20 22:29             ` Dan Williams
  0 siblings, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-20 21:24 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Dave Hansen, Ross Zwisler, Michal Hocko, linux-kernel,
	Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Jerome Glisse, John Hubbard, Len Brown, Tim Chen, devel,
	linux-acpi, linux-mm, linux-nvdimm, linux-api, linuxppc-dev

On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote:
> On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen wrote:
> > On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
> > > I don't know what the right interface is, but my laptop has a set of
> > > /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
> > > right place to expose write_bw (etc).
> > 
> > Those directories are already too redundant and wasteful.  I think we'd
> > really rather not add to them.  In addition, it's technically possible
> > to have a memory section span NUMA nodes and have different performance
> > properties, which make it impossible to represent there.
> > 
> > In any case, ACPI PXM's (Proximity Domains) are guaranteed to have
> > uniform performance properties in the HMAT, and we just so happen to
> > always create one NUMA node per PXM.  So, NUMA nodes really are a good fit.
> 
> I think you're missing my larger point which is that I don't think this
> should be exposed to userspace as an ACPI feature.  Because if you do,
> then it'll also be exposed to userspace as an openfirmware feature.
> And sooner or later a devicetree feature.  And then writing a portable
> program becomes an exercise in suffering.
> 
> So, what's the right place in sysfs that isn't tied to ACPI?  A new
> directory or set of directories under /sys/devices/system/memory/ ?

Oh, the current location isn't at all tied to acpi except that it happens to
be named 'hmat'.  When it was all named 'hmem' it was just:

/sys/devices/system/hmem

Which has no ACPI-isms at all.  I'm happy to move it under
/sys/devices/system/memory/hmat if that's helpful, but I think we still have
the issue that the data represented therein is still pulled right from the
HMAT, and I don't know how to abstract it into something more platform
agnostic until I know what data is provided by those other platforms.

For example, the HMAT provides latency information and bandwidth information
for both reads and writes.  Will the devicetree/openfirmware/etc version have
this same info, or will it be just different enough that it won't translate
into whatever I choose to stick in sysfs?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 21:24           ` Ross Zwisler
@ 2017-12-20 22:29             ` Dan Williams
  2017-12-20 22:41               ` Ross Zwisler
  0 siblings, 1 reply; 42+ messages in thread
From: Dan Williams @ 2017-12-20 22:29 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Matthew Wilcox, Dave Hansen, Michal Hocko, linux-kernel,
	Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, Linux ACPI, Linux MM,
	linux-nvdimm, Linux API, linuxppc-dev

On Wed, Dec 20, 2017 at 1:24 PM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote:
>> On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen wrote:
>> > On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
>> > > I don't know what the right interface is, but my laptop has a set of
>> > > /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
>> > > right place to expose write_bw (etc).
>> >
>> > Those directories are already too redundant and wasteful.  I think we'd
>> > really rather not add to them.  In addition, it's technically possible
>> > to have a memory section span NUMA nodes and have different performance
>> > properties, which make it impossible to represent there.
>> >
>> > In any case, ACPI PXM's (Proximity Domains) are guaranteed to have
>> > uniform performance properties in the HMAT, and we just so happen to
>> > always create one NUMA node per PXM.  So, NUMA nodes really are a good fit.
>>
>> I think you're missing my larger point which is that I don't think this
>> should be exposed to userspace as an ACPI feature.  Because if you do,
>> then it'll also be exposed to userspace as an openfirmware feature.
>> And sooner or later a devicetree feature.  And then writing a portable
>> program becomes an exercise in suffering.
>>
>> So, what's the right place in sysfs that isn't tied to ACPI?  A new
>> directory or set of directories under /sys/devices/system/memory/ ?
>
> Oh, the current location isn't at all tied to acpi except that it happens to
> be named 'hmat'.  When it was all named 'hmem' it was just:
>
> /sys/devices/system/hmem
>
> Which has no ACPI-isms at all.  I'm happy to move it under
> /sys/devices/system/memory/hmat if that's helpful, but I think we still have
> the issue that the data represented therein is still pulled right from the
> HMAT, and I don't know how to abstract it into something more platform
> agnostic until I know what data is provided by those other platforms.
>
> For example, the HMAT provides latency information and bandwidth information
> for both reads and writes.  Will the devicetree/openfirmware/etc version have
> this same info, or will it be just different enough that it won't translate
> into whatever I choose to stick in sysfs?

For the initial implementation do we need to have a representation of
all the performance data? Given that
/sys/devices/system/node/nodeX/distance is the only generic
performance attribute published by the kernel today it is already the
case that applications that need to target specific memories need to
go parse information that is not provided by the kernel by default.
The question is can those specialized applications stay special and go
parse the platform specific data sources, like raw HMAT, directly, or
do we expect general purpose applications to make use of this data? I
think a firmware-id to numa-node translation facility
(/sys/devices/system/node/nodeX/fwid) is a simple start that we can
build on with more information as specific use cases arise.
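
If such a fwid attribute were added, the proximity-domain-to-node translation
would be a short loop in userspace.  A sketch under that assumption (the
attribute is only being proposed in this thread, so the path is hypothetical):

	/* Hypothetical: map an ACPI proximity domain (firmware id) to a Linux
	 * NUMA node via a proposed /sys/devices/system/node/nodeX/fwid
	 * attribute.  This attribute does not exist in mainline; the code
	 * only illustrates the idea discussed above. */
	#include <stdio.h>

	static int node_for_fwid(int fwid)
	{
		int node;

		for (node = 0; node < 1024; node++) {
			char path[64];
			FILE *f;
			int val;

			snprintf(path, sizeof(path),
				 "/sys/devices/system/node/node%d/fwid", node);
			f = fopen(path, "r");
			if (!f)
				break;	/* assumes densely numbered nodes */
			if (fscanf(f, "%d", &val) == 1 && val == fwid) {
				fclose(f);
				return node;
			}
			fclose(f);
		}
		return -1;
	}

	int main(void)
	{
		printf("proximity domain 1 -> node %d\n", node_for_fwid(1));
		return 0;
	}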

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 22:29             ` Dan Williams
@ 2017-12-20 22:41               ` Ross Zwisler
  2017-12-21 20:31                 ` Brice Goglin
  0 siblings, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-20 22:41 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ross Zwisler, Matthew Wilcox, Dave Hansen, Michal Hocko,
	linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, Linux ACPI, Linux MM,
	linux-nvdimm, Linux API, linuxppc-dev

On Wed, Dec 20, 2017 at 02:29:56PM -0800, Dan Williams wrote:
> On Wed, Dec 20, 2017 at 1:24 PM, Ross Zwisler
> <ross.zwisler@linux.intel.com> wrote:
> > On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote:
> >> On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen wrote:
> >> > On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
> >> > > I don't know what the right interface is, but my laptop has a set of
> >> > > /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
> >> > > right place to expose write_bw (etc).
> >> >
> >> > Those directories are already too redundant and wasteful.  I think we'd
> >> > really rather not add to them.  In addition, it's technically possible
> >> > to have a memory section span NUMA nodes and have different performance
> >> > properties, which make it impossible to represent there.
> >> >
> >> > In any case, ACPI PXM's (Proximity Domains) are guaranteed to have
> >> > uniform performance properties in the HMAT, and we just so happen to
> >> > always create one NUMA node per PXM.  So, NUMA nodes really are a good fit.
> >>
> >> I think you're missing my larger point which is that I don't think this
> >> should be exposed to userspace as an ACPI feature.  Because if you do,
> >> then it'll also be exposed to userspace as an openfirmware feature.
> >> And sooner or later a devicetree feature.  And then writing a portable
> >> program becomes an exercise in suffering.
> >>
> >> So, what's the right place in sysfs that isn't tied to ACPI?  A new
> >> directory or set of directories under /sys/devices/system/memory/ ?
> >
> > Oh, the current location isn't at all tied to acpi except that it happens to
> > be named 'hmat'.  When it was all named 'hmem' it was just:
> >
> > /sys/devices/system/hmem
> >
> > Which has no ACPI-isms at all.  I'm happy to move it under
> > /sys/devices/system/memory/hmat if that's helpful, but I think we still have
> > the issue that the data represented therein is still pulled right from the
> > HMAT, and I don't know how to abstract it into something more platform
> > agnostic until I know what data is provided by those other platforms.
> >
> > For example, the HMAT provides latency information and bandwidth information
> > for both reads and writes.  Will the devicetree/openfirmware/etc version have
> > this same info, or will it be just different enough that it won't translate
> > into whatever I choose to stick in sysfs?
> 
> For the initial implementation do we need to have a representation of
> all the performance data? Given that
> /sys/devices/system/node/nodeX/distance is the only generic
> performance attribute published by the kernel today it is already the
> case that applications that need to target specific memories need to
> go parse information that is not provided by the kernel by default.
> The question is can those specialized applications stay special and go
> parse the platform specific data sources, like raw HMAT, directly, or
> do we expect general purpose applications to make use of this data? I
> think a firmware-id to numa-node translation facility
> (/sys/devices/system/node/nodeX/fwid) is a simple start that we can
> build on with more information as specific use cases arise.

We don't represent all the performance data, we only represent the data for
local initiator/target pairs.  I do think that this is useful to have in sysfs
because it provides a way to easily answer the most commonly asked questions
(or at least what I'm guessing will be the most commonly asked questions),
i.e. "given a CPU, what are the speeds of the various types of memory attached
to it", and "given a chunk of memory, how fast is it and to which CPU is it
local"?  By providing this base level of information I'm hoping to prevent
most applications from having to parse the HMAT directly.

The question of whether or not to include this local performance information
was one of the main questions of the initial RFC patch series, and I did get
feedback (albeit off-list) that the local performance information was
valuable to at least some users.  I did intentionally structure my (now very
short) set so that the performance information was added as a separate patch,
so we can get to the place you're talking about where we only provide firmware
id <=> proximity domain mappings by just leaving off the last patch in the
series.

I'm personally still of the opinion though that this last patch does add
value.
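
To make those two questions concrete, here is a rough sketch of answering
them from a shell, based on the paths shown earlier in this thread; the name
of the symlink from a target back to its NUMA node is an assumption taken
from the cover letter's description and may differ in the actual patches:

  hmat=/sys/devices/system/hmat

  # "Given a chunk of memory, how fast is it and to which node is it local?"
  for tgt in "$hmat"/mem_tgt*; do
          node=$(basename "$(ls -d "$tgt"/node* 2>/dev/null | head -n1)")
          echo "$(basename "$tgt") ($node):" \
               "write_bw=$(cat "$tgt"/local_init/write_bw_MBps) MB/s," \
               "read_lat=$(cat "$tgt"/local_init/read_lat_nsec) ns"
  done

  # Having picked the fastest target, an application could be bound to its
  # node with existing tooling, e.g.:
  #   numactl --membind=<node number> ./my_app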

^ permalink raw reply	[flat|nested] 42+ messages in thread

* RE: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 21:13       ` Ross Zwisler
@ 2017-12-21  1:41         ` Elliott, Robert (Persistent Memory)
  2017-12-22 21:46           ` Ross Zwisler
  0 siblings, 1 reply; 42+ messages in thread
From: Elliott, Robert (Persistent Memory) @ 2017-12-21  1:41 UTC (permalink / raw)
  To: Ross Zwisler, Matthew Wilcox
  Cc: Michal Hocko, Box, David E, Dave Hansen, Zheng, Lv, linux-nvdimm,
	Rafael J. Wysocki, Anaczkowski, Lukasz, Moore, Robert,
	linux-acpi, Odzioba, Lukasz, Schmauss, Erik, Len Brown,
	John Hubbard, linuxppc-dev, Jerome Glisse, devel, Kogut,
	Jaroslaw, linux-mm, Koss, Marcin, linux-api, Brice Goglin,
	Nachimuthu, Murugasamy, Rafael J. Wysocki, linux-kernel, Koziej,
	Artur, Lahtinen, Joonas, Andrew Morton, Tim Chen



> -----Original Message-----
> From: Linux-nvdimm [mailto:linux-nvdimm-bounces@lists.01.org] On Behalf Of
> Ross Zwisler
...
> 
> On Wed, Dec 20, 2017 at 10:19:37AM -0800, Matthew Wilcox wrote:
...
> > initiator is a CPU?  I'd have expected you to expose a memory controller
> > abstraction rather than re-use storage terminology.
> 
> Yea, I agree that at first blush it seems weird.  It turns out that
> looking at it in sort of a storage initiator/target way is beneficial,
> though, because it allows us to cut down on the number of data values
> we need to represent.
> 
> For example the SLIT, which doesn't differentiate between initiator and
> target proximity domains (and thus nodes) always represents a system
> with N proximity domains using a NxN distance table.  This makes sense
> if every node contains both CPUs and memory.
> 
> With the introduction of the HMAT, though, we can have memory-only
> nodes and we can explicitly associate them with their local
> CPU.  This is necessary so that we can separate memory with different
> performance characteristics (HBM vs normal memory vs persistent memory,
> for example) that are all attached to the same CPU.
> 
> So, say we now have a system with 4 CPUs, and each of those CPUs has 3
> different types of memory attached to it.  We now have 16 total proximity
> domains, 4 CPU and 12 memory.

The CPU cores that make up a node can have performance restrictions of
their own; for example, they might max out at 10 GB/s even though the
memory controller supports 120 GB/s (meaning you need to use 12 cores
on the node to fully exercise memory).  It'd be helpful to report this,
so software can decide how many cores to use for bandwidth-intensive work.

> If we represent this with the SLIT we end up with a 16 X 16 distance table
> (256 entries), most of which don't matter because they are memory-to-
> memory distances which don't make sense.
> 
> In the HMAT, though, we separate out the initiators and the targets and
> put them into separate lists.  (See 5.2.27.4 System Locality Latency and
> Bandwidth Information Structure in ACPI 6.2 for details.)  So, this same
> config in the HMAT only has 4*12=48 performance values of each type, all
> of which convey meaningful information.
> 
> The HMAT indeed even uses the storage "initiator" and "target"
> terminology. :)

Centralized DMA engines (e.g., as used by the "DMA based blk-mq pmem
driver") have performance differences too.  A CPU might include
CPU cores that reach 10 GB/s, DMA engines that reach 60 GB/s, and
memory controllers that reach 120 GB/s.  I guess these would be
represented as extra initiators on the node?


---
Robert Elliott, HPE Persistent Memory

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 18:19     ` Matthew Wilcox
  2017-12-20 20:22       ` Dave Hansen
  2017-12-20 21:13       ` Ross Zwisler
@ 2017-12-21 12:50       ` Michael Ellerman
  2 siblings, 0 replies; 42+ messages in thread
From: Michael Ellerman @ 2017-12-21 12:50 UTC (permalink / raw)
  To: Matthew Wilcox, Ross Zwisler
  Cc: Michal Hocko, Box, David E, Dave Hansen, Zheng, Lv, linux-nvdimm,
	Verma, Vishal L, Rafael J. Wysocki, Anaczkowski, Lukasz, Moore,
	Robert, linux-acpi, Odzioba, Lukasz, Schmauss, Erik, Len Brown,
	John Hubbard, linuxppc-dev, Jerome Glisse, Dan Williams, devel,
	Kogut, Jaroslaw, linux-mm, Koss, Marcin, linux-api, Brice Goglin,
	Nachimuthu, Murugasamy, Rafael J. Wysocki, linux-kernel, Koziej,
	Artur, Lahtinen, Joonas, Andrew Morton, Tim Chen

Matthew Wilcox <willy@infradead.org> writes:

> On Mon, Dec 18, 2017 at 01:35:47PM -0700, Ross Zwisler wrote:
>> What I'm hoping to do with this series is to just provide a sysfs
>> representation of the HMAT so that applications can know which NUMA nodes to
>> select with existing utilities like numactl.  This series does not currently
>> alter any kernel behavior, it only provides a sysfs interface.
>> 
>> Say for example you had a system with some high bandwidth memory (HBM), and
>> you wanted to use it for a specific application.  You could use the sysfs
>> representation of the HMAT to figure out which memory target held your HBM.
>> You could do this by looking at the local bandwidth values for the various
>> memory targets, so:
>> 
>> 	# grep . /sys/devices/system/hmat/mem_tgt*/local_init/write_bw_MBps
>> 	/sys/devices/system/hmat/mem_tgt2/local_init/write_bw_MBps:81920
>> 	/sys/devices/system/hmat/mem_tgt3/local_init/write_bw_MBps:40960
>> 	/sys/devices/system/hmat/mem_tgt4/local_init/write_bw_MBps:40960
>> 	/sys/devices/system/hmat/mem_tgt5/local_init/write_bw_MBps:40960
>> 
>> and look for the one that corresponds to your HBM speed. (These numbers are
>> made up, but you get the idea.)
>
> Presumably ACPI-based platforms will not be the only ones who have the
> ability to expose different bandwidth memories in the future.  I think
> we need a platform-agnostic way ... right, PowerPC people?

Yes!

I don't have any detail at hand but will try and rustle something up.

cheers

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 16:41     ` Ross Zwisler
@ 2017-12-21 13:18       ` Michal Hocko
  0 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2017-12-21 13:18 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm, linux-api

On Wed 20-12-17 09:41:07, Ross Zwisler wrote:
> On Mon, Dec 18, 2017 at 01:35:47PM -0700, Ross Zwisler wrote:
> > On Thu, Dec 14, 2017 at 02:00:32PM +0100, Michal Hocko wrote:
> <>
> > > What is the testing procedure? How can I set up qemu to simulate such HW?
> > 
> > Well, the QEMU table simulation is gross, so I'd rather not get everyone
> > testing with that.  Injecting custom HMAT and SRAT tables via initrd/initramfs
> > is a much better way:
> > 
> > https://www.kernel.org/doc/Documentation/acpi/initrd_table_override.txt
> > 
> > Dan recently posted a patch that lets this happen for the HMAT:
> > 
> > https://lists.01.org/pipermail/linux-nvdimm/2017-December/013545.html
> > 
> > I'm working right now on getting an easier way to generate HMAT tables - I'll
> > let you know when I have something working.
> 
> I've posted details on how to set up test configurations using injected HMAT
> and SRAT tables here:
> 
> https://github.com/rzwisler/hmat_examples
> 
> So far I've got two different sample configs, and we can add more as they are
> useful.  Having the sample configs in github is also nice because if someone
> finds a config that causes a kernel issue it can be reported then added to
> this list of example configs for future testing.
> 
> Please let me know if you have trouble getting this working.

Thanks a lot Ross, I will try this, but things are getting pretty busy
here before the holiday, so I won't be able to get to it and your other
email before the new year. Sorry about that.

-- 
Michal Hocko
SUSE Labs
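
For reference, the table injection mentioned in the quoted text above boils
down to roughly the following (a sketch following
Documentation/acpi/initrd_table_override.txt; the compiled table file names
are placeholders, and the kernel needs the table-upgrade support enabled,
CONFIG_ACPI_TABLE_UPGRADE on recent kernels):

  # Put the compiled ACPI tables into kernel/firmware/acpi inside an
  # uncompressed cpio archive and prepend it to the normal initrd.
  mkdir -p kernel/firmware/acpi
  cp hmat.aml srat.aml kernel/firmware/acpi/
  find kernel | cpio -H newc --create > /boot/instrumented_initrd
  cat /boot/initrd >> /boot/instrumented_initrd
  # Then point the bootloader at /boot/instrumented_initrd.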

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-20 22:41               ` Ross Zwisler
@ 2017-12-21 20:31                 ` Brice Goglin
  2017-12-22 22:53                   ` Dan Williams
  0 siblings, 1 reply; 42+ messages in thread
From: Brice Goglin @ 2017-12-21 20:31 UTC (permalink / raw)
  To: Ross Zwisler, Dan Williams
  Cc: Matthew Wilcox, Dave Hansen, Michal Hocko, linux-kernel,
	Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, Linux ACPI, Linux MM, linux-nvdimm,
	Linux API, linuxppc-dev

Le 20/12/2017 à 23:41, Ross Zwisler a écrit :
> On Wed, Dec 20, 2017 at 02:29:56PM -0800, Dan Williams wrote:
>> On Wed, Dec 20, 2017 at 1:24 PM, Ross Zwisler
>> <ross.zwisler@linux.intel.com> wrote:
>>> On Wed, Dec 20, 2017 at 01:16:49PM -0800, Matthew Wilcox wrote:
>>>> On Wed, Dec 20, 2017 at 12:22:21PM -0800, Dave Hansen wrote:
>>>>> On 12/20/2017 10:19 AM, Matthew Wilcox wrote:
>>>>>> I don't know what the right interface is, but my laptop has a set of
>>>>>> /sys/devices/system/memory/memoryN/ directories.  Perhaps this is the
>>>>>> right place to expose write_bw (etc).
>>>>> Those directories are already too redundant and wasteful.  I think we'd
>>>>> really rather not add to them.  In addition, it's technically possible
>>>>> to have a memory section span NUMA nodes and have different performance
>>>>> properties, which make it impossible to represent there.
>>>>>
>>>>> In any case, ACPI PXM's (Proximity Domains) are guaranteed to have
>>>>> uniform performance properties in the HMAT, and we just so happen to
>>>>> always create one NUMA node per PXM.  So, NUMA nodes really are a good fit.
>>>> I think you're missing my larger point which is that I don't think this
>>>> should be exposed to userspace as an ACPI feature.  Because if you do,
>>>> then it'll also be exposed to userspace as an openfirmware feature.
>>>> And sooner or later a devicetree feature.  And then writing a portable
>>>> program becomes an exercise in suffering.
>>>>
>>>> So, what's the right place in sysfs that isn't tied to ACPI?  A new
>>>> directory or set of directories under /sys/devices/system/memory/ ?
>>> Oh, the current location isn't at all tied to acpi except that it happens to
>>> be named 'hmat'.  When it was all named 'hmem' it was just:
>>>
>>> /sys/devices/system/hmem
>>>
>>> Which has no ACPI-isms at all.  I'm happy to move it under
>>> /sys/devices/system/memory/hmat if that's helpful, but I think we still have
>>> the issue that the data represented therein is still pulled right from the
>>> HMAT, and I don't know how to abstract it into something more platform
>>> agnostic until I know what data is provided by those other platforms.
>>>
>>> For example, the HMAT provides latency information and bandwidth information
>>> for both reads and writes.  Will the devicetree/openfirmware/etc version have
>>> this same info, or will it be just different enough that it won't translate
>>> into whatever I choose to stick in sysfs?
>> For the initial implementation do we need to have a representation of
>> all the performance data? Given that
>> /sys/devices/system/node/nodeX/distance is the only generic
>> performance attribute published by the kernel today it is already the
>> case that applications that need to target specific memories need to
>> go parse information that is not provided by the kernel by default.
>> The question is can those specialized applications stay special and go
>> parse the platform specific data sources, like raw HMAT, directly, or
>> do we expect general purpose applications to make use of this data? I
>> think a firmware-id to numa-node translation facility
>> (/sys/devices/system/node/nodeX/fwid) is a simple start that we can
>> build on with more information as specific use cases arise.
> We don't represent all the performance data, we only represent the data for
> local initiator/target pairs.  I do think that this is useful to have in sysfs
> because it provides a way to easily answer the most commonly asked questions
> (or at least what I'm guessing will be the most commonly asked questions),
> i.e. "given a CPU, what are the speeds of the various types of memory attached
> to it", and "given a chunk of memory, how fast is it and to which CPU is it
> local"?  By providing this base level of information I'm hoping to prevent
> most applications from having to parse the HMAT directly.
>
> The question of whether or not to include this local performance information
> was one of the main questions of the initial RFC patch series, and I did get
> feedback (albeit off-list) that the local performance information was
> valuable to at least some users.  I did intentionally structure my (now very
> short) set so that the performance information was added as a separate patch,
> so we can get to the place you're talking about where we only provide firmware
> id <=> proximity domain mappings by just leaving off the last patch in the
> series.
>

Hello

I can confirm that HPC runtimes are going to use these patches (at least
all runtimes that use hwloc for topology discovery, but that's the vast
majority of HPC anyway).

We really didn't like KNL exposing a hacky SLIT table [1]. We had to
explicitly detect that specific crazy table to find out which NUMA nodes
were local to which cores, and to find out which NUMA nodes were
HBM/MCDRAM or DDR. And then we had to hide the SLIT values from the
application because the reported latencies didn't match reality. Quite
annoying.

With Ross' patches, we can easily get what we need:
* which NUMA nodes are local to which CPUs? /sys/devices/system/node/
can only report a single local node per CPU (doesn't work for KNL and
upcoming architectures with HBM+DDR+...)
* which NUMA nodes are slow/fast (for both bandwidth and latency)
And we can still look at SLIT under /sys/devices/system/node if really
needed.

And of course having this in sysfs is much better than parsing ACPI
tables that are only accessible to root :)

Regards
Brice

[1] local DDR = 10, remote DDR = 20, local HBM = 31, remote HBM = 41
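
As a rough illustration of the kind of discovery hwloc could do with this
interface, assuming the mem_init*/mem_tgt* symlink layout described in the
cover letter (the exact names are an assumption and may differ):

  # Which memory targets are local to which initiator (CPU) node?
  for init in /sys/devices/system/hmat/mem_init*; do
          tgts=$(ls -d "$init"/mem_tgt* 2>/dev/null | xargs -r -n1 basename | xargs)
          echo "$(basename "$init"): ${tgts:-none}"
  done

  # The raw table, by contrast, is typically readable only by root:
  ls -l /sys/firmware/acpi/tables/HMAT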

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-14  2:10 [PATCH v3 0/3] create sysfs representation of ACPI HMAT Ross Zwisler
                   ` (3 preceding siblings ...)
  2017-12-14 13:00 ` [PATCH v3 0/3] create sysfs representation of ACPI HMAT Michal Hocko
@ 2017-12-22  3:09 ` Anshuman Khandual
  2017-12-22 10:31   ` Kogut, Jaroslaw
                     ` (3 more replies)
  4 siblings, 4 replies; 42+ messages in thread
From: Anshuman Khandual @ 2017-12-22  3:09 UTC (permalink / raw)
  To: Ross Zwisler, linux-kernel
  Cc: Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

On 12/14/2017 07:40 AM, Ross Zwisler wrote:
> ==== Quick Summary ====
> 
> Platforms exist today which have multiple types of memory attached to a
> single CPU.  These disparate memory ranges have some characteristics in
> common, such as CPU cache coherence, but they can have wide ranges of
> performance both in terms of latency and bandwidth.

Right.

> 
> For example, consider a system that contains persistent memory, standard
> DDR memory and High Bandwidth Memory (HBM), all attached to the same CPU.
> There could potentially be an order of magnitude or more difference in
> performance between the slowest and fastest memory attached to that CPU.

Right.

> 
> With the current Linux code NUMA nodes are CPU-centric, so all the memory
> attached to a given CPU will be lumped into the same NUMA node.  This makes
> it very difficult for userspace applications to understand the performance
> of different memory ranges on a given CPU.

Right, but that might require fundamental changes to the NUMA representation.
Plugging those memories in as separate NUMA nodes, identifying them through
sysfs and trying to allocate from them through mbind() seems like a
short-term solution.

Though if we decide to go in this direction, a sysfs interface or something
similar is required to enumerate memory properties.

> 
> We solve this issue by providing userspace with performance information on
> individual memory ranges.  This performance information is exposed via
> sysfs:
> 
>   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
>   mem_tgt2/firmware_id:1
>   mem_tgt2/is_cached:0
>   mem_tgt2/local_init/read_bw_MBps:40960
>   mem_tgt2/local_init/read_lat_nsec:50
>   mem_tgt2/local_init/write_bw_MBps:40960
>   mem_tgt2/local_init/write_lat_nsec:50

I might have missed discussions from earlier versions: why do we have this
kind of "source --> target" model?  Will we enlist properties for all
possible "source --> target" pairs on the system?  Right now it shows only
bandwidth and latency properties; can it accommodate other properties
as well in the future?

> 
> This allows applications to easily find the memory that they want to use.
> We expect that the existing NUMA APIs will be enhanced to use this new
> information so that applications can continue to use them to select their
> desired memory.

I had presented a proposal for a NUMA redesign at the Plumbers Conference this
year where various memory devices with different kinds of memory attributes
can be represented in the kernel and be used explicitly from user space.
Here is the link to the proposal if you are interested.  The proposal is
very intrusive and I don't have an RFC for it yet for discussion here.

https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/Hierarchical_NUMA_Design_Plumbers_2017.pdf

The problem is that designing the sysfs interface for memory attribute
detection from user space without first thinking about redesigning NUMA for
heterogeneous memory may not be a good idea.  I will look into this further.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* RE: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22  3:09 ` Anshuman Khandual
@ 2017-12-22 10:31   ` Kogut, Jaroslaw
  2017-12-22 14:37     ` Anshuman Khandual
  2017-12-22 17:13   ` Dave Hansen
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 42+ messages in thread
From: Kogut, Jaroslaw @ 2017-12-22 10:31 UTC (permalink / raw)
  To: Anshuman Khandual, linux-kernel, Ross Zwisler
  Cc: Anaczkowski, Lukasz, Box, David E, Koss, Marcin, Koziej, Artur,
	Lahtinen, Joonas, Moore, Robert, Nachimuthu, Murugasamy, Odzioba,
	Lukasz, Wysocki, Rafael J, Rafael J. Wysocki, Schmauss, Erik,
	Verma, Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh,
	Brice Goglin, Williams, Dan J, Hansen, Dave, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, linux-acpi, linux-mm,
	linux-nvdimm

> ... first thinking about redesigning the NUMA for
> heterogeneous memory may not be a good idea. Will look into this further.

I agree with the comment that a direction should first be defined for how to handle a heterogeneous memory system.

> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/
> Hierarchical_NUMA_Design_Plumbers_2017.pdf

What I miss in the presentation is a user perspective on the new approach, e.g.:
- How does an application developer see/understand the heterogeneous memory system?
- How does an app developer use the heterogeneous memory system?
- What are the modifications in API/sys interfaces?

On the other hand, if we assume that a separate memory NUMA node has different memory capabilities/attributes from the standpoint of a particular CPU, it is easy to explain to the user how to describe/handle heterogeneous memory.

Of course, the current NUMA design in the kernel is not sufficient in the following areas today:
- Exposing memory attributes that describe a heterogeneous memory system
- Interfaces to use the heterogeneous memory system, e.g. more sophisticated policies
- Internal mechanisms in memory management, e.g. automigration, maybe something else.

> -----Original Message-----
> From: Anshuman Khandual [mailto:khandual@linux.vnet.ibm.com]
> Sent: Friday, December 22, 2017 4:10 AM
> To: Ross Zwisler <ross.zwisler@linux.intel.com>; linux-kernel@vger.kernel.org
> Cc: Anaczkowski, Lukasz <lukasz.anaczkowski@intel.com>; Box, David E
> <david.e.box@intel.com>; Kogut, Jaroslaw <Jaroslaw.Kogut@intel.com>; Koss,
> Marcin <marcin.koss@intel.com>; Koziej, Artur <artur.koziej@intel.com>;
> Lahtinen, Joonas <joonas.lahtinen@intel.com>; Moore, Robert
> <robert.moore@intel.com>; Nachimuthu, Murugasamy
> <murugasamy.nachimuthu@intel.com>; Odzioba, Lukasz
> <lukasz.odzioba@intel.com>; Wysocki, Rafael J <rafael.j.wysocki@intel.com>;
> Rafael J. Wysocki <rjw@rjwysocki.net>; Schmauss, Erik
> <erik.schmauss@intel.com>; Verma, Vishal L <vishal.l.verma@intel.com>;
> Zheng, Lv <lv.zheng@intel.com>; Andrew Morton <akpm@linux-
> foundation.org>; Balbir Singh <bsingharora@gmail.com>; Brice Goglin
> <brice.goglin@gmail.com>; Williams, Dan J <dan.j.williams@intel.com>;
> Hansen, Dave <dave.hansen@intel.com>; Jerome Glisse <jglisse@redhat.com>;
> John Hubbard <jhubbard@nvidia.com>; Len Brown <lenb@kernel.org>; Tim
> Chen <tim.c.chen@linux.intel.com>; devel@acpica.org; linux-
> acpi@vger.kernel.org; linux-mm@kvack.org; linux-nvdimm@lists.01.org
> Subject: Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
> 
> On 12/14/2017 07:40 AM, Ross Zwisler wrote:
> > ==== Quick Summary ====
> >
> > Platforms exist today which have multiple types of memory attached to
> > a single CPU.  These disparate memory ranges have some characteristics
> > in common, such as CPU cache coherence, but they can have wide ranges
> > of performance both in terms of latency and bandwidth.
> 
> Right.
> 
> >
> > For example, consider a system that contains persistent memory,
> > standard DDR memory and High Bandwidth Memory (HBM), all attached to
> the same CPU.
> > There could potentially be an order of magnitude or more difference in
> > performance between the slowest and fastest memory attached to that CPU.
> 
> Right.
> 
> >
> > With the current Linux code NUMA nodes are CPU-centric, so all the
> > memory attached to a given CPU will be lumped into the same NUMA node.
> > This makes it very difficult for userspace applications to understand
> > the performance of different memory ranges on a given CPU.
> 
> Right but that might require fundamental changes to the NUMA
> representation.
> Plugging those memory as separate NUMA nodes, identify them through sysfs
> and try allocating from it through mbind() seems like a short term solution.
> 
> Though if we decide to go in this direction, sysfs interface or something similar
> is required to enumerate memory properties.
> 
> >
> > We solve this issue by providing userspace with performance
> > information on individual memory ranges.  This performance information
> > is exposed via
> > sysfs:
> >
> >   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
> >   mem_tgt2/firmware_id:1
> >   mem_tgt2/is_cached:0
> >   mem_tgt2/local_init/read_bw_MBps:40960
> >   mem_tgt2/local_init/read_lat_nsec:50
> >   mem_tgt2/local_init/write_bw_MBps:40960
> >   mem_tgt2/local_init/write_lat_nsec:50
> 
> I might have missed discussions from earlier versions, why we have this kind of
> a "source --> target" model ? We will enlist properties for all possible "source --
> > target" on the system ? Right now it shows only bandwidth and latency
> properties, can it accommodate other properties as well in future ?
> 
> >
> > This allows applications to easily find the memory that they want to use.
> > We expect that the existing NUMA APIs will be enhanced to use this new
> > information so that applications can continue to use them to select
> > their desired memory.
> 
> I had presented a proposal for NUMA redesign in the Plumbers Conference this
> year where various memory devices with different kind of memory attributes
> can be represented in the kernel and be used explicitly from the user space.
> Here is the link to the proposal if you feel interested. The proposal is very
> intrusive and also I dont have a RFC for it yet for discussion here.
> 
> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/
> Hierarchical_NUMA_Design_Plumbers_2017.pdf
> 
> Problem is, designing the sysfs interface for memory attribute detection from
> user space without first thinking about redesigning the NUMA for
> heterogeneous memory may not be a good idea. Will look into this further.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 10:31   ` Kogut, Jaroslaw
@ 2017-12-22 14:37     ` Anshuman Khandual
  0 siblings, 0 replies; 42+ messages in thread
From: Anshuman Khandual @ 2017-12-22 14:37 UTC (permalink / raw)
  To: Kogut, Jaroslaw, Anshuman Khandual, linux-kernel, Ross Zwisler
  Cc: Anaczkowski, Lukasz, Box, David E, Koss, Marcin, Koziej, Artur,
	Lahtinen, Joonas, Moore, Robert, Nachimuthu, Murugasamy, Odzioba,
	Lukasz, Wysocki, Rafael J, Rafael J. Wysocki, Schmauss, Erik,
	Verma, Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh,
	Brice Goglin, Williams, Dan J, Hansen, Dave, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, linux-acpi, linux-mm,
	linux-nvdimm

On 12/22/2017 04:01 PM, Kogut, Jaroslaw wrote:
>> ... first thinking about redesigning the NUMA for
>> heterogeneous memory may not be a good idea. Will look into this further.
> I agree with comment that first a direction should be defined how to handle heterogeneous memory system.
> 
>> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/
>> Hierarchical_NUMA_Design_Plumbers_2017.pdf
> I miss in the presentation a user perspective of the new approach, e.g.
> - How does application developer see/understand the heterogeneous memory system?

From a user perspective:

- Each memory node (with or without CPU) is a NUMA node with attributes
- User should detect these NUMA nodes from sysfs (not part of proposal)
- User allocates/operates/destroys VMA with new sys calls (_mattr based)

> - How does app developer use the heterogeneous memory system?

- Through existing and new system calls

> - What are modification in API/sys interfaces?

- The presentation proposes the possible addition of new system calls with a
  'u64 _mattr' representation for memory attributes, which can be used while
  requesting different kinds of memory from the kernel

> 
> In other hand, if we assume that separate memory NUMA node has different memory capabilities/attributes from stand point of particular CPU, it is easy to explain for user how to describe/handle heterogeneous memory. 
> 
> Of course, current numa design is not sufficient in kernel in following areas today:
> - Exposing memory attributes that describe heterogeneous memory system
> - Interfaces to use the heterogeneous memory system, e.g. more sophisticated policies
> - Internal mechanism in memory management, e.g. automigration, maybe something else.

Right, we would need

- Representation of NUMA with attributes
- APIs/syscalls for accessing the intended memory from user space
- Memory management policies and algorithms navigating through all these
  new attributes in various situations

IMHO, we should not consider sysfs interfaces for heterogeneous memory
(which will be an ABI going forward and hence cannot be changed easily)
before we get the NUMA redesign right.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22  3:09 ` Anshuman Khandual
  2017-12-22 10:31   ` Kogut, Jaroslaw
@ 2017-12-22 17:13   ` Dave Hansen
  2017-12-23  5:14     ` Anshuman Khandual
  2017-12-22 22:13   ` Ross Zwisler
  2017-12-22 22:31   ` Ross Zwisler
  3 siblings, 1 reply; 42+ messages in thread
From: Dave Hansen @ 2017-12-22 17:13 UTC (permalink / raw)
  To: Anshuman Khandual, Ross Zwisler, linux-kernel
  Cc: Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Jerome Glisse, John Hubbard, Len Brown, Tim Chen, devel,
	linux-acpi, linux-mm, linux-nvdimm

On 12/21/2017 07:09 PM, Anshuman Khandual wrote:
> I had presented a proposal for NUMA redesign in the Plumbers Conference this
> year where various memory devices with different kind of memory attributes
> can be represented in the kernel and be used explicitly from the user space.
> Here is the link to the proposal if you feel interested. The proposal is
> very intrusive and also I dont have a RFC for it yet for discussion here.

I think that's the best reason to "re-use NUMA" for this: it's _not_
intrusive.

Also, from an x86 perspective, these HMAT systems *will* be out there.
Old versions of Linux *will* see different types of memory as separate
NUMA nodes.  So, if we are going to do something different, it's going
to be interesting to un-teach those systems about using the NUMA APIs
for this.  That ship has sailed.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-21  1:41         ` Elliott, Robert (Persistent Memory)
@ 2017-12-22 21:46           ` Ross Zwisler
  0 siblings, 0 replies; 42+ messages in thread
From: Ross Zwisler @ 2017-12-22 21:46 UTC (permalink / raw)
  To: Elliott, Robert (Persistent Memory)
  Cc: Ross Zwisler, Matthew Wilcox, Michal Hocko, Box, David E,
	Dave Hansen, Zheng, Lv, linux-nvdimm, Rafael J. Wysocki,
	Anaczkowski, Lukasz, Moore, Robert, linux-acpi, Odzioba, Lukasz,
	Schmauss, Erik, Len Brown, John Hubbard, linuxppc-dev,
	Jerome Glisse, devel, Kogut, Jaroslaw, linux-mm, Koss, Marcin,
	linux-api, Brice Goglin, Nachimuthu, Murugasamy,
	Rafael J. Wysocki, linux-kernel, Koziej, Artur, Lahtinen, Joonas,
	Andrew Morton, Tim Chen

On Thu, Dec 21, 2017 at 01:41:15AM +0000, Elliott, Robert (Persistent Memory) wrote:
> 
> 
> > -----Original Message-----
> > From: Linux-nvdimm [mailto:linux-nvdimm-bounces@lists.01.org] On Behalf Of
> > Ross Zwisler
> ...
> > 
> > On Wed, Dec 20, 2017 at 10:19:37AM -0800, Matthew Wilcox wrote:
> ...
> > > initiator is a CPU?  I'd have expected you to expose a memory controller
> > > abstraction rather than re-use storage terminology.
> > 
> > Yea, I agree that at first blush it seems weird.  It turns out that
> > looking at it in sort of a storage initiator/target way is beneficial,
> > though, because it allows us to cut down on the number of data values
> > we need to represent.
> > 
> > For example the SLIT, which doesn't differentiate between initiator and
> > target proximity domains (and thus nodes) always represents a system
> > with N proximity domains using a NxN distance table.  This makes sense
> > if every node contains both CPUs and memory.
> > 
> > With the introduction of the HMAT, though, we can have memory-only
> > nodes and we can explicitly associate them with their local
> > CPU.  This is necessary so that we can separate memory with different
> > performance characteristics (HBM vs normal memory vs persistent memory,
> > for example) that are all attached to the same CPU.
> > 
> > So, say we now have a system with 4 CPUs, and each of those CPUs has 3
> > different types of memory attached to it.  We now have 16 total proximity
> > domains, 4 CPU and 12 memory.
> 
> The CPU cores that make up a node can have performance restrictions of
> their own; for example, they might max out at 10 GB/s even though the
> memory controller supports 120 GB/s (meaning you need to use 12 cores
> on the node to fully exercise memory).  It'd be helpful to report this,
> so software can decide how many cores to use for bandwidth-intensive work.
> 
> > If we represent this with the SLIT we end up with a 16 X 16 distance table
> > (256 entries), most of which don't matter because they are memory-to-
> > memory distances which don't make sense.
> > 
> > In the HMAT, though, we separate out the initiators and the targets and
> > put them into separate lists.  (See 5.2.27.4 System Locality Latency and
> > Bandwidth Information Structure in ACPI 6.2 for details.)  So, this same
> > config in the HMAT only has 4*12=48 performance values of each type, all
> > of which convey meaningful information.
> > 
> > The HMAT indeed even uses the storage "initiator" and "target"
> > terminology. :)
> 
> Centralized DMA engines (e.g., as used by the "DMA based blk-mq pmem
> driver") have performance differences too.  A CPU might include
> CPU cores that reach 10 GB/s, DMA engines that reach 60 GB/s, and
> memory controllers that reach 120 GB/s.  I guess these would be
> represented as extra initiators on the node?

For both of your comments I think all of this comes down to how you want to
represent your platform in the HMAT.  The sysfs representation just shows you
what is in the HMAT.

Each initiator node is just a single NUMA node (think of it as a NUMA node
which has the characteristic that it can initiate memory requests), so I don't
think there is a way to have "extra initiators on the node".  I think what
you're talking about is separating the DMA engines and CPU cores into separate
NUMA nodes, both of which are initiators.  I think this is probably fine as it
conveys useful info.

I don't think the HMAT has a concept of bandwidth scaling with the number of
CPU cores used - it just has a single bandwidth number (well, one for read and
one for write) per initiator/target pair.  I don't think we want to add this,
either - the HMAT is already very complex.
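
To spell out the sizing from the quoted example above (4 CPU proximity
domains, each with 3 kinds of memory, i.e. 12 memory-target domains), a
throwaway calculation:

  initiators=4
  targets=12
  # A SLIT-style square matrix covers every domain pair:
  echo "SLIT-style entries: $(( (initiators + targets) * (initiators + targets) ))"   # 256
  # HMAT-style separate initiator/target lists only need the cross product:
  echo "HMAT entries per attribute: $(( initiators * targets ))"                      # 48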

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22  3:09 ` Anshuman Khandual
  2017-12-22 10:31   ` Kogut, Jaroslaw
  2017-12-22 17:13   ` Dave Hansen
@ 2017-12-22 22:13   ` Ross Zwisler
  2017-12-23  6:56     ` Anshuman Khandual
  2017-12-22 22:31   ` Ross Zwisler
  3 siblings, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-22 22:13 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Ross Zwisler, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, linux-acpi, linux-mm, linux-nvdimm

On Fri, Dec 22, 2017 at 08:39:41AM +0530, Anshuman Khandual wrote:
> On 12/14/2017 07:40 AM, Ross Zwisler wrote:
> > ==== Quick Summary ====
> > 
> > Platforms exist today which have multiple types of memory attached to a
> > single CPU.  These disparate memory ranges have some characteristics in
> > common, such as CPU cache coherence, but they can have wide ranges of
> > performance both in terms of latency and bandwidth.
> 
> Right.
> 
> > 
> > For example, consider a system that contains persistent memory, standard
> > DDR memory and High Bandwidth Memory (HBM), all attached to the same CPU.
> > There could potentially be an order of magnitude or more difference in
> > performance between the slowest and fastest memory attached to that CPU.
> 
> Right.
> 
> > 
> > With the current Linux code NUMA nodes are CPU-centric, so all the memory
> > attached to a given CPU will be lumped into the same NUMA node.  This makes
> > it very difficult for userspace applications to understand the performance
> > of different memory ranges on a given CPU.
> 
> Right but that might require fundamental changes to the NUMA representation.
> Plugging those memory as separate NUMA nodes, identify them through sysfs
> and try allocating from it through mbind() seems like a short term solution.
> 
> Though if we decide to go in this direction, sysfs interface or something
> similar is required to enumerate memory properties.

Yep, and this patch series is trying to be the sysfs interface that is
required to enumerate the memory properties.  :)  It's a certainty that we
will have memory-only NUMA nodes, at least on platforms that support ACPI.
Memory-only proximity domains (which Linux turns into memory-only NUMA nodes)
are explicitly supported with the introduction of the HMAT in ACPI 6.2.

It also turns out that the existing memory management code already deals with
them just fine - you see this with my hmat_examples setup:

https://github.com/rzwisler/hmat_examples

Both configurations created by this repo create memory-only NUMA nodes, even
with upstream kernels.  My patches don't change that, they just provide a
sysfs representation of the HMAT so users can discover the memory that exists
in the system.
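
For what it's worth, memory-only nodes are easy to spot on a running system
even without these patches; a quick sketch using only standard node sysfs
files:

  # Nodes whose cpulist is empty have memory but no CPUs.
  for n in /sys/devices/system/node/node[0-9]*; do
          [ -z "$(cat "$n"/cpulist)" ] && echo "$(basename "$n") is memory-only"
  done

What the HMAT sysfs adds on top is the performance data tying those nodes to
their local initiators.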

> > We solve this issue by providing userspace with performance information on
> > individual memory ranges.  This performance information is exposed via
> > sysfs:
> > 
> >   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
> >   mem_tgt2/firmware_id:1
> >   mem_tgt2/is_cached:0
> >   mem_tgt2/local_init/read_bw_MBps:40960
> >   mem_tgt2/local_init/read_lat_nsec:50
> >   mem_tgt2/local_init/write_bw_MBps:40960
> >   mem_tgt2/local_init/write_lat_nsec:50
> 
> I might have missed discussions from earlier versions, why we have this
> kind of a "source --> target" model ? We will enlist properties for all
> possible "source --> target" on the system ? Right now it shows only
> bandwidth and latency properties, can it accommodate other properties
> as well in future ?

The initiator/target model is useful in preventing us from needing a
MAX_NUMA_NODES x MAX_NUMA_NODES sized table for each performance attribute.  I
talked about it a little more here:

https://lists.01.org/pipermail/linux-nvdimm/2017-December/013654.html

> > This allows applications to easily find the memory that they want to use.
> > We expect that the existing NUMA APIs will be enhanced to use this new
> > information so that applications can continue to use them to select their
> > desired memory.
> 
> I had presented a proposal for NUMA redesign in the Plumbers Conference this
> year where various memory devices with different kind of memory attributes
> can be represented in the kernel and be used explicitly from the user space.
> Here is the link to the proposal if you feel interested. The proposal is
> very intrusive and also I dont have a RFC for it yet for discussion here.
> 
> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/Hierarchical_NUMA_Design_Plumbers_2017.pdf

I'll take a look, but my first reaction is that I agree with Dave that it
seems hard to re-teach systems a new NUMA scheme.  This patch series doesn't
attempt to do that - it is very unintrusive and only informs users about the
memory-only NUMA nodes that will already exist in their ACPI-based systems.

> Problem is, designing the sysfs interface for memory attribute detection
> from user space without first thinking about redesigning the NUMA for
> heterogeneous memory may not be a good idea. Will look into this further.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22  3:09 ` Anshuman Khandual
                     ` (2 preceding siblings ...)
  2017-12-22 22:13   ` Ross Zwisler
@ 2017-12-22 22:31   ` Ross Zwisler
  2017-12-25  2:05     ` Liubo(OS Lab)
  3 siblings, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-22 22:31 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Ross Zwisler, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Brice Goglin,
	Dan Williams, Dave Hansen, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, linux-acpi, linux-mm, linux-nvdimm

On Fri, Dec 22, 2017 at 08:39:41AM +0530, Anshuman Khandual wrote:
> On 12/14/2017 07:40 AM, Ross Zwisler wrote:
<>
> > We solve this issue by providing userspace with performance information on
> > individual memory ranges.  This performance information is exposed via
> > sysfs:
> > 
> >   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
> >   mem_tgt2/firmware_id:1
> >   mem_tgt2/is_cached:0
> >   mem_tgt2/local_init/read_bw_MBps:40960
> >   mem_tgt2/local_init/read_lat_nsec:50
> >   mem_tgt2/local_init/write_bw_MBps:40960
> >   mem_tgt2/local_init/write_lat_nsec:50
<>
> We will enlist properties for all possible "source --> target" on the system?

Nope, just 'local' initiator/target pairs.  I talk about the reasoning for
this in the cover letter for patch 3:

https://lists.01.org/pipermail/linux-nvdimm/2017-December/013574.html

> Right now it shows only bandwidth and latency properties, can it accommodate
> other properties as well in future ?

We also have an 'is_cached' attribute for the memory targets if they are
involved in a caching hierarchy, but right now those are all the things we
expose.  We can potentially expose whatever we want that is present in the
HMAT, but those seemed like a good start.

I noticed that in your presentation you had some other examples of attributes
you cared about:

 * reliability
 * power consumption
 * density

The HMAT doesn't provide this sort of information at present, but we
could/would add them to sysfs if the HMAT ever grew support for them.

> > This allows applications to easily find the memory that they want to use.
> > We expect that the existing NUMA APIs will be enhanced to use this new
> > information so that applications can continue to use them to select their
> > desired memory.
> 
> I had presented a proposal for NUMA redesign in the Plumbers Conference this
> year where various memory devices with different kind of memory attributes
> can be represented in the kernel and be used explicitly from the user space.
> Here is the link to the proposal if you feel interested. The proposal is
> very intrusive and also I dont have a RFC for it yet for discussion here.
> 
> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/Hierarchical_NUMA_Design_Plumbers_2017.pdf
> 
> Problem is, designing the sysfs interface for memory attribute detection
> from user space without first thinking about redesigning the NUMA for
> heterogeneous memory may not be a good idea. Will look into this further.

I took another look at your presentation, and overall I think that if/when a
NUMA redesign like this takes place, ACPI systems with HMAT tables will be able
to participate.  But I think we are probably a ways away from that, and like I
said in my previous mail, ACPI systems with memory-only NUMA nodes are going to
exist and need to be supported with the current NUMA scheme.  Hence I don't
think that this patch series conflicts with your proposal.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-21 20:31                 ` Brice Goglin
@ 2017-12-22 22:53                   ` Dan Williams
  2017-12-22 23:22                     ` Ross Zwisler
  2017-12-27  9:10                     ` Brice Goglin
  0 siblings, 2 replies; 42+ messages in thread
From: Dan Williams @ 2017-12-22 22:53 UTC (permalink / raw)
  To: Brice Goglin
  Cc: Ross Zwisler, Matthew Wilcox, Dave Hansen, Michal Hocko,
	linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, Linux ACPI, Linux MM, linux-nvdimm,
	Linux API, linuxppc-dev

On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.goglin@gmail.com> wrote:
> Le 20/12/2017 à 23:41, Ross Zwisler a écrit :
[..]
> Hello
>
> I can confirm that HPC runtimes are going to use these patches (at least
> all runtimes that use hwloc for topology discovery, but that's the vast
> majority of HPC anyway).
>
> We really didn't like KNL exposing a hacky SLIT table [1]. We had to
> explicitly detect that specific crazy table to find out which NUMA nodes
> were local to which cores, and to find out which NUMA nodes were
> HBM/MCDRAM or DDR. And then we had to hide the SLIT values to the
> application because the reported latencies didn't match reality. Quite
> annoying.
>
> With Ross' patches, we can easily get what we need:
> * which NUMA nodes are local to which CPUs? /sys/devices/system/node/
> can only report a single local node per CPU (doesn't work for KNL and
> upcoming architectures with HBM+DDR+...)
> * which NUMA nodes are slow/fast (for both bandwidth and latency)
> And we can still look at SLIT under /sys/devices/system/node if really
> needed.
>
> And of course having this in sysfs is much better than parsing ACPI
> tables that are only accessible to root :)

On this point, it's not clear to me that we should allow these sysfs
entries to be world readable. Given /proc/iomem now hides physical
address information from non-root we at least need to be careful not
to undo that with new sysfs HMAT attributes. Once you need to be root
for this info, is parsing binary HMAT vs sysfs a blocker for the HPC
use case?

Perhaps we can enlist /proc/iomem or a similar enumeration interface
to tell userspace the NUMA node and whether the kernel thinks it has
better or worse performance characteristics relative to base
system-RAM, i.e. new IORES_DESC_* values. I'm worried that if we start
publishing absolute numbers in sysfs userspace will default to looking
for specific magic numbers in sysfs vs asking the kernel for memory
that has performance characteristics relative to base "System RAM". In
other words the absolute performance information that the HMAT
publishes is useful to the kernel, but it's not clear that userspace
needs that vs a relative indicator for making NUMA node preference
decisions.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 22:53                   ` Dan Williams
@ 2017-12-22 23:22                     ` Ross Zwisler
  2017-12-22 23:57                       ` Dan Williams
  2017-12-27  9:10                     ` Brice Goglin
  1 sibling, 1 reply; 42+ messages in thread
From: Ross Zwisler @ 2017-12-22 23:22 UTC (permalink / raw)
  To: Dan Williams
  Cc: Brice Goglin, Ross Zwisler, Matthew Wilcox, Dave Hansen,
	Michal Hocko, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, Linux ACPI, Linux MM,
	linux-nvdimm, Linux API, linuxppc-dev

On Fri, Dec 22, 2017 at 02:53:42PM -0800, Dan Williams wrote:
> On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.goglin@gmail.com> wrote:
> > Le 20/12/2017 à 23:41, Ross Zwisler a écrit :
> [..]
> > Hello
> >
> > I can confirm that HPC runtimes are going to use these patches (at least
> > all runtimes that use hwloc for topology discovery, but that's the vast
> > majority of HPC anyway).
> >
> > We really didn't like KNL exposing a hacky SLIT table [1]. We had to
> > explicitly detect that specific crazy table to find out which NUMA nodes
> > were local to which cores, and to find out which NUMA nodes were
> > HBM/MCDRAM or DDR. And then we had to hide the SLIT values to the
> > application because the reported latencies didn't match reality. Quite
> > annoying.
> >
> > With Ross' patches, we can easily get what we need:
> > * which NUMA nodes are local to which CPUs? /sys/devices/system/node/
> > can only report a single local node per CPU (doesn't work for KNL and
> > upcoming architectures with HBM+DDR+...)
> > * which NUMA nodes are slow/fast (for both bandwidth and latency)
> > And we can still look at SLIT under /sys/devices/system/node if really
> > needed.
> >
> > And of course having this in sysfs is much better than parsing ACPI
> > tables that are only accessible to root :)
> 
> On this point, it's not clear to me that we should allow these sysfs
> entries to be world readable. Given /proc/iomem now hides physical
> address information from non-root we at least need to be careful not
> to undo that with new sysfs HMAT attributes.

This enabling does not expose any physical addresses to userspace.  It only
provides performance numbers from the HMAT and associates them with existing
NUMA nodes.  Are you worried that exposing performance numbers to non-root
users via sysfs poses a security risk?

> Once you need to be root for this info, is parsing binary HMAT vs sysfs a
> blocker for the HPC use case?
> 
> Perhaps we can enlist /proc/iomem or a similar enumeration interface
> to tell userspace the NUMA node and whether the kernel thinks it has
> better or worse performance characteristics relative to base
> system-RAM, i.e. new IORES_DESC_* values. I'm worried that if we start
> publishing absolute numbers in sysfs userspace will default to looking
> for specific magic numbers in sysfs vs asking the kernel for memory
> that has performance characteristics relative to base "System RAM". In
> other words the absolute performance information that the HMAT
> publishes is useful to the kernel, but it's not clear that userspace
> needs that vs a relative indicator for making NUMA node preference
> decisions.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 23:22                     ` Ross Zwisler
@ 2017-12-22 23:57                       ` Dan Williams
  2017-12-23  1:14                         ` Rafael J. Wysocki
  0 siblings, 1 reply; 42+ messages in thread
From: Dan Williams @ 2017-12-22 23:57 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Brice Goglin, Matthew Wilcox, Dave Hansen, Michal Hocko,
	linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, Linux ACPI, Linux MM, linux-nvdimm,
	Linux API, linuxppc-dev

On Fri, Dec 22, 2017 at 3:22 PM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> On Fri, Dec 22, 2017 at 02:53:42PM -0800, Dan Williams wrote:
>> On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.goglin@gmail.com> wrote:
>> > Le 20/12/2017 à 23:41, Ross Zwisler a écrit :
>> [..]
>> > Hello
>> >
>> > I can confirm that HPC runtimes are going to use these patches (at least
>> > all runtimes that use hwloc for topology discovery, but that's the vast
>> > majority of HPC anyway).
>> >
>> > We really didn't like KNL exposing a hacky SLIT table [1]. We had to
>> > explicitly detect that specific crazy table to find out which NUMA nodes
>> > were local to which cores, and to find out which NUMA nodes were
>> > HBM/MCDRAM or DDR. And then we had to hide the SLIT values from the
>> > application because the reported latencies didn't match reality. Quite
>> > annoying.
>> >
>> > With Ross' patches, we can easily get what we need:
>> > * which NUMA nodes are local to which CPUs? /sys/devices/system/node/
>> > can only report a single local node per CPU (doesn't work for KNL and
>> > upcoming architectures with HBM+DDR+...)
>> > * which NUMA nodes are slow/fast (for both bandwidth and latency)
>> > And we can still look at SLIT under /sys/devices/system/node if really
>> > needed.
>> >
>> > And of course having this in sysfs is much better than parsing ACPI
>> > tables that are only accessible to root :)
>>
>> On this point, it's not clear to me that we should allow these sysfs
>> entries to be world readable. Given /proc/iomem now hides physical
>> address information from non-root we at least need to be careful not
>> to undo that with new sysfs HMAT attributes.
>
> This enabling does not expose any physical addresses to userspace.  It only
> provides performance numbers from the HMAT and associates them with existing
> NUMA nodes.  Are you worried that exposing performance numbers to non-root
> users via sysfs poses a security risk?

It's an information disclosure that it's not clear we need to make to
non-root processes.

I'm more worried about userspace growing dependencies on the absolute
numbers when those numbers can change from platform to platform.
Differentiated memory on one platform may be the common memory pool on
another.

To me this has parallels with storage device hinting, where
specifications like T10 have a complex enumeration of all the
performance hints that can be passed to the device, but the Linux
enabling effort aims for a sanitized set of relative hints that make
sense. It's more flexible if userspace specifies a relative intent
rather than an absolute performance target. Putting all the HMAT
information into sysfs gives userspace more information than it could
possibly do anything reasonable with, at least outside of specialized
apps that are hand-tuned for a given hardware platform.
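
As a purely hypothetical sketch of that difference (the paths are made up
and a 'memory_perf_class' attribute does not exist anywhere), today's
proposal has userspace reading absolute values:

  $ cat /sys/devices/hmat/mem_tgt2/local_init/read_bw_MBps  # illustrative path
  40960

whereas a relative indicator could look more like:

  $ cat /sys/devices/system/node/node2/memory_perf_class    # hypothetical
  higher-bandwidth-than-system-ram

so applications ask for "faster than System RAM" instead of matching on a
magic bandwidth number.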

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 23:57                       ` Dan Williams
@ 2017-12-23  1:14                         ` Rafael J. Wysocki
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael J. Wysocki @ 2017-12-23  1:14 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ross Zwisler, Brice Goglin, Matthew Wilcox, Dave Hansen,
	Michal Hocko, linux-kernel, Anaczkowski, Lukasz, Box, David E,
	Kogut, Jaroslaw, Koss, Marcin, Koziej, Artur, Lahtinen, Joonas,
	Moore, Robert, Nachimuthu, Murugasamy, Odzioba, Lukasz,
	Rafael J. Wysocki, Rafael J. Wysocki, Schmauss, Erik, Verma,
	Vishal L, Zheng, Lv, Andrew Morton, Balbir Singh, Jerome Glisse,
	John Hubbard, Len Brown, Tim Chen, devel, Linux ACPI, Linux MM,
	linux-nvdimm, Linux API, linuxppc-dev

On Sat, Dec 23, 2017 at 12:57 AM, Dan Williams <dan.j.williams@intel.com> wrote:
> On Fri, Dec 22, 2017 at 3:22 PM, Ross Zwisler
> <ross.zwisler@linux.intel.com> wrote:
>> On Fri, Dec 22, 2017 at 02:53:42PM -0800, Dan Williams wrote:
>>> On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.goglin@gmail.com> wrote:
>>> > Le 20/12/2017 à 23:41, Ross Zwisler a écrit :
>>> [..]
>>> > Hello
>>> >
>>> > I can confirm that HPC runtimes are going to use these patches (at least
>>> > all runtimes that use hwloc for topology discovery, but that's the vast
>>> > majority of HPC anyway).
>>> >
>>> > We really didn't like KNL exposing a hacky SLIT table [1]. We had to
>>> > explicitly detect that specific crazy table to find out which NUMA nodes
>>> > were local to which cores, and to find out which NUMA nodes were
>>> > HBM/MCDRAM or DDR. And then we had to hide the SLIT values from the
>>> > application because the reported latencies didn't match reality. Quite
>>> > annoying.
>>> >
>>> > With Ross' patches, we can easily get what we need:
>>> > * which NUMA nodes are local to which CPUs? /sys/devices/system/node/
>>> > can only report a single local node per CPU (doesn't work for KNL and
>>> > upcoming architectures with HBM+DDR+...)
>>> > * which NUMA nodes are slow/fast (for both bandwidth and latency)
>>> > And we can still look at SLIT under /sys/devices/system/node if really
>>> > needed.
>>> >
>>> > And of course having this in sysfs is much better than parsing ACPI
>>> > tables that are only accessible to root :)
>>>
>>> On this point, it's not clear to me that we should allow these sysfs
>>> entries to be world readable. Given /proc/iomem now hides physical
>>> address information from non-root we at least need to be careful not
>>> to undo that with new sysfs HMAT attributes.
>>
>> This enabling does not expose any physical addresses to userspace.  It only
>> provides performance numbers from the HMAT and associates them with existing
>> NUMA nodes.  Are you worried that exposing performance numbers to non-root
>> users via sysfs poses a security risk?
>
> It's an information disclosure that it's not clear we need to make to
> non-root processes.
>
> I'm more worried about userspace growing dependencies on the absolute
> numbers when those numbers can change from platform to platform.
> Differentiated memory on one platform may be the common memory pool on
> another.
>
> To me this has parallels with storage device hinting, where
> specifications like T10 have a complex enumeration of all the
> performance hints that can be passed to the device, but the Linux
> enabling effort aims for a sanitized set of relative hints that make
> sense. It's more flexible if userspace specifies a relative intent
> rather than an absolute performance target. Putting all the HMAT
> information into sysfs gives userspace more information than it could
> possibly do anything reasonable with, at least outside of specialized
> apps that are hand-tuned for a given hardware platform.

That's a valid point IMO.

It is sort of tempting to expose everything to user space verbatim,
especially early in the enabling process when the kernel has not yet
found suitable ways to utilize the given information, but the very act
of exposing it may affect what can be done with it in the future.

User space interfaces need to stay around and be supported forever, at
least potentially, so adding every one of them is a serious
commitment.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 17:13   ` Dave Hansen
@ 2017-12-23  5:14     ` Anshuman Khandual
  0 siblings, 0 replies; 42+ messages in thread
From: Anshuman Khandual @ 2017-12-23  5:14 UTC (permalink / raw)
  To: Dave Hansen, Anshuman Khandual, Ross Zwisler, linux-kernel
  Cc: Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw, Koss, Marcin,
	Koziej, Artur, Lahtinen, Joonas, Moore, Robert, Nachimuthu,
	Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Jerome Glisse, John Hubbard, Len Brown, Tim Chen, devel,
	linux-acpi, linux-mm, linux-nvdimm

On 12/22/2017 10:43 PM, Dave Hansen wrote:
> On 12/21/2017 07:09 PM, Anshuman Khandual wrote:
>> I had presented a proposal for NUMA redesign at the Plumbers Conference this
>> year where various memory devices with different kinds of memory attributes
>> can be represented in the kernel and be used explicitly from user space.
>> Here is the link to the proposal if you feel interested. The proposal is
>> very intrusive, and I don't have an RFC for it yet for discussion here.
> I think that's the best reason to "re-use NUMA" for this: it's _not_
> intrusive.
> 
> Also, from an x86 perspective, these HMAT systems *will* be out there.
> Old versions of Linux *will* see different types of memory as separate
> NUMA nodes.  So, if we are going to do something different, it's going
> to be interesting to un-teach those systems about using the NUMA APIs
> for this.  That ship has sailed.

I understand the need to fetch these details from ACPI/DT so that
applications can target these distinct memory-only NUMA nodes.
This can be done by parsing platform-specific values from the
/proc/acpi/ or /proc/device-tree/ interfaces, which could be a
short-term solution until the NUMA redesign is figured out.
But adding generic devices like "hmat" to the /sys/devices/
path, which will then be locked in for good, seems problematic.
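
For reference, root can already dump and decode the raw table along these
lines (a sketch, assuming an acpica-tools build new enough to know the
ACPI 6.2 HMAT):

  # cp /sys/firmware/acpi/tables/HMAT hmat.aml
  # iasl -d hmat.aml    # writes a decoded hmat.dsl in the current directory

which is the kind of root-only, platform-specific parsing I mean as a
short-term solution.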
   

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 22:13   ` Ross Zwisler
@ 2017-12-23  6:56     ` Anshuman Khandual
  0 siblings, 0 replies; 42+ messages in thread
From: Anshuman Khandual @ 2017-12-23  6:56 UTC (permalink / raw)
  To: Ross Zwisler, Anshuman Khandual
  Cc: linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

On 12/23/2017 03:43 AM, Ross Zwisler wrote:
> On Fri, Dec 22, 2017 at 08:39:41AM +0530, Anshuman Khandual wrote:
>> On 12/14/2017 07:40 AM, Ross Zwisler wrote:
>>> ==== Quick Summary ====
>>>
>>> Platforms exist today which have multiple types of memory attached to a
>>> single CPU.  These disparate memory ranges have some characteristics in
>>> common, such as CPU cache coherence, but they can have wide ranges of
>>> performance both in terms of latency and bandwidth.
>>
>> Right.
>>
>>>
>>> For example, consider a system that contains persistent memory, standard
>>> DDR memory and High Bandwidth Memory (HBM), all attached to the same CPU.
>>> There could potentially be an order of magnitude or more difference in
>>> performance between the slowest and fastest memory attached to that CPU.
>>
>> Right.
>>
>>>
>>> With the current Linux code NUMA nodes are CPU-centric, so all the memory
>>> attached to a given CPU will be lumped into the same NUMA node.  This makes
>>> it very difficult for userspace applications to understand the performance
>>> of different memory ranges on a given CPU.
>>
>> Right, but that might require fundamental changes to the NUMA representation.
>> Plugging those memories in as separate NUMA nodes, identifying them through
>> sysfs and allocating from them through mbind() seems like a short-term solution.
>>
>> Though if we decide to go in this direction, a sysfs interface or something
>> similar is required to enumerate memory properties.
> 
> Yep, and this patch series is trying to be the sysfs interface that is
> required to enumerate the memory properties.  :)  It's a certainty that we
> will have memory-only NUMA nodes, at least on platforms that support ACPI.
> Memory-only proximity domains (which Linux turns into memory-only NUMA nodes)
> are explicitly supported with the introduction of the HMAT in ACPI 6.2.

Yeah, even POWER platforms can have memory-only NUMA nodes.

> 
> It also turns out that the existing memory management code already deals with
> them just fine - you see this with my hmat_examples setup:
> 
> https://github.com/rzwisler/hmat_examples
> 
> Both configurations created by this repo create memory-only NUMA nodes, even
> with upstream kernels.  My patches don't change that, they just provide a
> sysfs representation of the HMAT so users can discover the memory that exists
> in the system.

Once it's a NUMA node, everything will work as is from the MM interface
point of view. But the point is how we export these properties to
user space. My only concern is that we not do it in a way which will
be locked in before first going through the NUMA redesign for these new
attribute-based memories, that's all.

> 
>>> We solve this issue by providing userspace with performance information on
>>> individual memory ranges.  This performance information is exposed via
>>> sysfs:
>>>
>>>   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
>>>   mem_tgt2/firmware_id:1
>>>   mem_tgt2/is_cached:0
>>>   mem_tgt2/local_init/read_bw_MBps:40960
>>>   mem_tgt2/local_init/read_lat_nsec:50
>>>   mem_tgt2/local_init/write_bw_MBps:40960
>>>   mem_tgt2/local_init/write_lat_nsec:50
>>
>> I might have missed discussions from earlier versions, but why do we have
>> this kind of "source --> target" model? Will we enlist properties for all
>> possible "source --> target" pairs on the system? Right now it shows only
>> bandwidth and latency properties; can it accommodate other properties
>> as well in the future?
> 
> The initiator/target model is useful in preventing us from needing a
> MAX_NUMA_NODES x MAX_NUMA_NODES sized table for each performance attribute.  I
> talked about it a little more here:

That makes it even more complex. Not only do we have a memory attribute
like bandwidth specific to the range, we are also exporting its values
as seen from different CPU nodes. It's again kind of a NUMA distance
table being exported in the generic sysfs path under /sys/devices/.
The problem is that possible future memory attributes like
'reliability', 'density' and 'power consumption' might not need a
"source --> destination" kind of model, as they don't change based on
which CPU node is accessing them.

> 
> https://lists.01.org/pipermail/linux-nvdimm/2017-December/013654.html
> 
>>> This allows applications to easily find the memory that they want to use.
>>> We expect that the existing NUMA APIs will be enhanced to use this new
>>> information so that applications can continue to use them to select their
>>> desired memory.
>>
>> I had presented a proposal for NUMA redesign at the Plumbers Conference this
>> year where various memory devices with different kinds of memory attributes
>> can be represented in the kernel and be used explicitly from user space.
>> Here is the link to the proposal if you feel interested. The proposal is
>> very intrusive, and I don't have an RFC for it yet for discussion here.
>>
>> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/Hierarchical_NUMA_Design_Plumbers_2017.pdf
> 
> I'll take a look, but my first reaction is that I agree with Dave that it
> seems hard to re-teach systems a new NUMA scheme.  This patch series doesn't
> attempt to do that - it is very unintrusive and only informs users about the
> memory-only NUMA nodes that will already exist in their ACPI-based systems.

But while not trying to address how NUMA should treat these
heterogeneous memory-attribute nodes, the patch series might
be locking down sysfs interfaces which may not be as
appropriate or extensible once NUMA is redesigned in the future.
Sure, pass the information on to user space, but not through
generic interfaces like /sys/devices.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 22:31   ` Ross Zwisler
@ 2017-12-25  2:05     ` Liubo(OS Lab)
  0 siblings, 0 replies; 42+ messages in thread
From: Liubo(OS Lab) @ 2017-12-25  2:05 UTC (permalink / raw)
  To: Ross Zwisler, Anshuman Khandual
  Cc: linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Brice Goglin, Dan Williams,
	Dave Hansen, Jerome Glisse, John Hubbard, Len Brown, Tim Chen,
	devel, linux-acpi, linux-mm, linux-nvdimm

On 2017/12/23 6:31, Ross Zwisler wrote:
> On Fri, Dec 22, 2017 at 08:39:41AM +0530, Anshuman Khandual wrote:
>> On 12/14/2017 07:40 AM, Ross Zwisler wrote:
> <>
>>> We solve this issue by providing userspace with performance information on
>>> individual memory ranges.  This performance information is exposed via
>>> sysfs:
>>>
>>>   # grep . mem_tgt2/* mem_tgt2/local_init/* 2>/dev/null
>>>   mem_tgt2/firmware_id:1
>>>   mem_tgt2/is_cached:0
>>>   mem_tgt2/local_init/read_bw_MBps:40960
>>>   mem_tgt2/local_init/read_lat_nsec:50
>>>   mem_tgt2/local_init/write_bw_MBps:40960
>>>   mem_tgt2/local_init/write_lat_nsec:50
> <>
>> Will we enlist properties for all possible "source --> target" pairs on the system?
> 
> Nope, just 'local' initiator/target pairs.  I talk about the reasoning for
> this in the cover letter for patch 3:
> 
> https://lists.01.org/pipermail/linux-nvdimm/2017-December/013574.html
> 
>> Right now it shows only bandwidth and latency properties; can it accommodate
>> other properties as well in the future?
> 
> We also have an 'is_cached' attribute for the memory targets if they are
> involved in a caching hierarchy, but right now those are all the things we
> expose.  We can potentially expose whatever we want that is present in the
> HMAT, but those seemed like a good start.
> 
> I noticed that in your presentation you had some other examples of attributes
> you cared about:
> 
>  * reliability
>  * power consumption
>  * density
> 
> The HMAT doesn't provide this sort of information at present, but we
> could/would add them to sysfs if the HMAT ever grew support for them.
> 
>>> This allows applications to easily find the memory that they want to use.
>>> We expect that the existing NUMA APIs will be enhanced to use this new
>>> information so that applications can continue to use them to select their
>>> desired memory.
>>
>> I had presented a proposal for NUMA redesign at the Plumbers Conference this
>> year where various memory devices with different kinds of memory attributes
>> can be represented in the kernel and be used explicitly from user space.
>> Here is the link to the proposal if you feel interested. The proposal is
>> very intrusive, and I don't have an RFC for it yet for discussion here.
>>
>> https://linuxplumbersconf.org/2017/ocw//system/presentations/4656/original/Hierarchical_NUMA_Design_Plumbers_2017.pdf
>>
>> The problem is, designing the sysfs interface for memory attribute detection
>> from user space without first thinking about redesigning NUMA for
>> heterogeneous memory may not be a good idea. I will look into this further.
> 
> I took another look at your presentation, and overall I think that if/when a
> NUMA redesign like this takes place ACPI systems with HMAT tables will be able
> to participate.  But I think we are probably a ways away from that, and like I

I'm afraid not; cache-coherent buses like CCIX/OpenCAPI are coming out soon,
not to mention Systems-on-Chip that already have DDR, HBM, CPUs and
accelerators linked by an internal bus.

> said in my previous mail ACPI systems with memory-only NUMA nodes are going to
> exist and need to be supported with the current NUMA scheme.  Hence I don't

And not only memory-only nodes: accelerators can also be masters, like CPUs.

> think that this patch series conflicts with your proposal.

I don't see a conflict either, but perhaps we should think about a longer-term
solution that covers more situations/platforms.
Anshuman's proposal is really a good starting point for us.

Cheers,
Bob Liu

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-22 22:53                   ` Dan Williams
  2017-12-22 23:22                     ` Ross Zwisler
@ 2017-12-27  9:10                     ` Brice Goglin
  2017-12-30  6:58                       ` Matthew Wilcox
  1 sibling, 1 reply; 42+ messages in thread
From: Brice Goglin @ 2017-12-27  9:10 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ross Zwisler, Matthew Wilcox, Dave Hansen, Michal Hocko,
	linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, Linux ACPI, Linux MM, linux-nvdimm,
	Linux API, linuxppc-dev

Le 22/12/2017 à 23:53, Dan Williams a écrit :
> On Thu, Dec 21, 2017 at 12:31 PM, Brice Goglin <brice.goglin@gmail.com> wrote:
>> Le 20/12/2017 à 23:41, Ross Zwisler a écrit :
> [..]
>> Hello
>>
>> I can confirm that HPC runtimes are going to use these patches (at least
>> all runtimes that use hwloc for topology discovery, but that's the vast
>> majority of HPC anyway).
>>
>> We really didn't like KNL exposing a hacky SLIT table [1]. We had to
>> explicitly detect that specific crazy table to find out which NUMA nodes
>> were local to which cores, and to find out which NUMA nodes were
>> HBM/MCDRAM or DDR. And then we had to hide the SLIT values from the
>> application because the reported latencies didn't match reality. Quite
>> annoying.
>>
>> With Ross' patches, we can easily get what we need:
>> * which NUMA nodes are local to which CPUs? /sys/devices/system/node/
>> can only report a single local node per CPU (doesn't work for KNL and
>> upcoming architectures with HBM+DDR+...)
>> * which NUMA nodes are slow/fast (for both bandwidth and latency)
>> And we can still look at SLIT under /sys/devices/system/node if really
>> needed.
>>
>> And of course having this in sysfs is much better than parsing ACPI
>> tables that are only accessible to root :)
> On this point, it's not clear to me that we should allow these sysfs
> entries to be world readable. Given /proc/iomem now hides physical
> address information from non-root we at least need to be careful not
> to undo that with new sysfs HMAT attributes. Once you need to be root
> for this info, is parsing binary HMAT vs sysfs a blocker for the HPC
> use case?

I don't think it would be a blocker.

> Perhaps we can enlist /proc/iomem or a similar enumeration interface
> to tell userspace the NUMA node and whether the kernel thinks it has
> better or worse performance characteristics relative to base
> system-RAM, i.e. new IORES_DESC_* values. I'm worried that if we start
> publishing absolute numbers in sysfs userspace will default to looking
> for specific magic numbers in sysfs vs asking the kernel for memory
> that has performance characteristics relative to base "System RAM". In
> other words the absolute performance information that the HMAT
> publishes is useful to the kernel, but it's not clear that userspace
> needs that vs a relative indicator for making NUMA node preference
> decisions.

Some HPC users will benchmark the machine to discover actual
performance numbers anyway.
However, most users won't do this. They will want to know the relative
performance of different nodes. If you normalize HMAT values by dividing
them by system-RAM values, that's likely OK. If you just say "that
node is faster than system RAM", it's not precise enough.
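
As a minimal sketch of the normalization I mean (the paths and numbers are
made up; mem_tgt1 is assumed to be plain DDR and mem_tgt2 the faster memory):

  $ ddr=$(cat /sys/devices/hmat/mem_tgt1/local_init/read_bw_MBps)  # e.g. 40960
  $ hbm=$(cat /sys/devices/hmat/mem_tgt2/local_init/read_bw_MBps)  # e.g. 327680
  $ echo "$(( hbm * 100 / ddr ))% of system-RAM read bandwidth"
  800% of system-RAM read bandwidth

That is, reporting "800% of system-RAM bandwidth" is useful; reporting only
"faster than system RAM" is not.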

Brice

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-27  9:10                     ` Brice Goglin
@ 2017-12-30  6:58                       ` Matthew Wilcox
  2017-12-30  9:19                         ` Brice Goglin
  0 siblings, 1 reply; 42+ messages in thread
From: Matthew Wilcox @ 2017-12-30  6:58 UTC (permalink / raw)
  To: Brice Goglin
  Cc: Dan Williams, Ross Zwisler, Dave Hansen, Michal Hocko,
	linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, Linux ACPI, Linux MM, linux-nvdimm,
	Linux API, linuxppc-dev

On Wed, Dec 27, 2017 at 10:10:34AM +0100, Brice Goglin wrote:
> > Perhaps we can enlist /proc/iomem or a similar enumeration interface
> > to tell userspace the NUMA node and whether the kernel thinks it has
> > better or worse performance characteristics relative to base
> > system-RAM, i.e. new IORES_DESC_* values. I'm worried that if we start
> > publishing absolute numbers in sysfs userspace will default to looking
> > for specific magic numbers in sysfs vs asking the kernel for memory
> > that has performance characteristics relative to base "System RAM". In
> > other words the absolute performance information that the HMAT
> > publishes is useful to the kernel, but it's not clear that userspace
> > needs that vs a relative indicator for making NUMA node preference
> > decisions.
> 
> Some HPC users will benchmark the machine to discover actual
> performance numbers anyway.
> However, most users won't do this. They will want to know the relative
> performance of different nodes. If you normalize HMAT values by dividing
> them by system-RAM values, that's likely OK. If you just say "that
> node is faster than system RAM", it's not precise enough.

So "this memory has 800% bandwidth of normal" and "this memory has 70%
bandwidth of normal"?

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 0/3] create sysfs representation of ACPI HMAT
  2017-12-30  6:58                       ` Matthew Wilcox
@ 2017-12-30  9:19                         ` Brice Goglin
  0 siblings, 0 replies; 42+ messages in thread
From: Brice Goglin @ 2017-12-30  9:19 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Dan Williams, Ross Zwisler, Dave Hansen, Michal Hocko,
	linux-kernel, Anaczkowski, Lukasz, Box, David E, Kogut, Jaroslaw,
	Koss, Marcin, Koziej, Artur, Lahtinen, Joonas, Moore, Robert,
	Nachimuthu, Murugasamy, Odzioba, Lukasz, Rafael J. Wysocki,
	Rafael J. Wysocki, Schmauss, Erik, Verma, Vishal L, Zheng, Lv,
	Andrew Morton, Balbir Singh, Jerome Glisse, John Hubbard,
	Len Brown, Tim Chen, devel, Linux ACPI, Linux MM, linux-nvdimm,
	Linux API, linuxppc-dev



Le 30/12/2017 à 07:58, Matthew Wilcox a écrit :
> On Wed, Dec 27, 2017 at 10:10:34AM +0100, Brice Goglin wrote:
>>> Perhaps we can enlist /proc/iomem or a similar enumeration interface
>>> to tell userspace the NUMA node and whether the kernel thinks it has
>>> better or worse performance characteristics relative to base
>>> system-RAM, i.e. new IORES_DESC_* values. I'm worried that if we start
>>> publishing absolute numbers in sysfs userspace will default to looking
>>> for specific magic numbers in sysfs vs asking the kernel for memory
>>> that has performance characteristics relative to base "System RAM". In
>>> other words the absolute performance information that the HMAT
>>> publishes is useful to the kernel, but it's not clear that userspace
>>> needs that vs a relative indicator for making NUMA node preference
>>> decisions.
>> Some HPC users will benchmark the machine to discover actual
>> performance numbers anyway.
>> However, most users won't do this. They will want to know the relative
>> performance of different nodes. If you normalize HMAT values by dividing
>> them by system-RAM values, that's likely OK. If you just say "that
>> node is faster than system RAM", it's not precise enough.
> So "this memory has 800% bandwidth of normal" and "this memory has 70%
> bandwidth of normal"?

I guess that would work.
Brice

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2017-12-30  9:19 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-14  2:10 [PATCH v3 0/3] create sysfs representation of ACPI HMAT Ross Zwisler
2017-12-14  2:10 ` [PATCH v3 1/3] acpi: HMAT support in acpi_parse_entries_array() Ross Zwisler
2017-12-15  0:49   ` Rafael J. Wysocki
2017-12-15  1:10   ` Dan Williams
2017-12-16  1:53     ` Rafael J. Wysocki
2017-12-16  1:57       ` Dan Williams
2017-12-16  2:15         ` Rafael J. Wysocki
2017-12-14  2:10 ` [PATCH v3 2/3] hmat: add heterogeneous memory sysfs support Ross Zwisler
2017-12-15  0:52   ` Rafael J. Wysocki
2017-12-15 20:53     ` Ross Zwisler
2017-12-14  2:10 ` [PATCH v3 3/3] hmat: add performance attributes Ross Zwisler
2017-12-14 13:00 ` [PATCH v3 0/3] create sysfs representation of ACPI HMAT Michal Hocko
2017-12-18 20:35   ` Ross Zwisler
2017-12-20 16:41     ` Ross Zwisler
2017-12-21 13:18       ` Michal Hocko
2017-12-20 18:19     ` Matthew Wilcox
2017-12-20 20:22       ` Dave Hansen
2017-12-20 21:16         ` Matthew Wilcox
2017-12-20 21:24           ` Ross Zwisler
2017-12-20 22:29             ` Dan Williams
2017-12-20 22:41               ` Ross Zwisler
2017-12-21 20:31                 ` Brice Goglin
2017-12-22 22:53                   ` Dan Williams
2017-12-22 23:22                     ` Ross Zwisler
2017-12-22 23:57                       ` Dan Williams
2017-12-23  1:14                         ` Rafael J. Wysocki
2017-12-27  9:10                     ` Brice Goglin
2017-12-30  6:58                       ` Matthew Wilcox
2017-12-30  9:19                         ` Brice Goglin
2017-12-20 21:13       ` Ross Zwisler
2017-12-21  1:41         ` Elliott, Robert (Persistent Memory)
2017-12-22 21:46           ` Ross Zwisler
2017-12-21 12:50       ` Michael Ellerman
2017-12-22  3:09 ` Anshuman Khandual
2017-12-22 10:31   ` Kogut, Jaroslaw
2017-12-22 14:37     ` Anshuman Khandual
2017-12-22 17:13   ` Dave Hansen
2017-12-23  5:14     ` Anshuman Khandual
2017-12-22 22:13   ` Ross Zwisler
2017-12-23  6:56     ` Anshuman Khandual
2017-12-22 22:31   ` Ross Zwisler
2017-12-25  2:05     ` Liubo(OS Lab)
