* [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs
@ 2020-01-17 13:37 roman.sudarikov
  2020-01-17 13:37 ` [PATCH v4 1/2] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping roman.sudarikov
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: roman.sudarikov @ 2020-01-17 13:37 UTC (permalink / raw)
  To: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, ak, kan.liang, gregkh
  Cc: alexander.antonov, roman.sudarikov

From: Roman Sudarikov <roman.sudarikov@linux.intel.com>

The previous version can be found at:
v3: https://lkml.kernel.org/r/20200113135444.12027-1-roman.sudarikov@linux.intel.com

Changes in this revision are:
v3 -> v4:
- Addressed comments from Greg Kroah-Hartman:
  1. Reworked handling of the newly introduced attribute.
  2. The required Documentation update is expected in a follow-up patchset.


The previous version can be found at:
v2: https://lkml.kernel.org/r/20191210091451.6054-1-roman.sudarikov@linux.intel.com

Changes in this revision are:
v2 -> v3:
- Addressed comments from Peter and Kan

The previous version can be found at:
v1: https://lkml.kernel.org/r/20191126163630.17300-1-roman.sudarikov@linux.intel.com

Changes in this revision are:
v1 -> v2:
- Fixed process-related issues;
- This patch set includes kernel support for IIO stack to PMON mapping;
- Stephane raised concerns regarding the output format, which may require
  code changes in the user-space part of the feature only. We will continue
  the output format discussion in the context of the user-space update.

The Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
significant changes to the integrated I/O (IIO) architecture. The new
design introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port,
MCP-Link or various built-in accelerators. IIO PMON blocks allow
concurrent monitoring of I/O flows for up to four x4 bifurcated ports
within each IIO stack.

Software is supposed to program the required perf counters within each IIO
stack and gather performance data. The tricky part is that IIO PMON
reports data per IIO stack, but users have no idea what IIO stacks are -
they only know the devices which are connected to the platform.

Understanding the IIO stack concept in order to find which IIO stack a
particular I/O device is connected to, or to identify the IIO PMON block
to program for monitoring a specific IIO stack, assumes a lot of implicit
knowledge about the given Intel server platform architecture.

This patch set introduces:
1. An infrastructure for exposing an Uncore unit to Uncore PMON mapping
   through a sysfs backend;
2. A new --iiostat mode in perf stat to provide I/O performance metrics
   per I/O device.

Usage examples:

1. List all devices below IIO stacks
  ./perf stat --iiostat=show

Sample output w/o libpci:

    S0-RootPort0-uncore_iio_0<00:00.0>
    S1-RootPort0-uncore_iio_0<81:00.0>
    S0-RootPort1-uncore_iio_1<18:00.0>
    S1-RootPort1-uncore_iio_1<86:00.0>
    S1-RootPort1-uncore_iio_1<88:00.0>
    S0-RootPort2-uncore_iio_2<3d:00.0>
    S1-RootPort2-uncore_iio_2<af:00.0>
    S1-RootPort3-uncore_iio_3<da:00.0>

Sample output with libpci:

    S0-RootPort0-uncore_iio_0<00:00.0 Sky Lake-E DMI3 Registers>
    S1-RootPort0-uncore_iio_0<81:00.0 Ethernet Controller X710 for 10GbE SFP+>
    S0-RootPort1-uncore_iio_1<18:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
    S1-RootPort1-uncore_iio_1<86:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
    S1-RootPort1-uncore_iio_1<88:00.0 Ethernet Controller XL710 for 40GbE QSFP+>
    S0-RootPort2-uncore_iio_2<3d:00.0 Ethernet Connection X722 for 10GBASE-T>
    S1-RootPort2-uncore_iio_2<af:00.0 Omni-Path HFI Silicon 100 Series [discrete]>
    S1-RootPort3-uncore_iio_3<da:00.0 NVMe Datacenter SSD [Optane]>

2. Collect metrics for all I/O devices below IIO stack

  ./perf stat --iiostat -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
    357708+0 records in
    357707+0 records out
    375083606016 bytes (375 GB, 349 GiB) copied, 215.381 s, 1.7 GB/s

  Performance counter stats for 'system wide':

     device             Inbound Read(MB)    Inbound Write(MB)    Outbound Read(MB)   Outbound Write(MB)
    00:00.0                    0                    0                    0                    0
    81:00.0                    0                    0                    0                    0
    18:00.0                    0                    0                    0                    0
    86:00.0                    0                    0                    0                    0
    88:00.0                    0                    0                    0                    0
    3b:00.0                    3                    0                    0                    0
    3c:03.0                    3                    0                    0                    0
    3d:00.0                    3                    0                    0                    0
    af:00.0                    0                    0                    0                    0
    da:00.0               358559                   44                    0                   22

    215.383783574 seconds time elapsed


3. Collect metrics for a comma-separated list of I/O devices

  ./perf stat --iiostat=da:00.0 -- dd if=/dev/zero of=/dev/nvme0n1 bs=1M oflag=direct
    381555+0 records in
    381554+0 records out
    400088457216 bytes (400 GB, 373 GiB) copied, 374.044 s, 1.1 GB/s

  Performance counter stats for 'system wide':

     device             Inbound Read(MB)    Inbound Write(MB)    Outbound Read(MB)   Outbound Write(MB)
    da:00.0               382462                   47                    0                   23

    374.045775505 seconds time elapsed

Roman Sudarikov (2):
  perf x86: Infrastructure for exposing an Uncore unit to PMON mapping
  perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server
    platform

 arch/x86/events/intel/uncore.c       |  46 ++++++++
 arch/x86/events/intel/uncore.h       |   6 +
 arch/x86/events/intel/uncore_snbep.c | 160 +++++++++++++++++++++++++++
 3 files changed, 212 insertions(+)


base-commit: 219d54332a09e8d8741c1e1982f5eae56099de85
-- 
2.19.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v4 1/2] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping
  2020-01-17 13:37 [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs roman.sudarikov
@ 2020-01-17 13:37 ` roman.sudarikov
  2020-01-17 14:16   ` Greg KH
  2020-01-17 13:37 ` [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform roman.sudarikov
  2020-01-17 14:14 ` [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs Greg KH
  2 siblings, 1 reply; 17+ messages in thread
From: roman.sudarikov @ 2020-01-17 13:37 UTC (permalink / raw)
  To: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, ak, kan.liang, gregkh
  Cc: alexander.antonov, roman.sudarikov

From: Roman Sudarikov <roman.sudarikov@linux.intel.com>

The Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
significant changes to the integrated I/O (IIO) architecture. The new
design introduces IIO stacks, which are responsible for managing traffic
between the PCIe domain and the Mesh domain. Each IIO stack has its own
PMON block and can handle either a DMI port, an x16 PCIe root port,
MCP-Link or various built-in accelerators. IIO PMON blocks allow
concurrent monitoring of I/O flows for up to four x4 bifurcated ports
within each IIO stack.

Software is supposed to program the required perf counters within each IIO
stack and gather performance data. The tricky part is that IIO PMON
reports data per IIO stack, but users have no idea what IIO stacks are -
they only know the devices which are connected to the platform.

Understanding the IIO stack concept in order to find which IIO stack a
particular I/O device is connected to, or to identify the IIO PMON block
to program for monitoring a specific IIO stack, assumes a lot of implicit
knowledge about the given Intel server platform architecture.

Usage example:
    /sys/devices/uncore_<type>_<pmu_idx>/mapping

Each Uncore unit type, by its nature, can be mapped to its own context,
for example:
1. CHA - each uncore_cha_<pmu_idx> is assigned to manage a distinct slice
   of LLC capacity;
2. UPI - each uncore_upi_<pmu_idx> is assigned to manage one link of Intel
   UPI Subsystem;
3. IIO - each uncore_iio_<pmu_idx> is assigned to manage one stack of the
   IIO module;
4. IMC - each uncore_imc_<pmu_idx> is assigned to manage one channel of
   Memory Controller.

Implementation details:
Two callbacks are added to struct intel_uncore_type to discover and map
Uncore units to PMONs:
    int (*get_topology)(struct intel_uncore_type *type, int max_dies)
    int (*set_mapping)(struct intel_uncore_type *type, int max_dies)

Details of IIO Uncore unit mapping to IIO PMON:
Each IIO stack is either a DMI port, an x16 PCIe root port, MCP-Link or
various built-in accelerators. For the Uncore IIO unit type, the mapping
file holds the bus numbers of the devices which can be monitored by that
IIO PMON block on each die.
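
Purely as an illustration of the callback contract (the real Skylake-SP
implementation is added in patch 2/2), a platform-specific uncore type is
expected to wire the two callbacks up along these lines; the "foo_*" names
are hypothetical:

    static int foo_iio_get_topology(struct intel_uncore_type *type, int max_dies);
    static int foo_iio_set_mapping(struct intel_uncore_type *type, int max_dies);

    static struct intel_uncore_type foo_uncore_iio = {
            .name           = "iio",
            /* ... counters, ops and format_group as usual ... */
            .get_topology   = foo_iio_get_topology,
            .set_mapping    = foo_iio_set_mapping,
    };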

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Alexander Antonov <alexander.antonov@intel.com>
Signed-off-by: Alexander Antonov <alexander.antonov@intel.com>
Signed-off-by: Roman Sudarikov <roman.sudarikov@linux.intel.com>
---
 arch/x86/events/intel/uncore.c | 46 ++++++++++++++++++++++++++++++++++
 arch/x86/events/intel/uncore.h |  6 +++++
 2 files changed, 52 insertions(+)

diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 86467f85c383..55201bfde2c8 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -825,6 +825,44 @@ static const struct attribute_group uncore_pmu_attr_group = {
 	.attrs = uncore_pmu_attrs,
 };
 
+static ssize_t mapping_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct intel_uncore_pmu *pmu = dev_get_drvdata(dev);
+
+	return snprintf(buf, PAGE_SIZE - 1, "%s\n", pmu->mapping);
+}
+static DEVICE_ATTR_RO(mapping);
+
+static struct attribute *mapping_attrs[] = {
+	&dev_attr_mapping.attr,
+	NULL,
+};
+
+static struct attribute_group mapping_group = {
+	.attrs = mapping_attrs,
+};
+
+static umode_t
+not_visible(struct kobject *kobj, struct attribute *attr, int i)
+{
+	return 0;
+}
+
+static const struct attribute_group *attr_update[] = {
+	&mapping_group,
+	NULL,
+};
+
+static void uncore_platform_mapping(struct intel_uncore_type *t)
+{
+	if (t->get_topology && t->set_mapping &&
+	    !t->get_topology(t, max_dies) && !t->set_mapping(t, max_dies))
+		mapping_group.is_visible = NULL;
+	else
+		mapping_group.is_visible = not_visible;
+}
+
 static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
 {
 	int ret;
@@ -849,6 +887,8 @@ static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
 		pmu->pmu.attr_groups = pmu->type->attr_groups;
 	}
 
+	pmu->pmu.attr_update = attr_update;
+
 	if (pmu->type->num_boxes == 1) {
 		if (strlen(pmu->type->name) > 0)
 			sprintf(pmu->name, "uncore_%s", pmu->type->name);
@@ -859,6 +899,12 @@ static int uncore_pmu_register(struct intel_uncore_pmu *pmu)
 			pmu->pmu_idx);
 	}
 
+	/*
+	 * Exposing mapping of Uncore units to corresponding Uncore PMUs
+	 * through /sys/devices/uncore_<type>_<idx>/mapping
+	 */
+	uncore_platform_mapping(pmu->type);
+
 	ret = perf_pmu_register(&pmu->pmu, pmu->name, -1);
 	if (!ret)
 		pmu->registered = true;
diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
index bbfdaa720b45..1b3891302f6d 100644
--- a/arch/x86/events/intel/uncore.h
+++ b/arch/x86/events/intel/uncore.h
@@ -73,6 +73,11 @@ struct intel_uncore_type {
 	struct freerunning_counters *freerunning;
 	const struct attribute_group *attr_groups[4];
 	struct pmu *pmu; /* for custom pmu ops */
+	u64 *topology;
+	/* finding Uncore units */
+	int (*get_topology)(struct intel_uncore_type *type, int max_dies);
+	/* mapping Uncore units to PMON ranges */
+	int (*set_mapping)(struct intel_uncore_type *type, int max_dies);
 };
 
 #define pmu_group attr_groups[0]
@@ -99,6 +104,7 @@ struct intel_uncore_pmu {
 	int				pmu_idx;
 	int				func_id;
 	bool				registered;
+	char				*mapping;
 	atomic_t			activeboxes;
 	struct intel_uncore_type	*type;
 	struct intel_uncore_box		**boxes;
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 13:37 [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs roman.sudarikov
  2020-01-17 13:37 ` [PATCH v4 1/2] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping roman.sudarikov
@ 2020-01-17 13:37 ` roman.sudarikov
  2020-01-17 14:19   ` Greg KH
  2020-01-17 14:14 ` [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs Greg KH
  2 siblings, 1 reply; 17+ messages in thread
From: roman.sudarikov @ 2020-01-17 13:37 UTC (permalink / raw)
  To: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, ak, kan.liang, gregkh
  Cc: alexander.antonov, roman.sudarikov

From: Roman Sudarikov <roman.sudarikov@linux.intel.com>

The current version supports the server line starting with the Intel® Xeon®
Processor Scalable Family and introduces mapping for IIO Uncore units only.
Other units can be added on demand.

IIO stack to PMON mapping is exposed through:
    /sys/devices/uncore_iio_<pmu_idx>/mapping
    in the following format: domain:bus

For example, on a 4-die Intel Xeon® server platform:
    $ cat /sys/devices/uncore_iio_0/mapping
    0000:00,0000:40,0000:80,0000:c0

Which means:
IIO PMON block 0 on die 0 belongs to IIO stack on bus 0x00, domain 0x0000
IIO PMON block 0 on die 1 belongs to IIO stack on bus 0x40, domain 0x0000
IIO PMON block 0 on die 2 belongs to IIO stack on bus 0x80, domain 0x0000
IIO PMON block 0 on die 3 belongs to IIO stack on bus 0xc0, domain 0x0000
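
A minimal userspace sketch (not part of this patch) of how a consumer such
as perf might parse that comma-separated "domain:bus" list; the function
name is hypothetical:

    #include <stdio.h>

    /* Parse "0000:00,0000:40,..." into per-die (domain, bus) pairs;
     * returns the number of dies parsed. */
    static int parse_iio_mapping(const char *s, unsigned int *domain,
                                 unsigned int *bus, int max_dies)
    {
            int die = 0, n;

            while (die < max_dies &&
                   sscanf(s, "%x:%x%n", &domain[die], &bus[die], &n) == 2) {
                    die++;
                    s += n;
                    if (*s != ',')
                            break;
                    s++;
            }
            return die;
    }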

Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Co-developed-by: Alexander Antonov <alexander.antonov@intel.com>
Signed-off-by: Alexander Antonov <alexander.antonov@intel.com>
Signed-off-by: Roman Sudarikov <roman.sudarikov@linux.intel.com>
---
 arch/x86/events/intel/uncore_snbep.c | 160 +++++++++++++++++++++++++++
 1 file changed, 160 insertions(+)

diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index b10a5ec79e48..813009b48a0f 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -273,6 +273,30 @@
 #define SKX_CPUNODEID			0xc0
 #define SKX_GIDNIDMAP			0xd4
 
+/*
+ * The CPU_BUS_NUMBER MSR returns the values of the respective CPUBUSNO CSR
+ * that BIOS programmed. MSR has package scope.
+ * |  Bit  |  Default  |  Description
+ * | [63]  |    00h    | VALID - When set, indicates the CPU bus
+ *                       numbers have been initialized. (RO)
+ * |[62:48]|    ---    | Reserved
+ * |[47:40]|    00h    | BUS_NUM_5 — Return the bus number BIOS assigned
+ *                       CPUBUSNO(5). (RO)
+ * |[39:32]|    00h    | BUS_NUM_4 — Return the bus number BIOS assigned
+ *                       CPUBUSNO(4). (RO)
+ * |[31:24]|    00h    | BUS_NUM_3 — Return the bus number BIOS assigned
+ *                       CPUBUSNO(3). (RO)
+ * |[23:16]|    00h    | BUS_NUM_2 — Return the bus number BIOS assigned
+ *                       CPUBUSNO(2). (RO)
+ * |[15:8] |    00h    | BUS_NUM_1 — Return the bus number BIOS assigned
+ *                       CPUBUSNO(1). (RO)
+ * | [7:0] |    00h    | BUS_NUM_0 — Return the bus number BIOS assigned
+ *                       CPUBUSNO(0). (RO)
+ */
+#define SKX_MSR_CPU_BUS_NUMBER		0x300
+#define SKX_MSR_CPU_BUS_VALID_BIT	(1ULL << 63)
+#define BUS_NUM_STRIDE			8
+
 /* SKX CHA */
 #define SKX_CHA_MSR_PMON_BOX_FILTER_TID		(0x1ffULL << 0)
 #define SKX_CHA_MSR_PMON_BOX_FILTER_LINK	(0xfULL << 9)
@@ -3580,6 +3604,9 @@ static struct intel_uncore_ops skx_uncore_iio_ops = {
 	.read_counter		= uncore_msr_read_counter,
 };
 
+static int skx_iio_get_topology(struct intel_uncore_type *type, int max_dies);
+static int skx_iio_set_mapping(struct intel_uncore_type *type, int max_dies);
+
 static struct intel_uncore_type skx_uncore_iio = {
 	.name			= "iio",
 	.num_counters		= 4,
@@ -3594,6 +3621,8 @@ static struct intel_uncore_type skx_uncore_iio = {
 	.constraints		= skx_uncore_iio_constraints,
 	.ops			= &skx_uncore_iio_ops,
 	.format_group		= &skx_uncore_iio_format_group,
+	.get_topology		= skx_iio_get_topology,
+	.set_mapping		= skx_iio_set_mapping,
 };
 
 enum perf_uncore_iio_freerunning_type_id {
@@ -3780,6 +3809,137 @@ static int skx_count_chabox(void)
 	return hweight32(val);
 }
 
+static inline int skx_msr_cpu_bus_read(int cpu, u64 *topology)
+{
+	u64 msr_value;
+
+	if (rdmsrl_on_cpu(cpu, SKX_MSR_CPU_BUS_NUMBER, &msr_value) ||
+			!(msr_value & SKX_MSR_CPU_BUS_VALID_BIT))
+		return -ENXIO;
+
+	*topology = msr_value;
+
+	return 0;
+}
+
+static int skx_iio_get_topology(struct intel_uncore_type *type, int max_dies)
+{
+	int ret, cpu, die, current_die;
+	struct pci_bus *bus = NULL;
+
+	/*
+	 * Verified single-segment environments only; disabled for multiple
+	 * segment topologies for now.
+	 */
+	while ((bus = pci_find_next_bus(bus)) && !pci_domain_nr(bus))
+		;
+	if (bus)
+		return -EPERM;
+
+	type->topology = kcalloc(max_dies, sizeof(u64), GFP_KERNEL);
+	if (!type->topology)
+		return -ENOMEM;
+
+	/*
+	 * Using cpus_read_lock() to ensure cpu is not going down between
+	 * looking at cpu_online_mask.
+	 */
+	cpus_read_lock();
+	/* Invalid value to start loop.*/
+	current_die = -1;
+	for_each_online_cpu(cpu) {
+		die = topology_logical_die_id(cpu);
+		if (current_die == die)
+			continue;
+		ret = skx_msr_cpu_bus_read(cpu, &type->topology[die]);
+		if (ret) {
+			kfree(type->topology);
+			break;
+		}
+		current_die = die;
+	}
+	cpus_read_unlock();
+
+	return ret;
+}
+
+static inline u8 skx_iio_stack_bus(struct intel_uncore_pmu *pmu, int die)
+{
+	return pmu->type->topology[die] >> (pmu->pmu_idx * BUS_NUM_STRIDE);
+}
+
+static int skx_iio_set_box_mapping(struct intel_uncore_pmu *pmu, int max_dies)
+{
+	char *buf;
+	int die = 0;
+	/* Length of template "%04x:%02x," without null character. */
+	const int template_len = 8;
+
+	/*
+	 * Root bus 0x00 is valid only for die 0 AND pmu_idx = 0.
+	 * Set "0" platform mapping for PMUs which have zero stack bus and
+	 * non-zero index.
+	 */
+	if (!skx_iio_stack_bus(pmu, die) && pmu->pmu_idx) {
+		pmu->mapping = kzalloc(2, GFP_KERNEL);
+		if (!pmu->mapping)
+			return -ENOMEM;
+		sprintf(pmu->mapping, "0");
+		return 0;
+	}
+
+	pmu->mapping = kzalloc(max_dies * template_len + 1, GFP_KERNEL);
+	if (!pmu->mapping)
+		return -ENOMEM;
+
+	buf = pmu->mapping;
+	for (; die < max_dies; die++) {
+		buf += snprintf(buf, template_len + 1, "%04x:%02x,", 0,
+				skx_iio_stack_bus(pmu, die));
+	}
+
+	*(--buf) = '\0';
+
+	return 0;
+}
+
+static int skx_iio_set_mapping(struct intel_uncore_type *type, int max_dies)
+{
+	/*
+	 * Each IIO stack (PCIe root port) has its own IIO PMON block, so each
+	 * "mapping" holds bus number(s) of PCIe root port(s), which can
+	 * be monitored by that IIO PMON block.
+	 *
+	 * For example, on 4-die Xeon platform with up to 6 IIO stacks per die
+	 * and, therefore, 6 IIO PMON blocks per die, the "mapping"
+	 * of IIO PMON block 0 holds "0000:00,0000:40,0000:80,0000:c0":
+	 *
+	 * $ cat /sys/devices/uncore_iio_0/mapping
+	 * 0000:00,0000:40,0000:80,0000:c0
+	 *
+	 * Which means:
+	 * IIO PMON 0 on die 0 belongs to PCIe RP on bus 0x00, domain 0x0000
+	 * IIO PMON 0 on die 1 belongs to PCIe RP on bus 0x40, domain 0x0000
+	 * IIO PMON 0 on die 2 belongs to PCIe RP on bus 0x80, domain 0x0000
+	 * IIO PMON 0 on die 3 belongs to PCIe RP on bus 0xc0, domain 0x0000
+	 */
+
+	int ret;
+	struct intel_uncore_pmu *pmu = type->pmus;
+
+	for (; pmu - type->pmus < type->num_boxes; pmu++) {
+		ret = skx_iio_set_box_mapping(pmu, max_dies);
+		if (ret) {
+			for (; pmu->pmu_idx > 0; --pmu)
+				kfree(pmu->mapping);
+			break;
+		}
+	}
+
+	kfree(type->topology);
+	return ret;
+}
+
 void skx_uncore_cpu_init(void)
 {
 	skx_uncore_chabox.num_boxes = skx_count_chabox();
-- 
2.19.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs
  2020-01-17 13:37 [PATCH v4 0/2] perf x86: Exposing IO stack to IO PMON mapping through sysfs roman.sudarikov
  2020-01-17 13:37 ` [PATCH v4 1/2] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping roman.sudarikov
  2020-01-17 13:37 ` [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform roman.sudarikov
@ 2020-01-17 14:14 ` Greg KH
  2 siblings, 0 replies; 17+ messages in thread
From: Greg KH @ 2020-01-17 14:14 UTC (permalink / raw)
  To: roman.sudarikov
  Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, ak, kan.liang,
	alexander.antonov

On Fri, Jan 17, 2020 at 04:37:57PM +0300, roman.sudarikov@linux.intel.com wrote:
> From: Roman Sudarikov <roman.sudarikov@linux.intel.com>
> 
> The previous version can be found at:
> v3: https://lkml.kernel.org/r/20200113135444.12027-1-roman.sudarikov@linux.intel.com
> 
> Changes in this revision are:
> v3 -> v4:
> - Addressed comments from Greg Kroah-Hartman:
>   1. Reworked handling of newly introduced attribute.
>   2. Required Documentation update is expected in the follow up patchset

What follow up patchset?  Please include it here so we can properly
review if the code you have matches the documentation and if it makes
sense to do it that way.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 1/2] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping
  2020-01-17 13:37 ` [PATCH v4 1/2] perf x86: Infrastructure for exposing an Uncore unit to PMON mapping roman.sudarikov
@ 2020-01-17 14:16   ` Greg KH
  0 siblings, 0 replies; 17+ messages in thread
From: Greg KH @ 2020-01-17 14:16 UTC (permalink / raw)
  To: roman.sudarikov
  Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, ak, kan.liang,
	alexander.antonov

On Fri, Jan 17, 2020 at 04:37:58PM +0300, roman.sudarikov@linux.intel.com wrote:
> From: Roman Sudarikov <roman.sudarikov@linux.intel.com>
> 
> Intel® Xeon® Scalable processor family (code name Skylake-SP) makes
> significant changes in the integrated I/O (IIO) architecture. The new
> solution introduces IIO stacks which are responsible for managing traffic
> between the PCIe domain and the Mesh domain. Each IIO stack has its own
> PMON block and can handle either DMI port, x16 PCIe root port, MCP-Link
> or various built-in accelerators. IIO PMON blocks allow concurrent
> monitoring of I/O flows up to 4 x4 bifurcation within each IIO stack.
> 
> Software is supposed to program required perf counters within each IIO
> stack and gather performance data. The tricky thing here is that IIO PMON
> reports data per IIO stack but users have no idea what IIO stacks are -
> they only know devices which are connected to the platform.
> 
> Understanding IIO stack concept to find which IIO stack that particular
> IO device is connected to, or to identify an IIO PMON block to program
> for monitoring specific IIO stack assumes a lot of implicit knowledge
> about given Intel server platform architecture.
> 
> Usage example:
>     /sys/devices/uncore_<type>_<pmu_idx>/mapping
> 
> Each Uncore unit type, by its nature, can be mapped to its own context,
> for example:
> 1. CHA - each uncore_cha_<pmu_idx> is assigned to manage a distinct slice
>    of LLC capacity;
> 2. UPI - each uncore_upi_<pmu_idx> is assigned to manage one link of Intel
>    UPI Subsystem;
> 3. IIO - each uncore_iio_<pmu_idx> is assigned to manage one stack of the
>    IIO module;
> 4. IMC - each uncore_imc_<pmu_idx> is assigned to manage one channel of
>    Memory Controller.
> 
> Implementation details:
> Two callbacks added to struct intel_uncore_type to discover and map Uncore
> units to PMONs:
>     int (*get_topology)(struct intel_uncore_type *type, int max_dies)
>     int (*set_mapping)(struct intel_uncore_type *type, int max_dies)
> 
> Details of IIO Uncore unit mapping to IIO PMON:
> Each IIO stack is either DMI port, x16 PCIe root port, MCP-Link or various
> built-in accelerators. For Uncore IIO Unit type, the mapping file
> holds bus numbers of devices, which can be monitored by that IIO PMON block
> on each die.
> 
> Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
> Co-developed-by: Alexander Antonov <alexander.antonov@intel.com>
> Signed-off-by: Alexander Antonov <alexander.antonov@intel.com>
> Signed-off-by: Roman Sudarikov <roman.sudarikov@linux.intel.com>
> ---
>  arch/x86/events/intel/uncore.c | 46 ++++++++++++++++++++++++++++++++++
>  arch/x86/events/intel/uncore.h |  6 +++++
>  2 files changed, 52 insertions(+)
> 
> diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
> index 86467f85c383..55201bfde2c8 100644
> --- a/arch/x86/events/intel/uncore.c
> +++ b/arch/x86/events/intel/uncore.c
> @@ -825,6 +825,44 @@ static const struct attribute_group uncore_pmu_attr_group = {
>  	.attrs = uncore_pmu_attrs,
>  };
>  
> +static ssize_t mapping_show(struct device *dev,
> +				struct device_attribute *attr, char *buf)
> +{
> +	struct intel_uncore_pmu *pmu = dev_get_drvdata(dev);
> +
> +	return snprintf(buf, PAGE_SIZE - 1, "%s\n", pmu->mapping);
> +}
> +static DEVICE_ATTR_RO(mapping);
> +
> +static struct attribute *mapping_attrs[] = {
> +	&dev_attr_mapping.attr,
> +	NULL,
> +};
> +
> +static struct attribute_group mapping_group = {
> +	.attrs = mapping_attrs,
> +};
> +
> +static umode_t
> +not_visible(struct kobject *kobj, struct attribute *attr, int i)
> +{
> +	return 0;
> +}
> +
> +static const struct attribute_group *attr_update[] = {
> +	&mapping_group,
> +	NULL,
> +};

ATTRIBUTE_GROUPS()?
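
For reference (assuming the attribute names from the patch),
ATTRIBUTE_GROUPS(mapping) would expand to roughly the following. Note the
generated group and group array are const, while uncore_platform_mapping()
mutates mapping_group.is_visible at run time, so the macro cannot be
dropped in unchanged:

    static const struct attribute_group mapping_group = {
            .attrs = mapping_attrs,
    };

    static const struct attribute_group *mapping_groups[] = {
            &mapping_group,
            NULL,
    };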


> +
> +static void uncore_platform_mapping(struct intel_uncore_type *t)
> +{
> +	if (t->get_topology && t->set_mapping &&
> +	    !t->get_topology(t, max_dies) && !t->set_mapping(t, max_dies))
> +		mapping_group.is_visible = NULL;

No need to set something to NULL that is already set to NULL, right?
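
One possible simplification along those lines, keeping the names used in
the patch (illustrative only, not a tested replacement):

    static void uncore_platform_mapping(struct intel_uncore_type *t)
    {
            /* is_visible is already NULL; only hide the attribute on failure */
            if (!t->get_topology || !t->set_mapping ||
                t->get_topology(t, max_dies) || t->set_mapping(t, max_dies))
                    mapping_group.is_visible = not_visible;
    }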

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 13:37 ` [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform roman.sudarikov
@ 2020-01-17 14:19   ` Greg KH
  2020-01-17 16:23     ` Andi Kleen
  0 siblings, 1 reply; 17+ messages in thread
From: Greg KH @ 2020-01-17 14:19 UTC (permalink / raw)
  To: roman.sudarikov
  Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, ak, kan.liang,
	alexander.antonov

On Fri, Jan 17, 2020 at 04:37:59PM +0300, roman.sudarikov@linux.intel.com wrote:
> From: Roman Sudarikov <roman.sudarikov@linux.intel.com>
> 
> Current version supports a server line starting Intel® Xeon® Processor
> Scalable Family and introduces mapping for IIO Uncore units only.
> Other units can be added on demand.
> 
> IIO stack to PMON mapping is exposed through:
>     /sys/devices/uncore_iio_<pmu_idx>/mapping
>     in the following format: domain:bus
> 
> For example, on a 4-die Intel Xeon® server platform:
>     $ cat /sys/devices/uncore_iio_0/mapping
>     0000:00,0000:40,0000:80,0000:c0

Again, horrible format, why do we have to parse this in userspace like
this?  Who will use this file?  What do they really need?

And what happens when you have too many "dies" in a system and you
overflow the sysfs file?  We have already seen this happen for other
sysfs files that assumed "oh, we will never have that many
cpus/leds/whatever in a system at one time" and now they have to go do
horrid hacks to get around the PAGE_SIZE limitation of sysfs files.

DO NOT DO THIS!

I thought I was nice and gentle last time and said that this was a
really bad idea and you would fix it up.  That didn't happen, so I am
being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A SYSFS
FILE.

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 14:19   ` Greg KH
@ 2020-01-17 16:23     ` Andi Kleen
  2020-01-17 16:54       ` Greg KH
  0 siblings, 1 reply; 17+ messages in thread
From: Andi Kleen @ 2020-01-17 16:23 UTC (permalink / raw)
  To: Greg KH
  Cc: roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

> I thought I was nice and gentle last time and said that this was a
> really bad idea and you would fix it up.  That didn't happen, so I am
> being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A SYSFS
> FILE.

Could you suggest how such a 1:N mapping should be expressed instead in
sysfs?

-Andi

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 16:23     ` Andi Kleen
@ 2020-01-17 16:54       ` Greg KH
  2020-01-17 17:27         ` Andi Kleen
  2020-01-21 16:15         ` Sudarikov, Roman
  0 siblings, 2 replies; 17+ messages in thread
From: Greg KH @ 2020-01-17 16:54 UTC (permalink / raw)
  To: Andi Kleen
  Cc: roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

On Fri, Jan 17, 2020 at 08:23:57AM -0800, Andi Kleen wrote:
> > I thought I was nice and gentle last time and said that this was a
> > really bad idea and you would fix it up.  That didn't happen, so I am
> > being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A SYSFS
> > FILE.
> 
> Could you suggest how such a 1:N mapping should be expressed instead in
> sysfs?

I have yet to figure out what it is you all are trying to express here
given a lack of Documentation/ABI/ file :)

But again, sysfs is ONE VALUE PER FILE.  You have a list of items here,
that is bounded only by the number of devices in the system at the
moment.  That number will go up in time, as we all know.  So this is
just not going to work at all as-is.

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 16:54       ` Greg KH
@ 2020-01-17 17:27         ` Andi Kleen
  2020-01-17 18:42           ` Greg KH
  2020-01-21 16:15         ` Sudarikov, Roman
  1 sibling, 1 reply; 17+ messages in thread
From: Andi Kleen @ 2020-01-17 17:27 UTC (permalink / raw)
  To: Greg KH
  Cc: roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

> > Could you suggest how such a 1:N mapping should be expressed instead in
> > sysfs?
> 
> I have yet to figure out what it is you all are trying to express here
> given a lack of Documentation/ABI/ file :)

I thought the example Roman gave was clear.

System has multiple dies
Each die has 4 pmon ports
Each pmon port per die maps to one PCI bus.

He mapped it to 

pmon0-3: list of pci busses indexed by die

To be honest the approach doesn't seem unreasonable to me. It's similar
to, e.g., how we express lists of cpus or nodes in sysfs today.

-Andi

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 17:27         ` Andi Kleen
@ 2020-01-17 18:42           ` Greg KH
  2020-01-17 19:12             ` Andi Kleen
  0 siblings, 1 reply; 17+ messages in thread
From: Greg KH @ 2020-01-17 18:42 UTC (permalink / raw)
  To: Andi Kleen
  Cc: roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

On Fri, Jan 17, 2020 at 09:27:26AM -0800, Andi Kleen wrote:
> > > Could you suggest how such a 1:N mapping should be expressed instead in
> > > sysfs?
> > 
> > I have yet to figure out what it is you all are trying to express here
> > given a lack of Documentation/ABI/ file :)
> 
> I thought the example Roman gave was clear.
> 
> System has multiple dies
> Each die has 4 pmon ports
> Each pmon port per die maps to one PCI bus.
> 
> He mapped it to 
> 
> pmon0-3: list of pci busses indexed by die
> 
> To be honest the approach doesn't seem unreasonable to me. It's similar
> e.g. how we express lists of cpus or nodes in sysfs today.

Again, you are having to parse a single line of output from sysfs that
contains multiple values, one that will just keep getting bigger and
bigger as time goes on until we run out of space.

One value per file for sysfs, it's been the rule since the beginning.
If there are files that violate this, ugh, it slips through, but as the
submitter is asking for my review, I am going to actually follow the
rules here.

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 18:42           ` Greg KH
@ 2020-01-17 19:12             ` Andi Kleen
  2020-01-17 23:03               ` Greg KH
  0 siblings, 1 reply; 17+ messages in thread
From: Andi Kleen @ 2020-01-17 19:12 UTC (permalink / raw)
  To: roman.sudarikov
  Cc: gregkh, roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

> > pmon0-3: list of pci busses indexed by die
> > 
> > To be honest the approach doesn't seem unreasonable to me. It's similar
> > e.g. how we express lists of cpus or nodes in sysfs today.

<snipped repeated form letter non answer to question>

Roman, 

I suppose you'll need something like

/sys/device/system/dieXXX/pci-pmon<0-3>/bus 

and bus could be a symlink to the pci bus directory.

The whole thing will be ugly and complicated and slow and difficult
to parse, but it will presumably follow Greg's rules.

-Andi


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 19:12             ` Andi Kleen
@ 2020-01-17 23:03               ` Greg KH
  2020-01-17 23:21                 ` Andi Kleen
  0 siblings, 1 reply; 17+ messages in thread
From: Greg KH @ 2020-01-17 23:03 UTC (permalink / raw)
  To: Andi Kleen
  Cc: roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

On Fri, Jan 17, 2020 at 11:12:20AM -0800, Andi Kleen wrote:
> > > pmon0-3: list of pci busses indexed by die
> > > 
> > > To be honest the approach doesn't seem unreasonable to me. It's similar
> > > e.g. how we express lists of cpus or nodes in sysfs today.
> 
> <snipped repeated form letter non answer to question>
> 
> Roman, 
> 
> I suppose you'll need something like
> 
> /sys/device/system/dieXXX/pci-pmon<0-3>/bus 
> 
> and bus could be a symlink to the pci bus directory.

Why do you need to link to the pci bus directory?

> The whole thing will be ugly and complicated and slow and difficult
> to parse, but it will presumably follow Greg's rules.

Who needs to parse this?  What tool will do it and for what?

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 23:03               ` Greg KH
@ 2020-01-17 23:21                 ` Andi Kleen
  0 siblings, 0 replies; 17+ messages in thread
From: Andi Kleen @ 2020-01-17 23:21 UTC (permalink / raw)
  To: Greg KH
  Cc: roman.sudarikov, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

> > Roman, 
> > 
> > I suppose you'll need something like
> > 
> > /sys/device/system/dieXXX/pci-pmon<0-3>/bus 
> > 
> > and bus could be a symlink to the pci bus directory.
> 
> Why do you need to link to the pci bus directory?

The goal is to identify which bus is counted by the perfmon
counter. I suppose it could be a field containing the bus
identifier too, but I think symlinks are used elsewhere.

> > The whole thing will be ugly and complicated and slow and difficult
> > to parse, but it will presumably follow Greg's rules.
> 
> Who needs to parse this?  What tool will do it and for what?

perf parses it to output PCI bandwidth per PCI device.

-Andi


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-17 16:54       ` Greg KH
  2020-01-17 17:27         ` Andi Kleen
@ 2020-01-21 16:15         ` Sudarikov, Roman
  2020-01-21 17:15           ` Greg KH
  1 sibling, 1 reply; 17+ messages in thread
From: Sudarikov, Roman @ 2020-01-21 16:15 UTC (permalink / raw)
  To: Greg KH, Andi Kleen
  Cc: peterz, mingo, acme, mark.rutland, alexander.shishkin, jolsa,
	namhyung, linux-kernel, eranian, bgregg, kan.liang,
	alexander.antonov

On 17.01.2020 19:54, Greg KH wrote:
> On Fri, Jan 17, 2020 at 08:23:57AM -0800, Andi Kleen wrote:
>>> I thought I was nice and gentle last time and said that this was a
>>> really bad idea and you would fix it up.  That didn't happen, so I am
>>> being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A SYSFS
>>> FILE.
>> Could you suggest how such a 1:N mapping should be expressed instead in
>> sysfs?
> I have yet to figure out what it is you all are trying to express here
> given a lack of Documentation/ABI/ file :)
>
> But again, sysfs is ONE VALUE PER FILE.  You have a list of items here,
> that is bounded only by the number of devices in the system at the
> moment.  That number will go up in time, as we all know.  So this is
> just not going to work at all as-is.
>
> greg k-h

Hi Greg,

Technically, the motivation behind this patch is to enable the Linux perf
tool to attribute IO traffic to IO devices.

Currently, the perf tool provides an interface to configure IO PMUs only,
without any context.

Understanding the IIO stack concept in order to find which IIO stack a
particular I/O device is connected to, or to identify the IIO PMON block
to program for monitoring a specific IIO stack, assumes a lot of implicit
knowledge about the given Intel server platform architecture.


Please consider the following mapping schema:

1. A new "mapping" directory is to be added under each uncore_iio_N directory;
2. That "mapping" directory is supposed to contain symlinks named "dieN"
   which point to the corresponding root bus.
Below is how it looks for a 2S machine:

# ll uncore_iio_0/mapping/
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 -> ../../pci0000:00/pci_bus/0000:00
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 -> ../../pci0000:80/pci_bus/0000:80

# ll uncore_iio_1/mapping/
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 -> ../../pci0000:17/pci_bus/0000:17
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 -> ../../pci0000:85/pci_bus/0000:85

# ll uncore_iio_2/mapping/
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 -> ../../pci0000:3a/pci_bus/0000:3a
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 -> ../../pci0000:ae/pci_bus/0000:ae

# ll uncore_iio_3/mapping/
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 -> ../../pci0000:5d/pci_bus/0000:5d
lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 -> ../../pci0000:d7/pci_bus/0000:d7
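
For completeness, a hypothetical userspace sketch of how a tool such as
perf could resolve one of those symlinks to a root bus (the helper name is
made up):

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Resolve /sys/devices/<pmu>/mapping/die<N> and return the "domain:bus"
     * taken from the last component of the link target. */
    static int iio_die_bus(const char *pmu, int die, char *out, size_t len)
    {
            char path[PATH_MAX], target[PATH_MAX];
            const char *base;
            ssize_t n;

            snprintf(path, sizeof(path), "/sys/devices/%s/mapping/die%d", pmu, die);
            n = readlink(path, target, sizeof(target) - 1);
            if (n < 0)
                    return -1;
            target[n] = '\0';
            base = strrchr(target, '/');
            snprintf(out, len, "%s", base ? base + 1 : target);
            return 0;
    }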


Thanks,
Roman


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-21 16:15         ` Sudarikov, Roman
@ 2020-01-21 17:15           ` Greg KH
  2020-01-28 14:55             ` Sudarikov, Roman
  0 siblings, 1 reply; 17+ messages in thread
From: Greg KH @ 2020-01-21 17:15 UTC (permalink / raw)
  To: Sudarikov, Roman
  Cc: Andi Kleen, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

On Tue, Jan 21, 2020 at 07:15:56PM +0300, Sudarikov, Roman wrote:
> On 17.01.2020 19:54, Greg KH wrote:
> > On Fri, Jan 17, 2020 at 08:23:57AM -0800, Andi Kleen wrote:
> > > > I thought I was nice and gentle last time and said that this was a
> > > > really bad idea and you would fix it up.  That didn't happen, so I am
> > > > being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A SYSFS
> > > > FILE.
> > > Could you suggest how such a 1:N mapping should be expressed instead in
> > > sysfs?
> > I have yet to figure out what it is you all are trying to express here
> > given a lack of Documentation/ABI/ file :)
> > 
> > But again, sysfs is ONE VALUE PER FILE.  You have a list of items here,
> > that is bounded only by the number of devices in the system at the
> > moment.  That number will go up in time, as we all know.  So this is
> > just not going to work at all as-is.
> > 
> > greg k-h
> 
> Hi Greg,
> 
> Technically, the motivation behind this patch is to enable Linux perf tool
> to attribute IO traffic to IO device.
> 
> Currently, perf tool provides interface to configure IO PMUs only without
> any
> context.
> 
> Understanding IIO stack concept to find which IIO stack that particular
> IO device is connected to, or to identify an IIO PMON block to program
> for monitoring specific IIO stack assumes a lot of implicit knowledge
> about given Intel server platform architecture.

Is "IIO" being used here the same way that drivers/iio/ is in the
kernel, or is this some other term?  If it is the same, why isn't the
iio developers involved in this?  If it is some other term, please
always define it and perhaps pick a different name :)

> Please consider the following mapping schema:
> 
> 1. new "mapping" directory is to be added under each uncore_iio_N directory

What is uncore_iio_N?  A struct device?  Or something else?

> 2. that "mapping" directory is supposed to contain symlinks named "dieN"
> which are pointed to corresponding root bus.
> Below is how it looks like for 2S machine:
> 
> # ll uncore_iio_0/mapping/
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
> ../../pci0000:00/pci_bus/0000:00

Where did "pci_bus" come from in there?  I don't see that under
/sys/devices/ for my pci bridges.

> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
> ../../pci0000:80/pci_bus/0000:80
> 
> # ll uncore_iio_1/mapping/
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
> ../../pci0000:17/pci_bus/0000:17
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
> ../../pci0000:85/pci_bus/0000:85
> 
> # ll uncore_iio_2/mapping/
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
> ../../pci0000:3a/pci_bus/0000:3a
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
> ../../pci0000:ae/pci_bus/0000:ae
> 
> # ll uncore_iio_3/mapping/
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
> ../../pci0000:5d/pci_bus/0000:5d
> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
> ../../pci0000:d7/pci_bus/0000:d7

Why have a subdir here?

Anyway, yes, that would make sense, if userspace can actually do
something with that, can it?

Also, what tears those symlinks down when you remove those pci devices
from the system?  Shouldn't you have an entry in the pci device itself
for this type of thing?  And if so, isn't this really just a "normal"
class type driver you are writing?  That should handle all of the
symlinks and stuff for you automatically, right?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-21 17:15           ` Greg KH
@ 2020-01-28 14:55             ` Sudarikov, Roman
  2020-01-28 20:19               ` Liang, Kan
  0 siblings, 1 reply; 17+ messages in thread
From: Sudarikov, Roman @ 2020-01-28 14:55 UTC (permalink / raw)
  To: Greg KH
  Cc: Andi Kleen, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, kan.liang, alexander.antonov

On 21.01.2020 20:15, Greg KH wrote:
> On Tue, Jan 21, 2020 at 07:15:56PM +0300, Sudarikov, Roman wrote:
>> On 17.01.2020 19:54, Greg KH wrote:
>>> On Fri, Jan 17, 2020 at 08:23:57AM -0800, Andi Kleen wrote:
>>>>> I thought I was nice and gentle last time and said that this was a
>>>>> really bad idea and you would fix it up.  That didn't happen, so I am
>>>>> being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A SYSFS
>>>>> FILE.
>>>> Could you suggest how such a 1:N mapping should be expressed instead in
>>>> sysfs?
>>> I have yet to figure out what it is you all are trying to express here
>>> given a lack of Documentation/ABI/ file :)
>>>
>>> But again, sysfs is ONE VALUE PER FILE.  You have a list of items here,
>>> that is bounded only by the number of devices in the system at the
>>> moment.  That number will go up in time, as we all know.  So this is
>>> just not going to work at all as-is.
>>>
>>> greg k-h
>> Hi Greg,
>>
>> Technically, the motivation behind this patch is to enable Linux perf tool
>> to attribute IO traffic to IO device.
>>
>> Currently, perf tool provides interface to configure IO PMUs only without
>> any
>> context.
>>
>> Understanding IIO stack concept to find which IIO stack that particular
>> IO device is connected to, or to identify an IIO PMON block to program
>> for monitoring specific IIO stack assumes a lot of implicit knowledge
>> about given Intel server platform architecture.
> Is "IIO" being used here the same way that drivers/iio/ is in the
> kernel, or is this some other term?  If it is the same, why isn't the
> iio developers involved in this?  If it is some other term, please
> always define it and perhaps pick a different name :)
The term "IIO" (Integrated IO) in that context refers to the set of PMUs
which are responsible for monitoring traffic crossing PCIe domain
boundaries. It is specific to the Intel Xeon server line and has been
supported by the Linux kernel perf tool since v4.9.
So I'm just referring to what's already in the kernel :)
>> Please consider the following mapping schema:
>>
>> 1. new "mapping" directory is to be added under each uncore_iio_N directory
> What is uncore_iio_N?  A struct device?  Or something else?
It's the interface to the corresponding IIO PMU; it should be a struct device.
>> 2. that "mapping" directory is supposed to contain symlinks named "dieN"
>> which are pointed to corresponding root bus.
>> Below is how it looks like for 2S machine:
>>
>> # ll uncore_iio_0/mapping/
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>> ../../pci0000:00/pci_bus/0000:00
> Where did "pci_bus" come from in there?  I don't see under /sys/devices/
> for my pci bridges.
>
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>> ../../pci0000:80/pci_bus/0000:80
>>
>> # ll uncore_iio_1/mapping/
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>> ../../pci0000:17/pci_bus/0000:17
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>> ../../pci0000:85/pci_bus/0000:85
>>
>> # ll uncore_iio_2/mapping/
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>> ../../pci0000:3a/pci_bus/0000:3a
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>> ../../pci0000:ae/pci_bus/0000:ae
>>
>> # ll uncore_iio_3/mapping/
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>> ../../pci0000:5d/pci_bus/0000:5d
>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>> ../../pci0000:d7/pci_bus/0000:d7
> Why have a subdir here?
Just for convenience. I can put it at the same level as the other
attributes (cpumask etc.).
Please let me know which layout to choose.
> Anyway, yes, that would make sense, if userspace can actually do
> something with that, can it?
Sure! The Linux perf tool will use it to attribute IO traffic to devices.
Initially the feature was sent for review containing both kernel [1] and
user-space [2] parts, but later it was decided to finalize the kernel part
first and then proceed with the user-space part.

[1] https://lore.kernel.org/lkml/20191126163630.17300-2-roman.sudarikov@linux.intel.com/
[2] https://lore.kernel.org/lkml/20191126163630.17300-5-roman.sudarikov@linux.intel.com/
>
> Also, what tears those symlinks down when you remove those pci devices
> from the system?  Shouldn't you have an entry in the pci device itself
> for this type of thing?  And if so, isn't this really just a "normal"
> class type driver you are writing?  That should handle all of the
> symlinks and stuff for you automatically, right?
The IIO PMUs by design monitor traffic crossing the integrated PCI root
ports. For each IIO PMU the feature creates symlinks to its PCI root port
on each node.
Those PCI devices, by their nature, cannot be "just removed". If the SoC
is designed such that some integrated root port is not present, then that
case will be correctly handled by the feature.
> thanks,
>
> greg k-h



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v4 2/2] perf x86: Exposing an Uncore unit to PMON for Intel Xeon® server platform
  2020-01-28 14:55             ` Sudarikov, Roman
@ 2020-01-28 20:19               ` Liang, Kan
  0 siblings, 0 replies; 17+ messages in thread
From: Liang, Kan @ 2020-01-28 20:19 UTC (permalink / raw)
  To: Sudarikov, Roman, Greg KH
  Cc: Andi Kleen, peterz, mingo, acme, mark.rutland,
	alexander.shishkin, jolsa, namhyung, linux-kernel, eranian,
	bgregg, alexander.antonov



On 1/28/2020 9:55 AM, Sudarikov, Roman wrote:
> On 21.01.2020 20:15, Greg KH wrote:
>> On Tue, Jan 21, 2020 at 07:15:56PM +0300, Sudarikov, Roman wrote:
>>> On 17.01.2020 19:54, Greg KH wrote:
>>>> On Fri, Jan 17, 2020 at 08:23:57AM -0800, Andi Kleen wrote:
>>>>>> I thought I was nice and gentle last time and said that this was a
>>>>>> really bad idea and you would fix it up.  That didn't happen, so I am
>>>>>> being explicit here, THIS IS NOT AN ACCEPTABLE FILE OUTPUT FOR A 
>>>>>> SYSFS
>>>>>> FILE.
>>>>> Could you suggest how such a 1:N mapping should be expressed 
>>>>> instead in
>>>>> sysfs?
>>>> I have yet to figure out what it is you all are trying to express here
>>>> given a lack of Documentation/ABI/ file :)
>>>>
>>>> But again, sysfs is ONE VALUE PER FILE.  You have a list of items here,
>>>> that is bounded only by the number of devices in the system at the
>>>> moment.  That number will go up in time, as we all know.  So this is
>>>> just not going to work at all as-is.
>>>>
>>>> greg k-h
>>> Hi Greg,
>>>
>>> Technically, the motivation behind this patch is to enable Linux perf 
>>> tool
>>> to attribute IO traffic to IO device.
>>>
>>> Currently, perf tool provides interface to configure IO PMUs only 
>>> without
>>> any
>>> context.
>>>
>>> Understanding IIO stack concept to find which IIO stack that particular
>>> IO device is connected to, or to identify an IIO PMON block to program
>>> for monitoring specific IIO stack assumes a lot of implicit knowledge
>>> about given Intel server platform architecture.
>> Is "IIO" being used here the same way that drivers/iio/ is in the
>> kernel, or is this some other term?  If it is the same, why isn't the
>> iio developers involved in this?  If it is some other term, please
>> always define it and perhaps pick a different name :)
> The term "IIO" (Integrated IO) in that context refers to set of PMUs 
> which are
> responsible for monitoring traffic crossing PCIe domain boundaries. It's 
> specific
> for Intel Xeon server line and supported by Linux kernel perf tool 
> starting v4.9.
> So I'm just referring to what's already in the kernel :)
>>> Please consider the following mapping schema:
>>>
>>> 1. new "mapping" directory is to be added under each uncore_iio_N 
>>> directory
>> What is uncore_iio_N?  A struct device?  Or something else?
> It's interface to corresponding IIO PMU, should be struct device
>>> 2. that "mapping" directory is supposed to contain symlinks named "dieN"
>>> which are pointed to corresponding root bus.
>>> Below is how it looks like for 2S machine:
>>>
>>> # ll uncore_iio_0/mapping/
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>>> ../../pci0000:00/pci_bus/0000:00
>> Where did "pci_bus" come from in there?  I don't see under /sys/devices/
>> for my pci bridges.
>>
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>>> ../../pci0000:80/pci_bus/0000:80
>>>
>>> # ll uncore_iio_1/mapping/
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>>> ../../pci0000:17/pci_bus/0000:17
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>>> ../../pci0000:85/pci_bus/0000:85
>>>
>>> # ll uncore_iio_2/mapping/
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>>> ../../pci0000:3a/pci_bus/0000:3a
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>>> ../../pci0000:ae/pci_bus/0000:ae
>>>
>>> # ll uncore_iio_3/mapping/
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die0 ->
>>> ../../pci0000:5d/pci_bus/0000:5d
>>> lrwxrwxrwx 1 root root 0 Jan 20 23:55 die1 ->
>>> ../../pci0000:d7/pci_bus/0000:d7
>> Why have a subdir here?
> Just for convenience. I can put it the same level as other attributes 
> (cpumask etc).
> Please let me know which layout to choose.
>> Anyway, yes, that would make sense, if userspace can actually do
>> something with that, can it?
> Sure! The linux perf tool will use it to attribute IO traffic to devices.
> Initially the feature was sent for review containing both kernel[1] and
> user space[2] parts, but later it was decided to finalize kernel part first
> and then proceed with user space.
> 
> [1] 
> https://lore.kernel.org/lkml/20191126163630.17300-2-roman.sudarikov@linux.intel.com/ 
> 
> [2] 
> https://lore.kernel.org/lkml/20191126163630.17300-5-roman.sudarikov@linux.intel.com/ 
> 
>>
>> Also, what tears those symlinks down when you remove those pci devices
>> from the system?  Shouldn't you have an entry in the pci device itself
>> for this type of thing?  And if so, isn't this really just a "normal"
>> class type driver you are writing?  That should handle all of the
>> symlinks and stuff for you automatically, right?
> The IIO PMUs by design monitors traffic crossing integrated pci root ports.
> For each IIO PMU the feature creates symlinks to its  pci root port on 
> each node.


Can we just simply assign the BUS# to it as below?
# cat uncore_iio_1/mapping/die0
0000:00
I'm not sure why we need a symlink here.

Also, if the BUS is removed, I think we may want to update mapping as well.
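
A rough kernel-side sketch of that one-value-per-die-file idea, using
dev_ext_attribute to stash a per-die bus number (names and layout are
illustrative only):

    static ssize_t skx_iio_die_show(struct device *dev,
                                    struct device_attribute *attr, char *buf)
    {
            /* one "dieN" file per die, each printing a single domain:bus value */
            struct dev_ext_attribute *ea =
                    container_of(attr, struct dev_ext_attribute, attr);
            long bus = (long)ea->var;

            return sprintf(buf, "%04x:%02lx\n", 0, bus);
    }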

Thanks,
Kan

> Those pci devices, by its nature, can not be "just removed". If the SOC is
> designed the way that some integrated root port is not present
> then the case will be correctly handled by the feature.
>> thanks,
>>
>> greg k-h
> 
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread
