* [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support
@ 2017-02-07  8:40 Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 1/8] perf/amd/iommu: Declare pr_fmt and remove unnecessary pr_debug Suravee Suthikulpanit
                   ` (8 more replies)
  0 siblings, 9 replies; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu; +Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

This patch series modifies the existing IOMMU and Perf drivers to support
systems with multiple IOMMUs by allocating an amd_iommu PMU per IOMMU instance.
This allows users to specify performance events and filters separately for each
IOMMU.

This has been tested on a new Family 17h-based server with multiple IOMMUs.

Git branch containing this patch series is available here:

    https://github.com/ssuthiku/linux.git  perf-iommu-v9

Changes from V8 (https://lkml.org/lkml/2017/1/16/48)
  * Rebase to v4.10
  * Do not use hwc->idx as the pmu index. Instead, include a pointer
    to the struct amd_iommu in struct perf_amd_iommu.
  * Do not remove local64_cmpxchg() in perf_iommu_read().
  * Fix incorrect bitfield when using GENMASK_ULL().

Changes from V7 (https://lkml.org/lkml/2017/1/9/917)
  * Re-order patches to clean up first before introducing new stuff.
  * Always use amd_iommu_get_num_iommus() to access the amd_iommus_present
    variable.
  * Fix Perf IOMMU sysfs attributes initialization.
  * Miscellaneous cleanups.

Changes from V6 (https://lkml.org/lkml/2016/12/23/134)
  * Renamed function parameters from devid to idx (per Joerg).
  * Removed unnecessary function declarations from amd_iommu_proto.h
    (per Joerg).

Changes from V5 (https://lkml.org/lkml/2016/2/23/370)
  * Rebased onto v4.9.
  * Removed the patch which consolidates function declarations, since
    we have not yet agreed on the appropriate place for the new header file.

Thanks,
Suravee

Suravee Suthikulpanit (8):
  perf/amd/iommu: Declare pr_fmt and remove unnecessary pr_debug
  perf/amd/iommu: Clean up bitwise operations
  perf/amd/iommu: Clean up perf_iommu_read()
  iommu/amd: Introduce amd_iommu_get_num_iommus()
  perf/amd/iommu: Modify functions to query max banks and counters
  perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow
    specifying IOMMU
  perf/amd/iommu: Fix sysfs perf attribute groups
  perf/amd/iommu: Enable support for multiple IOMMUs

 arch/x86/events/amd/iommu.c     | 258 +++++++++++++++++++---------------------
 arch/x86/events/amd/iommu.h     |  18 ++-
 drivers/iommu/amd_iommu.c       |   6 +-
 drivers/iommu/amd_iommu_init.c  | 104 +++++++++-------
 drivers/iommu/amd_iommu_proto.h |   8 +-
 drivers/iommu/amd_iommu_types.h |   3 -
 6 files changed, 203 insertions(+), 194 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v9 1/8] perf/amd/iommu: Declare pr_fmt and remove unnecessary pr_debug
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 2/8] perf/amd/iommu: Clean up bitwise operations Suravee Suthikulpanit
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu
  Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit, Suravee Suthikulpanit

Declare pr_fmt for perf/amd_iommu and remove the unnecessary pr_debug calls.

Also check the return value of _init_events_attrs() and propagate the error
if it fails.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index b28200d..44638d0 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -11,6 +11,8 @@
  * published by the Free Software Foundation.
  */
 
+#define pr_fmt(fmt)	"perf/amd_iommu: " fmt
+
 #include <linux/perf_event.h>
 #include <linux/init.h>
 #include <linux/cpumask.h>
@@ -298,7 +300,6 @@ static void perf_iommu_start(struct perf_event *event, int flags)
 {
 	struct hw_perf_event *hwc = &event->hw;
 
-	pr_debug("perf: amd_iommu:perf_iommu_start\n");
 	if (WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED)))
 		return;
 
@@ -323,7 +324,6 @@ static void perf_iommu_read(struct perf_event *event)
 	u64 prev_raw_count = 0ULL;
 	u64 delta = 0ULL;
 	struct hw_perf_event *hwc = &event->hw;
-	pr_debug("perf: amd_iommu:perf_iommu_read\n");
 
 	amd_iommu_pc_get_set_reg_val(_GET_DEVID(event),
 				_GET_BANK(event), _GET_CNTR(event),
@@ -349,8 +349,6 @@ static void perf_iommu_stop(struct perf_event *event, int flags)
 	struct hw_perf_event *hwc = &event->hw;
 	u64 config;
 
-	pr_debug("perf: amd_iommu:perf_iommu_stop\n");
-
 	if (hwc->state & PERF_HES_UPTODATE)
 		return;
 
@@ -372,7 +370,6 @@ static int perf_iommu_add(struct perf_event *event, int flags)
 	struct perf_amd_iommu *perf_iommu =
 			container_of(event->pmu, struct perf_amd_iommu, pmu);
 
-	pr_debug("perf: amd_iommu:perf_iommu_add\n");
 	event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
 
 	/* request an iommu bank/counter */
@@ -393,7 +390,6 @@ static void perf_iommu_del(struct perf_event *event, int flags)
 	struct perf_amd_iommu *perf_iommu =
 			container_of(event->pmu, struct perf_amd_iommu, pmu);
 
-	pr_debug("perf: amd_iommu:perf_iommu_del\n");
 	perf_iommu_stop(event, PERF_EF_UPDATE);
 
 	/* clear the assigned iommu bank/counter */
@@ -444,24 +440,24 @@ static __init int _init_perf_amd_iommu(
 
 	raw_spin_lock_init(&perf_iommu->lock);
 
-	/* Init format attributes */
 	perf_iommu->format_group = &amd_iommu_format_group;
 
 	/* Init cpumask attributes to only core 0 */
 	cpumask_set_cpu(0, &iommu_cpumask);
 	perf_iommu->cpumask_group = &amd_iommu_cpumask_group;
 
-	/* Init events attributes */
-	if (_init_events_attrs(perf_iommu) != 0)
-		pr_err("perf: amd_iommu: Only support raw events.\n");
+	ret = _init_events_attrs(perf_iommu);
+	if (ret) {
+		pr_err("Error initializing AMD IOMMU perf events.\n");
+		return ret;
+	}
 
-	/* Init null attributes */
 	perf_iommu->null_group = NULL;
 	perf_iommu->pmu.attr_groups = perf_iommu->attr_groups;
 
 	ret = perf_pmu_register(&perf_iommu->pmu, name, -1);
 	if (ret) {
-		pr_err("perf: amd_iommu: Failed to initialized.\n");
+		pr_err("Error initializing AMD IOMMU perf counters.\n");
 		amd_iommu_pc_exit();
 	} else {
 		pr_info("perf: amd_iommu: Detected. (%d banks, %d counters/bank)\n",
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 2/8] perf/amd/iommu: Clean up bitwise operations
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 1/8] perf/amd/iommu: Declare pr_fmt and remove unnecessary pr_debug Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 3/8] perf/amd/iommu: Clean up perf_iommu_read() Suravee Suthikulpanit
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu; +Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Clean up register initialization and use BIT_ULL(x) where appropriate.
This does not change logic or functionality.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index 44638d0..85b634e 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -164,11 +164,11 @@ static int get_next_avail_iommu_bnk_cntr(struct perf_amd_iommu *perf_iommu)
 	for (bank = 0, shift = 0; bank < max_banks; bank++) {
 		for (cntr = 0; cntr < max_cntrs; cntr++) {
 			shift = bank + (bank*3) + cntr;
-			if (perf_iommu->cntr_assign_mask & (1ULL<<shift)) {
+			if (perf_iommu->cntr_assign_mask & BIT_ULL(shift)) {
 				continue;
 			} else {
-				perf_iommu->cntr_assign_mask |= (1ULL<<shift);
-				retval = ((u16)((u16)bank<<8) | (u8)(cntr));
+				perf_iommu->cntr_assign_mask |= BIT_ULL(shift);
+				retval = ((bank & 0xFF) << 8) | (cntr & 0xFF);
 				goto out;
 			}
 		}
@@ -265,23 +265,23 @@ static void perf_iommu_enable_event(struct perf_event *ev)
 			_GET_BANK(ev), _GET_CNTR(ev) ,
 			 IOMMU_PC_COUNTER_SRC_REG, &reg, true);
 
-	reg = 0ULL | devid | (_GET_DEVID_MASK(ev) << 32);
+	reg = devid | (_GET_DEVID_MASK(ev) << 32);
 	if (reg)
-		reg |= (1UL << 31);
+		reg |= BIT(31);
 	amd_iommu_pc_get_set_reg_val(devid,
 			_GET_BANK(ev), _GET_CNTR(ev) ,
 			 IOMMU_PC_DEVID_MATCH_REG, &reg, true);
 
-	reg = 0ULL | _GET_PASID(ev) | (_GET_PASID_MASK(ev) << 32);
+	reg = _GET_PASID(ev) | (_GET_PASID_MASK(ev) << 32);
 	if (reg)
-		reg |= (1UL << 31);
+		reg |= BIT(31);
 	amd_iommu_pc_get_set_reg_val(devid,
 			_GET_BANK(ev), _GET_CNTR(ev) ,
 			 IOMMU_PC_PASID_MATCH_REG, &reg, true);
 
-	reg = 0ULL | _GET_DOMID(ev) | (_GET_DOMID_MASK(ev) << 32);
+	reg = _GET_DOMID(ev) | (_GET_DOMID_MASK(ev) << 32);
 	if (reg)
-		reg |= (1UL << 31);
+		reg |= BIT(31);
 	amd_iommu_pc_get_set_reg_val(devid,
 			_GET_BANK(ev), _GET_CNTR(ev) ,
 			 IOMMU_PC_DOMID_MATCH_REG, &reg, true);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 3/8] perf/amd/iommu: Clean up perf_iommu_read()
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 1/8] perf/amd/iommu: Declare pr_fmt and remove unnecessary pr_debug Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 2/8] perf/amd/iommu: Clean up bitwise operations Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 4/8] iommu/amd: Introduce amd_iommu_get_num_iommus() Suravee Suthikulpanit
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu
  Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit, Suravee Suthikulpanit

Fix coding style and use the GENMASK_ULL() macro.
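
For reference, the overflow handling that this cleanup touches works by
shifting the 48-bit hardware count into the top bits of a 64-bit value, so
that a counter wrap is absorbed by ordinary unsigned 64-bit arithmetic. A
minimal standalone sketch (not part of the patch; it assumes COUNTER_SHIFT
is 64 - 48 = 16, as in this driver):

	#define CNT_BITS	48
	#define CNT_SHIFT	(64 - CNT_BITS)		/* 16 */

	/* Delta between two 48-bit samples, tolerating a single wrap. */
	static inline u64 cnt_delta(u64 prev, u64 now)
	{
		/*
		 * Shift both samples into the high bits so the subtraction
		 * wraps naturally in 64 bits, then shift the result back.
		 */
		return ((now << CNT_SHIFT) - (prev << CNT_SHIFT)) >> CNT_SHIFT;
	}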

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index 85b634e..5bef429 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -320,9 +320,7 @@ static void perf_iommu_start(struct perf_event *event, int flags)
 
 static void perf_iommu_read(struct perf_event *event)
 {
-	u64 count = 0ULL;
-	u64 prev_raw_count = 0ULL;
-	u64 delta = 0ULL;
+	u64 count, prev, delta;
 	struct hw_perf_event *hwc = &event->hw;
 
 	amd_iommu_pc_get_set_reg_val(_GET_DEVID(event),
@@ -330,18 +328,16 @@ static void perf_iommu_read(struct perf_event *event)
 				IOMMU_PC_COUNTER_REG, &count, false);
 
 	/* IOMMU pc counter register is only 48 bits */
-	count &= 0xFFFFFFFFFFFFULL;
+	count &= GENMASK_ULL(47, 0);
 
-	prev_raw_count =  local64_read(&hwc->prev_count);
-	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
-					count) != prev_raw_count)
+	prev = local64_read(&hwc->prev_count);
+	if (local64_cmpxchg(&hwc->prev_count, prev, count) != prev)
 		return;
 
-	/* Handling 48-bit counter overflowing */
-	delta = (count << COUNTER_SHIFT) - (prev_raw_count << COUNTER_SHIFT);
+	/* Handle 48-bit counter overflow */
+	delta = (count << COUNTER_SHIFT) - (prev << COUNTER_SHIFT);
 	delta >>= COUNTER_SHIFT;
 	local64_add(delta, &event->count);
-
 }
 
 static void perf_iommu_stop(struct perf_event *event, int flags)
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 4/8] iommu/amd: Introduce amd_iommu_get_num_iommus()
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
                   ` (2 preceding siblings ...)
  2017-02-07  8:40 ` [PATCH v9 3/8] perf/amd/iommu: Clean up perf_iommu_read() Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 5/8] perf/amd/iommu: Modify functions to query max banks and counters Suravee Suthikulpanit
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu; +Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit

Introduce amd_iommu_get_num_iommus(), which returns the value of
amd_iommus_present. The function replaces direct accesses to the variable,
which is now declared static.

This function will also be used by the perf AMD IOMMU driver.

Cc: Borislav Petkov <bp@alien8.de>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.h     |  2 ++
 drivers/iommu/amd_iommu.c       |  6 +++---
 drivers/iommu/amd_iommu_init.c  | 11 +++++++++--
 drivers/iommu/amd_iommu_proto.h |  1 +
 drivers/iommu/amd_iommu_types.h |  3 ---
 5 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/x86/events/amd/iommu.h b/arch/x86/events/amd/iommu.h
index 845d173..5c5c932 100644
--- a/arch/x86/events/amd/iommu.h
+++ b/arch/x86/events/amd/iommu.h
@@ -28,6 +28,8 @@
 #define IOMMU_BASE_DEVID			0x0000
 
 /* amd_iommu_init.c external support functions */
+extern int amd_iommu_get_num_iommus(void);
+
 extern bool amd_iommu_pc_supported(void);
 
 extern u8 amd_iommu_pc_get_max_banks(u16 devid);
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 3ef0f42..04085a4 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1230,7 +1230,7 @@ static void __domain_flush_pages(struct protection_domain *domain,
 
 	build_inv_iommu_pages(&cmd, address, size, domain->id, pde);
 
-	for (i = 0; i < amd_iommus_present; ++i) {
+	for (i = 0; i < amd_iommu_get_num_iommus(); ++i) {
 		if (!domain->dev_iommu[i])
 			continue;
 
@@ -1274,7 +1274,7 @@ static void domain_flush_complete(struct protection_domain *domain)
 {
 	int i;
 
-	for (i = 0; i < amd_iommus_present; ++i) {
+	for (i = 0; i < amd_iommu_get_num_iommus(); ++i) {
 		if (domain && !domain->dev_iommu[i])
 			continue;
 
@@ -3343,7 +3343,7 @@ static int __flush_pasid(struct protection_domain *domain, int pasid,
 	 * IOMMU TLB needs to be flushed before Device TLB to
 	 * prevent device TLB refill from IOMMU TLB
 	 */
-	for (i = 0; i < amd_iommus_present; ++i) {
+	for (i = 0; i < amd_iommu_get_num_iommus(); ++i) {
 		if (domain->dev_iommu[i] == 0)
 			continue;
 
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 6799cf9..1b5adb4 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -165,7 +165,9 @@ struct ivmd_header {
 
 /* Array to assign indices to IOMMUs*/
 struct amd_iommu *amd_iommus[MAX_IOMMUS];
-int amd_iommus_present;
+
+/* Number of IOMMUs present in the system */
+static int amd_iommus_present;
 
 /* IOMMUs have a non-present cache? */
 bool amd_iommu_np_cache __read_mostly;
@@ -270,6 +272,11 @@ static inline unsigned long tbl_size(int entry_size)
 	return 1UL << shift;
 }
 
+int amd_iommu_get_num_iommus(void)
+{
+	return amd_iommus_present;
+}
+
 /* Access to l1 and l2 indexed register spaces */
 
 static u32 iommu_read_l1(struct amd_iommu *iommu, u16 l1, u8 address)
@@ -1334,7 +1341,7 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
 
 	/* Add IOMMU to internal data structures */
 	list_add_tail(&iommu->list, &amd_iommu_list);
-	iommu->index             = amd_iommus_present++;
+	iommu->index = amd_iommus_present++;
 
 	if (unlikely(iommu->index >= MAX_IOMMUS)) {
 		WARN(1, "AMD-Vi: System has more IOMMUs than supported by this driver\n");
diff --git a/drivers/iommu/amd_iommu_proto.h b/drivers/iommu/amd_iommu_proto.h
index 7eb60c1..e8f0710 100644
--- a/drivers/iommu/amd_iommu_proto.h
+++ b/drivers/iommu/amd_iommu_proto.h
@@ -21,6 +21,7 @@
 
 #include "amd_iommu_types.h"
 
+extern int amd_iommu_get_num_iommus(void);
 extern int amd_iommu_init_dma_ops(void);
 extern int amd_iommu_init_passthrough(void);
 extern irqreturn_t amd_iommu_int_thread(int irq, void *data);
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index 0d91785..09d7a11 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -606,9 +606,6 @@ struct devid_map {
  */
 extern struct amd_iommu *amd_iommus[MAX_IOMMUS];
 
-/* Number of IOMMUs present in the system */
-extern int amd_iommus_present;
-
 /*
  * Declarations for the global list of all protection domains
  */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 5/8] perf/amd/iommu: Modify functions to query max banks and counters
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
                   ` (3 preceding siblings ...)
  2017-02-07  8:40 ` [PATCH v9 4/8] iommu/amd: Introduce amd_iommu_get_num_iommus() Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-08 19:31   ` Borislav Petkov
  2017-02-07  8:40 ` [PATCH v9 6/8] perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow specifying IOMMU Suravee Suthikulpanit
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu; +Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit

Currently, amd_iommu_pc_get_max_[banks|counters]() use an end-point
device ID to locate an IOMMU and read the reported max banks/counters.
The logic assumes that IOMMU_BASE_DEVID belongs to the first IOMMU and
uses it to acquire a reference to the first IOMMU, which does not work
on certain systems. Instead, modify these functions to take an IOMMU
index, and use it to query the corresponding AMD IOMMU instance.

For now, hardcode the IOMMU index to 0, since the current AMD IOMMU perf
implementation only supports a single IOMMU. A subsequent patch will add
support for multiple IOMMUs and use the proper IOMMU index.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c     | 17 +++++++----------
 arch/x86/events/amd/iommu.h     |  9 ++++-----
 drivers/iommu/amd_iommu_init.c  | 37 +++++++++++++++++++++++--------------
 drivers/iommu/amd_iommu_proto.h |  2 --
 4 files changed, 34 insertions(+), 31 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index 5bef429..2394950 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -239,14 +239,6 @@ static int perf_iommu_event_init(struct perf_event *event)
 		return -EINVAL;
 	}
 
-	/* integrate with iommu base devid (0000), assume one iommu */
-	perf_iommu->max_banks =
-		amd_iommu_pc_get_max_banks(IOMMU_BASE_DEVID);
-	perf_iommu->max_counters =
-		amd_iommu_pc_get_max_counters(IOMMU_BASE_DEVID);
-	if ((perf_iommu->max_banks == 0) || (perf_iommu->max_counters == 0))
-		return -EINVAL;
-
 	/* update the hw_perf_event struct with the iommu config data */
 	hwc->config = config;
 	hwc->extra_reg.config = config1;
@@ -448,6 +440,11 @@ static __init int _init_perf_amd_iommu(
 		return ret;
 	}
 
+	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(0);
+	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(0);
+	if (!perf_iommu->max_banks || !perf_iommu->max_counters)
+		return -EINVAL;
+
 	perf_iommu->null_group = NULL;
 	perf_iommu->pmu.attr_groups = perf_iommu->attr_groups;
 
@@ -457,8 +454,8 @@ static __init int _init_perf_amd_iommu(
 		amd_iommu_pc_exit();
 	} else {
 		pr_info("perf: amd_iommu: Detected. (%d banks, %d counters/bank)\n",
-			amd_iommu_pc_get_max_banks(IOMMU_BASE_DEVID),
-			amd_iommu_pc_get_max_counters(IOMMU_BASE_DEVID));
+			amd_iommu_pc_get_max_banks(0),
+			amd_iommu_pc_get_max_counters(0));
 	}
 
 	return ret;
diff --git a/arch/x86/events/amd/iommu.h b/arch/x86/events/amd/iommu.h
index 5c5c932..b775107 100644
--- a/arch/x86/events/amd/iommu.h
+++ b/arch/x86/events/amd/iommu.h
@@ -24,19 +24,18 @@
 #define PC_MAX_SPEC_BNKS			64
 #define PC_MAX_SPEC_CNTRS			16
 
-/* iommu pc reg masks*/
-#define IOMMU_BASE_DEVID			0x0000
-
 /* amd_iommu_init.c external support functions */
 extern int amd_iommu_get_num_iommus(void);
 
 extern bool amd_iommu_pc_supported(void);
 
-extern u8 amd_iommu_pc_get_max_banks(u16 devid);
+extern u8 amd_iommu_pc_get_max_banks(unsigned int idx);
 
-extern u8 amd_iommu_pc_get_max_counters(u16 devid);
+extern u8 amd_iommu_pc_get_max_counters(unsigned int idx);
 
 extern int amd_iommu_pc_get_set_reg_val(u16 devid, u8 bank, u8 cntr,
 			u8 fxn, u64 *value, bool is_write);
 
+extern struct amd_iommu *get_amd_iommu(int idx);
+
 #endif /*_PERF_EVENT_AMD_IOMMU_H_*/
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 1b5adb4..eb654e7 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -2715,6 +2715,21 @@ bool amd_iommu_v2_supported(void)
 }
 EXPORT_SYMBOL(amd_iommu_v2_supported);
 
+struct amd_iommu *get_amd_iommu(unsigned int idx)
+{
+	unsigned int i = 0;
+	struct amd_iommu *iommu, *ret = NULL;
+
+	for_each_iommu(iommu) {
+		if (i++ == idx) {
+			ret = iommu;
+			break;
+		}
+	}
+	return ret;
+}
+EXPORT_SYMBOL(get_amd_iommu);
+
 /****************************************************************************
  *
  * IOMMU EFR Performance Counter support functionality. This code allows
@@ -2722,17 +2737,14 @@ bool amd_iommu_v2_supported(void)
  *
  ****************************************************************************/
 
-u8 amd_iommu_pc_get_max_banks(u16 devid)
+u8 amd_iommu_pc_get_max_banks(unsigned int idx)
 {
-	struct amd_iommu *iommu;
-	u8 ret = 0;
+	struct amd_iommu *iommu = get_amd_iommu(idx);
 
-	/* locate the iommu governing the devid */
-	iommu = amd_iommu_rlookup_table[devid];
 	if (iommu)
-		ret = iommu->max_banks;
+		return iommu->max_banks;
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(amd_iommu_pc_get_max_banks);
 
@@ -2742,17 +2754,14 @@ bool amd_iommu_pc_supported(void)
 }
 EXPORT_SYMBOL(amd_iommu_pc_supported);
 
-u8 amd_iommu_pc_get_max_counters(u16 devid)
+u8 amd_iommu_pc_get_max_counters(unsigned int idx)
 {
-	struct amd_iommu *iommu;
-	u8 ret = 0;
+	struct amd_iommu *iommu = get_amd_iommu(idx);
 
-	/* locate the iommu governing the devid */
-	iommu = amd_iommu_rlookup_table[devid];
 	if (iommu)
-		ret = iommu->max_counters;
+		return iommu->max_counters;
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(amd_iommu_pc_get_max_counters);
 
diff --git a/drivers/iommu/amd_iommu_proto.h b/drivers/iommu/amd_iommu_proto.h
index e8f0710..cd2257e 100644
--- a/drivers/iommu/amd_iommu_proto.h
+++ b/drivers/iommu/amd_iommu_proto.h
@@ -59,8 +59,6 @@ extern int amd_iommu_domain_set_gcr3(struct iommu_domain *dom, int pasid,
 
 /* IOMMU Performance Counter functions */
 extern bool amd_iommu_pc_supported(void);
-extern u8 amd_iommu_pc_get_max_banks(u16 devid);
-extern u8 amd_iommu_pc_get_max_counters(u16 devid);
 extern int amd_iommu_pc_get_set_reg_val(u16 devid, u8 bank, u8 cntr, u8 fxn,
 				    u64 *value, bool is_write);
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 6/8] perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow specifying IOMMU
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
                   ` (4 preceding siblings ...)
  2017-02-07  8:40 ` [PATCH v9 5/8] perf/amd/iommu: Modify functions to query max banks and counters Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-08 19:32   ` Borislav Petkov
  2017-02-07  8:40 ` [PATCH v9 7/8] perf/amd/iommu: Fix sysfs perf attribute groups Suravee Suthikulpanit
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu
  Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit, Suravee Suthikulpanit

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

The current amd_iommu_pc_get_set_reg_val() cannot support multiple IOMMUs.
So, modify it to allow callers to specify the IOMMU instance. This prepares
the driver for multi-IOMMU support in a subsequent patch.
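
For reference, after this patch the accessors used by the perf driver are
(see the arch/x86/events/amd/iommu.h hunk below):

	int amd_iommu_pc_get_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
				 u8 fxn, u64 *value);
	int amd_iommu_pc_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
				 u8 fxn, u64 *value);

Both wrap the internal iommu_pc_get_set_reg() helper, which keeps the
is_write flag.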

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c     | 38 +++++++++++++---------------
 arch/x86/events/amd/iommu.h     |  9 +++++--
 drivers/iommu/amd_iommu_init.c  | 56 +++++++++++++++++++++++------------------
 drivers/iommu/amd_iommu_proto.h |  5 ----
 4 files changed, 56 insertions(+), 52 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index 2394950..b771914 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -248,49 +248,45 @@ static int perf_iommu_event_init(struct perf_event *event)
 
 static void perf_iommu_enable_event(struct perf_event *ev)
 {
+	struct amd_iommu *iommu = get_amd_iommu(0);
 	u8 csource = _GET_CSOURCE(ev);
 	u16 devid = _GET_DEVID(ev);
+	u8 bank = _GET_BANK(ev);
+	u8 cntr = _GET_CNTR(ev);
 	u64 reg = 0ULL;
 
 	reg = csource;
-	amd_iommu_pc_get_set_reg_val(devid,
-			_GET_BANK(ev), _GET_CNTR(ev) ,
-			 IOMMU_PC_COUNTER_SRC_REG, &reg, true);
+	amd_iommu_pc_set_reg(iommu, bank, cntr, IOMMU_PC_COUNTER_SRC_REG, &reg);
 
 	reg = devid | (_GET_DEVID_MASK(ev) << 32);
 	if (reg)
 		reg |= BIT(31);
-	amd_iommu_pc_get_set_reg_val(devid,
-			_GET_BANK(ev), _GET_CNTR(ev) ,
-			 IOMMU_PC_DEVID_MATCH_REG, &reg, true);
+	amd_iommu_pc_set_reg(iommu, bank, cntr, IOMMU_PC_DEVID_MATCH_REG, &reg);
 
 	reg = _GET_PASID(ev) | (_GET_PASID_MASK(ev) << 32);
 	if (reg)
 		reg |= BIT(31);
-	amd_iommu_pc_get_set_reg_val(devid,
-			_GET_BANK(ev), _GET_CNTR(ev) ,
-			 IOMMU_PC_PASID_MATCH_REG, &reg, true);
+	amd_iommu_pc_set_reg(iommu, bank, cntr, IOMMU_PC_PASID_MATCH_REG, &reg);
 
 	reg = _GET_DOMID(ev) | (_GET_DOMID_MASK(ev) << 32);
 	if (reg)
 		reg |= BIT(31);
-	amd_iommu_pc_get_set_reg_val(devid,
-			_GET_BANK(ev), _GET_CNTR(ev) ,
-			 IOMMU_PC_DOMID_MATCH_REG, &reg, true);
+	amd_iommu_pc_set_reg(iommu, bank, cntr, IOMMU_PC_DOMID_MATCH_REG, &reg);
 }
 
 static void perf_iommu_disable_event(struct perf_event *event)
 {
+	struct amd_iommu *iommu = get_amd_iommu(0);
 	u64 reg = 0ULL;
 
-	amd_iommu_pc_get_set_reg_val(_GET_DEVID(event),
-			_GET_BANK(event), _GET_CNTR(event),
-			IOMMU_PC_COUNTER_SRC_REG, &reg, true);
+	amd_iommu_pc_set_reg(iommu, _GET_BANK(event), _GET_CNTR(event),
+			     IOMMU_PC_COUNTER_SRC_REG, &reg);
 }
 
 static void perf_iommu_start(struct perf_event *event, int flags)
 {
 	struct hw_perf_event *hwc = &event->hw;
+	struct amd_iommu *iommu = get_amd_iommu(0);
 
 	if (WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED)))
 		return;
@@ -300,9 +296,8 @@ static void perf_iommu_start(struct perf_event *event, int flags)
 
 	if (flags & PERF_EF_RELOAD) {
 		u64 prev_raw_count =  local64_read(&hwc->prev_count);
-		amd_iommu_pc_get_set_reg_val(_GET_DEVID(event),
-				_GET_BANK(event), _GET_CNTR(event),
-				IOMMU_PC_COUNTER_REG, &prev_raw_count, true);
+		amd_iommu_pc_set_reg(iommu, _GET_BANK(event), _GET_CNTR(event),
+				     IOMMU_PC_COUNTER_REG, &prev_raw_count);
 	}
 
 	perf_iommu_enable_event(event);
@@ -314,10 +309,11 @@ static void perf_iommu_read(struct perf_event *event)
 {
 	u64 count, prev, delta;
 	struct hw_perf_event *hwc = &event->hw;
+	struct amd_iommu *iommu = get_amd_iommu(0);
 
-	amd_iommu_pc_get_set_reg_val(_GET_DEVID(event),
-				_GET_BANK(event), _GET_CNTR(event),
-				IOMMU_PC_COUNTER_REG, &count, false);
+	if (amd_iommu_pc_get_reg(iommu, _GET_BANK(event), _GET_CNTR(event),
+				 IOMMU_PC_COUNTER_REG, &count))
+		return;
 
 	/* IOMMU pc counter register is only 48 bits */
 	count &= GENMASK_ULL(47, 0);
diff --git a/arch/x86/events/amd/iommu.h b/arch/x86/events/amd/iommu.h
index b775107..62e0702 100644
--- a/arch/x86/events/amd/iommu.h
+++ b/arch/x86/events/amd/iommu.h
@@ -24,6 +24,8 @@
 #define PC_MAX_SPEC_BNKS			64
 #define PC_MAX_SPEC_CNTRS			16
 
+struct amd_iommu;
+
 /* amd_iommu_init.c external support functions */
 extern int amd_iommu_get_num_iommus(void);
 
@@ -33,8 +35,11 @@
 
 extern u8 amd_iommu_pc_get_max_counters(unsigned int idx);
 
-extern int amd_iommu_pc_get_set_reg_val(u16 devid, u8 bank, u8 cntr,
-			u8 fxn, u64 *value, bool is_write);
+extern int amd_iommu_pc_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value);
+
+extern int amd_iommu_pc_get_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value);
 
 extern struct amd_iommu *get_amd_iommu(int idx);
 
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index eb654e7..a50ad65 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -254,10 +254,6 @@ enum iommu_init_state {
 static int __init iommu_go_to_state(enum iommu_init_state state);
 static void init_device_table_dma(void);
 
-static int iommu_pc_get_set_reg_val(struct amd_iommu *iommu,
-				    u8 bank, u8 cntr, u8 fxn,
-				    u64 *value, bool is_write);
-
 static inline void update_last_devid(u16 devid)
 {
 	if (devid > amd_iommu_last_bdf)
@@ -1482,6 +1478,8 @@ static int __init init_iommu_all(struct acpi_table_header *table)
 	return 0;
 }
 
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value, bool is_write);
 
 static void init_iommu_perf_ctr(struct amd_iommu *iommu)
 {
@@ -1493,8 +1491,8 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
 	amd_iommu_pc_present = true;
 
 	/* Check if the performance counters can be written to */
-	if ((0 != iommu_pc_get_set_reg_val(iommu, 0, 0, 0, &val, true)) ||
-	    (0 != iommu_pc_get_set_reg_val(iommu, 0, 0, 0, &val2, false)) ||
+	if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
+	    (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
 	    (val != val2)) {
 		pr_err("AMD-Vi: Unable to write to IOMMU perf counter.\n");
 		amd_iommu_pc_present = false;
@@ -2765,48 +2763,58 @@ u8 amd_iommu_pc_get_max_counters(unsigned int idx)
 }
 EXPORT_SYMBOL(amd_iommu_pc_get_max_counters);
 
-static int iommu_pc_get_set_reg_val(struct amd_iommu *iommu,
-				    u8 bank, u8 cntr, u8 fxn,
-				    u64 *value, bool is_write)
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value, bool is_write)
 {
 	u32 offset;
 	u32 max_offset_lim;
 
+	/* Make sure the IOMMU PC resource is available */
+	if (!amd_iommu_pc_present)
+		return -ENODEV;
+
 	/* Check for valid iommu and pc register indexing */
-	if (WARN_ON((fxn > 0x28) || (fxn & 7)))
+	if (WARN_ON(!iommu || (fxn > 0x28) || (fxn & 7)))
 		return -ENODEV;
 
-	offset = (u32)(((0x40|bank) << 12) | (cntr << 8) | fxn);
+	offset = (u32)(((0x40 | bank) << 12) | (cntr << 8) | fxn);
 
 	/* Limit the offset to the hw defined mmio region aperture */
-	max_offset_lim = (u32)(((0x40|iommu->max_banks) << 12) |
+	max_offset_lim = (u32)(((0x40 | iommu->max_banks) << 12) |
 				(iommu->max_counters << 8) | 0x28);
 	if ((offset < MMIO_CNTR_REG_OFFSET) ||
 	    (offset > max_offset_lim))
 		return -EINVAL;
 
 	if (is_write) {
-		writel((u32)*value, iommu->mmio_base + offset);
-		writel((*value >> 32), iommu->mmio_base + offset + 4);
+		u64 val = *value & GENMASK_ULL(47, 0);
+
+		writel((u32)val, iommu->mmio_base + offset);
+		writel((val >> 32), iommu->mmio_base + offset + 4);
 	} else {
 		*value = readl(iommu->mmio_base + offset + 4);
 		*value <<= 32;
-		*value = readl(iommu->mmio_base + offset);
+		*value |= readl(iommu->mmio_base + offset);
+		*value &= GENMASK_ULL(47, 0);
 	}
 
 	return 0;
 }
-EXPORT_SYMBOL(amd_iommu_pc_get_set_reg_val);
 
-int amd_iommu_pc_get_set_reg_val(u16 devid, u8 bank, u8 cntr, u8 fxn,
-				    u64 *value, bool is_write)
+int amd_iommu_pc_get_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, u8 fxn, u64 *value)
 {
-	struct amd_iommu *iommu = amd_iommu_rlookup_table[devid];
+	if (!iommu)
+		return -EINVAL;
 
-	/* Make sure the IOMMU PC resource is available */
-	if (!amd_iommu_pc_present || iommu == NULL)
-		return -ENODEV;
+	return iommu_pc_get_set_reg(iommu, bank, cntr, fxn, value, false);
+}
+EXPORT_SYMBOL(amd_iommu_pc_get_reg);
+
+int amd_iommu_pc_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, u8 fxn, u64 *value)
+{
+	if (!iommu)
+		return -EINVAL;
 
-	return iommu_pc_get_set_reg_val(iommu, bank, cntr, fxn,
-					value, is_write);
+	return iommu_pc_get_set_reg(iommu, bank, cntr, fxn, value, true);
 }
+EXPORT_SYMBOL(amd_iommu_pc_set_reg);
diff --git a/drivers/iommu/amd_iommu_proto.h b/drivers/iommu/amd_iommu_proto.h
index cd2257e..466260f 100644
--- a/drivers/iommu/amd_iommu_proto.h
+++ b/drivers/iommu/amd_iommu_proto.h
@@ -57,11 +57,6 @@ extern int amd_iommu_domain_set_gcr3(struct iommu_domain *dom, int pasid,
 extern int amd_iommu_domain_clear_gcr3(struct iommu_domain *dom, int pasid);
 extern struct iommu_domain *amd_iommu_get_v2_domain(struct pci_dev *pdev);
 
-/* IOMMU Performance Counter functions */
-extern bool amd_iommu_pc_supported(void);
-extern int amd_iommu_pc_get_set_reg_val(u16 devid, u8 bank, u8 cntr, u8 fxn,
-				    u64 *value, bool is_write);
-
 #ifdef CONFIG_IRQ_REMAP
 extern int amd_iommu_create_irq_domain(struct amd_iommu *iommu);
 #else
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 7/8] perf/amd/iommu: Fix sysfs perf attribute groups
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
                   ` (5 preceding siblings ...)
  2017-02-07  8:40 ` [PATCH v9 6/8] perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow specifying IOMMU Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-07  8:40 ` [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs Suravee Suthikulpanit
  2017-02-15 10:13 ` [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Borislav Petkov
  8 siblings, 0 replies; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu
  Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit, Suravee Suthikulpanit

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Introduce a static amd_iommu_attr_groups array to simplify the
sysfs attribute initialization code.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c | 81 ++++++++++++++++++---------------------------
 1 file changed, 32 insertions(+), 49 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index b771914..7bbf405 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -43,14 +43,8 @@ struct perf_amd_iommu {
 	u8 max_counters;
 	u64 cntr_assign_mask;
 	raw_spinlock_t lock;
-	const struct attribute_group *attr_groups[4];
 };
 
-#define format_group	attr_groups[0]
-#define cpumask_group	attr_groups[1]
-#define events_group	attr_groups[2]
-#define null_group	attr_groups[3]
-
 /*---------------------------------------------
  * sysfs format attributes
  *---------------------------------------------*/
@@ -81,6 +75,10 @@ struct perf_amd_iommu {
 /*---------------------------------------------
  * sysfs events attributes
  *---------------------------------------------*/
+static struct attribute_group amd_iommu_events_group = {
+	.name = "events",
+};
+
 struct amd_iommu_event_desc {
 	struct kobj_attribute attr;
 	const char *event;
@@ -384,76 +382,60 @@ static void perf_iommu_del(struct perf_event *event, int flags)
 	perf_event_update_userpage(event);
 }
 
-static __init int _init_events_attrs(struct perf_amd_iommu *perf_iommu)
+static __init int _init_events_attrs(void)
 {
-	struct attribute **attrs;
-	struct attribute_group *attr_group;
 	int i = 0, j;
+	struct attribute **attrs;
 
 	while (amd_iommu_v2_event_descs[i].attr.attr.name)
 		i++;
 
-	attr_group = kzalloc(sizeof(struct attribute *)
-		* (i + 1) + sizeof(*attr_group), GFP_KERNEL);
-	if (!attr_group)
+	attrs = kzalloc(sizeof(struct attribute **) * (i + 1), GFP_KERNEL);
+	if (!attrs)
 		return -ENOMEM;
 
-	attrs = (struct attribute **)(attr_group + 1);
 	for (j = 0; j < i; j++)
 		attrs[j] = &amd_iommu_v2_event_descs[j].attr.attr;
 
-	attr_group->name = "events";
-	attr_group->attrs = attrs;
-	perf_iommu->events_group = attr_group;
-
+	amd_iommu_events_group.attrs = attrs;
 	return 0;
 }
 
 static __init void amd_iommu_pc_exit(void)
 {
-	if (__perf_iommu.events_group != NULL) {
-		kfree(__perf_iommu.events_group);
-		__perf_iommu.events_group = NULL;
-	}
+	kfree(amd_iommu_events_group.attrs);
 }
 
-static __init int _init_perf_amd_iommu(
-	struct perf_amd_iommu *perf_iommu, char *name)
+const struct attribute_group *amd_iommu_attr_groups[] = {
+	&amd_iommu_format_group,
+	&amd_iommu_cpumask_group,
+	&amd_iommu_events_group,
+	NULL,
+};
+
+static __init int
+_init_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, char *name)
 {
 	int ret;
 
 	raw_spin_lock_init(&perf_iommu->lock);
 
-	perf_iommu->format_group = &amd_iommu_format_group;
-
 	/* Init cpumask attributes to only core 0 */
 	cpumask_set_cpu(0, &iommu_cpumask);
-	perf_iommu->cpumask_group = &amd_iommu_cpumask_group;
-
-	ret = _init_events_attrs(perf_iommu);
-	if (ret) {
-		pr_err("Error initializing AMD IOMMU perf events.\n");
-		return ret;
-	}
 
 	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(0);
 	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(0);
 	if (!perf_iommu->max_banks || !perf_iommu->max_counters)
 		return -EINVAL;
 
-	perf_iommu->null_group = NULL;
-	perf_iommu->pmu.attr_groups = perf_iommu->attr_groups;
-
+	perf_iommu->pmu.attr_groups = amd_iommu_attr_groups;
 	ret = perf_pmu_register(&perf_iommu->pmu, name, -1);
-	if (ret) {
+	if (ret)
 		pr_err("Error initializing AMD IOMMU perf counters.\n");
-		amd_iommu_pc_exit();
-	} else {
+	else
 		pr_info("perf: amd_iommu: Detected. (%d banks, %d counters/bank)\n",
 			amd_iommu_pc_get_max_banks(0),
 			amd_iommu_pc_get_max_counters(0));
-	}
-
 	return ret;
 }
 
@@ -467,24 +449,25 @@ static __init int _init_perf_amd_iommu(
 		.stop		= perf_iommu_stop,
 		.read		= perf_iommu_read,
 	},
-	.max_banks		= 0x00,
-	.max_counters		= 0x00,
-	.cntr_assign_mask	= 0ULL,
-	.format_group		= NULL,
-	.cpumask_group		= NULL,
-	.events_group		= NULL,
-	.null_group		= NULL,
 };
 
 static __init int amd_iommu_pc_init(void)
 {
+	int ret;
+
 	/* Make sure the IOMMU PC resource is available */
 	if (!amd_iommu_pc_supported())
 		return -ENODEV;
 
-	_init_perf_amd_iommu(&__perf_iommu, "amd_iommu");
+	ret = _init_events_attrs();
+	if (ret)
+		return ret;
 
-	return 0;
+	ret = _init_perf_amd_iommu(&__perf_iommu, "amd_iommu");
+	if (ret)
+		amd_iommu_pc_exit();
+
+	return ret;
 }
 
 device_initcall(amd_iommu_pc_init);
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
                   ` (6 preceding siblings ...)
  2017-02-07  8:40 ` [PATCH v9 7/8] perf/amd/iommu: Fix sysfs perf attribute groups Suravee Suthikulpanit
@ 2017-02-07  8:40 ` Suravee Suthikulpanit
  2017-02-08 19:33   ` Borislav Petkov
  2017-02-15 10:13 ` [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Borislav Petkov
  8 siblings, 1 reply; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-07  8:40 UTC (permalink / raw)
  To: linux-kernel, iommu
  Cc: bp, peterz, joro, mingo, Suravee Suthikulpanit, Suravee Suthikulpanit

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Add multi-IOMMU support for perf by exposing an AMD IOMMU PMU
for each IOMMU found in the system via:

  /sys/bus/event_source/devices/amd_iommu_x

where x is the IOMMU index. This allows users to specify
different events to be programmed onto performance counters
of each IOMMU.
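
For example, counting the same event on two IOMMUs separately could look
like this (illustrative only; csource=0x08 is a placeholder value for the
csource format attribute defined by this driver):

	perf stat -e 'amd_iommu_0/csource=0x08/' \
		  -e 'amd_iommu_1/csource=0x08/' -a -- sleep 1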

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
 arch/x86/events/amd/iommu.c | 108 ++++++++++++++++++++++++++------------------
 1 file changed, 64 insertions(+), 44 deletions(-)

diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index 7bbf405..d9313d2 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -35,16 +35,21 @@
 #define _GET_PASID_MASK(ev) ((ev->hw.extra_reg.config >> 16) & 0xFFFFULL)
 #define _GET_DOMID_MASK(ev) ((ev->hw.extra_reg.config >> 32) & 0xFFFFULL)
 
-static struct perf_amd_iommu __perf_iommu;
+#define PERF_AMD_IOMMU_NAME_SIZE	16
 
 struct perf_amd_iommu {
+	struct list_head list;
 	struct pmu pmu;
+	struct amd_iommu *iommu;
+	char name[PERF_AMD_IOMMU_NAME_SIZE];
 	u8 max_banks;
 	u8 max_counters;
 	u64 cntr_assign_mask;
 	raw_spinlock_t lock;
 };
 
+static LIST_HEAD(perf_amd_iommu_list);
+
 /*---------------------------------------------
  * sysfs format attributes
  *---------------------------------------------*/
@@ -202,8 +207,6 @@ static int clear_avail_iommu_bnk_cntr(struct perf_amd_iommu *perf_iommu,
 static int perf_iommu_event_init(struct perf_event *event)
 {
 	struct hw_perf_event *hwc = &event->hw;
-	struct perf_amd_iommu *perf_iommu;
-	u64 config, config1;
 
 	/* test the event attr type check for PMU enumeration */
 	if (event->attr.type != event->pmu->type)
@@ -225,28 +228,21 @@ static int perf_iommu_event_init(struct perf_event *event)
 	if (event->cpu < 0)
 		return -EINVAL;
 
-	perf_iommu = &__perf_iommu;
-
-	if (event->pmu != &perf_iommu->pmu)
-		return -ENOENT;
-
-	if (perf_iommu) {
-		config = event->attr.config;
-		config1 = event->attr.config1;
-	} else {
-		return -EINVAL;
-	}
-
 	/* update the hw_perf_event struct with the iommu config data */
-	hwc->config = config;
-	hwc->extra_reg.config = config1;
+	hwc->config           = event->attr.config;
+	hwc->extra_reg.config = event->attr.config1;
 
 	return 0;
 }
 
+static inline struct amd_iommu *perf_event_2_iommu(struct perf_event *ev)
+{
+	return (container_of(ev->pmu, struct perf_amd_iommu, pmu))->iommu;
+}
+
 static void perf_iommu_enable_event(struct perf_event *ev)
 {
-	struct amd_iommu *iommu = get_amd_iommu(0);
+	struct amd_iommu *iommu = perf_event_2_iommu(ev);
 	u8 csource = _GET_CSOURCE(ev);
 	u16 devid = _GET_DEVID(ev);
 	u8 bank = _GET_BANK(ev);
@@ -274,7 +270,7 @@ static void perf_iommu_enable_event(struct perf_event *ev)
 
 static void perf_iommu_disable_event(struct perf_event *event)
 {
-	struct amd_iommu *iommu = get_amd_iommu(0);
+	struct amd_iommu *iommu = perf_event_2_iommu(event);
 	u64 reg = 0ULL;
 
 	amd_iommu_pc_set_reg(iommu, _GET_BANK(event), _GET_CNTR(event),
@@ -284,7 +280,7 @@ static void perf_iommu_disable_event(struct perf_event *event)
 static void perf_iommu_start(struct perf_event *event, int flags)
 {
 	struct hw_perf_event *hwc = &event->hw;
-	struct amd_iommu *iommu = get_amd_iommu(0);
+	struct amd_iommu *iommu = perf_event_2_iommu(event);
 
 	if (WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED)))
 		return;
@@ -307,7 +303,7 @@ static void perf_iommu_read(struct perf_event *event)
 {
 	u64 count, prev, delta;
 	struct hw_perf_event *hwc = &event->hw;
-	struct amd_iommu *iommu = get_amd_iommu(0);
+	struct amd_iommu *iommu = perf_event_2_iommu(event);
 
 	if (amd_iommu_pc_get_reg(iommu, _GET_BANK(event), _GET_CNTR(event),
 				 IOMMU_PC_COUNTER_REG, &count))
@@ -403,6 +399,13 @@ static __init int _init_events_attrs(void)
 
 static __init void amd_iommu_pc_exit(void)
 {
+	struct perf_amd_iommu *pi, *next;
+
+	list_for_each_entry_safe(pi, next, &perf_amd_iommu_list, list) {
+		list_del(&pi->list);
+		kfree(pi);
+	}
+
 	kfree(amd_iommu_events_group.attrs);
 }
 
@@ -414,46 +417,46 @@ static __init void amd_iommu_pc_exit(void)
 };
 
 static __init int
-_init_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, char *name)
+init_one_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, unsigned int idx)
 {
 	int ret;
 
 	raw_spin_lock_init(&perf_iommu->lock);
 
-	/* Init cpumask attributes to only core 0 */
-	cpumask_set_cpu(0, &iommu_cpumask);
-
-	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(0);
-	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(0);
+	perf_iommu->iommu	 = get_amd_iommu(idx);
+	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(idx);
+	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(idx);
 	if (!perf_iommu->max_banks || !perf_iommu->max_counters)
 		return -EINVAL;
 
+	snprintf(perf_iommu->name, PERF_AMD_IOMMU_NAME_SIZE, "amd_iommu_%u", idx);
+
+	perf_iommu->pmu.event_init  = perf_iommu_event_init;
+	perf_iommu->pmu.add         = perf_iommu_add;
+	perf_iommu->pmu.del         = perf_iommu_del;
+	perf_iommu->pmu.start       = perf_iommu_start;
+	perf_iommu->pmu.stop        = perf_iommu_stop;
+	perf_iommu->pmu.read        = perf_iommu_read;
+	perf_iommu->pmu.task_ctx_nr = perf_invalid_context;
 	perf_iommu->pmu.attr_groups = amd_iommu_attr_groups;
-	ret = perf_pmu_register(&perf_iommu->pmu, name, -1);
+
+	ret = perf_pmu_register(&perf_iommu->pmu, perf_iommu->name, -1);
 	if (ret)
 		pr_err("Error initializing AMD IOMMU perf counters.\n");
 	else
-		pr_info("perf: amd_iommu: Detected. (%d banks, %d counters/bank)\n",
-			amd_iommu_pc_get_max_banks(0),
-			amd_iommu_pc_get_max_counters(0));
+	pr_info("Detected AMD IOMMU #%d (%d banks, %d counters/bank)\n",
+		idx, amd_iommu_pc_get_max_banks(idx),
+		amd_iommu_pc_get_max_counters(idx));
 	return ret;
 }
 
-static struct perf_amd_iommu __perf_iommu = {
-	.pmu = {
-		.task_ctx_nr    = perf_invalid_context,
-		.event_init	= perf_iommu_event_init,
-		.add		= perf_iommu_add,
-		.del		= perf_iommu_del,
-		.start		= perf_iommu_start,
-		.stop		= perf_iommu_stop,
-		.read		= perf_iommu_read,
-	},
-};
-
 static __init int amd_iommu_pc_init(void)
 {
 	int ret;
+	unsigned int i;
+
+	/* Init cpumask attributes to only core 0 */
+	cpumask_set_cpu(0, &iommu_cpumask);
 
 	/* Make sure the IOMMU PC resource is available */
 	if (!amd_iommu_pc_supported())
@@ -463,7 +466,24 @@ static __init int amd_iommu_pc_init(void)
 	if (ret)
 		return ret;
 
-	ret = _init_perf_amd_iommu(&__perf_iommu, "amd_iommu");
+	for (i = 0 ; i < amd_iommu_get_num_iommus(); i++) {
+		struct perf_amd_iommu *pi;
+
+		pi = kzalloc(sizeof(struct perf_amd_iommu), GFP_KERNEL);
+		if (!pi) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		ret = init_one_perf_amd_iommu(pi, i);
+		if (ret) {
+			kfree(pi);
+			break;
+		}
+
+		list_add_tail(&pi->list, &perf_amd_iommu_list);
+	}
+
 	if (ret)
 		amd_iommu_pc_exit();
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 5/8] perf/amd/iommu: Modify functions to query max banks and counters
  2017-02-07  8:40 ` [PATCH v9 5/8] perf/amd/iommu: Modify functions to query max banks and counters Suravee Suthikulpanit
@ 2017-02-08 19:31   ` Borislav Petkov
  0 siblings, 0 replies; 17+ messages in thread
From: Borislav Petkov @ 2017-02-08 19:31 UTC (permalink / raw)
  To: Suravee Suthikulpanit; +Cc: linux-kernel, iommu, peterz, joro, mingo

On Tue, Feb 07, 2017 at 02:40:33AM -0600, Suravee Suthikulpanit wrote:
> Currently, amd_iommu_pc_get_max_[banks|counters]() use end-point
> device ID to locate an IOMMU and check the reported max banks/counters.
> The logic assumes that the IOMMU_BASE_DEVID belongs to the first IOMMU,
> and uses it to acquire a reference to the first IOMMU, which does not work
> on certain systems. Instead, modify the function to take an IOMMU index,
> and use it to query the corresponding AMD IOMMU instance.
> 
> Currently, hardcodes the IOMMU index to 0 since the current AMD IOMMU
> perf implementation only supports a single IOMMU. A subsequent
> patch will add support for multiple IOMMUs, and will use a proper IOMMU index.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Joerg Roedel <joro@8bytes.org>
> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
> ---

...

> diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
> index 1b5adb4..eb654e7 100644
> --- a/drivers/iommu/amd_iommu_init.c
> +++ b/drivers/iommu/amd_iommu_init.c
> @@ -2715,6 +2715,21 @@ bool amd_iommu_v2_supported(void)
>  }
>  EXPORT_SYMBOL(amd_iommu_v2_supported);
>  
> +struct amd_iommu *get_amd_iommu(unsigned int idx)
> +{
> +	unsigned int i = 0;
> +	struct amd_iommu *iommu, *ret = NULL;
> +
> +	for_each_iommu(iommu) {
> +		if (i++ == idx) {
> +			ret = iommu;
> +			break;

			return iommu;

no need for the assignment and the break.
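
I.e., something like this (completely untested):

	struct amd_iommu *get_amd_iommu(unsigned int idx)
	{
		unsigned int i = 0;
		struct amd_iommu *iommu;

		/* Walk the global IOMMU list until the idx-th entry. */
		for_each_iommu(iommu)
			if (i++ == idx)
				return iommu;

		return NULL;
	}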

> +		}
> +	}
> +	return ret;
> +}
> +EXPORT_SYMBOL(get_amd_iommu);

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 6/8] perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow specifying IOMMU
  2017-02-07  8:40 ` [PATCH v9 6/8] perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow specifying IOMMU Suravee Suthikulpanit
@ 2017-02-08 19:32   ` Borislav Petkov
  0 siblings, 0 replies; 17+ messages in thread
From: Borislav Petkov @ 2017-02-08 19:32 UTC (permalink / raw)
  To: Suravee Suthikulpanit; +Cc: linux-kernel, iommu, peterz, joro, mingo

On Tue, Feb 07, 2017 at 02:40:34AM -0600, Suravee Suthikulpanit wrote:
> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> 
> The current amd_iommu_pc_get_set_reg_val() cannot support multiple IOMMUs.
> So, modify it to allow callers to specify IOMMU. This prepares the driver
> for supporting multi-IOMMU in subsequent patch.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Joerg Roedel <joro@8bytes.org>
> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
> ---

...

> @@ -2765,48 +2763,58 @@ u8 amd_iommu_pc_get_max_counters(unsigned int idx)
>  }
>  EXPORT_SYMBOL(amd_iommu_pc_get_max_counters);
>  
> -static int iommu_pc_get_set_reg_val(struct amd_iommu *iommu,
> -				    u8 bank, u8 cntr, u8 fxn,
> -				    u64 *value, bool is_write)
> +static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
> +				u8 fxn, u64 *value, bool is_write)
>  {

...

>  	if (is_write) {
> -		writel((u32)*value, iommu->mmio_base + offset);
> -		writel((*value >> 32), iommu->mmio_base + offset + 4);
> +		u64 val = *value & GENMASK_ULL(47, 0);
> +
> +		writel((u32)val, iommu->mmio_base + offset);
> +		writel((val >> 32), iommu->mmio_base + offset + 4);
>  	} else {
>  		*value = readl(iommu->mmio_base + offset + 4);
>  		*value <<= 32;
> -		*value = readl(iommu->mmio_base + offset);
> +		*value |= readl(iommu->mmio_base + offset);
> +		*value &= GENMASK_ULL(47, 0);
>  	}
>  

This is an unrelated cleanup - please put it in a separate patch.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs
  2017-02-07  8:40 ` [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs Suravee Suthikulpanit
@ 2017-02-08 19:33   ` Borislav Petkov
  2017-02-15  7:13     ` Suravee Suthikulpanit
  0 siblings, 1 reply; 17+ messages in thread
From: Borislav Petkov @ 2017-02-08 19:33 UTC (permalink / raw)
  To: Suravee Suthikulpanit; +Cc: linux-kernel, iommu, peterz, joro, mingo

On Tue, Feb 07, 2017 at 02:40:36AM -0600, Suravee Suthikulpanit wrote:
> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> 
> Add multi-IOMMU support for perf by exposing an AMD IOMMU PMU
> for each IOMMU found in the system via:
> 
>   /bus/event_source/devices/amd_iommu_x
> 
> where x is the IOMMU index. This allows users to specify
> different events to be programmed onto performance counters
> of each IOMMU.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Borislav Petkov <bp@alien8.de>
> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
> ---

...

> @@ -414,46 +417,46 @@ static __init void amd_iommu_pc_exit(void)
>  };
>  
>  static __init int
> -_init_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, char *name)
> +init_one_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, unsigned int idx)

It's a static function - just call it "init_one_iommu".

>  {
>  	int ret;
>  
>  	raw_spin_lock_init(&perf_iommu->lock);
>  
> -	/* Init cpumask attributes to only core 0 */
> -	cpumask_set_cpu(0, &iommu_cpumask);
> -
> -	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(0);
> -	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(0);
> +	perf_iommu->iommu	 = get_amd_iommu(idx);

This function can return NULL - you need to handle that here otherwise
it will blow up later.

> +	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(idx);
> +	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(idx);
>  	if (!perf_iommu->max_banks || !perf_iommu->max_counters)
>  		return -EINVAL;
>  
> +	snprintf(perf_iommu->name, PERF_AMD_IOMMU_NAME_SIZE, "amd_iommu_%u", idx);
> +
> +	perf_iommu->pmu.event_init  = perf_iommu_event_init;
> +	perf_iommu->pmu.add         = perf_iommu_add;
> +	perf_iommu->pmu.del         = perf_iommu_del;
> +	perf_iommu->pmu.start       = perf_iommu_start;
> +	perf_iommu->pmu.stop        = perf_iommu_stop;
> +	perf_iommu->pmu.read        = perf_iommu_read;
> +	perf_iommu->pmu.task_ctx_nr = perf_invalid_context;
>  	perf_iommu->pmu.attr_groups = amd_iommu_attr_groups;

So you can define a static struct pmu in the driver and do struct
assignment directly instead of writing them one-by-one.

	perf_iommu->pmu = pmu;
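
I.e., define the common bits once - a sketch (untested):

	static const struct pmu iommu_pmu = {
		.event_init	= perf_iommu_event_init,
		.add		= perf_iommu_add,
		.del		= perf_iommu_del,
		.start		= perf_iommu_start,
		.stop		= perf_iommu_stop,
		.read		= perf_iommu_read,
		.task_ctx_nr	= perf_invalid_context,
		.attr_groups	= amd_iommu_attr_groups,
	};

and then do the struct assignment above in init_one_perf_amd_iommu(), so
only the per-IOMMU bits remain to be set up there. The per-IOMMU name is
still passed to perf_pmu_register(), so it does not need to live in the
template.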

> -	ret = perf_pmu_register(&perf_iommu->pmu, name, -1);
> +
> +	ret = perf_pmu_register(&perf_iommu->pmu, perf_iommu->name, -1);
>  	if (ret)
>  		pr_err("Error initializing AMD IOMMU perf counters.\n");
>  	else
> -		pr_info("perf: amd_iommu: Detected. (%d banks, %d counters/bank)\n",
> -			amd_iommu_pc_get_max_banks(0),
> -			amd_iommu_pc_get_max_counters(0));
> +	pr_info("Detected AMD IOMMU #%d (%d banks, %d counters/bank)\n",
> +		idx, amd_iommu_pc_get_max_banks(idx),

perf_iommu->max_banks

> +		amd_iommu_pc_get_max_counters(idx));

perf_iommu->max_counters

You already read them above.

>  	return ret;
>  }
>  
> -static struct perf_amd_iommu __perf_iommu = {
> -	.pmu = {
> -		.task_ctx_nr    = perf_invalid_context,
> -		.event_init	= perf_iommu_event_init,
> -		.add		= perf_iommu_add,
> -		.del		= perf_iommu_del,
> -		.start		= perf_iommu_start,
> -		.stop		= perf_iommu_stop,
> -		.read		= perf_iommu_read,
> -	},
> -};
> -
>  static __init int amd_iommu_pc_init(void)
>  {
>  	int ret;
> +	unsigned int i;
> +
> +	/* Init cpumask attributes to only core 0 */
> +	cpumask_set_cpu(0, &iommu_cpumask);

This belongs at the end of the function, once everything has been
initialized successfully.

>  	/* Make sure the IOMMU PC resource is available */
>  	if (!amd_iommu_pc_supported())


> @@ -463,7 +466,24 @@ static __init int amd_iommu_pc_init(void)
>  	if (ret)
>  		return ret;
>  
> -	ret = _init_perf_amd_iommu(&__perf_iommu, "amd_iommu");
> +	for (i = 0 ; i < amd_iommu_get_num_iommus(); i++) {
> +		struct perf_amd_iommu *pi;
> +
> +		pi = kzalloc(sizeof(struct perf_amd_iommu), GFP_KERNEL);
> +		if (!pi) {
> +			ret = -ENOMEM;
> +			break;
> +		}
> +
> +		ret = init_one_perf_amd_iommu(pi, i);
> +		if (ret) {
> +			kfree(pi);
> +			break;

What happens with the iommus that have been initialized successfully
before this one fails? They remain in use?

I think we need at least a warning here, something like:

	pr_warning("Error initializing IOMMU %d ...")

so that we at least know why some are missing.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs
  2017-02-08 19:33   ` Borislav Petkov
@ 2017-02-15  7:13     ` Suravee Suthikulpanit
  2017-02-15 10:28       ` Borislav Petkov
  0 siblings, 1 reply; 17+ messages in thread
From: Suravee Suthikulpanit @ 2017-02-15  7:13 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: linux-kernel, iommu, peterz, joro, mingo

Boris,

On 2/9/17 02:33, Borislav Petkov wrote:
> On Tue, Feb 07, 2017 at 02:40:36AM -0600, Suravee Suthikulpanit wrote:
>> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>>  [......]
>> +	perf_iommu->max_banks    = amd_iommu_pc_get_max_banks(idx);
>> +	perf_iommu->max_counters = amd_iommu_pc_get_max_counters(idx);
>>  	if (!perf_iommu->max_banks || !perf_iommu->max_counters)
>>  		return -EINVAL;
>>
>> +	snprintf(perf_iommu->name, PERF_AMD_IOMMU_NAME_SIZE, "amd_iommu_%u", idx);
>> +
>> +	perf_iommu->pmu.event_init  = perf_iommu_event_init;
>> +	perf_iommu->pmu.add         = perf_iommu_add;
>> +	perf_iommu->pmu.del         = perf_iommu_del;
>> +	perf_iommu->pmu.start       = perf_iommu_start;
>> +	perf_iommu->pmu.stop        = perf_iommu_stop;
>> +	perf_iommu->pmu.read        = perf_iommu_read;
>> +	perf_iommu->pmu.task_ctx_nr = perf_invalid_context;
>>  	perf_iommu->pmu.attr_groups = amd_iommu_attr_groups;
>
> So you can define a static struct pmu in the driver and do struct
> assignment directly instead of writing them one-by-one.

I believe this is the same suggestion you made in V8.
Here is our previous discussion from V8:

On 2/7/17 08:42, Suravee Suthikulpanit wrote:
 >
 > On 1/23/17 02:55, Borislav Petkov wrote:
 >> Because otherwise you're carrying a struct pmu in each struct
 >> perf_amd_iommu which has identical contents.
 >
 > Actually, only the callbacks above will be identical on each pmu, but
 > there are other parts of the structure which are different
 > (e.g. pmu->name, pmu->type, etc.) Also, we need one pmu instance per
 > IOMMU since each pmu reference will get assigned to perf_event, and
 > also used to reference back to struct perf_amd_iommu. Note that each
 > pmu can also have different events.

So, I still don't think we can have just one static PMU structure and
assign it to each IOMMU. Lemme know if I am missing your point here.
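
For reference, the "reference back" part is just a container_of() on the
embedded struct pmu, roughly (helper name is only illustrative):

	static inline struct perf_amd_iommu *get_perf_amd_iommu(struct perf_event *event)
	{
		return container_of(event->pmu, struct perf_amd_iommu, pmu);
	}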


>> [...]
 >> @@ -463,7 +466,24 @@ static __init int amd_iommu_pc_init(void)
>>  	if (ret)
>>  		return ret;
>>
>> -	ret = _init_perf_amd_iommu(&__perf_iommu, "amd_iommu");
>> +	for (i = 0 ; i < amd_iommu_get_num_iommus(); i++) {
>> +		struct perf_amd_iommu *pi;
>> +
>> +		pi = kzalloc(sizeof(struct perf_amd_iommu), GFP_KERNEL);
>> +		if (!pi) {
>> +			ret = -ENOMEM;
>> +			break;
>> +		}
>> +
>> +		ret = init_one_perf_amd_iommu(pi, i);
>> +		if (ret) {
>> +			kfree(pi);
>> +			break;
>
> What happens with the iommus that have been initialized successfully
> before this one fails? They remain in use?
>
> I think we need at least a warning saying here:
>
> 	pr_warning("Error initializing IOMMU %d ...")
>
> so that we at least know why some are missing.
>

The initialized ones should be functioning independently (as separate PMUs).
So, it should be alright to just leave them. I'll add the warning message
as you suggested.

Thanks,
Suravee

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support
  2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
                   ` (7 preceding siblings ...)
  2017-02-07  8:40 ` [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs Suravee Suthikulpanit
@ 2017-02-15 10:13 ` Borislav Petkov
  2017-02-15 10:44   ` Jiri Olsa
  8 siblings, 1 reply; 17+ messages in thread
From: Borislav Petkov @ 2017-02-15 10:13 UTC (permalink / raw)
  To: Suravee Suthikulpanit, Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar
  Cc: linux-kernel, iommu, peterz, joro, mingo

On Tue, Feb 07, 2017 at 02:40:28AM -0600, Suravee Suthikulpanit wrote:
> From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> 
> This patch series modifies the existing IOMMU and Perf drivers to support
> systems with multiple IOMMUs by allocating an amd_iommu PMU per IOMMU instance.
> This allows users to specify performance events and filters separately for each
> IOMMU.
> 
> This has been tested on the new family17h-based server w/ multiple IOMMUs.

Ok, so far so good.

There's just one thing:

$ perf stat -e amd_iommu_X/Y

says <not supported>, and it only counts events when doing system-wide
tracing with -a.
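
For example (mem_trans_total is only a placeholder event name):

  $ perf stat -a -e amd_iommu_0/mem_trans_total/ -- sleep 1

counts fine, while the same command without -a shows <not supported>.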

So, lemme ask the perf tool people: can we guys make the -a thing the default
when we detect that we're running only uncore events, which all need
-a anyway?

Otherwise people who, like me, don't know that, would try to do
measurements and see the <not supported> and wonder...

Lemme add some more people to CC.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs
  2017-02-15  7:13     ` Suravee Suthikulpanit
@ 2017-02-15 10:28       ` Borislav Petkov
  0 siblings, 0 replies; 17+ messages in thread
From: Borislav Petkov @ 2017-02-15 10:28 UTC (permalink / raw)
  To: Suravee Suthikulpanit; +Cc: linux-kernel, iommu, peterz, joro, mingo

On Wed, Feb 15, 2017 at 02:13:29PM +0700, Suravee Suthikulpanit wrote:
> > So you can define a static struct pmu in the driver and do struct
> > assignment directly instead of writing them one-by-one.
> 
> I believe this is the same suggestion you have made in V8.

Here's what I mean:

---
diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
index d9313d20a715..ac11d0755707 100644
--- a/arch/x86/events/amd/iommu.c
+++ b/arch/x86/events/amd/iommu.c
@@ -416,6 +416,17 @@ const struct attribute_group *amd_iommu_attr_groups[] = {
 	NULL,
 };
 
+static struct pmu iommu_pmu = {
+	.event_init	= perf_iommu_event_init,
+	.add		= perf_iommu_add,
+	.del		= perf_iommu_del,
+	.start		= perf_iommu_start,
+	.stop		= perf_iommu_stop,
+	.read		= perf_iommu_read,
+	.task_ctx_nr	= perf_invalid_context,
+	.attr_groups	= amd_iommu_attr_groups,
+};
+
 static __init int
 init_one_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, unsigned int idx)
 {
@@ -431,14 +442,7 @@ init_one_perf_amd_iommu(struct perf_amd_iommu *perf_iommu, unsigned int idx)
 
 	snprintf(perf_iommu->name, PERF_AMD_IOMMU_NAME_SIZE, "amd_iommu_%u", idx);
 
-	perf_iommu->pmu.event_init  = perf_iommu_event_init;
-	perf_iommu->pmu.add         = perf_iommu_add;
-	perf_iommu->pmu.del         = perf_iommu_del;
-	perf_iommu->pmu.start       = perf_iommu_start;
-	perf_iommu->pmu.stop        = perf_iommu_stop;
-	perf_iommu->pmu.read        = perf_iommu_read;
-	perf_iommu->pmu.task_ctx_nr = perf_invalid_context;
-	perf_iommu->pmu.attr_groups = amd_iommu_attr_groups;
+	perf_iommu->pmu = iommu_pmu;
 
 	ret = perf_pmu_register(&perf_iommu->pmu, perf_iommu->name, -1);
 	if (ret)

> The initialized ones should be functioning independently (as separate PMUs).
> So, it should be alright to just leave them. I'll add the warning message
> as you suggested.

Yes, you need at least a warning message so that people know why some of
the IOMMUs are missing.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support
  2017-02-15 10:13 ` [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Borislav Petkov
@ 2017-02-15 10:44   ` Jiri Olsa
  2017-02-15 11:13     ` Borislav Petkov
  0 siblings, 1 reply; 17+ messages in thread
From: Jiri Olsa @ 2017-02-15 10:44 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Suravee Suthikulpanit, Arnaldo Carvalho de Melo, Jiri Olsa,
	Ingo Molnar, linux-kernel, iommu, peterz, joro, mingo

On Wed, Feb 15, 2017 at 11:13:23AM +0100, Borislav Petkov wrote:
> On Tue, Feb 07, 2017 at 02:40:28AM -0600, Suravee Suthikulpanit wrote:
> > From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> > 
> > This patch series modifies the existing IOMMU and Perf drivers to support
> > systems with multiple IOMMUs by allocating an amd_iommu PMU per IOMMU instance.
> > This allows users to specify performance events and filters separately for each
> > IOMMU.
> > 
> > This has been tested on the new family17h-based server w/ multiple IOMMUs.
> 
> Ok, so far so good.
> 
> There's just one thing:
> 
> $ perf stat -e amd_iommu_X/Y
> 
> says <not supported>, and it only counts events when doing system-wide
> tracing with -a.

does it say unsupported when you omit -a? It should display an error
and the options, like:

[jolsa@krava perf]$ ./perf stat -e 'cpu/cpu-cycles/'

 Usage: perf stat [<options>] [<command>]

    -a, --all-cpus        system-wide collection from all CPUs
    -A, --no-aggr         disable CPU count aggregation
    -B, --big-num         print large numbers with thousands' separators
    -C, --cpu <cpu>       list of cpus to monitor in system-wide
...


> 
> So, lemme ask the perf tool people: can we guys make the -a thing the default
> when we detect that we're running only uncore events, which all need
> -a anyway?

it's possible, can't think of anything being hurt by this now..

jirka

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support
  2017-02-15 10:44   ` Jiri Olsa
@ 2017-02-15 11:13     ` Borislav Petkov
  0 siblings, 0 replies; 17+ messages in thread
From: Borislav Petkov @ 2017-02-15 11:13 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Suravee Suthikulpanit, Arnaldo Carvalho de Melo, Jiri Olsa,
	Ingo Molnar, linux-kernel, iommu, peterz, joro, mingo

On Wed, Feb 15, 2017 at 11:44:24AM +0100, Jiri Olsa wrote:
> does it say unsupported when you omit -a?

Yeah, it does dump the event name like it was counting but then it says
<not supported>. For example:

   <not supported>      amd_iommu_0/...

> [jolsa@krava perf]$ ./perf stat -e 'cpu/cpu-cycles/'
	 ^^^^^

Haha, I chuckle every time I see this hostname :-)))

> > So, lemme ask the perf tool people: can we guys make the -a thing the default
> > when we detect that we're running only uncore events, which all need
> > -a anyway?
> 
> it's possible, can't think of anything being hurt by this now..

Right, so Peter meant something simple like: you parse all events, see
that they're all uncore and enable system_wide if so.

I get the feeling that we want to do that for uncore-only events
anyway...

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2017-02-15 11:14 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-07  8:40 [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Suravee Suthikulpanit
2017-02-07  8:40 ` [PATCH v9 1/8] perf/amd/iommu: Declare pr_fmt and remove unnecessary pr_debug Suravee Suthikulpanit
2017-02-07  8:40 ` [PATCH v9 2/8] perf/amd/iommu: Clean up bitwise operations Suravee Suthikulpanit
2017-02-07  8:40 ` [PATCH v9 3/8] perf/amd/iommu: Clean up perf_iommu_read() Suravee Suthikulpanit
2017-02-07  8:40 ` [PATCH v9 4/8] iommu/amd: Introduce amd_iommu_get_num_iommus() Suravee Suthikulpanit
2017-02-07  8:40 ` [PATCH v9 5/8] perf/amd/iommu: Modify functions to query max banks and counters Suravee Suthikulpanit
2017-02-08 19:31   ` Borislav Petkov
2017-02-07  8:40 ` [PATCH v9 6/8] perf/amd/iommu: Modify amd_iommu_pc_get_set_reg_val() to allow specifying IOMMU Suravee Suthikulpanit
2017-02-08 19:32   ` Borislav Petkov
2017-02-07  8:40 ` [PATCH v9 7/8] perf/amd/iommu: Fix sysfs perf attribute groups Suravee Suthikulpanit
2017-02-07  8:40 ` [PATCH v9 8/8] perf/amd/iommu: Enable support for multiple IOMMUs Suravee Suthikulpanit
2017-02-08 19:33   ` Borislav Petkov
2017-02-15  7:13     ` Suravee Suthikulpanit
2017-02-15 10:28       ` Borislav Petkov
2017-02-15 10:13 ` [PATCH v9 0/8] perf/amd/iommu: Enable multi-IOMMU support Borislav Petkov
2017-02-15 10:44   ` Jiri Olsa
2017-02-15 11:13     ` Borislav Petkov
