linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/6] Handle MCA banks in a per_cpu way
@ 2019-04-11 20:18 Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 1/6] x86/MCE: Make struct mce_banks[] static Ghannam, Yazen
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

The focus of this patchset is to define and use the MCA bank structures
and the MCA bank count per logical CPU.

With the exception of patch 4, this set applies to systems in production
today.

Patch 1:
Moves the declaration of struct mce_banks[] to the only file in which
it's used.

Patch 2:
Splits struct mce_bank into a structure for fields common to MCA banks
on all CPUs and another structure that can be used per_cpu.

Patch 3:
Brings the saga of the threshold block addresses on SMCA systems full
circle. After taking a step back and reviewing the AMD documentation, I
think that this implementation is the simplest and most robust way to
follow the spec.

Patch 4:
Saves and uses the MCA bank count as a per_cpu variable. This is to
support systems that have MCA bank counts that are different between
logical CPUs.

Patch 5:
Makes sure that sysfs reports the MCA_CTL value as set in hardware. This
is not related to making things per_cpu; it's just something I noticed
while working on the other patches.

Patch 6:
Prevents sysfs access for MCA banks that are uninitialized.

Link:
https://lkml.kernel.org/r/20190408141205.12376-1-Yazen.Ghannam@amd.com

Thanks,
Yazen


Yazen Ghannam (6):
  x86/MCE: Make struct mce_banks[] static
  x86/MCE: Handle MCA controls in a per_cpu way
  x86/MCE/AMD: Don't cache block addresses on SMCA systems
  x86/MCE: Make number of MCA banks per_cpu
  x86/MCE: Save MCA control bits that get set in hardware
  x86/MCE: Treat MCE bank as initialized if control bits set in hardware

 arch/x86/kernel/cpu/mce/amd.c      |  87 +++++++++--------
 arch/x86/kernel/cpu/mce/core.c     | 146 ++++++++++++++++++++---------
 arch/x86/kernel/cpu/mce/internal.h |  12 +--
 3 files changed, 144 insertions(+), 101 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 1/6] x86/MCE: Make struct mce_banks[] static
  2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
@ 2019-04-11 20:18 ` Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 2/6] x86/MCE: Handle MCA controls in a per_cpu way Ghannam, Yazen
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

The struct mce_banks[] array is only used in mce/core.c so move the
definition of struct mce_bank to mce/core.c and make the array static.

Also, change the "init" field to bool type.

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20190408141205.12376-2-Yazen.Ghannam@amd.com

v1->v2:
* No changes

 arch/x86/kernel/cpu/mce/core.c     | 11 ++++++++++-
 arch/x86/kernel/cpu/mce/internal.h | 10 ----------
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 58925e7ccb27..8d0d1e8425db 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -64,7 +64,16 @@ static DEFINE_MUTEX(mce_sysfs_mutex);
 
 DEFINE_PER_CPU(unsigned, mce_exception_count);
 
-struct mce_bank *mce_banks __read_mostly;
+#define ATTR_LEN               16
+/* One object for each MCE bank, shared by all CPUs */
+struct mce_bank {
+	u64			ctl;			/* subevents to enable */
+	bool			init;			/* initialise bank? */
+	struct device_attribute	attr;			/* device attribute */
+	char			attrname[ATTR_LEN];	/* attribute name */
+};
+
+static struct mce_bank *mce_banks __read_mostly;
 struct mce_vendor_flags mce_flags __read_mostly;
 
 struct mca_config mca_cfg __read_mostly = {
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index af5eab1e65e2..032d52c66616 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -22,17 +22,8 @@ enum severity_level {
 
 extern struct blocking_notifier_head x86_mce_decoder_chain;
 
-#define ATTR_LEN		16
 #define INITIAL_CHECK_INTERVAL	5 * 60 /* 5 minutes */
 
-/* One object for each MCE bank, shared by all CPUs */
-struct mce_bank {
-	u64			ctl;			/* subevents to enable */
-	unsigned char init;				/* initialise bank? */
-	struct device_attribute attr;			/* device attribute */
-	char			attrname[ATTR_LEN];	/* attribute name */
-};
-
 struct mce_evt_llist {
 	struct llist_node llnode;
 	struct mce mce;
@@ -47,7 +38,6 @@ struct llist_node *mce_gen_pool_prepare_records(void);
 extern int (*mce_severity)(struct mce *a, int tolerant, char **msg, bool is_excp);
 struct dentry *mce_get_debugfs_dir(void);
 
-extern struct mce_bank *mce_banks;
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
-- 
2.17.1



* [PATCH v2 2/6] x86/MCE: Handle MCA controls in a per_cpu way
  2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 1/6] x86/MCE: Make struct mce_banks[] static Ghannam, Yazen
@ 2019-04-11 20:18 ` Ghannam, Yazen
  2019-04-16 10:21   ` Borislav Petkov
  2019-04-11 20:18 ` [PATCH v2 3/6] x86/MCE/AMD: Don't cache block addresses on SMCA systems Ghannam, Yazen
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

Current AMD systems have unique MCA banks per logical CPU even though
the type of the banks may all align to the same bank number. Each CPU
will have control of a set of MCA banks in the hardware and these are
not shared with other CPUs.

For example, bank 0 may be the Load-Store Unit on every logical CPU, but
each bank 0 is a unique structure in the hardware. In other words, there
isn't a *single* Load-Store Unit at MCA bank 0 that all logical CPUs
share.

This idea extends even to non-core MCA banks. For example, CPU0 and CPU4
may see a Unified Memory Controller at bank 15, but each CPU is actually
seeing a unique hardware structure that is not shared with other CPUs.

Because the MCA banks are all unique hardware structures, it would be
good to control them in a more granular way. For example, if there is a
known issue with the Floating Point Unit on CPU5 and a user wishes to
disable an error type on the Floating Point Unit, then it would be good
to do this only for CPU5 rather than all CPUs.

Also, future AMD systems may have heterogeneous MCA banks, meaning that
the bank numbers may not represent the same types between CPUs. For
example, bank 20 visible to CPU0 may be a Unified Memory Controller and
bank 20 visible to CPU4 may be a Coherent Slave. So granular control
will be even more necessary should the user wish to control specific
MCA banks.

Split the device attributes from struct mce_bank leaving only the MCA
bank control fields.

Make struct mce_banks[] per_cpu in order to have more granular control
over individual MCA banks in the hardware.

Allocate the device attributes statically based on the maximum number of
MCA banks supported. The sysfs interface will use as many as needed per
CPU. Currently, this is set to mca_cfg.banks, but will be changed to a
per_cpu bank count in a future patch.

Allocate the MCA control bits dynamically. Use the maximum number of MCA
banks supported for now. This will be changed to a per_cpu bank count in
a future patch.

Redo the sysfs store/show functions to handle the per_cpu mce_banks[].
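The attr_to_bank() change in this patch relies on the embedded-attribute
pattern: a pointer to the device_attribute member embedded in struct
mce_bank_dev is converted back to the enclosing structure with
container_of(). A minimal userspace sketch of that pattern (the struct
device_attribute stand-in and the field values are made up for
illustration, not taken from the kernel):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-ins; names mirror the patch, contents are simplified. */
struct device_attribute { const char *name; };

/* Simplified container_of(): step back from a member pointer to the
 * start of the structure that embeds it. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define ATTR_LEN 16

struct mce_bank_dev {
	struct device_attribute attr;     /* device attribute */
	char attrname[ATTR_LEN];          /* attribute name */
	unsigned char bank;               /* bank number */
};

/* Recover the owning mce_bank_dev from a pointer to its embedded attr,
 * as attr_to_bank() does in the patch. */
static struct mce_bank_dev *attr_to_bank(struct device_attribute *attr)
{
	return container_of(attr, struct mce_bank_dev, attr);
}
```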

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20190408141205.12376-3-Yazen.Ghannam@amd.com

v1->v2:
* Change "struct mce_bank*" to "struct mce_bank *" in definition.

 arch/x86/kernel/cpu/mce/core.c | 77 ++++++++++++++++++++++------------
 1 file changed, 51 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 8d0d1e8425db..aa41f41e5931 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -64,16 +64,21 @@ static DEFINE_MUTEX(mce_sysfs_mutex);
 
 DEFINE_PER_CPU(unsigned, mce_exception_count);
 
+struct mce_bank {
+	u64	ctl;	/* subevents to enable */
+	bool	init;	/* initialise bank? */
+};
+static DEFINE_PER_CPU_READ_MOSTLY(struct mce_bank *, mce_banks);
+
 #define ATTR_LEN               16
 /* One object for each MCE bank, shared by all CPUs */
-struct mce_bank {
-	u64			ctl;			/* subevents to enable */
-	bool			init;			/* initialise bank? */
+struct mce_bank_dev {
 	struct device_attribute	attr;			/* device attribute */
 	char			attrname[ATTR_LEN];	/* attribute name */
+	u8			bank;			/* bank number */
 };
+static struct mce_bank_dev mce_bank_devs[MAX_NR_BANKS];
 
-static struct mce_bank *mce_banks __read_mostly;
 struct mce_vendor_flags mce_flags __read_mostly;
 
 struct mca_config mca_cfg __read_mostly = {
@@ -695,7 +700,7 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
 		m.tsc = rdtsc();
 
 	for (i = 0; i < mca_cfg.banks; i++) {
-		if (!mce_banks[i].ctl || !test_bit(i, *b))
+		if (!this_cpu_read(mce_banks)[i].ctl || !test_bit(i, *b))
 			continue;
 
 		m.misc = 0;
@@ -1138,7 +1143,7 @@ static void __mc_scan_banks(struct mce *m, struct mce *final,
 		if (!test_bit(i, valid_banks))
 			continue;
 
-		if (!mce_banks[i].ctl)
+		if (!this_cpu_read(mce_banks)[i].ctl)
 			continue;
 
 		m->misc = 0;
@@ -1475,16 +1480,19 @@ static int __mcheck_cpu_mce_banks_init(void)
 {
 	int i;
 
-	mce_banks = kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
-	if (!mce_banks)
+	per_cpu(mce_banks, smp_processor_id()) =
+		kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
+
+	if (!this_cpu_read(mce_banks))
 		return -ENOMEM;
 
 	for (i = 0; i < MAX_NR_BANKS; i++) {
-		struct mce_bank *b = &mce_banks[i];
+		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		b->ctl = -1ULL;
 		b->init = 1;
 	}
+
 	return 0;
 }
 
@@ -1504,7 +1512,7 @@ static int __mcheck_cpu_cap_init(void)
 
 	mca_cfg.banks = max(mca_cfg.banks, b);
 
-	if (!mce_banks) {
+	if (!this_cpu_read(mce_banks)) {
 		int err = __mcheck_cpu_mce_banks_init();
 		if (err)
 			return err;
@@ -1547,7 +1555,7 @@ static void __mcheck_cpu_init_clear_banks(void)
 	int i;
 
 	for (i = 0; i < mca_cfg.banks; i++) {
-		struct mce_bank *b = &mce_banks[i];
+		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		if (!b->init)
 			continue;
@@ -1602,7 +1610,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 			 * trips off incorrectly with the IOMMU & 3ware
 			 * & Cerberus:
 			 */
-			clear_bit(10, (unsigned long *)&mce_banks[4].ctl);
+			clear_bit(10, (unsigned long *)&this_cpu_read(mce_banks)[4].ctl);
 		}
 		if (c->x86 < 0x11 && cfg->bootlog < 0) {
 			/*
@@ -1616,7 +1624,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 		 * by default.
 		 */
 		if (c->x86 == 6 && cfg->banks > 0)
-			mce_banks[0].ctl = 0;
+			this_cpu_read(mce_banks)[0].ctl = 0;
 
 		/*
 		 * overflow_recov is supported for F15h Models 00h-0fh
@@ -1638,7 +1646,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 		 */
 
 		if (c->x86 == 6 && c->x86_model < 0x1A && cfg->banks > 0)
-			mce_banks[0].init = 0;
+			this_cpu_read(mce_banks)[0].init = 0;
 
 		/*
 		 * All newer Intel systems support MCE broadcasting. Enable
@@ -1952,7 +1960,7 @@ static void mce_disable_error_reporting(void)
 	int i;
 
 	for (i = 0; i < mca_cfg.banks; i++) {
-		struct mce_bank *b = &mce_banks[i];
+		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		if (b->init)
 			wrmsrl(msr_ops.ctl(i), 0);
@@ -2051,26 +2059,41 @@ static struct bus_type mce_subsys = {
 
 DEFINE_PER_CPU(struct device *, mce_device);
 
-static inline struct mce_bank *attr_to_bank(struct device_attribute *attr)
+static inline struct mce_bank_dev *attr_to_bank(struct device_attribute *attr)
 {
-	return container_of(attr, struct mce_bank, attr);
+	return container_of(attr, struct mce_bank_dev, attr);
 }
 
 static ssize_t show_bank(struct device *s, struct device_attribute *attr,
 			 char *buf)
 {
-	return sprintf(buf, "%llx\n", attr_to_bank(attr)->ctl);
+	struct mce_bank *b;
+	u8 bank = attr_to_bank(attr)->bank;
+
+	if (bank >= mca_cfg.banks)
+		return -EINVAL;
+
+	b = &per_cpu(mce_banks, s->id)[bank];
+
+	return sprintf(buf, "%llx\n", b->ctl);
 }
 
 static ssize_t set_bank(struct device *s, struct device_attribute *attr,
 			const char *buf, size_t size)
 {
 	u64 new;
+	struct mce_bank *b;
+	u8 bank = attr_to_bank(attr)->bank;
 
 	if (kstrtou64(buf, 0, &new) < 0)
 		return -EINVAL;
 
-	attr_to_bank(attr)->ctl = new;
+	if (bank >= mca_cfg.banks)
+		return -EINVAL;
+
+	b = &per_cpu(mce_banks, s->id)[bank];
+
+	b->ctl = new;
 	mce_restart();
 
 	return size;
@@ -2185,7 +2208,7 @@ static void mce_device_release(struct device *dev)
 	kfree(dev);
 }
 
-/* Per cpu device init. All of the cpus still share the same ctrl bank: */
+/* Per cpu device init. All of the cpus still share the same bank device: */
 static int mce_device_create(unsigned int cpu)
 {
 	struct device *dev;
@@ -2218,7 +2241,7 @@ static int mce_device_create(unsigned int cpu)
 			goto error;
 	}
 	for (j = 0; j < mca_cfg.banks; j++) {
-		err = device_create_file(dev, &mce_banks[j].attr);
+		err = device_create_file(dev, &mce_bank_devs[j].attr);
 		if (err)
 			goto error2;
 	}
@@ -2228,7 +2251,7 @@ static int mce_device_create(unsigned int cpu)
 	return 0;
 error2:
 	while (--j >= 0)
-		device_remove_file(dev, &mce_banks[j].attr);
+		device_remove_file(dev, &mce_bank_devs[j].attr);
 error:
 	while (--i >= 0)
 		device_remove_file(dev, mce_device_attrs[i]);
@@ -2250,7 +2273,7 @@ static void mce_device_remove(unsigned int cpu)
 		device_remove_file(dev, mce_device_attrs[i]);
 
 	for (i = 0; i < mca_cfg.banks; i++)
-		device_remove_file(dev, &mce_banks[i].attr);
+		device_remove_file(dev, &mce_bank_devs[i].attr);
 
 	device_unregister(dev);
 	cpumask_clear_cpu(cpu, mce_device_initialized);
@@ -2279,7 +2302,7 @@ static void mce_reenable_cpu(void)
 	if (!cpuhp_tasks_frozen)
 		cmci_reenable();
 	for (i = 0; i < mca_cfg.banks; i++) {
-		struct mce_bank *b = &mce_banks[i];
+		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		if (b->init)
 			wrmsrl(msr_ops.ctl(i), b->ctl);
@@ -2328,10 +2351,12 @@ static __init void mce_init_banks(void)
 {
 	int i;
 
-	for (i = 0; i < mca_cfg.banks; i++) {
-		struct mce_bank *b = &mce_banks[i];
+	for (i = 0; i < MAX_NR_BANKS; i++) {
+		struct mce_bank_dev *b = &mce_bank_devs[i];
 		struct device_attribute *a = &b->attr;
 
+		b->bank = i;
+
 		sysfs_attr_init(&a->attr);
 		a->attr.name	= b->attrname;
 		snprintf(b->attrname, ATTR_LEN, "bank%d", i);
-- 
2.17.1



* [PATCH v2 3/6] x86/MCE/AMD: Don't cache block addresses on SMCA systems
  2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 1/6] x86/MCE: Make struct mce_banks[] static Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 2/6] x86/MCE: Handle MCA controls in a per_cpu way Ghannam, Yazen
@ 2019-04-11 20:18 ` Ghannam, Yazen
  2019-04-16 13:44   ` Borislav Petkov
  2019-04-11 20:18 ` [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu Ghannam, Yazen
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

On legacy systems, the addresses of the MCA_MISC* registers need to be
recursively discovered based on a Block Pointer field in the registers.

On Scalable MCA systems, the register space is fixed, and particular
addresses can be derived by regular offsets for bank and register type.
This fixed address space includes the MCA_MISC* registers.

MCA_MISC0 is always available for each MCA bank. MCA_MISC1 through
MCA_MISC4 are considered available if MCA_MISC0[BlkPtr]=1.

Cache the value of MCA_MISC0[BlkPtr] for each bank and per CPU. This
needs to be done only during init. The values should be saved per CPU
to accommodate heterogeneous SMCA systems.

Redo smca_get_block_address() to directly return the block addresses.
Use the cached Block Pointer value to decide if the MCA_MISC1-4
addresses should be returned.
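The BlkPtr caching described above can be sketched as a per-CPU bitmask
with one bit per bank. This is a userspace approximation: the plain
array stands in for the kernel's DEFINE_PER_CPU variable, and the CPU
count and bank numbers are arbitrary.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_CPUS 8

/* Bit N set for a CPU means bank N's MCA_MISC0[BlkPtr]=1, i.e. the
 * MCA_MISC1-4 register addresses exist for that bank on that CPU. */
static uint32_t smca_blkptr_map[MAX_CPUS];

/* Record, once during init, that this bank uses BlkPtr on this CPU. */
static void smca_set_blkptr(unsigned int bank, unsigned int cpu)
{
	smca_blkptr_map[cpu] |= 1u << bank;
}

/* Later lookups just test the cached bit instead of re-reading MSRs. */
static bool smca_bank_has_misc1_4(unsigned int bank, unsigned int cpu)
{
	return smca_blkptr_map[cpu] & (1u << bank);
}
```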

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20190408141205.12376-4-Yazen.Ghannam@amd.com

v1->v2:
* No change.

 arch/x86/kernel/cpu/mce/amd.c | 71 +++++++++++++++++------------------
 1 file changed, 35 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index e64de5149e50..f0644b59848d 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -101,11 +101,6 @@ static struct smca_bank_name smca_names[] = {
 	[SMCA_PCIE]	= { "pcie",		"PCI Express Unit" },
 };
 
-static u32 smca_bank_addrs[MAX_NR_BANKS][NR_BLOCKS] __ro_after_init =
-{
-	[0 ... MAX_NR_BANKS - 1] = { [0 ... NR_BLOCKS - 1] = -1 }
-};
-
 static const char *smca_get_name(enum smca_bank_types t)
 {
 	if (t >= N_SMCA_BANK_TYPES)
@@ -198,6 +193,7 @@ static char buf_mcatype[MAX_MCATYPE_NAME_LEN];
 
 static DEFINE_PER_CPU(struct threshold_bank **, threshold_banks);
 static DEFINE_PER_CPU(unsigned int, bank_map);	/* see which banks are on */
+static DEFINE_PER_CPU(u32, smca_blkptr_map); /* see which banks use BlkPtr */
 
 static void amd_threshold_interrupt(void);
 static void amd_deferred_error_interrupt(void);
@@ -208,6 +204,28 @@ static void default_deferred_error_interrupt(void)
 }
 void (*deferred_error_int_vector)(void) = default_deferred_error_interrupt;
 
+static void smca_set_blkptr_map(unsigned int bank, unsigned int cpu)
+{
+	u32 low, high;
+
+	/*
+	 * For SMCA enabled processors, BLKPTR field of the first MISC register
+	 * (MCx_MISC0) indicates presence of additional MISC regs set (MISC1-4).
+	 */
+	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_CONFIG(bank), &low, &high))
+		return;
+
+	if (!(low & MCI_CONFIG_MCAX))
+		return;
+
+	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_MISC(bank), &low, &high))
+		return;
+
+	if (low & MASK_BLKPTR_LO)
+		per_cpu(smca_blkptr_map, cpu) |= 1 << bank;
+
+}
+
 static void smca_configure(unsigned int bank, unsigned int cpu)
 {
 	unsigned int i, hwid_mcatype;
@@ -245,6 +263,8 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
 		wrmsr(smca_config, low, high);
 	}
 
+	smca_set_blkptr_map(bank, cpu);
+
 	/* Return early if this bank was already initialized. */
 	if (smca_banks[bank].hwid)
 		return;
@@ -455,42 +475,21 @@ static void deferred_error_interrupt_enable(struct cpuinfo_x86 *c)
 	wrmsr(MSR_CU_DEF_ERR, low, high);
 }
 
-static u32 smca_get_block_address(unsigned int bank, unsigned int block)
+static u32 smca_get_block_address(unsigned int bank, unsigned int block,
+				  unsigned int cpu)
 {
-	u32 low, high;
-	u32 addr = 0;
-
-	if (smca_get_bank_type(bank) == SMCA_RESERVED)
-		return addr;
-
 	if (!block)
 		return MSR_AMD64_SMCA_MCx_MISC(bank);
 
-	/* Check our cache first: */
-	if (smca_bank_addrs[bank][block] != -1)
-		return smca_bank_addrs[bank][block];
-
-	/*
-	 * For SMCA enabled processors, BLKPTR field of the first MISC register
-	 * (MCx_MISC0) indicates presence of additional MISC regs set (MISC1-4).
-	 */
-	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_CONFIG(bank), &low, &high))
-		goto out;
-
-	if (!(low & MCI_CONFIG_MCAX))
-		goto out;
-
-	if (!rdmsr_safe(MSR_AMD64_SMCA_MCx_MISC(bank), &low, &high) &&
-	    (low & MASK_BLKPTR_LO))
-		addr = MSR_AMD64_SMCA_MCx_MISCy(bank, block - 1);
+	if (!(per_cpu(smca_blkptr_map, cpu) & (1 << bank)))
+		return 0;
 
-out:
-	smca_bank_addrs[bank][block] = addr;
-	return addr;
+	return MSR_AMD64_SMCA_MCx_MISCy(bank, block - 1);
 }
 
 static u32 get_block_address(u32 current_addr, u32 low, u32 high,
-			     unsigned int bank, unsigned int block)
+			     unsigned int bank, unsigned int block,
+			     unsigned int cpu)
 {
 	u32 addr = 0, offset = 0;
 
@@ -498,7 +497,7 @@ static u32 get_block_address(u32 current_addr, u32 low, u32 high,
 		return addr;
 
 	if (mce_flags.smca)
-		return smca_get_block_address(bank, block);
+		return smca_get_block_address(bank, block, cpu);
 
 	/* Fall back to method we used for older processors: */
 	switch (block) {
@@ -611,7 +610,7 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
 			smca_configure(bank, cpu);
 
 		for (block = 0; block < NR_BLOCKS; ++block) {
-			address = get_block_address(address, low, high, bank, block);
+			address = get_block_address(address, low, high, bank, block, cpu);
 			if (!address)
 				break;
 
@@ -1228,7 +1227,7 @@ static int allocate_threshold_blocks(unsigned int cpu, unsigned int bank,
 	if (err)
 		goto out_free;
 recurse:
-	address = get_block_address(address, low, high, bank, ++block);
+	address = get_block_address(address, low, high, bank, ++block, cpu);
 	if (!address)
 		return 0;
 
-- 
2.17.1



* [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu
  2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
                   ` (2 preceding siblings ...)
  2019-04-11 20:18 ` [PATCH v2 3/6] x86/MCE/AMD: Don't cache block addresses on SMCA systems Ghannam, Yazen
@ 2019-04-11 20:18 ` Ghannam, Yazen
  2019-04-16 13:55   ` Borislav Petkov
  2019-04-16 17:36   ` Borislav Petkov
  2019-04-11 20:18 ` [PATCH v2 6/6] x86/MCE: Treat MCE bank as initialized if control bits set in hardware Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 5/6] x86/MCE: Save MCA control bits that get " Ghannam, Yazen
  5 siblings, 2 replies; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

The number of MCA banks is provided per logical CPU. Historically, this
number has been the same across all CPUs, but this is not an
architectural guarantee. Future AMD systems may have MCA bank counts
that vary between logical CPUs in a system.

This issue was partially addressed in

  commit 006c077041dc ("x86/mce: Handle varying MCA bank counts")

by allocating structures using the maximum number of MCA banks and by
saving the maximum MCA bank count in a system as the global count. This
means that some extra structures are allocated. Also, this means that
CPUs will spend more time in the #MC and other handlers checking extra
MCA banks.

Define the number of MCA banks as a per_cpu variable. Replace all uses
of mca_cfg.banks with this.

Also, use the per_cpu bank count when allocating per_cpu structures.

Print the number of banks per CPU as a debug message for those who may
be interested.
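The per-CPU sizing described above can be sketched in userspace as
follows. The arrays stand in for the kernel's per_cpu machinery, and the
CPU count and bank counts used in the example are invented for
illustration.

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_NR_BANKS 32
#define NR_CPUS 8

struct mce_bank {
	unsigned long long ctl;	/* subevents to enable */
	int init;		/* initialise bank? */
};

/* Stand-ins for the per_cpu mce_num_banks and mce_banks variables. */
static unsigned char mce_num_banks[NR_CPUS];
static struct mce_bank *mce_banks[NR_CPUS];

/* Save this CPU's own bank count (clamped to MAX_NR_BANKS) and allocate
 * exactly that many bank structures, rather than sizing everything by a
 * single global maximum. */
static int mce_banks_init(unsigned int cpu, unsigned int hw_bank_count)
{
	unsigned int b = hw_bank_count > MAX_NR_BANKS ? MAX_NR_BANKS
						      : hw_bank_count;

	mce_num_banks[cpu] = b;
	mce_banks[cpu] = calloc(b, sizeof(struct mce_bank));

	return mce_banks[cpu] ? 0 : -1;
}
```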

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20190408141205.12376-5-Yazen.Ghannam@amd.com

v1->v2:
* Drop export of new variable and leave injector code as-is.
* Add "mce_" prefix to new "num_banks" variable.

 arch/x86/kernel/cpu/mce/amd.c      | 16 +++++-----
 arch/x86/kernel/cpu/mce/core.c     | 48 +++++++++++++++++-------------
 arch/x86/kernel/cpu/mce/internal.h |  2 +-
 3 files changed, 36 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index f0644b59848d..2aed94f3a23e 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -493,7 +493,7 @@ static u32 get_block_address(u32 current_addr, u32 low, u32 high,
 {
 	u32 addr = 0, offset = 0;
 
-	if ((bank >= mca_cfg.banks) || (block >= NR_BLOCKS))
+	if ((bank >= per_cpu(mce_num_banks, cpu)) || (block >= NR_BLOCKS))
 		return addr;
 
 	if (mce_flags.smca)
@@ -605,7 +605,7 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
 
 	disable_err_thresholding(c);
 
-	for (bank = 0; bank < mca_cfg.banks; ++bank) {
+	for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
 		if (mce_flags.smca)
 			smca_configure(bank, cpu);
 
@@ -948,7 +948,7 @@ static void amd_deferred_error_interrupt(void)
 {
 	unsigned int bank;
 
-	for (bank = 0; bank < mca_cfg.banks; ++bank)
+	for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank)
 		log_error_deferred(bank);
 }
 
@@ -989,7 +989,7 @@ static void amd_threshold_interrupt(void)
 	struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL;
 	unsigned int bank, cpu = smp_processor_id();
 
-	for (bank = 0; bank < mca_cfg.banks; ++bank) {
+	for (bank = 0; bank < this_cpu_read(mce_num_banks); ++bank) {
 		if (!(per_cpu(bank_map, cpu) & (1 << bank)))
 			continue;
 
@@ -1176,7 +1176,7 @@ static int allocate_threshold_blocks(unsigned int cpu, unsigned int bank,
 	u32 low, high;
 	int err;
 
-	if ((bank >= mca_cfg.banks) || (block >= NR_BLOCKS))
+	if ((bank >= per_cpu(mce_num_banks, cpu)) || (block >= NR_BLOCKS))
 		return 0;
 
 	if (rdmsr_safe_on_cpu(cpu, address, &low, &high))
@@ -1410,7 +1410,7 @@ int mce_threshold_remove_device(unsigned int cpu)
 {
 	unsigned int bank;
 
-	for (bank = 0; bank < mca_cfg.banks; ++bank) {
+	for (bank = 0; bank < per_cpu(mce_num_banks, cpu); ++bank) {
 		if (!(per_cpu(bank_map, cpu) & (1 << bank)))
 			continue;
 		threshold_remove_bank(cpu, bank);
@@ -1431,14 +1431,14 @@ int mce_threshold_create_device(unsigned int cpu)
 	if (bp)
 		return 0;
 
-	bp = kcalloc(mca_cfg.banks, sizeof(struct threshold_bank *),
+	bp = kcalloc(per_cpu(mce_num_banks, cpu), sizeof(struct threshold_bank *),
 		     GFP_KERNEL);
 	if (!bp)
 		return -ENOMEM;
 
 	per_cpu(threshold_banks, cpu) = bp;
 
-	for (bank = 0; bank < mca_cfg.banks; ++bank) {
+	for (bank = 0; bank < per_cpu(mce_num_banks, cpu); ++bank) {
 		if (!(per_cpu(bank_map, cpu) & (1 << bank)))
 			continue;
 		err = threshold_create_bank(cpu, bank);
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index aa41f41e5931..0fe29140ecab 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -64,6 +64,8 @@ static DEFINE_MUTEX(mce_sysfs_mutex);
 
 DEFINE_PER_CPU(unsigned, mce_exception_count);
 
+DEFINE_PER_CPU_READ_MOSTLY(u8, mce_num_banks);
+
 struct mce_bank {
 	u64	ctl;	/* subevents to enable */
 	bool	init;	/* initialise bank? */
@@ -699,7 +701,7 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
 	if (flags & MCP_TIMESTAMP)
 		m.tsc = rdtsc();
 
-	for (i = 0; i < mca_cfg.banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		if (!this_cpu_read(mce_banks)[i].ctl || !test_bit(i, *b))
 			continue;
 
@@ -801,7 +803,7 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
 	char *tmp;
 	int i;
 
-	for (i = 0; i < mca_cfg.banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		m->status = mce_rdmsrl(msr_ops.status(i));
 		if (!(m->status & MCI_STATUS_VAL))
 			continue;
@@ -1081,7 +1083,7 @@ static void mce_clear_state(unsigned long *toclear)
 {
 	int i;
 
-	for (i = 0; i < mca_cfg.banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		if (test_bit(i, toclear))
 			mce_wrmsrl(msr_ops.status(i), 0);
 	}
@@ -1138,7 +1140,7 @@ static void __mc_scan_banks(struct mce *m, struct mce *final,
 	struct mca_config *cfg = &mca_cfg;
 	int severity, i;
 
-	for (i = 0; i < cfg->banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		__clear_bit(i, toclear);
 		if (!test_bit(i, valid_banks))
 			continue;
@@ -1478,15 +1480,16 @@ EXPORT_SYMBOL_GPL(mce_notify_irq);
 
 static int __mcheck_cpu_mce_banks_init(void)
 {
+	u8 n_banks = this_cpu_read(mce_num_banks);
 	int i;
 
 	per_cpu(mce_banks, smp_processor_id()) =
-		kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
+		kcalloc(n_banks, sizeof(struct mce_bank), GFP_KERNEL);
 
 	if (!this_cpu_read(mce_banks))
 		return -ENOMEM;
 
-	for (i = 0; i < MAX_NR_BANKS; i++) {
+	for (i = 0; i < n_banks; i++) {
 		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		b->ctl = -1ULL;
@@ -1507,10 +1510,15 @@ static int __mcheck_cpu_cap_init(void)
 	rdmsrl(MSR_IA32_MCG_CAP, cap);
 
 	b = cap & MCG_BANKCNT_MASK;
-	if (WARN_ON_ONCE(b > MAX_NR_BANKS))
+	pr_debug("CPU%d supports %d MCE banks\n", smp_processor_id(), b);
+
+	if (b > MAX_NR_BANKS) {
+		pr_warn("CPU%d: Using only %u machine check banks out of %u\n",
+			smp_processor_id(), MAX_NR_BANKS, b);
 		b = MAX_NR_BANKS;
+	}
 
-	mca_cfg.banks = max(mca_cfg.banks, b);
+	this_cpu_write(mce_num_banks, b);
 
 	if (!this_cpu_read(mce_banks)) {
 		int err = __mcheck_cpu_mce_banks_init();
@@ -1554,7 +1562,7 @@ static void __mcheck_cpu_init_clear_banks(void)
 {
 	int i;
 
-	for (i = 0; i < mca_cfg.banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		if (!b->init)
@@ -1604,7 +1612,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 
 	/* This should be disabled by the BIOS, but isn't always */
 	if (c->x86_vendor == X86_VENDOR_AMD) {
-		if (c->x86 == 15 && cfg->banks > 4) {
+		if (c->x86 == 15 && this_cpu_read(mce_num_banks) > 4) {
 			/*
 			 * disable GART TBL walk error reporting, which
 			 * trips off incorrectly with the IOMMU & 3ware
@@ -1623,7 +1631,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 		 * Various K7s with broken bank 0 around. Always disable
 		 * by default.
 		 */
-		if (c->x86 == 6 && cfg->banks > 0)
+		if (c->x86 == 6 && this_cpu_read(mce_num_banks) > 0)
 			this_cpu_read(mce_banks)[0].ctl = 0;
 
 		/*
@@ -1645,7 +1653,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
 		 * valid event later, merely don't write CTL0.
 		 */
 
-		if (c->x86 == 6 && c->x86_model < 0x1A && cfg->banks > 0)
+		if (c->x86 == 6 && c->x86_model < 0x1A && this_cpu_read(mce_num_banks) > 0)
 			this_cpu_read(mce_banks)[0].init = 0;
 
 		/*
@@ -1871,7 +1879,7 @@ static void __mce_disable_bank(void *arg)
 
 void mce_disable_bank(int bank)
 {
-	if (bank >= mca_cfg.banks) {
+	if (bank >= this_cpu_read(mce_num_banks)) {
 		pr_warn(FW_BUG
 			"Ignoring request to disable invalid MCA bank %d.\n",
 			bank);
@@ -1959,7 +1967,7 @@ static void mce_disable_error_reporting(void)
 {
 	int i;
 
-	for (i = 0; i < mca_cfg.banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		if (b->init)
@@ -2070,7 +2078,7 @@ static ssize_t show_bank(struct device *s, struct device_attribute *attr,
 	struct mce_bank *b;
 	u8 bank = attr_to_bank(attr)->bank;
 
-	if (bank >= mca_cfg.banks)
+	if (bank >= per_cpu(mce_num_banks, s->id))
 		return -EINVAL;
 
 	b = &per_cpu(mce_banks, s->id)[bank];
@@ -2088,7 +2096,7 @@ static ssize_t set_bank(struct device *s, struct device_attribute *attr,
 	if (kstrtou64(buf, 0, &new) < 0)
 		return -EINVAL;
 
-	if (bank >= mca_cfg.banks)
+	if (bank >= per_cpu(mce_num_banks, s->id))
 		return -EINVAL;
 
 	b = &per_cpu(mce_banks, s->id)[bank];
@@ -2240,7 +2248,7 @@ static int mce_device_create(unsigned int cpu)
 		if (err)
 			goto error;
 	}
-	for (j = 0; j < mca_cfg.banks; j++) {
+	for (j = 0; j < per_cpu(mce_num_banks, cpu); j++) {
 		err = device_create_file(dev, &mce_bank_devs[j].attr);
 		if (err)
 			goto error2;
@@ -2272,7 +2280,7 @@ static void mce_device_remove(unsigned int cpu)
 	for (i = 0; mce_device_attrs[i]; i++)
 		device_remove_file(dev, mce_device_attrs[i]);
 
-	for (i = 0; i < mca_cfg.banks; i++)
+	for (i = 0; i < per_cpu(mce_num_banks, cpu); i++)
 		device_remove_file(dev, &mce_bank_devs[i].attr);
 
 	device_unregister(dev);
@@ -2301,7 +2309,7 @@ static void mce_reenable_cpu(void)
 
 	if (!cpuhp_tasks_frozen)
 		cmci_reenable();
-	for (i = 0; i < mca_cfg.banks; i++) {
+	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
 		if (b->init)
@@ -2489,8 +2497,6 @@ EXPORT_SYMBOL_GPL(mcsafe_key);
 
 static int __init mcheck_late_init(void)
 {
-	pr_info("Using %d MCE banks\n", mca_cfg.banks);
-
 	if (mca_cfg.recovery)
 		static_branch_inc(&mcsafe_key);
 
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 032d52c66616..632e2e57c1d0 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -118,7 +118,6 @@ struct mca_config {
 	      bios_cmci_threshold	: 1,
 	      __reserved		: 59;
 
-	u8 banks;
 	s8 bootlog;
 	int tolerant;
 	int monarch_timeout;
@@ -127,6 +126,7 @@ struct mca_config {
 };
 
 extern struct mca_config mca_cfg;
+DECLARE_PER_CPU_READ_MOSTLY(u8, mce_num_banks);
 
 struct mce_vendor_flags {
 	/*
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 5/6] x86/MCE: Save MCA control bits that get set in hardware
  2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
                   ` (4 preceding siblings ...)
  2019-04-11 20:18 ` [PATCH v2 6/6] x86/MCE: Treat MCE bank as initialized if control bits set in hardware Ghannam, Yazen
@ 2019-04-11 20:18 ` Ghannam, Yazen
  5 siblings, 0 replies; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

The OS is expected to write all bits in MCA_CTL. However, only
implemented bits get set in the hardware.

Read back MCA_CTL so that the value in the hardware is saved and
reported through sysfs.

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20190408141205.12376-6-Yazen.Ghannam@amd.com

v1->v2:
* No change.

 arch/x86/kernel/cpu/mce/core.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 0fe29140ecab..71662133c70c 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1565,10 +1565,13 @@ static void __mcheck_cpu_init_clear_banks(void)
 	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
-		if (!b->init)
-			continue;
-		wrmsrl(msr_ops.ctl(i), b->ctl);
-		wrmsrl(msr_ops.status(i), 0);
+		if (b->init) {
+			wrmsrl(msr_ops.ctl(i), b->ctl);
+			wrmsrl(msr_ops.status(i), 0);
+		}
+
+		/* Save bits set in hardware. */
+		rdmsrl(msr_ops.ctl(i), b->ctl);
 	}
 }
 
@@ -2312,8 +2315,10 @@ static void mce_reenable_cpu(void)
 	for (i = 0; i < this_cpu_read(mce_num_banks); i++) {
 		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
 
-		if (b->init)
+		if (b->init) {
 			wrmsrl(msr_ops.ctl(i), b->ctl);
+			rdmsrl(msr_ops.ctl(i), b->ctl);
+		}
 	}
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 6/6] x86/MCE: Treat MCE bank as initialized if control bits set in hardware
  2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
                   ` (3 preceding siblings ...)
  2019-04-11 20:18 ` [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu Ghannam, Yazen
@ 2019-04-11 20:18 ` Ghannam, Yazen
  2019-04-11 20:18 ` [PATCH v2 5/6] x86/MCE: Save MCA control bits that get " Ghannam, Yazen
  5 siblings, 0 replies; 11+ messages in thread
From: Ghannam, Yazen @ 2019-04-11 20:18 UTC (permalink / raw)
  To: linux-edac; +Cc: Ghannam, Yazen, linux-kernel, bp, tony.luck, x86

From: Yazen Ghannam <yazen.ghannam@amd.com>

The OS is expected to write all bits to MCA_CTL for each bank. However,
some banks may be unused, in which case the registers for such banks are
Read-as-Zero/Writes-Ignored. Also, the OS may not write any control bits
because of quirks, etc.

A bank can be considered uninitialized if the MCA_CTL register returns
zero. This is because either the OS did not write anything or the
hardware is enforcing RAZ/WI for the bank.

Set a bank's init value based on whether the control bits are set in
hardware.

Return an error code in the sysfs interface for uninitialized banks.

Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
---
Link:
https://lkml.kernel.org/r/20190408141205.12376-3-Yazen.Ghannam@amd.com

v1->v2:
* New in v2.
* Based on discussion from v1 patch 2.

 arch/x86/kernel/cpu/mce/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 71662133c70c..dcf5c6d72811 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1572,6 +1572,9 @@ static void __mcheck_cpu_init_clear_banks(void)
 
 		/* Save bits set in hardware. */
 		rdmsrl(msr_ops.ctl(i), b->ctl);
+
+		/* Bank is initialized if bits are set in hardware. */
+		b->init = !!b->ctl;
 	}
 }
 
@@ -2086,6 +2089,9 @@ static ssize_t show_bank(struct device *s, struct device_attribute *attr,
 
 	b = &per_cpu(mce_banks, s->id)[bank];
 
+	if (!b->init)
+		return -ENODEV;
+
 	return sprintf(buf, "%llx\n", b->ctl);
 }
 
@@ -2104,6 +2110,9 @@ static ssize_t set_bank(struct device *s, struct device_attribute *attr,
 
 	b = &per_cpu(mce_banks, s->id)[bank];
 
+	if (!b->init)
+		return -ENODEV;
+
 	b->ctl = new;
 	mce_restart();
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/6] x86/MCE: Handle MCA controls in a per_cpu way
  2019-04-11 20:18 ` [PATCH v2 2/6] x86/MCE: Handle MCA controls in a per_cpu way Ghannam, Yazen
@ 2019-04-16 10:21   ` Borislav Petkov
  0 siblings, 0 replies; 11+ messages in thread
From: Borislav Petkov @ 2019-04-16 10:21 UTC (permalink / raw)
  To: Ghannam, Yazen; +Cc: linux-edac, linux-kernel, bp, tony.luck, x86

On Thu, Apr 11, 2019 at 08:18:01PM +0000, Ghannam, Yazen wrote:
>  arch/x86/kernel/cpu/mce/core.c | 77 ++++++++++++++++++++++------------
>  1 file changed, 51 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
> index 8d0d1e8425db..aa41f41e5931 100644
> --- a/arch/x86/kernel/cpu/mce/core.c
> +++ b/arch/x86/kernel/cpu/mce/core.c
> @@ -64,16 +64,21 @@ static DEFINE_MUTEX(mce_sysfs_mutex);
>  
>  DEFINE_PER_CPU(unsigned, mce_exception_count);
>  
> +struct mce_bank {
> +	u64	ctl;	/* subevents to enable */
> +	bool	init;	/* initialise bank? */
> +};

Pls keep the original member alignment.

> +static DEFINE_PER_CPU_READ_MOSTLY(struct mce_bank *, mce_banks);
> +
>  #define ATTR_LEN               16
>  /* One object for each MCE bank, shared by all CPUs */
> -struct mce_bank {
> -	u64			ctl;			/* subevents to enable */
> -	bool			init;			/* initialise bank? */
> +struct mce_bank_dev {
>  	struct device_attribute	attr;			/* device attribute */
>  	char			attrname[ATTR_LEN];	/* attribute name */
> +	u8			bank;			/* bank number */
>  };
> +static struct mce_bank_dev mce_bank_devs[MAX_NR_BANKS];
>  
> -static struct mce_bank *mce_banks __read_mostly;
>  struct mce_vendor_flags mce_flags __read_mostly;
>  
>  struct mca_config mca_cfg __read_mostly = {
> @@ -695,7 +700,7 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
>  		m.tsc = rdtsc();
>  
>  	for (i = 0; i < mca_cfg.banks; i++) {
> -		if (!mce_banks[i].ctl || !test_bit(i, *b))
> +		if (!this_cpu_read(mce_banks)[i].ctl || !test_bit(i, *b))
>  			continue;
>  
>  		m.misc = 0;
> @@ -1138,7 +1143,7 @@ static void __mc_scan_banks(struct mce *m, struct mce *final,
>  		if (!test_bit(i, valid_banks))
>  			continue;
>  
> -		if (!mce_banks[i].ctl)
> +		if (!this_cpu_read(mce_banks)[i].ctl)
>  			continue;
>  
>  		m->misc = 0;
> @@ -1475,16 +1480,19 @@ static int __mcheck_cpu_mce_banks_init(void)
>  {
>  	int i;
>  
> -	mce_banks = kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
> -	if (!mce_banks)
> +	per_cpu(mce_banks, smp_processor_id()) =
> +		kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
> +
> +	if (!this_cpu_read(mce_banks))
>  		return -ENOMEM;
>  
>  	for (i = 0; i < MAX_NR_BANKS; i++) {
> -		struct mce_bank *b = &mce_banks[i];
> +		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
>  
>  		b->ctl = -1ULL;
>  		b->init = 1;
>  	}
> +
>  	return 0;
>  }

Instead of doing all those per-CPU accesses in the function, you can use
a local pointer and assign it once in the end, before returning.

Also, fix the similar situation where you have multiple per-CPU accesses
in a single function - assign to a local pointer instead and do all the
accesses through it.

> @@ -1504,7 +1512,7 @@ static int __mcheck_cpu_cap_init(void)
>  
>  	mca_cfg.banks = max(mca_cfg.banks, b);
>  
> -	if (!mce_banks) {
> +	if (!this_cpu_read(mce_banks)) {
>  		int err = __mcheck_cpu_mce_banks_init();
>  		if (err)
>  			return err;
> @@ -1547,7 +1555,7 @@ static void __mcheck_cpu_init_clear_banks(void)
>  	int i;
>  
>  	for (i = 0; i < mca_cfg.banks; i++) {
> -		struct mce_bank *b = &mce_banks[i];
> +		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
>  
>  		if (!b->init)
>  			continue;
> @@ -1602,7 +1610,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
>  			 * trips off incorrectly with the IOMMU & 3ware
>  			 * & Cerberus:
>  			 */
> -			clear_bit(10, (unsigned long *)&mce_banks[4].ctl);
> +			clear_bit(10, (unsigned long *)&this_cpu_read(mce_banks)[4].ctl);
>  		}
>  		if (c->x86 < 0x11 && cfg->bootlog < 0) {
>  			/*
> @@ -1616,7 +1624,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
>  		 * by default.
>  		 */
>  		if (c->x86 == 6 && cfg->banks > 0)
> -			mce_banks[0].ctl = 0;
> +			this_cpu_read(mce_banks)[0].ctl = 0;
>  
>  		/*
>  		 * overflow_recov is supported for F15h Models 00h-0fh
> @@ -1638,7 +1646,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
>  		 */
>  
>  		if (c->x86 == 6 && c->x86_model < 0x1A && cfg->banks > 0)
> -			mce_banks[0].init = 0;
> +			this_cpu_read(mce_banks)[0].init = 0;
>  
>  		/*
>  		 * All newer Intel systems support MCE broadcasting. Enable
> @@ -1952,7 +1960,7 @@ static void mce_disable_error_reporting(void)
>  	int i;
>  
>  	for (i = 0; i < mca_cfg.banks; i++) {
> -		struct mce_bank *b = &mce_banks[i];
> +		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
>  
>  		if (b->init)
>  			wrmsrl(msr_ops.ctl(i), 0);
> @@ -2051,26 +2059,41 @@ static struct bus_type mce_subsys = {
>  
>  DEFINE_PER_CPU(struct device *, mce_device);
>  
> -static inline struct mce_bank *attr_to_bank(struct device_attribute *attr)
> +static inline struct mce_bank_dev *attr_to_bank(struct device_attribute *attr)
>  {
> -	return container_of(attr, struct mce_bank, attr);
> +	return container_of(attr, struct mce_bank_dev, attr);
>  }
>  
>  static ssize_t show_bank(struct device *s, struct device_attribute *attr,
>  			 char *buf)
>  {
> -	return sprintf(buf, "%llx\n", attr_to_bank(attr)->ctl);
> +	struct mce_bank *b;
> +	u8 bank = attr_to_bank(attr)->bank;

Please sort function local variables declaration in a reverse christmas
tree order:

	<type A> longest_variable_name;
	<type B> shorter_var_name;
	<type C> even_shorter;
	<type D> i;

> +
> +	if (bank >= mca_cfg.banks)
> +		return -EINVAL;
> +
> +	b = &per_cpu(mce_banks, s->id)[bank];
> +
> +	return sprintf(buf, "%llx\n", b->ctl);
>  }
>  
>  static ssize_t set_bank(struct device *s, struct device_attribute *attr,
>  			const char *buf, size_t size)
>  {
>  	u64 new;
> +	struct mce_bank *b;
> +	u8 bank = attr_to_bank(attr)->bank;

Ditto.

>  	if (kstrtou64(buf, 0, &new) < 0)
>  		return -EINVAL;
>  
> -	attr_to_bank(attr)->ctl = new;
> +	if (bank >= mca_cfg.banks)
> +		return -EINVAL;
> +
> +	b = &per_cpu(mce_banks, s->id)[bank];
> +
> +	b->ctl = new;
>  	mce_restart();
>  
>  	return size;
> @@ -2185,7 +2208,7 @@ static void mce_device_release(struct device *dev)
>  	kfree(dev);
>  }
>  
> -/* Per cpu device init. All of the cpus still share the same ctrl bank: */
> +/* Per cpu device init. All of the cpus still share the same bank device: */

s/cpu/CPU/g, while at it.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 3/6] x86/MCE/AMD: Don't cache block addresses on SMCA systems
  2019-04-11 20:18 ` [PATCH v2 3/6] x86/MCE/AMD: Don't cache block addresses on SMCA systems Ghannam, Yazen
@ 2019-04-16 13:44   ` Borislav Petkov
  0 siblings, 0 replies; 11+ messages in thread
From: Borislav Petkov @ 2019-04-16 13:44 UTC (permalink / raw)
  To: Ghannam, Yazen; +Cc: linux-edac, linux-kernel, tony.luck, x86

On Thu, Apr 11, 2019 at 08:18:02PM +0000, Ghannam, Yazen wrote:
> From: Yazen Ghannam <yazen.ghannam@amd.com>
> 
> On legacy systems, the addresses of the MCA_MISC* registers need to be
> recursively discovered based on a Block Pointer field in the registers.
> 
> On Scalable MCA systems, the register space is fixed, and particular
> addresses can be derived by regular offsets for bank and register type.
> This fixed address space includes the MCA_MISC* registers.
> 
> MCA_MISC0 is always available for each MCA bank. MCA_MISC1 through
> MCA_MISC4 are considered available if MCA_MISC0[BlkPtr]=1.
> 
> Cache the value of MCA_MISC0[BlkPtr] for each bank and per CPU. This
> needs to be done only during init. The values should be saved per CPU
> to accommodate heterogeneous SMCA systems.
> 
> Redo smca_get_block_address() to directly return the block addresses.
> Use the cached Block Pointer value to decide if the MCA_MISC1-4
> addresses should be returned.
> 
> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
> ---
> Link:
> https://lkml.kernel.org/r/20190408141205.12376-4-Yazen.Ghannam@amd.com
> 
> v1->v2:
> * No change.
> 
>  arch/x86/kernel/cpu/mce/amd.c | 71 +++++++++++++++++------------------
>  1 file changed, 35 insertions(+), 36 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
> index e64de5149e50..f0644b59848d 100644
> --- a/arch/x86/kernel/cpu/mce/amd.c
> +++ b/arch/x86/kernel/cpu/mce/amd.c
> @@ -101,11 +101,6 @@ static struct smca_bank_name smca_names[] = {
>  	[SMCA_PCIE]	= { "pcie",		"PCI Express Unit" },
>  };
>  
> -static u32 smca_bank_addrs[MAX_NR_BANKS][NR_BLOCKS] __ro_after_init =
> -{
> -	[0 ... MAX_NR_BANKS - 1] = { [0 ... NR_BLOCKS - 1] = -1 }
> -};
> -
>  static const char *smca_get_name(enum smca_bank_types t)
>  {
>  	if (t >= N_SMCA_BANK_TYPES)
> @@ -198,6 +193,7 @@ static char buf_mcatype[MAX_MCATYPE_NAME_LEN];
>  
>  static DEFINE_PER_CPU(struct threshold_bank **, threshold_banks);
>  static DEFINE_PER_CPU(unsigned int, bank_map);	/* see which banks are on */
> +static DEFINE_PER_CPU(u32, smca_blkptr_map); /* see which banks use BlkPtr */

smca_misc_banks_map I guess. BlkPtr means something only to you, PPR
writers and me. And a couple others... :-)

>  static void amd_threshold_interrupt(void);
>  static void amd_deferred_error_interrupt(void);
> @@ -208,6 +204,28 @@ static void default_deferred_error_interrupt(void)
>  }
>  void (*deferred_error_int_vector)(void) = default_deferred_error_interrupt;
>  
> +static void smca_set_blkptr_map(unsigned int bank, unsigned int cpu)

Accordingly...

> +{
> +	u32 low, high;
> +
> +	/*
> +	 * For SMCA enabled processors, BLKPTR field of the first MISC register
> +	 * (MCx_MISC0) indicates presence of additional MISC regs set (MISC1-4).
> +	 */
> +	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_CONFIG(bank), &low, &high))
> +		return;
> +
> +	if (!(low & MCI_CONFIG_MCAX))
> +		return;
> +
> +	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_MISC(bank), &low, &high))
> +		return;
> +
> +	if (low & MASK_BLKPTR_LO)
> +		per_cpu(smca_blkptr_map, cpu) |= 1 << bank;

BIT(bank)

> +
> +}
> +
>  static void smca_configure(unsigned int bank, unsigned int cpu)
>  {
>  	unsigned int i, hwid_mcatype;
> @@ -245,6 +263,8 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
>  		wrmsr(smca_config, low, high);
>  	}
>  
> +	smca_set_blkptr_map(bank, cpu);
> +
>  	/* Return early if this bank was already initialized. */
>  	if (smca_banks[bank].hwid)
>  		return;
> @@ -455,42 +475,21 @@ static void deferred_error_interrupt_enable(struct cpuinfo_x86 *c)
>  	wrmsr(MSR_CU_DEF_ERR, low, high);
>  }
>  
> -static u32 smca_get_block_address(unsigned int bank, unsigned int block)
> +static u32 smca_get_block_address(unsigned int bank, unsigned int block,
> +				  unsigned int cpu)
>  {
> -	u32 low, high;
> -	u32 addr = 0;
> -
> -	if (smca_get_bank_type(bank) == SMCA_RESERVED)
> -		return addr;
> -
>  	if (!block)
>  		return MSR_AMD64_SMCA_MCx_MISC(bank);
>  
> -	/* Check our cache first: */
> -	if (smca_bank_addrs[bank][block] != -1)
> -		return smca_bank_addrs[bank][block];
> -
> -	/*
> -	 * For SMCA enabled processors, BLKPTR field of the first MISC register
> -	 * (MCx_MISC0) indicates presence of additional MISC regs set (MISC1-4).
> -	 */
> -	if (rdmsr_safe(MSR_AMD64_SMCA_MCx_CONFIG(bank), &low, &high))
> -		goto out;
> -
> -	if (!(low & MCI_CONFIG_MCAX))
> -		goto out;
> -
> -	if (!rdmsr_safe(MSR_AMD64_SMCA_MCx_MISC(bank), &low, &high) &&
> -	    (low & MASK_BLKPTR_LO))
> -		addr = MSR_AMD64_SMCA_MCx_MISCy(bank, block - 1);
> +	if (!(per_cpu(smca_blkptr_map, cpu) & (1 << bank)))

BIT(bank)

> +		return 0;
>  
> -out:
> -	smca_bank_addrs[bank][block] = addr;
> -	return addr;
> +	return MSR_AMD64_SMCA_MCx_MISCy(bank, block - 1);
>  }

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu
  2019-04-11 20:18 ` [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu Ghannam, Yazen
@ 2019-04-16 13:55   ` Borislav Petkov
  2019-04-16 17:36   ` Borislav Petkov
  1 sibling, 0 replies; 11+ messages in thread
From: Borislav Petkov @ 2019-04-16 13:55 UTC (permalink / raw)
  To: Ghannam, Yazen; +Cc: linux-edac, linux-kernel, tony.luck, x86

On Thu, Apr 11, 2019 at 08:18:03PM +0000, Ghannam, Yazen wrote:
> @@ -1478,15 +1480,16 @@ EXPORT_SYMBOL_GPL(mce_notify_irq);
>  
>  static int __mcheck_cpu_mce_banks_init(void)
>  {
> +	u8 n_banks = this_cpu_read(mce_num_banks);
>  	int i;
>  
>  	per_cpu(mce_banks, smp_processor_id()) =
> -		kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
> +		kcalloc(n_banks, sizeof(struct mce_bank), GFP_KERNEL);
>  
>  	if (!this_cpu_read(mce_banks))
>  		return -ENOMEM;
>  
> -	for (i = 0; i < MAX_NR_BANKS; i++) {
> +	for (i = 0; i < n_banks; i++) {
>  		struct mce_bank *b = &this_cpu_read(mce_banks)[i];
>  
>  		b->ctl = -1ULL;
> @@ -1507,10 +1510,15 @@ static int __mcheck_cpu_cap_init(void)
>  	rdmsrl(MSR_IA32_MCG_CAP, cap);
>  
>  	b = cap & MCG_BANKCNT_MASK;
> -	if (WARN_ON_ONCE(b > MAX_NR_BANKS))
> +	pr_debug("CPU%d supports %d MCE banks\n", smp_processor_id(), b);

What for?

You're removing the "Using" message because, yeah, it was worthless to
people. If anyone wants to count the banks:

$ ls /sys/devices/system/machinecheck/machinecheck0/bank* | wc -l
23

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu
  2019-04-11 20:18 ` [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu Ghannam, Yazen
  2019-04-16 13:55   ` Borislav Petkov
@ 2019-04-16 17:36   ` Borislav Petkov
  1 sibling, 0 replies; 11+ messages in thread
From: Borislav Petkov @ 2019-04-16 17:36 UTC (permalink / raw)
  To: Ghannam, Yazen; +Cc: linux-edac, linux-kernel, tony.luck, x86

On Thu, Apr 11, 2019 at 08:18:03PM +0000, Ghannam, Yazen wrote:
> From: Yazen Ghannam <yazen.ghannam@amd.com>
> 
> The number of MCA banks is provided per logical CPU. Historically, this
> number has been the same across all CPUs, but this is not an
> architectural guarantee. Future AMD systems may have MCA bank counts
> that vary between logical CPUs in a system.
> 
> This issue was partially addressed in
> 
> commit ("006c077041dc x86/mce: Handle varying MCA bank counts")

Add that to your .gitconfig:

[alias]
        one = show -s --pretty='format:%h (\"%s\")'

so that you can quote commits properly:

$ git one 006c077041dc73b9490fffc4c6af5befe0687110
006c077041dc ("x86/mce: Handle varying MCA bank counts")

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2019-04-16 17:36 UTC | newest]

Thread overview: 11+ messages
-- links below jump to the message on this page --
2019-04-11 20:18 [PATCH v2 0/6] Handle MCA banks in a per_cpu way Ghannam, Yazen
2019-04-11 20:18 ` [PATCH v2 1/6] x86/MCE: Make struct mce_banks[] static Ghannam, Yazen
2019-04-11 20:18 ` [PATCH v2 2/6] x86/MCE: Handle MCA controls in a per_cpu way Ghannam, Yazen
2019-04-16 10:21   ` Borislav Petkov
2019-04-11 20:18 ` [PATCH v2 3/6] x86/MCE/AMD: Don't cache block addresses on SMCA systems Ghannam, Yazen
2019-04-16 13:44   ` Borislav Petkov
2019-04-11 20:18 ` [PATCH v2 4/6] x86/MCE: Make number of MCA banks per_cpu Ghannam, Yazen
2019-04-16 13:55   ` Borislav Petkov
2019-04-16 17:36   ` Borislav Petkov
2019-04-11 20:18 ` [PATCH v2 6/6] x86/MCE: Treat MCE bank as initialized if control bits set in hardware Ghannam, Yazen
2019-04-11 20:18 ` [PATCH v2 5/6] x86/MCE: Save MCA control bits that get " Ghannam, Yazen
