linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation
@ 2016-07-31 19:06 Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 01/11] Add #defs for paca->soft_enabled flags Madhavan Srinivasan
                   ` (10 more replies)
  0 siblings, 11 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Local atomic operations are fast and highly reentrant per-CPU counters,
used for per-CPU variable updates. They only guarantee atomicity of the
variable modification with respect to the CPU which owns the data, so
they need to be executed in a preemption-safe way.

Here is the design of the patchset. Since local_* operations only need
to be atomic with respect to interrupts (IIUC), we have two options:
either replay the "op" if interrupted, or replay the interrupt after
the "op". The initial patchset posted implemented local_* operations
based on CR5, which replays the "op". That patchset had issues when
rewinding the address pointer from an array, which made the slow path
really slow. Since the CR5-based implementation proposed using __ex_table
to find the rewind address, it also raised concerns about the size of
__ex_table and vmlinux.

https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-December/123115.html

This patchset instead uses Benjamin Herrenschmidt's suggestion of using
arch_local_irq_disable() to soft-disable interrupts (including PMIs).
After finishing the "op", arch_local_irq_restore() is called and any
interrupts that occurred in the meantime are replayed.

The current paca->soft_enabled logic is reversed, and the
MASKABLE_EXCEPTION_* macros are extended to support this feature.

The patchset rewrites the current local_* functions to use
arch_local_irq_disable(). The base flow for each function is:

 {
        soft_irq_set_level(2)
        load
        ..
        store
        arch_local_irq_restore()
 }
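A user-space sketch of this base flow, assuming the hypothetical helper names from this cover letter (soft_irq_set_level(), arch_local_irq_restore()); the real kernel versions operate on the paca and replay pending interrupts:

```c
#include <assert.h>

/* Hypothetical user-space stand-ins for the paca state and masking
 * levels; names follow the cover letter, values are assumptions. */
#define IRQ_DISABLE_LEVEL_NONE	0
#define IRQ_DISABLE_LEVEL_PMU	2

static unsigned long soft_enabled = IRQ_DISABLE_LEVEL_NONE;

static unsigned long soft_irq_set_level(unsigned long level)
{
	unsigned long old = soft_enabled;

	soft_enabled = level;	/* interrupts up to `level` are now masked */
	return old;
}

static void arch_local_irq_restore(unsigned long level)
{
	soft_enabled = level;	/* the kernel would replay pending irqs here */
}

typedef struct { long counter; } local_t;

/* Base flow from above: mask (including PMIs), load, modify, store,
 * then restore and replay. */
static void local_add(long i, local_t *l)
{
	unsigned long flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);

	l->counter += i;	/* load .. store */
	arch_local_irq_restore(flags);
}
```

Since the masking is a plain per-CPU store rather than an larx/stcx. pair, the fast path is just a byte store, two memory accesses, and another byte store.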

The reason for this approach is that the l[w/d]arx/st[w/d]cx.
instruction pair currently used for local_* operations is heavy on
cycle count, and it does not have a local variant. To see whether the
new implementation helps, a modified version of Rusty's local_t
benchmark code was used.

https://lkml.org/lkml/2008/12/16/450

Modifications to Rusty's benchmark code:
 - Executed only local_t test

Here are the values with the patch.

Time in ns per iteration

Local_t         Without Patch           With Patch

_inc                    28              8
_add                    28              8
_read                   3               3
_add_return             28              7

Currently only asm/local.h has been rewritten, and the change has been
tested only on PPC64 (pseries guest).

The first four patches are cleanups which lay the foundation to make
things easier. The fifth patch reverses the current soft_enabled logic;
its commit message details the reason and need for this change. The
sixth and seventh patches refactor the __EXCEPTION_PROLOG_1 code to
support the addition of a new parameter to the MASKABLE_* macros. The
new parameter gives the possible masking level for the interrupt. The
rest of the patches add support for a maskable PMI and the
implementation of local_t using arch_local_irq_*().

Since the patchset is experimental, the changes are focused on the
pseries and powernv platforms only. Comments on this approach would be
appreciated before it is extended to other powerpc platforms.

Tested the patchset on a
 - pSeries LPAR (with perf record).
	- Ran kernbench with perf record for 24 hours.
	- More testing needed.

Changes from RFC v1:

1) Commit messages are improved.
2) Renamed arch_local_irq_disable_var to soft_irq_set_level, as suggested.
3) Renamed the LAZY_INTERRUPT* macros to IRQ_DISABLE_LEVEL_*, as suggested.
4) Extended the MASKABLE_EXCEPTION* macros to support an additional parameter.
5) Each MASKABLE_EXCEPTION_* macro now carries a "mask_level".
6) The logic to decide on the jump to the maskable handler in SOFTEN_TEST is
   now based on "mask_level".
7) __EXCEPTION_PROLOG_1 is factored out to support the "mask_level" parameter.
   This reduced the code changes needed to support "mask_level".

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>

Madhavan Srinivasan (11):
  Add #defs for paca->soft_enabled flags
  Cleanup to use IRQ_DISABLE_LEVEL_* macros for paca->soft_enabled
    update
  powerpc: move set_soft_enabled()
  powerpc: Use set_soft_enabled api to update paca->soft_enabled
  powerpc: reverse the soft_enable logic
  powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  powerpc: Add new _EXCEPTION_PROLOG_1 macro
  powerpc: Add "mask_lvl" paramater to MASKABLE_* macros
  powerpc: Add support to mask perf interrupts
  powerpc: Support to replay PMIs
  powerpc: rewrite local_t using soft_irq

 arch/powerpc/include/asm/exception-64s.h | 106 +++++++++++++++++++++----------
 arch/powerpc/include/asm/hw_irq.h        |  46 ++++++++++++--
 arch/powerpc/include/asm/irqflags.h      |   8 +--
 arch/powerpc/include/asm/kvm_ppc.h       |   2 +-
 arch/powerpc/include/asm/local.h         |  91 ++++++++++++++++++--------
 arch/powerpc/kernel/entry_64.S           |  16 ++---
 arch/powerpc/kernel/exceptions-64s.S     |  46 ++++++++++----
 arch/powerpc/kernel/head_64.S            |   3 +-
 arch/powerpc/kernel/idle_power4.S        |   3 +-
 arch/powerpc/kernel/irq.c                |  24 +++----
 arch/powerpc/kernel/process.c            |   3 +-
 arch/powerpc/kernel/setup_64.c           |   5 +-
 arch/powerpc/kernel/time.c               |   4 +-
 arch/powerpc/mm/hugetlbpage.c            |   2 +-
 arch/powerpc/perf/core-book3s.c          |   2 +-
 15 files changed, 247 insertions(+), 114 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 01/11] Add #defs for paca->soft_enabled flags
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 02/11] Cleanup to use IRQ_DISABLE_LEVEL_* macros for paca->soft_enabled update Madhavan Srinivasan
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Two #defines, IRQ_DISABLE_LEVEL_NONE and
IRQ_DISABLE_LEVEL_LINUX, are added to be used
when updating paca->soft_enabled.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index b59ac27a6b7d..08c59b7da033 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -27,6 +27,13 @@
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
 #define PACA_IRQ_HMI		0x20
 
+/*
+ * flags for paca->soft_enabled
+ */
+#define IRQ_DISABLE_LEVEL_NONE		1
+#define IRQ_DISABLE_LEVEL_LINUX		0
+
+
 #endif /* CONFIG_PPC64 */
 
 #ifndef __ASSEMBLY__
-- 
2.7.4


* [RFC PATCH v2 02/11] Cleanup to use IRQ_DISABLE_LEVEL_* macros for paca->soft_enabled update
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 01/11] Add #defs for paca->soft_enabled flags Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 03/11] powerpc: move set_soft_enabled() Madhavan Srinivasan
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Replace the hardcoded values used when updating
paca->soft_enabled with the IRQ_DISABLE_LEVEL_* #defines.
No logic change.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h |  2 +-
 arch/powerpc/include/asm/hw_irq.h        | 15 ++++++++-------
 arch/powerpc/include/asm/irqflags.h      |  6 +++---
 arch/powerpc/include/asm/kvm_ppc.h       |  2 +-
 arch/powerpc/kernel/entry_64.S           | 14 +++++++-------
 arch/powerpc/kernel/head_64.S            |  3 ++-
 arch/powerpc/kernel/idle_power4.S        |  3 ++-
 arch/powerpc/kernel/irq.c                |  9 +++++----
 arch/powerpc/kernel/process.c            |  3 ++-
 arch/powerpc/kernel/setup_64.c           |  3 +++
 arch/powerpc/kernel/time.c               |  2 +-
 arch/powerpc/mm/hugetlbpage.c            |  2 +-
 arch/powerpc/perf/core-book3s.c          |  2 +-
 13 files changed, 37 insertions(+), 29 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 93ae809fe5ea..a664586301d2 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -406,7 +406,7 @@ label##_relon_hv:						\
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	cmpwi	r10,0;							\
+	cmpwi	r10,IRQ_DISABLE_LEVEL_LINUX;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	beq	masked_##h##interrupt
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 08c59b7da033..eb4cc85b930e 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -65,9 +65,10 @@ static inline unsigned long arch_local_irq_disable(void)
 	unsigned long flags, zero;
 
 	asm volatile(
-		"li %1,0; lbz %0,%2(13); stb %1,%2(13)"
+		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
 		: "=r" (flags), "=&r" (zero)
-		: "i" (offsetof(struct paca_struct, soft_enabled))
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "i" (IRQ_DISABLE_LEVEL_LINUX)
 		: "memory");
 
 	return flags;
@@ -77,7 +78,7 @@ extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
 {
-	arch_local_irq_restore(1);
+	arch_local_irq_restore(IRQ_DISABLE_LEVEL_NONE);
 }
 
 static inline unsigned long arch_local_irq_save(void)
@@ -87,7 +88,7 @@ static inline unsigned long arch_local_irq_save(void)
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags == 0;
+	return flags == IRQ_DISABLE_LEVEL_LINUX;
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -107,9 +108,9 @@ static inline bool arch_irqs_disabled(void)
 	u8 _was_enabled;				\
 	__hard_irq_disable();				\
 	_was_enabled = local_paca->soft_enabled;	\
-	local_paca->soft_enabled = 0;			\
+	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_LINUX;\
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;	\
-	if (_was_enabled)				\
+	if (_was_enabled == IRQ_DISABLE_LEVEL_NONE)	\
 		trace_hardirqs_off();			\
 } while(0)
 
@@ -132,7 +133,7 @@ static inline void may_hard_irq_enable(void)
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
 {
-	return !regs->softe;
+	return (regs->softe == IRQ_DISABLE_LEVEL_LINUX);
 }
 
 extern bool prep_irq_for_idle(void);
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index f2149066fe5d..2796eceb5707 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -48,8 +48,8 @@
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACASOFTIRQEN(r13);	\
 	lbz	__rB,PACAIRQHAPPENED(r13);	\
-	cmpwi	cr0,__rA,0;			\
-	li	__rA,0;				\
+	cmpwi	cr0,__rA,IRQ_DISABLE_LEVEL_LINUX;\
+	li	__rA,IRQ_DISABLE_LEVEL_LINUX;	\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
 	beq	44f;				\
@@ -63,7 +63,7 @@
 
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACAIRQHAPPENED(r13);	\
-	li	__rB,0;				\
+	li	__rB,IRQ_DISABLE_LEVEL_LINUX;	\
 	ori	__rA,__rA,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACASOFTIRQEN(r13);	\
 	stb	__rA,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 2544edabe7f3..fec6d5e92dab 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -707,7 +707,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = 1;
+	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_NONE;
 #endif
 }
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 73e461a3dfbb..47ab7ac3d039 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -147,7 +147,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	/* We do need to set SOFTE in the stack frame or the return
 	 * from interrupt will be painful
 	 */
-	li	r10,1
+	li	r10,IRQ_DISABLE_LEVEL_NONE
 	std	r10,SOFTE(r1)
 
 	CURRENT_THREAD_INFO(r11, r1)
@@ -725,7 +725,7 @@ resume_kernel:
 	lwz	r8,TI_PREEMPT(r9)
 	cmpwi	cr1,r8,0
 	ld	r0,SOFTE(r1)
-	cmpdi	r0,0
+	cmpdi	r0,IRQ_DISABLE_LEVEL_LINUX
 	crandc	eq,cr1*4+eq,eq
 	bne	restore
 
@@ -765,11 +765,11 @@ restore:
 	 */
 	ld	r5,SOFTE(r1)
 	lbz	r6,PACASOFTIRQEN(r13)
-	cmpwi	cr0,r5,0
+	cmpwi	cr0,r5,IRQ_DISABLE_LEVEL_LINUX
 	beq	restore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
-	cmpwi	cr0,r6,1
+	cmpwi	cr0,r6,IRQ_DISABLE_LEVEL_NONE
 	beq	cr0,do_restore
 
 	/*
@@ -788,7 +788,7 @@ restore:
 	 */
 restore_no_replay:
 	TRACE_ENABLE_INTS
-	li	r0,1
+	li	r0,IRQ_DISABLE_LEVEL_NONE
 	stb	r0,PACASOFTIRQEN(r13);
 
 	/*
@@ -894,7 +894,7 @@ restore_irq_off:
 	beq	1f
 	rlwinm	r7,r7,0,~PACA_IRQ_HARD_DIS
 	stb	r7,PACAIRQHAPPENED(r13)
-1:	li	r0,0
+1:	li	r0,IRQ_DISABLE_LEVEL_LINUX
 	stb	r0,PACASOFTIRQEN(r13);
 	TRACE_DISABLE_INTS
 	b	do_restore
@@ -1012,7 +1012,7 @@ _GLOBAL(enter_rtas)
 	 * check it with the asm equivalent of WARN_ON
 	 */
 	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdnei	r0,0
+1:	tdnei	r0,IRQ_DISABLE_LEVEL_LINUX
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 2d14774af6b4..4beba58ccb63 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -789,6 +789,7 @@ __secondary_start:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
+	li	r7,IRQ_DISABLE_LEVEL_LINUX
 	stb	r7,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -957,7 +958,7 @@ start_here_common:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,0
+	li	r0,IRQ_DISABLE_LEVEL_LINUX
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/kernel/idle_power4.S b/arch/powerpc/kernel/idle_power4.S
index f57a19348bdd..6d61113ecd92 100644
--- a/arch/powerpc/kernel/idle_power4.S
+++ b/arch/powerpc/kernel/idle_power4.S
@@ -14,6 +14,7 @@
 #include <asm/thread_info.h>
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
+#include <asm/hw_irq.h>
 #include <asm/irqflags.h>
 
 #undef DEBUG
@@ -53,7 +54,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_CAN_NAP)
 	mfmsr	r7
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	li	r0,1
+	li	r0,IRQ_DISABLE_LEVEL_NONE
 	stb	r0,PACASOFTIRQEN(r13)	/* we'll hard-enable shortly */
 BEGIN_FTR_SECTION
 	DSSALL
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 3cb46a3b1de7..0c12e90f2cc7 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -67,6 +67,7 @@
 #include <asm/smp.h>
 #include <asm/debug.h>
 #include <asm/livepatch.h>
+#include <asm/hw_irq.h>
 
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
@@ -207,7 +208,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* Write the new soft-enabled value */
 	set_soft_enabled(en);
-	if (!en)
+	if (en == IRQ_DISABLE_LEVEL_LINUX)
 		return;
 	/*
 	 * From this point onward, we can take interrupts, preempt,
@@ -252,7 +253,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	}
 #endif /* CONFIG_TRACE_IRQFLAG */
 
-	set_soft_enabled(0);
+	set_soft_enabled(IRQ_DISABLE_LEVEL_LINUX);
 
 	/*
 	 * Check if anything needs to be re-emitted. We haven't
@@ -262,7 +263,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	replay = __check_irq_replay();
 
 	/* We can soft-enable now */
-	set_soft_enabled(1);
+	set_soft_enabled(IRQ_DISABLE_LEVEL_NONE);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
@@ -336,7 +337,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = 1;
+	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_NONE;
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 0b93893424f5..a7201deb4620 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -54,6 +54,7 @@
 #include <asm/debug.h>
 #ifdef CONFIG_PPC64
 #include <asm/firmware.h>
+#include <asm/hw_irq.h>
 #endif
 #include <asm/code-patching.h>
 #include <asm/exec.h>
@@ -1418,7 +1419,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 			childregs->gpr[14] = ppc_function_entry((void *)usp);
 #ifdef CONFIG_PPC64
 		clear_tsk_thread_flag(p, TIF_32BIT);
-		childregs->softe = 1;
+		childregs->softe = IRQ_DISABLE_LEVEL_NONE;
 #endif
 		childregs->gpr[15] = kthread_arg;
 		p->thread.regs = NULL;	/* no user register state */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 96d4a2b23d0f..8b60b894abeb 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -70,6 +70,7 @@
 #include <asm/hugetlb.h>
 #include <asm/epapr_hcalls.h>
 #include <asm/livepatch.h>
+#include <asm/hw_irq.h>
 
 #ifdef DEBUG
 #define DBG(fmt...) udbg_printf(fmt)
@@ -204,6 +205,8 @@ static void fixup_boot_paca(void)
 	get_paca()->cpu_start = 1;
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
+	/* Mark interrupts disabled in PACA */
+	get_paca()->soft_enabled = IRQ_DISABLE_LEVEL_LINUX;
 }
 
 static void cpu_ready_for_interrupts(void)
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 3ed9a5a21d77..6bf2546ac372 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -258,7 +258,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = 0;
+	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_LINUX;
 
 	sst = scan_dispatch_log(local_paca->starttime_user);
 	ust = scan_dispatch_log(local_paca->starttime);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 119d18611500..22bebf2b4bc5 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -907,7 +907,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * So long as we atomically load page table pointers we are safe against teardown,
  * we can follow the address down to the the page and take a ref on it.
  * This function need to be called with interrupts disabled. We use this variant
- * when we have MSR[EE] = 0 but the paca->soft_enabled = 1
+ * when we have MSR[EE] = 0 but the paca->soft_enabled = IRQ_DISABLE_LEVEL_NONE
  */
 
 pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 141c289ae492..06354a345611 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -313,7 +313,7 @@ static inline void perf_read_regs(struct pt_regs *regs)
  */
 static inline int perf_intr_is_nmi(struct pt_regs *regs)
 {
-	return !regs->softe;
+	return (regs->softe == IRQ_DISABLE_LEVEL_LINUX);
 }
 
 /*
-- 
2.7.4


* [RFC PATCH v2 03/11] powerpc: move set_soft_enabled()
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 01/11] Add #defs for paca->soft_enabled flags Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 02/11] Cleanup to use IRQ_DISABLE_LEVEL_* macros for paca->soft_enabled update Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 04/11] powerpc: Use set_soft_enabled api to update paca->soft_enabled Madhavan Srinivasan
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Move set_soft_enabled() from arch/powerpc/kernel/irq.c to
asm/hw_irq.h. This way, updates to paca->soft_enabled can be
funnelled through this helper wherever possible.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 6 ++++++
 arch/powerpc/kernel/irq.c         | 6 ------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index eb4cc85b930e..ba4ade085aef 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -48,6 +48,12 @@ extern void unknown_exception(struct pt_regs *regs);
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
 
+static inline notrace void set_soft_enabled(unsigned long enable)
+{
+	__asm__ __volatile__("stb %0,%1(13)"
+	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	unsigned long flags;
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 0c12e90f2cc7..1d0e82a6c861 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -106,12 +106,6 @@ static inline notrace unsigned long get_irq_happened(void)
 	return happened;
 }
 
-static inline notrace void set_soft_enabled(unsigned long enable)
-{
-	__asm__ __volatile__("stb %0,%1(13)"
-	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
-}
-
 static inline notrace int decrementer_check_overflow(void)
 {
  	u64 now = get_tb_or_rtc();
-- 
2.7.4


* [RFC PATCH v2 04/11] powerpc: Use set_soft_enabled api to update paca->soft_enabled
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (2 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 03/11] powerpc: move set_soft_enabled() Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 05/11] powerpc: reverse the soft_enable logic Madhavan Srinivasan
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/kvm_ppc.h | 2 +-
 arch/powerpc/kernel/irq.c          | 2 +-
 arch/powerpc/kernel/setup_64.c     | 4 ++--
 arch/powerpc/kernel/time.c         | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index fec6d5e92dab..103c5501982d 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -707,7 +707,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_NONE;
+	set_soft_enabled(IRQ_DISABLE_LEVEL_NONE);
 #endif
 }
 
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 1d0e82a6c861..84edd25c8d51 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -331,7 +331,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_NONE;
+	set_soft_enabled(IRQ_DISABLE_LEVEL_NONE);
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 8b60b894abeb..389caca3d5d2 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -206,7 +206,7 @@ static void fixup_boot_paca(void)
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = IRQ_DISABLE_LEVEL_LINUX;
+	set_soft_enabled(IRQ_DISABLE_LEVEL_LINUX);
 }
 
 static void cpu_ready_for_interrupts(void)
@@ -325,7 +325,7 @@ void __init early_setup(unsigned long dt_ptr)
 void early_setup_secondary(void)
 {
 	/* Mark interrupts enabled in PACA */
-	get_paca()->soft_enabled = 0;
+	set_soft_enabled(IRQ_DISABLE_LEVEL_LINUX);
 
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 6bf2546ac372..42f6ca367b09 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -258,7 +258,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = IRQ_DISABLE_LEVEL_LINUX;
+	set_soft_enabled(IRQ_DISABLE_LEVEL_LINUX);
 
 	sst = scan_dispatch_log(local_paca->starttime_user);
 	ust = scan_dispatch_log(local_paca->starttime);
@@ -266,7 +266,7 @@ void accumulate_stolen_time(void)
 	local_paca->user_time -= ust;
 	local_paca->stolen_time += ust + sst;
 
-	local_paca->soft_enabled = save_soft_enabled;
+	set_soft_enabled(save_soft_enabled);
 }
 
 static inline u64 calculate_stolen_time(u64 stop_tb)
-- 
2.7.4


* [RFC PATCH v2 05/11] powerpc: reverse the soft_enable logic
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (3 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 04/11] powerpc: Use set_soft_enabled api to update paca->soft_enabled Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 06/11] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

"paca->soft_enabled" is used as a flag to mask some of the interrupts.
The currently supported flag values and their details:

soft_enabled	MSR[EE]

0		0	Disabled (PMI and HMI not masked)
1		1	Enabled

"paca->soft_enabled" is initialized to 1 to mark interrupts as
enabled. arch_local_irq_disable() toggles the value when interrupts
need to be disabled. At this point, the interrupts are not actually
disabled; instead, the interrupt vector has code to check the flag and
mask the interrupt when it occurs. By "mask it", we mean the vector
code updates paca->irq_happened and returns. arch_local_irq_restore()
is called to re-enable interrupts, and it checks for and replays any
interrupts that occurred in the meantime.

Now, as mentioned, the current logic does not mask "performance
monitoring interrupts", and PMIs are implemented as NMIs. But this
patchset depends on local_irq_* for a successful local_* update,
meaning all possible interrupts must be masked during the local_*
update and replayed after it.

So the idea here is to reverse the "paca->soft_enabled" logic. The new
values and details:

soft_enabled	MSR[EE]

1		0	Disabled (PMI and HMI not masked)
0		1	Enabled

The reason for this change is to create the foundation for a third
"soft_enabled" flag value, "2", to add support for masking PMIs. When
->soft_enabled is set to "2", PMIs are masked; when it is set to "1",
PMIs are not masked.

This is a foundation patch to support checking of the new flag value
for "paca->soft_enabled". It modifies the condition checking for
"soft_enabled" from "equal" to "greater than or equal to".
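The "greater than or equal to" check can be illustrated in C; the level-2 name below (IRQ_DISABLE_LEVEL_PMU) is an assumption based on the later patches in this series:

```c
/* Reversed flag values from this patch; level 2 is introduced later. */
#define IRQ_DISABLE_LEVEL_NONE	0
#define IRQ_DISABLE_LEVEL_LINUX	1
#define IRQ_DISABLE_LEVEL_PMU	2	/* hypothetical name for the PMI level */

/* With the reversed values, a single ">=" test covers both
 * "Linux interrupts masked" (1) and "PMIs also masked" (2). */
static int irqs_disabled_flags(unsigned long flags)
{
	return flags >= IRQ_DISABLE_LEVEL_LINUX;
}
```

This is why the assembly comparisons below change from beq to bge against IRQ_DISABLE_LEVEL_LINUX.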

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 2 +-
 arch/powerpc/include/asm/hw_irq.h        | 8 ++++----
 arch/powerpc/include/asm/irqflags.h      | 2 +-
 arch/powerpc/kernel/entry_64.S           | 4 ++--
 arch/powerpc/kernel/irq.c                | 2 +-
 5 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index a664586301d2..bbba44c2d5b0 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -408,7 +408,7 @@ label##_relon_hv:						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
 	cmpwi	r10,IRQ_DISABLE_LEVEL_LINUX;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	beq	masked_##h##interrupt
+	bge	masked_##h##interrupt
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
 #define SOFTEN_TEST_PR(vec)						\
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index ba4ade085aef..0206a6c493c7 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -30,8 +30,8 @@
 /*
  * flags for paca->soft_enabled
  */
-#define IRQ_DISABLE_LEVEL_NONE		1
-#define IRQ_DISABLE_LEVEL_LINUX		0
+#define IRQ_DISABLE_LEVEL_NONE		0
+#define IRQ_DISABLE_LEVEL_LINUX		1
 
 
 #endif /* CONFIG_PPC64 */
@@ -94,7 +94,7 @@ static inline unsigned long arch_local_irq_save(void)
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags == IRQ_DISABLE_LEVEL_LINUX;
+	return flags >= IRQ_DISABLE_LEVEL_LINUX;
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -139,7 +139,7 @@ static inline void may_hard_irq_enable(void)
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
 {
-	return (regs->softe == IRQ_DISABLE_LEVEL_LINUX);
+	return (regs->softe >= IRQ_DISABLE_LEVEL_LINUX);
 }
 
 extern bool prep_irq_for_idle(void);
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index 2796eceb5707..dcc6c9abb9b9 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -52,7 +52,7 @@
 	li	__rA,IRQ_DISABLE_LEVEL_LINUX;	\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
-	beq	44f;				\
+	bge	44f;				\
 	stb	__rA,PACASOFTIRQEN(r13);	\
 	TRACE_DISABLE_INTS;			\
 44:
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 47ab7ac3d039..7d755442ed83 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -766,7 +766,7 @@ restore:
 	ld	r5,SOFTE(r1)
 	lbz	r6,PACASOFTIRQEN(r13)
 	cmpwi	cr0,r5,IRQ_DISABLE_LEVEL_LINUX
-	beq	restore_irq_off
+	bge	restore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
 	cmpwi	cr0,r6,IRQ_DISABLE_LEVEL_NONE
@@ -1012,7 +1012,7 @@ _GLOBAL(enter_rtas)
 	 * check it with the asm equivalent of WARN_ON
 	 */
 	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdnei	r0,IRQ_DISABLE_LEVEL_LINUX
+1:	tdeqi	r0,IRQ_DISABLE_LEVEL_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 84edd25c8d51..857e1e8188e5 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -202,7 +202,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* Write the new soft-enabled value */
 	set_soft_enabled(en);
-	if (en == IRQ_DISABLE_LEVEL_LINUX)
+	if (en >= IRQ_DISABLE_LEVEL_LINUX)
 		return;
 	/*
 	 * From this point onward, we can take interrupts, preempt,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC PATCH v2 06/11] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (4 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 05/11] powerpc: reverse the soft_enable logic Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 07/11] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Currently we use both EXCEPTION_PROLOG_1 and __EXCEPTION_PROLOG_1
in the MASKABLE_* macros. As a cleanup, this patch makes the MASKABLE_*
macros use only __EXCEPTION_PROLOG_1. There is no logic change.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index bbba44c2d5b0..2743a21a553d 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -448,7 +448,7 @@ label##_hv:								\
 #define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
 	.globl label##_hv;						\
 label##_hv:								\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
@@ -476,7 +476,7 @@ label##_relon_hv:							\
 #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
 	.globl label##_relon_hv;					\
 label##_relon_hv:							\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 /*
-- 
2.7.4


* [RFC PATCH v2 07/11] powerpc: Add new _EXCEPTION_PROLOG_1 macro
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (5 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 06/11] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" paramater to MASKABLE_* macros Madhavan Srinivasan
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

To support the addition of "mask_lvl" to the MASKABLE_* macros,
factor out the EXCEPTION_PROLOG_1 macro.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 2743a21a553d..1e6f04adba51 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -161,18 +161,40 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	std	r10,area+EX_R10(r13);	/* save r10 - r12 */		\
 	OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
 
-#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+#define __EXCEPTION_PROLOG_1_PRE(area)					\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
 	SAVE_CTR(r10, area);						\
-	mfcr	r9;							\
-	extra(vec);							\
+	mfcr	r9;
+
+#define __EXCEPTION_PROLOG_1_POST(area)					\
 	std	r11,area+EX_R11(r13);					\
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
+
+/*
+ * This version of the EXCEPTION_PROLOG_1 will carry
+ * addition parameter called "mask_lvl" to support
+ * checking of the interrupt maskable level in the SOFTEN_TEST.
+ * Intended to be used in MASKABLE_EXCPETION_* macros.
+ */
+#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+	__EXCEPTION_PROLOG_1_PRE(area);					\
+	extra(vec);							\
+	__EXCEPTION_PROLOG_1_POST(area);
+
+/*
+ * This version of the EXCEPTION_PROLOG_1 is intended
+ * to be used in STD_EXCEPTION* macros
+ */
+#define _EXCEPTION_PROLOG_1(area, extra, vec)				\
+	__EXCEPTION_PROLOG_1_PRE(area);					\
+	extra(vec);							\
+	__EXCEPTION_PROLOG_1_POST(area);
+
 #define EXCEPTION_PROLOG_1(area, extra, vec)				\
-	__EXCEPTION_PROLOG_1(area, extra, vec)
+	_EXCEPTION_PROLOG_1(area, extra, vec)
 
 #define __EXCEPTION_PROLOG_PSERIES_1(label, h)				\
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
-- 
2.7.4


* [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" paramater to MASKABLE_* macros
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (6 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 07/11] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-08-01  5:21   ` Nicholas Piggin
  2016-07-31 19:06 ` [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts Madhavan Srinivasan
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Make explicit the interrupt masking level supported
by a given interrupt handler. The patch correspondingly
extends the MASKABLE_* macros with an additional parameter.
The "mask_lvl" parameter is passed to the SOFTEN_TEST macro to
decide whether to mask the interrupt.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 66 ++++++++++++++++----------------
 arch/powerpc/kernel/exceptions-64s.S     | 25 ++++++------
 2 files changed, 48 insertions(+), 43 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 1e6f04adba51..01ab37eff3f8 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -179,9 +179,9 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
  * checking of the interrupt maskable level in the SOFTEN_TEST.
  * Intended to be used in MASKABLE_EXCPETION_* macros.
  */
-#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+#define __EXCEPTION_PROLOG_1(area, extra, vec, mask_lvl)		\
 	__EXCEPTION_PROLOG_1_PRE(area);					\
-	extra(vec);							\
+	extra(vec, mask_lvl);							\
 	__EXCEPTION_PROLOG_1_POST(area);
 
 /*
@@ -426,79 +426,81 @@ label##_relon_hv:						\
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
 
-#define __SOFTEN_TEST(h, vec)						\
+#define __SOFTEN_TEST(h, vec, mask_lvl)					\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	cmpwi	r10,IRQ_DISABLE_LEVEL_LINUX;				\
+	andi.	r10,r10,mask_lvl;					\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	bge	masked_##h##interrupt
-#define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
+	bne	masked_##h##interrupt
+#define _SOFTEN_TEST(h, vec, mask_lvl)	__SOFTEN_TEST(h, vec, mask_lvl)
 
-#define SOFTEN_TEST_PR(vec)						\
+#define SOFTEN_TEST_PR(vec, mask_lvl)					\
 	KVMTEST(vec);							\
-	_SOFTEN_TEST(EXC_STD, vec)
+	_SOFTEN_TEST(EXC_STD, vec, mask_lvl)
 
-#define SOFTEN_TEST_HV(vec)						\
+#define SOFTEN_TEST_HV(vec, mask_lvl)					\
 	KVMTEST(vec);							\
-	_SOFTEN_TEST(EXC_HV, vec)
+	_SOFTEN_TEST(EXC_HV, vec, mask_lvl)
 
-#define SOFTEN_NOTEST_PR(vec)		_SOFTEN_TEST(EXC_STD, vec)
-#define SOFTEN_NOTEST_HV(vec)		_SOFTEN_TEST(EXC_HV, vec)
+#define SOFTEN_NOTEST_PR(vec, mask_lvl)	\
+		_SOFTEN_TEST(EXC_STD, vec, mask_lvl)
+#define SOFTEN_NOTEST_HV(vec, mask_lvl)	\
+		_SOFTEN_TEST(EXC_HV, vec, mask_lvl)
 
-#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
+#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, mask_lvl)	\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, mask_lvl);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, h);
 
-#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
-	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
+#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, mask_lvl)	\
+	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, mask_lvl)
 
-#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
+#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label, mask_lvl)		\
 	. = loc;							\
 	.globl label##_pSeries;						\
 label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
-				    EXC_STD, SOFTEN_TEST_PR)
+				    EXC_STD, SOFTEN_TEST_PR, mask_lvl)
 
-#define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
+#define MASKABLE_EXCEPTION_HV(loc, vec, label, mask_lvl)		\
 	. = loc;							\
 	.globl label##_hv;						\
 label##_hv:								\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
-				    EXC_HV, SOFTEN_TEST_HV)
+				    EXC_HV, SOFTEN_TEST_HV, mask_lvl)
 
-#define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
+#define MASKABLE_EXCEPTION_HV_OOL(vec, label, mask_lvl)			\
 	.globl label##_hv;						\
 label##_hv:								\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec, mask_lvl);\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
-#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
+#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, mask_lvl)\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, mask_lvl);		\
 	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, h);
-#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
-	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)
+#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, mask_lvl)\
+	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, mask_lvl)
 
-#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label)		\
+#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label, mask_lvl)	\
 	. = loc;							\
 	.globl label##_relon_pSeries;					\
 label##_relon_pSeries:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
-					  EXC_STD, SOFTEN_NOTEST_PR)
+					  EXC_STD, SOFTEN_NOTEST_PR, mask_lvl)
 
-#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label)			\
+#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, mask_lvl)		\
 	. = loc;							\
 	.globl label##_relon_hv;					\
 label##_relon_hv:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
-					  EXC_HV, SOFTEN_NOTEST_HV)
+					  EXC_HV, SOFTEN_NOTEST_HV, mask_lvl)
 
-#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
+#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label, mask_lvl)		\
 	.globl label##_relon_hv;					\
 label##_relon_hv:							\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec, mask_lvl);\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 /*
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 8bcc1b457115..2c87e82ecbe4 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -256,11 +256,11 @@ hardware_interrupt_pSeries:
 hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
-					    EXC_HV, SOFTEN_TEST_HV)
+					    EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_LEVEL_LINUX)
 		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
-					    EXC_STD, SOFTEN_TEST_PR)
+					    EXC_STD, SOFTEN_TEST_PR, IRQ_DISABLE_LEVEL_LINUX)
 		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 
@@ -276,11 +276,12 @@ hardware_interrupt_hv:
 	. = 0x900
 	.globl decrementer_pSeries
 decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD,
+					SOFTEN_TEST_PR, IRQ_DISABLE_LEVEL_LINUX)
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
-	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
+	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super, IRQ_DISABLE_LEVEL_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)
 
 	STD_EXCEPTION_PSERIES(0xb00, trap_0b)
@@ -595,10 +596,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe22)
 	STD_EXCEPTION_HV_OOL(0xe42, emulation_assist)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe42)
-	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception)
+	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception, IRQ_DISABLE_LEVEL_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe62)
 
-	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell)
+	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell, IRQ_DISABLE_LEVEL_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe82)
 
 	/* moved from 0xf00 */
@@ -834,16 +835,18 @@ instruction_access_slb_relon_pSeries:
 hardware_interrupt_relon_pSeries:
 hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt, EXC_HV, SOFTEN_TEST_HV)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt,
+					EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_LEVEL_LINUX)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt,
+					EXC_STD, SOFTEN_TEST_PR, IRQ_DISABLE_LEVEL_LINUX)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	STD_RELON_EXCEPTION_PSERIES(0x4600, 0x600, alignment)
 	STD_RELON_EXCEPTION_PSERIES(0x4700, 0x700, program_check)
 	STD_RELON_EXCEPTION_PSERIES(0x4800, 0x800, fp_unavailable)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer)
+	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer, IRQ_DISABLE_LEVEL_LINUX)
 	STD_RELON_EXCEPTION_HV(0x4980, 0x982, hdecrementer)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super)
+	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super, IRQ_DISABLE_LEVEL_LINUX)
 	STD_RELON_EXCEPTION_PSERIES(0x4b00, 0xb00, trap_0b)
 
 	. = 0x4c00
@@ -1136,7 +1139,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 
 	/* Equivalents to the above handlers for relocation-on interrupt vectors */
 	STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
+	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell, IRQ_DISABLE_LEVEL_LINUX)
 
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-- 
2.7.4


* [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (7 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" paramater to MASKABLE_* macros Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-08-01  5:29   ` Nicholas Piggin
  2016-07-31 19:06 ` [RFC PATCH v2 10/11] powerpc: Support to replay PMIs Madhavan Srinivasan
  2016-07-31 19:06 ` [RFC PATCH v2 11/11] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
  10 siblings, 1 reply; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

To support masking of PMI interrupts, a couple of new interrupt handler
macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
MASKABLE_RELON_EXCEPTION_PSERIES_OOL. These are needed to include the
SOFTEN_TEST and implement the support in both host and guest kernels.

A couple of new IRQ #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*", are
added for use in the exception code to check for PMI interrupts.

A new flag value, "IRQ_DISABLE_LEVEL_PMU == 2", is added for soft_enabled.
With the new flag value, the "soft_enabled" states look like:

soft_enabled		MSR[EE]

2			0	Disabled, including PMIs
1			0	Disabled (PMIs and HMIs not masked)
0			1	Enabled

PMIs have maskable level 2 and behave as NMIs at the lower level.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 14 +++++++++++
 arch/powerpc/include/asm/hw_irq.h        |  4 +++
 arch/powerpc/kernel/exceptions-64s.S     | 43 ++++++++++++++++++++++----------
 3 files changed, 48 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 01ab37eff3f8..90b150478398 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -425,6 +425,8 @@ label##_relon_hv:						\
 #define SOFTEN_VALUE_0xe82	PACA_IRQ_DBELL
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
+#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
+#define SOFTEN_VALUE_0xf01	PACA_IRQ_PMI
 
 #define __SOFTEN_TEST(h, vec, mask_lvl)					\
 	lbz	r10,PACASOFTIRQEN(r13);					\
@@ -462,6 +464,12 @@ label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_STD, SOFTEN_TEST_PR, mask_lvl)
 
+#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label, mask_lvl)		\
+	.globl label##_pSeries;						\
+label##_pSeries:							\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec, mask_lvl);\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_HV(loc, vec, label, mask_lvl)		\
 	. = loc;							\
 	.globl label##_hv;						\
@@ -490,6 +498,12 @@ label##_relon_pSeries:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_STD, SOFTEN_NOTEST_PR, mask_lvl)
 
+#define MASKABLE_RELON_EXCEPTION_PSERIES_OOL(vec, label, mask_lvl)	\
+	.globl label##_relon_pSeries;					\
+label##_relon_pSeries:							\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec, mask_lvl);\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, mask_lvl)		\
 	. = loc;							\
 	.globl label##_relon_hv;					\
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 0206a6c493c7..a7b86edc94bf 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -26,12 +26,16 @@
 #define PACA_IRQ_DEC		0x08 /* Or FIT */
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
 #define PACA_IRQ_HMI		0x20
+#define PACA_IRQ_PMI		0x40
 
 /*
  * flags for paca->soft_enabled
  */
 #define IRQ_DISABLE_LEVEL_NONE		0
 #define IRQ_DISABLE_LEVEL_LINUX		1
+#define IRQ_DISABLE_LEVEL_PMU		2
+
+#define MASK_IRQ_LEVEL		IRQ_DISABLE_LEVEL_LINUX | IRQ_DISABLE_LEVEL_PMU
 
 
 #endif /* CONFIG_PPC64 */
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 2c87e82ecbe4..56dc71b82824 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -256,11 +256,11 @@ hardware_interrupt_pSeries:
 hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
-					    EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_LEVEL_LINUX)
+					    EXC_HV, SOFTEN_TEST_HV, MASK_IRQ_LEVEL)
 		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
-					    EXC_STD, SOFTEN_TEST_PR, IRQ_DISABLE_LEVEL_LINUX)
+					    EXC_STD, SOFTEN_TEST_PR, MASK_IRQ_LEVEL)
 		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 
@@ -277,11 +277,11 @@ hardware_interrupt_hv:
 	.globl decrementer_pSeries
 decrementer_pSeries:
 	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD,
-					SOFTEN_TEST_PR, IRQ_DISABLE_LEVEL_LINUX)
+					SOFTEN_TEST_PR, MASK_IRQ_LEVEL)
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
-	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super, IRQ_DISABLE_LEVEL_LINUX)
+	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super, MASK_IRQ_LEVEL)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)
 
 	STD_EXCEPTION_PSERIES(0xb00, trap_0b)
@@ -361,7 +361,11 @@ hv_doorbell_trampoline:
 performance_monitor_pseries_trampoline:
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
+BEGIN_FTR_SECTION
+	b	performance_monitor_hv
+FTR_SECTION_ELSE
 	b	performance_monitor_pSeries
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 
 	. = 0xf20
 altivec_unavailable_pseries_trampoline:
@@ -596,15 +600,20 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe22)
 	STD_EXCEPTION_HV_OOL(0xe42, emulation_assist)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe42)
-	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception, IRQ_DISABLE_LEVEL_LINUX)
+	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception, MASK_IRQ_LEVEL)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe62)
 
-	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell, IRQ_DISABLE_LEVEL_LINUX)
+	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell, MASK_IRQ_LEVEL)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe82)
 
 	/* moved from 0xf00 */
-	STD_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
+BEGIN_FTR_SECTION
+	MASKABLE_EXCEPTION_HV_OOL(0xf01, performance_monitor, IRQ_DISABLE_LEVEL_PMU)
+	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf01)
+FTR_SECTION_ELSE
+	MASKABLE_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor, IRQ_DISABLE_LEVEL_PMU)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf00)
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	STD_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf20)
 	STD_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
@@ -836,17 +845,17 @@ hardware_interrupt_relon_pSeries:
 hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt,
-					EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_LEVEL_LINUX)
+					EXC_HV, SOFTEN_TEST_HV, MASK_IRQ_LEVEL)
 	FTR_SECTION_ELSE
 		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt,
-					EXC_STD, SOFTEN_TEST_PR, IRQ_DISABLE_LEVEL_LINUX)
+					EXC_STD, SOFTEN_TEST_PR, MASK_IRQ_LEVEL)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	STD_RELON_EXCEPTION_PSERIES(0x4600, 0x600, alignment)
 	STD_RELON_EXCEPTION_PSERIES(0x4700, 0x700, program_check)
 	STD_RELON_EXCEPTION_PSERIES(0x4800, 0x800, fp_unavailable)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer, IRQ_DISABLE_LEVEL_LINUX)
+	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer, MASK_IRQ_LEVEL)
 	STD_RELON_EXCEPTION_HV(0x4980, 0x982, hdecrementer)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super, IRQ_DISABLE_LEVEL_LINUX)
+	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super, MASK_IRQ_LEVEL)
 	STD_RELON_EXCEPTION_PSERIES(0x4b00, 0xb00, trap_0b)
 
 	. = 0x4c00
@@ -884,7 +893,11 @@ h_doorbell_relon_trampoline:
 performance_monitor_relon_pseries_trampoline:
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
+BEGIN_FTR_SECTION
+	b	performance_monitor_relon_hv
+FTR_SECTION_ELSE
 	b	performance_monitor_relon_pSeries
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 
 	. = 0x4f20
 altivec_unavailable_relon_pseries_trampoline:
@@ -1139,9 +1152,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 
 	/* Equivalents to the above handlers for relocation-on interrupt vectors */
 	STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell, IRQ_DISABLE_LEVEL_LINUX)
+	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell, MASK_IRQ_LEVEL)
 
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
+BEGIN_FTR_SECTION
+	MASKABLE_RELON_EXCEPTION_HV_OOL(0xf01, performance_monitor, IRQ_DISABLE_LEVEL_PMU)
+FTR_SECTION_ELSE
+	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor, IRQ_DISABLE_LEVEL_PMU)
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
-- 
2.7.4


* [RFC PATCH v2 10/11] powerpc: Support to replay PMIs
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (8 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  2016-08-01  8:07   ` Nicholas Piggin
  2016-07-31 19:06 ` [RFC PATCH v2 11/11] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
  10 siblings, 1 reply; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Add code to replay Performance Monitoring Interrupts (PMIs).
In the masked_interrupt handler, for PMIs we reset MSR[EE]
and return. This is due to the fact that PMIs are level triggered.
In __check_irq_replay(), we enable MSR[EE], which will
fire the interrupt for us.

The patch also adds soft_irq_set_level(), a new variant of
arch_local_irq_disable() that takes an input value to write to
paca->soft_enabled. This will be used in the following patch to
implement the tri-state value for soft_enabled.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 14 ++++++++++++++
 arch/powerpc/kernel/irq.c         |  9 ++++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index a7b86edc94bf..9f235d25b9f8 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -84,6 +84,20 @@ static inline unsigned long arch_local_irq_disable(void)
 	return flags;
 }
 
+static inline unsigned long soft_irq_set_level(int value)
+{
+	unsigned long flags, zero;
+
+	asm volatile(
+		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
+		: "=r" (flags), "=&r" (zero)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "i" (value)
+		: "memory");
+
+	return flags;
+}
+
 extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 857e1e8188e5..9d70e51db8bc 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -158,9 +158,16 @@ notrace unsigned int __check_irq_replay(void)
 	if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow())
 		return 0x900;
 
+	/*
+	 * In masked_handler() for PMI, we disable MSR[EE] and return.
+	 * When replaying it, just enabling the MSR[EE] will do
+	 * trick, since the PMI are "level" triggered.
+	 */
+	local_paca->irq_happened &= ~PACA_IRQ_PMI;
+
 	/* Finally check if an external interrupt happened */
 	local_paca->irq_happened &= ~PACA_IRQ_EE;
-	if (happened & PACA_IRQ_EE)
+	if ((happened & PACA_IRQ_EE) || (happened & PACA_IRQ_PMI))
 		return 0x500;
 
 #ifdef CONFIG_PPC_BOOK3E
-- 
2.7.4


* [RFC PATCH v2 11/11] powerpc: rewrite local_t using soft_irq
  2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (9 preceding siblings ...)
  2016-07-31 19:06 ` [RFC PATCH v2 10/11] powerpc: Support to replay PMIs Madhavan Srinivasan
@ 2016-07-31 19:06 ` Madhavan Srinivasan
  10 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-07-31 19:06 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Local atomic operations provide fast and highly reentrant per-CPU
counters, used for percpu variable updates. Local atomic operations only
guarantee variable modification atomicity wrt the CPU which owns the
data, and they need to be executed in a preemption-safe way.

Here is the design of this patch. Since local_* operations
only need to be atomic with respect to interrupts (IIUC), we have two
options: either replay the "op" if interrupted, or replay the interrupt
after the "op". The initial patchset posted implemented local_* operations
using CR5, which replays the "op". That patchset had issues when rewinding
the address pointer from an array, which made the slow path really slow.
Since the CR5 based implementation proposed using __ex_table to find the
rewind address, this raised concerns about the size of __ex_table and vmlinux.

https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-December/123115.html

But this patch uses arch_local_irq_*() to soft-disable
interrupts (including PMIs). After finishing the "op", arch_local_irq_restore()
is called and pending interrupts, if any occurred, are replayed.

The patch rewrites the current local_* functions to use
arch_local_irq_disable(). The base flow for each function is

{
	soft_irq_set_level(2)
	load
	..
	store
	arch_local_irq_restore()
}

The reason for this approach is that currently the l[w/d]arx/st[w/d]cx.
instruction pair is used for local_* operations, which is heavy
in cycle count, and it does not have a local variant. So to
see whether the new implementation helps, we used a modified
version of Rusty's benchmark code on local_t.

https://lkml.org/lkml/2008/12/16/450

Modifications to Rusty's benchmark code:
- Executed only local_t test

Here are the values with the patch.

Time in ns per iteration

Local_t		Without Patch	With Patch

_inc			28		8
_add			28		8
_read			 3		3
_add_return		28		7

Currently only asm/local.h has been rewritten, and the entire
change has been tested only on PPC64 (a pseries guest).

TODO:
	- local_cmpxchg and local_xchg needs modification.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/local.h | 91 +++++++++++++++++++++++++++-------------
 1 file changed, 63 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
index b8da91363864..9c006c0215f7 100644
--- a/arch/powerpc/include/asm/local.h
+++ b/arch/powerpc/include/asm/local.h
@@ -14,24 +14,50 @@ typedef struct
 #define local_read(l)	atomic_long_read(&(l)->a)
 #define local_set(l,i)	atomic_long_set(&(l)->a, (i))
 
-#define local_add(i,l)	atomic_long_add((i),(&(l)->a))
-#define local_sub(i,l)	atomic_long_sub((i),(&(l)->a))
-#define local_inc(l)	atomic_long_inc(&(l)->a)
-#define local_dec(l)	atomic_long_dec(&(l)->a)
+static __inline__ void local_add(long i, local_t *l)
+{
+	long t;
+	unsigned long flags;
+
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
+	__asm__ __volatile__(
+	PPC_LL" %0,0(%2)\n\
+	add     %0,%1,%0\n"
+	PPC_STL" %0,0(%2)\n"
+	: "=&r" (t)
+	: "r" (i), "r" (&(l->a.counter)));
+	arch_local_irq_restore(flags);
+}
+
+static __inline__ void local_sub(long i, local_t *l)
+{
+	long t;
+	unsigned long flags;
+
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
+	__asm__ __volatile__(
+	PPC_LL" %0,0(%2)\n\
+	subf    %0,%1,%0\n"
+	PPC_STL" %0,0(%2)\n"
+	: "=&r" (t)
+	: "r" (i), "r" (&(l->a.counter)));
+	arch_local_irq_restore(flags);
+}
 
 static __inline__ long local_add_return(long a, local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%2,0) "			# local_add_return\n\
+	PPC_LL" %0,0(%2)\n\
 	add	%0,%1,%0\n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%2 \n\
-	bne-	1b"
+	PPC_STL	"%0,0(%2)\n"
 	: "=&r" (t)
 	: "r" (a), "r" (&(l->a.counter))
 	: "cc", "memory");
+	arch_local_irq_restore(flags);
 
 	return t;
 }
@@ -41,16 +67,18 @@ static __inline__ long local_add_return(long a, local_t *l)
 static __inline__ long local_sub_return(long a, local_t *l)
 {
 	long t;
+	unsigned long flags;
+
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
 
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%2,0) "			# local_sub_return\n\
+"1:"	PPC_LL" %0,0(%2)\n\
 	subf	%0,%1,%0\n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%2 \n\
-	bne-	1b"
+	PPC_STL	"%0,0(%2)\n"
 	: "=&r" (t)
 	: "r" (a), "r" (&(l->a.counter))
 	: "cc", "memory");
+	arch_local_irq_restore(flags);
 
 	return t;
 }
@@ -58,16 +86,17 @@ static __inline__ long local_sub_return(long a, local_t *l)
 static __inline__ long local_inc_return(local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_inc_return\n\
+"1:"	PPC_LL" %0,0(%1)\n\
 	addic	%0,%0,1\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1 \n\
-	bne-	1b"
+	PPC_STL "%0,0(%1)\n"
 	: "=&r" (t)
 	: "r" (&(l->a.counter))
 	: "cc", "xer", "memory");
+	arch_local_irq_restore(flags);
 
 	return t;
 }
@@ -85,20 +114,24 @@ static __inline__ long local_inc_return(local_t *l)
 static __inline__ long local_dec_return(local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_dec_return\n\
+	PPC_LL" %0,0(%1)\n\
 	addic	%0,%0,-1\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1\n\
-	bne-	1b"
+	PPC_STL "%0,0(%1)\n"
 	: "=&r" (t)
 	: "r" (&(l->a.counter))
 	: "cc", "xer", "memory");
+	arch_local_irq_restore(flags);
 
 	return t;
 }
 
+#define local_inc(l)	local_inc_return(l)
+#define local_dec(l)	local_dec_return(l)
+
 #define local_cmpxchg(l, o, n) \
 	(cmpxchg_local(&((l)->a.counter), (o), (n)))
 #define local_xchg(l, n) (xchg_local(&((l)->a.counter), (n)))
@@ -115,20 +148,21 @@ static __inline__ long local_dec_return(local_t *l)
 static __inline__ int local_add_unless(local_t *l, long a, long u)
 {
 	long t;
+	unsigned long flags;
 
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
 	__asm__ __volatile__ (
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_add_unless\n\
+	PPC_LL" %0,0(%1)\n\
 	cmpw	0,%0,%3 \n\
 	beq-	2f \n\
 	add	%0,%2,%0 \n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%1 \n\
-	bne-	1b \n"
+	PPC_STL" %0,0(%1) \n"
 "	subf	%0,%2,%0 \n\
 2:"
 	: "=&r" (t)
 	: "r" (&(l->a.counter)), "r" (a), "r" (u)
 	: "cc", "memory");
+	arch_local_irq_restore(flags);
 
 	return t != u;
 }
@@ -145,19 +179,20 @@ static __inline__ int local_add_unless(local_t *l, long a, long u)
 static __inline__ long local_dec_if_positive(local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_dec_if_positive\n\
+	PPC_LL" %0,0(%1)\n\
 	cmpwi	%0,1\n\
 	addi	%0,%0,-1\n\
 	blt-	2f\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1\n\
-	bne-	1b"
+	PPC_STL "%0,0(%1)\n"
 	"\n\
 2:"	: "=&b" (t)
 	: "r" (&(l->a.counter))
 	: "cc", "memory");
+	arch_local_irq_restore(flags);
 
 	return t;
 }
-- 
2.7.4
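Every rewritten local_* operation above follows the same shape: raise the paca->soft_enabled mask, do a plain load/modify/store, then restore the mask (replaying any interrupts that arrived in between). Ignoring the inline assembly, that shape can be modelled in plain C — the helpers below are simplified stand-ins for the kernel routines, with interrupt replay elided:

```c
#include <assert.h>

/* Illustrative soft-mask values, mirroring the patch's #defines. */
#define IRQ_DISABLE_LEVEL_NONE  0
#define IRQ_DISABLE_LEVEL_PMU   2

static unsigned long soft_enabled = IRQ_DISABLE_LEVEL_NONE;

/* Stand-in for soft_irq_set_level(): install a new mask, return the old. */
static unsigned long soft_irq_set_level(unsigned long level)
{
	unsigned long old = soft_enabled;
	soft_enabled = level;
	return old;
}

/* Stand-in for arch_local_irq_restore(): put the old mask back
 * (the real routine also replays any interrupts that occurred). */
static void arch_local_irq_restore(unsigned long flags)
{
	soft_enabled = flags;
}

/* Model of the rewritten local_add_return(): a plain read-modify-write,
 * safe because PMIs (and other maskable interrupts) cannot fire between
 * the load and the store. */
static long local_add_return_model(long a, long *counter)
{
	unsigned long flags = soft_irq_set_level(IRQ_DISABLE_LEVEL_PMU);
	long t = *counter;	/* PPC_LL   */
	t += a;			/* add      */
	*counter = t;		/* PPC_STL  */
	arch_local_irq_restore(flags);
	return t;
}
```

The larx/stcx. retry loop disappears entirely, which is where the instruction-count savings quoted in the cover letter come from.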

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" parameter to MASKABLE_* macros
  2016-07-31 19:06 ` [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" parameter to MASKABLE_* macros Madhavan Srinivasan
@ 2016-08-01  5:21   ` Nicholas Piggin
  2016-08-01  5:49     ` Madhavan Srinivasan
  0 siblings, 1 reply; 22+ messages in thread
From: Nicholas Piggin @ 2016-08-01  5:21 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Mon,  1 Aug 2016 00:36:26 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Make explicit the interrupt masking level supported
> by a given interrupt handler. The patch correspondingly
> extends the MASKABLE_* macros with an additional parameter.
> The "mask_lvl" parameter is passed to the SOFTEN_TEST macro to
> decide on masking the interrupt.

Hey Madhavan,

It looks like this has worked quite nicely. I think you've
managed to avoid any additional instructions in fastpaths
if I'm reading correctly.

I will do a more comprehensive review, but I wanted to ask:


> @@ -426,79 +426,81 @@ label##_relon_hv:						\
>  #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
>  #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
>  
> -#define __SOFTEN_TEST(h, vec)						\
> +#define __SOFTEN_TEST(h, vec, mask_lvl)					\
>  	lbz	r10,PACASOFTIRQEN(r13);					\
> -	cmpwi	r10,IRQ_DISABLE_LEVEL_LINUX;				\
> +	andi.	r10,r10,mask_lvl;					\
>  	li	r10,SOFTEN_VALUE_##vec;					\
> -	bge	masked_##h##interrupt
> -#define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
> +	bne	masked_##h##interrupt
> +#define _SOFTEN_TEST(h, vec, mask_lvl)	__SOFTEN_TEST(h, vec, mask_lvl)

We're talking about IRQ masking levels, but here it looks
like you're actually treating it as a mask.

I don't have a strong preference. Mask is more flexible, but
potentially constrained in how many interrupt types it can
cope with. That said, I doubt we'll need more than 8 mask bits
considering we've lived with one for years. So perhaps a mask
is a better choice. Ben, others, any preferences?

We should just use either "mask" or "level" everywhere, depending
on what we go with.
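For illustration, the two schemes differ roughly as follows (a C model only; the constants mirror the patch, the function names are made up):

```c
#include <assert.h>
#include <stdbool.h>

#define IRQ_DISABLE_LEVEL_LINUX	1	/* bit 0 */
#define IRQ_DISABLE_LEVEL_PMU	2	/* bit 1 */

/* Level scheme: a handler is masked when the current disable level is
 * at or above the handler's own level (the cmpwi/bge style of test). */
static bool soften_test_level(unsigned int soft_enabled, unsigned int handler_level)
{
	return soft_enabled >= handler_level;
}

/* Mask scheme: a handler is masked when any of its bits are currently
 * set in soft_enabled (the andi./bne style of test above). */
static bool soften_test_mask(unsigned int soft_enabled, unsigned int handler_bits)
{
	return (soft_enabled & handler_bits) != 0;
}
```

With only PMIs disabled (soft_enabled == IRQ_DISABLE_LEVEL_PMU), the mask scheme still lets external interrupts through, while a naive level comparison would mask them as well.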

Thanks,
Nick


* Re: [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts
  2016-07-31 19:06 ` [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts Madhavan Srinivasan
@ 2016-08-01  5:29   ` Nicholas Piggin
  2016-08-01  6:09     ` Madhavan Srinivasan
  0 siblings, 1 reply; 22+ messages in thread
From: Nicholas Piggin @ 2016-08-01  5:29 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Mon,  1 Aug 2016 00:36:27 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

>  /*
>   * flags for paca->soft_enabled
>   */
>  #define IRQ_DISABLE_LEVEL_NONE		0
>  #define IRQ_DISABLE_LEVEL_LINUX		1
> +#define IRQ_DISABLE_LEVEL_PMU		2
> +
> +#define MASK_IRQ_LEVEL		IRQ_DISABLE_LEVEL_LINUX | IRQ_DISABLE_LEVEL_PMU
>  
>  
>  #endif /* CONFIG_PPC64 */
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 2c87e82ecbe4..56dc71b82824 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -256,11 +256,11 @@ hardware_interrupt_pSeries:
>  hardware_interrupt_hv:
>  	BEGIN_FTR_SECTION
>  		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
> -					    EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_LEVEL_LINUX)
> +					    EXC_HV, SOFTEN_TEST_HV, MASK_IRQ_LEVEL)

So what I was expecting is that each exception handler would specify the
level (or bit, if we use bitmask) at which it gets disabled. The test code
will then test the exception level with the enable level (or s/level/mask).

The way you have now is each exception handler specifying the bits which
cause it to be disabled, but I think that's kind of backwards -- the
disabler knows which interrupts it wants to disable, the exception handler
does not know what disablers want to disable it :)

So to disable PMU and "linux" interrupts for local_t operations, you would
have:

local_irq_set_mask(IRQ_DISABLE_LEVEL_LINUX|IRQ_DISABLE_LEVEL_PMU)

And that would disable both handlers that test with IRQ_DISABLE_LEVEL_LINUX
and IRQ_DISABLE_LEVEL_PMU

Does that make sense? What do you think?

Thanks,
Nick


* Re: [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" parameter to MASKABLE_* macros
  2016-08-01  5:21   ` Nicholas Piggin
@ 2016-08-01  5:49     ` Madhavan Srinivasan
  0 siblings, 0 replies; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-08-01  5:49 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Monday 01 August 2016 10:51 AM, Nicholas Piggin wrote:
> On Mon,  1 Aug 2016 00:36:26 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Make explicit the interrupt masking level supported
>> by a given interrupt handler. The patch correspondingly
>> extends the MASKABLE_* macros with an additional parameter.
>> The "mask_lvl" parameter is passed to the SOFTEN_TEST macro to
>> decide on masking the interrupt.
> Hey Madhavan,
>
> It looks like this has worked quite nicely. I think you've
> managed to avoid any additional instructions in fastpaths
> if I'm reading correctly.

Yes. This avoids condition checking for many cases.

>
> I will do a more comprehensive review, but I wanted to ask:
>
>
>> @@ -426,79 +426,81 @@ label##_relon_hv:						\
>>   #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
>>   #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
>>   
>> -#define __SOFTEN_TEST(h, vec)						\
>> +#define __SOFTEN_TEST(h, vec, mask_lvl)					\
>>   	lbz	r10,PACASOFTIRQEN(r13);					\
>> -	cmpwi	r10,IRQ_DISABLE_LEVEL_LINUX;				\
>> +	andi.	r10,r10,mask_lvl;					\
>>   	li	r10,SOFTEN_VALUE_##vec;					\
>> -	bge	masked_##h##interrupt
>> -#define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
>> +	bne	masked_##h##interrupt
>> +#define _SOFTEN_TEST(h, vec, mask_lvl)	__SOFTEN_TEST(h, vec, mask_lvl)
> We're talking about IRQ masking levels, but here it looks
> like you're actually treating it as a mask.

Yes, that is true. I started with "level", but then realized
that I was adding more branch condition checks in order to
retain the PMI-as-NMI behaviour.

> I don't have a strong preference. Mask is more flexible, but
> potentially constrained in how many interrupt types it can
> cope with. That said, I doubt we'll need more than 8 mask bits
> considering we've lived with one for years. So perhaps a mask
> is a better choice. Ben, others, any preferences?
>
> We should just use either "mask" or "level" everywhere, depending
> on what we go with.

Yep. will change it.

Maddy

>
> Thanks,
> Nick
>


* Re: [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts
  2016-08-01  5:29   ` Nicholas Piggin
@ 2016-08-01  6:09     ` Madhavan Srinivasan
  2016-08-01  6:48       ` Nicholas Piggin
  0 siblings, 1 reply; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-08-01  6:09 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Monday 01 August 2016 10:59 AM, Nicholas Piggin wrote:
> On Mon,  1 Aug 2016 00:36:27 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>>   /*
>>    * flags for paca->soft_enabled
>>    */
>>   #define IRQ_DISABLE_LEVEL_NONE		0
>>   #define IRQ_DISABLE_LEVEL_LINUX		1
>> +#define IRQ_DISABLE_LEVEL_PMU		2
>> +
>> +#define MASK_IRQ_LEVEL		IRQ_DISABLE_LEVEL_LINUX | IRQ_DISABLE_LEVEL_PMU
>>   
>>   
>>   #endif /* CONFIG_PPC64 */
>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>> index 2c87e82ecbe4..56dc71b82824 100644
>> --- a/arch/powerpc/kernel/exceptions-64s.S
>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>> @@ -256,11 +256,11 @@ hardware_interrupt_pSeries:
>>   hardware_interrupt_hv:
>>   	BEGIN_FTR_SECTION
>>   		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
>> -					    EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_LEVEL_LINUX)
>> +					    EXC_HV, SOFTEN_TEST_HV, MASK_IRQ_LEVEL)
> So what I was expecting is that each exception handler would specify the
> level (or bit, if we use bitmask) at which it gets disabled. The test code
> will then test the exception level with the enable level (or s/level/mask).
>
> The way you have now is each exception handler specifying the bits which
> cause it to be disabled, but I think that's kind of backwards -- the
> disabler knows which interrupts it wants to disable, the exception handler
> does not know what disablers want to disable it :)

Yep. My bad. Implemented backwards.

>
> So to disable PMU and "linux" interrupts for local_t operations, you would
> have:
>
> local_irq_set_mask(IRQ_DISABLE_LEVEL_LINUX|IRQ_DISABLE_LEVEL_PMU)

Yep. Right now I made the exception handler specify this and
soft_irq_set_level() set the level. But that is a minor change.

>
> And that would disable both handlers that test with IRQ_DISABLE_LEVEL_LINUX
> and IRQ_DISABLE_LEVEL_PMU
>
> Does that make sense? What do you think?

Yes, it makes sense.
But I would like to get comments on the SOFTEN_TEST changes.

Maddy

>
> Thanks,
> Nick
>


* Re: [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts
  2016-08-01  6:09     ` Madhavan Srinivasan
@ 2016-08-01  6:48       ` Nicholas Piggin
  0 siblings, 0 replies; 22+ messages in thread
From: Nicholas Piggin @ 2016-08-01  6:48 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Mon, 1 Aug 2016 11:39:18 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> On Monday 01 August 2016 10:59 AM, Nicholas Piggin wrote:
> > On Mon,  1 Aug 2016 00:36:27 +0530
> > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> > And that would disable both handlers that test with IRQ_DISABLE_LEVEL_LINUX
> > and IRQ_DISABLE_LEVEL_PMU
> >
> > Does that make sense? What do you think?  
> 
> Yes. it make sense.
> But would like to get comments for the SOFTEN_TEST changes.

Well I think they look good. We'll give others a chance to read and
comment. I'm sure Ben will want to check it out but he's pretty busy...

Thanks,
Nick


* Re: [RFC PATCH v2 10/11] powerpc: Support to replay PMIs
  2016-07-31 19:06 ` [RFC PATCH v2 10/11] powerpc: Support to replay PMIs Madhavan Srinivasan
@ 2016-08-01  8:07   ` Nicholas Piggin
  2016-08-01  8:51     ` Benjamin Herrenschmidt
  2016-08-01  8:52     ` Benjamin Herrenschmidt
  0 siblings, 2 replies; 22+ messages in thread
From: Nicholas Piggin @ 2016-08-01  8:07 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Mon,  1 Aug 2016 00:36:28 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Add code to replay Performance Monitoring Interrupts (PMIs).
> In the masked_interrupt handler, for PMIs we clear MSR[EE]
> and return. This is due to the fact that PMIs are level triggered.
> In __check_irq_replay(), we enable MSR[EE], which will
> fire the interrupt for us.
> 
> The patch also adds a new arch_local_irq_disable_var() variant. The
> new variant takes an input value to write to paca->soft_enabled.
> This will be used in a following patch to implement the tri-state
> value for soft_enabled.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/hw_irq.h | 14 ++++++++++++++
>  arch/powerpc/kernel/irq.c         |  9 ++++++++-
>  2 files changed, 22 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index a7b86edc94bf..9f235d25b9f8 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -84,6 +84,20 @@ static inline unsigned long arch_local_irq_disable(void)
>  	return flags;
>  }
>  
> +static inline unsigned long soft_irq_set_level(int value)
> +{
> +	unsigned long flags, zero;
> +
> +	asm volatile(
> +		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
> +		: "=r" (flags), "=&r" (zero)
> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> +		  "i" (value)
> +		: "memory");
> +
> +	return flags;
> +}
> +
>  extern void arch_local_irq_restore(unsigned long);
>  
>  static inline void arch_local_irq_enable(void)
> diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
> index 857e1e8188e5..9d70e51db8bc 100644
> --- a/arch/powerpc/kernel/irq.c
> +++ b/arch/powerpc/kernel/irq.c
> @@ -158,9 +158,16 @@ notrace unsigned int __check_irq_replay(void)
>  	if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow())
>  		return 0x900;
>  
> +	/*
> +	 * In masked_handler() for PMI, we disable MSR[EE] and return.
> +	 * When replaying it, just enabling the MSR[EE] will do
> +	 * trick, since the PMI are "level" triggered.
> +	 */
> +	local_paca->irq_happened &= ~PACA_IRQ_PMI;
> +
>  	/* Finally check if an external interrupt happened */
>  	local_paca->irq_happened &= ~PACA_IRQ_EE;
> -	if (happened & PACA_IRQ_EE)
> +	if ((happened & PACA_IRQ_EE) || (happened & PACA_IRQ_PMI))
>  		return 0x500;

This will replay hardware_interrupt_common in the case we got a PMI
interrupt but no EE.

Should we just follow the normal pattern here, return 0xf00 for PMI,
and replay the same as the other cases?
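The suggested structure — each pending source clearing its own flag and returning its own vector — might look like this as a C model (the PACA_IRQ_* values here are made up for illustration, not the kernel's):

```c
#include <assert.h>

/* Illustrative pending-interrupt flags (values invented for the model). */
#define PACA_IRQ_EE	0x01
#define PACA_IRQ_DEC	0x08
#define PACA_IRQ_PMI	0x20

/* Model of __check_irq_replay() with PMI given its own 0xf00 vector
 * instead of piggy-backing on the 0x500 external path. */
static unsigned int check_irq_replay_model(unsigned int *irq_happened)
{
	unsigned int happened = *irq_happened;

	if (happened & PACA_IRQ_DEC) {
		*irq_happened &= ~PACA_IRQ_DEC;
		return 0x900;		/* decrementer */
	}
	if (happened & PACA_IRQ_PMI) {
		*irq_happened &= ~PACA_IRQ_PMI;
		return 0xf00;		/* performance monitor */
	}
	if (happened & PACA_IRQ_EE) {
		*irq_happened &= ~PACA_IRQ_EE;
		return 0x500;		/* external */
	}
	return 0;
}
```

A lone pending PMI then replays as 0xf00 rather than as hardware_interrupt_common, and a pending EE still replays as 0x500 on the next pass.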

Thanks,
Nick


* Re: [RFC PATCH v2 10/11] powerpc: Support to replay PMIs
  2016-08-01  8:07   ` Nicholas Piggin
@ 2016-08-01  8:51     ` Benjamin Herrenschmidt
  2016-08-01 10:22       ` Madhavan Srinivasan
  2016-08-01  8:52     ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 22+ messages in thread
From: Benjamin Herrenschmidt @ 2016-08-01  8:51 UTC (permalink / raw)
  To: Nicholas Piggin, Madhavan Srinivasan; +Cc: mpe, anton, paulus, linuxppc-dev

On Mon, 2016-08-01 at 18:07 +1000, Nicholas Piggin wrote:
> This will replay hardware_interrupt_common in the case we got a PMI
> interrupt but no EE.
> 
> Should we just follow the normal pattern here, return 0xf00 for PMI,
> and replay the same as the other cases?

Agreed.

Cheers,
Ben.


* Re: [RFC PATCH v2 10/11] powerpc: Support to replay PMIs
  2016-08-01  8:07   ` Nicholas Piggin
  2016-08-01  8:51     ` Benjamin Herrenschmidt
@ 2016-08-01  8:52     ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 22+ messages in thread
From: Benjamin Herrenschmidt @ 2016-08-01  8:52 UTC (permalink / raw)
  To: Nicholas Piggin, Madhavan Srinivasan; +Cc: mpe, anton, paulus, linuxppc-dev

On Mon, 2016-08-01 at 18:07 +1000, Nicholas Piggin wrote:
> > +static inline unsigned long soft_irq_set_level(int value)
> > +{
> > +     unsigned long flags, zero;
> > +
> > +     asm volatile(
> > +             "li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
> > +             : "=r" (flags), "=&r" (zero)
> > +             : "i" (offsetof(struct paca_struct, soft_enabled)),\
> > +               "i" (value)
> > +             : "memory");
> > +
> > +     return flags;
> > +}

I would add a WARN_ON (possibly under control of
CONFIG_TRACE_IRQFLAGS(*)) to verify that we only ever use this to make
interrupts "less enabled".

(*) Or check whether distros use CONFIG_TRACE_IRQFLAGS these days, then
create a new CONFIG_DEBUG_IRQ or something like that, and also move
the other use of CONFIG_TRACE_IRQFLAGS in local_irq_restore that
checks the MSR, as we really don't want that in production kernels.
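The check being suggested can be sketched like so (a model only — the real version would be a WARN_ON inside soft_irq_set_level(), under whichever debug config option is chosen):

```c
#include <assert.h>

static unsigned long soft_enabled;	/* model of paca->soft_enabled */
static unsigned int warn_count;		/* stand-in for WARN_ON() firing */

/* Model of soft_irq_set_level() with the suggested sanity check:
 * complain if the new value would *clear* any currently-set mask bit,
 * i.e. make interrupts more enabled than they were. */
static unsigned long soft_irq_set_level_checked(unsigned long value)
{
	unsigned long old = soft_enabled;

	if (old & ~value)	/* WARN_ON(old & ~value) in the kernel */
		warn_count++;
	soft_enabled = value;
	return old;
}
```

Raising the mask (adding bits) passes silently; lowering it through this helper trips the warning.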

Cheers,
Ben.


* Re: [RFC PATCH v2 10/11] powerpc: Support to replay PMIs
  2016-08-01  8:51     ` Benjamin Herrenschmidt
@ 2016-08-01 10:22       ` Madhavan Srinivasan
  2016-08-01 10:43         ` Nicholas Piggin
  0 siblings, 1 reply; 22+ messages in thread
From: Madhavan Srinivasan @ 2016-08-01 10:22 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Nicholas Piggin; +Cc: mpe, anton, paulus, linuxppc-dev



On Monday 01 August 2016 02:21 PM, Benjamin Herrenschmidt wrote:
> On Mon, 2016-08-01 at 18:07 +1000, Nicholas Piggin wrote:
>> This will replay hardware_interrupt_common in the case we got a PMI
>> interrupt but no EE.
>>
>> Should we just follow the normal pattern here, return 0xf00 for PMI,
>> and replay the same as the other cases?
> Agreed.

OK, but can we handle that in the C code itself, with an
additional check to see whether an EE happened or not?
Because if the PMI landed first, there will be no EE anyway.

Maddy
> Cheers,
> Ben.
>


* Re: [RFC PATCH v2 10/11] powerpc: Support to replay PMIs
  2016-08-01 10:22       ` Madhavan Srinivasan
@ 2016-08-01 10:43         ` Nicholas Piggin
  0 siblings, 0 replies; 22+ messages in thread
From: Nicholas Piggin @ 2016-08-01 10:43 UTC (permalink / raw)
  To: Madhavan Srinivasan
  Cc: Benjamin Herrenschmidt, mpe, anton, paulus, linuxppc-dev

On Mon, 1 Aug 2016 15:52:28 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> On Monday 01 August 2016 02:21 PM, Benjamin Herrenschmidt wrote:
> > On Mon, 2016-08-01 at 18:07 +1000, Nicholas Piggin wrote:  
> >> This will replay hardware_interrupt_common in the case we got a PMI
> >> interrupt but no EE.
> >>
> >> Should we just follow the normal pattern here, return 0xf00 for PMI,
> >> and replay the same as the other cases?  
> > Agreed.  
> 
> OK, but can we handle that in the C code itself, with an
> additional check to see whether an EE happened or not?
> Because if the PMI landed first, there will be no EE anyway.

I just don't know if it's worth making a special case for
it rather than handling it the same way as the others.


end of thread, other threads:[~2016-08-01 10:43 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-31 19:06 [RFC PATCH v2 00/11] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 01/11] Add #defs for paca->soft_enabled flags Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 02/11] Cleanup to use IRQ_DISABLE_LEVEL_* macros for paca->soft_enabled update Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 03/11] powerpc: move set_soft_enabled() Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 04/11] powerpc: Use set_soft_enabled api to update paca->soft_enabled Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 05/11] powerpc: reverse the soft_enable logic Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 06/11] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 07/11] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 08/11] powerpc: Add "mask_lvl" parameter to MASKABLE_* macros Madhavan Srinivasan
2016-08-01  5:21   ` Nicholas Piggin
2016-08-01  5:49     ` Madhavan Srinivasan
2016-07-31 19:06 ` [RFC PATCH v2 09/11] powerpc: Add support to mask perf interrupts Madhavan Srinivasan
2016-08-01  5:29   ` Nicholas Piggin
2016-08-01  6:09     ` Madhavan Srinivasan
2016-08-01  6:48       ` Nicholas Piggin
2016-07-31 19:06 ` [RFC PATCH v2 10/11] powerpc: Support to replay PMIs Madhavan Srinivasan
2016-08-01  8:07   ` Nicholas Piggin
2016-08-01  8:51     ` Benjamin Herrenschmidt
2016-08-01 10:22       ` Madhavan Srinivasan
2016-08-01 10:43         ` Nicholas Piggin
2016-08-01  8:52     ` Benjamin Herrenschmidt
2016-07-31 19:06 ` [RFC PATCH v2 11/11] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
