* [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation
@ 2016-09-15 13:01 Madhavan Srinivasan
  2016-09-15 13:01 ` [PATCH 01/13] powerpc: Add #defs for paca->soft_enabled flags Madhavan Srinivasan
                   ` (13 more replies)
  0 siblings, 14 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Local atomic operations are fast and highly reentrant per-CPU counters,
used for per-cpu variable updates. Local atomic operations only guarantee
variable modification atomicity with respect to the CPU which owns the
data, and they need to be executed in a preemption-safe way.

Here is the design of the patchset. Since local_* operations only need
to be atomic with respect to interrupts (IIUC), we have two options:
either replay the "op" if interrupted, or replay the interrupt after
the "op". The initial patchset posted implemented local_* operations
based on CR5, which replays the "op". That patchset had issues when
rewinding the address pointer from an array, which made the slow path
really slow. Since the CR5-based implementation proposed using __ex_table
to find the rewind address, it raised concerns about the size of
__ex_table and vmlinux.

https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-December/123115.html

This patchset instead uses Benjamin Herrenschmidt's suggestion of using
arch_local_irq_disable() to soft-disable interrupts (including PMIs).
After finishing the "op", arch_local_irq_restore() is called and any
interrupts that occurred in the meantime are replayed.

The current paca->soft_enabled logic is reversed, and the
MASKABLE_EXCEPTION_* macros are extended to support this feature.

The patchset rewrites the current local_* functions to use
arch_local_irq_disable(). The base flow for each function is

 {
        local_irq_pmu_save(flags)
        load
        ..
        store
        local_irq_pmu_restore(flags)
 }
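The flow above can be sketched as a userspace model (illustration only, not the kernel code: the mask values, the local_irq_pmu_*() bodies and the local_t layout here are simplified assumptions):

```c
#include <assert.h>

/*
 * Simplified userspace model of the proposed design. The mask values,
 * the local_irq_pmu_*() bodies and the local_t layout are illustrative
 * assumptions, not the kernel implementation.
 */
#define IRQ_DISABLE_MASK_NONE	0
#define IRQ_DISABLE_MASK_LINUX	1
#define IRQ_DISABLE_MASK_PMU	2

/* stands in for paca->soft_enabled */
static unsigned long soft_enabled = IRQ_DISABLE_MASK_NONE;

static unsigned long local_irq_pmu_save(void)
{
	unsigned long flags = soft_enabled;

	/* soft-disable both ordinary interrupts and PMIs */
	soft_enabled = IRQ_DISABLE_MASK_LINUX | IRQ_DISABLE_MASK_PMU;
	return flags;
}

static void local_irq_pmu_restore(unsigned long flags)
{
	/* the kernel would replay any pending interrupts here */
	soft_enabled = flags;
}

typedef struct { long counter; } local_t;

static void local_add(long i, local_t *l)
{
	unsigned long flags = local_irq_pmu_save();

	l->counter += i;	/* plain load/store, no l[w/d]arx/st[w/d]cx. */
	local_irq_pmu_restore(flags);
}
```

The point of the design is visible in local_add(): the update is a plain load/store rather than a reservation pair, because nothing on this CPU can interrupt the critical section.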

The reason for this approach is that the l[w/d]arx/st[w/d]cx.
instruction pair currently used for local_* operations is heavy
on cycle count, and these instructions don't support a local
variant. So to see whether the new implementation helps, a modified
version of Rusty's benchmark code for local_t was used.

https://lkml.org/lkml/2008/12/16/450

Modifications to Rusty's benchmark code:
 - Executed only local_t test

Here are the values with the patch.

Time in ns per iteration

Local_t         Without Patch           With Patch

_inc                    28              8
_add                    28              8
_read                   3               3
_add_return             28              7

Currently only asm/local.h has been rewritten, and the entire change
has been tested only on a PPC64 (pseries) guest and a PPC64 LE host.
ppc64e_* has only been compile tested.

The first five patches are cleanups which lay the foundation to make
things easier. The sixth patch reverses the current soft_enabled logic;
its commit message details the reason and need for this change. The
seventh and eighth patches refactor the __EXCEPTION_PROLOG_1 code to
support the addition of a new parameter to the MASKABLE_* macros. The new
parameter gives the possible mask for the interrupt. The rest of the
patches add support for a maskable PMI and implement local_t using
local_irq_pmu_*().

Since the patchset is experimental, testing has been done only on the
pseries and powernv platforms. The patchset has only been compile tested
for Book3e.

Changelog RFC v5:
1)Implemented new set of soft_enabled manipulation functions
2)rewritten arch_local_irq_* functions to use the new soft_enabled_*()
3)Add WARN_ON to identify invalid soft_enabled transitions
4)Added powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore() to
  support masking of irqs (with PMI).
5)Added local_irq_pmu_*()s macros with trace_hardirqs_on|off() to match
  include/linux/irqflags.h

Changelog RFC v4:
1)Fix build breaks in ppc64e_defconfig compilation
2)Merged PMI replay code with the exception vector changes patch
3)Renamed the new API to set PMI mask bit as suggested
4)Modified the current arch_local_save and new API function call to
  "OR" and store the value to ->soft_enabled instead of just store.
5)Updated the check in arch_local_irq_restore() to always check for
  greater than or equal to the _LINUX mask bit.
6)Updated the commit messages.

Changelog RFC v3:
1)Squashed PMI masked interrupt patch and replay patch together
2)Have created a new patch which includes a new Kconfig and set_irq_set_mask()
3)Fixed the compilation issue with IRQ_DISABLE_MASK_* macros in book3e_*

Changelog RFC v2:
1)Renamed IRQ_DISABLE_LEVEL_* to IRQ_DISABLE_MASK_* and made logic changes
  to treat soft_enabled as a mask and not a flag or level.
2)Added a new Kconfig variable to support a WARN_ON
3)Refactored patchset for easier review.
4)Made changes to commit messages.
5)Made changes for BOOK3E version

Changelog RFC v1:

1)Commit messages are improved.
2)Renamed the arch_local_irq_disable_var to soft_irq_set_level as suggested
3)Renamed the LAZY_INTERRUPT* macro to IRQ_DISABLE_LEVEL_* as suggested
4)Extended the MASKABLE_EXCEPTION* macros to support additional parameter.
5)Each MASKABLE_EXCEPTION_* macro will carry a "mask_level"
6)Logic to decide on jump to maskable_handler in SOFTEN_TEST is now based on
  "mask_level"
7)__EXCEPTION_PROLOG_1 is factored out to support "mask_level" parameter.
  This reduced the code changes needed for supporting "mask_level" parameters.

Madhavan Srinivasan (13):
  powerpc: Add #defs for paca->soft_enabled flags
  powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for
    paca->soft_enabled update
  powerpc: move set_soft_enabled() and rename
  powerpc: Use soft_enabled_set api to update paca->soft_enabled
  powerpc: Add soft_enabled manipulation functions
  powerpc: reverse the soft_enable logic
  powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  powerpc: Add new _EXCEPTION_PROLOG_1 macro
  powerpc: Introduce new mask bit for soft_enabled
  powerpc: Add "bitmask" paramater to MASKABLE_* macros
  powerpc: Add support to mask perf interrupts and replay them
  powerpc: Add a Kconfig and a functions to set new soft_enabled mask
  powerpc: rewrite local_t using soft_irq

 arch/powerpc/Kconfig                     |   4 ++
 arch/powerpc/include/asm/exception-64s.h | 103 +++++++++++++++++++----------
 arch/powerpc/include/asm/hw_irq.h        | 110 ++++++++++++++++++++++++++++---
 arch/powerpc/include/asm/irqflags.h      |   8 +--
 arch/powerpc/include/asm/kvm_ppc.h       |   2 +-
 arch/powerpc/include/asm/local.h         |  94 ++++++++++++++++++--------
 arch/powerpc/kernel/entry_64.S           |  24 ++++---
 arch/powerpc/kernel/exceptions-64e.S     |   8 +--
 arch/powerpc/kernel/exceptions-64s.S     |  42 +++++++-----
 arch/powerpc/kernel/head_64.S            |   5 +-
 arch/powerpc/kernel/idle_book3e.S        |   3 +-
 arch/powerpc/kernel/idle_power4.S        |   3 +-
 arch/powerpc/kernel/irq.c                |  35 ++++++----
 arch/powerpc/kernel/process.c            |   3 +-
 arch/powerpc/kernel/setup_64.c           |   5 +-
 arch/powerpc/kernel/time.c               |   6 +-
 arch/powerpc/mm/hugetlbpage.c            |   2 +-
 arch/powerpc/perf/core-book3s.c          |   2 +-
 18 files changed, 330 insertions(+), 129 deletions(-)

--
2.7.4


* [PATCH 01/13] powerpc: Add #defs for paca->soft_enabled flags
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-15 13:01 ` [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update Madhavan Srinivasan
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Two #defs, IRQ_DISABLE_MASK_NONE and IRQ_DISABLE_MASK_LINUX,
are added to be used when updating paca->soft_enabled.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index c7d82ff62a33..df5def1f635a 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -27,6 +27,12 @@
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
 #define PACA_IRQ_HMI		0x20
 
+/*
+ * flags for paca->soft_enabled
+ */
+#define IRQ_DISABLE_MASK_NONE		1
+#define IRQ_DISABLE_MASK_LINUX		0
+
 #endif /* CONFIG_PPC64 */
 
 #ifndef __ASSEMBLY__
-- 
2.7.4


* [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
  2016-09-15 13:01 ` [PATCH 01/13] powerpc: Add #defs for paca->soft_enabled flags Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16  9:47   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 03/13] powerpc: move set_soft_enabled() and rename Madhavan Srinivasan
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Replace the hardcoded values used when updating
paca->soft_enabled with IRQ_DISABLE_MASK_* #def.
No logic change.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h |  2 +-
 arch/powerpc/include/asm/hw_irq.h        | 15 ++++++++-------
 arch/powerpc/include/asm/irqflags.h      |  6 +++---
 arch/powerpc/include/asm/kvm_ppc.h       |  2 +-
 arch/powerpc/kernel/entry_64.S           | 16 ++++++++--------
 arch/powerpc/kernel/exceptions-64e.S     |  6 +++---
 arch/powerpc/kernel/head_64.S            |  5 +++--
 arch/powerpc/kernel/idle_book3e.S        |  3 ++-
 arch/powerpc/kernel/idle_power4.S        |  3 ++-
 arch/powerpc/kernel/irq.c                |  9 +++++----
 arch/powerpc/kernel/process.c            |  3 ++-
 arch/powerpc/kernel/setup_64.c           |  3 +++
 arch/powerpc/kernel/time.c               |  2 +-
 arch/powerpc/mm/hugetlbpage.c            |  2 +-
 arch/powerpc/perf/core-book3s.c          |  2 +-
 15 files changed, 44 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index bed66e5743b3..38272fe8a757 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -408,7 +408,7 @@ label##_relon_hv:						\
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	cmpwi	r10,0;							\
+	cmpwi	r10,IRQ_DISABLE_MASK_LINUX;							\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	beq	masked_##h##interrupt
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index df5def1f635a..1fcc2fd7275a 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -64,9 +64,10 @@ static inline unsigned long arch_local_irq_disable(void)
 	unsigned long flags, zero;
 
 	asm volatile(
-		"li %1,0; lbz %0,%2(13); stb %1,%2(13)"
+		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
 		: "=r" (flags), "=&r" (zero)
-		: "i" (offsetof(struct paca_struct, soft_enabled))
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "i" (IRQ_DISABLE_MASK_LINUX)
 		: "memory");
 
 	return flags;
@@ -76,7 +77,7 @@ extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
 {
-	arch_local_irq_restore(1);
+	arch_local_irq_restore(IRQ_DISABLE_MASK_NONE);
 }
 
 static inline unsigned long arch_local_irq_save(void)
@@ -86,7 +87,7 @@ static inline unsigned long arch_local_irq_save(void)
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags == 0;
+	return flags == IRQ_DISABLE_MASK_LINUX;
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -106,9 +107,9 @@ static inline bool arch_irqs_disabled(void)
 	u8 _was_enabled;				\
 	__hard_irq_disable();				\
 	_was_enabled = local_paca->soft_enabled;	\
-	local_paca->soft_enabled = 0;			\
+	local_paca->soft_enabled = IRQ_DISABLE_MASK_LINUX;\
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;	\
-	if (_was_enabled)				\
+	if (_was_enabled == IRQ_DISABLE_MASK_NONE)	\
 		trace_hardirqs_off();			\
 } while(0)
 
@@ -131,7 +132,7 @@ static inline void may_hard_irq_enable(void)
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
 {
-	return !regs->softe;
+	return (regs->softe == IRQ_DISABLE_MASK_LINUX);
 }
 
 extern bool prep_irq_for_idle(void);
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index f2149066fe5d..d0ed2a7d7d10 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -48,8 +48,8 @@
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACASOFTIRQEN(r13);	\
 	lbz	__rB,PACAIRQHAPPENED(r13);	\
-	cmpwi	cr0,__rA,0;			\
-	li	__rA,0;				\
+	cmpwi	cr0,__rA,IRQ_DISABLE_MASK_LINUX;\
+	li	__rA,IRQ_DISABLE_MASK_LINUX;	\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
 	beq	44f;				\
@@ -63,7 +63,7 @@
 
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACAIRQHAPPENED(r13);	\
-	li	__rB,0;				\
+	li	__rB,IRQ_DISABLE_MASK_LINUX;	\
 	ori	__rA,__rA,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACASOFTIRQEN(r13);	\
 	stb	__rA,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 2544edabe7f3..740ee309cea8 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -707,7 +707,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = 1;
+	local_paca->soft_enabled = IRQ_DISABLE_MASK_NONE;
 #endif
 }
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 5afd03e5e8b8..aef7b64cbbeb 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -131,7 +131,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	 */
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	lbz	r10,PACASOFTIRQEN(r13)
-	xori	r10,r10,1
+	xori	r10,r10,IRQ_DISABLE_MASK_NONE
 1:	tdnei	r10,0
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
@@ -147,7 +147,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	/* We do need to set SOFTE in the stack frame or the return
 	 * from interrupt will be painful
 	 */
-	li	r10,1
+	li	r10,IRQ_DISABLE_MASK_NONE
 	std	r10,SOFTE(r1)
 
 	CURRENT_THREAD_INFO(r11, r1)
@@ -725,7 +725,7 @@ resume_kernel:
 	lwz	r8,TI_PREEMPT(r9)
 	cmpwi	cr1,r8,0
 	ld	r0,SOFTE(r1)
-	cmpdi	r0,0
+	cmpdi	r0,IRQ_DISABLE_MASK_LINUX
 	crandc	eq,cr1*4+eq,eq
 	bne	restore
 
@@ -765,11 +765,11 @@ restore:
 	 */
 	ld	r5,SOFTE(r1)
 	lbz	r6,PACASOFTIRQEN(r13)
-	cmpwi	cr0,r5,0
+	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
 	beq	restore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
-	cmpwi	cr0,r6,1
+	cmpwi	cr0,r6,IRQ_DISABLE_MASK_NONE
 	beq	cr0,do_restore
 
 	/*
@@ -788,7 +788,7 @@ restore:
 	 */
 restore_no_replay:
 	TRACE_ENABLE_INTS
-	li	r0,1
+	li	r0,IRQ_DISABLE_MASK_NONE
 	stb	r0,PACASOFTIRQEN(r13);
 
 	/*
@@ -894,7 +894,7 @@ restore_irq_off:
 	beq	1f
 	rlwinm	r7,r7,0,~PACA_IRQ_HARD_DIS
 	stb	r7,PACAIRQHAPPENED(r13)
-1:	li	r0,0
+1:	li	r0,IRQ_DISABLE_MASK_LINUX
 	stb	r0,PACASOFTIRQEN(r13);
 	TRACE_DISABLE_INTS
 	b	do_restore
@@ -1012,7 +1012,7 @@ _GLOBAL(enter_rtas)
 	 * check it with the asm equivalent of WARN_ON
 	 */
 	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdnei	r0,0
+1:	tdnei	r0,IRQ_DISABLE_MASK_LINUX
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 38a1f96430e1..5c628b5696f6 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -210,9 +210,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	ld	r5,SOFTE(r1)
 
 	/* Interrupts had better not already be enabled... */
-	twnei	r6,0
+	twnei	r6,IRQ_DISABLE_MASK_LINUX
 
-	cmpwi	cr0,r5,0
+	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
 	beq	1f
 
 	TRACE_ENABLE_INTS
@@ -352,7 +352,7 @@ ret_from_mc_except:
 
 #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
 	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
-	cmpwi	cr0,r10,0;		/* yes -> go out of line */	    \
+	cmpwi	cr0,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
 	beq	masked_interrupt_book3e_##n
 
 #define PROLOG_ADDITION_2REGS_GEN(n)					    \
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index f765b0434731..4bd58b6ea380 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -732,7 +732,7 @@ _GLOBAL(pmac_secondary_start)
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,0
+	li	r0,IRQ_DISABLE_MASK_LINUX
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -789,6 +789,7 @@ __secondary_start:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
+	li	r7,IRQ_DISABLE_MASK_LINUX
 	stb	r7,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -954,7 +955,7 @@ start_here_common:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,0
+	li	r0,IRQ_DISABLE_MASK_LINUX
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/kernel/idle_book3e.S b/arch/powerpc/kernel/idle_book3e.S
index 48c21acef915..a459c306b04e 100644
--- a/arch/powerpc/kernel/idle_book3e.S
+++ b/arch/powerpc/kernel/idle_book3e.S
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <asm/thread_info.h>
 #include <asm/epapr_hcalls.h>
+#include <asm/hw_irq.h>
 
 /* 64-bit version only for now */
 #ifdef CONFIG_PPC64
@@ -46,7 +47,7 @@ _GLOBAL(\name)
 	bl	trace_hardirqs_on
 	addi    r1,r1,128
 #endif
-	li	r0,1
+	li	r0,IRQ_DISABLE_MASK_NONE
 	stb	r0,PACASOFTIRQEN(r13)
 	
 	/* Interrupts will make use return to LR, so get something we want
diff --git a/arch/powerpc/kernel/idle_power4.S b/arch/powerpc/kernel/idle_power4.S
index f57a19348bdd..785e10619d8d 100644
--- a/arch/powerpc/kernel/idle_power4.S
+++ b/arch/powerpc/kernel/idle_power4.S
@@ -15,6 +15,7 @@
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/irqflags.h>
+#include <asm/hw_irq.h>
 
 #undef DEBUG
 
@@ -53,7 +54,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_CAN_NAP)
 	mfmsr	r7
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	li	r0,1
+	li	r0,IRQ_DISABLE_MASK_NONE
 	stb	r0,PACASOFTIRQEN(r13)	/* we'll hard-enable shortly */
 BEGIN_FTR_SECTION
 	DSSALL
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 08887cf2b20e..ed1123125063 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -67,6 +67,7 @@
 #include <asm/smp.h>
 #include <asm/debug.h>
 #include <asm/livepatch.h>
+#include <asm/hw_irq.h>
 
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
@@ -208,7 +209,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* Write the new soft-enabled value */
 	set_soft_enabled(en);
-	if (!en)
+	if (en == IRQ_DISABLE_MASK_LINUX)
 		return;
 	/*
 	 * From this point onward, we can take interrupts, preempt,
@@ -253,7 +254,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	set_soft_enabled(0);
+	set_soft_enabled(IRQ_DISABLE_MASK_LINUX);
 
 	/*
 	 * Check if anything needs to be re-emitted. We haven't
@@ -263,7 +264,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	replay = __check_irq_replay();
 
 	/* We can soft-enable now */
-	set_soft_enabled(1);
+	set_soft_enabled(IRQ_DISABLE_MASK_NONE);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
@@ -337,7 +338,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = 1;
+	local_paca->soft_enabled = IRQ_DISABLE_MASK_NONE;
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 9ee2623e0f67..6efaea2dc805 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -54,6 +54,7 @@
 #include <asm/debug.h>
 #ifdef CONFIG_PPC64
 #include <asm/firmware.h>
+#include <asm/hw_irq.h>
 #endif
 #include <asm/code-patching.h>
 #include <asm/exec.h>
@@ -1441,7 +1442,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 			childregs->gpr[14] = ppc_function_entry((void *)usp);
 #ifdef CONFIG_PPC64
 		clear_tsk_thread_flag(p, TIF_32BIT);
-		childregs->softe = 1;
+		childregs->softe = IRQ_DISABLE_MASK_NONE;
 #endif
 		childregs->gpr[15] = kthread_arg;
 		p->thread.regs = NULL;	/* no user register state */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 7ac8e6eaab5b..f31930b9bfc1 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -67,6 +67,7 @@
 #include <asm/livepatch.h>
 #include <asm/opal.h>
 #include <asm/cputhreads.h>
+#include <asm/hw_irq.h>
 
 #ifdef DEBUG
 #define DBG(fmt...) udbg_printf(fmt)
@@ -195,6 +196,8 @@ static void __init fixup_boot_paca(void)
 	get_paca()->cpu_start = 1;
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
+	/* Mark interrupts disabled in PACA */
+	get_paca()->soft_enabled = IRQ_DISABLE_MASK_LINUX;
 }
 
 static void __init configure_exceptions(void)
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 3efbedefba6a..7105757cdb90 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -268,7 +268,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = 0;
+	local_paca->soft_enabled = IRQ_DISABLE_MASK_LINUX;
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 7372ee13eb1e..3270fa7880cd 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -914,7 +914,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * So long as we atomically load page table pointers we are safe against teardown,
  * we can follow the address down to the the page and take a ref on it.
  * This function need to be called with interrupts disabled. We use this variant
- * when we have MSR[EE] = 0 but the paca->soft_enabled = 1
+ * when we have MSR[EE] = 0 but the paca->soft_enabled = IRQ_DISABLE_MASK_NONE
  */
 
 pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 4ed377f0f7b2..5e8302f85e3d 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -313,7 +313,7 @@ static inline void perf_read_regs(struct pt_regs *regs)
  */
 static inline int perf_intr_is_nmi(struct pt_regs *regs)
 {
-	return !regs->softe;
+	return (regs->softe == IRQ_DISABLE_MASK_LINUX);
 }
 
 /*
-- 
2.7.4


* [PATCH 03/13] powerpc: move set_soft_enabled() and rename
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
  2016-09-15 13:01 ` [PATCH 01/13] powerpc: Add #defs for paca->soft_enabled flags Madhavan Srinivasan
  2016-09-15 13:01 ` [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16  9:50   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled Madhavan Srinivasan
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Move set_soft_enabled() from arch/powerpc/kernel/irq.c to
asm/hw_irq.h, and rename it soft_enabled_set().
This way paca->soft_enabled updates can be forced through the wrapper.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h |  6 ++++++
 arch/powerpc/kernel/irq.c         | 12 +++---------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 1fcc2fd7275a..8fad8c24760b 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -47,6 +47,12 @@ extern void unknown_exception(struct pt_regs *regs);
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
 
+static inline notrace void soft_enabled_set(unsigned long enable)
+{
+	__asm__ __volatile__("stb %0,%1(13)"
+	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	unsigned long flags;
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ed1123125063..5a926ea5bd0b 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -107,12 +107,6 @@ static inline notrace unsigned long get_irq_happened(void)
 	return happened;
 }
 
-static inline notrace void set_soft_enabled(unsigned long enable)
-{
-	__asm__ __volatile__("stb %0,%1(13)"
-	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
-}
-
 static inline notrace int decrementer_check_overflow(void)
 {
  	u64 now = get_tb_or_rtc();
@@ -208,7 +202,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	unsigned int replay;
 
 	/* Write the new soft-enabled value */
-	set_soft_enabled(en);
+	soft_enabled_set(en);
 	if (en == IRQ_DISABLE_MASK_LINUX)
 		return;
 	/*
@@ -254,7 +248,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	set_soft_enabled(IRQ_DISABLE_MASK_LINUX);
+	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
 
 	/*
 	 * Check if anything needs to be re-emitted. We haven't
@@ -264,7 +258,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	replay = __check_irq_replay();
 
 	/* We can soft-enable now */
-	set_soft_enabled(IRQ_DISABLE_MASK_NONE);
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
-- 
2.7.4


* [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (2 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 03/13] powerpc: move set_soft_enabled() and rename Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16  9:53   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 05/13] powerpc: Add soft_enabled manipulation functions Madhavan Srinivasan
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
which updates paca->soft_enabled and returns the previous value.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h  | 14 ++++++++++++++
 arch/powerpc/include/asm/kvm_ppc.h |  2 +-
 arch/powerpc/kernel/irq.c          |  2 +-
 arch/powerpc/kernel/setup_64.c     |  4 ++--
 arch/powerpc/kernel/time.c         |  6 +++---
 5 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 8fad8c24760b..f828b8f8df02 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
 	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
 }
 
+static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
+{
+	unsigned long flags;
+
+	asm volatile(
+		"lbz %0,%1(13); stb %2,%1(13)"
+		: "=r" (flags)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "r" (enable)
+		: "memory");
+
+	return flags;
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	unsigned long flags;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 740ee309cea8..07f6a51ae99f 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -707,7 +707,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = IRQ_DISABLE_MASK_NONE;
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 #endif
 }
 
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 5a926ea5bd0b..58462ce186fa 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -332,7 +332,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = IRQ_DISABLE_MASK_NONE;
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index f31930b9bfc1..f0f882166dcc 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -197,7 +197,7 @@ static void __init fixup_boot_paca(void)
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = IRQ_DISABLE_MASK_LINUX;
+	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
 }
 
 static void __init configure_exceptions(void)
@@ -334,7 +334,7 @@ void __init early_setup(unsigned long dt_ptr)
 void early_setup_secondary(void)
 {
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = 0;
+	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
 
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 7105757cdb90..483313aa311f 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -259,7 +259,7 @@ static u64 scan_dispatch_log(u64 stop_tb)
 void accumulate_stolen_time(void)
 {
 	u64 sst, ust;
-	u8 save_soft_enabled = local_paca->soft_enabled;
+	unsigned long save_soft_enabled;
 	struct cpu_accounting_data *acct = &local_paca->accounting;
 
 	/* We are called early in the exception entry, before
@@ -268,7 +268,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = IRQ_DISABLE_MASK_LINUX;
+	save_soft_enabled = soft_enabled_set_return(IRQ_DISABLE_MASK_LINUX);
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
@@ -276,7 +276,7 @@ void accumulate_stolen_time(void)
 	acct->user_time -= ust;
 	local_paca->stolen_time += ust + sst;
 
-	local_paca->soft_enabled = save_soft_enabled;
+	soft_enabled_set(save_soft_enabled);
 }
 
 static inline u64 calculate_stolen_time(u64 stop_tb)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 05/13] powerpc: Add soft_enabled manipulation functions
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (3 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16  9:57   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 06/13] powerpc: reverse the soft_enable logic Madhavan Srinivasan
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Add new soft_enabled_* manipulation functions and implement
the arch_local_* functions using the soft_enabled_* wrappers.
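The wrappers are read-modify-write accessors for the per-CPU flag; their semantics, stripped of the inline asm and the r13 PACA access, can be sketched in plain C (illustrative model only, not kernel code — the real versions are notrace inline asm):

```c
#include <assert.h>

/* Illustrative stand-in for the per-CPU paca->soft_enabled byte. */
static unsigned long soft_enabled;

/* Model of soft_enabled_return(), i.e. arch_local_save_flags(). */
static unsigned long soft_enabled_return(void)
{
	return soft_enabled;
}

/* Model of soft_enabled_set_return(): store the new state and
 * hand back the old one so the caller can restore it later. */
static unsigned long soft_enabled_set_return(unsigned long enable)
{
	unsigned long flags = soft_enabled;	/* old value, for restore */

	soft_enabled = enable;			/* store new state */
	return flags;
}
```

With this model, arch_local_irq_disable() is just soft_enabled_set_return(IRQ_DISABLE_MASK_LINUX), exactly as the patch below defines it.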

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index f828b8f8df02..dc3c248f9244 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -53,21 +53,7 @@ static inline notrace void soft_enabled_set(unsigned long enable)
 	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
 }
 
-static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
-{
-	unsigned long flags;
-
-	asm volatile(
-		"lbz %0,%1(13); stb %2,%1(13)"
-		: "=r" (flags)
-		: "i" (offsetof(struct paca_struct, soft_enabled)),\
-		  "r" (enable)
-		: "memory");
-
-	return flags;
-}
-
-static inline unsigned long arch_local_save_flags(void)
+static inline notrace unsigned long soft_enabled_return(void)
 {
 	unsigned long flags;
 
@@ -79,20 +65,30 @@ static inline unsigned long arch_local_save_flags(void)
 	return flags;
 }
 
-static inline unsigned long arch_local_irq_disable(void)
+static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
 {
 	unsigned long flags, zero;
 
 	asm volatile(
-		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
+		"mr %1,%3; lbz %0,%2(13); stb %1,%2(13)"
 		: "=r" (flags), "=&r" (zero)
 		: "i" (offsetof(struct paca_struct, soft_enabled)),\
-		  "i" (IRQ_DISABLE_MASK_LINUX)
+		  "r" (enable)
 		: "memory");
 
 	return flags;
 }
 
+static inline unsigned long arch_local_save_flags(void)
+{
+	return soft_enabled_return();
+}
+
+static inline unsigned long arch_local_irq_disable(void)
+{
+	return soft_enabled_set_return(IRQ_DISABLE_MASK_LINUX);
+}
+
 extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 06/13] powerpc: reverse the soft_enable logic
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (4 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 05/13] powerpc: Add soft_enabled manipulation functions Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16 10:05   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 07/13] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

"paca->soft_enabled" is used as a flag to mask certain classes of interrupts.
Currently supported flags values and their details:

soft_enabled    MSR[EE]

0               0       Disabled (PMI and HMI not masked)
1               1       Enabled

"paca->soft_enabled" is initialized to 1, marking interrupts as
enabled. arch_local_irq_disable() toggles the value when interrupts
need to be disabled. At this point the interrupts are not actually disabled;
instead, the interrupt vector has code to check the flag and mask the
interrupt when it occurs. "Masking" here means updating paca->irq_happened
and returning. arch_local_irq_restore() is called to re-enable interrupts,
and it checks for and replays any interrupts that occurred.

Now, as mentioned, the current logic does not mask "performance monitoring
interrupts" (PMIs), which are implemented as NMIs. But this patchset depends
on local_irq_* for a successful local_* update, meaning all possible
interrupts must be masked during a local_* update and replayed after the update.

So the idea here is to reverse the "paca->soft_enabled" logic. New values and
details:

soft_enabled    MSR[EE]

1               0       Disabled  (PMI and HMI not masked)
0               1       Enabled

The reason for this change is to lay the foundation for a third "soft_enabled"
value, "2", to add support for masking PMIs. When ->soft_enabled is
set to "2", PMIs are masked; when set to "1", they are not.
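The reversed convention can be modelled in plain C (a hedged sketch — the #define names mirror this patch, the helper functions are illustrative, not kernel code):

```c
#include <assert.h>

/* Values after this patch: 0 = enabled, non-zero = disabled. */
#define IRQ_DISABLE_MASK_NONE	0
#define IRQ_DISABLE_MASK_LINUX	1

/* Illustrative stand-in for paca->soft_enabled. */
static unsigned long soft_enabled = IRQ_DISABLE_MASK_NONE;

/* Model of arch_local_irq_disable(): set the flag, return old value. */
static unsigned long model_irq_disable(void)
{
	unsigned long old = soft_enabled;

	soft_enabled = IRQ_DISABLE_MASK_LINUX;
	return old;
}

/* Model of the exception-entry soft-disable test: with the reversed
 * logic a non-zero value now means "masked". */
static int irqs_soft_disabled(void)
{
	return soft_enabled != IRQ_DISABLE_MASK_NONE;
}
```

The advantage of "0 = enabled" is that additional disable states can later be expressed as extra non-zero values (or bits) without disturbing the enabled case.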

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 4 ++--
 arch/powerpc/kernel/entry_64.S    | 5 ++---
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index dc3c248f9244..fd9b421f9020 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -30,8 +30,8 @@
 /*
  * flags for paca->soft_enabled
  */
-#define IRQ_DISABLE_MASK_NONE		1
-#define IRQ_DISABLE_MASK_LINUX		0
+#define IRQ_DISABLE_MASK_NONE		0
+#define IRQ_DISABLE_MASK_LINUX		1
 
 #endif /* CONFIG_PPC64 */
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index aef7b64cbbeb..879aeb11ad29 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -131,8 +131,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	 */
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	lbz	r10,PACASOFTIRQEN(r13)
-	xori	r10,r10,IRQ_DISABLE_MASK_NONE
-1:	tdnei	r10,0
+1:	tdnei	r10,IRQ_DISABLE_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 
@@ -1012,7 +1011,7 @@ _GLOBAL(enter_rtas)
 	 * check it with the asm equivalent of WARN_ON
 	 */
 	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdnei	r0,IRQ_DISABLE_MASK_LINUX
+1:	tdeqi	r0,IRQ_DISABLE_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 07/13] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (5 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 06/13] powerpc: reverse the soft_enable logic Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16 10:08   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Currently we use both EXCEPTION_PROLOG_1 and __EXCEPTION_PROLOG_1
in the MASKABLE_* macros. As a cleanup, this patch makes the MASKABLE_*
macros use only __EXCEPTION_PROLOG_1. There is no logic change.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 38272fe8a757..75e262466b85 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -450,7 +450,7 @@ label##_hv:								\
 #define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
 	.globl label##_hv;						\
 label##_hv:								\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
@@ -478,7 +478,7 @@ label##_relon_hv:							\
 #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
 	.globl label##_relon_hv;					\
 label##_relon_hv:							\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 /*
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (6 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 07/13] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16 10:12   ` Nicholas Piggin
  2016-09-15 13:01 ` [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled Madhavan Srinivasan
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

To support the addition of a "bitmask" parameter to the MASKABLE_*
macros, factor out the EXCEPTION_PROLOG_1 macro.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 75e262466b85..dd3253bd0d8e 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -161,18 +161,40 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	std	r10,area+EX_R10(r13);	/* save r10 - r12 */		\
 	OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
 
-#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+#define __EXCEPTION_PROLOG_1_PRE(area)					\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
 	SAVE_CTR(r10, area);						\
-	mfcr	r9;							\
-	extra(vec);							\
+	mfcr	r9;
+
+#define __EXCEPTION_PROLOG_1_POST(area)					\
 	std	r11,area+EX_R11(r13);					\
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
+
+/*
+ * This version of the EXCEPTION_PROLOG_1 will carry an
+ * additional parameter called "bitmask" to support
+ * checking of the interrupt maskable level in the SOFTEN_TEST.
+ * Intended to be used in MASKABLE_EXCEPTION_* macros.
+ */
+#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+	__EXCEPTION_PROLOG_1_PRE(area);					\
+	extra(vec);							\
+	__EXCEPTION_PROLOG_1_POST(area);
+
+/*
+ * This version of the EXCEPTION_PROLOG_1 is intended
+ * to be used in STD_EXCEPTION* macros
+ */
+#define _EXCEPTION_PROLOG_1(area, extra, vec)				\
+	__EXCEPTION_PROLOG_1_PRE(area);					\
+	extra(vec);							\
+	__EXCEPTION_PROLOG_1_POST(area);
+
 #define EXCEPTION_PROLOG_1(area, extra, vec)				\
-	__EXCEPTION_PROLOG_1(area, extra, vec)
+	_EXCEPTION_PROLOG_1(area, extra, vec)
 
 #define __EXCEPTION_PROLOG_PSERIES_1(label, h)				\
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (7 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
@ 2016-09-15 13:01 ` Madhavan Srinivasan
  2016-09-16 10:16   ` Nicholas Piggin
  2016-09-15 13:02 ` [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros Madhavan Srinivasan
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:01 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Currently soft_enabled is used as a flag to determine
the interrupt state. This patch extends soft_enabled
to be used as a mask instead of a flag.
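This is why the diff below changes the exception-entry tests from an equality compare (cmpwi) to an AND (andi.): a mask test lets each handler check only the bit it cares about, so several disable bits can be in force at once. In C terms (illustrative sketch, mask names taken from this patch):

```c
#include <assert.h>

#define IRQ_DISABLE_MASK_NONE	0
#define IRQ_DISABLE_MASK_LINUX	1
#define IRQ_DISABLE_MASK_PMU	2

/* Old flag-style test: only an exact match counted as disabled. */
static int flag_test(unsigned long soft_enabled)
{
	return soft_enabled == IRQ_DISABLE_MASK_LINUX;
}

/* New mask-style test: check just the bit(s) relevant to this
 * handler, ignoring any other disable bits that happen to be set. */
static int mask_test(unsigned long soft_enabled, unsigned long bitmask)
{
	return (soft_enabled & bitmask) != 0;
}
```

With both bits set (LINUX | PMU == 3), the old equality test would wrongly report "not disabled", while the mask test still sees the LINUX bit.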

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 4 ++--
 arch/powerpc/include/asm/hw_irq.h        | 1 +
 arch/powerpc/include/asm/irqflags.h      | 4 ++--
 arch/powerpc/kernel/entry_64.S           | 4 ++--
 arch/powerpc/kernel/exceptions-64e.S     | 6 +++---
 5 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index dd3253bd0d8e..1eea4ab75607 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -430,9 +430,9 @@ label##_relon_hv:						\
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	cmpwi	r10,IRQ_DISABLE_MASK_LINUX;							\
+	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	beq	masked_##h##interrupt
+	bne	masked_##h##interrupt
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
 #define SOFTEN_TEST_PR(vec)						\
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index fd9b421f9020..245262c02bab 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -32,6 +32,7 @@
  */
 #define IRQ_DISABLE_MASK_NONE		0
 #define IRQ_DISABLE_MASK_LINUX		1
+#define IRQ_DISABLE_MASK_PMU		2
 
 #endif /* CONFIG_PPC64 */
 
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index d0ed2a7d7d10..9ff09747a226 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -48,11 +48,11 @@
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACASOFTIRQEN(r13);	\
 	lbz	__rB,PACAIRQHAPPENED(r13);	\
-	cmpwi	cr0,__rA,IRQ_DISABLE_MASK_LINUX;\
+	andi.	__rA,__rA,IRQ_DISABLE_MASK_LINUX;\
 	li	__rA,IRQ_DISABLE_MASK_LINUX;	\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
-	beq	44f;				\
+	bne	44f;				\
 	stb	__rA,PACASOFTIRQEN(r13);	\
 	TRACE_DISABLE_INTS;			\
 44:
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 879aeb11ad29..533e363914a9 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -764,8 +764,8 @@ restore:
 	 */
 	ld	r5,SOFTE(r1)
 	lbz	r6,PACASOFTIRQEN(r13)
-	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
-	beq	restore_irq_off
+	andi.	r5,r5,IRQ_DISABLE_MASK_LINUX
+	bne	restore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
 	cmpwi	cr0,r6,IRQ_DISABLE_MASK_NONE
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 5c628b5696f6..8e40df2c2f30 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -212,8 +212,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	/* Interrupts had better not already be enabled... */
 	twnei	r6,IRQ_DISABLE_MASK_LINUX
 
-	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
-	beq	1f
+	andi.	r5,r5,IRQ_DISABLE_MASK_LINUX
+	bne	1f
 
 	TRACE_ENABLE_INTS
 	stb	r5,PACASOFTIRQEN(r13)
@@ -352,7 +352,7 @@ ret_from_mc_except:
 
 #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
 	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
-	cmpwi	cr0,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
+	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
 	beq	masked_interrupt_book3e_##n
 
 #define PROLOG_ADDITION_2REGS_GEN(n)					    \
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (8 preceding siblings ...)
  2016-09-15 13:01 ` [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled Madhavan Srinivasan
@ 2016-09-15 13:02 ` Madhavan Srinivasan
  2016-09-16 11:03   ` Nicholas Piggin
  2016-09-15 13:02 ` [PATCH 11/13] powerpc: Add support to mask perf interrupts and replay them Madhavan Srinivasan
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:02 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Make explicit the interrupt masking level supported
by a given interrupt handler. The patch correspondingly
extends the MASKABLE_* macros with an additional parameter.
The "bitmask" parameter is passed to the SOFTEN_TEST macro to
decide on masking the interrupt.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 62 ++++++++++++++++----------------
 arch/powerpc/kernel/exceptions-64s.S     | 36 ++++++++++++-------
 2 files changed, 54 insertions(+), 44 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 1eea4ab75607..41be0c2d7658 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -179,9 +179,9 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
  * checking of the interrupt maskable level in the SOFTEN_TEST.
  * Intended to be used in MASKABLE_EXCEPTION_* macros.
  */
-#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+#define __EXCEPTION_PROLOG_1(area, extra, vec, bitmask)			\
 	__EXCEPTION_PROLOG_1_PRE(area);					\
-	extra(vec);							\
+	extra(vec, bitmask);						\
 	__EXCEPTION_PROLOG_1_POST(area);
 
 /*
@@ -428,79 +428,79 @@ label##_relon_hv:						\
 #define SOFTEN_VALUE_0xea0	PACA_IRQ_EE
 #define SOFTEN_VALUE_0xea2	PACA_IRQ_EE
 
-#define __SOFTEN_TEST(h, vec)						\
+#define __SOFTEN_TEST(h, vec, bitmask)					\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;				\
+	andi.	r10,r10,bitmask;					\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	bne	masked_##h##interrupt
-#define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
+#define _SOFTEN_TEST(h, vec, bitmask)	__SOFTEN_TEST(h, vec, bitmask)
 
-#define SOFTEN_TEST_PR(vec)						\
+#define SOFTEN_TEST_PR(vec, bitmask)					\
 	KVMTEST(vec);							\
-	_SOFTEN_TEST(EXC_STD, vec)
+	_SOFTEN_TEST(EXC_STD, vec, bitmask)
 
-#define SOFTEN_TEST_HV(vec)						\
+#define SOFTEN_TEST_HV(vec, bitmask)					\
 	KVMTEST(vec);							\
-	_SOFTEN_TEST(EXC_HV, vec)
+	_SOFTEN_TEST(EXC_HV, vec, bitmask)
 
-#define SOFTEN_NOTEST_PR(vec)		_SOFTEN_TEST(EXC_STD, vec)
-#define SOFTEN_NOTEST_HV(vec)		_SOFTEN_TEST(EXC_HV, vec)
+#define SOFTEN_NOTEST_PR(vec, bitmask)		_SOFTEN_TEST(EXC_STD, vec, bitmask)
+#define SOFTEN_NOTEST_HV(vec, bitmask)		_SOFTEN_TEST(EXC_HV, vec, bitmask)
 
-#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
+#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)	\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, h);
 
-#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
-	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
+#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)	\
+	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)
 
-#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
+#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label, bitmask)		\
 	. = loc;							\
 	.globl label##_pSeries;						\
 label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
-				    EXC_STD, SOFTEN_TEST_PR)
+				    EXC_STD, SOFTEN_TEST_PR, bitmask)
 
-#define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
+#define MASKABLE_EXCEPTION_HV(loc, vec, label, bitmask)			\
 	. = loc;							\
 	.globl label##_hv;						\
 label##_hv:								\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
-				    EXC_HV, SOFTEN_TEST_HV)
+				    EXC_HV, SOFTEN_TEST_HV, bitmask)
 
-#define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
+#define MASKABLE_EXCEPTION_HV_OOL(vec, label, bitmask)			\
 	.globl label##_hv;						\
 label##_hv:								\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec, bitmask);	\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
-#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
+#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask);		\
 	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, h);
-#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
-	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)
+#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)\
+	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)
 
-#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label)		\
+#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label, bitmask)	\
 	. = loc;							\
 	.globl label##_relon_pSeries;					\
 label##_relon_pSeries:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
-					  EXC_STD, SOFTEN_NOTEST_PR)
+					  EXC_STD, SOFTEN_NOTEST_PR, bitmask)
 
-#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label)			\
+#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, bitmask)		\
 	. = loc;							\
 	.globl label##_relon_hv;					\
 label##_relon_hv:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
-					  EXC_HV, SOFTEN_NOTEST_HV)
+					  EXC_HV, SOFTEN_NOTEST_HV, bitmask)
 
-#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
+#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label, bitmask)		\
 	.globl label##_relon_hv;					\
 label##_relon_hv:							\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec, bitmask);\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 /*
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index bffec73dbffc..581a10bdb34a 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -221,11 +221,13 @@ hardware_interrupt_pSeries:
 hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
-					    EXC_HV, SOFTEN_TEST_HV)
+					    EXC_HV, SOFTEN_TEST_HV,
+						IRQ_DISABLE_MASK_LINUX)
 		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
-					    EXC_STD, SOFTEN_TEST_PR)
+					    EXC_STD, SOFTEN_TEST_PR,
+						IRQ_DISABLE_MASK_LINUX)
 		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 
@@ -241,11 +243,13 @@ hardware_interrupt_hv:
 	. = 0x900
 	.globl decrementer_pSeries
 decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR,
+							IRQ_DISABLE_MASK_LINUX)
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
-	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
+	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super,
+							IRQ_DISABLE_MASK_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)
 
 	STD_EXCEPTION_PSERIES(0xb00, trap_0b)
@@ -582,13 +586,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe22)
 	STD_EXCEPTION_HV_OOL(0xe42, emulation_assist)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe42)
-	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception)
+	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception,IRQ_DISABLE_MASK_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe62)
 
-	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell)
+	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell,IRQ_DISABLE_MASK_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe82)
 
-	MASKABLE_EXCEPTION_HV_OOL(0xea2, h_virt_irq)
+	MASKABLE_EXCEPTION_HV_OOL(0xea2, h_virt_irq,IRQ_DISABLE_MASK_LINUX)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xea2)
 
 	/* moved from 0xf00 */
@@ -824,16 +828,20 @@ instruction_access_slb_relon_pSeries:
 hardware_interrupt_relon_pSeries:
 hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt, EXC_HV, SOFTEN_TEST_HV)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt,
+			EXC_HV, SOFTEN_TEST_HV, IRQ_DISABLE_MASK_LINUX)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt,
+			EXC_STD, SOFTEN_TEST_PR, IRQ_DISABLE_MASK_LINUX)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	STD_RELON_EXCEPTION_PSERIES(0x4600, 0x600, alignment)
 	STD_RELON_EXCEPTION_PSERIES(0x4700, 0x700, program_check)
 	STD_RELON_EXCEPTION_PSERIES(0x4800, 0x800, fp_unavailable)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer)
+	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer,
+							IRQ_DISABLE_MASK_LINUX)
 	STD_RELON_EXCEPTION_HV(0x4980, 0x982, hdecrementer)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super)
+	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super,
+							IRQ_DISABLE_MASK_LINUX)
 	STD_RELON_EXCEPTION_PSERIES(0x4b00, 0xb00, trap_0b)
 
 	. = 0x4c00
@@ -1132,8 +1140,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 
 	/* Equivalents to the above handlers for relocation-on interrupt vectors */
 	STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xea0, h_virt_irq)
+	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell,
+							IRQ_DISABLE_MASK_LINUX)
+	MASKABLE_RELON_EXCEPTION_HV_OOL(0xea0, h_virt_irq,
+							IRQ_DISABLE_MASK_LINUX)
 
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 11/13] powerpc: Add support to mask perf interrupts and replay them
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (9 preceding siblings ...)
  2016-09-15 13:02 ` [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros Madhavan Srinivasan
@ 2016-09-15 13:02 ` Madhavan Srinivasan
  2016-09-16 10:47   ` Nicholas Piggin
  2016-09-15 13:02 ` [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask Madhavan Srinivasan
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:02 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

To support masking of PMIs, a couple of new interrupt handler
macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
MASKABLE_RELON_EXCEPTION_PSERIES_OOL.

A couple of new IRQ #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*",
are added for use in the exception code to check for PMI interrupts.

In the masked_interrupt handler, for PMIs we clear MSR[EE] and
return. In __check_irq_replay(), the PMI interrupt is replayed
by calling the performance_monitor_common handler.
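The replay ordering added to __check_irq_replay() can be sketched as a simplified C model (illustrative only — the PACA_IRQ_EE value is assumed here, and the real code also consults other sources such as the decrementer overflow check):

```c
#include <assert.h>

#define PACA_IRQ_EE	0x04	/* assumed value, for illustration */
#define PACA_IRQ_DEC	0x08
#define PACA_IRQ_PMI	0x40

/* Simplified model of __check_irq_replay(): given the set of
 * interrupts that fired while soft-disabled, clear and return the
 * vector of the highest-priority one to replay, or 0 if none. */
static unsigned int check_irq_replay(unsigned long *irq_happened)
{
	unsigned long happened = *irq_happened;

	*irq_happened &= ~PACA_IRQ_DEC;
	if (happened & PACA_IRQ_DEC)
		return 0x900;

	/* New in this patch: replay a masked PMI at vector 0xf00,
	 * ahead of external interrupts. */
	*irq_happened &= ~PACA_IRQ_PMI;
	if (happened & PACA_IRQ_PMI)
		return 0xf00;

	*irq_happened &= ~PACA_IRQ_EE;
	if (happened & PACA_IRQ_EE)
		return 0x500;

	return 0;
}
```

Each call drains one pending source; callers loop until it returns 0, mirroring how arch_local_irq_restore() replays interrupts one at a time.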

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 13 +++++++++++++
 arch/powerpc/include/asm/hw_irq.h        |  1 +
 arch/powerpc/kernel/entry_64.S           |  5 +++++
 arch/powerpc/kernel/exceptions-64s.S     |  6 ++++--
 arch/powerpc/kernel/irq.c                | 12 +++++++++++-
 5 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 41be0c2d7658..ca40b5c59869 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -427,6 +427,7 @@ label##_relon_hv:						\
 #define SOFTEN_VALUE_0xe62	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xea0	PACA_IRQ_EE
 #define SOFTEN_VALUE_0xea2	PACA_IRQ_EE
+#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
 
 #define __SOFTEN_TEST(h, vec, bitmask)					\
 	lbz	r10,PACASOFTIRQEN(r13);					\
@@ -462,6 +463,12 @@ label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_STD, SOFTEN_TEST_PR, bitmask)
 
+#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label, bitmask)		\
+	.globl label##_pSeries;						\
+label##_pSeries:							\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec, bitmask);	\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_HV(loc, vec, label, bitmask)			\
 	. = loc;							\
 	.globl label##_hv;						\
@@ -490,6 +497,12 @@ label##_relon_pSeries:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_STD, SOFTEN_NOTEST_PR, bitmask)
 
+#define MASKABLE_RELON_EXCEPTION_PSERIES_OOL(vec, label, bitmask)	\
+	.globl label##_relon_pSeries;					\
+label##_relon_pSeries:							\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec, bitmask);\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, bitmask)		\
 	. = loc;							\
 	.globl label##_relon_hv;					\
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 245262c02bab..850fdffb59eb 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -26,6 +26,7 @@
 #define PACA_IRQ_DEC		0x08 /* Or FIT */
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
 #define PACA_IRQ_HMI		0x20
+#define PACA_IRQ_PMI		0x40
 
 /*
  * flags for paca->soft_enabled
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 533e363914a9..e3baf9c24d0e 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -933,6 +933,11 @@ restore_check_irq_replay:
 	addi	r3,r1,STACK_FRAME_OVERHEAD;
  	bl	do_IRQ
 	b	ret_from_except
+1:	cmpwi	cr0,r3,0xf00
+	bne	1f
+	addi	r3,r1,STACK_FRAME_OVERHEAD;
+	bl	performance_monitor_exception
+	b	ret_from_except
 1:	cmpwi	cr0,r3,0xe60
 	bne	1f
 	addi	r3,r1,STACK_FRAME_OVERHEAD;
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 581a10bdb34a..19138a411700 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -596,7 +596,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xea2)
 
 	/* moved from 0xf00 */
-	STD_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
+	MASKABLE_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor, IRQ_DISABLE_MASK_PMU)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf00)
 	STD_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf20)
@@ -671,6 +671,8 @@ _GLOBAL(__replay_interrupt)
 	beq	decrementer_common
 	cmpwi	r3,0x500
 	beq	hardware_interrupt_common
+	cmpwi	r3,0xf00
+	beq	performance_monitor_common
 BEGIN_FTR_SECTION
 	cmpwi	r3,0xe80
 	beq	h_doorbell_common
@@ -1145,7 +1147,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	MASKABLE_RELON_EXCEPTION_HV_OOL(0xea0, h_virt_irq,
 							IRQ_DISABLE_MASK_LINUX)
 
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
+	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor, IRQ_DISABLE_MASK_PMU)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
 	STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 58462ce186fa..9e5e9a6d4147 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -159,6 +159,14 @@ notrace unsigned int __check_irq_replay(void)
 	if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow())
 		return 0x900;
 
+	/*
+	 * In masked_handler() for PMI, we disable MSR[EE] and return.
+	 * Replay it here.
+	 */
+	local_paca->irq_happened &= ~PACA_IRQ_PMI;
+	if (happened & PACA_IRQ_PMI)
+		return 0xf00;
+
 	/* Finally check if an external interrupt happened */
 	local_paca->irq_happened &= ~PACA_IRQ_EE;
 	if (happened & PACA_IRQ_EE)
@@ -203,7 +211,9 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* Write the new soft-enabled value */
 	soft_enabled_set(en);
-	if (en == IRQ_DISABLE_MASK_LINUX)
+
+	/* any bits still disabled */
+	if (en)
 		return;
 	/*
 	 * From this point onward, we can take interrupts, preempt,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (10 preceding siblings ...)
  2016-09-15 13:02 ` [PATCH 11/13] powerpc: Add support to mask perf interrupts and replay them Madhavan Srinivasan
@ 2016-09-15 13:02 ` Madhavan Srinivasan
  2016-09-16 10:34   ` Nicholas Piggin
  2016-09-16 10:56   ` Nicholas Piggin
  2016-09-15 13:02 ` [PATCH 13/13] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
  2016-09-16 11:08 ` [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Nicholas Piggin
  13 siblings, 2 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:02 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

A new Kconfig option, CONFIG_IRQ_DEBUG_SUPPORT, is added to enable a
WARN_ON that flags invalid soft_enabled transitions. The code previously
under CONFIG_TRACE_IRQFLAGS in arch_local_irq_restore() has also been
moved under this new Kconfig option, as suggested.

To support disabling and enabling of interrupts together with PMIs, new
powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore()
functions are added. powerpc_local_irq_pmu_save() is implemented using a
new soft_enabled manipulation function, soft_enabled_or_return().
local_irq_pmu_* macros are provided to access these powerpc_local_irq_pmu*
functions and include trace_hardirqs_on|off() calls to match what we
have in include/linux/irqflags.h.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/Kconfig              |  4 +++
 arch/powerpc/include/asm/hw_irq.h | 69 ++++++++++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/irq.c         |  8 +++--
 3 files changed, 78 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 927d2ab2ce08..878f05925340 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -51,6 +51,10 @@ config TRACE_IRQFLAGS_SUPPORT
 	bool
 	default y
 
+config IRQ_DEBUG_SUPPORT
+	bool
+	default n
+
 config LOCKDEP_SUPPORT
 	bool
 	default y
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 850fdffb59eb..86f9736fdbb1 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -81,6 +81,20 @@ static inline notrace unsigned long soft_enabled_set_return(unsigned long enable
 	return flags;
 }
 
+static inline notrace unsigned long soft_enabled_or_return(unsigned long enable)
+{
+	unsigned long flags, zero;
+
+	asm volatile(
+		"mr %1,%3; lbz %0,%2(13); or %1,%0,%1; stb %1,%2(13)"
+		: "=r" (flags), "=&r"(zero)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "r" (enable)
+		: "memory");
+
+	return flags;
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	return soft_enabled_return();
@@ -105,7 +119,7 @@ static inline unsigned long arch_local_irq_save(void)
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags == IRQ_DISABLE_MASK_LINUX;
+	return (flags);
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -113,6 +127,59 @@ static inline bool arch_irqs_disabled(void)
 	return arch_irqs_disabled_flags(arch_local_save_flags());
 }
 
+static inline void powerpc_local_irq_pmu_restore(unsigned long flags)
+{
+	arch_local_irq_restore(flags);
+}
+
+static inline unsigned long powerpc_local_irq_pmu_disable(void)
+{
+	return soft_enabled_or_return(IRQ_DISABLE_MASK_LINUX | IRQ_DISABLE_MASK_PMU);
+}
+
+static inline unsigned long powerpc_local_irq_pmu_save(void)
+{
+	return powerpc_local_irq_pmu_disable();
+}
+
+#define raw_local_irq_pmu_save(flags)					\
+	do {								\
+		typecheck(unsigned long, flags);			\
+		flags = powerpc_local_irq_pmu_save();			\
+	} while(0)
+
+#define raw_local_irq_pmu_restore(flags)				\
+	do {								\
+		typecheck(unsigned long, flags);			\
+		powerpc_local_irq_pmu_restore(flags);			\
+	} while(0)
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+#define local_irq_pmu_save(flags)		\
+	do {					\
+		raw_local_irq_pmu_save(flags);	\
+		trace_hardirqs_off();		\
+	} while(0)
+#define local_irq_pmu_restore(flags)			\
+	do {						\
+		if (raw_irqs_disabled_flags(flags)) {   \
+			raw_local_irq_pmu_restore(flags);\
+			trace_hardirqs_off();		\
+		} else {				\
+			trace_hardirqs_on();		\
+			raw_local_irq_pmu_restore(flags);\
+		}					\
+	} while(0)
+#else
+#define local_irq_pmu_save(flags)		\
+	do {					\
+		raw_local_irq_pmu_save(flags);	\
+	} while(0)
+#define local_irq_pmu_restore(flags)			\
+	do { raw_local_irq_pmu_restore(flags); } while (0)
+#endif /* CONFIG_TRACE_IRQFLAGS */
+
+
 #ifdef CONFIG_PPC_BOOK3E
 #define __hard_irq_enable()	asm volatile("wrteei 1" : : : "memory")
 #define __hard_irq_disable()	asm volatile("wrteei 0" : : : "memory")
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 9e5e9a6d4147..ae31b1e85fdb 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -209,6 +209,10 @@ notrace void arch_local_irq_restore(unsigned long en)
 	unsigned char irq_happened;
 	unsigned int replay;
 
+#ifdef CONFIG_IRQ_DEBUG_SUPPORT
+	WARN_ON(en & local_paca->soft_enabled & ~IRQ_DISABLE_MASK_LINUX);
+#endif
+
 	/* Write the new soft-enabled value */
 	soft_enabled_set(en);
 
@@ -245,7 +249,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	 */
 	if (unlikely(irq_happened != PACA_IRQ_HARD_DIS))
 		__hard_irq_disable();
-#ifdef CONFIG_TRACE_IRQFLAGS
+#ifdef CONFIG_IRQ_DEBUG_SUPPORT
 	else {
 		/*
 		 * We should already be hard disabled here. We had bugs
@@ -256,7 +260,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 		if (WARN_ON(mfmsr() & MSR_EE))
 			__hard_irq_disable();
 	}
-#endif /* CONFIG_TRACE_IRQFLAGS */
+#endif /* CONFIG_IRQ_DEBUG_SUPPORT */
 
 	soft_enabled_set(IRQ_DISABLE_MASK_LINUX);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 13/13] powerpc: rewrite local_t using soft_irq
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (11 preceding siblings ...)
  2016-09-15 13:02 ` [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask Madhavan Srinivasan
@ 2016-09-15 13:02 ` Madhavan Srinivasan
  2016-09-15 16:55   ` kbuild test robot
  2016-09-16 11:01   ` Nicholas Piggin
  2016-09-16 11:08 ` [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Nicholas Piggin
  13 siblings, 2 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-15 13:02 UTC (permalink / raw)
  To: benh, mpe; +Cc: anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Local atomic operations are fast and highly reentrant per-CPU counters,
used for per-CPU variable updates. They only guarantee atomicity of the
variable modification with respect to the CPU which owns the data, so
they need to be executed in a preemption-safe way.

Here is the design of this patch. Since local_* operations only need to
be atomic with respect to interrupts (IIUC), we have two options: either
replay the "op" if interrupted, or replay the interrupt after the "op".
The initial patchset posted implemented local_* operations based on CR5,
which replays the "op". That patchset had issues when rewinding an
address pointer into an array, which made the slow path really slow.
Since the CR5-based implementation proposed using __ex_table to find the
rewind address, this raised concerns about the size of __ex_table and
vmlinux.

https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-December/123115.html

This patch instead uses local_irq_pmu_save() to soft-disable interrupts
(including PMIs). After finishing the "op", local_irq_pmu_restore() is
called, and any interrupts that occurred in the meantime are replayed.

The patch rewrites the current local_* functions to use this
local_irq_pmu_save()/restore() pairing. The base flow for each function
is:

{
	local_irq_pmu_save(flags)
	load
	..
	store
	local_irq_pmu_restore(flags)
}

The reason for this approach is that the l[w/d]arx/st[w/d]cx.
instruction pair currently used for local_* operations is heavy on cycle
count, and those instructions do not support a local variant. To see
whether the new implementation helps, a modified version of Rusty's
local_t benchmark code was used:

https://lkml.org/lkml/2008/12/16/450

Modifications to Rusty's benchmark code:
- Executed only local_t test

Here are the values with and without the patch.

Time in ns per iteration:

Local_t         Without Patch   With Patch

_inc                       28            8
_add                       28            8
_read                       3             3
_add_return                28            7

Currently only asm/local.h has been rewritten, and the change has been
tested only on PPC64 (pseries guest) and a PPC64 host (LE).

TODO:
	- local_cmpxchg and local_xchg need modification.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/local.h | 94 ++++++++++++++++++++++++++++------------
 1 file changed, 66 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
index b8da91363864..fb5728abb4e9 100644
--- a/arch/powerpc/include/asm/local.h
+++ b/arch/powerpc/include/asm/local.h
@@ -3,6 +3,9 @@
 
 #include <linux/percpu.h>
 #include <linux/atomic.h>
+#include <linux/irqflags.h>
+
+#include <asm/hw_irq.h>
 
 typedef struct
 {
@@ -14,24 +17,50 @@ typedef struct
 #define local_read(l)	atomic_long_read(&(l)->a)
 #define local_set(l,i)	atomic_long_set(&(l)->a, (i))
 
-#define local_add(i,l)	atomic_long_add((i),(&(l)->a))
-#define local_sub(i,l)	atomic_long_sub((i),(&(l)->a))
-#define local_inc(l)	atomic_long_inc(&(l)->a)
-#define local_dec(l)	atomic_long_dec(&(l)->a)
+static __inline__ void local_add(long i, local_t *l)
+{
+	long t;
+	unsigned long flags;
+
+	local_irq_pmu_save(flags);
+	__asm__ __volatile__(
+	PPC_LL" %0,0(%2)\n\
+	add     %0,%1,%0\n"
+	PPC_STL" %0,0(%2)\n"
+	: "=&r" (t)
+	: "r" (i), "r" (&(l->a.counter)));
+	local_irq_pmu_restore(flags);
+}
+
+static __inline__ void local_sub(long i, local_t *l)
+{
+	long t;
+	unsigned long flags;
+
+	local_irq_pmu_save(flags);
+	__asm__ __volatile__(
+	PPC_LL" %0,0(%2)\n\
+	subf    %0,%1,%0\n"
+	PPC_STL" %0,0(%2)\n"
+	: "=&r" (t)
+	: "r" (i), "r" (&(l->a.counter)));
+	local_irq_pmu_restore(flags);
+}
 
 static __inline__ long local_add_return(long a, local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	local_irq_pmu_save(flags);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%2,0) "			# local_add_return\n\
+	PPC_LL" %0,0(%2)\n\
 	add	%0,%1,%0\n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%2 \n\
-	bne-	1b"
+	PPC_STL	"%0,0(%2)\n"
 	: "=&r" (t)
 	: "r" (a), "r" (&(l->a.counter))
 	: "cc", "memory");
+	local_irq_pmu_restore(flags);
 
 	return t;
 }
@@ -41,16 +70,18 @@ static __inline__ long local_add_return(long a, local_t *l)
 static __inline__ long local_sub_return(long a, local_t *l)
 {
 	long t;
+	unsigned long flags;
+
+	local_irq_pmu_save(flags);
 
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%2,0) "			# local_sub_return\n\
+"1:"	PPC_LL" %0,0(%2)\n\
 	subf	%0,%1,%0\n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%2 \n\
-	bne-	1b"
+	PPC_STL	"%0,0(%2)\n"
 	: "=&r" (t)
 	: "r" (a), "r" (&(l->a.counter))
 	: "cc", "memory");
+	local_irq_pmu_restore(flags);
 
 	return t;
 }
@@ -58,16 +89,17 @@ static __inline__ long local_sub_return(long a, local_t *l)
 static __inline__ long local_inc_return(local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	local_irq_pmu_save(flags);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_inc_return\n\
+"1:"	PPC_LL" %0,0(%1)\n\
 	addic	%0,%0,1\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1 \n\
-	bne-	1b"
+	PPC_STL "%0,0(%1)\n"
 	: "=&r" (t)
 	: "r" (&(l->a.counter))
 	: "cc", "xer", "memory");
+	local_irq_pmu_restore(flags);
 
 	return t;
 }
@@ -85,20 +117,24 @@ static __inline__ long local_inc_return(local_t *l)
 static __inline__ long local_dec_return(local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	local_irq_pmu_save(flags);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_dec_return\n\
+	PPC_LL" %0,0(%1)\n\
 	addic	%0,%0,-1\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1\n\
-	bne-	1b"
+	PPC_STL "%0,0(%1)\n"
 	: "=&r" (t)
 	: "r" (&(l->a.counter))
 	: "cc", "xer", "memory");
+	local_irq_pmu_restore(flags);
 
 	return t;
 }
 
+#define local_inc(l)	local_inc_return(l)
+#define local_dec(l)	local_dec_return(l)
+
 #define local_cmpxchg(l, o, n) \
 	(cmpxchg_local(&((l)->a.counter), (o), (n)))
 #define local_xchg(l, n) (xchg_local(&((l)->a.counter), (n)))
@@ -115,20 +151,21 @@ static __inline__ long local_dec_return(local_t *l)
 static __inline__ int local_add_unless(local_t *l, long a, long u)
 {
 	long t;
+	unsigned long flags;
 
+	local_irq_pmu_save(flags);
 	__asm__ __volatile__ (
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_add_unless\n\
+	PPC_LL" %0,0(%1)\n\
 	cmpw	0,%0,%3 \n\
 	beq-	2f \n\
 	add	%0,%2,%0 \n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%1 \n\
-	bne-	1b \n"
+	PPC_STL" %0,0(%1) \n"
 "	subf	%0,%2,%0 \n\
 2:"
 	: "=&r" (t)
 	: "r" (&(l->a.counter)), "r" (a), "r" (u)
 	: "cc", "memory");
+	local_irq_pmu_restore(flags);
 
 	return t != u;
 }
@@ -145,19 +182,20 @@ static __inline__ int local_add_unless(local_t *l, long a, long u)
 static __inline__ long local_dec_if_positive(local_t *l)
 {
 	long t;
+	unsigned long flags;
 
+	local_irq_pmu_save(flags);
 	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_dec_if_positive\n\
+	PPC_LL" %0,0(%1)\n\
 	cmpwi	%0,1\n\
 	addi	%0,%0,-1\n\
 	blt-	2f\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1\n\
-	bne-	1b"
+	PPC_STL "%0,0(%1)\n"
 	"\n\
 2:"	: "=&b" (t)
 	: "r" (&(l->a.counter))
 	: "cc", "memory");
+	local_irq_pmu_restore(flags);
 
 	return t;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH 13/13] powerpc: rewrite local_t using soft_irq
  2016-09-15 13:02 ` [PATCH 13/13] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
@ 2016-09-15 16:55   ` kbuild test robot
  2016-09-16 11:01   ` Nicholas Piggin
  1 sibling, 0 replies; 44+ messages in thread
From: kbuild test robot @ 2016-09-15 16:55 UTC (permalink / raw)
  To: Madhavan Srinivasan
  Cc: kbuild-all, benh, mpe, linuxppc-dev, Madhavan Srinivasan, paulus,
	anton, npiggin

[-- Attachment #1: Type: text/plain, Size: 2446 bytes --]

Hi Madhavan,

[auto build test ERROR on v4.8-rc6]
[cannot apply to powerpc/next kvm-ppc/kvm-ppc-next mpe/next next-20160915]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:    https://github.com/0day-ci/linux/commits/Madhavan-Srinivasan/powerpc-paca-soft_enabled-based-local-atomic-operation-implementation/20160915-215652
config: powerpc-allnoconfig (attached as .config)
compiler: powerpc-linux-gnu-gcc (Debian 5.4.0-6) 5.4.0 20160609
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   In file included from include/linux/perf_event.h:57:0,
                    from include/linux/trace_events.h:9,
                    from include/trace/syscall.h:6,
                    from include/linux/syscalls.h:81,
                    from init/main.c:18:
   arch/powerpc/include/asm/local.h: In function 'local_add':
>> arch/powerpc/include/asm/local.h:25:2: error: implicit declaration of function 'local_irq_pmu_save' [-Werror=implicit-function-declaration]
     local_irq_pmu_save(flags);
     ^
>> arch/powerpc/include/asm/local.h:32:2: error: implicit declaration of function 'local_irq_pmu_restore' [-Werror=implicit-function-declaration]
     local_irq_pmu_restore(flags);
     ^
   cc1: some warnings being treated as errors

vim +/local_irq_pmu_save +25 arch/powerpc/include/asm/local.h

    19	
    20	static __inline__ void local_add(long i, local_t *l)
    21	{
    22		long t;
    23		unsigned long flags;
    24	
  > 25		local_irq_pmu_save(flags);
    26		__asm__ __volatile__(
    27		PPC_LL" %0,0(%2)\n\
    28		add     %0,%1,%0\n"
    29		PPC_STL" %0,0(%2)\n"
    30		: "=&r" (t)
    31		: "r" (i), "r" (&(l->a.counter)));
  > 32		local_irq_pmu_restore(flags);
    33	}
    34	
    35	static __inline__ void local_sub(long i, local_t *l)

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 5933 bytes --]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update
  2016-09-15 13:01 ` [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update Madhavan Srinivasan
@ 2016-09-16  9:47   ` Nicholas Piggin
  2016-09-19  4:04     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16  9:47 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:52 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Replace the hardcoded values used when updating
> paca->soft_enabled with the IRQ_DISABLE_MASK_* #defines.
> No logic change.

This could be folded with patch 1.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 03/13] powerpc: move set_soft_enabled() and rename
  2016-09-15 13:01 ` [PATCH 03/13] powerpc: move set_soft_enabled() and rename Madhavan Srinivasan
@ 2016-09-16  9:50   ` Nicholas Piggin
  2016-09-19  4:05     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16  9:50 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:53 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Move set_soft_enabled() from powerpc/kernel/irq.c to
> asm/hw_irq.h, and rename it soft_enabled_set().
> This way paca->soft_enabled updates can be forced.

Could you just tidy up the changelog a little?

You are renaming it I assume because you are going to introduce more
soft_enabled_x() functions, and that the namespace works better as a
prefix than a postfix.

You are moving it so all paca->soft_enabled updates can be done via
these access functions rather than open coded.

Did I get that right?

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-15 13:01 ` [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled Madhavan Srinivasan
@ 2016-09-16  9:53   ` Nicholas Piggin
  2016-09-16 11:43     ` David Laight
  2016-09-19  4:11     ` Madhavan Srinivasan
  0 siblings, 2 replies; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16  9:53 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:54 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
> wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
> to force the paca->soft_enabled updates through it.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/hw_irq.h  | 14 ++++++++++++++
>  arch/powerpc/include/asm/kvm_ppc.h |  2 +-
>  arch/powerpc/kernel/irq.c          |  2 +-
>  arch/powerpc/kernel/setup_64.c     |  4 ++--
>  arch/powerpc/kernel/time.c         |  6 +++---
>  5 files changed, 21 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index 8fad8c24760b..f828b8f8df02 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>  }
>  
> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> +{
> +	unsigned long flags;
> +
> +	asm volatile(
> +		"lbz %0,%1(13); stb %2,%1(13)"
> +		: "=r" (flags)
> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> +		  "r" (enable)
> +		: "memory");
> +
> +	return flags;
> +}

Why do you have the "memory" clobber here while soft_enabled_set() does not?

Thanks,
Nick

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 05/13] powerpc: Add soft_enabled manipulation functions
  2016-09-15 13:01 ` [PATCH 05/13] powerpc: Add soft_enabled manipulation functions Madhavan Srinivasan
@ 2016-09-16  9:57   ` Nicholas Piggin
  2016-09-19  5:41     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16  9:57 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:55 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Add new soft_enabled_* manipulation function and implement
> arch_local_* using the soft_enabled_* wrappers.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/hw_irq.h | 32 ++++++++++++++------------------
>  1 file changed, 14 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index f828b8f8df02..dc3c248f9244 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -53,21 +53,7 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>  }
>  
> -static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> -{
> -	unsigned long flags;
> -
> -	asm volatile(
> -		"lbz %0,%1(13); stb %2,%1(13)"
> -		: "=r" (flags)
> -		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> -		  "r" (enable)
> -		: "memory");
> -
> -	return flags;
> -}
> -
> -static inline unsigned long arch_local_save_flags(void)
> +static inline notrace unsigned long soft_enabled_return(void)
>  {
>  	unsigned long flags;
>  
> @@ -79,20 +65,30 @@ static inline unsigned long arch_local_save_flags(void)
>  	return flags;
>  }
>  
> -static inline unsigned long arch_local_irq_disable(void)
> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>  {
>  	unsigned long flags, zero;
>  
>  	asm volatile(
> -		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
> +		"mr %1,%3; lbz %0,%2(13); stb %1,%2(13)"
>  		: "=r" (flags), "=&r" (zero)
>  		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> -		  "i" (IRQ_DISABLE_MASK_LINUX)
> +		  "r" (enable)
>  		: "memory");
>  
>  	return flags;
>  }

As we talked about earlier, it would be nice to add builtin_constant
variants to avoid the extra instruction. If you prefer to do that
after this series, that's fine.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 06/13] powerpc: reverse the soft_enable logic
  2016-09-15 13:01 ` [PATCH 06/13] powerpc: reverse the soft_enable logic Madhavan Srinivasan
@ 2016-09-16 10:05   ` Nicholas Piggin
  2016-09-19  5:45     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:05 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:56 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> "paca->soft_enabled" is used as a flag to mask some of the interrupts.
> Currently supported flag values and their details:
> 
> soft_enabled    MSR[EE]
> 
> 0               0       Disabled (PMI and HMI not masked)
> 1               1       Enabled
> 
> "paca->soft_enabled" is initialized to 1 to mark interrupts as
> enabled. arch_local_irq_disable() will toggle the value when interrupts
> need to be disabled. At this point, the interrupts are not actually disabled;
> instead, the interrupt vector has code to check for the flag and mask the
> interrupt when it occurs. By "mask it", it updates paca->irq_happened and returns.
> arch_local_irq_restore() is called to re-enable interrupts, which checks and
> replays interrupts if any occurred.
> 
> Now, as mentioned, the current logic does not mask "performance monitoring
> interrupts", and PMIs are implemented as NMIs. But this patchset depends on
> local_irq_* for a successful local_* update. Meaning, mask all possible
> interrupts during the local_* update and replay them after the update.
> 
> So the idea here is to reverse the "paca->soft_enabled" logic. New values and
> details:
> 
> soft_enabled    MSR[EE]
> 
> 1               0       Disabled  (PMI and HMI not masked)
> 0               1       Enabled
> 
> The reason for this change is to create the foundation for a third
> "soft_enabled" flag value, "2", to add support for masking PMIs. When
> ->soft_enabled is set to a value of "2", PMI interrupts are masked; when set
> to a value of "1", PMIs are not masked.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/hw_irq.h | 4 ++--
>  arch/powerpc/kernel/entry_64.S    | 5 ++---
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index dc3c248f9244..fd9b421f9020 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -30,8 +30,8 @@
>  /*
>   * flags for paca->soft_enabled
>   */
> -#define IRQ_DISABLE_MASK_NONE		1
> -#define IRQ_DISABLE_MASK_LINUX		0
> +#define IRQ_DISABLE_MASK_NONE		0
> +#define IRQ_DISABLE_MASK_LINUX		1
>  
>  #endif /* CONFIG_PPC64 */
>  
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index aef7b64cbbeb..879aeb11ad29 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -131,8 +131,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
>  	 */
>  #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
>  	lbz	r10,PACASOFTIRQEN(r13)
> -	xori	r10,r10,IRQ_DISABLE_MASK_NONE
> -1:	tdnei	r10,0
> +1:	tdnei	r10,IRQ_DISABLE_MASK_NONE
>  	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
>  #endif
>  
> @@ -1012,7 +1011,7 @@ _GLOBAL(enter_rtas)
>  	 * check it with the asm equivalent of WARN_ON
>  	 */
>  	lbz	r0,PACASOFTIRQEN(r13)
> -1:	tdnei	r0,IRQ_DISABLE_MASK_LINUX
> +1:	tdeqi	r0,IRQ_DISABLE_MASK_NONE
>  	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
>  #endif
>  	

We specifically want to ensure that _LINUX interrupts are disabled
here. Not that we allow masking of others without _LINUX now, but
current behavior is checking that LINUX ones are masked.

Otherwise it seems okay.

It might be nice after this series to do a pass and rename
soft_enabled to soft_masked.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 07/13] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  2016-09-15 13:01 ` [PATCH 07/13] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
@ 2016-09-16 10:08   ` Nicholas Piggin
  0 siblings, 0 replies; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:08 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:57 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Currently we use both EXCEPTION_PROLOG_1 and __EXCEPTION_PROLOG_1
> in the MASKABLE_* macros. As a cleanup, this patch makes MASKABLE_*
> to use only __EXCEPTION_PROLOG_1. There is no logic change.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>

I assume this was done like this once for some macro expansion issue?
If it doesn't cause any breakage, then fine.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro
  2016-09-15 13:01 ` [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
@ 2016-09-16 10:12   ` Nicholas Piggin
  2016-09-19  5:54     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:12 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:58 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> To support addition of "bitmask" to MASKABLE_* macros,
> factor out the EXCEPTION_PROLOG_1 macro.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>

Really minor nit, but as a matter of readability of the series,
would you consider moving this next to patch 10 where it's used,
if you submit again?

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled
  2016-09-15 13:01 ` [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled Madhavan Srinivasan
@ 2016-09-16 10:16   ` Nicholas Piggin
  2016-09-19  5:57     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:16 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:59 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Currently soft_enabled is used as the flag to determine
> the interrupt state. Patch extends the soft_enabled
> to be used as a mask instead of a flag.

This should be the title of the patch, IMO. Introducing the new
mask bit is incidental (and could be moved to patch 11). The key
here of course is switching the tests from a flag to a mask.

Very cool that you got this all worked out without adding any
new instructions.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/exception-64s.h | 4 ++--
>  arch/powerpc/include/asm/hw_irq.h        | 1 +
>  arch/powerpc/include/asm/irqflags.h      | 4 ++--
>  arch/powerpc/kernel/entry_64.S           | 4 ++--
>  arch/powerpc/kernel/exceptions-64e.S     | 6 +++---
>  5 files changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> index dd3253bd0d8e..1eea4ab75607 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -430,9 +430,9 @@ label##_relon_hv:						\
>  
>  #define __SOFTEN_TEST(h, vec)						\
>  	lbz	r10,PACASOFTIRQEN(r13);					\
> -	cmpwi	r10,IRQ_DISABLE_MASK_LINUX;							\
> +	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;				\
>  	li	r10,SOFTEN_VALUE_##vec;					\
> -	beq	masked_##h##interrupt
> +	bne	masked_##h##interrupt
>  #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
>  
>  #define SOFTEN_TEST_PR(vec)						\
> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> index fd9b421f9020..245262c02bab 100644
> --- a/arch/powerpc/include/asm/hw_irq.h
> +++ b/arch/powerpc/include/asm/hw_irq.h
> @@ -32,6 +32,7 @@
>   */
>  #define IRQ_DISABLE_MASK_NONE		0
>  #define IRQ_DISABLE_MASK_LINUX		1
> +#define IRQ_DISABLE_MASK_PMU		2
>  
>  #endif /* CONFIG_PPC64 */
>  
> diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
> index d0ed2a7d7d10..9ff09747a226 100644
> --- a/arch/powerpc/include/asm/irqflags.h
> +++ b/arch/powerpc/include/asm/irqflags.h
> @@ -48,11 +48,11 @@
>  #define RECONCILE_IRQ_STATE(__rA, __rB)		\
>  	lbz	__rA,PACASOFTIRQEN(r13);	\
>  	lbz	__rB,PACAIRQHAPPENED(r13);	\
> -	cmpwi	cr0,__rA,IRQ_DISABLE_MASK_LINUX;\
> +	andi.	__rA,__rA,IRQ_DISABLE_MASK_LINUX;\
>  	li	__rA,IRQ_DISABLE_MASK_LINUX;	\
>  	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
>  	stb	__rB,PACAIRQHAPPENED(r13);	\
> -	beq	44f;				\
> +	bne	44f;				\
>  	stb	__rA,PACASOFTIRQEN(r13);	\
>  	TRACE_DISABLE_INTS;			\
>  44:
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 879aeb11ad29..533e363914a9 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -764,8 +764,8 @@ restore:
>  	 */
>  	ld	r5,SOFTE(r1)
>  	lbz	r6,PACASOFTIRQEN(r13)
> -	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
> -	beq	restore_irq_off
> +	andi.	r5,r5,IRQ_DISABLE_MASK_LINUX
> +	bne	restore_irq_off
>  
>  	/* We are enabling, were we already enabled ? Yes, just return */
>  	cmpwi	cr0,r6,IRQ_DISABLE_MASK_NONE
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index 5c628b5696f6..8e40df2c2f30 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -212,8 +212,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>  	/* Interrupts had better not already be enabled... */
>  	twnei	r6,IRQ_DISABLE_MASK_LINUX
>  
> -	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
> -	beq	1f
> +	andi.	r5,r5,IRQ_DISABLE_MASK_LINUX
> +	bne	1f
>  
>  	TRACE_ENABLE_INTS
>  	stb	r5,PACASOFTIRQEN(r13)
> @@ -352,7 +352,7 @@ ret_from_mc_except:
>  
>  #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
>  	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
> -	cmpwi	cr0,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
> +	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
>  	beq	masked_interrupt_book3e_##n
>  
>  #define PROLOG_ADDITION_2REGS_GEN(n)					    \


* Re: [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask
  2016-09-15 13:02 ` [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask Madhavan Srinivasan
@ 2016-09-16 10:34   ` Nicholas Piggin
  2016-09-16 10:56   ` Nicholas Piggin
  1 sibling, 0 replies; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:34 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:32:02 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> A new Kconfig option, "CONFIG_IRQ_DEBUG_SUPPORT", is added to guard
> WARN_ON checks that flag invalid transitions. The code under
> CONFIG_TRACE_IRQFLAGS in arch_local_irq_restore() has also been moved
> under the new Kconfig option, as suggested.

I can't tempt you to put the Kconfig changes into their own patch? :)


> To support disabling and enabling of IRQs together with PMIs, new
> powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore()
> functions are added. powerpc_local_irq_pmu_save() is implemented
> via a new soft_enabled manipulation function, soft_enabled_or_return().
> local_irq_pmu_* macros are provided to access these powerpc_local_irq_pmu_*
> functions; they include trace_hardirqs_on|off() to match what we
> have in include/linux/irqflags.h.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>

> @@ -81,6 +81,20 @@ static inline notrace unsigned long soft_enabled_set_return(unsigned long enable
>  	return flags;
>  }
>  
> +static inline notrace unsigned long soft_enabled_or_return(unsigned long enable)
> +{
> +	unsigned long flags, zero;
> +
> +	asm volatile(
> +		"mr %1,%3; lbz %0,%2(13); or %1,%0,%1; stb %1,%2(13)"
> +		: "=r" (flags), "=&r"(zero)
> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> +		  "r" (enable)
> +		: "memory");
> +
> +	return flags;
> +}

Another candidate for builtin_constification using immediates. And do you
actually need the initial mr instruction there?
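For what it's worth, one possible answer to the `mr` question — an untested sketch whose constraint details (the early-clobbers in particular) would need checking — is to let the compiler allocate a scratch register and OR into it directly, instead of copying `enable` first:

```c
/* Untested sketch mirroring the quoted function: drop the initial mr
 * by OR-ing the freshly loaded byte with "enable" into a scratch reg. */
static inline notrace unsigned long soft_enabled_or_return(unsigned long enable)
{
	unsigned long flags, tmp;

	asm volatile(
		"lbz %0,%2(13); or %1,%0,%3; stb %1,%2(13)"
		: "=&r" (flags), "=&r" (tmp)
		: "i" (offsetof(struct paca_struct, soft_enabled)),
		  "r" (enable)
		: "memory");

	return flags;
}
```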


> @@ -105,7 +119,7 @@ static inline unsigned long arch_local_irq_save(void)
>  
>  static inline bool arch_irqs_disabled_flags(unsigned long flags)
>  {
> -	return flags == IRQ_DISABLE_MASK_LINUX;
> +	return (flags);
>  }
>  
>  static inline bool arch_irqs_disabled(void)

This part logically belongs in patch 11, but it also needs to be changed,
I think? Keep in mind it's the generic kernel asking whether it has "Linux
interrupts" disabled.

    return flags & IRQ_DISABLE_MASK_LINUX;

> @@ -113,6 +127,59 @@ static inline bool arch_irqs_disabled(void)
>  	return arch_irqs_disabled_flags(arch_local_save_flags());
>  }
>  
> +static inline void powerpc_local_irq_pmu_restore(unsigned long flags)
> +{
> +	arch_local_irq_restore(flags);
> +}
> +
> +static inline unsigned long powerpc_local_irq_pmu_disable(void)
> +{
> +	return soft_enabled_or_return(IRQ_DISABLE_MASK_LINUX | IRQ_DISABLE_MASK_PMU);
> +}
> +
> +static inline unsigned long powerpc_local_irq_pmu_save(void)
> +{
> +	return powerpc_local_irq_pmu_disable();
> +}
> +
> +#define raw_local_irq_pmu_save(flags)					\
> +	do {								\
> +		typecheck(unsigned long, flags);			\
> +		flags = powerpc_local_irq_pmu_save();			\
> +	} while(0)
> +
> +#define raw_local_irq_pmu_restore(flags)				\
> +	do {								\
> +		typecheck(unsigned long, flags);			\
> +		powerpc_local_irq_pmu_restore(flags);			\
> +	} while(0)
> +
> +#ifdef CONFIG_TRACE_IRQFLAGS
> +#define local_irq_pmu_save(flags)		\
> +	do {					\
> +		raw_local_irq_pmu_save(flags);	\
> +		trace_hardirqs_off();		\
> +	} while(0)
> +#define local_irq_pmu_restore(flags)			\
> +	do {						\
> +		if (raw_irqs_disabled_flags(flags)) {   \
> +			raw_local_irq_pmu_restore(flags);\
> +			trace_hardirqs_off();		\
> +		} else {				\
> +			trace_hardirqs_on();		\
> +			raw_local_irq_pmu_restore(flags);\
> +		}					\
> +	} while(0)
> +#else
> +#define local_irq_pmu_save(flags)		\
> +	do {					\
> +		raw_local_irq_pmu_save(flags);	\
> +	} while(0)
> +#define local_irq_pmu_restore(flags)			\
> +	do { raw_local_irq_pmu_restore(flags); } while (0)
> +#endif /* CONFIG_TRACE_IRQFLAGS */
> +
> +

This looks pretty good. When I suggested powerpc_ prefix, I intended in these
functions here, so it wouldn't match with the local_irq_ namespace of generic
kernel. But that was just an idea. If you prefer to do it this way, could you
just drop the powerpc_ wrappers entirely?

A comment above that says it comes from the generic Linux local_irq code
might be an idea too.

Provided at least the arch_irqs_disabled_flags comment gets addressed:

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>


* Re: [PATCH 11/13] powerpc: Add support to mask perf interrupts and replay them
  2016-09-15 13:02 ` [PATCH 11/13] powerpc: Add support to mask perf interrupts and replay them Madhavan Srinivasan
@ 2016-09-16 10:47   ` Nicholas Piggin
  0 siblings, 0 replies; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:47 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:32:01 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> To support masking of PMI interrupts, a couple of new interrupt handler
> macros are added: MASKABLE_EXCEPTION_PSERIES_OOL and
> MASKABLE_RELON_EXCEPTION_PSERIES_OOL.
> 
> A couple of new IRQ #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*", are
> added for use in the exception code to check for PMI interrupts.
> 
> In the masked_interrupt handler, for PMIs we clear MSR[EE]
> and return. In __check_irq_replay(), the PMI interrupt is replayed
> by calling the performance_monitor_common handler.


Reviewed-by: Nicholas Piggin <npiggin@gmail.com>


* Re: [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask
  2016-09-15 13:02 ` [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask Madhavan Srinivasan
  2016-09-16 10:34   ` Nicholas Piggin
@ 2016-09-16 10:56   ` Nicholas Piggin
  2016-09-19  6:03     ` Madhavan Srinivasan
  1 sibling, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 10:56 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:32:02 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:


> diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
> index 9e5e9a6d4147..ae31b1e85fdb 100644
> --- a/arch/powerpc/kernel/irq.c
> +++ b/arch/powerpc/kernel/irq.c
> @@ -209,6 +209,10 @@ notrace void arch_local_irq_restore(unsigned long en)
>  	unsigned char irq_happened;
>  	unsigned int replay;
>  
> +#ifdef CONFIG_IRQ_DEBUG_SUPPORT
> +	WARN_ON(en & local_paca->soft_enabled & ~IRQ_DISABLE_MASK_LINUX);
> +#endif
> +
>  	/* Write the new soft-enabled value */
>  	soft_enabled_set(en);
>  

Oh one other quick thing I just noticed: you could put this debug
check into your soft_enabled accessors.

We did decide it's okay for your masking level to go both ways,
didn't we? I.e.,

local_irq_disable();
local_irq_pmu_save(flags);
local_irq_pmu_restore(flags);
local_irq_enable();

-> LINUX -> LINUX|PMU -> LINUX ->

This means PMU interrupts would not get replayed despite being
enabled here. In practice I think that doesn't matter/can't happen
because a PMU interrupt while masked would hard disable anyway. A
comment explaining it might be nice though.

Thanks,
Nick


* Re: [PATCH 13/13] powerpc: rewrite local_t using soft_irq
  2016-09-15 13:02 ` [PATCH 13/13] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
  2016-09-15 16:55   ` kbuild test robot
@ 2016-09-16 11:01   ` Nicholas Piggin
  1 sibling, 0 replies; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 11:01 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:32:03 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Local atomic operations are fast and highly reentrant per-CPU counters,
> used for percpu variable updates. Local atomic operations only guarantee
> variable modification atomicity wrt the CPU which owns the data, and
> they need to be executed in a preemption-safe way.
> 
> Here is the design of this patch. Since local_* operations
> only need to be atomic with respect to interrupts (IIUC), we have two
> options: either replay the "op" if interrupted, or replay the interrupt
> after the "op". The initial patchset posted implemented local_* operations
> based on CR5, which replays the "op". That patchset had issues when
> rewinding the address pointer from an array, which made the slow path
> really slow. Since the CR5-based implementation proposed using __ex_table
> to find the rewind address, this raised concerns about the size of
> __ex_table and vmlinux.
> 
> https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-December/123115.html
> 
> But this patch uses local_irq_pmu_save() to soft-disable
> interrupts (including PMIs). After finishing the "op", local_irq_pmu_restore()
> is called, and any interrupts that occurred are replayed.
> 
> The patch rewrites the current local_* functions to use arch_local_irq_disable().
> The base flow for each function is:
> 
> {
> 	local_irq_pmu_save(flags)
> 	load
> 	..
> 	store
> 	local_irq_pmu_restore(flags)
> }
> 
> The reason for this approach is that the l[w/d]arx/st[w/d]cx.
> instruction pair currently used for local_* operations is heavy
> on cycle count, and it does not have a local variant. So to
> see whether the new implementation helps, a modified
> version of Rusty's benchmark code for local_t was used.
> 
> https://lkml.org/lkml/2008/12/16/450
> 
> Modifications to Rusty's benchmark code:
> - Executed only local_t test
> 
> Here are the values with the patch.
> 
> Time in ns per iteration
> 
> Local_t             Without Patch           With Patch
> 
> _inc                        28              8
> _add                        28              8
> _read                       3               3
> _add_return                 28              7
> 
> Currently only asm/local.h has been rewritten, and the
> entire change has been tested only on PPC64 (pseries guest)
> and a PPC64 host (LE).
> 
> TODO:
> 	- local_cmpxchg and local_xchg needs modification.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/local.h | 94 ++++++++++++++++++++++++++++------------
>  1 file changed, 66 insertions(+), 28 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
> index b8da91363864..fb5728abb4e9 100644
> --- a/arch/powerpc/include/asm/local.h
> +++ b/arch/powerpc/include/asm/local.h
> @@ -3,6 +3,9 @@
>  
>  #include <linux/percpu.h>
>  #include <linux/atomic.h>
> +#include <linux/irqflags.h>
> +
> +#include <asm/hw_irq.h>
>  
>  typedef struct
>  {
> @@ -14,24 +17,50 @@ typedef struct
>  #define local_read(l)	atomic_long_read(&(l)->a)
>  #define local_set(l,i)	atomic_long_set(&(l)->a, (i))
>  
> -#define local_add(i,l)	atomic_long_add((i),(&(l)->a))
> -#define local_sub(i,l)	atomic_long_sub((i),(&(l)->a))
> -#define local_inc(l)	atomic_long_inc(&(l)->a)
> -#define local_dec(l)	atomic_long_dec(&(l)->a)
> +static __inline__ void local_add(long i, local_t *l)
> +{
> +	long t;
> +	unsigned long flags;
> +
> +	local_irq_pmu_save(flags);
> +	__asm__ __volatile__(
> +	PPC_LL" %0,0(%2)\n\
> +	add     %0,%1,%0\n"
> +	PPC_STL" %0,0(%2)\n"
> +	: "=&r" (t)
> +	: "r" (i), "r" (&(l->a.counter)));
> +	local_irq_pmu_restore(flags);
> +}
> +
> +static __inline__ void local_sub(long i, local_t *l)
> +{
> +	long t;
> +	unsigned long flags;
> +
> +	local_irq_pmu_save(flags);
> +	__asm__ __volatile__(
> +	PPC_LL" %0,0(%2)\n\
> +	subf    %0,%1,%0\n"
> +	PPC_STL" %0,0(%2)\n"
> +	: "=&r" (t)
> +	: "r" (i), "r" (&(l->a.counter)));
> +	local_irq_pmu_restore(flags);
> +}
>  
>  static __inline__ long local_add_return(long a, local_t *l)
>  {
>  	long t;
> +	unsigned long flags;
>  
> +	local_irq_pmu_save(flags);
>  	__asm__ __volatile__(
> -"1:"	PPC_LLARX(%0,0,%2,0) "			# local_add_return\n\
> +	PPC_LL" %0,0(%2)\n\
>  	add	%0,%1,%0\n"
> -	PPC405_ERR77(0,%2)
> -	PPC_STLCX	"%0,0,%2 \n\
> -	bne-	1b"
> +	PPC_STL	"%0,0(%2)\n"
>  	: "=&r" (t)
>  	: "r" (a), "r" (&(l->a.counter))
>  	: "cc", "memory");
> +	local_irq_pmu_restore(flags);

Are all your clobbers correct? You might not be clobbering "cc" here
anymore, for example. Could you double check those? Otherwise, awesome
patch!

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>


* Re: [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros
  2016-09-15 13:02 ` [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros Madhavan Srinivasan
@ 2016-09-16 11:03   ` Nicholas Piggin
  2016-09-19  5:58     ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 11:03 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:32:00 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Make explicit the interrupt masking supported
> by a given interrupt handler. The patch correspondingly
> extends the MASKABLE_* macros with an additional parameter.
> The "bitmask" parameter is passed to the SOFTEN_TEST macro to decide
> on masking the interrupt.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/exception-64s.h | 62 ++++++++++++++++----------------
>  arch/powerpc/kernel/exceptions-64s.S     | 36 ++++++++++++-------
>  2 files changed, 54 insertions(+), 44 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
> index 1eea4ab75607..41be0c2d7658 100644
> --- a/arch/powerpc/include/asm/exception-64s.h
> +++ b/arch/powerpc/include/asm/exception-64s.h
> @@ -179,9 +179,9 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>   * checking of the interrupt maskable level in the SOFTEN_TEST.
>   * Intended to be used in MASKABLE_EXCPETION_* macros.
>   */
> -#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
> +#define __EXCEPTION_PROLOG_1(area, extra, vec, bitmask)			\
>  	__EXCEPTION_PROLOG_1_PRE(area);					\
> -	extra(vec);							\
> +	extra(vec, bitmask);						\
>  	__EXCEPTION_PROLOG_1_POST(area);
>  
>  /*

Is __EXCEPTION_PROLOG_1 now for maskable exceptions, and EXCEPTION_PROLOG_1
for unmaskable? Does it make sense to rename __EXCEPTION_PROLOG_1 to
MASKABLE_EXCEPTION_PROLOG_1? Reducing the mystery underscores in this file would
be nice!

This worked out nicely with mask bit being passed in by the exception handlers.
Very neat.

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>


* Re: [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation
  2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (12 preceding siblings ...)
  2016-09-15 13:02 ` [PATCH 13/13] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
@ 2016-09-16 11:08 ` Nicholas Piggin
  13 siblings, 0 replies; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 11:08 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: benh, mpe, anton, paulus, linuxppc-dev

On Thu, 15 Sep 2016 18:31:50 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Local atomic operations are fast and highly reentrant per CPU counters.
> Used for percpu variable updates. Local atomic operations only guarantee
> variable modification atomicity wrt the CPU which owns the data and
> these needs to be executed in a preemption safe way.

Great patches, I hope the review helped. Other than my minor comments,
I think the patches look good, so it's a good time for other reviewers
to take a look. Would be good to get an ack from Ben, Anton, or Paul too
at some point.

These will clash spectacularly with my exception vector rework
unfortunately, but that's how it goes. I can help with rebasing if
mine ends up going in first.

Thanks,
Nick


* RE: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-16  9:53   ` Nicholas Piggin
@ 2016-09-16 11:43     ` David Laight
  2016-09-16 11:59       ` Nicholas Piggin
  2016-09-19  5:05       ` Madhavan Srinivasan
  2016-09-19  4:11     ` Madhavan Srinivasan
  1 sibling, 2 replies; 44+ messages in thread
From: David Laight @ 2016-09-16 11:43 UTC (permalink / raw)
  To: 'Nicholas Piggin', Madhavan Srinivasan
  Cc: linuxppc-dev, paulus, anton

From: Nicholas Piggin
> Sent: 16 September 2016 10:53
> On Thu, 15 Sep 2016 18:31:54 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> 
> > Force use of soft_enabled_set() wrapper to update paca->soft_enabled
> > wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
> > added to force the paca->soft_enabled updates.
...
> > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > index 8fad8c24760b..f828b8f8df02 100644
> > --- a/arch/powerpc/include/asm/hw_irq.h
> > +++ b/arch/powerpc/include/asm/hw_irq.h
> > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> >  }
> >
> > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > +{
> > +	unsigned long flags;
> > +
> > +	asm volatile(
> > +		"lbz %0,%1(13); stb %2,%1(13)"
> > +		: "=r" (flags)
> > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > +		  "r" (enable)
> > +		: "memory");
> > +
> > +	return flags;
> > +}
> 
> Why do you have the "memory" clobber here while soft_enabled_set() does not?

I wondered about the missing memory clobber earlier.

Any 'clobber' ought to be restricted to the referenced memory area.
If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
OTOH why not allocate a global register variable to r13 and access through that?

	David


* Re: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-16 11:43     ` David Laight
@ 2016-09-16 11:59       ` Nicholas Piggin
  2016-09-16 13:22         ` David Laight
  2016-09-19  5:05       ` Madhavan Srinivasan
  1 sibling, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-16 11:59 UTC (permalink / raw)
  To: David Laight; +Cc: Madhavan Srinivasan, linuxppc-dev, paulus, anton

On Fri, 16 Sep 2016 11:43:13 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> From: Nicholas Piggin
> > Sent: 16 September 2016 10:53
> > On Thu, 15 Sep 2016 18:31:54 +0530
> > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> >   
> > > Force use of soft_enabled_set() wrapper to update paca->soft_enabled
> > > wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
> > > added to force the paca->soft_enabled updates.  
> ...
> > > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > > index 8fad8c24760b..f828b8f8df02 100644
> > > --- a/arch/powerpc/include/asm/hw_irq.h
> > > +++ b/arch/powerpc/include/asm/hw_irq.h
> > > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> > >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> > >  }
> > >
> > > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > > +{
> > > +	unsigned long flags;
> > > +
> > > +	asm volatile(
> > > +		"lbz %0,%1(13); stb %2,%1(13)"
> > > +		: "=r" (flags)
> > > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > > +		  "r" (enable)
> > > +		: "memory");
> > > +
> > > +	return flags;
> > > +}  
> > 
> > Why do you have the "memory" clobber here while soft_enabled_set() does not?  
> 
> I wondered about the missing memory clobber earlier.
> 
> Any 'clobber' ought to be restricted to the referenced memory area.
> If the structure is only referenced by r13 through 'asm volatile' it isn't needed.

Well a clobber (compiler barrier) at some point is needed in irq_disable and
irq_enable paths, so we correctly open and close the critical section vs interrupts.
I just wonder about these helpers. It might be better to take the clobbers out of
there and add barrier(); in callers, which would make it more obvious.


* RE: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-16 11:59       ` Nicholas Piggin
@ 2016-09-16 13:22         ` David Laight
  2016-09-19  2:52           ` Nicholas Piggin
  0 siblings, 1 reply; 44+ messages in thread
From: David Laight @ 2016-09-16 13:22 UTC (permalink / raw)
  To: 'Nicholas Piggin'
  Cc: Madhavan Srinivasan, linuxppc-dev, paulus, anton

From: Nicholas Piggin
> Sent: 16 September 2016 12:59
> On Fri, 16 Sep 2016 11:43:13 +0000
> David Laight <David.Laight@ACULAB.COM> wrote:
> 
> > From: Nicholas Piggin
> > > Sent: 16 September 2016 10:53
> > > On Thu, 15 Sep 2016 18:31:54 +0530
> > > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> > >
> > > > Force use of soft_enabled_set() wrapper to update paca->soft_enabled
> > > > wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
> > > > added to force the paca->soft_enabled updates.
> > ...
> > > > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > > > index 8fad8c24760b..f828b8f8df02 100644
> > > > --- a/arch/powerpc/include/asm/hw_irq.h
> > > > +++ b/arch/powerpc/include/asm/hw_irq.h
> > > > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> > > >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> > > >  }
> > > >
> > > > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > > > +{
> > > > +	unsigned long flags;
> > > > +
> > > > +	asm volatile(
> > > > +		"lbz %0,%1(13); stb %2,%1(13)"
> > > > +		: "=r" (flags)
> > > > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > > > +		  "r" (enable)
> > > > +		: "memory");
> > > > +
> > > > +	return flags;
> > > > +}
> > >
> > > Why do you have the "memory" clobber here while soft_enabled_set() does not?
> >
> > I wondered about the missing memory clobber earlier.
> >
> > Any 'clobber' ought to be restricted to the referenced memory area.
> > If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
> 
> Well a clobber (compiler barrier) at some point is needed in irq_disable and
> irq_enable paths, so we correctly open and close the critical section vs interrupts.
> I just wonder about these helpers. It might be better to take the clobbers out of
> there and add barrier(); in callers, which would make it more obvious.

If the memory clobber is needed to synchronise with the rest of the code
rather than just ensuring the compiler doesn't reorder accesses via r13
then I'd add an explicit barrier() somewhere - even if in these helpers.

Potentially the helper wants a memory clobber for the (r13) area
and a separate barrier() to ensure the interrupts are masked for the
right code.
Even if both are together in the same helper.

	David


* Re: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-16 13:22         ` David Laight
@ 2016-09-19  2:52           ` Nicholas Piggin
  2016-09-19  5:32             ` Madhavan Srinivasan
  0 siblings, 1 reply; 44+ messages in thread
From: Nicholas Piggin @ 2016-09-19  2:52 UTC (permalink / raw)
  To: David Laight; +Cc: Madhavan Srinivasan, linuxppc-dev, paulus, anton

On Fri, 16 Sep 2016 13:22:24 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> From: Nicholas Piggin
> > Sent: 16 September 2016 12:59
> > On Fri, 16 Sep 2016 11:43:13 +0000
> > David Laight <David.Laight@ACULAB.COM> wrote:
> >   
> > > From: Nicholas Piggin  
> > > > Sent: 16 September 2016 10:53
> > > > On Thu, 15 Sep 2016 18:31:54 +0530
> > > > Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
> > > >  
> > > > > Force use of soft_enabled_set() wrapper to update paca->soft_enabled
> > > > > wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
> > > > > added to force the paca->soft_enabled updates.  
> > > ...  
> > > > > diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
> > > > > index 8fad8c24760b..f828b8f8df02 100644
> > > > > --- a/arch/powerpc/include/asm/hw_irq.h
> > > > > +++ b/arch/powerpc/include/asm/hw_irq.h
> > > > > @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
> > > > >  	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
> > > > >  }
> > > > >
> > > > > +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
> > > > > +{
> > > > > +	unsigned long flags;
> > > > > +
> > > > > +	asm volatile(
> > > > > +		"lbz %0,%1(13); stb %2,%1(13)"
> > > > > +		: "=r" (flags)
> > > > > +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
> > > > > +		  "r" (enable)
> > > > > +		: "memory");
> > > > > +
> > > > > +	return flags;
> > > > > +}  
> > > >
> > > > Why do you have the "memory" clobber here while soft_enabled_set() does not?  
> > >
> > > I wondered about the missing memory clobber earlier.
> > >
> > > Any 'clobber' ought to be restricted to the referenced memory area.
> > > If the structure is only referenced by r13 through 'asm volatile' it isn't needed.  
> > 
> > Well a clobber (compiler barrier) at some point is needed in irq_disable and
> > irq_enable paths, so we correctly open and close the critical section vs interrupts.
> > I just wonder about these helpers. It might be better to take the clobbers out of
> > there and add barrier(); in callers, which would make it more obvious.  
> 
> If the memory clobber is needed to synchronise with the rest of the code
> rather than just ensuring the compiler doesn't reorder accesses via r13
> then I'd add an explicit barrier() somewhere - even if in these helpers.
> 
> Potentially the helper wants a memory clobber for the (r13) area
> and a separate barrier() to ensure the interrupts are masked for the
> right code.
> Even if both are together in the same helper.

Good point. Some of the existing modification helpers don't seem to have
clobbers for modifying the r13->soft_enabled memory itself, but they do
have the memory clobber where a critical section barrier is required.

The former may not be a problem if the helpers are used very carefully,
but probably should be commented at best, if not fixed. So after Maddy's
patches, we should make all accesses go via the helper functions, so a
clobber for the soft_enabled modification may not be required (this should
be commented). I think it may be cleaner to specify the location in the
constraints, but maybe that doesn't generate the best code -- something to
investigate.

Then, I'd like to see barrier()s for interrupt critical sections placed in
the callers of these helpers, which will make the code more obvious.

Thanks,
Nick


* Re: [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update
  2016-09-16  9:47   ` Nicholas Piggin
@ 2016-09-19  4:04     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  4:04 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:17 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:52 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Replace the hardcoded values used when updating
>> paca->soft_enabled with IRQ_DISABLE_MASK_* #def.
>> No logic change.
> This could be folded with patch 1.

Sure. Will do it.
>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>


* Re: [PATCH 03/13] powerpc: move set_soft_enabled() and rename
  2016-09-16  9:50   ` Nicholas Piggin
@ 2016-09-19  4:05     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  4:05 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:20 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:53 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Move set_soft_enabled() from powerpc/kernel/irq.c to
>> asm/hw_irq.h, and rename it soft_enabled_set().
>> This way paca->soft_enabled updates can be forced.
> Could you just tidy up the changelog a little?
>
> You are renaming it I assume because you are going to introduce more
> soft_enabled_x() functions, and that the namespace works better as a
> prefix than a postfix.
>
> You are moving it so all paca->soft_enabled updates can be done via
> these access functions rather than open coded.
>
> Did I get that right?

Yes. That is right. My bad. Will fix the commit message and changelog.

>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>


* Re: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-16  9:53   ` Nicholas Piggin
  2016-09-16 11:43     ` David Laight
@ 2016-09-19  4:11     ` Madhavan Srinivasan
  1 sibling, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  4:11 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:23 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:54 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
>> wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
>> to force the paca->soft_enabled updates.
>>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/hw_irq.h  | 14 ++++++++++++++
>>   arch/powerpc/include/asm/kvm_ppc.h |  2 +-
>>   arch/powerpc/kernel/irq.c          |  2 +-
>>   arch/powerpc/kernel/setup_64.c     |  4 ++--
>>   arch/powerpc/kernel/time.c         |  6 +++---
>>   5 files changed, 21 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index 8fad8c24760b..f828b8f8df02 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>   }
>>   
>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>> +{
>> +	unsigned long flags;
>> +
>> +	asm volatile(
>> +		"lbz %0,%1(13); stb %2,%1(13)"
>> +		: "=r" (flags)
>> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>> +		  "r" (enable)
>> +		: "memory");
>> +
>> +	return flags;
>> +}
> Why do you have the "memory" clobber here while soft_enabled_set() does not?

I did change the function to include a local variable
and update soft_enabled, but my mistake -- that was in the next patch;
I should make the change here. Yes, we don't need a "memory" clobber
here, which is right. But this change is not complete and I will correct it.

Maddy

>
> Thanks,
> Nick
>


* Re: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-16 11:43     ` David Laight
  2016-09-16 11:59       ` Nicholas Piggin
@ 2016-09-19  5:05       ` Madhavan Srinivasan
  1 sibling, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:05 UTC (permalink / raw)
  To: David Laight, 'Nicholas Piggin'; +Cc: linuxppc-dev, paulus, anton



On Friday 16 September 2016 05:13 PM, David Laight wrote:
> From: Nicholas Piggin
>> Sent: 16 September 2016 10:53
>> On Thu, 15 Sep 2016 18:31:54 +0530
>> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>>
>>> Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
>>> wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
>>> to force the paca->soft_enabled updates.
> ...
>>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>>> index 8fad8c24760b..f828b8f8df02 100644
>>> --- a/arch/powerpc/include/asm/hw_irq.h
>>> +++ b/arch/powerpc/include/asm/hw_irq.h
>>> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>>   }
>>>
>>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>>> +{
>>> +	unsigned long flags;
>>> +
>>> +	asm volatile(
>>> +		"lbz %0,%1(13); stb %2,%1(13)"
>>> +		: "=r" (flags)
>>> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>>> +		  "r" (enable)
>>> +		: "memory");
>>> +
>>> +	return flags;
>>> +}
>> Why do you have the "memory" clobber here while soft_enabled_set() does not?
> I wondered about the missing memory clobber earlier.
>
> Any 'clobber' ought to be restricted to the referenced memory area.
> If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
> OTOH why not allocate a global register variable to r13 and access through that?

I do see this in asm/paca.h: "register struct paca_struct *local_paca 
asm("r13");",
and __check_irq_replay() in kernel/irq.c does update "irq_happened"
through it as mentioned. But the existing helpers in hw_irq.h update
soft_enabled via asm volatile, and I did the same.

Maddy
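For illustration, the global-register style of access can be sketched in plain C; the asm("r13") binding appears only in a comment so the sketch compiles on any target, and cpu0_paca is a made-up placeholder, not a kernel symbol:

```c
#include <assert.h>

struct paca_struct {
	unsigned char soft_enabled;
	unsigned char irq_happened;
};

/* The kernel pins this pointer to GPR r13:
 *     register struct paca_struct *local_paca asm("r13");
 * A plain pointer is used here so the sketch compiles anywhere. */
static struct paca_struct cpu0_paca;
static struct paca_struct *local_paca = &cpu0_paca;

/* C-level access through the pointer, as __check_irq_replay() does for
 * irq_happened: the compiler sees the store, so no asm clobber
 * bookkeeping is needed for this style of update. */
static inline void soft_enabled_set(unsigned long enable)
{
	local_paca->soft_enabled = (unsigned char)enable;
}
```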

> 	David
>


* Re: [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled
  2016-09-19  2:52           ` Nicholas Piggin
@ 2016-09-19  5:32             ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:32 UTC (permalink / raw)
  To: Nicholas Piggin, David Laight; +Cc: linuxppc-dev, paulus, anton



On Monday 19 September 2016 08:22 AM, Nicholas Piggin wrote:
> On Fri, 16 Sep 2016 13:22:24 +0000
> David Laight <David.Laight@ACULAB.COM> wrote:
>
>> From: Nicholas Piggin
>>> Sent: 16 September 2016 12:59
>>> On Fri, 16 Sep 2016 11:43:13 +0000
>>> David Laight <David.Laight@ACULAB.COM> wrote:
>>>    
>>>> From: Nicholas Piggin
>>>>> Sent: 16 September 2016 10:53
>>>>> On Thu, 15 Sep 2016 18:31:54 +0530
>>>>> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>>>>>   
>>>>>> Force use of the soft_enabled_set() wrapper to update paca->soft_enabled
>>>>>> wherever possible. Also add a new wrapper function, soft_enabled_set_return(),
>>>>>> to force the paca->soft_enabled updates.
>>>> ...
>>>>>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>>>>>> index 8fad8c24760b..f828b8f8df02 100644
>>>>>> --- a/arch/powerpc/include/asm/hw_irq.h
>>>>>> +++ b/arch/powerpc/include/asm/hw_irq.h
>>>>>> @@ -53,6 +53,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>>>>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>>>>>   }
>>>>>>
>>>>>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>>>>>> +{
>>>>>> +	unsigned long flags;
>>>>>> +
>>>>>> +	asm volatile(
>>>>>> +		"lbz %0,%1(13); stb %2,%1(13)"
>>>>>> +		: "=r" (flags)
>>>>>> +		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>>>>>> +		  "r" (enable)
>>>>>> +		: "memory");
>>>>>> +
>>>>>> +	return flags;
>>>>>> +}
>>>>> Why do you have the "memory" clobber here while soft_enabled_set() does not?
>>>> I wondered about the missing memory clobber earlier.
>>>>
>>>> Any 'clobber' ought to be restricted to the referenced memory area.
>>>> If the structure is only referenced by r13 through 'asm volatile' it isn't needed.
>>> Well a clobber (compiler barrier) at some point is needed in irq_disable and
>>> irq_enable paths, so we correctly open and close the critical section vs interrupts.
>>> I just wonder about these helpers. It might be better to take the clobbers out of
>>> there and add barrier(); in callers, which would make it more obvious.
>> If the memory clobber is needed to synchronise with the rest of the code
>> rather than just ensuring the compiler doesn't reorder accesses via r13
>> then I'd add an explicit barrier() somewhere - even if in these helpers.
>>
>> Potentially the helper wants a memory clobber for the (r13) area
>> and a separate barrier() to ensure the interrupts are masked for the
>> right code.
>> Even if both are together in the same helper.
> Good point. Some of the existing modification helpers don't seem to have
> clobbers for modifying the r13->soft_enabled memory itself, but they do
> have the memory clobber where a critical section barrier is required.
>
> The former may not be a problem if the helpers are used very carefully,
> but probably should be commented at best, if not fixed.

Yes. Agreed. Will add comments.

>   So after Maddy's
> patches, we should make all accesses go via the helper functions, so a
> clobber for the soft_enabled modification may not be required (this should
> be commented). I think it may be cleaner to specify the location in the
> constraints, but maybe that doesn't generate the best code -- something to
> investigate.
>
> Then, I'd like to see barrier()s for interrupt critical sections placed in
> the callers of these helpers, which will make the code more obvious.

OK, will look into this.

>
> Thanks,
> Nick
>


* Re: [PATCH 05/13] powerpc: Add soft_enabled manipulation functions
  2016-09-16  9:57   ` Nicholas Piggin
@ 2016-09-19  5:41     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:41 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:27 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:55 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Add new soft_enabled_* manipulation functions and implement
>> arch_local_* using the soft_enabled_* wrappers.
>>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/hw_irq.h | 32 ++++++++++++++------------------
>>   1 file changed, 14 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index f828b8f8df02..dc3c248f9244 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -53,21 +53,7 @@ static inline notrace void soft_enabled_set(unsigned long enable)
>>   	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
>>   }
>>   
>> -static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>> -{
>> -	unsigned long flags;
>> -
>> -	asm volatile(
>> -		"lbz %0,%1(13); stb %2,%1(13)"
>> -		: "=r" (flags)
>> -		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>> -		  "r" (enable)
>> -		: "memory");
>> -
>> -	return flags;
>> -}
>> -
>> -static inline unsigned long arch_local_save_flags(void)
>> +static inline notrace unsigned long soft_enabled_return(void)
>>   {
>>   	unsigned long flags;
>>   
>> @@ -79,20 +65,30 @@ static inline unsigned long arch_local_save_flags(void)
>>   	return flags;
>>   }
>>   
>> -static inline unsigned long arch_local_irq_disable(void)
>> +static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
>>   {
>>   	unsigned long flags, zero;
>>   
>>   	asm volatile(
>> -		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
>> +		"mr %1,%3; lbz %0,%2(13); stb %1,%2(13)"
>>   		: "=r" (flags), "=&r" (zero)
>>   		: "i" (offsetof(struct paca_struct, soft_enabled)),\
>> -		  "i" (IRQ_DISABLE_MASK_LINUX)
>> +		  "r" (enable)
>>   		: "memory");
>>   
>>   	return flags;
>>   }
> As we talked about earlier, it would be nice to add builtin_constant
> variants to avoid the extra instruction. If you prefer to do that
> after this series, that's fine.

True, my bad. Will look into this.

Maddy
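A possible shape for the builtin_constant variant, sketched in plain C (both arms are ordinary stores here; in the real helper they would expand to two different inline-asm templates, so this only shows the dispatch pattern):

```c
#include <assert.h>

static unsigned char soft_enabled;	/* stands in for paca->soft_enabled */

/* A compile-time-constant mask could use an asm template with an
 * immediate operand (ppc "li"), while a run-time value needs the
 * extra register move ("mr") this patch introduces. */
static inline unsigned long soft_enabled_set_return(unsigned long enable)
{
	unsigned long flags = soft_enabled;

	if (__builtin_constant_p(enable))
		soft_enabled = (unsigned char)enable;	/* "li rN,imm" form */
	else
		soft_enabled = (unsigned char)enable;	/* "mr rN,rM" form */
	return flags;
}
```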

>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>


* Re: [PATCH 06/13] powerpc: reverse the soft_enable logic
  2016-09-16 10:05   ` Nicholas Piggin
@ 2016-09-19  5:45     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:45 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:35 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:56 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> "paca->soft_enabled" is used as a flag to mask some interrupts.
>> Currently supported flag values and their details:
>>
>> soft_enabled    MSR[EE]
>>
>> 0               0       Disabled (PMI and HMI not masked)
>> 1               1       Enabled
>>
>> "paca->soft_enabled" is initialized to 1 to mark interrupts as
>> enabled. arch_local_irq_disable() will toggle the value when interrupts
>> need to be disabled. At this point, the interrupts are not actually disabled;
>> instead, the interrupt vector has code to check the flag and mask the
>> interrupt when it occurs. "Masking" here means updating paca->irq_happened
>> and returning. arch_local_irq_restore() is called to re-enable interrupts,
>> and it checks and replays interrupts if any occurred.
>>
>> Now, as mentioned, the current logic does not mask "performance monitoring
>> interrupts", and PMIs are implemented as NMIs. But this patchset depends on
>> local_irq_* for a successful local_* update. Meaning, mask all possible
>> interrupts during the local_* update and replay them after the update.
>>
>> So the idea here is to reverse the "paca->soft_enabled" logic. New values and
>> details:
>>
>> soft_enabled    MSR[EE]
>>
>> 1               0       Disabled  (PMI and HMI not masked)
>> 0               1       Enabled
>>
>> The reason for this change is to create the foundation for a third flag
>> value, "2", for "soft_enabled", to add support for masking PMIs. When
>> ->soft_enabled is set to "2", PMIs are masked; when it is set to "1",
>> PMIs are not masked.
>>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/hw_irq.h | 4 ++--
>>   arch/powerpc/kernel/entry_64.S    | 5 ++---
>>   2 files changed, 4 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index dc3c248f9244..fd9b421f9020 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -30,8 +30,8 @@
>>   /*
>>    * flags for paca->soft_enabled
>>    */
>> -#define IRQ_DISABLE_MASK_NONE		1
>> -#define IRQ_DISABLE_MASK_LINUX		0
>> +#define IRQ_DISABLE_MASK_NONE		0
>> +#define IRQ_DISABLE_MASK_LINUX		1
>>   
>>   #endif /* CONFIG_PPC64 */
>>   
>> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>> index aef7b64cbbeb..879aeb11ad29 100644
>> --- a/arch/powerpc/kernel/entry_64.S
>> +++ b/arch/powerpc/kernel/entry_64.S
>> @@ -131,8 +131,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
>>   	 */
>>   #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
>>   	lbz	r10,PACASOFTIRQEN(r13)
>> -	xori	r10,r10,IRQ_DISABLE_MASK_NONE
>> -1:	tdnei	r10,0
>> +1:	tdnei	r10,IRQ_DISABLE_MASK_NONE
>>   	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
>>   #endif
>>   
>> @@ -1012,7 +1011,7 @@ _GLOBAL(enter_rtas)
>>   	 * check it with the asm equivalent of WARN_ON
>>   	 */
>>   	lbz	r0,PACASOFTIRQEN(r13)
>> -1:	tdnei	r0,IRQ_DISABLE_MASK_LINUX
>> +1:	tdeqi	r0,IRQ_DISABLE_MASK_NONE
>>   	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
>>   #endif
>>   	
> We specifically want to ensure that _LINUX interrupts are disabled
> here. Not that we allow masking of others without _LINUX now, but
> current behavior is checking that LINUX ones are masked.
>
> Otherwise it seems okay.
>
> It might be nice after this series to do a pass and rename
> soft_enabled to soft_masked.

Yes, definitely. I do have it on my todo list.

Maddy

>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>


* Re: [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro
  2016-09-16 10:12   ` Nicholas Piggin
@ 2016-09-19  5:54     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:54 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:42 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:58 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> To support the addition of "bitmask" to the MASKABLE_* macros,
>> factor out the EXCEPTION_PROLOG_1 macro.
>>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> Really minor nit, but as a matter of readability of the series,
> would you consider moving this next to patch 10 where it's used,
> if you submit again?
Yes, sure. Makes sense. Will do it.

Maddy

>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>


* Re: [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled
  2016-09-16 10:16   ` Nicholas Piggin
@ 2016-09-19  5:57     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:57 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 03:46 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:31:59 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Currently soft_enabled is used as a flag to determine
>> the interrupt state. This patch extends soft_enabled
>> to be used as a mask instead of a flag.
> This should be the title of the patch, IMO. Introducing the new
> mask bit is incidental (and could be moved to patch 11). The key
> here of course is switching the tests from a flag to a mask.
>
> Very cool that you got this all worked out without adding any
> new instructions.

Will make the changes accordingly. Thanks
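The flag-to-mask switch can be modeled in a few lines of C (the helper names here are illustrative, not from the patch): equality compares against a single flag value become bitwise tests of individual mask bits, which is exactly the cmpwi-to-andi. change in the asm.

```c
#include <assert.h>

#define IRQ_DISABLE_MASK_NONE  0
#define IRQ_DISABLE_MASK_LINUX 1
#define IRQ_DISABLE_MASK_PMU   2

/* Mask-style test: LINUX interrupts are masked whenever the LINUX bit
 * is set, regardless of any other bits ("andi." in the asm), rather
 * than only when the whole value equals the flag ("cmpwi"). */
static inline int linux_irqs_masked(unsigned long soft_enabled)
{
	return (soft_enabled & IRQ_DISABLE_MASK_LINUX) != 0;
}

static inline int pmis_masked(unsigned long soft_enabled)
{
	return (soft_enabled & IRQ_DISABLE_MASK_PMU) != 0;
}
```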

>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/exception-64s.h | 4 ++--
>>   arch/powerpc/include/asm/hw_irq.h        | 1 +
>>   arch/powerpc/include/asm/irqflags.h      | 4 ++--
>>   arch/powerpc/kernel/entry_64.S           | 4 ++--
>>   arch/powerpc/kernel/exceptions-64e.S     | 6 +++---
>>   5 files changed, 10 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>> index dd3253bd0d8e..1eea4ab75607 100644
>> --- a/arch/powerpc/include/asm/exception-64s.h
>> +++ b/arch/powerpc/include/asm/exception-64s.h
>> @@ -430,9 +430,9 @@ label##_relon_hv:						\
>>   
>>   #define __SOFTEN_TEST(h, vec)						\
>>   	lbz	r10,PACASOFTIRQEN(r13);					\
>> -	cmpwi	r10,IRQ_DISABLE_MASK_LINUX;							\
>> +	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;				\
>>   	li	r10,SOFTEN_VALUE_##vec;					\
>> -	beq	masked_##h##interrupt
>> +	bne	masked_##h##interrupt
>>   #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
>>   
>>   #define SOFTEN_TEST_PR(vec)						\
>> diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
>> index fd9b421f9020..245262c02bab 100644
>> --- a/arch/powerpc/include/asm/hw_irq.h
>> +++ b/arch/powerpc/include/asm/hw_irq.h
>> @@ -32,6 +32,7 @@
>>    */
>>   #define IRQ_DISABLE_MASK_NONE		0
>>   #define IRQ_DISABLE_MASK_LINUX		1
>> +#define IRQ_DISABLE_MASK_PMU		2
>>   
>>   #endif /* CONFIG_PPC64 */
>>   
>> diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
>> index d0ed2a7d7d10..9ff09747a226 100644
>> --- a/arch/powerpc/include/asm/irqflags.h
>> +++ b/arch/powerpc/include/asm/irqflags.h
>> @@ -48,11 +48,11 @@
>>   #define RECONCILE_IRQ_STATE(__rA, __rB)		\
>>   	lbz	__rA,PACASOFTIRQEN(r13);	\
>>   	lbz	__rB,PACAIRQHAPPENED(r13);	\
>> -	cmpwi	cr0,__rA,IRQ_DISABLE_MASK_LINUX;\
>> +	andi.	__rA,__rA,IRQ_DISABLE_MASK_LINUX;\
>>   	li	__rA,IRQ_DISABLE_MASK_LINUX;	\
>>   	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
>>   	stb	__rB,PACAIRQHAPPENED(r13);	\
>> -	beq	44f;				\
>> +	bne	44f;				\
>>   	stb	__rA,PACASOFTIRQEN(r13);	\
>>   	TRACE_DISABLE_INTS;			\
>>   44:
>> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>> index 879aeb11ad29..533e363914a9 100644
>> --- a/arch/powerpc/kernel/entry_64.S
>> +++ b/arch/powerpc/kernel/entry_64.S
>> @@ -764,8 +764,8 @@ restore:
>>   	 */
>>   	ld	r5,SOFTE(r1)
>>   	lbz	r6,PACASOFTIRQEN(r13)
>> -	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
>> -	beq	restore_irq_off
>> +	andi.	r5,r5,IRQ_DISABLE_MASK_LINUX
>> +	bne	restore_irq_off
>>   
>>   	/* We are enabling, were we already enabled ? Yes, just return */
>>   	cmpwi	cr0,r6,IRQ_DISABLE_MASK_NONE
>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index 5c628b5696f6..8e40df2c2f30 100644
>> --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -212,8 +212,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>>   	/* Interrupts had better not already be enabled... */
>>   	twnei	r6,IRQ_DISABLE_MASK_LINUX
>>   
>> -	cmpwi	cr0,r5,IRQ_DISABLE_MASK_LINUX
>> -	beq	1f
>> +	andi.	r5,r5,IRQ_DISABLE_MASK_LINUX
>> +	bne	1f
>>   
>>   	TRACE_ENABLE_INTS
>>   	stb	r5,PACASOFTIRQEN(r13)
>> @@ -352,7 +352,7 @@ ret_from_mc_except:
>>   
>>   #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
>>   	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
>> -	cmpwi	cr0,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
>> +	andi.	r10,r10,IRQ_DISABLE_MASK_LINUX;/* yes -> go out of line */ \
>>   	beq	masked_interrupt_book3e_##n
>>   
>>   #define PROLOG_ADDITION_2REGS_GEN(n)					    \


* Re: [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros
  2016-09-16 11:03   ` Nicholas Piggin
@ 2016-09-19  5:58     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  5:58 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 04:33 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:32:00 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>> Make explicit the interrupt masking supported
>> by a given interrupt handler. The patch correspondingly
>> extends the MASKABLE_* macros with an additional parameter.
>> The "bitmask" parameter is passed to the SOFTEN_TEST macro to
>> decide on masking the interrupt.
>>
>> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
>> ---
>>   arch/powerpc/include/asm/exception-64s.h | 62 ++++++++++++++++----------------
>>   arch/powerpc/kernel/exceptions-64s.S     | 36 ++++++++++++-------
>>   2 files changed, 54 insertions(+), 44 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
>> index 1eea4ab75607..41be0c2d7658 100644
>> --- a/arch/powerpc/include/asm/exception-64s.h
>> +++ b/arch/powerpc/include/asm/exception-64s.h
>> @@ -179,9 +179,9 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
>>    * checking of the interrupt maskable level in the SOFTEN_TEST.
>>    * Intended to be used in MASKABLE_EXCPETION_* macros.
>>    */
>> -#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
>> +#define __EXCEPTION_PROLOG_1(area, extra, vec, bitmask)			\
>>   	__EXCEPTION_PROLOG_1_PRE(area);					\
>> -	extra(vec);							\
>> +	extra(vec, bitmask);						\
>>   	__EXCEPTION_PROLOG_1_POST(area);
>>   
>>   /*
> Is __EXCEPTION_PROLOG_1 now for maskable exceptions, and EXCEPTION_PROLOG_1
> for unmaskable? Does it make sense to rename __EXCEPTION_PROLOG_1 to
> MASKABLE_EXCEPTION_PROLOG_1? Reducing the mystery underscores in this file would
> be nice!

Yes. That is true. Will make the changes.

Maddy

>
> This worked out nicely with mask bit being passed in by the exception handlers.
> Very neat.

Thanks.

> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>


* Re: [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask
  2016-09-16 10:56   ` Nicholas Piggin
@ 2016-09-19  6:03     ` Madhavan Srinivasan
  0 siblings, 0 replies; 44+ messages in thread
From: Madhavan Srinivasan @ 2016-09-19  6:03 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: benh, mpe, anton, paulus, linuxppc-dev



On Friday 16 September 2016 04:26 PM, Nicholas Piggin wrote:
> On Thu, 15 Sep 2016 18:32:02 +0530
> Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:
>
>
>> diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
>> index 9e5e9a6d4147..ae31b1e85fdb 100644
>> --- a/arch/powerpc/kernel/irq.c
>> +++ b/arch/powerpc/kernel/irq.c
>> @@ -209,6 +209,10 @@ notrace void arch_local_irq_restore(unsigned long en)
>>   	unsigned char irq_happened;
>>   	unsigned int replay;
>>   
>> +#ifdef CONFIG_IRQ_DEBUG_SUPPORT
>> +	WARN_ON(en & local_paca->soft_enabled & ~IRQ_DISABLE_MASK_LINUX);
>> +#endif
>> +
>>   	/* Write the new soft-enabled value */
>>   	soft_enabled_set(en);
>>   
> Oh one other quick thing I just noticed: you could put this debug
> check into your soft_enabled accessors.

OK. Will move it.

> We did decide it's okay for your masking level to go both ways,
> didn't we? I.e.,
>
> local_irq_disable();
> local_irq_pmu_save(flags);
> local_irq_pmu_restore(flags);
> local_irq_enable();
>
> -> LINUX -> LINUX|PMU -> LINUX ->
>
> This means PMU interrupts would not get replayed despite being
> enabled here. In practice I think that doesn't matter/can't happen
> because a PMU interrupt while masked would hard disable anyway. A
> comment explaining it might be nice though.

Yes. I thought I had added the comment. Apologies. Will respin
with the comment.

Maddy
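The masking-level transitions from the example above can be modeled in plain C (the *_sketch names are made up; local_irq_pmu_save/restore come from the cover letter, and the byte variable stands in for paca->soft_enabled):

```c
#include <assert.h>

#define IRQ_DISABLE_MASK_NONE  0
#define IRQ_DISABLE_MASK_LINUX 1
#define IRQ_DISABLE_MASK_PMU   2

static unsigned char soft_enabled = IRQ_DISABLE_MASK_NONE;

static unsigned long soft_enabled_set_return(unsigned long v)
{
	unsigned long old = soft_enabled;

	soft_enabled = (unsigned char)v;
	return old;
}

/* LINUX -> LINUX|PMU: raise the masking level, remembering the old one. */
static unsigned long local_irq_pmu_save_sketch(void)
{
	return soft_enabled_set_return(IRQ_DISABLE_MASK_LINUX |
				       IRQ_DISABLE_MASK_PMU);
}

/* LINUX|PMU -> previous level.  Real code would replay a pending PMI
 * here if the restored level unmasks it; in practice a PMI taken while
 * masked hard-disables, so it is replayed on the final unmask. */
static void local_irq_pmu_restore_sketch(unsigned long flags)
{
	soft_enabled_set_return(flags);
}
```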

> Thanks,
> Nick
>


end of thread, other threads:[~2016-09-19  6:03 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-15 13:01 [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 01/13] powerpc: Add #defs for paca->soft_enabled flags Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 02/13] powerpc: Cleanup to use IRQ_DISABLE_MASK_* macros for paca->soft_enabled update Madhavan Srinivasan
2016-09-16  9:47   ` Nicholas Piggin
2016-09-19  4:04     ` Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 03/13] powerpc: move set_soft_enabled() and rename Madhavan Srinivasan
2016-09-16  9:50   ` Nicholas Piggin
2016-09-19  4:05     ` Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 04/13] powerpc: Use soft_enabled_set api to update paca->soft_enabled Madhavan Srinivasan
2016-09-16  9:53   ` Nicholas Piggin
2016-09-16 11:43     ` David Laight
2016-09-16 11:59       ` Nicholas Piggin
2016-09-16 13:22         ` David Laight
2016-09-19  2:52           ` Nicholas Piggin
2016-09-19  5:32             ` Madhavan Srinivasan
2016-09-19  5:05       ` Madhavan Srinivasan
2016-09-19  4:11     ` Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 05/13] powerpc: Add soft_enabled manipulation functions Madhavan Srinivasan
2016-09-16  9:57   ` Nicholas Piggin
2016-09-19  5:41     ` Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 06/13] powerpc: reverse the soft_enable logic Madhavan Srinivasan
2016-09-16 10:05   ` Nicholas Piggin
2016-09-19  5:45     ` Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 07/13] powerpc: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
2016-09-16 10:08   ` Nicholas Piggin
2016-09-15 13:01 ` [PATCH 08/13] powerpc: Add new _EXCEPTION_PROLOG_1 macro Madhavan Srinivasan
2016-09-16 10:12   ` Nicholas Piggin
2016-09-19  5:54     ` Madhavan Srinivasan
2016-09-15 13:01 ` [PATCH 09/13] powerpc: Introduce new mask bit for soft_enabled Madhavan Srinivasan
2016-09-16 10:16   ` Nicholas Piggin
2016-09-19  5:57     ` Madhavan Srinivasan
2016-09-15 13:02 ` [PATCH 10/13] powerpc: Add "bitmask" paramater to MASKABLE_* macros Madhavan Srinivasan
2016-09-16 11:03   ` Nicholas Piggin
2016-09-19  5:58     ` Madhavan Srinivasan
2016-09-15 13:02 ` [PATCH 11/13] powerpc: Add support to mask perf interrupts and replay them Madhavan Srinivasan
2016-09-16 10:47   ` Nicholas Piggin
2016-09-15 13:02 ` [PATCH 12/13] powerpc: Add a Kconfig and a functions to set new soft_enabled mask Madhavan Srinivasan
2016-09-16 10:34   ` Nicholas Piggin
2016-09-16 10:56   ` Nicholas Piggin
2016-09-19  6:03     ` Madhavan Srinivasan
2016-09-15 13:02 ` [PATCH 13/13] powerpc: rewrite local_t using soft_irq Madhavan Srinivasan
2016-09-15 16:55   ` kbuild test robot
2016-09-16 11:01   ` Nicholas Piggin
2016-09-16 11:08 ` [PATCH 00/13] powerpc: "paca->soft_enabled" based local atomic operation implementation Nicholas Piggin
