* [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation
@ 2017-12-20  3:55 Madhavan Srinivasan
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Local atomic operations are fast and highly reentrant per-CPU counters,
used for per-CPU variable updates. Local atomic operations only guarantee
atomicity of the variable modification with respect to the CPU which owns
the data, so they need to be executed in a preemption-safe way.

Here is the design of the patchset. Since local_* operations only need
to be atomic with respect to interrupts (IIUC), we have two options:
either replay the "op" if interrupted, or replay the interrupt after
the "op". The initial patchset posted implemented the local_* operations
based on CR5, which replays the "op". That patchset had issues when
rewinding an address pointer into an array, which made the slow path
really slow. And since the CR5-based implementation proposed using
__ex_table to find the rewind address, it raised concerns about the size
of __ex_table and vmlinux.

https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-December/123115.html

This patchset instead uses Benjamin Herrenschmidt's suggestion of using
arch_local_irq_disable() to soft-disable interrupts (including PMIs).
After the "op" finishes, arch_local_irq_restore() is called, and any
interrupts that occurred in the meantime are replayed.

The current paca->soft_enabled logic is reversed, and the
MASKABLE_EXCEPTION_* macros are extended to support this feature.

The patches rewrite the current local_* functions to use
arch_local_irq_disable(). The base flow for each function is:

 {
        powerpc_local_irq_pmu_save(flags)
        load
        ..
        store
        powerpc_local_irq_pmu_restore(flags)
 }

The reason for this approach is that the l[w/d]arx/st[w/d]cx.
instruction pair currently used for the local_* operations is heavy on
cycle count, and the instructions don't have a local variant. To see
whether the new implementation helps, a modified version of Rusty's
benchmark code for local_t was used:

https://lkml.org/lkml/2008/12/16/450

Modifications to Rusty's benchmark code:
 - Executed only the local_t test

Here are the values with and without the patch.

Time in ns per iteration

Local_t         Without Patch           With Patch

_inc            38                      10
_add            38                      10
_read           4                       4
_add_return     38                      10

Currently only asm/local.h has been rewritten. The entire change has
been tested only on PPC64 (pseries guest) and a PPC64 LE host; the
ppc64e_* configs are only compile-tested.

Changelog v9:
 - Split some patches to make reviewing easier
 - Included a couple of fixes
 - Updated comments and commit messages
 - Renamed variables and functions

Changelog v8:
 - Rearranged series.
 - Updated the comments
 - Fixed a hang in embedded version (thanks to mpe)
 - Added a couple more cleanup patches to the series
 - Updated commit messages.

Changelog v7:
1) Included the first patch, which was missed in the previous posting

Changelog v6:
1) Moved the patch renaming soft_enabled to soft_disable_mask earlier in the series.
2) Added code to hardwire the "softe" value in pt_regs to always be 1 for userspace.
3) Rebased to latest upstream.

Changelog v5:
1) Fixed the soft_disabled_mask check in the hard_irq_disable() macro

Changelog v4:
1) Split the __SOFT_ENABLED logic check out of patch 7 and merged it into
   the soft_enabled logic reversing patch.
2) Made changes to commit messages.
3) Added a new IRQ_DISABLE_MASK_ALL to include the supported disabled mask bits.

Changelog v3:
1) Made suggested changes to commit messages.
2) Added a new patch (patch 12) to rename soft_enabled to soft_disabled_mask.

Changelog v2:
Rebased to latest upstream

Changelog v1:
1) Squashed patches 1/2 together and 8/9/10 together for readability.
2) Created a separate patch for the Kconfig changes.
3) Moved the new mask value commit to patch 11.
4) Renamed local_irq_pmu_*() to powerpc_irq_pmu_*() to avoid namespace
   clashes with the generic kernel local_irq*() functions.
5) Renamed the __EXCEPTION_PROLOG_1 macro to MASKABLE_EXCEPTION_PROLOG_1.
6) Made changes to commit messages.
7) Added more comments to the code.

Changelog RFC v5:
1) Implemented a new set of soft_enabled manipulation functions.
2) Rewrote the arch_local_irq_* functions to use the new soft_enabled_*() helpers.
3) Added WARN_ONs to identify invalid soft_enabled transitions.
4) Added powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore() to
   support masking of irqs (including PMIs).
5) Added local_irq_pmu_*() macros with trace_hardirqs_on|off() to match
   include/linux/irqflags.h.

Changelog RFC v4:
1) Fixed build breaks in ppc64e_defconfig compilation.
2) Merged the PMI replay code with the exception vector changes patch.
3) Renamed the new API to set the PMI mask bit, as suggested.
4) Modified the current arch_local_save and the new API function call to
   "OR" and store the value to ->soft_enabled instead of just storing it.
5) Updated the check in arch_local_irq_restore() to always check for
   greater than or equal to zero against the _LINUX mask bit.
6) Updated the commit messages.

Changelog RFC v3:
1) Squashed the PMI masked interrupt patch and the replay patch together.
2) Created a new patch which includes a new Kconfig option and set_irq_set_mask().
3) Fixed the compilation issue with the IRQ_DISABLE_MASK_* macros in book3e_*.

Changelog RFC v2:
1) Renamed IRQ_DISABLE_LEVEL_* to IRQ_DISABLE_MASK_* and made logic changes
   to treat soft_enabled as a mask and not a flag or level.
2) Added a new Kconfig variable to support a WARN_ON.
3) Refactored the patchset for easier review.
4) Made changes to commit messages.
5) Made changes for the BOOK3E version.

Changelog RFC v1:

1) Improved commit messages.
2) Renamed arch_local_irq_disable_var to soft_irq_set_level, as suggested.
3) Renamed the LAZY_INTERRUPT* macros to IRQ_DISABLE_LEVEL_*, as suggested.
4) Extended the MASKABLE_EXCEPTION* macros to support an additional parameter.
5) Each MASKABLE_EXCEPTION_* macro now carries a "mask_level".
6) The logic deciding whether to jump to the maskable handler in SOFTEN_TEST
   is now based on "mask_level".
7) Factored out __EXCEPTION_PROLOG_1 to support a "mask_level" parameter,
   which reduced the code changes needed for supporting "mask_level".

Madhavan Srinivasan (15):
  powerpc/64: Add #defines for paca->soft_enabled flags
  powerpc/64: Fix arch_local_irq_disable() prototype
  powerpc/64: move set_soft_enabled(), rename it, add memory clobber
  powerpc/64: Implement and use soft_enabled_return API
  powerpc/64: Implement and use soft_enabled_set_return API
  powerpc/64: Cleanup hard_irq_disable() macro
  powerpc/64: Change soft_enabled from flag to bitmask
  powerpc/64: Rename soft_enabled to irq_soft_mask
  powerpc/64s: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  powerpc/64s: Add support to take additional parameter in MASKABLE_*
    macro
  powerpc/64s: Add support to mask perf interrupts and replay them
  powerpc: Add new kconfig IRQ_DEBUG_SUPPORT
  powerpc/64s: Add new set of irq_soft_mask_ functions for PMI masking
  powerpc: use generic atomic implementation for local_t
  powerpc/64s: Implement local_t using irq soft masking

Nicholas Piggin (2):
  powerpc/64: do not trace irqs-off at interrupt return to soft-disabled
    context
  powerpc/64: Improve inline asm in arch_local_irq_disable

 arch/powerpc/Kconfig.debug               |   4 +
 arch/powerpc/include/asm/exception-64s.h | 103 ++++++++++------
 arch/powerpc/include/asm/head-64.h       |  40 +++----
 arch/powerpc/include/asm/hw_irq.h        | 165 ++++++++++++++++++++++---
 arch/powerpc/include/asm/irqflags.h      |  14 +--
 arch/powerpc/include/asm/kvm_ppc.h       |   2 +-
 arch/powerpc/include/asm/local.h         | 200 +++++++++++++------------------
 arch/powerpc/include/asm/paca.h          |   2 +-
 arch/powerpc/kernel/asm-offsets.c        |   2 +-
 arch/powerpc/kernel/entry_64.S           |  50 ++++----
 arch/powerpc/kernel/exceptions-64e.S     |  20 ++--
 arch/powerpc/kernel/exceptions-64s.S     |  38 +++---
 arch/powerpc/kernel/head_64.S            |  11 +-
 arch/powerpc/kernel/idle_book3e.S        |   5 +-
 arch/powerpc/kernel/idle_power4.S        |   5 +-
 arch/powerpc/kernel/irq.c                |  29 ++---
 arch/powerpc/kernel/optprobes_head.S     |   2 +-
 arch/powerpc/kernel/process.c            |   3 +-
 arch/powerpc/kernel/ptrace.c             |  12 ++
 arch/powerpc/kernel/setup_64.c           |   5 +-
 arch/powerpc/kernel/signal_32.c          |   8 ++
 arch/powerpc/kernel/signal_64.c          |   3 +
 arch/powerpc/kernel/time.c               |   6 +-
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  |   2 +-
 arch/powerpc/mm/hugetlbpage.c            |   2 +-
 arch/powerpc/perf/core-book3s.c          |   2 +-
 arch/powerpc/xmon/xmon.c                 |   4 +-
 27 files changed, 458 insertions(+), 281 deletions(-)

-- 
2.7.4


* [PATCH v10 01/17] powerpc/64: do not trace irqs-off at interrupt return to soft-disabled context
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

From: Nicholas Piggin <npiggin@gmail.com>

When an interrupt is returning to a soft-disabled context (which can
happen for non-maskable interrupts or synchronous interrupts), it goes
through the motions of soft-disabling again, including calling
TRACE_DISABLE_INTS (i.e., trace_hardirqs_off()).

This is not necessary, because we must already be soft-disabled in the
interrupt context. It may also be causing crashes in the irq tracing
code by re-entering as an NMI. Replace it with a warning to ensure that
soft interrupts are still disabled.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/entry_64.S | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 3320bcac7192..b3055ebf20d1 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -911,9 +911,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	beq	1f
 	rlwinm	r7,r7,0,~PACA_IRQ_HARD_DIS
 	stb	r7,PACAIRQHAPPENED(r13)
-1:	li	r0,0
-	stb	r0,PACASOFTIRQEN(r13);
-	TRACE_DISABLE_INTS
+1:
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
+	/* The interrupt should not have soft enabled. */
+	lbz	r7,PACASOFTIRQEN(r13)
+	tdnei	r7,0
+	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
+#endif
 	b	.Ldo_restore
 
 	/*
-- 
2.7.4


* [PATCH v10 02/17] powerpc/64: Add #defines for paca->soft_enabled flags
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Add two #defines, IRQ_ENABLED and IRQ_DISABLED, to be used when
updating paca->soft_enabled, and replace the hardcoded values with
IRQ_[EN/DIS]ABLED. No logic change.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h |  2 +-
 arch/powerpc/include/asm/hw_irq.h        | 21 ++++++++++++++-------
 arch/powerpc/include/asm/irqflags.h      |  6 +++---
 arch/powerpc/include/asm/kvm_ppc.h       |  2 +-
 arch/powerpc/kernel/entry_64.S           | 16 ++++++++--------
 arch/powerpc/kernel/exceptions-64e.S     |  6 +++---
 arch/powerpc/kernel/head_64.S            |  5 +++--
 arch/powerpc/kernel/idle_book3e.S        |  3 ++-
 arch/powerpc/kernel/idle_power4.S        |  3 ++-
 arch/powerpc/kernel/irq.c                |  9 +++++----
 arch/powerpc/kernel/process.c            |  3 ++-
 arch/powerpc/kernel/setup_64.c           |  3 +++
 arch/powerpc/kernel/time.c               |  2 +-
 arch/powerpc/mm/hugetlbpage.c            |  2 +-
 arch/powerpc/perf/core-book3s.c          |  2 +-
 15 files changed, 50 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index b27205297e1d..7c2486248dfa 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -499,7 +499,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	cmpwi	r10,0;							\
+	cmpwi	r10,IRQ_DISABLED;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	beq	masked_##h##interrupt
 
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 3818fa0164f0..463b595ca971 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -28,6 +28,12 @@
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
 #define PACA_IRQ_HMI		0x20
 
+/*
+ * flags for paca->soft_enabled
+ */
+#define IRQ_ENABLED	1
+#define IRQ_DISABLED	0
+
 #endif /* CONFIG_PPC64 */
 
 #ifndef __ASSEMBLY__
@@ -60,9 +66,10 @@ static inline unsigned long arch_local_irq_disable(void)
 	unsigned long flags, zero;
 
 	asm volatile(
-		"li %1,0; lbz %0,%2(13); stb %1,%2(13)"
+		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
 		: "=r" (flags), "=&r" (zero)
-		: "i" (offsetof(struct paca_struct, soft_enabled))
+		: "i" (offsetof(struct paca_struct, soft_enabled)),\
+		  "i" (IRQ_DISABLED)
 		: "memory");
 
 	return flags;
@@ -72,7 +79,7 @@ extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
 {
-	arch_local_irq_restore(1);
+	arch_local_irq_restore(IRQ_ENABLED);
 }
 
 static inline unsigned long arch_local_irq_save(void)
@@ -82,7 +89,7 @@ static inline unsigned long arch_local_irq_save(void)
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags == 0;
+	return flags == IRQ_DISABLED;
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -102,9 +109,9 @@ static inline bool arch_irqs_disabled(void)
 	u8 _was_enabled;				\
 	__hard_irq_disable();				\
 	_was_enabled = local_paca->soft_enabled;	\
-	local_paca->soft_enabled = 0;			\
+	local_paca->soft_enabled = IRQ_DISABLED;\
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;	\
-	if (_was_enabled)				\
+	if (_was_enabled == IRQ_ENABLED)	\
 		trace_hardirqs_off();			\
 } while(0)
 
@@ -127,7 +134,7 @@ static inline void may_hard_irq_enable(void)
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
 {
-	return !regs->softe;
+	return (regs->softe == IRQ_DISABLED);
 }
 
 extern bool prep_irq_for_idle(void);
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index 1aeb5f13b8c4..55d9a0c0f1a6 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -49,8 +49,8 @@
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACASOFTIRQEN(r13);	\
 	lbz	__rB,PACAIRQHAPPENED(r13);	\
-	cmpwi	cr0,__rA,0;			\
-	li	__rA,0;				\
+	cmpwi	cr0,__rA,IRQ_DISABLED;\
+	li	__rA,IRQ_DISABLED;	\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
 	beq	44f;				\
@@ -64,7 +64,7 @@
 
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACAIRQHAPPENED(r13);	\
-	li	__rB,0;				\
+	li	__rB,IRQ_DISABLED;	\
 	ori	__rA,__rA,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACASOFTIRQEN(r13);	\
 	stb	__rA,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 941c2a3f231b..70a38ba46dc0 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -873,7 +873,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = 1;
+	local_paca->soft_enabled = IRQ_ENABLED;
 #endif
 }
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index b3055ebf20d1..02536e989df5 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -130,7 +130,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	 */
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	lbz	r10,PACASOFTIRQEN(r13)
-	xori	r10,r10,1
+	xori	r10,r10,IRQ_ENABLED
 1:	tdnei	r10,0
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
@@ -147,7 +147,7 @@ system_call:			/* label this so stack traces look sane */
 	/* We do need to set SOFTE in the stack frame or the return
 	 * from interrupt will be painful
 	 */
-	li	r10,1
+	li	r10,IRQ_ENABLED
 	std	r10,SOFTE(r1)
 
 	CURRENT_THREAD_INFO(r11, r1)
@@ -743,7 +743,7 @@ resume_kernel:
 	lwz	r8,TI_PREEMPT(r9)
 	cmpwi	cr1,r8,0
 	ld	r0,SOFTE(r1)
-	cmpdi	r0,0
+	cmpdi	r0,IRQ_DISABLED
 	crandc	eq,cr1*4+eq,eq
 	bne	restore
 
@@ -783,11 +783,11 @@ restore:
 	 */
 	ld	r5,SOFTE(r1)
 	lbz	r6,PACASOFTIRQEN(r13)
-	cmpwi	cr0,r5,0
+	cmpwi	cr0,r5,IRQ_DISABLED
 	beq	.Lrestore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
-	cmpwi	cr0,r6,1
+	cmpwi	cr0,r6,IRQ_ENABLED
 	beq	cr0,.Ldo_restore
 
 	/*
@@ -806,7 +806,7 @@ restore:
 	 */
 .Lrestore_no_replay:
 	TRACE_ENABLE_INTS
-	li	r0,1
+	li	r0,IRQ_ENABLED
 	stb	r0,PACASOFTIRQEN(r13);
 
 	/*
@@ -915,7 +915,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	/* The interrupt should not have soft enabled. */
 	lbz	r7,PACASOFTIRQEN(r13)
-	tdnei	r7,0
+	tdnei	r7,IRQ_DISABLED
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	b	.Ldo_restore
@@ -1036,7 +1036,7 @@ _GLOBAL(enter_rtas)
 	 * check it with the asm equivalent of WARN_ON
 	 */
 	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdnei	r0,0
+1:	tdnei	r0,IRQ_DISABLED
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index acd8ca76233e..1ca9ed89ed0b 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -210,9 +210,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	ld	r5,SOFTE(r1)
 
 	/* Interrupts had better not already be enabled... */
-	twnei	r6,0
+	twnei	r6,IRQ_DISABLED
 
-	cmpwi	cr0,r5,0
+	cmpwi	cr0,r5,IRQ_DISABLED
 	beq	1f
 
 	TRACE_ENABLE_INTS
@@ -352,7 +352,7 @@ ret_from_mc_except:
 
 #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
 	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
-	cmpwi	cr0,r10,0;		/* yes -> go out of line */	    \
+	cmpwi	cr0,r10,IRQ_DISABLED;	/* yes -> go out of line */ \
 	beq	masked_interrupt_book3e_##n
 
 #define PROLOG_ADDITION_2REGS_GEN(n)					    \
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index aa71a90f5222..a9a577dc465c 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -765,7 +765,7 @@ _GLOBAL(pmac_secondary_start)
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,0
+	li	r0,IRQ_DISABLED
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -822,6 +822,7 @@ __secondary_start:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
+	li	r7,IRQ_DISABLED
 	stb	r7,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -988,7 +989,7 @@ start_here_common:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,0
+	li	r0,IRQ_DISABLED
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/kernel/idle_book3e.S b/arch/powerpc/kernel/idle_book3e.S
index 48c21acef915..b25a1aee6e08 100644
--- a/arch/powerpc/kernel/idle_book3e.S
+++ b/arch/powerpc/kernel/idle_book3e.S
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <asm/thread_info.h>
 #include <asm/epapr_hcalls.h>
+#include <asm/hw_irq.h>
 
 /* 64-bit version only for now */
 #ifdef CONFIG_PPC64
@@ -46,7 +47,7 @@ _GLOBAL(\name)
 	bl	trace_hardirqs_on
 	addi    r1,r1,128
 #endif
-	li	r0,1
+	li	r0,IRQ_ENABLED
 	stb	r0,PACASOFTIRQEN(r13)
 	
 	/* Interrupts will make use return to LR, so get something we want
diff --git a/arch/powerpc/kernel/idle_power4.S b/arch/powerpc/kernel/idle_power4.S
index f57a19348bdd..26b0d6f3f748 100644
--- a/arch/powerpc/kernel/idle_power4.S
+++ b/arch/powerpc/kernel/idle_power4.S
@@ -15,6 +15,7 @@
 #include <asm/ppc_asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/irqflags.h>
+#include <asm/hw_irq.h>
 
 #undef DEBUG
 
@@ -53,7 +54,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_CAN_NAP)
 	mfmsr	r7
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	li	r0,1
+	li	r0,IRQ_ENABLED
 	stb	r0,PACASOFTIRQEN(r13)	/* we'll hard-enable shortly */
 BEGIN_FTR_SECTION
 	DSSALL
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index b7a84522e652..1ba8f6632cd2 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -67,6 +67,7 @@
 #include <asm/smp.h>
 #include <asm/livepatch.h>
 #include <asm/asm-prototypes.h>
+#include <asm/hw_irq.h>
 
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
@@ -231,7 +232,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* Write the new soft-enabled value */
 	set_soft_enabled(en);
-	if (!en)
+	if (en == IRQ_DISABLED)
 		return;
 	/*
 	 * From this point onward, we can take interrupts, preempt,
@@ -276,7 +277,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	set_soft_enabled(0);
+	set_soft_enabled(IRQ_DISABLED);
 	trace_hardirqs_off();
 
 	/*
@@ -288,7 +289,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* We can soft-enable now */
 	trace_hardirqs_on();
-	set_soft_enabled(1);
+	set_soft_enabled(IRQ_ENABLED);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
@@ -363,7 +364,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = 1;
+	local_paca->soft_enabled = IRQ_ENABLED;
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 5acb5a176dbe..c59a4d2a7905 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -57,6 +57,7 @@
 #include <asm/debug.h>
 #ifdef CONFIG_PPC64
 #include <asm/firmware.h>
+#include <asm/hw_irq.h>
 #endif
 #include <asm/code-patching.h>
 #include <asm/exec.h>
@@ -1674,7 +1675,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 			childregs->gpr[14] = ppc_function_entry((void *)usp);
 #ifdef CONFIG_PPC64
 		clear_tsk_thread_flag(p, TIF_32BIT);
-		childregs->softe = 1;
+		childregs->softe = IRQ_ENABLED;
 #endif
 		childregs->gpr[15] = kthread_arg;
 		p->thread.regs = NULL;	/* no user register state */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 8956a9856604..909903f042ff 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -68,6 +68,7 @@
 #include <asm/livepatch.h>
 #include <asm/opal.h>
 #include <asm/cputhreads.h>
+#include <asm/hw_irq.h>
 
 #include "setup.h"
 
@@ -189,6 +190,8 @@ static void __init fixup_boot_paca(void)
 	get_paca()->cpu_start = 1;
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
+	/* Mark interrupts disabled in PACA */
+	get_paca()->soft_enabled = IRQ_DISABLED;
 }
 
 static void __init configure_exceptions(void)
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index fe6f3a285455..d0d730c61758 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -253,7 +253,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = 0;
+	local_paca->soft_enabled = IRQ_DISABLED;
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index a9b9083c5e49..b8640ef11041 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -752,7 +752,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * So long as we atomically load page table pointers we are safe against teardown,
  * we can follow the address down to the the page and take a ref on it.
  * This function need to be called with interrupts disabled. We use this variant
- * when we have MSR[EE] = 0 but the paca->soft_enabled = 1
+ * when we have MSR[EE] = 0 but the paca->soft_enabled = IRQ_ENABLED
  */
 pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 			bool *is_thp, unsigned *hpage_shift)
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 153812966365..7ffc02ed0b0f 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -322,7 +322,7 @@ static inline void perf_read_regs(struct pt_regs *regs)
  */
 static inline int perf_intr_is_nmi(struct pt_regs *regs)
 {
-	return !regs->softe;
+	return (regs->softe == IRQ_DISABLED);
 }
 
 /*
-- 
2.7.4


* [PATCH v10 03/17] powerpc/64: Improve inline asm in arch_local_irq_disable
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

From: Nicholas Piggin <npiggin@gmail.com>

arch_local_irq_disable is implemented strangely: a temporary output
register is set to the desired soft_enabled value via an immediate
input, and that register is then used for the store to memory. This is
not required; the immediate can be specified directly as a register input.

For simple cases at least, assembly is unchanged except register
mapping.

Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 463b595ca971..d1e76298e393 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -63,13 +63,13 @@ static inline unsigned long arch_local_save_flags(void)
 
 static inline unsigned long arch_local_irq_disable(void)
 {
-	unsigned long flags, zero;
+	unsigned long flags;
 
 	asm volatile(
-		"li %1,%3; lbz %0,%2(13); stb %1,%2(13)"
-		: "=r" (flags), "=&r" (zero)
-		: "i" (offsetof(struct paca_struct, soft_enabled)),\
-		  "i" (IRQ_DISABLED)
+		"lbz %0,%1(13); stb %2,%1(13)"
+		: "=&r" (flags)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),
+		  "r" (IRQ_DISABLED)
 		: "memory");
 
 	return flags;
-- 
2.7.4


* [PATCH v10 04/17] powerpc/64: Fix arch_local_irq_disable() prototype
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

On powerpc/64, the arch_local_irq_disable() function returns unsigned
long, which is not consistent with other architectures.

Move that set-return asm implementation into arch_local_irq_save(),
and make arch_local_irq_disable() return void, simplifying the
assembly.

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index d1e76298e393..a946b0285334 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -61,18 +61,14 @@ static inline unsigned long arch_local_save_flags(void)
 	return flags;
 }
 
-static inline unsigned long arch_local_irq_disable(void)
+static inline void arch_local_irq_disable(void)
 {
-	unsigned long flags;
-
 	asm volatile(
-		"lbz %0,%1(13); stb %2,%1(13)"
-		: "=&r" (flags)
-		: "i" (offsetof(struct paca_struct, soft_enabled)),
-		  "r" (IRQ_DISABLED)
+		"stb %0,%1(13)"
+		:
+		: "r" (IRQ_DISABLED),
+		  "i" (offsetof(struct paca_struct, soft_enabled))
 		: "memory");
-
-	return flags;
 }
 
 extern void arch_local_irq_restore(unsigned long);
@@ -84,7 +80,16 @@ static inline void arch_local_irq_enable(void)
 
 static inline unsigned long arch_local_irq_save(void)
 {
-	return arch_local_irq_disable();
+	unsigned long flags;
+
+	asm volatile(
+		"lbz %0,%1(13); stb %2,%1(13)"
+		: "=&r" (flags)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),
+		  "r" (IRQ_DISABLED)
+		: "memory");
+
+	return flags;
 }
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
-- 
2.7.4


* [PATCH v10 05/17] powerpc/64: move set_soft_enabled(), rename it, add memory clobber
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Move set_soft_enabled() from powerpc/kernel/irq.c to asm/hw_irq.h, and
have the existing open-coded updates to paca->soft_enabled go via this
access function.

Add a "memory" clobber to tell the compiler that paca->soft_enabled
has changed (gcc can't see the access through the r13 paca register).

It is renamed to soft_enabled_set(), creating a prefix namespace that
will be helpful when new soft_enabled manipulation functions are
introduced.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h  | 22 ++++++++++++++++------
 arch/powerpc/include/asm/kvm_ppc.h |  2 +-
 arch/powerpc/kernel/irq.c          | 14 ++++----------
 arch/powerpc/kernel/setup_64.c     |  4 ++--
 arch/powerpc/kernel/time.c         |  4 ++--
 5 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index a946b0285334..6441a0498234 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -49,6 +49,21 @@ extern void unknown_exception(struct pt_regs *regs);
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
 
+/*
+ * The "memory" clobber acts as both a compiler barrier
+ * for the critical section and as a clobber because
+ * we changed paca->soft_enabled
+ */
+static inline notrace void soft_enabled_set(unsigned long enable)
+{
+	asm volatile(
+		"stb %0,%1(13)"
+		:
+		: "r" (enable),
+		  "i" (offsetof(struct paca_struct, soft_enabled))
+		: "memory");
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	unsigned long flags;
@@ -63,12 +78,7 @@ static inline unsigned long arch_local_save_flags(void)
 
 static inline void arch_local_irq_disable(void)
 {
-	asm volatile(
-		"stb %0,%1(13)"
-		:
-		: "r" (IRQ_DISABLED),
-		  "i" (offsetof(struct paca_struct, soft_enabled))
-		: "memory");
+	soft_enabled_set(IRQ_DISABLED);
 }
 
 extern void arch_local_irq_restore(unsigned long);
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 70a38ba46dc0..d038c627f07f 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -873,7 +873,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	local_paca->soft_enabled = IRQ_ENABLED;
+	soft_enabled_set(IRQ_ENABLED);
 #endif
 }
 
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 1ba8f6632cd2..bf519fc7913f 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -107,12 +107,6 @@ static inline notrace unsigned long get_irq_happened(void)
 	return happened;
 }
 
-static inline notrace void set_soft_enabled(unsigned long enable)
-{
-	__asm__ __volatile__("stb %0,%1(13)"
-	: : "r" (enable), "i" (offsetof(struct paca_struct, soft_enabled)));
-}
-
 static inline notrace int decrementer_check_overflow(void)
 {
  	u64 now = get_tb_or_rtc();
@@ -231,7 +225,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	unsigned int replay;
 
 	/* Write the new soft-enabled value */
-	set_soft_enabled(en);
+	soft_enabled_set(en);
 	if (en == IRQ_DISABLED)
 		return;
 	/*
@@ -277,7 +271,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	set_soft_enabled(IRQ_DISABLED);
+	soft_enabled_set(IRQ_DISABLED);
 	trace_hardirqs_off();
 
 	/*
@@ -289,7 +283,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* We can soft-enable now */
 	trace_hardirqs_on();
-	set_soft_enabled(IRQ_ENABLED);
+	soft_enabled_set(IRQ_ENABLED);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
@@ -364,7 +358,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	local_paca->soft_enabled = IRQ_ENABLED;
+	soft_enabled_set(IRQ_ENABLED);
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 909903f042ff..adb069af4baf 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -191,7 +191,7 @@ static void __init fixup_boot_paca(void)
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = IRQ_DISABLED;
+	soft_enabled_set(IRQ_DISABLED);
 }
 
 static void __init configure_exceptions(void)
@@ -354,7 +354,7 @@ void __init early_setup(unsigned long dt_ptr)
 void early_setup_secondary(void)
 {
 	/* Mark interrupts disabled in PACA */
-	get_paca()->soft_enabled = 0;
+	soft_enabled_set(IRQ_DISABLED);
 
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index d0d730c61758..f1ecf40fc6c1 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -253,7 +253,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	local_paca->soft_enabled = IRQ_DISABLED;
+	soft_enabled_set(IRQ_DISABLED);
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
@@ -261,7 +261,7 @@ void accumulate_stolen_time(void)
 	acct->utime -= ust;
 	acct->steal_time += ust + sst;
 
-	local_paca->soft_enabled = save_soft_enabled;
+	soft_enabled_set(save_soft_enabled);
 }
 
 static inline u64 calculate_stolen_time(u64 stop_tb)
-- 
2.7.4


* [PATCH v10 06/17] powerpc/64: Implement and use soft_enabled_return API
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Add a new wrapper function, soft_enabled_return(), to return the
paca->soft_enabled value.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 21 +++++++++++++--------
 arch/powerpc/kernel/time.c        |  2 +-
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 6441a0498234..fbffeecb913f 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -49,6 +49,18 @@ extern void unknown_exception(struct pt_regs *regs);
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
 
+static inline notrace unsigned long soft_enabled_return(void)
+{
+	unsigned long flags;
+
+	asm volatile(
+		"lbz %0,%1(13)"
+		: "=r" (flags)
+		: "i" (offsetof(struct paca_struct, soft_enabled)));
+
+	return flags;
+}
+
 /*
  * The "memory" clobber acts as both a compiler barrier
  * for the critical section and as a clobber because
@@ -66,14 +78,7 @@ static inline notrace void soft_enabled_set(unsigned long enable)
 
 static inline unsigned long arch_local_save_flags(void)
 {
-	unsigned long flags;
-
-	asm volatile(
-		"lbz %0,%1(13)"
-		: "=r" (flags)
-		: "i" (offsetof(struct paca_struct, soft_enabled)));
-
-	return flags;
+	return soft_enabled_return();
 }
 
 static inline void arch_local_irq_disable(void)
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index f1ecf40fc6c1..9b483520c010 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -244,7 +244,7 @@ static u64 scan_dispatch_log(u64 stop_tb)
 void accumulate_stolen_time(void)
 {
 	u64 sst, ust;
-	u8 save_soft_enabled = local_paca->soft_enabled;
+	unsigned long save_soft_enabled = soft_enabled_return();
 	struct cpu_accounting_data *acct = &local_paca->accounting;
 
 	/* We are called early in the exception entry, before
-- 
2.7.4


* [PATCH v10 07/17] powerpc/64: Implement and use soft_enabled_set_return API
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Add a new wrapper function, soft_enabled_set_return(), for the
paca->soft_enabled updates that need to return the previous value.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index fbffeecb913f..8b1476609ba7 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -76,6 +76,20 @@ static inline notrace void soft_enabled_set(unsigned long enable)
 		: "memory");
 }
 
+static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
+{
+	unsigned long flags;
+
+	asm volatile(
+		"lbz %0,%1(13); stb %2,%1(13)"
+		: "=&r" (flags)
+		: "i" (offsetof(struct paca_struct, soft_enabled)),
+		  "r" (enable)
+		: "memory");
+
+	return flags;
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	return soft_enabled_return();
@@ -95,16 +109,7 @@ static inline void arch_local_irq_enable(void)
 
 static inline unsigned long arch_local_irq_save(void)
 {
-	unsigned long flags;
-
-	asm volatile(
-		"lbz %0,%1(13); stb %2,%1(13)"
-		: "=&r" (flags)
-		: "i" (offsetof(struct paca_struct, soft_enabled)),
-		  "r" (IRQ_DISABLED)
-		: "memory");
-
-	return flags;
+	return soft_enabled_set_return(IRQ_DISABLED);
 }
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
-- 
2.7.4


* [PATCH v10 08/17] powerpc/64: Cleanup hard_irq_disable() macro
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Minor cleanup to use the helper functions for manipulating the
paca->soft_enabled variable.

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 8b1476609ba7..232795f64804 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -131,12 +131,11 @@ static inline bool arch_irqs_disabled(void)
 #endif
 
 #define hard_irq_disable()	do {			\
-	u8 _was_enabled;				\
+	unsigned long flags;				\
 	__hard_irq_disable();				\
-	_was_enabled = local_paca->soft_enabled;	\
-	local_paca->soft_enabled = IRQ_DISABLED;\
+	flags = soft_enabled_set_return(IRQ_DISABLED);	\
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;	\
-	if (_was_enabled == IRQ_ENABLED)	\
+	if (!arch_irqs_disabled_flags(flags))		\
 		trace_hardirqs_off();			\
 } while(0)
 
-- 
2.7.4


* [PATCH v10 09/17] powerpc/64: Change soft_enabled from flag to bitmask
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

"paca->soft_enabled" is used as a flag to mask some of interrupts.
Currently supported flags values and their details:

soft_enabled    MSR[EE]

0               0       Disabled (PMI and HMI not masked)
1               1       Enabled

"paca->soft_enabled" is initialized to 1 to make the interripts as
enabled. arch_local_irq_disable() will toggle the value when
interrupts needs to disbled. At this point, the interrupts are not
actually disabled, instead, interrupt vector has code to check for the
flag and mask it when it occurs. By "mask it", it update interrupt
paca->irq_happened and return. arch_local_irq_restore() is called to
re-enable interrupts, which checks and replays interrupts if any
occured.

As mentioned, the current logic does not mask "performance monitoring
interrupts", and PMIs are implemented as NMIs. But this patchset depends
on local_irq_* for a successful local_* update, meaning all possible
interrupts must be masked during a local_* update and replayed after it.

So the idea here is to reverse the "paca->soft_enabled" logic. The new
values and details:

soft_enabled    MSR[EE]

1               0       Disabled  (PMI and HMI not masked)
0               1       Enabled

The reason for this change is to lay the foundation for a third mask
value, "0x2", for "soft_enabled", to add support for masking PMIs. When
->soft_enabled is set to "3", PMI interrupts are masked, and when it is
set to "1", PMIs are not masked. This patch also extends soft_enabled
into an interrupt disable mask.

The current flags are renamed from IRQ_[EN/DIS]ABLED to
IRQ_DISABLE_MASK_NONE and IRQ_DISABLE_MASK.

The patch also fixes the ptrace path to force the "softe" value seen by
the user to always be 1. The reason being that, even though userspace
has no business knowing about softe, it is part of pt_regs. Likewise in
the signal context.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h |  4 ++--
 arch/powerpc/include/asm/hw_irq.h        | 34 +++++++++++++++++++++++---------
 arch/powerpc/include/asm/irqflags.h      |  8 ++++----
 arch/powerpc/include/asm/kvm_ppc.h       |  2 +-
 arch/powerpc/kernel/entry_64.S           | 27 ++++++++++++-------------
 arch/powerpc/kernel/exceptions-64e.S     | 10 +++++-----
 arch/powerpc/kernel/head_64.S            |  6 +++---
 arch/powerpc/kernel/idle_book3e.S        |  2 +-
 arch/powerpc/kernel/idle_power4.S        |  2 +-
 arch/powerpc/kernel/irq.c                | 26 ++++++++++++++++++------
 arch/powerpc/kernel/process.c            |  2 +-
 arch/powerpc/kernel/ptrace.c             | 12 +++++++++++
 arch/powerpc/kernel/setup_64.c           |  4 ++--
 arch/powerpc/kernel/signal_32.c          |  8 ++++++++
 arch/powerpc/kernel/signal_64.c          |  3 +++
 arch/powerpc/kernel/time.c               |  2 +-
 arch/powerpc/mm/hugetlbpage.c            |  2 +-
 arch/powerpc/perf/core-book3s.c          |  2 +-
 18 files changed, 104 insertions(+), 52 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 7c2486248dfa..b8f8a78ffa09 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -499,9 +499,9 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 
 #define __SOFTEN_TEST(h, vec)						\
 	lbz	r10,PACASOFTIRQEN(r13);					\
-	cmpwi	r10,IRQ_DISABLED;				\
+	andi.	r10,r10,IRQ_DISABLE_MASK;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
-	beq	masked_##h##interrupt
+	bne	masked_##h##interrupt
 
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 232795f64804..82c4e3572aa9 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -31,8 +31,8 @@
 /*
  * flags for paca->soft_enabled
  */
-#define IRQ_ENABLED	1
-#define IRQ_DISABLED	0
+#define IRQ_DISABLE_MASK_NONE	0x00
+#define IRQ_DISABLE_MASK	0x01
 
 #endif /* CONFIG_PPC64 */
 
@@ -68,6 +68,18 @@ static inline notrace unsigned long soft_enabled_return(void)
  */
-static inline notrace void soft_enabled_set(unsigned long enable)
+static inline notrace void soft_enabled_set(unsigned long mask)
 {
+#ifdef CONFIG_TRACE_IRQFLAGS
+	/*
+	 * mask must always include LINUX bit if any are set, and
+	 * interrupts don't get replayed until the Linux interrupt is
+	 * unmasked. This could be changed to replay partial unmasks
+	 * in future, which would allow Linux masks to nest inside
+	 * other masks, among other things. For now, be very dumb and
+	 * simple.
+	 */
+	WARN_ON(mask && !(mask & IRQ_DISABLE_MASK));
+#endif
+
 	asm volatile(
 		"stb %0,%1(13)"
 		:
@@ -76,15 +88,19 @@ static inline notrace void soft_enabled_set(unsigned long enable)
 		: "memory");
 }
 
-static inline notrace unsigned long soft_enabled_set_return(unsigned long enable)
+static inline notrace unsigned long soft_enabled_set_return(unsigned long mask)
 {
 	unsigned long flags;
 
+#ifdef CONFIG_TRACE_IRQFLAGS
+	WARN_ON(mask && !(mask & IRQ_DISABLE_MASK));
+#endif
+
 	asm volatile(
 		"lbz %0,%1(13); stb %2,%1(13)"
 		: "=&r" (flags)
 		: "i" (offsetof(struct paca_struct, soft_enabled)),
-		  "r" (enable)
+		  "r" (mask)
 		: "memory");
 
 	return flags;
@@ -104,17 +120,17 @@ extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
 {
-	arch_local_irq_restore(IRQ_ENABLED);
+	arch_local_irq_restore(IRQ_DISABLE_MASK_NONE);
 }
 
 static inline unsigned long arch_local_irq_save(void)
 {
-	return soft_enabled_set_return(IRQ_DISABLED);
+	return soft_enabled_set_return(IRQ_DISABLE_MASK);
 }
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags == IRQ_DISABLED;
+	return flags & IRQ_DISABLE_MASK;
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -133,7 +149,7 @@ static inline bool arch_irqs_disabled(void)
 #define hard_irq_disable()	do {			\
 	unsigned long flags;				\
 	__hard_irq_disable();				\
-	flags = soft_enabled_set_return(IRQ_DISABLED);	\
+	flags = soft_enabled_set_return(IRQ_DISABLE_MASK);\
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;	\
 	if (!arch_irqs_disabled_flags(flags))		\
 		trace_hardirqs_off();			\
@@ -158,7 +174,7 @@ static inline void may_hard_irq_enable(void)
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
 {
-	return (regs->softe == IRQ_DISABLED);
+	return (regs->softe & IRQ_DISABLE_MASK);
 }
 
 extern bool prep_irq_for_idle(void);
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index 55d9a0c0f1a6..0fd6ec7d8797 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -49,11 +49,11 @@
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACASOFTIRQEN(r13);	\
 	lbz	__rB,PACAIRQHAPPENED(r13);	\
-	cmpwi	cr0,__rA,IRQ_DISABLED;\
-	li	__rA,IRQ_DISABLED;	\
+	andi.	__rA,__rA,IRQ_DISABLE_MASK;\
+	li	__rA,IRQ_DISABLE_MASK;	\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
-	beq	44f;				\
+	bne	44f;				\
 	stb	__rA,PACASOFTIRQEN(r13);	\
 	TRACE_DISABLE_INTS;			\
 44:
@@ -64,7 +64,7 @@
 
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACAIRQHAPPENED(r13);	\
-	li	__rB,IRQ_DISABLED;	\
+	li	__rB,IRQ_DISABLE_MASK;	\
 	ori	__rA,__rA,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACASOFTIRQEN(r13);	\
 	stb	__rA,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index d038c627f07f..09992b9d9401 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -873,7 +873,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	soft_enabled_set(IRQ_ENABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 #endif
 }
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 02536e989df5..62d615328f57 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -130,8 +130,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	 */
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	lbz	r10,PACASOFTIRQEN(r13)
-	xori	r10,r10,IRQ_ENABLED
-1:	tdnei	r10,0
+1:	tdnei	r10,IRQ_DISABLE_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 
@@ -147,7 +146,7 @@ system_call:			/* label this so stack traces look sane */
 	/* We do need to set SOFTE in the stack frame or the return
 	 * from interrupt will be painful
 	 */
-	li	r10,IRQ_ENABLED
+	li	r10,IRQ_DISABLE_MASK_NONE
 	std	r10,SOFTE(r1)
 
 	CURRENT_THREAD_INFO(r11, r1)
@@ -741,10 +740,10 @@ resume_kernel:
 	beq+	restore
 	/* Check that preempt_count() == 0 and interrupts are enabled */
 	lwz	r8,TI_PREEMPT(r9)
-	cmpwi	cr1,r8,0
+	cmpwi	cr0,r8,0
+	bne	restore
 	ld	r0,SOFTE(r1)
-	cmpdi	r0,IRQ_DISABLED
-	crandc	eq,cr1*4+eq,eq
+	andi.	r0,r0,IRQ_DISABLE_MASK
 	bne	restore
 
 	/*
@@ -783,11 +782,11 @@ restore:
 	 */
 	ld	r5,SOFTE(r1)
 	lbz	r6,PACASOFTIRQEN(r13)
-	cmpwi	cr0,r5,IRQ_DISABLED
-	beq	.Lrestore_irq_off
+	andi.	r5,r5,IRQ_DISABLE_MASK
+	bne	.Lrestore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
-	cmpwi	cr0,r6,IRQ_ENABLED
+	andi.	r6,r6,IRQ_DISABLE_MASK
 	beq	cr0,.Ldo_restore
 
 	/*
@@ -806,7 +805,7 @@ restore:
 	 */
 .Lrestore_no_replay:
 	TRACE_ENABLE_INTS
-	li	r0,IRQ_ENABLED
+	li	r0,IRQ_DISABLE_MASK_NONE
 	stb	r0,PACASOFTIRQEN(r13);
 
 	/*
@@ -915,7 +914,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	/* The interrupt should not have soft enabled. */
 	lbz	r7,PACASOFTIRQEN(r13)
-	tdnei	r7,IRQ_DISABLED
+	tdeqi	r7,IRQ_DISABLE_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	b	.Ldo_restore
@@ -1031,15 +1030,15 @@ _GLOBAL(enter_rtas)
 	li	r0,0
 	mtcr	r0
 
-#ifdef CONFIG_BUG	
+#ifdef CONFIG_BUG
 	/* There is no way it is acceptable to get here with interrupts enabled,
 	 * check it with the asm equivalent of WARN_ON
 	 */
 	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdnei	r0,IRQ_DISABLED
+1:	tdeqi	r0,IRQ_DISABLE_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
-	
+
 	/* Hard-disable interrupts */
 	mfmsr	r6
 	rldicl	r7,r6,48,1
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 1ca9ed89ed0b..15dcc89dcbcc 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -210,10 +210,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	ld	r5,SOFTE(r1)
 
 	/* Interrupts had better not already be enabled... */
-	twnei	r6,IRQ_DISABLED
+	tweqi	r6,IRQ_DISABLE_MASK_NONE
 
-	cmpwi	cr0,r5,IRQ_DISABLED
-	beq	1f
+	andi.	r6,r5,IRQ_DISABLE_MASK
+	bne	1f
 
 	TRACE_ENABLE_INTS
 	stb	r5,PACASOFTIRQEN(r13)
@@ -352,8 +352,8 @@ ret_from_mc_except:
 
 #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
 	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
-	cmpwi	cr0,r10,IRQ_DISABLED;	/* yes -> go out of line */ \
-	beq	masked_interrupt_book3e_##n
+	andi.	r10,r10,IRQ_DISABLE_MASK;	/* yes -> go out of line */ \
+	bne	masked_interrupt_book3e_##n
 
 #define PROLOG_ADDITION_2REGS_GEN(n)					    \
 	std	r14,PACA_EXGEN+EX_R14(r13);				    \
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index a9a577dc465c..d43286208ee2 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -765,7 +765,7 @@ _GLOBAL(pmac_secondary_start)
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,IRQ_DISABLED
+	li	r0,IRQ_DISABLE_MASK
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -822,7 +822,7 @@ __secondary_start:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r7,IRQ_DISABLED
+	li	r7,IRQ_DISABLE_MASK
 	stb	r7,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
@@ -989,7 +989,7 @@ start_here_common:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,IRQ_DISABLED
+	li	r0,IRQ_DISABLE_MASK
 	stb	r0,PACASOFTIRQEN(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
diff --git a/arch/powerpc/kernel/idle_book3e.S b/arch/powerpc/kernel/idle_book3e.S
index b25a1aee6e08..a459c306b04e 100644
--- a/arch/powerpc/kernel/idle_book3e.S
+++ b/arch/powerpc/kernel/idle_book3e.S
@@ -47,7 +47,7 @@ _GLOBAL(\name)
 	bl	trace_hardirqs_on
 	addi    r1,r1,128
 #endif
-	li	r0,IRQ_ENABLED
+	li	r0,IRQ_DISABLE_MASK_NONE
 	stb	r0,PACASOFTIRQEN(r13)
 	
 	/* Interrupts will make use return to LR, so get something we want
diff --git a/arch/powerpc/kernel/idle_power4.S b/arch/powerpc/kernel/idle_power4.S
index 26b0d6f3f748..785e10619d8d 100644
--- a/arch/powerpc/kernel/idle_power4.S
+++ b/arch/powerpc/kernel/idle_power4.S
@@ -54,7 +54,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_CAN_NAP)
 	mfmsr	r7
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	li	r0,IRQ_ENABLED
+	li	r0,IRQ_DISABLE_MASK_NONE
 	stb	r0,PACASOFTIRQEN(r13)	/* we'll hard-enable shortly */
 BEGIN_FTR_SECTION
 	DSSALL
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index bf519fc7913f..f3b580d18f0d 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -219,15 +219,29 @@ notrace unsigned int __check_irq_replay(void)
 	return 0;
 }
 
-notrace void arch_local_irq_restore(unsigned long en)
+notrace void arch_local_irq_restore(unsigned long mask)
 {
 	unsigned char irq_happened;
 	unsigned int replay;
 
 	/* Write the new soft-enabled value */
-	soft_enabled_set(en);
-	if (en == IRQ_DISABLED)
+	soft_enabled_set(mask);
+	if (mask) {
+#ifdef CONFIG_TRACE_IRQFLAGS
+		/*
+		 * mask must always include LINUX bit if any
+		 * are set, and interrupts don't get replayed until
+		 * the Linux interrupt is unmasked. This could be
+		 * changed to replay partial unmasks in future,
+		 * which would allow Linux masks to nest inside
+		 * other masks, among other things. For now, be very
+		 * dumb and simple.
+		 */
+		WARN_ON(!(mask & IRQ_DISABLE_MASK));
+#endif
 		return;
+	}
+
 	/*
 	 * From this point onward, we can take interrupts, preempt,
 	 * etc... unless we got hard-disabled. We check if an event
@@ -271,7 +285,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	soft_enabled_set(IRQ_DISABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK);
 	trace_hardirqs_off();
 
 	/*
@@ -283,7 +297,7 @@ notrace void arch_local_irq_restore(unsigned long en)
 
 	/* We can soft-enable now */
 	trace_hardirqs_on();
-	soft_enabled_set(IRQ_ENABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
@@ -358,7 +372,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	soft_enabled_set(IRQ_ENABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index c59a4d2a7905..30fe22639dd9 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1675,7 +1675,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 			childregs->gpr[14] = ppc_function_entry((void *)usp);
 #ifdef CONFIG_PPC64
 		clear_tsk_thread_flag(p, TIF_32BIT);
-		childregs->softe = IRQ_ENABLED;
+		childregs->softe = IRQ_DISABLE_MASK_NONE;
 #endif
 		childregs->gpr[15] = kthread_arg;
 		p->thread.regs = NULL;	/* no user register state */
diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index f52ad5bb7109..bd2c49475473 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -283,6 +283,18 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
 	if (regno == PT_DSCR)
 		return get_user_dscr(task, data);
 
+#ifdef CONFIG_PPC64
+	/*
+	 * softe copies paca->soft_enabled variable state. Since soft_enabled is
+	 * no longer used as a flag, let's force userspace to always see softe as 1
+	 * which means interrupts are not soft disabled.
+	 */
+	if (regno == PT_SOFTE) {
+		*data = 1;
+		return  0;
+	}
+#endif
+
 	if (regno < (sizeof(struct pt_regs) / sizeof(unsigned long))) {
 		*data = ((unsigned long *)task->thread.regs)[regno];
 		return 0;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index adb069af4baf..0931f626fdc4 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -191,7 +191,7 @@ static void __init fixup_boot_paca(void)
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
 	/* Mark interrupts disabled in PACA */
-	soft_enabled_set(IRQ_DISABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK);
 }
 
 static void __init configure_exceptions(void)
@@ -354,7 +354,7 @@ void __init early_setup(unsigned long dt_ptr)
 void early_setup_secondary(void)
 {
 	/* Mark interrupts disabled in PACA */
-	soft_enabled_set(IRQ_DISABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK);
 
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 9ffd73296f64..a30c6562ed66 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -111,12 +111,20 @@ static inline int save_general_regs(struct pt_regs *regs,
 {
 	elf_greg_t64 *gregs = (elf_greg_t64 *)regs;
 	int i;
+	/* Force userspace to always see softe as 1 (interrupts enabled) */
+	elf_greg_t64 softe = 0x1;
 
 	WARN_ON(!FULL_REGS(regs));
 
 	for (i = 0; i <= PT_RESULT; i ++) {
 		if (i == 14 && !FULL_REGS(regs))
 			i = 32;
+		if (i == PT_SOFTE) {
+			if (__put_user((unsigned int)softe, &frame->mc_gregs[i]))
+				return -EFAULT;
+			else
+				continue;
+		}
 		if (__put_user((unsigned int)gregs[i], &frame->mc_gregs[i]))
 			return -EFAULT;
 	}
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index 4b9ca3570344..2705fba544ad 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -110,6 +110,8 @@ static long setup_sigcontext(struct sigcontext __user *sc,
 	struct pt_regs *regs = tsk->thread.regs;
 	unsigned long msr = regs->msr;
 	long err = 0;
+	/* Force userspace to always see softe as 1 (interrupts enabled) */
+	unsigned long softe = 0x1;
 
 	BUG_ON(tsk != current);
 
@@ -169,6 +171,7 @@ static long setup_sigcontext(struct sigcontext __user *sc,
 	WARN_ON(!FULL_REGS(regs));
 	err |= __copy_to_user(&sc->gp_regs, regs, GP_REGS_SIZE);
 	err |= __put_user(msr, &sc->gp_regs[PT_MSR]);
+	err |= __put_user(softe, &sc->gp_regs[PT_SOFTE]);
 	err |= __put_user(signr, &sc->signal);
 	err |= __put_user(handler, &sc->handler);
 	if (set != NULL)
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 9b483520c010..e0d83df2b5e1 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -253,7 +253,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	soft_enabled_set(IRQ_DISABLED);
+	soft_enabled_set(IRQ_DISABLE_MASK);
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index b8640ef11041..f445e6037687 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -752,7 +752,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * So long as we atomically load page table pointers we are safe against teardown,
 * we can follow the address down to the page and take a ref on it.
 * This function needs to be called with interrupts disabled. We use this variant
- * when we have MSR[EE] = 0 but the paca->soft_enabled = IRQ_ENABLED
+ * when we have MSR[EE] = 0 but the paca->soft_enabled = IRQ_DISABLE_MASK_NONE
  */
 pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 			bool *is_thp, unsigned *hpage_shift)
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 7ffc02ed0b0f..9f0dbbc50d5e 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -322,7 +322,7 @@ static inline void perf_read_regs(struct pt_regs *regs)
  */
 static inline int perf_intr_is_nmi(struct pt_regs *regs)
 {
-	return (regs->softe == IRQ_DISABLED);
+	return (regs->softe & IRQ_DISABLE_MASK);
 }
 
 /*
-- 
2.7.4


* [PATCH v10 10/17] powerpc/64: Rename soft_enabled to irq_soft_mask
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (8 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 09/17] powerpc/64: Change soft_enabled from flag to bitmask Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  3:55 ` [PATCH v10 11/17] powerpc/64s: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Rename paca->soft_enabled to paca->irq_soft_mask, as it is no longer
used as a flag for interrupt state but as a mask.

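A minimal C sketch of the semantic shift (illustrative only; the plain
variable stands in for the PACA field, and the constants are the ones
this patch defines):

	#include <stdbool.h>

	#define IRQ_SOFT_MASK_NONE	0x00
	#define IRQ_SOFT_MASK_STD	0x01	/* local_irq_disable() interrupts */

	/* Illustrative stand-in for paca->irq_soft_mask */
	static unsigned char irq_soft_mask = IRQ_SOFT_MASK_NONE;

	/* A mask is tested bit-wise rather than compared as a flag value */
	static bool std_irqs_soft_masked(void)
	{
		return irq_soft_mask & IRQ_SOFT_MASK_STD;
	}
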
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h |  6 +--
 arch/powerpc/include/asm/hw_irq.h        | 70 +++++++++++++++++---------------
 arch/powerpc/include/asm/irqflags.h      | 12 +++---
 arch/powerpc/include/asm/kvm_ppc.h       |  2 +-
 arch/powerpc/include/asm/paca.h          |  2 +-
 arch/powerpc/kernel/asm-offsets.c        |  2 +-
 arch/powerpc/kernel/entry_64.S           | 26 ++++++------
 arch/powerpc/kernel/exceptions-64e.S     | 16 ++++----
 arch/powerpc/kernel/head_64.S            | 12 +++---
 arch/powerpc/kernel/idle_book3e.S        |  4 +-
 arch/powerpc/kernel/idle_power4.S        |  4 +-
 arch/powerpc/kernel/irq.c                | 23 +++--------
 arch/powerpc/kernel/optprobes_head.S     |  2 +-
 arch/powerpc/kernel/process.c            |  2 +-
 arch/powerpc/kernel/ptrace.c             |  2 +-
 arch/powerpc/kernel/setup_64.c           |  4 +-
 arch/powerpc/kernel/time.c               |  6 +--
 arch/powerpc/kvm/book3s_hv_rmhandlers.S  |  2 +-
 arch/powerpc/mm/hugetlbpage.c            |  2 +-
 arch/powerpc/perf/core-book3s.c          |  2 +-
 arch/powerpc/xmon/xmon.c                 |  4 +-
 21 files changed, 99 insertions(+), 106 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index b8f8a78ffa09..f9a9269df62e 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -432,7 +432,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	mflr	r9;			/* Get LR, later save to stack	*/ \
 	ld	r2,PACATOC(r13);	/* get kernel TOC into r2	*/ \
 	std	r9,_LINK(r1);						   \
-	lbz	r10,PACASOFTIRQEN(r13);				   \
+	lbz	r10,PACAIRQSOFTMASK(r13);				   \
 	mfspr	r11,SPRN_XER;		/* save XER in stackframe	*/ \
 	std	r10,SOFTE(r1);						   \
 	std	r11,_XER(r1);						   \
@@ -498,8 +498,8 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define SOFTEN_VALUE_0xea0	PACA_IRQ_EE
 
 #define __SOFTEN_TEST(h, vec)						\
-	lbz	r10,PACASOFTIRQEN(r13);					\
-	andi.	r10,r10,IRQ_DISABLE_MASK;				\
+	lbz	r10,PACAIRQSOFTMASK(r13);				\
+	andi.	r10,r10,IRQ_SOFT_MASK_STD;				\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	bne	masked_##h##interrupt
 
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 82c4e3572aa9..6022aa6d1dd4 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -29,10 +29,10 @@
 #define PACA_IRQ_HMI		0x20
 
 /*
- * flags for paca->soft_enabled
+ * flags for paca->irq_soft_mask
  */
-#define IRQ_DISABLE_MASK_NONE	0x00
-#define IRQ_DISABLE_MASK	0x01
+#define IRQ_SOFT_MASK_NONE	0x00
+#define IRQ_SOFT_MASK_STD	0x01
 
 #endif /* CONFIG_PPC64 */
 
@@ -49,14 +49,14 @@ extern void unknown_exception(struct pt_regs *regs);
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
 
-static inline notrace unsigned long soft_enabled_return(void)
+static inline notrace unsigned long irq_soft_mask_return(void)
 {
 	unsigned long flags;
 
 	asm volatile(
 		"lbz %0,%1(13)"
 		: "=r" (flags)
-		: "i" (offsetof(struct paca_struct, soft_enabled)));
+		: "i" (offsetof(struct paca_struct, irq_soft_mask)));
 
 	return flags;
 }
@@ -64,42 +64,48 @@ static inline notrace unsigned long soft_enabled_return(void)
 /*
  * The "memory" clobber acts as both a compiler barrier
  * for the critical section and as a clobber because
- * we changed paca->soft_enabled
+ * we changed paca->irq_soft_mask
  */
-static inline notrace void soft_enabled_set(unsigned long enable)
+static inline notrace void irq_soft_mask_set(unsigned long mask)
 {
 #ifdef CONFIG_TRACE_IRQFLAGS
 	/*
-	 * mask must always include LINUX bit if any are set, and
-	 * interrupts don't get replayed until the Linux interrupt is
-	 * unmasked. This could be changed to replay partial unmasks
-	 * in future, which would allow Linux masks to nest inside
-	 * other masks, among other things. For now, be very dumb and
-	 * simple.
+	 * The irq mask must always include the STD bit if any are set.
+	 *
+	 * and interrupts don't get replayed until the standard
+	 * interrupt (local_irq_disable()) is unmasked.
+	 *
+	 * Other masks must only provide additional masking beyond
+	 * the standard, and they are also not replayed until the
+	 * standard interrupt becomes unmasked.
+	 *
+	 * This could be changed, but it will require partial
+	 * unmasks to be replayed, among other things. For now, take
+	 * the simple approach.
 	 */
-	WARN_ON(mask && !(mask & IRQ_DISABLE_MASK));
+	WARN_ON(mask && !(mask & IRQ_SOFT_MASK_STD));
 #endif
 
 	asm volatile(
 		"stb %0,%1(13)"
 		:
-		: "r" (enable),
-		  "i" (offsetof(struct paca_struct, soft_enabled))
+		: "r" (mask),
+		  "i" (offsetof(struct paca_struct, irq_soft_mask))
 		: "memory");
 }
 
-static inline notrace unsigned long soft_enabled_set_return(unsigned long mask)
+static inline notrace unsigned long irq_soft_mask_set_return(unsigned long mask)
 {
 	unsigned long flags;
 
 #ifdef CONFIG_TRACE_IRQFLAGS
-	WARN_ON(mask && !(mask & IRQ_DISABLE_MASK));
+	WARN_ON(mask && !(mask & IRQ_SOFT_MASK_STD));
 #endif
 
 	asm volatile(
 		"lbz %0,%1(13); stb %2,%1(13)"
 		: "=&r" (flags)
-		: "i" (offsetof(struct paca_struct, soft_enabled)),
+		: "i" (offsetof(struct paca_struct, irq_soft_mask)),
 		  "r" (mask)
 		: "memory");
 
@@ -108,29 +114,29 @@ static inline notrace unsigned long soft_enabled_set_return(unsigned long mask)
 
 static inline unsigned long arch_local_save_flags(void)
 {
-	return soft_enabled_return();
+	return irq_soft_mask_return();
 }
 
 static inline void arch_local_irq_disable(void)
 {
-	soft_enabled_set(IRQ_DISABLED);
+	irq_soft_mask_set(IRQ_SOFT_MASK_STD);
 }
 
 extern void arch_local_irq_restore(unsigned long);
 
 static inline void arch_local_irq_enable(void)
 {
-	arch_local_irq_restore(IRQ_DISABLE_MASK_NONE);
+	arch_local_irq_restore(IRQ_SOFT_MASK_NONE);
 }
 
 static inline unsigned long arch_local_irq_save(void)
 {
-	return soft_enabled_set_return(IRQ_DISABLE_MASK);
+	return irq_soft_mask_set_return(IRQ_SOFT_MASK_STD);
 }
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
 {
-	return flags & IRQ_DISABLE_MASK;
+	return flags & IRQ_SOFT_MASK_STD;
 }
 
 static inline bool arch_irqs_disabled(void)
@@ -146,13 +152,13 @@ static inline bool arch_irqs_disabled(void)
 #define __hard_irq_disable()	__mtmsrd(local_paca->kernel_msr, 1)
 #endif
 
-#define hard_irq_disable()	do {			\
-	unsigned long flags;				\
-	__hard_irq_disable();				\
-	flags = soft_enabled_set_return(IRQ_DISABLE_MASK);\
-	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;	\
-	if (!arch_irqs_disabled_flags(flags))		\
-		trace_hardirqs_off();			\
+#define hard_irq_disable()	do {				\
+	unsigned long flags;					\
+	__hard_irq_disable();					\
+	flags = irq_soft_mask_set_return(IRQ_SOFT_MASK_STD);	\
+	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;		\
+	if (!arch_irqs_disabled_flags(flags))			\
+		trace_hardirqs_off();				\
 } while(0)
 
 static inline bool lazy_irq_pending(void)
@@ -174,7 +180,7 @@ static inline void may_hard_irq_enable(void)
 
 static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
 {
-	return (regs->softe & IRQ_DISABLE_MASK);
+	return (regs->softe & IRQ_SOFT_MASK_STD);
 }
 
 extern bool prep_irq_for_idle(void);
diff --git a/arch/powerpc/include/asm/irqflags.h b/arch/powerpc/include/asm/irqflags.h
index 0fd6ec7d8797..492b0a9fa352 100644
--- a/arch/powerpc/include/asm/irqflags.h
+++ b/arch/powerpc/include/asm/irqflags.h
@@ -47,14 +47,14 @@
  * be clobbered.
  */
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
-	lbz	__rA,PACASOFTIRQEN(r13);	\
+	lbz	__rA,PACAIRQSOFTMASK(r13);	\
 	lbz	__rB,PACAIRQHAPPENED(r13);	\
-	andi.	__rA,__rA,IRQ_DISABLE_MASK;\
-	li	__rA,IRQ_DISABLE_MASK;	\
+	andi.	__rA,__rA,IRQ_SOFT_MASK_STD;	\
+	li	__rA,IRQ_SOFT_MASK_STD;		\
 	ori	__rB,__rB,PACA_IRQ_HARD_DIS;	\
 	stb	__rB,PACAIRQHAPPENED(r13);	\
 	bne	44f;				\
-	stb	__rA,PACASOFTIRQEN(r13);	\
+	stb	__rA,PACAIRQSOFTMASK(r13);	\
 	TRACE_DISABLE_INTS;			\
 44:
 
@@ -64,9 +64,9 @@
 
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACAIRQHAPPENED(r13);	\
-	li	__rB,IRQ_DISABLE_MASK;	\
+	li	__rB,IRQ_SOFT_MASK_STD;		\
 	ori	__rA,__rA,PACA_IRQ_HARD_DIS;	\
-	stb	__rB,PACASOFTIRQEN(r13);	\
+	stb	__rB,PACAIRQSOFTMASK(r13);	\
 	stb	__rA,PACAIRQHAPPENED(r13)
 #endif
 #endif
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 09992b9d9401..08053e596753 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -873,7 +873,7 @@ static inline void kvmppc_fix_ee_before_entry(void)
 
 	/* Only need to enable IRQs by hard enabling them after this */
 	local_paca->irq_happened = 0;
-	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
+	irq_soft_mask_set(IRQ_SOFT_MASK_NONE);
 #endif
 }
 
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 3892db93b837..e2ee193eb24d 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -159,7 +159,7 @@ struct paca_struct {
 	u64 saved_r1;			/* r1 save for RTAS calls or PM */
 	u64 saved_msr;			/* MSR saved here by enter_rtas */
 	u16 trap_save;			/* Used when bad stack is encountered */
-	u8 soft_enabled;		/* irq soft-enable flag */
+	u8 irq_soft_mask;		/* mask for irq soft masking */
 	u8 irq_happened;		/* irq happened while soft-disabled */
 	u8 io_sync;			/* writel() needs spin_unlock sync */
 	u8 irq_work_pending;		/* IRQ_WORK interrupt while soft-disable */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 6b958414b4e0..397681f43eed 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -178,7 +178,7 @@ int main(void)
 	OFFSET(PACATOC, paca_struct, kernel_toc);
 	OFFSET(PACAKBASE, paca_struct, kernelbase);
 	OFFSET(PACAKMSR, paca_struct, kernel_msr);
-	OFFSET(PACASOFTIRQEN, paca_struct, soft_enabled);
+	OFFSET(PACAIRQSOFTMASK, paca_struct, irq_soft_mask);
 	OFFSET(PACAIRQHAPPENED, paca_struct, irq_happened);
 #ifdef CONFIG_PPC_BOOK3S
 	OFFSET(PACACONTEXTID, paca_struct, mm_ctx_id);
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 62d615328f57..25951224b383 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -129,8 +129,8 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	 * is correct
 	 */
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
-	lbz	r10,PACASOFTIRQEN(r13)
-1:	tdnei	r10,IRQ_DISABLE_MASK_NONE
+	lbz	r10,PACAIRQSOFTMASK(r13)
+1:	tdnei	r10,IRQ_SOFT_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 
@@ -146,7 +146,7 @@ system_call:			/* label this so stack traces look sane */
 	/* We do need to set SOFTE in the stack frame or the return
 	 * from interrupt will be painful
 	 */
-	li	r10,IRQ_DISABLE_MASK_NONE
+	li	r10,IRQ_SOFT_MASK_NONE
 	std	r10,SOFTE(r1)
 
 	CURRENT_THREAD_INFO(r11, r1)
@@ -743,7 +743,7 @@ resume_kernel:
 	cmpwi	cr0,r8,0
 	bne	restore
 	ld	r0,SOFTE(r1)
-	andi.	r0,r0,IRQ_DISABLE_MASK
+	andi.	r0,r0,IRQ_SOFT_MASK_STD
 	bne	restore
 
 	/*
@@ -781,12 +781,12 @@ restore:
 	 * are about to re-enable interrupts
 	 */
 	ld	r5,SOFTE(r1)
-	lbz	r6,PACASOFTIRQEN(r13)
-	andi.	r5,r5,IRQ_DISABLE_MASK
+	lbz	r6,PACAIRQSOFTMASK(r13)
+	andi.	r5,r5,IRQ_SOFT_MASK_STD
 	bne	.Lrestore_irq_off
 
 	/* We are enabling, were we already enabled ? Yes, just return */
-	andi.	r6,r6,IRQ_DISABLE_MASK
+	andi.	r6,r6,IRQ_SOFT_MASK_STD
 	beq	cr0,.Ldo_restore
 
 	/*
@@ -805,8 +805,8 @@ restore:
 	 */
 .Lrestore_no_replay:
 	TRACE_ENABLE_INTS
-	li	r0,IRQ_DISABLE_MASK_NONE
-	stb	r0,PACASOFTIRQEN(r13);
+	li	r0,IRQ_SOFT_MASK_NONE
+	stb	r0,PACAIRQSOFTMASK(r13);
 
 	/*
 	 * Final return path. BookE is handled in a different file
@@ -913,8 +913,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 1:
 #if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
 	/* The interrupt should not have soft enabled. */
-	lbz	r7,PACASOFTIRQEN(r13)
-	tdeqi	r7,IRQ_DISABLE_MASK_NONE
+	lbz	r7,PACAIRQSOFTMASK(r13)
+	tdeqi	r7,IRQ_SOFT_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 	b	.Ldo_restore
@@ -1034,8 +1034,8 @@ _GLOBAL(enter_rtas)
 	/* There is no way it is acceptable to get here with interrupts enabled,
 	 * check it with the asm equivalent of WARN_ON
 	 */
-	lbz	r0,PACASOFTIRQEN(r13)
-1:	tdeqi	r0,IRQ_DISABLE_MASK_NONE
+	lbz	r0,PACAIRQSOFTMASK(r13)
+1:	tdeqi	r0,IRQ_SOFT_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
 #endif
 
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 15dcc89dcbcc..7fdf4da0059e 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -139,7 +139,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	mfspr	r10,SPRN_ESR
 	SPECIAL_EXC_STORE(r10,ESR)
 
-	lbz	r10,PACASOFTIRQEN(r13)
+	lbz	r10,PACAIRQSOFTMASK(r13)
 	SPECIAL_EXC_STORE(r10,SOFTE)
 	ld	r10,_NIP(r1)
 	SPECIAL_EXC_STORE(r10,CSRR0)
@@ -206,17 +206,17 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_MAS8,r10
 END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 
-	lbz	r6,PACASOFTIRQEN(r13)
+	lbz	r6,PACAIRQSOFTMASK(r13)
 	ld	r5,SOFTE(r1)
 
 	/* Interrupts had better not already be enabled... */
-	tweqi	r6,IRQ_DISABLE_MASK_NONE
+	tweqi	r6,IRQ_SOFT_MASK_NONE
 
-	andi.	r6,r5,IRQ_DISABLE_MASK
+	andi.	r6,r5,IRQ_SOFT_MASK_STD
 	bne	1f
 
 	TRACE_ENABLE_INTS
-	stb	r5,PACASOFTIRQEN(r13)
+	stb	r5,PACAIRQSOFTMASK(r13)
 1:
 	/*
 	 * Restore PACAIRQHAPPENED rather than setting it based on
@@ -351,8 +351,8 @@ ret_from_mc_except:
 #define PROLOG_ADDITION_NONE_MC(n)
 
 #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
-	lbz	r10,PACASOFTIRQEN(r13); /* are irqs soft-disabled ? */	    \
-	andi.	r10,r10,IRQ_DISABLE_MASK;	/* yes -> go out of line */ \
+	lbz	r10,PACAIRQSOFTMASK(r13);	/* are irqs soft-masked? */ \
+	andi.	r10,r10,IRQ_SOFT_MASK_STD;	/* yes -> go out of line */ \
 	bne	masked_interrupt_book3e_##n
 
 #define PROLOG_ADDITION_2REGS_GEN(n)					    \
@@ -397,7 +397,7 @@ exc_##n##_common:							    \
 	mfspr	r8,SPRN_XER;		/* save XER in stackframe */	    \
 	ld	r9,excf+EX_R1(r13);	/* load orig r1 back from PACA */   \
 	lwz	r10,excf+EX_CR(r13);	/* load orig CR back from PACA	*/  \
-	lbz	r11,PACASOFTIRQEN(r13);	/* get current IRQ softe */	    \
+	lbz	r11,PACAIRQSOFTMASK(r13); /* get current IRQ softe */	    \
 	ld	r12,exception_marker@toc(r2);				    \
 	li	r0,0;							    \
 	std	r3,GPR10(r1);		/* save r10 to stackframe */	    \
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index d43286208ee2..56f9e112b98d 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -765,8 +765,8 @@ _GLOBAL(pmac_secondary_start)
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,IRQ_DISABLE_MASK
-	stb	r0,PACASOFTIRQEN(r13)
+	li	r0,IRQ_SOFT_MASK_STD
+	stb	r0,PACAIRQSOFTMASK(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
 
@@ -822,8 +822,8 @@ __secondary_start:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r7,IRQ_DISABLE_MASK
-	stb	r7,PACASOFTIRQEN(r13)
+	li	r7,IRQ_SOFT_MASK_STD
+	stb	r7,PACAIRQSOFTMASK(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
 
@@ -989,8 +989,8 @@ start_here_common:
 	/* Mark interrupts soft and hard disabled (they might be enabled
 	 * in the PACA when doing hotplug)
 	 */
-	li	r0,IRQ_DISABLE_MASK
-	stb	r0,PACASOFTIRQEN(r13)
+	li	r0,IRQ_SOFT_MASK_STD
+	stb	r0,PACAIRQSOFTMASK(r13)
 	li	r0,PACA_IRQ_HARD_DIS
 	stb	r0,PACAIRQHAPPENED(r13)
 
diff --git a/arch/powerpc/kernel/idle_book3e.S b/arch/powerpc/kernel/idle_book3e.S
index a459c306b04e..5466207c1027 100644
--- a/arch/powerpc/kernel/idle_book3e.S
+++ b/arch/powerpc/kernel/idle_book3e.S
@@ -47,8 +47,8 @@ _GLOBAL(\name)
 	bl	trace_hardirqs_on
 	addi    r1,r1,128
 #endif
-	li	r0,IRQ_DISABLE_MASK_NONE
-	stb	r0,PACASOFTIRQEN(r13)
+	li	r0,IRQ_SOFT_MASK_NONE
+	stb	r0,PACAIRQSOFTMASK(r13)
 	
 	/* Interrupts will make use return to LR, so get something we want
 	 * in there
diff --git a/arch/powerpc/kernel/idle_power4.S b/arch/powerpc/kernel/idle_power4.S
index 785e10619d8d..fc6e8aadc9c6 100644
--- a/arch/powerpc/kernel/idle_power4.S
+++ b/arch/powerpc/kernel/idle_power4.S
@@ -54,8 +54,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_CAN_NAP)
 	mfmsr	r7
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	li	r0,IRQ_DISABLE_MASK_NONE
-	stb	r0,PACASOFTIRQEN(r13)	/* we'll hard-enable shortly */
+	li	r0,IRQ_SOFT_MASK_NONE
+	stb	r0,PACAIRQSOFTMASK(r13)	/* we'll hard-enable shortly */
 BEGIN_FTR_SECTION
 	DSSALL
 	sync
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index f3b580d18f0d..4b21b502c148 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -225,22 +225,9 @@ notrace void arch_local_irq_restore(unsigned long mask)
 	unsigned int replay;
 
 	/* Write the new soft-enabled value */
-	soft_enabled_set(mask);
-	if (mask) {
-#ifdef CONFIG_TRACE_IRQFLAGS
-		/*
-		 * mask must always include LINUX bit if any
-		 * are set, and interrupts don't get replayed until
-		 * the Linux interrupt is unmasked. This could be
-		 * changed to replay partial unmasks in future,
-		 * which would allow Linux masks to nest inside
-		 * other masks, among other things. For now, be very
-		 * dumb and simple.
-		 */
-		WARN_ON(!(mask & IRQ_DISABLE_MASK));
-#endif
+	irq_soft_mask_set(mask);
+	if (mask)
 		return;
-	}
 
 	/*
 	 * From this point onward, we can take interrupts, preempt,
@@ -285,7 +272,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	soft_enabled_set(IRQ_DISABLE_MASK);
+	irq_soft_mask_set(IRQ_SOFT_MASK_STD);
 	trace_hardirqs_off();
 
 	/*
@@ -297,7 +284,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
 
 	/* We can soft-enable now */
 	trace_hardirqs_on();
-	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
+	irq_soft_mask_set(IRQ_SOFT_MASK_NONE);
 
 	/*
 	 * And replay if we have to. This will return with interrupts
@@ -372,7 +359,7 @@ bool prep_irq_for_idle(void)
 	 * of entering the low power state.
 	 */
 	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-	soft_enabled_set(IRQ_DISABLE_MASK_NONE);
+	irq_soft_mask_set(IRQ_SOFT_MASK_NONE);
 
 	/* Tell the caller to enter the low power state */
 	return true;
diff --git a/arch/powerpc/kernel/optprobes_head.S b/arch/powerpc/kernel/optprobes_head.S
index 52fc864cdec4..98a3aeeb3c8c 100644
--- a/arch/powerpc/kernel/optprobes_head.S
+++ b/arch/powerpc/kernel/optprobes_head.S
@@ -58,7 +58,7 @@ optprobe_template_entry:
 	std	r5,_XER(r1)
 	mfcr	r5
 	std	r5,_CCR(r1)
-	lbz     r5,PACASOFTIRQEN(r13)
+	lbz     r5,PACAIRQSOFTMASK(r13)
 	std     r5,SOFTE(r1)
 
 	/*
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 30fe22639dd9..2a63cc78257d 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1675,7 +1675,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 			childregs->gpr[14] = ppc_function_entry((void *)usp);
 #ifdef CONFIG_PPC64
 		clear_tsk_thread_flag(p, TIF_32BIT);
-		childregs->softe = IRQ_DISABLE_MASK_NONE;
+		childregs->softe = IRQ_SOFT_MASK_NONE;
 #endif
 		childregs->gpr[15] = kthread_arg;
 		p->thread.regs = NULL;	/* no user register state */
diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index bd2c49475473..aef08e579946 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -285,7 +285,7 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
 
 #ifdef CONFIG_PPC64
 	/*
-	 * softe copies paca->soft_enabled variable state. Since soft_enabled is
+	 * softe copies paca->irq_soft_mask variable state. Since irq_soft_mask is
 	 * no longer used as a flag, let's force userspace to always see softe as 1
 	 * which means interrupts are not soft disabled.
 	 */
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 0931f626fdc4..a4408a7e6f14 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -191,7 +191,7 @@ static void __init fixup_boot_paca(void)
 	/* Allow percpu accesses to work until we setup percpu data */
 	get_paca()->data_offset = 0;
 	/* Mark interrupts disabled in PACA */
-	soft_enabled_set(IRQ_DISABLE_MASK);
+	irq_soft_mask_set(IRQ_SOFT_MASK_STD);
 }
 
 static void __init configure_exceptions(void)
@@ -354,7 +354,7 @@ void __init early_setup(unsigned long dt_ptr)
 void early_setup_secondary(void)
 {
 	/* Mark interrupts disabled in PACA */
-	soft_enabled_set(IRQ_DISABLE_MASK);
+	irq_soft_mask_set(IRQ_SOFT_MASK_STD);
 
 	/* Initialize the hash table or TLB handling */
 	early_init_mmu_secondary();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index e0d83df2b5e1..80a5594d5953 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -244,7 +244,7 @@ static u64 scan_dispatch_log(u64 stop_tb)
 void accumulate_stolen_time(void)
 {
 	u64 sst, ust;
-	unsigned long save_soft_enabled = soft_enabled_return();
+	unsigned long save_irq_soft_mask = irq_soft_mask_return();
 	struct cpu_accounting_data *acct = &local_paca->accounting;
 
 	/* We are called early in the exception entry, before
@@ -253,7 +253,7 @@ void accumulate_stolen_time(void)
 	 * needs to reflect that so various debug stuff doesn't
 	 * complain
 	 */
-	soft_enabled_set(IRQ_DISABLE_MASK);
+	irq_soft_mask_set(IRQ_SOFT_MASK_STD);
 
 	sst = scan_dispatch_log(acct->starttime_user);
 	ust = scan_dispatch_log(acct->starttime);
@@ -261,7 +261,7 @@ void accumulate_stolen_time(void)
 	acct->utime -= ust;
 	acct->steal_time += ust + sst;
 
-	soft_enabled_set(save_soft_enabled);
+	irq_soft_mask_set(save_irq_soft_mask);
 }
 
 static inline u64 calculate_stolen_time(u64 stop_tb)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 2659844784b8..a92ad8500917 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -3249,7 +3249,7 @@ kvmppc_bad_host_intr:
 	mfctr	r4
 #endif
 	mfxer	r5
-	lbz	r6, PACASOFTIRQEN(r13)
+	lbz	r6, PACAIRQSOFTMASK(r13)
 	std	r3, _LINK(r1)
 	std	r4, _CTR(r1)
 	std	r5, _XER(r1)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index f445e6037687..4bae16244138 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -752,7 +752,7 @@ void flush_dcache_icache_hugepage(struct page *page)
  * So long as we atomically load page table pointers we are safe against teardown,
 * we can follow the address down to the page and take a ref on it.
 * This function needs to be called with interrupts disabled. We use this variant
- * when we have MSR[EE] = 0 but the paca->soft_enabled = IRQ_DISABLE_MASK_NONE
+ * when we have MSR[EE] = 0 but the paca->irq_soft_mask = IRQ_SOFT_MASK_NONE
  */
 pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
 			bool *is_thp, unsigned *hpage_shift)
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 9f0dbbc50d5e..67153fa4cbd5 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -322,7 +322,7 @@ static inline void perf_read_regs(struct pt_regs *regs)
  */
 static inline int perf_intr_is_nmi(struct pt_regs *regs)
 {
-	return (regs->softe & IRQ_DISABLE_MASK);
+	return (regs->softe & IRQ_SOFT_MASK_STD);
 }
 
 /*
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index cab24f549e7c..a53454f61d09 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1623,7 +1623,7 @@ static void excprint(struct pt_regs *fp)
 	printf("  current = 0x%lx\n", current);
 #ifdef CONFIG_PPC64
 	printf("  paca    = 0x%lx\t softe: %d\t irq_happened: 0x%02x\n",
-	       local_paca, local_paca->soft_enabled, local_paca->irq_happened);
+	       local_paca, local_paca->irq_soft_mask, local_paca->irq_happened);
 #endif
 	if (current) {
 		printf("    pid   = %ld, comm = %s\n",
@@ -2391,7 +2391,7 @@ static void dump_one_paca(int cpu)
 	DUMP(p, stab_rr, "lx");
 	DUMP(p, saved_r1, "lx");
 	DUMP(p, trap_save, "x");
-	DUMP(p, soft_enabled, "x");
+	DUMP(p, irq_soft_mask, "x");
 	DUMP(p, irq_happened, "x");
 	DUMP(p, io_sync, "x");
 	DUMP(p, irq_work_pending, "x");
-- 
2.7.4


* [PATCH v10 11/17] powerpc/64s: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_*
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (9 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 10/17] powerpc/64: Rename soft_enabled to irq_soft_mask Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  3:55 ` [PATCH v10 12/17] powerpc/64s: Add support to take additional parameter in MASKABLE_* macro Madhavan Srinivasan
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Currently we use both EXCEPTION_PROLOG_1 and __EXCEPTION_PROLOG_1 in
the MASKABLE_* macros. As a cleanup, this patch makes the MASKABLE_*
macros use only __EXCEPTION_PROLOG_1. There is no logic change.

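The underscore-prefixed variant matters because it gives macro
arguments one extra round of expansion before any token pasting. A
hypothetical illustration of the idiom (names invented, not kernel
code):

	#define __HANDLER(vec)	handler_##vec	/* pastes the argument as written */
	#define HANDLER(vec)	__HANDLER(vec)	/* expands the argument first */

	#define MY_VEC 0x500
	/* __HANDLER(MY_VEC) expands to handler_MY_VEC */
	/* HANDLER(MY_VEC)   expands to handler_0x500  */
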
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index f9a9269df62e..b6dc05c944c5 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -536,7 +536,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 				    EXC_STD, SOFTEN_TEST_PR)
 
 #define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label)			\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
 
 #define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
@@ -544,7 +544,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 				    EXC_HV, SOFTEN_TEST_HV)
 
 #define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
@@ -565,7 +565,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 					  EXC_HV, SOFTEN_TEST_HV)
 
 #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
 	EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV)
 
 /*
-- 
2.7.4


* [PATCH v10 12/17] powerpc/64s: Add support to take additional parameter in MASKABLE_* macro
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (10 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 11/17] powerpc/64s: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  3:55 ` [PATCH v10 13/17] powerpc/64s: Add support to mask perf interrupts and replay them Madhavan Srinivasan
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

To support the addition of a "bitmask" to the MASKABLE_* macros,
factor out the EXCEPTION_PROLOG_1 macro.

Make explicit the interrupt masking level supported by a given
interrupt handler. The patch correspondingly extends the MASKABLE_*
macros with an additional parameter; the "bitmask" parameter is
passed to the SOFTEN_TEST macro to decide whether to mask the
interrupt.

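A C-level sketch (illustrative only) of the decision the extended
__SOFTEN_TEST makes with its new parameter:

	#include <stdbool.h>

	/*
	 * Take the out-of-line masked path only when one of the handler's
	 * bitmask bits is currently set in paca->irq_soft_mask. Mirrors:
	 *   andi.	r10,r10,bitmask
	 *   bne	masked_##h##interrupt
	 */
	static bool take_masked_path(unsigned char irq_soft_mask,
				     unsigned char bitmask)
	{
		return (irq_soft_mask & bitmask) != 0;
	}
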
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h | 92 ++++++++++++++++++++------------
 arch/powerpc/include/asm/head-64.h       | 40 +++++++-------
 arch/powerpc/kernel/exceptions-64s.S     | 32 ++++++-----
 3 files changed, 96 insertions(+), 68 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index b6dc05c944c5..d005a1c19b68 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -198,18 +198,40 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	std	r10,area+EX_R10(r13);	/* save r10 - r12 */		\
 	OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
 
-#define __EXCEPTION_PROLOG_1(area, extra, vec)				\
+#define __EXCEPTION_PROLOG_1_PRE(area)					\
 	OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR);		\
 	OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR);		\
 	SAVE_CTR(r10, area);						\
-	mfcr	r9;							\
-	extra(vec);							\
+	mfcr	r9;
+
+#define __EXCEPTION_PROLOG_1_POST(area)					\
 	std	r11,area+EX_R11(r13);					\
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
+
+/*
+ * This version of the EXCEPTION_PROLOG_1 carries an
+ * additional parameter called "bitmask" to support
+ * checking of the interrupt masking level in SOFTEN_TEST.
+ * Intended to be used in MASKABLE_EXCEPTION_* macros.
+ */
+#define MASKABLE_EXCEPTION_PROLOG_1(area, extra, vec, bitmask)			\
+	__EXCEPTION_PROLOG_1_PRE(area);					\
+	extra(vec, bitmask);						\
+	__EXCEPTION_PROLOG_1_POST(area);
+
+/*
+ * This version of the EXCEPTION_PROLOG_1 is intended
+ * to be used in STD_EXCEPTION* macros
+ */
+#define _EXCEPTION_PROLOG_1(area, extra, vec)				\
+	__EXCEPTION_PROLOG_1_PRE(area);					\
+	extra(vec);							\
+	__EXCEPTION_PROLOG_1_POST(area);
+
 #define EXCEPTION_PROLOG_1(area, extra, vec)				\
-	__EXCEPTION_PROLOG_1(area, extra, vec)
+	_EXCEPTION_PROLOG_1(area, extra, vec)
 
 #define __EXCEPTION_PROLOG_PSERIES_1(label, h)				\
 	ld	r10,PACAKMSR(r13);	/* get MSR value for kernel */	\
@@ -497,21 +519,21 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xea0	PACA_IRQ_EE
 
-#define __SOFTEN_TEST(h, vec)						\
+#define __SOFTEN_TEST(h, vec, bitmask)					\
 	lbz	r10,PACAIRQSOFTMASK(r13);				\
-	andi.	r10,r10,IRQ_SOFT_MASK_STD;				\
+	andi.	r10,r10,bitmask;					\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	bne	masked_##h##interrupt
 
-#define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
+#define _SOFTEN_TEST(h, vec, bitmask)	__SOFTEN_TEST(h, vec, bitmask)
 
-#define SOFTEN_TEST_PR(vec)						\
+#define SOFTEN_TEST_PR(vec, bitmask)					\
 	KVMTEST(EXC_STD, vec);						\
-	_SOFTEN_TEST(EXC_STD, vec)
+	_SOFTEN_TEST(EXC_STD, vec, bitmask)
 
-#define SOFTEN_TEST_HV(vec)						\
+#define SOFTEN_TEST_HV(vec, bitmask)					\
 	KVMTEST(EXC_HV, vec);						\
-	_SOFTEN_TEST(EXC_HV, vec)
+	_SOFTEN_TEST(EXC_HV, vec, bitmask)
 
 #define KVMTEST_PR(vec)							\
 	KVMTEST(EXC_STD, vec)
@@ -519,53 +541,53 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define KVMTEST_HV(vec)							\
 	KVMTEST(EXC_HV, vec)
 
-#define SOFTEN_NOTEST_PR(vec)		_SOFTEN_TEST(EXC_STD, vec)
-#define SOFTEN_NOTEST_HV(vec)		_SOFTEN_TEST(EXC_HV, vec)
+#define SOFTEN_NOTEST_PR(vec, bitmask)	_SOFTEN_TEST(EXC_STD, vec, bitmask)
+#define SOFTEN_NOTEST_HV(vec, bitmask)	_SOFTEN_TEST(EXC_HV, vec, bitmask)
 
-#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
+#define __MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)	\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
+	MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask);	\
 	EXCEPTION_PROLOG_PSERIES_1(label, h);
 
-#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
-	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
+#define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)	\
+	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)
 
-#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
+#define MASKABLE_EXCEPTION_PSERIES(loc, vec, label, bitmask)		\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
-				    EXC_STD, SOFTEN_TEST_PR)
+				    EXC_STD, SOFTEN_TEST_PR, bitmask)
 
-#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label)			\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label, bitmask)		\
+	MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec, bitmask);\
 	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
 
-#define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
+#define MASKABLE_EXCEPTION_HV(loc, vec, label, bitmask)			\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
-				    EXC_HV, SOFTEN_TEST_HV)
+				    EXC_HV, SOFTEN_TEST_HV, bitmask)
 
-#define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+#define MASKABLE_EXCEPTION_HV_OOL(vec, label, bitmask)			\
+	MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec, bitmask);\
 	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
-#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
+#define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask) \
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
+	MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask);	\
 	EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)
 
-#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)		\
-	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)
+#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)\
+	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra, bitmask)
 
-#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label)		\
+#define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label, bitmask)	\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
-					  EXC_STD, SOFTEN_NOTEST_PR)
+					  EXC_STD, SOFTEN_NOTEST_PR, bitmask)
 
-#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label)			\
+#define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, bitmask)		\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
-					  EXC_HV, SOFTEN_TEST_HV)
+					  EXC_HV, SOFTEN_TEST_HV, bitmask)
 
-#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
+#define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label, bitmask)		\
+	MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec, bitmask);\
 	EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV)
 
 /*
diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
index fdcff76e9a25..2bbf9b6877dc 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -269,14 +269,14 @@ end_##sname:
 	STD_RELON_EXCEPTION_PSERIES(start, realvec, name##_common);	\
 	EXC_VIRT_END(name, start, size);
 
-#define EXC_REAL_MASKABLE(name, start, size)				\
+#define EXC_REAL_MASKABLE(name, start, size, bitmask)			\
 	EXC_REAL_BEGIN(name, start, size);				\
-	MASKABLE_EXCEPTION_PSERIES(start, start, name##_common);	\
+	MASKABLE_EXCEPTION_PSERIES(start, start, name##_common, bitmask);\
 	EXC_REAL_END(name, start, size);
 
-#define EXC_VIRT_MASKABLE(name, start, size, realvec)			\
+#define EXC_VIRT_MASKABLE(name, start, size, realvec, bitmask)		\
 	EXC_VIRT_BEGIN(name, start, size);				\
-	MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common); \
+	MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common, bitmask);\
 	EXC_VIRT_END(name, start, size);
 
 #define EXC_REAL_HV(name, start, size)					\
@@ -305,13 +305,13 @@ end_##sname:
 #define __EXC_REAL_OOL_MASKABLE(name, start, size)			\
 	__EXC_REAL_OOL(name, start, size);
 
-#define __TRAMP_REAL_OOL_MASKABLE(name, vec)				\
+#define __TRAMP_REAL_OOL_MASKABLE(name, vec, bitmask)			\
 	TRAMP_REAL_BEGIN(tramp_real_##name);				\
-	MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common);		\
+	MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common, bitmask);	\
 
-#define EXC_REAL_OOL_MASKABLE(name, start, size)			\
+#define EXC_REAL_OOL_MASKABLE(name, start, size, bitmask)		\
 	__EXC_REAL_OOL_MASKABLE(name, start, size);			\
-	__TRAMP_REAL_OOL_MASKABLE(name, start);
+	__TRAMP_REAL_OOL_MASKABLE(name, start, bitmask);
 
 #define __EXC_REAL_OOL_HV_DIRECT(name, start, size, handler)		\
 	EXC_REAL_BEGIN(name, start, size);				\
@@ -332,13 +332,13 @@ end_##sname:
 #define __EXC_REAL_OOL_MASKABLE_HV(name, start, size)			\
 	__EXC_REAL_OOL(name, start, size);
 
-#define __TRAMP_REAL_OOL_MASKABLE_HV(name, vec)				\
+#define __TRAMP_REAL_OOL_MASKABLE_HV(name, vec, bitmask)		\
 	TRAMP_REAL_BEGIN(tramp_real_##name);				\
-	MASKABLE_EXCEPTION_HV_OOL(vec, name##_common);			\
+	MASKABLE_EXCEPTION_HV_OOL(vec, name##_common, bitmask);		\
 
-#define EXC_REAL_OOL_MASKABLE_HV(name, start, size)			\
+#define EXC_REAL_OOL_MASKABLE_HV(name, start, size, bitmask)		\
 	__EXC_REAL_OOL_MASKABLE_HV(name, start, size);			\
-	__TRAMP_REAL_OOL_MASKABLE_HV(name, start);
+	__TRAMP_REAL_OOL_MASKABLE_HV(name, start, bitmask);
 
 #define __EXC_VIRT_OOL(name, start, size)				\
 	EXC_VIRT_BEGIN(name, start, size);				\
@@ -356,13 +356,13 @@ end_##sname:
 #define __EXC_VIRT_OOL_MASKABLE(name, start, size)			\
 	__EXC_VIRT_OOL(name, start, size);
 
-#define __TRAMP_VIRT_OOL_MASKABLE(name, realvec)			\
+#define __TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask)		\
 	TRAMP_VIRT_BEGIN(tramp_virt_##name);				\
-	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
+	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common, bitmask);\
 
-#define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec)		\
+#define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec, bitmask)	\
 	__EXC_VIRT_OOL_MASKABLE(name, start, size);			\
-	__TRAMP_VIRT_OOL_MASKABLE(name, realvec);
+	__TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask);
 
 #define __EXC_VIRT_OOL_HV(name, start, size)				\
 	__EXC_VIRT_OOL(name, start, size);
@@ -378,13 +378,13 @@ end_##sname:
 #define __EXC_VIRT_OOL_MASKABLE_HV(name, start, size)			\
 	__EXC_VIRT_OOL(name, start, size);
 
-#define __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec)			\
+#define __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask)		\
 	TRAMP_VIRT_BEGIN(tramp_virt_##name);				\
-	MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common);	\
+	MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common, bitmask);\
 
-#define EXC_VIRT_OOL_MASKABLE_HV(name, start, size, realvec)		\
+#define EXC_VIRT_OOL_MASKABLE_HV(name, start, size, realvec, bitmask)	\
 	__EXC_VIRT_OOL_MASKABLE_HV(name, start, size);			\
-	__TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec);
+	__TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask);
 
 #define TRAMP_KVM(area, n)						\
 	TRAMP_KVM_BEGIN(do_kvm_##n);					\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index e441b469dc8f..52027e800bb6 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -691,10 +691,12 @@ EXC_REAL_BEGIN(hardware_interrupt, 0x500, 0x100)
 hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
-					    EXC_HV, SOFTEN_TEST_HV)
+					    EXC_HV, SOFTEN_TEST_HV,
+					    IRQ_SOFT_MASK_STD)
 	FTR_SECTION_ELSE
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
-					    EXC_STD, SOFTEN_TEST_PR)
+					    EXC_STD, SOFTEN_TEST_PR,
+					    IRQ_SOFT_MASK_STD)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 EXC_REAL_END(hardware_interrupt, 0x500, 0x100)
 
@@ -702,9 +704,13 @@ EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100)
 	.globl hardware_interrupt_relon_hv;
 hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_HV, SOFTEN_TEST_HV)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
+						  EXC_HV, SOFTEN_TEST_HV,
+						  IRQ_SOFT_MASK_STD)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
+						  EXC_STD, SOFTEN_TEST_PR,
+						  IRQ_SOFT_MASK_STD)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100)
 
@@ -800,8 +806,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 
 
-EXC_REAL_MASKABLE(decrementer, 0x900, 0x80)
-EXC_VIRT_MASKABLE(decrementer, 0x4900, 0x80, 0x900)
+EXC_REAL_MASKABLE(decrementer, 0x900, 0x80, IRQ_SOFT_MASK_STD)
+EXC_VIRT_MASKABLE(decrementer, 0x4900, 0x80, 0x900, IRQ_SOFT_MASK_STD)
 TRAMP_KVM(PACA_EXGEN, 0x900)
 EXC_COMMON_ASYNC(decrementer_common, 0x900, timer_interrupt)
 
@@ -812,8 +818,8 @@ TRAMP_KVM_HV(PACA_EXGEN, 0x980)
 EXC_COMMON(hdecrementer_common, 0x980, hdec_interrupt)
 
 
-EXC_REAL_MASKABLE(doorbell_super, 0xa00, 0x100)
-EXC_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x100, 0xa00)
+EXC_REAL_MASKABLE(doorbell_super, 0xa00, 0x100, IRQ_SOFT_MASK_STD)
+EXC_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x100, 0xa00, IRQ_SOFT_MASK_STD)
 TRAMP_KVM(PACA_EXGEN, 0xa00)
 #ifdef CONFIG_PPC_DOORBELL
 EXC_COMMON_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
@@ -1025,7 +1031,7 @@ EXC_COMMON(emulation_assist_common, 0xe40, emulation_assist_interrupt)
  * mode.
  */
 __EXC_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0x20, hmi_exception_early)
-__TRAMP_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
+__TRAMP_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60, IRQ_SOFT_MASK_STD)
 EXC_VIRT_NONE(0x4e60, 0x20)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
 TRAMP_REAL_BEGIN(hmi_exception_early)
@@ -1083,8 +1089,8 @@ EXC_COMMON_BEGIN(hmi_exception_common)
 EXCEPTION_COMMON(PACA_EXGEN, 0xe60, hmi_exception_common, handle_hmi_exception,
         ret_from_except, FINISH_NAP;ADD_NVGPRS;ADD_RECONCILE;RUNLATCH_ON)
 
-EXC_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0x20)
-EXC_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x20, 0xe80)
+EXC_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0x20, IRQ_SOFT_MASK_STD)
+EXC_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x20, 0xe80, IRQ_SOFT_MASK_STD)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
 #ifdef CONFIG_PPC_DOORBELL
 EXC_COMMON_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
@@ -1093,8 +1099,8 @@ EXC_COMMON_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
 
 
-EXC_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0x20)
-EXC_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x20, 0xea0)
+EXC_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0x20, IRQ_SOFT_MASK_STD)
+EXC_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x20, 0xea0, IRQ_SOFT_MASK_STD)
 TRAMP_KVM_HV(PACA_EXGEN, 0xea0)
 EXC_COMMON_ASYNC(h_virt_irq_common, 0xea0, do_IRQ)
 
-- 
2.7.4


* [PATCH v10 13/17] powerpc/64s: Add support to mask perf interrupts and replay them
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (11 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 12/17] powerpc/64s: Add support to take additional parameter in MASKABLE_* macro Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2018-01-15 13:23   ` Nicholas Piggin
  2017-12-20  3:55 ` [PATCH v10 14/17] powerpc: Add new kconfig IRQ_DEBUG_SUPPORT Madhavan Srinivasan
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

Two new bitmask fields are introduced: "IRQ_SOFT_MASK_PMU" to support
masking of PMIs, and "IRQ_SOFT_MASK_ALL" to aid interrupt-masking
checks.

A couple of new IRQ #defines, "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf00",
are added for use in the exception code to check for PMIs.

In the masked_interrupt handler, for PMIs we clear MSR[EE] and
return. In __check_irq_replay(), the PMI is replayed by calling the
performance_monitor_common handler.

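In rough C-like pseudocode (illustrative only, not the literal
handlers), the masked path and the later replay look like:

	/* masked_interrupt, PMI case: record it and stay hard-disabled */
	local_paca->irq_happened |= PACA_IRQ_PMI;	/* SOFTEN_VALUE_0xf00 */
	/* clear MSR[EE] in the return MSR, then return to the
	 * interrupted context without running the handler */

	/* __check_irq_replay(): hand back the PMI vector on unmask */
	if (happened & PACA_IRQ_PMI) {
		local_paca->irq_happened &= ~PACA_IRQ_PMI;
		return 0xf00;	/* routed to performance_monitor_common */
	}
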
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/exception-64s.h |  5 +++++
 arch/powerpc/include/asm/hw_irq.h        | 19 +++++++++++--------
 arch/powerpc/kernel/entry_64.S           |  5 +++++
 arch/powerpc/kernel/exceptions-64s.S     |  6 ++++--
 arch/powerpc/kernel/irq.c                |  7 ++++++-
 5 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index d005a1c19b68..54afd1f140a4 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -518,6 +518,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define SOFTEN_VALUE_0xe80	PACA_IRQ_DBELL
 #define SOFTEN_VALUE_0xe60	PACA_IRQ_HMI
 #define SOFTEN_VALUE_0xea0	PACA_IRQ_EE
+#define SOFTEN_VALUE_0xf00	PACA_IRQ_PMI
 
 #define __SOFTEN_TEST(h, vec, bitmask)					\
 	lbz	r10,PACAIRQSOFTMASK(r13);				\
@@ -582,6 +583,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_STD, SOFTEN_NOTEST_PR, bitmask)
 
+#define MASKABLE_RELON_EXCEPTION_PSERIES_OOL(vec, label, bitmask)	\
+	MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec, bitmask);\
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD);
+
 #define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label, bitmask)		\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_HV, SOFTEN_TEST_HV, bitmask)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 6022aa6d1dd4..6249ebc17608 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -27,12 +27,15 @@
 #define PACA_IRQ_DEC		0x08 /* Or FIT */
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
 #define PACA_IRQ_HMI		0x20
+#define PACA_IRQ_PMI		0x40
 
 /*
  * flags for paca->irq_soft_mask
  */
 #define IRQ_SOFT_MASK_NONE	0x00
-#define IRQ_SOFT_MASK_STD	0x01
+#define IRQ_SOFT_MASK_STD	0x01 /* local_irq_disable() interrupts */
+#define IRQ_SOFT_MASK_PMU	0x02
+#define IRQ_SOFT_MASK_ALL	0x03
 
 #endif /* CONFIG_PPC64 */
 
@@ -152,13 +155,13 @@ static inline bool arch_irqs_disabled(void)
 #define __hard_irq_disable()	__mtmsrd(local_paca->kernel_msr, 1)
 #endif
 
-#define hard_irq_disable()	do {				\
-	unsigned long flags;					\
-	__hard_irq_disable();					\
-	flags = irq_soft_mask_set_return(IRQ_SOFT_MASK_STD);	\
-	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;		\
-	if (!arch_irqs_disabled_flags(flags))			\
-		trace_hardirqs_off();				\
+#define hard_irq_disable()	do {					\
+	unsigned long flags;						\
+	__hard_irq_disable();						\
+	flags = irq_soft_mask_set_return(IRQ_SOFT_MASK_ALL);		\
+	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;			\
+	if (!arch_irqs_disabled_flags(flags))				\
+		trace_hardirqs_off();					\
 } while(0)
 
 static inline bool lazy_irq_pending(void)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 25951224b383..511a97b62075 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -954,6 +954,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	addi	r3,r1,STACK_FRAME_OVERHEAD;
  	bl	do_IRQ
 	b	ret_from_except
+1:	cmpwi	cr0,r3,0xf00
+	bne	1f
+	addi	r3,r1,STACK_FRAME_OVERHEAD;
+	bl	performance_monitor_exception
+	b	ret_from_except
 1:	cmpwi	cr0,r3,0xe60
 	bne	1f
 	addi	r3,r1,STACK_FRAME_OVERHEAD;
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 52027e800bb6..60d9c68ef414 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1111,8 +1111,8 @@ EXC_REAL_NONE(0xee0, 0x20)
 EXC_VIRT_NONE(0x4ee0, 0x20)
 
 
-EXC_REAL_OOL(performance_monitor, 0xf00, 0x20)
-EXC_VIRT_OOL(performance_monitor, 0x4f00, 0x20, 0xf00)
+EXC_REAL_OOL_MASKABLE(performance_monitor, 0xf00, 0x20, IRQ_SOFT_MASK_PMU)
+EXC_VIRT_OOL_MASKABLE(performance_monitor, 0x4f00, 0x20, 0xf00, IRQ_SOFT_MASK_PMU)
 TRAMP_KVM(PACA_EXGEN, 0xf00)
 EXC_COMMON_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
 
@@ -1723,6 +1723,8 @@ BEGIN_FTR_SECTION
 FTR_SECTION_ELSE
 	beq	hardware_interrupt_common
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_300)
+	cmpwi	r3,0xf00
+	beq	performance_monitor_common
 BEGIN_FTR_SECTION
 	cmpwi	r3,0xa00
 	beq	h_doorbell_common_msgclr
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 4b21b502c148..1cbf6a0caed7 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -186,6 +186,11 @@ notrace unsigned int __check_irq_replay(void)
 		return 0x900;
 	}
 
+	if (happened & PACA_IRQ_PMI) {
+		local_paca->irq_happened &= ~PACA_IRQ_PMI;
+		return 0xf00;
+	}
+
 	if (happened & PACA_IRQ_EE) {
 		local_paca->irq_happened &= ~PACA_IRQ_EE;
 		return 0x500;
@@ -272,7 +277,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
 	}
 #endif /* CONFIG_TRACE_IRQFLAGS */
 
-	irq_soft_mask_set(IRQ_SOFT_MASK_STD);
+	irq_soft_mask_set(IRQ_SOFT_MASK_ALL);
 	trace_hardirqs_off();
 
 	/*
-- 
2.7.4

* [PATCH v10 14/17] powerpc: Add new kconfig IRQ_DEBUG_SUPPORT
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (12 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 13/17] powerpc/64s: Add support to mask perf interrupts and replay them Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  3:55 ` [PATCH v10 15/17] powerpc/64s: Add new set of irq_soft_mask_ functions for PMI masking Madhavan Srinivasan
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

A new Kconfig option, CONFIG_PPC_IRQ_SOFT_MASK_DEBUG, is added to emit
WARN_ONs on invalid irq soft-mask transitions. The checks previously
guarded by CONFIG_TRACE_IRQFLAGS in arch_local_irq_restore() are moved
under the new option.
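
The invariant the WARN_ONs enforce is that IRQ_SOFT_MASK_STD must be
set whenever any other mask bit is set. For example (illustrative
calls, not taken from the patch):

	irq_soft_mask_set(IRQ_SOFT_MASK_ALL);	/* OK: STD and PMU both set */
	irq_soft_mask_set(IRQ_SOFT_MASK_STD);	/* OK */
	irq_soft_mask_set(IRQ_SOFT_MASK_PMU);	/* WARNs: PMU without STD */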

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/Kconfig.debug        | 4 ++++
 arch/powerpc/include/asm/hw_irq.h | 4 ++--
 arch/powerpc/kernel/entry_64.S    | 4 ++--
 arch/powerpc/kernel/irq.c         | 4 ++--
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 657c33cd4eee..8013c62f38a7 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -90,6 +90,10 @@ config MSI_BITMAP_SELFTEST
 	depends on DEBUG_KERNEL
 	default n
 
+config PPC_IRQ_SOFT_MASK_DEBUG
+	bool "Include extra checks for powerpc irq soft masking"
+	default n
+
 config XMON
 	bool "Include xmon kernel debugger"
 	depends on DEBUG_KERNEL
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 6249ebc17608..716fe87a3588 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -71,7 +71,7 @@ static inline notrace unsigned long irq_soft_mask_return(void)
  */
 static inline notrace void irq_soft_mask_set(unsigned long mask)
 {
-#ifdef CONFIG_TRACE_IRQFLAGS
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
 	/*
 	 * The irq mask must always include the STD bit if any are set.
 	 *
@@ -101,7 +101,7 @@ static inline notrace unsigned long irq_soft_mask_set_return(unsigned long mask)
 {
 	unsigned long flags;
 
-#ifdef CONFIG_TRACE_IRQFLAGS
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
 	WARN_ON(mask && !(mask & IRQ_SOFT_MASK_STD));
 #endif
 
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 511a97b62075..f2ffb9aa7ff4 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -128,7 +128,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	 * of irq tracing is used, we additionally check that condition
 	 * is correct
 	 */
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
+#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG)
 	lbz	r10,PACAIRQSOFTMASK(r13)
 1:	tdnei	r10,IRQ_SOFT_MASK_NONE
 	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
@@ -911,7 +911,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	rlwinm	r7,r7,0,~PACA_IRQ_HARD_DIS
 	stb	r7,PACAIRQHAPPENED(r13)
 1:
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
+#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG)
 	/* The interrupt should not have soft enabled. */
 	lbz	r7,PACAIRQSOFTMASK(r13)
 	tdeqi	r7,IRQ_SOFT_MASK_NONE
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 1cbf6a0caed7..e7619d144c15 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -264,7 +264,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
 	 */
 	if (unlikely(irq_happened != PACA_IRQ_HARD_DIS))
 		__hard_irq_disable();
-#ifdef CONFIG_TRACE_IRQFLAGS
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
 	else {
 		/*
 		 * We should already be hard disabled here. We had bugs
@@ -275,7 +275,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
 		if (WARN_ON(mfmsr() & MSR_EE))
 			__hard_irq_disable();
 	}
-#endif /* CONFIG_TRACE_IRQFLAGS */
+#endif
 
 	irq_soft_mask_set(IRQ_SOFT_MASK_ALL);
 	trace_hardirqs_off();
-- 
2.7.4

* [PATCH v10 15/17] powerpc/64s: Add new set of irq_soft_mask_ functions for PMI masking
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (13 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 14/17] powerpc: Add new kconfig IRQ_DEBUG_SUPPORT Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  3:55 ` [PATCH v10 16/17] powerpc: use generic atomic implementation for local_t Madhavan Srinivasan
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

To support soft-masking of the performance monitor interrupt, a set of
new powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore()
functions is added, implemented on top of a new irq_soft_mask
manipulation helper, irq_soft_mask_or_return().

The raw_local_irq_pmu_* macros perform the actual mask manipulation,
and the powerpc_local_irq_pmu_* wrappers add the trace_hardirqs_on|off()
calls to match the generic local_irq_* code in include/linux/irqflags.h.
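
A sketch of the intended use, assuming a hypothetical per-CPU counter
(the variable and function names are illustrative only):

	static DEFINE_PER_CPU(long, pmi_safe_count);	/* hypothetical */

	static void bump_pmi_safe_count(void)
	{
		unsigned long flags;

		/* masks both "standard" interrupts and PMIs (STD | PMU) */
		powerpc_local_irq_pmu_save(flags);
		__this_cpu_add(pmi_safe_count, 1);
		powerpc_local_irq_pmu_restore(flags);
	}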

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/hw_irq.h | 71 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
index 716fe87a3588..eea02cbf5699 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -34,8 +34,12 @@
  */
 #define IRQ_SOFT_MASK_NONE	0x00
 #define IRQ_SOFT_MASK_STD	0x01 /* local_irq_disable() interrupts */
+#ifdef CONFIG_PPC_BOOK3S
 #define IRQ_SOFT_MASK_PMU	0x02
 #define IRQ_SOFT_MASK_ALL	0x03
+#else
+#define IRQ_SOFT_MASK_ALL	0x01
+#endif
 
 #endif /* CONFIG_PPC64 */
 
@@ -115,6 +119,24 @@ static inline notrace unsigned long irq_soft_mask_set_return(unsigned long mask)
 	return flags;
 }
 
+static inline notrace unsigned long irq_soft_mask_or_return(unsigned long mask)
+{
+	unsigned long flags, tmp;
+
+	asm volatile(
+		"lbz %0,%2(13); or %1,%0,%3; stb %1,%2(13)"
+		: "=&r" (flags), "=r" (tmp)
+		: "i" (offsetof(struct paca_struct, irq_soft_mask)),
+		  "r" (mask)
+		: "memory");
+
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
+	WARN_ON((mask | flags) && !((mask | flags) & IRQ_SOFT_MASK_STD));
+#endif
+
+	return flags;
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	return irq_soft_mask_return();
@@ -147,6 +169,55 @@ static inline bool arch_irqs_disabled(void)
 	return arch_irqs_disabled_flags(arch_local_save_flags());
 }
 
+#ifdef CONFIG_PPC_BOOK3S
+/*
+ * To support disabling and enabling of irqs with PMI, a set of
+ * new powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore()
+ * functions is added. These macros are implemented using generic
+ * linux local_irq_* code from include/linux/irqflags.h.
+ */
+#define raw_local_irq_pmu_save(flags)					\
+	do {								\
+		typecheck(unsigned long, flags);			\
+		flags = irq_soft_mask_or_return(IRQ_SOFT_MASK_STD |	\
+				IRQ_SOFT_MASK_PMU);			\
+	} while(0)
+
+#define raw_local_irq_pmu_restore(flags)				\
+	do {								\
+		typecheck(unsigned long, flags);			\
+		arch_local_irq_restore(flags);				\
+	} while(0)
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+#define powerpc_local_irq_pmu_save(flags)			\
+	do {							\
+		raw_local_irq_pmu_save(flags);			\
+		trace_hardirqs_off();				\
+	} while(0)
+#define powerpc_local_irq_pmu_restore(flags)			\
+	do {							\
+		if (raw_irqs_disabled_flags(flags)) {		\
+			raw_local_irq_pmu_restore(flags);	\
+			trace_hardirqs_off();			\
+		} else {					\
+			trace_hardirqs_on();			\
+			raw_local_irq_pmu_restore(flags);	\
+		}						\
+	} while(0)
+#else
+#define powerpc_local_irq_pmu_save(flags)			\
+	do {							\
+		raw_local_irq_pmu_save(flags);			\
+	} while(0)
+#define powerpc_local_irq_pmu_restore(flags)			\
+	do {							\
+		raw_local_irq_pmu_restore(flags);		\
+	} while (0)
+#endif  /* CONFIG_TRACE_IRQFLAGS */
+
+#endif /* CONFIG_PPC_BOOK3S */
+
 #ifdef CONFIG_PPC_BOOK3E
 #define __hard_irq_enable()	asm volatile("wrteei 1" : : : "memory")
 #define __hard_irq_disable()	asm volatile("wrteei 0" : : : "memory")
-- 
2.7.4

* [PATCH v10 16/17] powerpc: use generic atomic implementation for local_t
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (14 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 15/17] powerpc/64s: Add new set of irq_soft_mask_ functions for PMI masking Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  3:55 ` [PATCH v10 17/17] powerpc/64s: Implement local_t using irq soft masking Madhavan Srinivasan
  2017-12-20  5:02 ` [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Nicholas Piggin
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

powerpc implements local_t with hardware atomic operations (larx/stcx).
There is already an asm-generic implementation which provides the same
API using atomic_long_t, so switch to that.
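
Callers are unaffected by the switch; e.g. this illustrative usage
compiles and behaves the same with the generic header:

	static void local_t_example(void)	/* illustrative */
	{
		local_t count = LOCAL_INIT(0);

		local_inc(&count);
		local_add(5, &count);
		WARN_ON(local_read(&count) != 6);
	}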

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/local.h | 171 +--------------------------------------
 1 file changed, 1 insertion(+), 170 deletions(-)

diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
index 600a68bd77f5..cfbcc31d43ad 100644
--- a/arch/powerpc/include/asm/local.h
+++ b/arch/powerpc/include/asm/local.h
@@ -2,175 +2,6 @@
 #ifndef _ARCH_POWERPC_LOCAL_H
 #define _ARCH_POWERPC_LOCAL_H
 
-#include <linux/percpu.h>
-#include <linux/atomic.h>
-
-typedef struct
-{
-	atomic_long_t a;
-} local_t;
-
-#define LOCAL_INIT(i)	{ ATOMIC_LONG_INIT(i) }
-
-#define local_read(l)	atomic_long_read(&(l)->a)
-#define local_set(l,i)	atomic_long_set(&(l)->a, (i))
-
-#define local_add(i,l)	atomic_long_add((i),(&(l)->a))
-#define local_sub(i,l)	atomic_long_sub((i),(&(l)->a))
-#define local_inc(l)	atomic_long_inc(&(l)->a)
-#define local_dec(l)	atomic_long_dec(&(l)->a)
-
-static __inline__ long local_add_return(long a, local_t *l)
-{
-	long t;
-
-	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%2,0) "			# local_add_return\n\
-	add	%0,%1,%0\n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%2 \n\
-	bne-	1b"
-	: "=&r" (t)
-	: "r" (a), "r" (&(l->a.counter))
-	: "cc", "memory");
-
-	return t;
-}
-
-#define local_add_negative(a, l)	(local_add_return((a), (l)) < 0)
-
-static __inline__ long local_sub_return(long a, local_t *l)
-{
-	long t;
-
-	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%2,0) "			# local_sub_return\n\
-	subf	%0,%1,%0\n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%2 \n\
-	bne-	1b"
-	: "=&r" (t)
-	: "r" (a), "r" (&(l->a.counter))
-	: "cc", "memory");
-
-	return t;
-}
-
-static __inline__ long local_inc_return(local_t *l)
-{
-	long t;
-
-	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_inc_return\n\
-	addic	%0,%0,1\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1 \n\
-	bne-	1b"
-	: "=&r" (t)
-	: "r" (&(l->a.counter))
-	: "cc", "xer", "memory");
-
-	return t;
-}
-
-/*
- * local_inc_and_test - increment and test
- * @l: pointer of type local_t
- *
- * Atomically increments @l by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
-#define local_inc_and_test(l) (local_inc_return(l) == 0)
-
-static __inline__ long local_dec_return(local_t *l)
-{
-	long t;
-
-	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_dec_return\n\
-	addic	%0,%0,-1\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1\n\
-	bne-	1b"
-	: "=&r" (t)
-	: "r" (&(l->a.counter))
-	: "cc", "xer", "memory");
-
-	return t;
-}
-
-#define local_cmpxchg(l, o, n) \
-	(cmpxchg_local(&((l)->a.counter), (o), (n)))
-#define local_xchg(l, n) (xchg_local(&((l)->a.counter), (n)))
-
-/**
- * local_add_unless - add unless the number is a given value
- * @l: pointer of type local_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @l, so long as it was not @u.
- * Returns non-zero if @l was not @u, and zero otherwise.
- */
-static __inline__ int local_add_unless(local_t *l, long a, long u)
-{
-	long t;
-
-	__asm__ __volatile__ (
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_add_unless\n\
-	cmpw	0,%0,%3 \n\
-	beq-	2f \n\
-	add	%0,%2,%0 \n"
-	PPC405_ERR77(0,%2)
-	PPC_STLCX	"%0,0,%1 \n\
-	bne-	1b \n"
-"	subf	%0,%2,%0 \n\
-2:"
-	: "=&r" (t)
-	: "r" (&(l->a.counter)), "r" (a), "r" (u)
-	: "cc", "memory");
-
-	return t != u;
-}
-
-#define local_inc_not_zero(l) local_add_unless((l), 1, 0)
-
-#define local_sub_and_test(a, l)	(local_sub_return((a), (l)) == 0)
-#define local_dec_and_test(l)		(local_dec_return((l)) == 0)
-
-/*
- * Atomically test *l and decrement if it is greater than 0.
- * The function returns the old value of *l minus 1.
- */
-static __inline__ long local_dec_if_positive(local_t *l)
-{
-	long t;
-
-	__asm__ __volatile__(
-"1:"	PPC_LLARX(%0,0,%1,0) "			# local_dec_if_positive\n\
-	cmpwi	%0,1\n\
-	addi	%0,%0,-1\n\
-	blt-	2f\n"
-	PPC405_ERR77(0,%1)
-	PPC_STLCX	"%0,0,%1\n\
-	bne-	1b"
-	"\n\
-2:"	: "=&b" (t)
-	: "r" (&(l->a.counter))
-	: "cc", "memory");
-
-	return t;
-}
-
-/* Use these for per-cpu local_t variables: on some archs they are
- * much more efficient than these naive implementations.  Note they take
- * a variable, not an address.
- */
-
-#define __local_inc(l)		((l)->a.counter++)
-#define __local_dec(l)		((l)->a.counter++)
-#define __local_add(i,l)	((l)->a.counter+=(i))
-#define __local_sub(i,l)	((l)->a.counter-=(i))
+#include <asm-generic/local.h>
 
 #endif /* _ARCH_POWERPC_LOCAL_H */
-- 
2.7.4

* [PATCH v10 17/17] powerpc/64s: Implement local_t using irq soft masking
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (15 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 16/17] powerpc: use generic atomic implementation for local_t Madhavan Srinivasan
@ 2017-12-20  3:55 ` Madhavan Srinivasan
  2017-12-20  5:02 ` [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Nicholas Piggin
  17 siblings, 0 replies; 21+ messages in thread
From: Madhavan Srinivasan @ 2017-12-20  3:55 UTC (permalink / raw)
  To: mpe; +Cc: benh, anton, paulus, npiggin, linuxppc-dev, Madhavan Srinivasan

local_t is used for atomic modification of per-CPU data, where the only
concurrency to guard against is re-entrant modification from interrupts
on the same CPU.

local_t read-modify-write atomic operations are currently implemented
with hardware atomics (larx/stcx), which are quite slow. This patch
implements them by masking all types of interrupts that may do local_t
operations ("standard" and perf interrupts).

Rusty's benchmark (https://lkml.org/lkml/2008/12/16/450) gives the
following timings for the local_t test, in nanoseconds per iteration:

             larx/stcx   irq+pmu disable
_inc                38                10
_add                38                10
_read                4                 4
_add_return         38                10

There are still some interrupt types (system reset, machine check, and
watchdog) which cannot safely use local_t operations, because they are
not soft-masked.

An alternative approach was proposed, using a CR bit to mark a critical
section, which is tested in the interrupt return path, and would then
branch to a fixup handler (similar to exception fixups), which re-starts
the operation. The problem with this was the complexity of the fixup
handler and the latency of the slow path.

https://lists.ozlabs.org/pipermail/linuxppc-dev/2014-November/123024.html

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/local.h | 141 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)

diff --git a/arch/powerpc/include/asm/local.h b/arch/powerpc/include/asm/local.h
index cfbcc31d43ad..fdd00939270b 100644
--- a/arch/powerpc/include/asm/local.h
+++ b/arch/powerpc/include/asm/local.h
@@ -2,6 +2,147 @@
 #ifndef _ARCH_POWERPC_LOCAL_H
 #define _ARCH_POWERPC_LOCAL_H
 
+#ifdef CONFIG_PPC_BOOK3S_64
+
+#include <linux/percpu.h>
+#include <linux/atomic.h>
+#include <linux/irqflags.h>
+
+#include <asm/hw_irq.h>
+
+typedef struct
+{
+	long v;
+} local_t;
+
+#define LOCAL_INIT(i)	{ (i) }
+
+static __inline__ long local_read(local_t *l)
+{
+	return READ_ONCE(l->v);
+}
+
+static __inline__ void local_set(local_t *l, long i)
+{
+	WRITE_ONCE(l->v, i);
+}
+
+#define LOCAL_OP(op, c_op)						\
+static __inline__ void local_##op(long i, local_t *l)			\
+{									\
+	unsigned long flags;						\
+									\
+	powerpc_local_irq_pmu_save(flags);				\
+	l->v c_op i;							\
+	powerpc_local_irq_pmu_restore(flags);				\
+}
+
+#define LOCAL_OP_RETURN(op, c_op)					\
+static __inline__ long local_##op##_return(long a, local_t *l)		\
+{									\
+	long t;								\
+	unsigned long flags;						\
+									\
+	powerpc_local_irq_pmu_save(flags);				\
+	t = (l->v c_op a);						\
+	powerpc_local_irq_pmu_restore(flags);				\
+									\
+	return t;							\
+}
+
+#define LOCAL_OPS(op, c_op)		\
+	LOCAL_OP(op, c_op)		\
+	LOCAL_OP_RETURN(op, c_op)
+
+LOCAL_OPS(add, +=)
+LOCAL_OPS(sub, -=)
+
+#define local_add_negative(a, l)	(local_add_return((a), (l)) < 0)
+#define local_inc_return(l)		local_add_return(1LL, l)
+#define local_inc(l)			local_inc_return(l)
+
+/*
+ * local_inc_and_test - increment and test
+ * @l: pointer of type local_t
+ *
+ * Atomically increments @l by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+#define local_inc_and_test(l)		(local_inc_return(l) == 0)
+
+#define local_dec_return(l)		local_sub_return(1LL, l)
+#define local_dec(l)			local_dec_return(l)
+#define local_sub_and_test(a, l)	(local_sub_return((a), (l)) == 0)
+#define local_dec_and_test(l)		(local_dec_return((l)) == 0)
+
+static __inline__ long local_cmpxchg(local_t *l, long o, long n)
+{
+	long t;
+	unsigned long flags;
+
+	powerpc_local_irq_pmu_save(flags);
+	t = l->v;
+	if (t == o)
+		l->v = n;
+	powerpc_local_irq_pmu_restore(flags);
+
+	return t;
+}
+
+static __inline__ long local_xchg(local_t *l, long n)
+{
+	long t;
+	unsigned long flags;
+
+	powerpc_local_irq_pmu_save(flags);
+	t = l->v;
+	l->v = n;
+	powerpc_local_irq_pmu_restore(flags);
+
+	return t;
+}
+
+/**
+ * local_add_unless - add unless the number is a given value
+ * @l: pointer of type local_t
+ * @a: the amount to add to v...
+ * @u: ...unless v is equal to u.
+ *
+ * Atomically adds @a to @l, so long as it was not @u.
+ * Returns non-zero if @l was not @u, and zero otherwise.
+ */
+static __inline__ int local_add_unless(local_t *l, long a, long u)
+{
+	unsigned long flags;
+	int ret = 0;
+
+	powerpc_local_irq_pmu_save(flags);
+	if (l->v != u) {
+		l->v += a;
+		ret = 1;
+	}
+	powerpc_local_irq_pmu_restore(flags);
+
+	return ret;
+}
+
+#define local_inc_not_zero(l)		local_add_unless((l), 1, 0)
+
+/* Use these for per-cpu local_t variables: on some archs they are
+ * much more efficient than these naive implementations.  Note they take
+ * a variable, not an address.
+ */
+
+#define __local_inc(l)		((l)->v++)
+#define __local_dec(l)		((l)->v--)
+#define __local_add(i,l)	((l)->v+=(i))
+#define __local_sub(i,l)	((l)->v-=(i))
+
+#else /* CONFIG_PPC_BOOK3S_64 */
+
 #include <asm-generic/local.h>
 
+#endif /* CONFIG_PPC_BOOK3S_64 */
+
 #endif /* _ARCH_POWERPC_LOCAL_H */
-- 
2.7.4

* Re: [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation
  2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
                   ` (16 preceding siblings ...)
  2017-12-20  3:55 ` [PATCH v10 17/17] powerpc/64s: Implement local_t using irq soft masking Madhavan Srinivasan
@ 2017-12-20  5:02 ` Nicholas Piggin
  17 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2017-12-20  5:02 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: mpe, benh, anton, paulus, linuxppc-dev

On Wed, 20 Dec 2017 09:25:40 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Local atomic operations are fast and highly reentrant per CPU counters.
> Used for percpu variable updates. Local atomic operations only guarantee
> variable modification atomicity wrt the CPU which owns the data and
> these needs to be executed in a preemption safe way.

As we've been discussing offline, these are all

Reviewed-by: Nicholas Piggin <npiggin@gmail.com>

Thanks,
Nick

* Re: [PATCH v10 13/17] powerpc/64s: Add support to mask perf interrupts and replay them
  2017-12-20  3:55 ` [PATCH v10 13/17] powerpc/64s: Add support to mask perf interrupts and replay them Madhavan Srinivasan
@ 2018-01-15 13:23   ` Nicholas Piggin
  0 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-01-15 13:23 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: mpe, benh, anton, paulus, linuxppc-dev

On Wed, 20 Dec 2017 09:25:53 +0530
Madhavan Srinivasan <maddy@linux.vnet.ibm.com> wrote:

> Two new bit mask field "IRQ_DISABLE_MASK_PMU" is introduced to support
> the masking of PMI and "IRQ_DISABLE_MASK_ALL" to aid interrupt masking
> checking.
> 
> Couple of new irq #defs "PACA_IRQ_PMI" and "SOFTEN_VALUE_0xf0*" added
> to use in the exception code to check for PMI interrupts.
> 
> In the masked_interrupt handler, for PMIs we clear MSR[EE] and return.
> In __check_irq_replay(), the PMI is replayed by calling the
> performance_monitor_common handler.

I think we need to add all interrupt types which clear MSR[EE] in their
masked handler to the test in may_hard_irq_disable(), otherwise EE can
be enabled before all such pending interrupts are replayed.

We should define all such interrupts in a mask and use that in
may_hard_irq_disable() and in the corresponding place in
masked_##_H##interrupt, e.g. as sketched below:
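
For instance, something along these lines (a sketch only; the mask
name, its membership, and the helper are illustrative, not a committed
definition):

	/* interrupt types whose masked handler clears MSR[EE]; while any
	 * of these are pending, MSR[EE] must stay clear until they have
	 * been replayed */
	#define PACA_IRQ_MUST_HARD_MASK	(PACA_IRQ_EE | PACA_IRQ_PMI)

	static inline bool must_stay_hard_disabled(void)
	{
		return get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK;
	}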

Thanks,
Nick

* Re: [v10, 02/17] powerpc/64: Add #defines for paca->soft_enabled flags
  2017-12-20  3:55 ` [PATCH v10 02/17] powerpc/64: Add #defines for paca->soft_enabled flags Madhavan Srinivasan
@ 2018-01-22  3:34   ` Michael Ellerman
  0 siblings, 0 replies; 21+ messages in thread
From: Michael Ellerman @ 2018-01-22  3:34 UTC (permalink / raw)
  To: Madhavan Srinivasan
  Cc: Madhavan Srinivasan, npiggin, paulus, anton, linuxppc-dev

On Wed, 2017-12-20 at 03:55:42 UTC, Madhavan Srinivasan wrote:
> Two #defines, IRQ_ENABLED and IRQ_DISABLED, are added to be used when
> updating paca->soft_enabled. Replace the hardcoded values used when
> updating paca->soft_enabled with IRQ_[EN/DIS]ABLED. No logic change.
> 
> Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>

Patches 2-17 applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/c2e480ba822718190e58849b79a76d

cheers

end of thread

Thread overview: 21+ messages
2017-12-20  3:55 [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 01/17] powerpc/64: do not trace irqs-off at interrupt return to soft-disabled context Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 02/17] powerpc/64: Add #defines for paca->soft_enabled flags Madhavan Srinivasan
2018-01-22  3:34   ` [v10, " Michael Ellerman
2017-12-20  3:55 ` [PATCH v10 03/17] powerpc/64: Improve inline asm in arch_local_irq_disable Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 04/17] powerpc/64: Fix arch_local_irq_disable() prototype Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 05/17] powerpc/64: move set_soft_enabled(), rename it, add memory clobber Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 06/17] powerpc/64: Implement and use soft_enabled_return API Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 07/17] powerpc/64: Implement and use soft_enabled_set_return API Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 08/17] powerpc/64: Cleanup hard_irq_disable() macro Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 09/17] powerpc/64: Change soft_enabled from flag to bitmask Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 10/17] powerpc/64: Rename soft_enabled to irq_soft_mask Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 11/17] powerpc/64s: Avoid using EXCEPTION_PROLOG_1 macro in MASKABLE_* Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 12/17] powerpc/64s: Add support to take additional parameter in MASKABLE_* macro Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 13/17] powerpc/64s: Add support to mask perf interrupts and replay them Madhavan Srinivasan
2018-01-15 13:23   ` Nicholas Piggin
2017-12-20  3:55 ` [PATCH v10 14/17] powerpc: Add new kconfig IRQ_DEBUG_SUPPORT Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 15/17] powerpc/64s: Add new set of irq_soft_mask_ functions for PMI masking Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 16/17] powerpc: use generic atomic implementation for local_t Madhavan Srinivasan
2017-12-20  3:55 ` [PATCH v10 17/17] powerpc/64s: Implement local_t using irq soft masking Madhavan Srinivasan
2017-12-20  5:02 ` [PATCH v10 00/17] powerpc: "paca->soft_enabled" based local atomic operation implementation Nicholas Piggin
