* [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature
@ 2016-07-13  9:38 Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 01/10] powerpc/mm: Add __cpu/__mmu_has_feature Aneesh Kumar K.V
                   ` (9 more replies)
  0 siblings, 10 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V

Hi,

I have converted the usage of cpu/mmu_has_feature in __init functions
to use the non-jump-label variants. Even though some of these calls happen
after the feature fixups have been applied, it is simpler to follow one
rule: use __cpu/__mmu_has_feature in __init functions and the jump label
variants everywhere else.
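
As an illustration of the rule (a hypothetical example, not code from this
series; the helpers called in the bodies are made up):

	/* __init code can run before jump_label_init(): use the __ variants */
	static void __init early_example(void)
	{
		if (__mmu_has_feature(MMU_FTR_TYPE_RADIX))
			early_radix_setup();	/* illustrative helper */
	}

	/* regular runtime code gets the jump-label-based fast path */
	void runtime_example(void)
	{
		if (cpu_has_feature(CPU_FTR_ALTIVEC))
			altivec_path();		/* illustrative helper */
	}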


Aneesh Kumar K.V (5):
  powerpc/mm: Add __cpu/__mmu_has_feature
  powerpc/mm: Convert early cpu/mmu feature check to use the new helpers
  powerpc/mm/radix: Add radix_set_pte to use in early init
  powerpc: Call jump_label_init early
  powerpc/mm: Catch the usage of cpu/mmu_has_feature before jump label
    init

Kevin Hao (5):
  jump_label: make it possible for the archs to invoke jump_label_init()
    much earlier
  powerpc: kill mfvtb()
  powerpc: move the cpu_has_feature to a separate file
  powerpc: use the jump label for cpu_has_feature
  powerpc: use jump label for mmu_has_feature

 arch/powerpc/Kconfig.debug                    | 11 +++++
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  5 ++-
 arch/powerpc/include/asm/book3s/64/mmu.h      | 19 ++++++--
 arch/powerpc/include/asm/book3s/64/pgtable.h  |  2 +-
 arch/powerpc/include/asm/cacheflush.h         |  1 +
 arch/powerpc/include/asm/cpufeatures.h        | 49 +++++++++++++++++++++
 arch/powerpc/include/asm/cputable.h           | 16 +++----
 arch/powerpc/include/asm/cputime.h            |  1 +
 arch/powerpc/include/asm/dbell.h              |  1 +
 arch/powerpc/include/asm/dcr-native.h         |  1 +
 arch/powerpc/include/asm/mman.h               |  1 +
 arch/powerpc/include/asm/mmu.h                | 62 ++++++++++++++++++++++++++-
 arch/powerpc/include/asm/reg.h                |  9 ----
 arch/powerpc/include/asm/time.h               |  3 +-
 arch/powerpc/include/asm/xor.h                |  1 +
 arch/powerpc/kernel/align.c                   |  1 +
 arch/powerpc/kernel/cputable.c                | 37 ++++++++++++++++
 arch/powerpc/kernel/irq.c                     |  1 +
 arch/powerpc/kernel/paca.c                    |  2 +-
 arch/powerpc/kernel/process.c                 |  3 +-
 arch/powerpc/kernel/setup-common.c            |  7 +--
 arch/powerpc/kernel/setup_32.c                | 23 +++++++---
 arch/powerpc/kernel/setup_64.c                | 20 ++++++---
 arch/powerpc/kernel/smp.c                     |  3 +-
 arch/powerpc/kvm/book3s_hv_builtin.c          |  2 +-
 arch/powerpc/mm/44x_mmu.c                     |  6 +--
 arch/powerpc/mm/hash_native_64.c              |  2 +-
 arch/powerpc/mm/hash_utils_64.c               | 12 +++---
 arch/powerpc/mm/hugetlbpage.c                 |  2 +-
 arch/powerpc/mm/mmu_context_nohash.c          |  4 +-
 arch/powerpc/mm/pgtable-hash64.c              |  2 +-
 arch/powerpc/mm/pgtable-radix.c               | 23 +++++++++-
 arch/powerpc/mm/ppc_mmu_32.c                  |  2 +-
 arch/powerpc/platforms/44x/iss4xx.c           |  2 +-
 arch/powerpc/platforms/44x/ppc476.c           |  2 +-
 arch/powerpc/platforms/85xx/smp.c             |  6 +--
 arch/powerpc/platforms/cell/pervasive.c       |  3 +-
 arch/powerpc/platforms/cell/smp.c             |  2 +-
 arch/powerpc/platforms/powermac/setup.c       |  2 +-
 arch/powerpc/platforms/powermac/smp.c         |  4 +-
 arch/powerpc/platforms/powernv/setup.c        |  2 +-
 arch/powerpc/platforms/powernv/smp.c          |  4 +-
 arch/powerpc/platforms/powernv/subcore.c      |  2 +-
 arch/powerpc/platforms/pseries/lpar.c         |  4 +-
 arch/powerpc/platforms/pseries/smp.c          |  6 +--
 arch/powerpc/xmon/ppc-dis.c                   |  2 +
 kernel/jump_label.c                           |  3 ++
 47 files changed, 296 insertions(+), 82 deletions(-)
 create mode 100644 arch/powerpc/include/asm/cpufeatures.h

-- 
2.7.4


* [PATCH for-4.8 01/10] powerpc/mm: Add __cpu/__mmu_has_feature
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers Aneesh Kumar K.V
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V

In later patches, we will be switching the cpu and mmu feature checks to
use static keys. That requires variants of the feature checks that can be
used in early boot, before jump labels are initialized. This patch adds
such variants (__cpu_has_feature()/__mmu_has_feature()), along with a
matching __radix_enabled() variant of radix_enabled().

The return type of the checks is also changed to bool.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/mmu.h | 19 +++++++++++++++----
 arch/powerpc/include/asm/cputable.h      | 15 ++++++++++-----
 arch/powerpc/include/asm/mmu.h           | 13 +++++++++++--
 arch/powerpc/xmon/ppc-dis.c              |  1 +
 4 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 6d8306d9aa7a..1bb0e536c76b 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -24,9 +24,20 @@ struct mmu_psize_def {
 extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
 
 #ifdef CONFIG_PPC_RADIX_MMU
-#define radix_enabled() mmu_has_feature(MMU_FTR_TYPE_RADIX)
+static inline bool radix_enabled(void)
+{
+	return mmu_has_feature(MMU_FTR_TYPE_RADIX);
+}
+#define radix_enabled radix_enabled
+
+static inline bool __radix_enabled(void)
+{
+	return __mmu_has_feature(MMU_FTR_TYPE_RADIX);
+}
+#define __radix_enabled __radix_enabled
 #else
 #define radix_enabled() (0)
+#define __radix_enabled() (0)
 #endif
 
 #endif /* __ASSEMBLY__ */
@@ -111,7 +122,7 @@ extern void hash__early_init_mmu(void);
 extern void radix__early_init_mmu(void);
 static inline void early_init_mmu(void)
 {
-	if (radix_enabled())
+	if (__radix_enabled())
 		return radix__early_init_mmu();
 	return hash__early_init_mmu();
 }
@@ -119,7 +130,7 @@ extern void hash__early_init_mmu_secondary(void);
 extern void radix__early_init_mmu_secondary(void);
 static inline void early_init_mmu_secondary(void)
 {
-	if (radix_enabled())
+	if (__radix_enabled())
 		return radix__early_init_mmu_secondary();
 	return hash__early_init_mmu_secondary();
 }
@@ -131,7 +142,7 @@ extern void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
 static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 					      phys_addr_t first_memblock_size)
 {
-	if (radix_enabled())
+	if (__radix_enabled())
 		return radix__setup_initial_memory_limit(first_memblock_base,
 						   first_memblock_size);
 	return hash__setup_initial_memory_limit(first_memblock_base,
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index df4fb5faba43..dfdf36bc2664 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -576,12 +576,17 @@ enum {
 };
 #endif /* __powerpc64__ */
 
-static inline int cpu_has_feature(unsigned long feature)
+static inline bool __cpu_has_feature(unsigned long feature)
 {
-	return (CPU_FTRS_ALWAYS & feature) ||
-	       (CPU_FTRS_POSSIBLE
-		& cur_cpu_spec->cpu_features
-		& feature);
+	if (CPU_FTRS_ALWAYS & feature)
+		return true;
+
+	return !!(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
+}
+
+static inline bool cpu_has_feature(unsigned long feature)
+{
+	return __cpu_has_feature(feature);
 }
 
 #define HBP_NUM 1
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 0e7c1a262075..828b92faec91 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -134,9 +134,14 @@ enum {
 		0,
 };
 
-static inline int mmu_has_feature(unsigned long feature)
+static inline bool __mmu_has_feature(unsigned long feature)
 {
-	return (MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
+	return !!(MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
+}
+
+static inline bool mmu_has_feature(unsigned long feature)
+{
+	return __mmu_has_feature(feature);
 }
 
 static inline void mmu_clear_feature(unsigned long feature)
@@ -232,5 +237,9 @@ extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 #define radix_enabled() (0)
 #endif
 
+#ifndef __radix_enabled
+#define __radix_enabled() (0)
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_MMU_H_ */
diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
index 89098f320ad5..acad77b4f7b6 100644
--- a/arch/powerpc/xmon/ppc-dis.c
+++ b/arch/powerpc/xmon/ppc-dis.c
@@ -19,6 +19,7 @@ You should have received a copy of the GNU General Public License
 along with this file; see the file COPYING.  If not, write to the Free
 Software Foundation, 51 Franklin Street - Fifth Floor, Boston, MA 02110-1301, USA.  */
 
+#include <linux/types.h>
 #include <asm/cputable.h>
 #include "nonstdio.h"
 #include "ansidecl.h"
-- 
2.7.4


* [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 01/10] powerpc/mm: Add __cpu/__mmu_has_feature Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13 12:09   ` Benjamin Herrenschmidt
  2016-07-13  9:38 ` [PATCH for-4.8 03/10] powerpc/mm/radix: Add radix_set_pte to use in early init Aneesh Kumar K.V
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V

This switches most of the early feature checks to the non-static-key
variants of the functions. In later patches we will be switching
cpu_has_feature() and mmu_has_feature() to use static keys, which can only
be used after the static key/jump label infrastructure is initialized. Any
feature check that runs before jump label init should therefore use the
new helpers.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  4 ++--
 arch/powerpc/include/asm/book3s/64/pgtable.h  |  2 +-
 arch/powerpc/kernel/paca.c                    |  2 +-
 arch/powerpc/kernel/setup-common.c            |  6 +++---
 arch/powerpc/kernel/setup_32.c                | 14 +++++++-------
 arch/powerpc/kernel/setup_64.c                | 12 ++++++------
 arch/powerpc/kernel/smp.c                     |  2 +-
 arch/powerpc/kvm/book3s_hv_builtin.c          |  2 +-
 arch/powerpc/mm/44x_mmu.c                     |  6 +++---
 arch/powerpc/mm/hash_native_64.c              |  2 +-
 arch/powerpc/mm/hash_utils_64.c               | 12 ++++++------
 arch/powerpc/mm/hugetlbpage.c                 |  2 +-
 arch/powerpc/mm/mmu_context_nohash.c          |  4 ++--
 arch/powerpc/mm/pgtable-hash64.c              |  2 +-
 arch/powerpc/mm/ppc_mmu_32.c                  |  2 +-
 arch/powerpc/platforms/44x/iss4xx.c           |  2 +-
 arch/powerpc/platforms/44x/ppc476.c           |  2 +-
 arch/powerpc/platforms/85xx/smp.c             |  6 +++---
 arch/powerpc/platforms/cell/pervasive.c       |  2 +-
 arch/powerpc/platforms/cell/smp.c             |  2 +-
 arch/powerpc/platforms/powermac/setup.c       |  2 +-
 arch/powerpc/platforms/powermac/smp.c         |  4 ++--
 arch/powerpc/platforms/powernv/setup.c        |  2 +-
 arch/powerpc/platforms/powernv/smp.c          |  4 ++--
 arch/powerpc/platforms/powernv/subcore.c      |  2 +-
 arch/powerpc/platforms/pseries/lpar.c         |  4 ++--
 arch/powerpc/platforms/pseries/smp.c          |  6 +++---
 27 files changed, 56 insertions(+), 56 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 6ec21aad8ccc..e908a8cc1942 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -239,7 +239,7 @@ static inline unsigned long hpte_encode_avpn(unsigned long vpn, int psize,
 	 */
 	v = (vpn >> (23 - VPN_SHIFT)) & ~(mmu_psize_defs[psize].avpnm);
 	v <<= HPTE_V_AVPN_SHIFT;
-	if (!cpu_has_feature(CPU_FTR_ARCH_300))
+	if (!__cpu_has_feature(CPU_FTR_ARCH_300))
 		v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
 	return v;
 }
@@ -267,7 +267,7 @@ static inline unsigned long hpte_encode_r(unsigned long pa, int base_psize,
 					  int actual_psize, int ssize)
 {
 
-	if (cpu_has_feature(CPU_FTR_ARCH_300))
+	if (__cpu_has_feature(CPU_FTR_ARCH_300))
 		pa |= ((unsigned long) ssize) << HPTE_R_3_0_SSIZE_SHIFT;
 
 	/* A 4K page needs no special encoding */
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index d3ab97e3c744..bf3452fbfad6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -805,7 +805,7 @@ static inline int __meminit vmemmap_create_mapping(unsigned long start,
 						   unsigned long page_size,
 						   unsigned long phys)
 {
-	if (radix_enabled())
+	if (__radix_enabled())
 		return radix__vmemmap_create_mapping(start, page_size, phys);
 	return hash__vmemmap_create_mapping(start, page_size, phys);
 }
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 93dae296b6be..1b0b89e80824 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -184,7 +184,7 @@ void setup_paca(struct paca_struct *new_paca)
 	 * if we do a GET_PACA() before the feature fixups have been
 	 * applied
 	 */
-	if (cpu_has_feature(CPU_FTR_HVMODE))
+	if (__cpu_has_feature(CPU_FTR_HVMODE))
 		mtspr(SPRN_SPRG_HPACA, local_paca);
 #endif
 	mtspr(SPRN_SPRG_PACA, local_paca);
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 8ca79b7503d8..f43d2d76d81f 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -236,7 +236,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 		seq_printf(m, "unknown (%08x)", pvr);
 
 #ifdef CONFIG_ALTIVEC
-	if (cpu_has_feature(CPU_FTR_ALTIVEC))
+	if (__cpu_has_feature(CPU_FTR_ALTIVEC))
 		seq_printf(m, ", altivec supported");
 #endif /* CONFIG_ALTIVEC */
 
@@ -484,7 +484,7 @@ void __init smp_setup_cpu_maps(void)
 	}
 
 	/* If no SMT supported, nthreads is forced to 1 */
-	if (!cpu_has_feature(CPU_FTR_SMT)) {
+	if (!__cpu_has_feature(CPU_FTR_SMT)) {
 		DBG("  SMT disabled ! nthreads forced to 1\n");
 		nthreads = 1;
 	}
@@ -510,7 +510,7 @@ void __init smp_setup_cpu_maps(void)
 		maxcpus = be32_to_cpup(ireg + num_addr_cell + num_size_cell);
 
 		/* Double maxcpus for processors which have SMT capability */
-		if (cpu_has_feature(CPU_FTR_SMT))
+		if (__cpu_has_feature(CPU_FTR_SMT))
 			maxcpus *= nthreads;
 
 		if (maxcpus > nr_cpu_ids) {
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index d544fa311757..ecdc42d44951 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -132,14 +132,14 @@ notrace void __init machine_init(u64 dt_ptr)
 	setup_kdump_trampoline();
 
 #ifdef CONFIG_6xx
-	if (cpu_has_feature(CPU_FTR_CAN_DOZE) ||
-	    cpu_has_feature(CPU_FTR_CAN_NAP))
+	if (__cpu_has_feature(CPU_FTR_CAN_DOZE) ||
+	    __cpu_has_feature(CPU_FTR_CAN_NAP))
 		ppc_md.power_save = ppc6xx_idle;
 #endif
 
 #ifdef CONFIG_E500
-	if (cpu_has_feature(CPU_FTR_CAN_DOZE) ||
-	    cpu_has_feature(CPU_FTR_CAN_NAP))
+	if (__cpu_has_feature(CPU_FTR_CAN_DOZE) ||
+	    __cpu_has_feature(CPU_FTR_CAN_NAP))
 		ppc_md.power_save = e500_idle;
 #endif
 	if (ppc_md.progress)
@@ -149,7 +149,7 @@ notrace void __init machine_init(u64 dt_ptr)
 /* Checks "l2cr=xxxx" command-line option */
 int __init ppc_setup_l2cr(char *str)
 {
-	if (cpu_has_feature(CPU_FTR_L2CR)) {
+	if (__cpu_has_feature(CPU_FTR_L2CR)) {
 		unsigned long val = simple_strtoul(str, NULL, 0);
 		printk(KERN_INFO "l2cr set to %lx\n", val);
 		_set_L2CR(0);		/* force invalidate by disable cache */
@@ -162,7 +162,7 @@ __setup("l2cr=", ppc_setup_l2cr);
 /* Checks "l3cr=xxxx" command-line option */
 int __init ppc_setup_l3cr(char *str)
 {
-	if (cpu_has_feature(CPU_FTR_L3CR)) {
+	if (__cpu_has_feature(CPU_FTR_L3CR)) {
 		unsigned long val = simple_strtoul(str, NULL, 0);
 		printk(KERN_INFO "l3cr set to %lx\n", val);
 		_set_L3CR(val);		/* and enable it */
@@ -294,7 +294,7 @@ void __init setup_arch(char **cmdline_p)
 	dcache_bsize = cur_cpu_spec->dcache_bsize;
 	icache_bsize = cur_cpu_spec->icache_bsize;
 	ucache_bsize = 0;
-	if (cpu_has_feature(CPU_FTR_UNIFIED_ID_CACHE))
+	if (__cpu_has_feature(CPU_FTR_UNIFIED_ID_CACHE))
 		ucache_bsize = icache_bsize = dcache_bsize;
 
 	if (ppc_md.panic)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 5530bb55a78b..05dde6318b79 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -125,7 +125,7 @@ static void setup_tlb_core_data(void)
 		 * will be racy and could produce duplicate entries.
 		 */
 		if (smt_enabled_at_boot >= 2 &&
-		    !mmu_has_feature(MMU_FTR_USE_TLBRSRV) &&
+		    !__mmu_has_feature(MMU_FTR_USE_TLBRSRV) &&
 		    book3e_htw_mode != PPC_HTW_E6500) {
 			/* Should we panic instead? */
 			WARN_ONCE("%s: unsupported MMU configuration -- expect problems\n",
@@ -216,8 +216,8 @@ static void cpu_ready_for_interrupts(void)
 	 * not in hypervisor mode, we enable relocation-on interrupts later
 	 * in pSeries_setup_arch() using the H_SET_MODE hcall.
 	 */
-	if (cpu_has_feature(CPU_FTR_HVMODE) &&
-	    cpu_has_feature(CPU_FTR_ARCH_207S)) {
+	if (__cpu_has_feature(CPU_FTR_HVMODE) &&
+	    __cpu_has_feature(CPU_FTR_ARCH_207S)) {
 		unsigned long lpcr = mfspr(SPRN_LPCR);
 		mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
 	}
@@ -588,13 +588,13 @@ static u64 safe_stack_limit(void)
 {
 #ifdef CONFIG_PPC_BOOK3E
 	/* Freescale BookE bolts the entire linear mapping */
-	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E))
+	if (__mmu_has_feature(MMU_FTR_TYPE_FSL_E))
 		return linear_map_top;
 	/* Other BookE, we assume the first GB is bolted */
 	return 1ul << 30;
 #else
 	/* BookS, the first segment is bolted */
-	if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
+	if (__mmu_has_feature(MMU_FTR_1T_SEGMENT))
 		return 1UL << SID_SHIFT_1T;
 	return 1UL << SID_SHIFT;
 #endif
@@ -639,7 +639,7 @@ static void __init exc_lvl_early_init(void)
 		paca[i].mc_kstack = __va(sp + THREAD_SIZE);
 	}
 
-	if (cpu_has_feature(CPU_FTR_DEBUG_LVL_EXC))
+	if (__cpu_has_feature(CPU_FTR_DEBUG_LVL_EXC))
 		patch_exception(0x040, exc_debug_debug_book3e);
 }
 #else
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5a1f015ea9f3..d1a7234c1c33 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -96,7 +96,7 @@ int smp_generic_cpu_bootable(unsigned int nr)
 	/* Special case - we inhibit secondary thread startup
 	 * during boot if the user requests it.
 	 */
-	if (system_state == SYSTEM_BOOTING && cpu_has_feature(CPU_FTR_SMT)) {
+	if (system_state == SYSTEM_BOOTING && __cpu_has_feature(CPU_FTR_SMT)) {
 		if (!smt_enabled_at_boot && cpu_thread_in_core(nr) != 0)
 			return 0;
 		if (smt_enabled_at_boot
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 5f0380db3eab..cadb2d0f9892 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -80,7 +80,7 @@ void __init kvm_cma_reserve(void)
 	/*
 	 * We need CMA reservation only when we are in HV mode
 	 */
-	if (!cpu_has_feature(CPU_FTR_HVMODE))
+	if (!__cpu_has_feature(CPU_FTR_HVMODE))
 		return;
 	/*
 	 * We cannot use memblock_phys_mem_size() here, because
diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c
index 82b1ff759e26..0b17851b0f90 100644
--- a/arch/powerpc/mm/44x_mmu.c
+++ b/arch/powerpc/mm/44x_mmu.c
@@ -187,12 +187,12 @@ unsigned long __init mmu_mapin_ram(unsigned long top)
 	 * initial 256M mapping established in head_44x.S */
 	for (addr = memstart + PPC_PIN_SIZE; addr < lowmem_end_addr;
 	     addr += PPC_PIN_SIZE) {
-		if (mmu_has_feature(MMU_FTR_TYPE_47x))
+		if (__mmu_has_feature(MMU_FTR_TYPE_47x))
 			ppc47x_pin_tlb(addr + PAGE_OFFSET, addr);
 		else
 			ppc44x_pin_tlb(addr + PAGE_OFFSET, addr);
 	}
-	if (mmu_has_feature(MMU_FTR_TYPE_47x)) {
+	if (__mmu_has_feature(MMU_FTR_TYPE_47x)) {
 		ppc47x_update_boltmap();
 
 #ifdef DEBUG
@@ -245,7 +245,7 @@ void mmu_init_secondary(int cpu)
 	 */
 	for (addr = memstart + PPC_PIN_SIZE; addr < lowmem_end_addr;
 	     addr += PPC_PIN_SIZE) {
-		if (mmu_has_feature(MMU_FTR_TYPE_47x))
+		if (__mmu_has_feature(MMU_FTR_TYPE_47x))
 			ppc47x_pin_tlb(addr + PAGE_OFFSET, addr);
 		else
 			ppc44x_pin_tlb(addr + PAGE_OFFSET, addr);
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 277047528a3a..2208780587a0 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -746,6 +746,6 @@ void __init hpte_init_native(void)
 	ppc_md.flush_hash_range = native_flush_hash_range;
 	ppc_md.hugepage_invalidate   = native_hugepage_invalidate;
 
-	if (cpu_has_feature(CPU_FTR_ARCH_300))
+	if (__cpu_has_feature(CPU_FTR_ARCH_300))
 		ppc_md.register_process_table = native_register_proc_table;
 }
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 47d59a1f12f1..3509337502f6 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -529,7 +529,7 @@ static bool might_have_hea(void)
 	 * we will never see an HEA ethernet device.
 	 */
 #ifdef CONFIG_IBMEBUS
-	return !cpu_has_feature(CPU_FTR_ARCH_207S);
+	return !__cpu_has_feature(CPU_FTR_ARCH_207S);
 #else
 	return false;
 #endif
@@ -559,7 +559,7 @@ static void __init htab_init_page_sizes(void)
 	 * Not in the device-tree, let's fallback on known size
 	 * list for 16M capable GP & GR
 	 */
-	if (mmu_has_feature(MMU_FTR_16M_PAGE))
+	if (__mmu_has_feature(MMU_FTR_16M_PAGE))
 		memcpy(mmu_psize_defs, mmu_psize_defaults_gp,
 		       sizeof(mmu_psize_defaults_gp));
 found:
@@ -589,7 +589,7 @@ found:
 		mmu_vmalloc_psize = MMU_PAGE_64K;
 		if (mmu_linear_psize == MMU_PAGE_4K)
 			mmu_linear_psize = MMU_PAGE_64K;
-		if (mmu_has_feature(MMU_FTR_CI_LARGE_PAGE)) {
+		if (__mmu_has_feature(MMU_FTR_CI_LARGE_PAGE)) {
 			/*
 			 * When running on pSeries using 64k pages for ioremap
 			 * would stop us accessing the HEA ethernet. So if we
@@ -763,7 +763,7 @@ static void __init htab_initialize(void)
 	/* Initialize page sizes */
 	htab_init_page_sizes();
 
-	if (mmu_has_feature(MMU_FTR_1T_SEGMENT)) {
+	if (__mmu_has_feature(MMU_FTR_1T_SEGMENT)) {
 		mmu_kernel_ssize = MMU_SEGSIZE_1T;
 		mmu_highuser_ssize = MMU_SEGSIZE_1T;
 		printk(KERN_INFO "Using 1TB segments\n");
@@ -815,7 +815,7 @@ static void __init htab_initialize(void)
 		/* Initialize the HPT with no entries */
 		memset((void *)table, 0, htab_size_bytes);
 
-		if (!cpu_has_feature(CPU_FTR_ARCH_300))
+		if (!__cpu_has_feature(CPU_FTR_ARCH_300))
 			/* Set SDR1 */
 			mtspr(SPRN_SDR1, _SDR1);
 		else
@@ -952,7 +952,7 @@ void hash__early_init_mmu_secondary(void)
 {
 	/* Initialize hash table for that CPU */
 	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
-		if (!cpu_has_feature(CPU_FTR_ARCH_300))
+		if (!__cpu_has_feature(CPU_FTR_ARCH_300))
 			mtspr(SPRN_SDR1, _SDR1);
 		else
 			mtspr(SPRN_PTCR,
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 119d18611500..3be9c9e918b6 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -828,7 +828,7 @@ static int __init hugetlbpage_init(void)
 {
 	int psize;
 
-	if (!radix_enabled() && !mmu_has_feature(MMU_FTR_16M_PAGE))
+	if (!radix_enabled() && !__mmu_has_feature(MMU_FTR_16M_PAGE))
 		return -ENODEV;
 
 	for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index 7d95bc402dba..4ec513e506fb 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -442,11 +442,11 @@ void __init mmu_context_init(void)
 	 * present if needed.
 	 *      -- BenH
 	 */
-	if (mmu_has_feature(MMU_FTR_TYPE_8xx)) {
+	if (__mmu_has_feature(MMU_FTR_TYPE_8xx)) {
 		first_context = 0;
 		last_context = 15;
 		no_selective_tlbil = true;
-	} else if (mmu_has_feature(MMU_FTR_TYPE_47x)) {
+	} else if (__mmu_has_feature(MMU_FTR_TYPE_47x)) {
 		first_context = 1;
 		last_context = 65535;
 		no_selective_tlbil = false;
diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c
index c23e286a6b8f..d9b5804bdce9 100644
--- a/arch/powerpc/mm/pgtable-hash64.c
+++ b/arch/powerpc/mm/pgtable-hash64.c
@@ -313,7 +313,7 @@ pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 int hash__has_transparent_hugepage(void)
 {
 
-	if (!mmu_has_feature(MMU_FTR_16M_PAGE))
+	if (!__mmu_has_feature(MMU_FTR_16M_PAGE))
 		return 0;
 	/*
 	 * We support THP only if PMD_SIZE is 16MB.
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 2a049fb8523d..0915733d8ae4 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -187,7 +187,7 @@ void __init MMU_init_hw(void)
 	extern unsigned int hash_page[];
 	extern unsigned int flush_hash_patch_A[], flush_hash_patch_B[];
 
-	if (!mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
+	if (!__mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
 		/*
 		 * Put a blr (procedure return) instruction at the
 		 * start of hash_page, since we can still get DSI
diff --git a/arch/powerpc/platforms/44x/iss4xx.c b/arch/powerpc/platforms/44x/iss4xx.c
index c7c6758b3cfe..506b711828b0 100644
--- a/arch/powerpc/platforms/44x/iss4xx.c
+++ b/arch/powerpc/platforms/44x/iss4xx.c
@@ -131,7 +131,7 @@ static struct smp_ops_t iss_smp_ops = {
 
 static void __init iss4xx_smp_init(void)
 {
-	if (mmu_has_feature(MMU_FTR_TYPE_47x))
+	if (__mmu_has_feature(MMU_FTR_TYPE_47x))
 		smp_ops = &iss_smp_ops;
 }
 
diff --git a/arch/powerpc/platforms/44x/ppc476.c b/arch/powerpc/platforms/44x/ppc476.c
index c11ce6516c8f..895dc63d6a49 100644
--- a/arch/powerpc/platforms/44x/ppc476.c
+++ b/arch/powerpc/platforms/44x/ppc476.c
@@ -201,7 +201,7 @@ static struct smp_ops_t ppc47x_smp_ops = {
 
 static void __init ppc47x_smp_init(void)
 {
-	if (mmu_has_feature(MMU_FTR_TYPE_47x))
+	if (__mmu_has_feature(MMU_FTR_TYPE_47x))
 		smp_ops = &ppc47x_smp_ops;
 }
 
diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
index fe9f19e5e935..a4705d964187 100644
--- a/arch/powerpc/platforms/85xx/smp.c
+++ b/arch/powerpc/platforms/85xx/smp.c
@@ -280,7 +280,7 @@ static int smp_85xx_kick_cpu(int nr)
 
 #ifdef CONFIG_PPC64
 	if (threads_per_core == 2) {
-		if (WARN_ON_ONCE(!cpu_has_feature(CPU_FTR_SMT)))
+		if (WARN_ON_ONCE(!__cpu_has_feature(CPU_FTR_SMT)))
 			return -ENOENT;
 
 		booting_thread_hwid = cpu_thread_in_core(nr);
@@ -462,7 +462,7 @@ static void mpc85xx_smp_machine_kexec(struct kimage *image)
 
 static void smp_85xx_basic_setup(int cpu_nr)
 {
-	if (cpu_has_feature(CPU_FTR_DBELL))
+	if (__cpu_has_feature(CPU_FTR_DBELL))
 		doorbell_setup_this_cpu();
 }
 
@@ -485,7 +485,7 @@ void __init mpc85xx_smp_init(void)
 	} else
 		smp_85xx_ops.setup_cpu = smp_85xx_basic_setup;
 
-	if (cpu_has_feature(CPU_FTR_DBELL)) {
+	if (__cpu_has_feature(CPU_FTR_DBELL)) {
 		/*
 		 * If left NULL, .message_pass defaults to
 		 * smp_muxed_ipi_message_pass
diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
index d17e98bc0c10..f053602e63fa 100644
--- a/arch/powerpc/platforms/cell/pervasive.c
+++ b/arch/powerpc/platforms/cell/pervasive.c
@@ -115,7 +115,7 @@ void __init cbe_pervasive_init(void)
 {
 	int cpu;
 
-	if (!cpu_has_feature(CPU_FTR_PAUSE_ZERO))
+	if (!__cpu_has_feature(CPU_FTR_PAUSE_ZERO))
 		return;
 
 	for_each_possible_cpu(cpu) {
diff --git a/arch/powerpc/platforms/cell/smp.c b/arch/powerpc/platforms/cell/smp.c
index 895560f4be69..4d373c6375a8 100644
--- a/arch/powerpc/platforms/cell/smp.c
+++ b/arch/powerpc/platforms/cell/smp.c
@@ -148,7 +148,7 @@ void __init smp_init_cell(void)
 	smp_ops = &bpa_iic_smp_ops;
 
 	/* Mark threads which are still spinning in hold loops. */
-	if (cpu_has_feature(CPU_FTR_SMT)) {
+	if (__cpu_has_feature(CPU_FTR_SMT)) {
 		for_each_present_cpu(i) {
 			if (cpu_thread_in_core(i) == 0)
 				cpumask_set_cpu(i, &of_spin_map);
diff --git a/arch/powerpc/platforms/powermac/setup.c b/arch/powerpc/platforms/powermac/setup.c
index 8dd78f4e1af4..615bb39b82d3 100644
--- a/arch/powerpc/platforms/powermac/setup.c
+++ b/arch/powerpc/platforms/powermac/setup.c
@@ -248,7 +248,7 @@ static void __init ohare_init(void)
 static void __init l2cr_init(void)
 {
 	/* Checks "l2cr-value" property in the registry */
-	if (cpu_has_feature(CPU_FTR_L2CR)) {
+	if (__cpu_has_feature(CPU_FTR_L2CR)) {
 		struct device_node *np = of_find_node_by_name(NULL, "cpus");
 		if (np == 0)
 			np = of_find_node_by_type(NULL, "cpu");
diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c
index 28a147ca32ba..d917ebad551e 100644
--- a/arch/powerpc/platforms/powermac/smp.c
+++ b/arch/powerpc/platforms/powermac/smp.c
@@ -670,7 +670,7 @@ volatile static long int core99_l3_cache;
 static void core99_init_caches(int cpu)
 {
 #ifndef CONFIG_PPC64
-	if (!cpu_has_feature(CPU_FTR_L2CR))
+	if (!__cpu_has_feature(CPU_FTR_L2CR))
 		return;
 
 	if (cpu == 0) {
@@ -683,7 +683,7 @@ static void core99_init_caches(int cpu)
 		printk("CPU%d: L2CR set to %lx\n", cpu, core99_l2_cache);
 	}
 
-	if (!cpu_has_feature(CPU_FTR_L3CR))
+	if (!__cpu_has_feature(CPU_FTR_L3CR))
 		return;
 
 	if (cpu == 0){
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 8492bbbcfc08..607a05233119 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -273,7 +273,7 @@ static int __init pnv_probe(void)
 	if (!of_flat_dt_is_compatible(root, "ibm,powernv"))
 		return 0;
 
-	if (IS_ENABLED(CONFIG_PPC_RADIX_MMU) && radix_enabled())
+	if (IS_ENABLED(CONFIG_PPC_RADIX_MMU) && __radix_enabled())
 		radix_init_native();
 	else if (IS_ENABLED(CONFIG_PPC_STD_MMU_64))
 		hpte_init_native();
diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
index ad7b1a3dbed0..a9f20306d305 100644
--- a/arch/powerpc/platforms/powernv/smp.c
+++ b/arch/powerpc/platforms/powernv/smp.c
@@ -50,7 +50,7 @@ static void pnv_smp_setup_cpu(int cpu)
 		xics_setup_cpu();
 
 #ifdef CONFIG_PPC_DOORBELL
-	if (cpu_has_feature(CPU_FTR_DBELL))
+	if (__cpu_has_feature(CPU_FTR_DBELL))
 		doorbell_setup_this_cpu();
 #endif
 }
@@ -233,7 +233,7 @@ static int pnv_cpu_bootable(unsigned int nr)
 	 * switches. So on those machines we ignore the smt_enabled_at_boot
 	 * setting (smt-enabled on the kernel command line).
 	 */
-	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+	if (__cpu_has_feature(CPU_FTR_ARCH_207S))
 		return 1;
 
 	return smp_generic_cpu_bootable(nr);
diff --git a/arch/powerpc/platforms/powernv/subcore.c b/arch/powerpc/platforms/powernv/subcore.c
index 0babef11136f..abf308fbb385 100644
--- a/arch/powerpc/platforms/powernv/subcore.c
+++ b/arch/powerpc/platforms/powernv/subcore.c
@@ -407,7 +407,7 @@ static DEVICE_ATTR(subcores_per_core, 0644,
 
 static int subcore_init(void)
 {
-	if (!cpu_has_feature(CPU_FTR_SUBCORE))
+	if (!__cpu_has_feature(CPU_FTR_SUBCORE))
 		return 0;
 
 	/*
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 03ff9867a610..a54de1cff935 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -76,10 +76,10 @@ void vpa_init(int cpu)
 	 */
 	WARN_ON(cpu != smp_processor_id());
 
-	if (cpu_has_feature(CPU_FTR_ALTIVEC))
+	if (__cpu_has_feature(CPU_FTR_ALTIVEC))
 		lppaca_of(cpu).vmxregs_in_use = 1;
 
-	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+	if (__cpu_has_feature(CPU_FTR_ARCH_207S))
 		lppaca_of(cpu).ebb_regs_in_use = 1;
 
 	addr = __pa(&lppaca_of(cpu));
diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index f6f83aeccaaa..57111bae6eec 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -143,7 +143,7 @@ static void smp_setup_cpu(int cpu)
 {
 	if (cpu != boot_cpuid)
 		xics_setup_cpu();
-	if (cpu_has_feature(CPU_FTR_DBELL))
+	if (__cpu_has_feature(CPU_FTR_DBELL))
 		doorbell_setup_this_cpu();
 
 	if (firmware_has_feature(FW_FEATURE_SPLPAR))
@@ -200,7 +200,7 @@ static __init void pSeries_smp_probe(void)
 {
 	xics_smp_probe();
 
-	if (cpu_has_feature(CPU_FTR_DBELL)) {
+	if (__cpu_has_feature(CPU_FTR_DBELL)) {
 		xics_cause_ipi = smp_ops->cause_ipi;
 		smp_ops->cause_ipi = pSeries_cause_ipi_mux;
 	}
@@ -232,7 +232,7 @@ void __init smp_init_pseries(void)
 	 * query-cpu-stopped-state.
 	 */
 	if (rtas_token("query-cpu-stopped-state") == RTAS_UNKNOWN_SERVICE) {
-		if (cpu_has_feature(CPU_FTR_SMT)) {
+		if (__cpu_has_feature(CPU_FTR_SMT)) {
 			for_each_present_cpu(i) {
 				if (cpu_thread_in_core(i) == 0)
 					cpumask_set_cpu(i, of_spin_mask);
-- 
2.7.4


* [PATCH for-4.8 03/10] powerpc/mm/radix: Add radix_set_pte to use in early init
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 01/10] powerpc/mm: Add __cpu/__mmu_has_feature Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 04/10] jump_label: make it possible for the archs to invoke jump_label_init() much earlier Aneesh Kumar K.V
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V

We want to use the static-key-based feature check in set_pte_at(). Since
radix__map_kernel_page() is called early in boot, before jump labels are
initialized, it can no longer safely call set_pte_at(). Add radix__set_pte()
for that early path instead.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/pgtable-radix.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index ce21a0f2c2a1..e9f8a542f75b 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -39,6 +39,27 @@ static __ref void *early_alloc_pgtable(unsigned long size)
 
 	return pt;
 }
+/*
+ * set_pte stores a linux PTE into the linux page table.
+ */
+static void radix__set_pte(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+			   pte_t pte)
+{
+	/*
+	 * When handling numa faults, we already have the pte marked
+	 * _PAGE_PRESENT, but we can be sure that it is not in hpte.
+	 * Hence we can use set_pte_at for them.
+	 */
+	VM_WARN_ON(pte_present(*ptep) && !pte_protnone(*ptep));
+
+	/*
+	 * Add the pte bit when trying to set a pte
+	 */
+	pte = __pte(pte_val(pte) | _PAGE_PTE);
+
+	/* Perform the setting of the PTE */
+	radix__set_pte_at(mm, addr, ptep, pte, 0);
+}
 
 int radix__map_kernel_page(unsigned long ea, unsigned long pa,
 			  pgprot_t flags,
@@ -102,7 +123,7 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
 	}
 
 set_the_pte:
-	set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, flags));
+	radix__set_pte(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, flags));
 	smp_wmb();
 	return 0;
 }
-- 
2.7.4


* [PATCH for-4.8 04/10] jump_label: make it possible for the archs to invoke jump_label_init() much earlier
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (2 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 03/10] powerpc/mm/radix: Add radix_set_pte to use in early init Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 05/10] powerpc: Call jump_label_init early Aneesh Kumar K.V
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Kevin Hao, Aneesh Kumar K . V

From: Kevin Hao <haokexin@gmail.com>

Some archs (such as powerpc) want to invoke jump_label_init() at a much
earlier stage. So check static_key_initialized in order to make sure this
function does its work only once, even if it is called again later.
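
To illustrate the intended flow (a sketch; the powerpc call site shown below
is the one added later in this series, in patch 05):

	/* arch code, run as soon as the feature fixups are complete */
	void __init setup_system(void)
	{
		/* ... do_feature_fixups(), do_final_fixups() ... */
		jump_label_init();	/* the first call does the real work */
	}

	/* any later call, e.g. from generic boot code, now returns at once */
	void __init jump_label_init(void)
	{
		if (static_key_initialized)
			return;
		/* ... existing initialization ... */
	}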

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---

Ingo acked this patch by email:
http://marc.info/?l=linux-kernel&m=144049104329961&w=2
but there was no formal Acked-by: tag.

 kernel/jump_label.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index 4b353e0be121..8ada9f5dc507 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -235,6 +235,9 @@ void __init jump_label_init(void)
 	struct static_key *key = NULL;
 	struct jump_entry *iter;
 
+	if (static_key_initialized)
+		return;
+
 	jump_label_lock();
 	jump_label_sort_entries(iter_start, iter_stop);
 
-- 
2.7.4


* [PATCH for-4.8 05/10] powerpc: Call jump_label_init early
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (3 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 04/10] jump_label: make it possible for the archs to invoke jump_label_init() much earlier Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 06/10] powerpc: kill mfvtb() Aneesh Kumar K.V
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V

Call jump_label_init() early so that we can use static keys for the cpu and
mmu feature checks. By the time we call setup_system() all the cpu/mmu
features have been finalized, and the feature fixups for the ASM-based code
have already been applied.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/setup_32.c | 6 ++++++
 arch/powerpc/kernel/setup_64.c | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ecdc42d44951..8831738c3dcb 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -99,6 +99,12 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 			 PTRRELOC(&__stop___lwsync_fixup));
 
 	do_final_fixups();
+	/*
+	 * init jump label so that cpu and mmu feature check can be optimized
+	 * using jump label. We should have all the cpu/mmu features finalized
+	 * by now.
+	 */
+	jump_label_init();
 
 	return KERNELBASE + offset;
 }
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 05dde6318b79..c6f6cbcbee91 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -480,6 +480,12 @@ void __init setup_system(void)
 	do_lwsync_fixups(cur_cpu_spec->cpu_features,
 			 &__start___lwsync_fixup, &__stop___lwsync_fixup);
 	do_final_fixups();
+	/*
+	 * init jump label so that cpu and mmu feature check can be optimized
+	 * using jump label. We should have all the cpu/mmu features finalized
+	 * by now.
+	 */
+	jump_label_init();
 
 	/*
 	 * Unflatten the device-tree passed by prom_init or kexec
-- 
2.7.4


* [PATCH for-4.8 06/10] powerpc: kill mfvtb()
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (4 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 05/10] powerpc: Call jump_label_init early Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 07/10] powerpc: move the cpu_has_feature to a separate file Aneesh Kumar K.V
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Kevin Hao, Aneesh Kumar K . V

From: Kevin Hao <haokexin@gmail.com>

This function is only used by get_vtb(). The two are almost identical,
except for the actual read of the register. Move the mfspr() into get_vtb()
and kill mfvtb(). With this, we eliminate the use of cpu_has_feature() in a
very core header file like reg.h. This is a preparation for using a jump
label in cpu_has_feature().

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/reg.h  | 9 ---------
 arch/powerpc/include/asm/time.h | 2 +-
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index a69e8f3a4171..7ee09aa90ab4 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -1244,15 +1244,6 @@ static inline void msr_check_and_clear(unsigned long bits)
 		__msr_check_and_clear(bits);
 }
 
-static inline unsigned long mfvtb (void)
-{
-#ifdef CONFIG_PPC_BOOK3S_64
-	if (cpu_has_feature(CPU_FTR_ARCH_207S))
-		return mfspr(SPRN_VTB);
-#endif
-	return 0;
-}
-
 #ifdef __powerpc64__
 #if defined(CONFIG_PPC_CELL) || defined(CONFIG_PPC_FSL_BOOK3E)
 #define mftb()		({unsigned long rval;				\
diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index 09211640a0e0..cbbeaf0a6597 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -103,7 +103,7 @@ static inline u64 get_vtb(void)
 {
 #ifdef CONFIG_PPC_BOOK3S_64
 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
-		return mfvtb();
+		return mfspr(SPRN_VTB);
 #endif
 	return 0;
 }
-- 
2.7.4


* [PATCH for-4.8 07/10] powerpc: move the cpu_has_feature to a separate file
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (5 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 06/10] powerpc: kill mfvtb() Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 08/10] powerpc: use the jump label for cpu_has_feature Aneesh Kumar K.V
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Kevin Hao, Aneesh Kumar K . V

From: Kevin Hao <haokexin@gmail.com>

We plan to use a jump label for cpu_has_feature(). Implementing that would
require including linux/jump_label.h in asm/cputable.h. But asm/cputable.h
is such a basic header file for ppc that it is included by almost all the
other header files, and pulling linux/jump_label.h into it introduces
various recursive inclusions that are very hard to untangle. So move
cpu_has_feature() to a separate header file before converting it to a jump
label. No functional change.
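
Roughly, the layering problem and the fix look like this (a simplified
picture; the concrete include chains are not spelled out here):

	asm/cputable.h            --> included by almost every powerpc header
	asm/cputable.h
	  + linux/jump_label.h    --> recursive inclusions

	asm/cpufeatures.h (new, thin header):
		#include <asm/cputable.h>
		__cpu_has_feature() / cpu_has_feature()
		/* the only place that will later need linux/jump_label.h */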

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  1 +
 arch/powerpc/include/asm/cacheflush.h         |  1 +
 arch/powerpc/include/asm/cpufeatures.h        | 22 ++++++++++++++++++++++
 arch/powerpc/include/asm/cputable.h           | 13 -------------
 arch/powerpc/include/asm/cputime.h            |  1 +
 arch/powerpc/include/asm/dbell.h              |  1 +
 arch/powerpc/include/asm/dcr-native.h         |  1 +
 arch/powerpc/include/asm/mman.h               |  1 +
 arch/powerpc/include/asm/time.h               |  1 +
 arch/powerpc/include/asm/xor.h                |  1 +
 arch/powerpc/kernel/align.c                   |  1 +
 arch/powerpc/kernel/irq.c                     |  1 +
 arch/powerpc/kernel/process.c                 |  1 +
 arch/powerpc/kernel/setup-common.c            |  1 +
 arch/powerpc/kernel/setup_32.c                |  1 +
 arch/powerpc/kernel/smp.c                     |  1 +
 arch/powerpc/platforms/cell/pervasive.c       |  1 +
 arch/powerpc/xmon/ppc-dis.c                   |  1 +
 18 files changed, 38 insertions(+), 13 deletions(-)
 create mode 100644 arch/powerpc/include/asm/cpufeatures.h

diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index e908a8cc1942..68a62c013795 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -24,6 +24,7 @@
 #include <asm/book3s/64/pgtable.h>
 #include <asm/bug.h>
 #include <asm/processor.h>
+#include <asm/cpufeatures.h>
 
 /*
  * SLB
diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h
index 69fb16d7a811..e650819acc95 100644
--- a/arch/powerpc/include/asm/cacheflush.h
+++ b/arch/powerpc/include/asm/cacheflush.h
@@ -11,6 +11,7 @@
 
 #include <linux/mm.h>
 #include <asm/cputable.h>
+#include <asm/cpufeatures.h>
 
 /*
  * No cache flushing is required when address mappings are changed,
diff --git a/arch/powerpc/include/asm/cpufeatures.h b/arch/powerpc/include/asm/cpufeatures.h
new file mode 100644
index 000000000000..bfa6cb8f5629
--- /dev/null
+++ b/arch/powerpc/include/asm/cpufeatures.h
@@ -0,0 +1,22 @@
+#ifndef __ASM_POWERPC_CPUFEATURES_H
+#define __ASM_POWERPC_CPUFEATURES_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/cputable.h>
+
+static inline bool __cpu_has_feature(unsigned long feature)
+{
+	if (CPU_FTRS_ALWAYS & feature)
+		return true;
+
+	return !!(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
+}
+
+static inline bool cpu_has_feature(unsigned long feature)
+{
+
+	return __cpu_has_feature(feature);
+}
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_POWERPC_CPUFEATURE_H */
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index dfdf36bc2664..a49ea95849f8 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -576,19 +576,6 @@ enum {
 };
 #endif /* __powerpc64__ */
 
-static inline bool __cpu_has_feature(unsigned long feature)
-{
-	if (CPU_FTRS_ALWAYS & feature)
-		return true;
-
-	return !!(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
-}
-
-static inline bool cpu_has_feature(unsigned long feature)
-{
-	return __cpu_has_feature(feature);
-}
-
 #define HBP_NUM 1
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/cputime.h b/arch/powerpc/include/asm/cputime.h
index e2452550bcb1..b91837865c0e 100644
--- a/arch/powerpc/include/asm/cputime.h
+++ b/arch/powerpc/include/asm/cputime.h
@@ -28,6 +28,7 @@ static inline void setup_cputime_one_jiffy(void) { }
 #include <asm/div64.h>
 #include <asm/time.h>
 #include <asm/param.h>
+#include <asm/cpufeatures.h>
 
 typedef u64 __nocast cputime_t;
 typedef u64 __nocast cputime64_t;
diff --git a/arch/powerpc/include/asm/dbell.h b/arch/powerpc/include/asm/dbell.h
index 5fa6b20eba10..2d9eae338f70 100644
--- a/arch/powerpc/include/asm/dbell.h
+++ b/arch/powerpc/include/asm/dbell.h
@@ -16,6 +16,7 @@
 #include <linux/threads.h>
 
 #include <asm/ppc-opcode.h>
+#include <asm/cpufeatures.h>
 
 #define PPC_DBELL_MSG_BRDCAST	(0x04000000)
 #define PPC_DBELL_TYPE(x)	(((x) & 0xf) << (63-36))
diff --git a/arch/powerpc/include/asm/dcr-native.h b/arch/powerpc/include/asm/dcr-native.h
index 4efc11dacb98..0186ba05bfe1 100644
--- a/arch/powerpc/include/asm/dcr-native.h
+++ b/arch/powerpc/include/asm/dcr-native.h
@@ -24,6 +24,7 @@
 
 #include <linux/spinlock.h>
 #include <asm/cputable.h>
+#include <asm/cpufeatures.h>
 
 typedef struct {
 	unsigned int base;
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 2563c435a4b1..b0db2cc88900 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -13,6 +13,7 @@
 
 #include <asm/cputable.h>
 #include <linux/mm.h>
+#include <asm/cpufeatures.h>
 
 /*
  * This file is included by linux/mman.h, so we can't use cacl_vm_prot_bits()
diff --git a/arch/powerpc/include/asm/time.h b/arch/powerpc/include/asm/time.h
index cbbeaf0a6597..3620a96e2384 100644
--- a/arch/powerpc/include/asm/time.h
+++ b/arch/powerpc/include/asm/time.h
@@ -18,6 +18,7 @@
 #include <linux/percpu.h>
 
 #include <asm/processor.h>
+#include <asm/cpufeatures.h>
 
 /* time.c */
 extern unsigned long tb_ticks_per_jiffy;
diff --git a/arch/powerpc/include/asm/xor.h b/arch/powerpc/include/asm/xor.h
index 0abb97f3be10..15ba0d07937f 100644
--- a/arch/powerpc/include/asm/xor.h
+++ b/arch/powerpc/include/asm/xor.h
@@ -23,6 +23,7 @@
 #ifdef CONFIG_ALTIVEC
 
 #include <asm/cputable.h>
+#include <asm/cpufeatures.h>
 
 void xor_altivec_2(unsigned long bytes, unsigned long *v1_in,
 		   unsigned long *v2_in);
diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index c7097f933114..6fb5b1a160aa 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -26,6 +26,7 @@
 #include <asm/emulated_ops.h>
 #include <asm/switch_to.h>
 #include <asm/disassemble.h>
+#include <asm/cpufeatures.h>
 
 struct aligninfo {
 	unsigned char len;
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 58217aec30ea..7f5596908225 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -75,6 +75,7 @@
 #endif
 #define CREATE_TRACE_POINTS
 #include <asm/trace.h>
+#include <asm/cpufeatures.h>
 
 DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
 EXPORT_PER_CPU_SYMBOL(irq_stat);
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ddceeb96e8fb..2a61cf1bcf37 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -58,6 +58,7 @@
 #include <asm/code-patching.h>
 #include <asm/exec.h>
 #include <asm/livepatch.h>
+#include <asm/cpufeatures.h>
 
 #include <linux/kprobes.h>
 #include <linux/kdebug.h>
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index f43d2d76d81f..e2d7b8843a7c 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -61,6 +61,7 @@
 #include <asm/cputhreads.h>
 #include <mm/mmu_decl.h>
 #include <asm/fadump.h>
+#include <asm/cpufeatures.h>
 
 #ifdef DEBUG
 #include <asm/udbg.h>
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 8831738c3dcb..ccac1cfd877d 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -39,6 +39,7 @@
 #include <asm/mmu_context.h>
 #include <asm/epapr_hcalls.h>
 #include <asm/code-patching.h>
+#include <asm/cpufeatures.h>
 
 #define DBG(fmt...)
 
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index d1a7234c1c33..48110f3f94ec 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -55,6 +55,7 @@
 #include <asm/debug.h>
 #include <asm/kexec.h>
 #include <asm/asm-prototypes.h>
+#include <asm/cpufeatures.h>
 
 #ifdef DEBUG
 #include <asm/udbg.h>
diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
index f053602e63fa..24310e58d107 100644
--- a/arch/powerpc/platforms/cell/pervasive.c
+++ b/arch/powerpc/platforms/cell/pervasive.c
@@ -35,6 +35,7 @@
 #include <asm/pgtable.h>
 #include <asm/reg.h>
 #include <asm/cell-regs.h>
+#include <asm/cpufeatures.h>
 
 #include "pervasive.h"
 
diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
index acad77b4f7b6..88435c75139a 100644
--- a/arch/powerpc/xmon/ppc-dis.c
+++ b/arch/powerpc/xmon/ppc-dis.c
@@ -21,6 +21,7 @@ Software Foundation, 51 Franklin Street - Fifth Floor, Boston, MA 02110-1301, US
 
 #include <linux/types.h>
 #include <asm/cputable.h>
+#include <asm/cpufeatures.h>
 #include "nonstdio.h"
 #include "ansidecl.h"
 #include "ppc.h"
-- 
2.7.4


* [PATCH for-4.8 08/10] powerpc: use the jump label for cpu_has_feature
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (6 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 07/10] powerpc: move the cpu_has_feature to a separate file Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 09/10] powerpc: use jump label for mmu_has_feature Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 10/10] powerpc/mm: Catch the usage of cpu/mmu_has_feature before jump label init Aneesh Kumar K.V
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Kevin Hao, Aneesh Kumar K . V

From: Kevin Hao <haokexin@gmail.com>

The cpu features are fixed once the probing of the cpu features is done,
yet cpu_has_feature() is used in some hot paths and re-reads the feature
bitmap on every invocation, which is suboptimal. Reduce the overhead of the
check by using a jump label.

The generated assembly code for the following C snippet:
	if (cpu_has_feature(CPU_FTR_XXX))
		xxx()

Before:
	lis     r9,-16230
	lwz     r9,12324(r9)
	lwz     r9,12(r9)
	andi.   r10,r9,512
	beqlr-

After:
	nop	if CPU_FTR_XXX is enabled
	b xxx	if CPU_FTR_XXX is not enabled
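
As a worked example of the key lookup (using the CPU_FTR_L2CR value from
cputable.h, and assuming the feature is neither in CPU_FTRS_ALWAYS nor
outside CPU_FTRS_POSSIBLE for the build):

	cpu_has_feature(CPU_FTR_L2CR)
		feature          = ASM_CONST(0x00000002)
		__builtin_ctzl() = 1
		=> static_branch_likely(&cpu_feat_keys[1])

Each feature is a single bit of cpu_features (an unsigned long), so
MAX_CPU_FEATURES keys (64 on 64-bit) cover every possible bit and each test
compiles down to a single patched nop or branch.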

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/cpufeatures.h | 21 +++++++++++++++++++++
 arch/powerpc/include/asm/cputable.h    |  8 ++++++++
 arch/powerpc/kernel/cputable.c         | 20 ++++++++++++++++++++
 arch/powerpc/kernel/setup_32.c         |  1 +
 arch/powerpc/kernel/setup_64.c         |  1 +
 5 files changed, 51 insertions(+)

diff --git a/arch/powerpc/include/asm/cpufeatures.h b/arch/powerpc/include/asm/cpufeatures.h
index bfa6cb8f5629..4a4a0b898463 100644
--- a/arch/powerpc/include/asm/cpufeatures.h
+++ b/arch/powerpc/include/asm/cpufeatures.h
@@ -13,10 +13,31 @@ static inline bool __cpu_has_feature(unsigned long feature)
 	return !!(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
 }
 
+#ifdef CONFIG_JUMP_LABEL
+#include <linux/jump_label.h>
+
+extern struct static_key_true cpu_feat_keys[MAX_CPU_FEATURES];
+
+static __always_inline bool cpu_has_feature(unsigned long feature)
+{
+	int i;
+
+	if (CPU_FTRS_ALWAYS & feature)
+		return true;
+
+	if (!(CPU_FTRS_POSSIBLE & feature))
+		return false;
+
+	i = __builtin_ctzl(feature);
+	return static_branch_likely(&cpu_feat_keys[i]);
+}
+#else
 static inline bool cpu_has_feature(unsigned long feature)
 {
 
 	return __cpu_has_feature(feature);
 }
+#endif
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_POWERPC_CPUFEATURE_H */
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index a49ea95849f8..6c161e456759 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -122,6 +122,12 @@ extern void do_feature_fixups(unsigned long value, void *fixup_start,
 
 extern const char *powerpc_base_platform;
 
+#ifdef CONFIG_JUMP_LABEL
+extern void cpu_feat_keys_init(void);
+#else
+static inline void cpu_feat_keys_init(void) { }
+#endif
+
 /* TLB flush actions. Used as argument to cpu_spec.flush_tlb() hook */
 enum {
 	TLB_INVAL_SCOPE_GLOBAL = 0,	/* invalidate all TLBs */
@@ -132,6 +138,8 @@ enum {
 
 /* CPU kernel features */
 
+#define MAX_CPU_FEATURES	(8 * sizeof(((struct cpu_spec *)0)->cpu_features))
+
 /* Retain the 32b definitions all use bottom half of word */
 #define CPU_FTR_COHERENT_ICACHE		ASM_CONST(0x00000001)
 #define CPU_FTR_L2CR			ASM_CONST(0x00000002)
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index d81f826d1029..67ce4816998e 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -15,6 +15,7 @@
 #include <linux/threads.h>
 #include <linux/init.h>
 #include <linux/export.h>
+#include <linux/jump_label.h>
 
 #include <asm/oprofile_impl.h>
 #include <asm/cputable.h>
@@ -2224,3 +2225,22 @@ struct cpu_spec * __init identify_cpu(unsigned long offset, unsigned int pvr)
 
 	return NULL;
 }
+
+#ifdef CONFIG_JUMP_LABEL
+struct static_key_true cpu_feat_keys[MAX_CPU_FEATURES] = {
+			[0 ... MAX_CPU_FEATURES - 1] = STATIC_KEY_TRUE_INIT
+};
+EXPORT_SYMBOL_GPL(cpu_feat_keys);
+
+void __init cpu_feat_keys_init(void)
+{
+	int i;
+
+	for (i = 0; i < MAX_CPU_FEATURES; i++) {
+		unsigned long f = 1ul << i;
+
+		if (!(cur_cpu_spec->cpu_features & f))
+			static_branch_disable(&cpu_feat_keys[i]);
+	}
+}
+#endif
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ccac1cfd877d..ac5b41ad94ed 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -106,6 +106,7 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 	 * by now.
 	 */
 	jump_label_init();
+	cpu_feat_keys_init();
 
 	return KERNELBASE + offset;
 }
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index c6f6cbcbee91..ab7710e369c1 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -486,6 +486,7 @@ void __init setup_system(void)
 	 * by now.
 	 */
 	jump_label_init();
+	cpu_feat_keys_init();
 
 	/*
 	 * Unflatten the device-tree passed by prom_init or kexec
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH for-4.8 09/10] powerpc: use jump label for mmu_has_feature
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (7 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 08/10] powerpc: use the jump label for cpu_has_feature Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  2016-07-13  9:38 ` [PATCH for-4.8 10/10] powerpc/mm: Catch the usage of cpu/mmu_has_feature before jump label init Aneesh Kumar K.V
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Kevin Hao, Aneesh Kumar K . V

From: Kevin Hao <haokexin@gmail.com>

The MMU features are fixed once the MMU feature probe is done, and
the function mmu_has_feature() is used in some hot paths. Re-checking
the MMU feature bitmap on every invocation of mmu_has_feature() is
suboptimal. Reduce the overhead of this check by using a jump label.

The generated assembly code for the following C code:
	if (mmu_has_feature(MMU_FTR_XXX))
		xxx()

Before:
	lis     r9,-16230
	lwz     r9,12324(r9)
	lwz     r9,24(r9)
	andi.   r10,r9,16
	beqlr+

After:
	nop	if MMU_FTR_XXX is enabled
	b xxx	if MMU_FTR_XXX is not enabled
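
As an illustration of what this means for a caller (a minimal sketch,
not taken from this patch; do_47x_setup() is a made-up helper), a
post-boot check like the one below now compiles to a single patched
nop or branch instead of a load from cur_cpu_spec->mmu_features:

	/* Illustrative only; do_47x_setup() does not exist in the tree. */
	if (mmu_has_feature(MMU_FTR_TYPE_47x))
		do_47x_setup();

Note that mmu_clear_feature() now has to disable the corresponding key
as well as clear the bit in cur_cpu_spec->mmu_features, so that the
patched branches and the feature bitmap stay in sync.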

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/mmu.h | 36 ++++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/cputable.c | 17 +++++++++++++++++
 arch/powerpc/kernel/setup_32.c |  1 +
 arch/powerpc/kernel/setup_64.c |  1 +
 4 files changed, 55 insertions(+)

diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 828b92faec91..3726161f6a8d 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -139,6 +139,41 @@ static inline bool __mmu_has_feature(unsigned long feature)
 	return !!(MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
 }
 
+#ifdef CONFIG_JUMP_LABEL
+#include <linux/jump_label.h>
+
+#define MAX_MMU_FEATURES	(8 * sizeof(((struct cpu_spec *)0)->mmu_features))
+
+extern struct static_key_true mmu_feat_keys[MAX_MMU_FEATURES];
+
+extern void mmu_feat_keys_init(void);
+
+static __always_inline bool mmu_has_feature(unsigned long feature)
+{
+	int i;
+
+	if (!(MMU_FTRS_POSSIBLE & feature))
+		return false;
+
+	i = __builtin_ctzl(feature);
+	return static_branch_likely(&mmu_feat_keys[i]);
+}
+
+static inline void mmu_clear_feature(unsigned long feature)
+{
+	int i;
+
+	i = __builtin_ctzl(feature);
+	cur_cpu_spec->mmu_features &= ~feature;
+	static_branch_disable(&mmu_feat_keys[i]);
+}
+#else
+
+static inline void mmu_feat_keys_init(void)
+{
+
+}
+
 static inline bool mmu_has_feature(unsigned long feature)
 {
 	return __mmu_has_feature(feature);
@@ -148,6 +183,7 @@ static inline void mmu_clear_feature(unsigned long feature)
 {
 	cur_cpu_spec->mmu_features &= ~feature;
 }
+#endif /* CONFIG_JUMP_LABEL */
 
 extern unsigned int __start___mmu_ftr_fixup, __stop___mmu_ftr_fixup;
 
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 67ce4816998e..fa1580788eda 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -2243,4 +2243,21 @@ void __init cpu_feat_keys_init(void)
 			static_branch_disable(&cpu_feat_keys[i]);
 	}
 }
+
+struct static_key_true mmu_feat_keys[MAX_MMU_FEATURES] = {
+			[0 ... MAX_MMU_FEATURES - 1] = STATIC_KEY_TRUE_INIT
+};
+EXPORT_SYMBOL_GPL(mmu_feat_keys);
+
+void __init mmu_feat_keys_init(void)
+{
+	int i;
+
+	for (i = 0; i < MAX_MMU_FEATURES; i++) {
+		unsigned long f = 1ul << i;
+
+		if (!(cur_cpu_spec->mmu_features & f))
+			static_branch_disable(&mmu_feat_keys[i]);
+	}
+}
 #endif
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index ac5b41ad94ed..cd0d8814bd9b 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -107,6 +107,7 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 	 */
 	jump_label_init();
 	cpu_feat_keys_init();
+	mmu_feat_keys_init();
 
 	return KERNELBASE + offset;
 }
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index ab7710e369c1..063c2ddb28b6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -487,6 +487,7 @@ void __init setup_system(void)
 	 */
 	jump_label_init();
 	cpu_feat_keys_init();
+	mmu_feat_keys_init();
 
 	/*
 	 * Unflatten the device-tree passed by prom_init or kexec
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH for-4.8 10/10] powerpc/mm: Catch the usage of cpu/mmu_has_feature before jump label init
  2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
                   ` (8 preceding siblings ...)
  2016-07-13  9:38 ` [PATCH for-4.8 09/10] powerpc: use jump label for mmu_has_feature Aneesh Kumar K.V
@ 2016-07-13  9:38 ` Aneesh Kumar K.V
  9 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13  9:38 UTC (permalink / raw)
  To: benh, paulus, mpe; +Cc: linuxppc-dev, Aneesh Kumar K.V

This enables us to catch wrong usage of cpu_has_feature() and
mmu_has_feature() in the code. We need to use the feature-bit based
check in show_regs() because that function is used in the reporting code.
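
For example (a hypothetical __init caller, for illustration only;
early_check_foo() is made up), a call like the one below from code
that runs before jump_label_init() will now trigger the WARN_ON and
fall back to the direct bitmap check; the fix is to use
__cpu_has_feature() there instead:

	/* Hypothetical early-boot caller, for illustration only. */
	static void __init early_check_foo(void)
	{
		/*
		 * Runs before jump_label_init(): with
		 * CONFIG_FEATURE_FIXUP_DEBUG the check below warns via
		 * WARN_ON(!static_key_initialized) and falls back to
		 * __cpu_has_feature().
		 */
		if (cpu_has_feature(CPU_FTR_ARCH_207S))
			pr_info("CPU_FTR_ARCH_207S is set\n");
	}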

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/Kconfig.debug             | 11 +++++++++++
 arch/powerpc/include/asm/cpufeatures.h |  6 ++++++
 arch/powerpc/include/asm/mmu.h         | 13 +++++++++++++
 arch/powerpc/kernel/process.c          |  2 +-
 4 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 3bc551b66999..89117cb7bd28 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -60,6 +60,17 @@ config CODE_PATCHING_SELFTEST
 	depends on DEBUG_KERNEL
 	default n
 
+config FEATURE_FIXUP_DEBUG
+	bool "Do extra check on feature fixup calls"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This catches wrong usage of cpu_has_feature() and mmu_has_feature()
+	  in the code.
+
+	  If you don't know what this means, say N.
+
+
 config FTR_FIXUP_SELFTEST
 	bool "Run self-tests of the feature-fixup code"
 	depends on DEBUG_KERNEL
diff --git a/arch/powerpc/include/asm/cpufeatures.h b/arch/powerpc/include/asm/cpufeatures.h
index 4a4a0b898463..93e7e3e87af4 100644
--- a/arch/powerpc/include/asm/cpufeatures.h
+++ b/arch/powerpc/include/asm/cpufeatures.h
@@ -22,6 +22,12 @@ static __always_inline bool cpu_has_feature(unsigned long feature)
 {
 	int i;
 
+#ifdef CONFIG_FEATURE_FIXUP_DEBUG
+	if (!static_key_initialized) {
+		WARN_ON(1);
+		return __cpu_has_feature(feature);
+	}
+#endif
 	if (CPU_FTRS_ALWAYS & feature)
 		return true;
 
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 3726161f6a8d..5c1f3a4cb99f 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -152,6 +152,12 @@ static __always_inline bool mmu_has_feature(unsigned long feature)
 {
 	int i;
 
+#ifdef CONFIG_FEATURE_FIXUP_DEBUG
+	if (!static_key_initialized) {
+		WARN_ON(1);
+		return __mmu_has_feature(feature);
+	}
+#endif
 	if (!(MMU_FTRS_POSSIBLE & feature))
 		return false;
 
@@ -163,6 +169,13 @@ static inline void mmu_clear_feature(unsigned long feature)
 {
 	int i;
 
+#ifdef CONFIG_FEATURE_FIXUP_DEBUG
+	if (!static_key_initialized) {
+		WARN_ON(1);
+		cur_cpu_spec->mmu_features &= ~feature;
+		return;
+	}
+#endif
 	i = __builtin_ctzl(feature);
 	cur_cpu_spec->mmu_features &= ~feature;
 	static_branch_disable(&mmu_feat_keys[i]);
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 2a61cf1bcf37..a7dbd54a94da 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1315,7 +1315,7 @@ void show_regs(struct pt_regs * regs)
 	print_msr_bits(regs->msr);
 	printk("  CR: %08lx  XER: %08lx\n", regs->ccr, regs->xer);
 	trap = TRAP(regs);
-	if ((regs->trap != 0xc00) && cpu_has_feature(CPU_FTR_CFAR))
+	if ((regs->trap != 0xc00) && __cpu_has_feature(CPU_FTR_CFAR))
 		printk("CFAR: "REG" ", regs->orig_gpr3);
 	if (trap == 0x200 || trap == 0x300 || trap == 0x600)
 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers
  2016-07-13  9:38 ` [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers Aneesh Kumar K.V
@ 2016-07-13 12:09   ` Benjamin Herrenschmidt
  2016-07-13 13:58     ` Aneesh Kumar K.V
  2016-07-13 14:06     ` Aneesh Kumar K.V
  0 siblings, 2 replies; 15+ messages in thread
From: Benjamin Herrenschmidt @ 2016-07-13 12:09 UTC (permalink / raw)
  To: Aneesh Kumar K.V, paulus, mpe; +Cc: linuxppc-dev

On Wed, 2016-07-13 at 15:08 +0530, Aneesh Kumar K.V wrote:
> This switch most of the early feature check to use the non static key
> variant of the function. In later patches we will be switching
> cpu_has_feature and mmu_has_feature to use static keys and we can use
> them only after static key/jump label is initialized. Any check for
> feature before jump label init should be done using this new helper.

I'm not sure about that. This is converting way way way way more
functions than is needed. Especially if Michael applies my series
there will be very little code run before the patching, really only the
MMU initialization....

> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/mmu-hash.h |  4 ++--
>  arch/powerpc/include/asm/book3s/64/pgtable.h  |  2 +-
>  arch/powerpc/kernel/paca.c                    |  2 +-
>  arch/powerpc/kernel/setup-common.c            |  6 +++---
>  arch/powerpc/kernel/setup_32.c                | 14 +++++++-------
>  arch/powerpc/kernel/setup_64.c                | 12 ++++++------
>  arch/powerpc/kernel/smp.c                     |  2 +-
>  arch/powerpc/kvm/book3s_hv_builtin.c          |  2 +-
>  arch/powerpc/mm/44x_mmu.c                     |  6 +++---
>  arch/powerpc/mm/hash_native_64.c              |  2 +-
>  arch/powerpc/mm/hash_utils_64.c               | 12 ++++++------
>  arch/powerpc/mm/hugetlbpage.c                 |  2 +-
>  arch/powerpc/mm/mmu_context_nohash.c          |  4 ++--
>  arch/powerpc/mm/pgtable-hash64.c              |  2 +-
>  arch/powerpc/mm/ppc_mmu_32.c                  |  2 +-
>  arch/powerpc/platforms/44x/iss4xx.c           |  2 +-
>  arch/powerpc/platforms/44x/ppc476.c           |  2 +-
>  arch/powerpc/platforms/85xx/smp.c             |  6 +++---
>  arch/powerpc/platforms/cell/pervasive.c       |  2 +-
>  arch/powerpc/platforms/cell/smp.c             |  2 +-
>  arch/powerpc/platforms/powermac/setup.c       |  2 +-
>  arch/powerpc/platforms/powermac/smp.c         |  4 ++--
>  arch/powerpc/platforms/powernv/setup.c        |  2 +-
>  arch/powerpc/platforms/powernv/smp.c          |  4 ++--
>  arch/powerpc/platforms/powernv/subcore.c      |  2 +-
>  arch/powerpc/platforms/pseries/lpar.c         |  4 ++--
>  arch/powerpc/platforms/pseries/smp.c          |  6 +++---
>  27 files changed, 56 insertions(+), 56 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> index 6ec21aad8ccc..e908a8cc1942 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> @@ -239,7 +239,7 @@ static inline unsigned long
> hpte_encode_avpn(unsigned long vpn, int psize,
>  	 */
>  	v = (vpn >> (23 - VPN_SHIFT)) &
> ~(mmu_psize_defs[psize].avpnm);
>  	v <<= HPTE_V_AVPN_SHIFT;
> -	if (!cpu_has_feature(CPU_FTR_ARCH_300))
> +	if (!__cpu_has_feature(CPU_FTR_ARCH_300))
>  		v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
>  	return v;
>  }
> @@ -267,7 +267,7 @@ static inline unsigned long
> hpte_encode_r(unsigned long pa, int base_psize,
>  					  int actual_psize, int
> ssize)
>  {
>  
> -	if (cpu_has_feature(CPU_FTR_ARCH_300))
> +	if (__cpu_has_feature(CPU_FTR_ARCH_300))
>  		pa |= ((unsigned long) ssize) <<
> HPTE_R_3_0_SSIZE_SHIFT;
>  
>  	/* A 4K page needs no special encoding */
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h
> b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index d3ab97e3c744..bf3452fbfad6 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -805,7 +805,7 @@ static inline int __meminit
> vmemmap_create_mapping(unsigned long start,
>  						   unsigned long
> page_size,
>  						   unsigned long
> phys)
>  {
> -	if (radix_enabled())
> +	if (__radix_enabled())
>  		return radix__vmemmap_create_mapping(start,
> page_size, phys);
>  	return hash__vmemmap_create_mapping(start, page_size, phys);
>  }
> diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
> index 93dae296b6be..1b0b89e80824 100644
> --- a/arch/powerpc/kernel/paca.c
> +++ b/arch/powerpc/kernel/paca.c
> @@ -184,7 +184,7 @@ void setup_paca(struct paca_struct *new_paca)
>  	 * if we do a GET_PACA() before the feature fixups have been
>  	 * applied
>  	 */
> -	if (cpu_has_feature(CPU_FTR_HVMODE))
> +	if (__cpu_has_feature(CPU_FTR_HVMODE))
>  		mtspr(SPRN_SPRG_HPACA, local_paca);
>  #endif
>  	mtspr(SPRN_SPRG_PACA, local_paca);
> diff --git a/arch/powerpc/kernel/setup-common.c
> b/arch/powerpc/kernel/setup-common.c
> index 8ca79b7503d8..f43d2d76d81f 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -236,7 +236,7 @@ static int show_cpuinfo(struct seq_file *m, void
> *v)
>  		seq_printf(m, "unknown (%08x)", pvr);
>  
>  #ifdef CONFIG_ALTIVEC
> -	if (cpu_has_feature(CPU_FTR_ALTIVEC))
> +	if (__cpu_has_feature(CPU_FTR_ALTIVEC))
>  		seq_printf(m, ", altivec supported");
>  #endif /* CONFIG_ALTIVEC */
>  
> @@ -484,7 +484,7 @@ void __init smp_setup_cpu_maps(void)
>  	}
>  
>  	/* If no SMT supported, nthreads is forced to 1 */
> -	if (!cpu_has_feature(CPU_FTR_SMT)) {
> +	if (!__cpu_has_feature(CPU_FTR_SMT)) {
>  		DBG("  SMT disabled ! nthreads forced to 1\n");
>  		nthreads = 1;
>  	}
> @@ -510,7 +510,7 @@ void __init smp_setup_cpu_maps(void)
>  		maxcpus = be32_to_cpup(ireg + num_addr_cell +
> num_size_cell);
>  
>  		/* Double maxcpus for processors which have SMT
> capability */
> -		if (cpu_has_feature(CPU_FTR_SMT))
> +		if (__cpu_has_feature(CPU_FTR_SMT))
>  			maxcpus *= nthreads;
>  
>  		if (maxcpus > nr_cpu_ids) {
> diff --git a/arch/powerpc/kernel/setup_32.c
> b/arch/powerpc/kernel/setup_32.c
> index d544fa311757..ecdc42d44951 100644
> --- a/arch/powerpc/kernel/setup_32.c
> +++ b/arch/powerpc/kernel/setup_32.c
> @@ -132,14 +132,14 @@ notrace void __init machine_init(u64 dt_ptr)
>  	setup_kdump_trampoline();
>  
>  #ifdef CONFIG_6xx
> -	if (cpu_has_feature(CPU_FTR_CAN_DOZE) ||
> -	    cpu_has_feature(CPU_FTR_CAN_NAP))
> +	if (__cpu_has_feature(CPU_FTR_CAN_DOZE) ||
> +	    __cpu_has_feature(CPU_FTR_CAN_NAP))
>  		ppc_md.power_save = ppc6xx_idle;
>  #endif
>  
>  #ifdef CONFIG_E500
> -	if (cpu_has_feature(CPU_FTR_CAN_DOZE) ||
> -	    cpu_has_feature(CPU_FTR_CAN_NAP))
> +	if (__cpu_has_feature(CPU_FTR_CAN_DOZE) ||
> +	    __cpu_has_feature(CPU_FTR_CAN_NAP))
>  		ppc_md.power_save = e500_idle;
>  #endif
>  	if (ppc_md.progress)
> @@ -149,7 +149,7 @@ notrace void __init machine_init(u64 dt_ptr)
>  /* Checks "l2cr=xxxx" command-line option */
>  int __init ppc_setup_l2cr(char *str)
>  {
> -	if (cpu_has_feature(CPU_FTR_L2CR)) {
> +	if (__cpu_has_feature(CPU_FTR_L2CR)) {
>  		unsigned long val = simple_strtoul(str, NULL, 0);
>  		printk(KERN_INFO "l2cr set to %lx\n", val);
>  		_set_L2CR(0);		/* force invalidate by
> disable cache */
> @@ -162,7 +162,7 @@ __setup("l2cr=", ppc_setup_l2cr);
>  /* Checks "l3cr=xxxx" command-line option */
>  int __init ppc_setup_l3cr(char *str)
>  {
> -	if (cpu_has_feature(CPU_FTR_L3CR)) {
> +	if (__cpu_has_feature(CPU_FTR_L3CR)) {
>  		unsigned long val = simple_strtoul(str, NULL, 0);
>  		printk(KERN_INFO "l3cr set to %lx\n", val);
>  		_set_L3CR(val);		/* and enable it */
> @@ -294,7 +294,7 @@ void __init setup_arch(char **cmdline_p)
>  	dcache_bsize = cur_cpu_spec->dcache_bsize;
>  	icache_bsize = cur_cpu_spec->icache_bsize;
>  	ucache_bsize = 0;
> -	if (cpu_has_feature(CPU_FTR_UNIFIED_ID_CACHE))
> +	if (__cpu_has_feature(CPU_FTR_UNIFIED_ID_CACHE))
>  		ucache_bsize = icache_bsize = dcache_bsize;
>  
>  	if (ppc_md.panic)
> diff --git a/arch/powerpc/kernel/setup_64.c
> b/arch/powerpc/kernel/setup_64.c
> index 5530bb55a78b..05dde6318b79 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -125,7 +125,7 @@ static void setup_tlb_core_data(void)
>  		 * will be racy and could produce duplicate entries.
>  		 */
>  		if (smt_enabled_at_boot >= 2 &&
> -		    !mmu_has_feature(MMU_FTR_USE_TLBRSRV) &&
> +		    !__mmu_has_feature(MMU_FTR_USE_TLBRSRV) &&
>  		    book3e_htw_mode != PPC_HTW_E6500) {
>  			/* Should we panic instead? */
>  			WARN_ONCE("%s: unsupported MMU configuration
> -- expect problems\n",
> @@ -216,8 +216,8 @@ static void cpu_ready_for_interrupts(void)
>  	 * not in hypervisor mode, we enable relocation-on
> interrupts later
>  	 * in pSeries_setup_arch() using the H_SET_MODE hcall.
>  	 */
> -	if (cpu_has_feature(CPU_FTR_HVMODE) &&
> -	    cpu_has_feature(CPU_FTR_ARCH_207S)) {
> +	if (__cpu_has_feature(CPU_FTR_HVMODE) &&
> +	    __cpu_has_feature(CPU_FTR_ARCH_207S)) {
>  		unsigned long lpcr = mfspr(SPRN_LPCR);
>  		mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
>  	}
> @@ -588,13 +588,13 @@ static u64 safe_stack_limit(void)
>  {
>  #ifdef CONFIG_PPC_BOOK3E
>  	/* Freescale BookE bolts the entire linear mapping */
> -	if (mmu_has_feature(MMU_FTR_TYPE_FSL_E))
> +	if (__mmu_has_feature(MMU_FTR_TYPE_FSL_E))
>  		return linear_map_top;
>  	/* Other BookE, we assume the first GB is bolted */
>  	return 1ul << 30;
>  #else
>  	/* BookS, the first segment is bolted */
> -	if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
> +	if (__mmu_has_feature(MMU_FTR_1T_SEGMENT))
>  		return 1UL << SID_SHIFT_1T;
>  	return 1UL << SID_SHIFT;
>  #endif
> @@ -639,7 +639,7 @@ static void __init exc_lvl_early_init(void)
>  		paca[i].mc_kstack = __va(sp + THREAD_SIZE);
>  	}
>  
> -	if (cpu_has_feature(CPU_FTR_DEBUG_LVL_EXC))
> +	if (__cpu_has_feature(CPU_FTR_DEBUG_LVL_EXC))
>  		patch_exception(0x040, exc_debug_debug_book3e);
>  }
>  #else
> diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> index 5a1f015ea9f3..d1a7234c1c33 100644
> --- a/arch/powerpc/kernel/smp.c
> +++ b/arch/powerpc/kernel/smp.c
> @@ -96,7 +96,7 @@ int smp_generic_cpu_bootable(unsigned int nr)
>  	/* Special case - we inhibit secondary thread startup
>  	 * during boot if the user requests it.
>  	 */
> -	if (system_state == SYSTEM_BOOTING &&
> cpu_has_feature(CPU_FTR_SMT)) {
> +	if (system_state == SYSTEM_BOOTING &&
> __cpu_has_feature(CPU_FTR_SMT)) {
>  		if (!smt_enabled_at_boot && cpu_thread_in_core(nr)
> != 0)
>  			return 0;
>  		if (smt_enabled_at_boot
> diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c
> b/arch/powerpc/kvm/book3s_hv_builtin.c
> index 5f0380db3eab..cadb2d0f9892 100644
> --- a/arch/powerpc/kvm/book3s_hv_builtin.c
> +++ b/arch/powerpc/kvm/book3s_hv_builtin.c
> @@ -80,7 +80,7 @@ void __init kvm_cma_reserve(void)
>  	/*
>  	 * We need CMA reservation only when we are in HV mode
>  	 */
> -	if (!cpu_has_feature(CPU_FTR_HVMODE))
> +	if (!__cpu_has_feature(CPU_FTR_HVMODE))
>  		return;
>  	/*
>  	 * We cannot use memblock_phys_mem_size() here, because
> diff --git a/arch/powerpc/mm/44x_mmu.c b/arch/powerpc/mm/44x_mmu.c
> index 82b1ff759e26..0b17851b0f90 100644
> --- a/arch/powerpc/mm/44x_mmu.c
> +++ b/arch/powerpc/mm/44x_mmu.c
> @@ -187,12 +187,12 @@ unsigned long __init mmu_mapin_ram(unsigned
> long top)
>  	 * initial 256M mapping established in head_44x.S */
>  	for (addr = memstart + PPC_PIN_SIZE; addr < lowmem_end_addr;
>  	     addr += PPC_PIN_SIZE) {
> -		if (mmu_has_feature(MMU_FTR_TYPE_47x))
> +		if (__mmu_has_feature(MMU_FTR_TYPE_47x))
>  			ppc47x_pin_tlb(addr + PAGE_OFFSET, addr);
>  		else
>  			ppc44x_pin_tlb(addr + PAGE_OFFSET, addr);
>  	}
> -	if (mmu_has_feature(MMU_FTR_TYPE_47x)) {
> +	if (__mmu_has_feature(MMU_FTR_TYPE_47x)) {
>  		ppc47x_update_boltmap();
>  
>  #ifdef DEBUG
> @@ -245,7 +245,7 @@ void mmu_init_secondary(int cpu)
>  	 */
>  	for (addr = memstart + PPC_PIN_SIZE; addr < lowmem_end_addr;
>  	     addr += PPC_PIN_SIZE) {
> -		if (mmu_has_feature(MMU_FTR_TYPE_47x))
> +		if (__mmu_has_feature(MMU_FTR_TYPE_47x))
>  			ppc47x_pin_tlb(addr + PAGE_OFFSET, addr);
>  		else
>  			ppc44x_pin_tlb(addr + PAGE_OFFSET, addr);
> diff --git a/arch/powerpc/mm/hash_native_64.c
> b/arch/powerpc/mm/hash_native_64.c
> index 277047528a3a..2208780587a0 100644
> --- a/arch/powerpc/mm/hash_native_64.c
> +++ b/arch/powerpc/mm/hash_native_64.c
> @@ -746,6 +746,6 @@ void __init hpte_init_native(void)
>  	ppc_md.flush_hash_range = native_flush_hash_range;
>  	ppc_md.hugepage_invalidate   = native_hugepage_invalidate;
>  
> -	if (cpu_has_feature(CPU_FTR_ARCH_300))
> +	if (__cpu_has_feature(CPU_FTR_ARCH_300))
>  		ppc_md.register_process_table =
> native_register_proc_table;
>  }
> diff --git a/arch/powerpc/mm/hash_utils_64.c
> b/arch/powerpc/mm/hash_utils_64.c
> index 47d59a1f12f1..3509337502f6 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -529,7 +529,7 @@ static bool might_have_hea(void)
>  	 * we will never see an HEA ethernet device.
>  	 */
>  #ifdef CONFIG_IBMEBUS
> -	return !cpu_has_feature(CPU_FTR_ARCH_207S);
> +	return !__cpu_has_feature(CPU_FTR_ARCH_207S);
>  #else
>  	return false;
>  #endif
> @@ -559,7 +559,7 @@ static void __init htab_init_page_sizes(void)
>  	 * Not in the device-tree, let's fallback on known size
>  	 * list for 16M capable GP & GR
>  	 */
> -	if (mmu_has_feature(MMU_FTR_16M_PAGE))
> +	if (__mmu_has_feature(MMU_FTR_16M_PAGE))
>  		memcpy(mmu_psize_defs, mmu_psize_defaults_gp,
>  		       sizeof(mmu_psize_defaults_gp));
>  found:
> @@ -589,7 +589,7 @@ found:
>  		mmu_vmalloc_psize = MMU_PAGE_64K;
>  		if (mmu_linear_psize == MMU_PAGE_4K)
>  			mmu_linear_psize = MMU_PAGE_64K;
> -		if (mmu_has_feature(MMU_FTR_CI_LARGE_PAGE)) {
> +		if (__mmu_has_feature(MMU_FTR_CI_LARGE_PAGE)) {
>  			/*
>  			 * When running on pSeries using 64k pages
> for ioremap
>  			 * would stop us accessing the HEA ethernet.
> So if we
> @@ -763,7 +763,7 @@ static void __init htab_initialize(void)
>  	/* Initialize page sizes */
>  	htab_init_page_sizes();
>  
> -	if (mmu_has_feature(MMU_FTR_1T_SEGMENT)) {
> +	if (__mmu_has_feature(MMU_FTR_1T_SEGMENT)) {
>  		mmu_kernel_ssize = MMU_SEGSIZE_1T;
>  		mmu_highuser_ssize = MMU_SEGSIZE_1T;
>  		printk(KERN_INFO "Using 1TB segments\n");
> @@ -815,7 +815,7 @@ static void __init htab_initialize(void)
>  		/* Initialize the HPT with no entries */
>  		memset((void *)table, 0, htab_size_bytes);
>  
> -		if (!cpu_has_feature(CPU_FTR_ARCH_300))
> +		if (!__cpu_has_feature(CPU_FTR_ARCH_300))
>  			/* Set SDR1 */
>  			mtspr(SPRN_SDR1, _SDR1);
>  		else
> @@ -952,7 +952,7 @@ void hash__early_init_mmu_secondary(void)
>  {
>  	/* Initialize hash table for that CPU */
>  	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
> -		if (!cpu_has_feature(CPU_FTR_ARCH_300))
> +		if (!__cpu_has_feature(CPU_FTR_ARCH_300))
>  			mtspr(SPRN_SDR1, _SDR1);
>  		else
>  			mtspr(SPRN_PTCR,
> diff --git a/arch/powerpc/mm/hugetlbpage.c
> b/arch/powerpc/mm/hugetlbpage.c
> index 119d18611500..3be9c9e918b6 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -828,7 +828,7 @@ static int __init hugetlbpage_init(void)
>  {
>  	int psize;
>  
> -	if (!radix_enabled() && !mmu_has_feature(MMU_FTR_16M_PAGE))
> +	if (!radix_enabled() &&
> !__mmu_has_feature(MMU_FTR_16M_PAGE))
>  		return -ENODEV;
>  
>  	for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
> diff --git a/arch/powerpc/mm/mmu_context_nohash.c
> b/arch/powerpc/mm/mmu_context_nohash.c
> index 7d95bc402dba..4ec513e506fb 100644
> --- a/arch/powerpc/mm/mmu_context_nohash.c
> +++ b/arch/powerpc/mm/mmu_context_nohash.c
> @@ -442,11 +442,11 @@ void __init mmu_context_init(void)
>  	 * present if needed.
>  	 *      -- BenH
>  	 */
> -	if (mmu_has_feature(MMU_FTR_TYPE_8xx)) {
> +	if (__mmu_has_feature(MMU_FTR_TYPE_8xx)) {
>  		first_context = 0;
>  		last_context = 15;
>  		no_selective_tlbil = true;
> -	} else if (mmu_has_feature(MMU_FTR_TYPE_47x)) {
> +	} else if (__mmu_has_feature(MMU_FTR_TYPE_47x)) {
>  		first_context = 1;
>  		last_context = 65535;
>  		no_selective_tlbil = false;
> diff --git a/arch/powerpc/mm/pgtable-hash64.c
> b/arch/powerpc/mm/pgtable-hash64.c
> index c23e286a6b8f..d9b5804bdce9 100644
> --- a/arch/powerpc/mm/pgtable-hash64.c
> +++ b/arch/powerpc/mm/pgtable-hash64.c
> @@ -313,7 +313,7 @@ pmd_t hash__pmdp_huge_get_and_clear(struct
> mm_struct *mm,
>  int hash__has_transparent_hugepage(void)
>  {
>  
> -	if (!mmu_has_feature(MMU_FTR_16M_PAGE))
> +	if (!__mmu_has_feature(MMU_FTR_16M_PAGE))
>  		return 0;
>  	/*
>  	 * We support THP only if PMD_SIZE is 16MB.
> diff --git a/arch/powerpc/mm/ppc_mmu_32.c
> b/arch/powerpc/mm/ppc_mmu_32.c
> index 2a049fb8523d..0915733d8ae4 100644
> --- a/arch/powerpc/mm/ppc_mmu_32.c
> +++ b/arch/powerpc/mm/ppc_mmu_32.c
> @@ -187,7 +187,7 @@ void __init MMU_init_hw(void)
>  	extern unsigned int hash_page[];
>  	extern unsigned int flush_hash_patch_A[],
> flush_hash_patch_B[];
>  
> -	if (!mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
> +	if (!__mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
>  		/*
>  		 * Put a blr (procedure return) instruction at the
>  		 * start of hash_page, since we can still get DSI
> diff --git a/arch/powerpc/platforms/44x/iss4xx.c
> b/arch/powerpc/platforms/44x/iss4xx.c
> index c7c6758b3cfe..506b711828b0 100644
> --- a/arch/powerpc/platforms/44x/iss4xx.c
> +++ b/arch/powerpc/platforms/44x/iss4xx.c
> @@ -131,7 +131,7 @@ static struct smp_ops_t iss_smp_ops = {
>  
>  static void __init iss4xx_smp_init(void)
>  {
> -	if (mmu_has_feature(MMU_FTR_TYPE_47x))
> +	if (__mmu_has_feature(MMU_FTR_TYPE_47x))
>  		smp_ops = &iss_smp_ops;
>  }
>  
> diff --git a/arch/powerpc/platforms/44x/ppc476.c
> b/arch/powerpc/platforms/44x/ppc476.c
> index c11ce6516c8f..895dc63d6a49 100644
> --- a/arch/powerpc/platforms/44x/ppc476.c
> +++ b/arch/powerpc/platforms/44x/ppc476.c
> @@ -201,7 +201,7 @@ static struct smp_ops_t ppc47x_smp_ops = {
>  
>  static void __init ppc47x_smp_init(void)
>  {
> -	if (mmu_has_feature(MMU_FTR_TYPE_47x))
> +	if (__mmu_has_feature(MMU_FTR_TYPE_47x))
>  		smp_ops = &ppc47x_smp_ops;
>  }
>  
> diff --git a/arch/powerpc/platforms/85xx/smp.c
> b/arch/powerpc/platforms/85xx/smp.c
> index fe9f19e5e935..a4705d964187 100644
> --- a/arch/powerpc/platforms/85xx/smp.c
> +++ b/arch/powerpc/platforms/85xx/smp.c
> @@ -280,7 +280,7 @@ static int smp_85xx_kick_cpu(int nr)
>  
>  #ifdef CONFIG_PPC64
>  	if (threads_per_core == 2) {
> -		if (WARN_ON_ONCE(!cpu_has_feature(CPU_FTR_SMT)))
> +		if (WARN_ON_ONCE(!__cpu_has_feature(CPU_FTR_SMT)))
>  			return -ENOENT;
>  
>  		booting_thread_hwid = cpu_thread_in_core(nr);
> @@ -462,7 +462,7 @@ static void mpc85xx_smp_machine_kexec(struct
> kimage *image)
>  
>  static void smp_85xx_basic_setup(int cpu_nr)
>  {
> -	if (cpu_has_feature(CPU_FTR_DBELL))
> +	if (__cpu_has_feature(CPU_FTR_DBELL))
>  		doorbell_setup_this_cpu();
>  }
>  
> @@ -485,7 +485,7 @@ void __init mpc85xx_smp_init(void)
>  	} else
>  		smp_85xx_ops.setup_cpu = smp_85xx_basic_setup;
>  
> -	if (cpu_has_feature(CPU_FTR_DBELL)) {
> +	if (__cpu_has_feature(CPU_FTR_DBELL)) {
>  		/*
>  		 * If left NULL, .message_pass defaults to
>  		 * smp_muxed_ipi_message_pass
> diff --git a/arch/powerpc/platforms/cell/pervasive.c
> b/arch/powerpc/platforms/cell/pervasive.c
> index d17e98bc0c10..f053602e63fa 100644
> --- a/arch/powerpc/platforms/cell/pervasive.c
> +++ b/arch/powerpc/platforms/cell/pervasive.c
> @@ -115,7 +115,7 @@ void __init cbe_pervasive_init(void)
>  {
>  	int cpu;
>  
> -	if (!cpu_has_feature(CPU_FTR_PAUSE_ZERO))
> +	if (!__cpu_has_feature(CPU_FTR_PAUSE_ZERO))
>  		return;
>  
>  	for_each_possible_cpu(cpu) {
> diff --git a/arch/powerpc/platforms/cell/smp.c
> b/arch/powerpc/platforms/cell/smp.c
> index 895560f4be69..4d373c6375a8 100644
> --- a/arch/powerpc/platforms/cell/smp.c
> +++ b/arch/powerpc/platforms/cell/smp.c
> @@ -148,7 +148,7 @@ void __init smp_init_cell(void)
>  	smp_ops = &bpa_iic_smp_ops;
>  
>  	/* Mark threads which are still spinning in hold loops. */
> -	if (cpu_has_feature(CPU_FTR_SMT)) {
> +	if (__cpu_has_feature(CPU_FTR_SMT)) {
>  		for_each_present_cpu(i) {
>  			if (cpu_thread_in_core(i) == 0)
>  				cpumask_set_cpu(i, &of_spin_map);
> diff --git a/arch/powerpc/platforms/powermac/setup.c
> b/arch/powerpc/platforms/powermac/setup.c
> index 8dd78f4e1af4..615bb39b82d3 100644
> --- a/arch/powerpc/platforms/powermac/setup.c
> +++ b/arch/powerpc/platforms/powermac/setup.c
> @@ -248,7 +248,7 @@ static void __init ohare_init(void)
>  static void __init l2cr_init(void)
>  {
>  	/* Checks "l2cr-value" property in the registry */
> -	if (cpu_has_feature(CPU_FTR_L2CR)) {
> +	if (__cpu_has_feature(CPU_FTR_L2CR)) {
>  		struct device_node *np = of_find_node_by_name(NULL,
> "cpus");
>  		if (np == 0)
>  			np = of_find_node_by_type(NULL, "cpu");
> diff --git a/arch/powerpc/platforms/powermac/smp.c
> b/arch/powerpc/platforms/powermac/smp.c
> index 28a147ca32ba..d917ebad551e 100644
> --- a/arch/powerpc/platforms/powermac/smp.c
> +++ b/arch/powerpc/platforms/powermac/smp.c
> @@ -670,7 +670,7 @@ volatile static long int core99_l3_cache;
>  static void core99_init_caches(int cpu)
>  {
>  #ifndef CONFIG_PPC64
> -	if (!cpu_has_feature(CPU_FTR_L2CR))
> +	if (!__cpu_has_feature(CPU_FTR_L2CR))
>  		return;
>  
>  	if (cpu == 0) {
> @@ -683,7 +683,7 @@ static void core99_init_caches(int cpu)
>  		printk("CPU%d: L2CR set to %lx\n", cpu,
> core99_l2_cache);
>  	}
>  
> -	if (!cpu_has_feature(CPU_FTR_L3CR))
> +	if (!__cpu_has_feature(CPU_FTR_L3CR))
>  		return;
>  
>  	if (cpu == 0){
> diff --git a/arch/powerpc/platforms/powernv/setup.c
> b/arch/powerpc/platforms/powernv/setup.c
> index 8492bbbcfc08..607a05233119 100644
> --- a/arch/powerpc/platforms/powernv/setup.c
> +++ b/arch/powerpc/platforms/powernv/setup.c
> @@ -273,7 +273,7 @@ static int __init pnv_probe(void)
>  	if (!of_flat_dt_is_compatible(root, "ibm,powernv"))
>  		return 0;
>  
> -	if (IS_ENABLED(CONFIG_PPC_RADIX_MMU) && radix_enabled())
> +	if (IS_ENABLED(CONFIG_PPC_RADIX_MMU) && __radix_enabled())
>  		radix_init_native();
>  	else if (IS_ENABLED(CONFIG_PPC_STD_MMU_64))
>  		hpte_init_native();
> diff --git a/arch/powerpc/platforms/powernv/smp.c
> b/arch/powerpc/platforms/powernv/smp.c
> index ad7b1a3dbed0..a9f20306d305 100644
> --- a/arch/powerpc/platforms/powernv/smp.c
> +++ b/arch/powerpc/platforms/powernv/smp.c
> @@ -50,7 +50,7 @@ static void pnv_smp_setup_cpu(int cpu)
>  		xics_setup_cpu();
>  
>  #ifdef CONFIG_PPC_DOORBELL
> -	if (cpu_has_feature(CPU_FTR_DBELL))
> +	if (__cpu_has_feature(CPU_FTR_DBELL))
>  		doorbell_setup_this_cpu();
>  #endif
>  }
> @@ -233,7 +233,7 @@ static int pnv_cpu_bootable(unsigned int nr)
>  	 * switches. So on those machines we ignore the
> smt_enabled_at_boot
>  	 * setting (smt-enabled on the kernel command line).
>  	 */
> -	if (cpu_has_feature(CPU_FTR_ARCH_207S))
> +	if (__cpu_has_feature(CPU_FTR_ARCH_207S))
>  		return 1;
>  
>  	return smp_generic_cpu_bootable(nr);
> diff --git a/arch/powerpc/platforms/powernv/subcore.c
> b/arch/powerpc/platforms/powernv/subcore.c
> index 0babef11136f..abf308fbb385 100644
> --- a/arch/powerpc/platforms/powernv/subcore.c
> +++ b/arch/powerpc/platforms/powernv/subcore.c
> @@ -407,7 +407,7 @@ static DEVICE_ATTR(subcores_per_core, 0644,
>  
>  static int subcore_init(void)
>  {
> -	if (!cpu_has_feature(CPU_FTR_SUBCORE))
> +	if (!__cpu_has_feature(CPU_FTR_SUBCORE))
>  		return 0;
>  
>  	/*
> diff --git a/arch/powerpc/platforms/pseries/lpar.c
> b/arch/powerpc/platforms/pseries/lpar.c
> index 03ff9867a610..a54de1cff935 100644
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -76,10 +76,10 @@ void vpa_init(int cpu)
>  	 */
>  	WARN_ON(cpu != smp_processor_id());
>  
> -	if (cpu_has_feature(CPU_FTR_ALTIVEC))
> +	if (__cpu_has_feature(CPU_FTR_ALTIVEC))
>  		lppaca_of(cpu).vmxregs_in_use = 1;
>  
> -	if (cpu_has_feature(CPU_FTR_ARCH_207S))
> +	if (__cpu_has_feature(CPU_FTR_ARCH_207S))
>  		lppaca_of(cpu).ebb_regs_in_use = 1;
>  
>  	addr = __pa(&lppaca_of(cpu));
> diff --git a/arch/powerpc/platforms/pseries/smp.c
> b/arch/powerpc/platforms/pseries/smp.c
> index f6f83aeccaaa..57111bae6eec 100644
> --- a/arch/powerpc/platforms/pseries/smp.c
> +++ b/arch/powerpc/platforms/pseries/smp.c
> @@ -143,7 +143,7 @@ static void smp_setup_cpu(int cpu)
>  {
>  	if (cpu != boot_cpuid)
>  		xics_setup_cpu();
> -	if (cpu_has_feature(CPU_FTR_DBELL))
> +	if (__cpu_has_feature(CPU_FTR_DBELL))
>  		doorbell_setup_this_cpu();
>  
>  	if (firmware_has_feature(FW_FEATURE_SPLPAR))
> @@ -200,7 +200,7 @@ static __init void pSeries_smp_probe(void)
>  {
>  	xics_smp_probe();
>  
> -	if (cpu_has_feature(CPU_FTR_DBELL)) {
> +	if (__cpu_has_feature(CPU_FTR_DBELL)) {
>  		xics_cause_ipi = smp_ops->cause_ipi;
>  		smp_ops->cause_ipi = pSeries_cause_ipi_mux;
>  	}
> @@ -232,7 +232,7 @@ void __init smp_init_pseries(void)
>  	 * query-cpu-stopped-state.
>  	 */
>  	if (rtas_token("query-cpu-stopped-state") ==
> RTAS_UNKNOWN_SERVICE) {
> -		if (cpu_has_feature(CPU_FTR_SMT)) {
> +		if (__cpu_has_feature(CPU_FTR_SMT)) {
>  			for_each_present_cpu(i) {
>  				if (cpu_thread_in_core(i) == 0)
>  					cpumask_set_cpu(i,
> of_spin_mask);

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers
  2016-07-13 12:09   ` Benjamin Herrenschmidt
@ 2016-07-13 13:58     ` Aneesh Kumar K.V
  2016-07-13 14:06     ` Aneesh Kumar K.V
  1 sibling, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13 13:58 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, paulus, mpe; +Cc: linuxppc-dev

Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:

> On Wed, 2016-07-13 at 15:08 +0530, Aneesh Kumar K.V wrote:
>> This switch most of the early feature check to use the non static key
>> variant of the function. In later patches we will be switching
>> cpu_has_feature and mmu_has_feature to use static keys and we can use
>> them only after static key/jump label is initialized. Any check for
>> feature before jump label init should be done using this new helper.
>
> I'm not sure about that. This is converting way way way way more
> functions than is needed. Especially if Michael applies my series
> there will be very little code run before the patching, really only the
> MMU initialization....
>

But then all of them are __init functions, and that helps in:

1) Avoiding adding their location information to the __jump_table section. (I assume
that will also have an impact on vmlinux size?)
2) A simple rule regarding when to use __cpu_has_feature and when to use
cpu_has_feature (see the sketch below).
3) No need to runtime-patch a lot of these __init code paths, which we will
throw out after boot.
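
As a sketch of what point 2 buys us (illustrative only, the helpers
are made up):

	/* __init code: direct bitmap check, no jump label involved */
	static void __init early_mmu_setup(void)
	{
		if (__mmu_has_feature(MMU_FTR_1T_SEGMENT))
			setup_1t_segments();	/* made-up helper */
	}

	/* post-boot hot path: jump label variant */
	void hot_path(void)
	{
		if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
			use_1t_segments();	/* made-up helper */
	}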

Do you see any drawback in making all the __init functions use
__cpu_has_feature()?

-aneesh

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers
  2016-07-13 12:09   ` Benjamin Herrenschmidt
  2016-07-13 13:58     ` Aneesh Kumar K.V
@ 2016-07-13 14:06     ` Aneesh Kumar K.V
  2016-07-13 14:45       ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 15+ messages in thread
From: Aneesh Kumar K.V @ 2016-07-13 14:06 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, paulus, mpe; +Cc: linuxppc-dev

Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:

> On Wed, 2016-07-13 at 15:08 +0530, Aneesh Kumar K.V wrote:
>> This switch most of the early feature check to use the non static key
>> variant of the function. In later patches we will be switching
>> cpu_has_feature and mmu_has_feature to use static keys and we can use
>> them only after static key/jump label is initialized. Any check for
>> feature before jump label init should be done using this new helper.
>
> I'm not sure about that. This is converting way way way way more
> functions than is needed. Especially if Michael applies my series
> there will be very little code run before the patching, really only the
> MMU initialization....


Michael is also running into boot issues with the early init rewrite
patch series on G5. That is why I didn't rebase my patches on top of
those changes.

-aneesh

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers
  2016-07-13 14:06     ` Aneesh Kumar K.V
@ 2016-07-13 14:45       ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 15+ messages in thread
From: Benjamin Herrenschmidt @ 2016-07-13 14:45 UTC (permalink / raw)
  To: Aneesh Kumar K.V, paulus, mpe; +Cc: linuxppc-dev

On Wed, 2016-07-13 at 19:36 +0530, Aneesh Kumar K.V wrote:
> > I'm not sure about that. This is converting way way way way more
> > functions than is needed. Especially if Michael applies my series
> > there will be very little code run before the patching, really only
> the
> > MMU initialization....
> 
> 
> Michael is also running into boot issues with the early init rewrite
> patch series on G5. That is why I didn't rebase my patches on top of
> those changes.

Well that shouldn't be too hard to fix when I'm back next week. It's
working fine on my quad G5 ;-)

Cheers,
Ben.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2016-07-13 14:45 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-13  9:38 [PATCH for-4.8_set3 00/10] Use jump label for cpu/mmu_has_feature Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 01/10] powerpc/mm: Add __cpu/__mmu_has_feature Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers Aneesh Kumar K.V
2016-07-13 12:09   ` Benjamin Herrenschmidt
2016-07-13 13:58     ` Aneesh Kumar K.V
2016-07-13 14:06     ` Aneesh Kumar K.V
2016-07-13 14:45       ` Benjamin Herrenschmidt
2016-07-13  9:38 ` [PATCH for-4.8 03/10] powerpc/mm/radix: Add radix_set_pte to use in early init Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 04/10] jump_label: make it possible for the archs to invoke jump_label_init() much earlier Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 05/10] powerpc: Call jump_label_init early Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 06/10] powerpc: kill mfvtb() Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 07/10] powerpc: move the cpu_has_feature to a separate file Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 08/10] powerpc: use the jump label for cpu_has_feature Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 09/10] powerpc: use jump label for mmu_has_feature Aneesh Kumar K.V
2016-07-13  9:38 ` [PATCH for-4.8 10/10] powerpc/mm: Catch the usage of cpu/mmu_has_feature before jump label init Aneesh Kumar K.V

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).