linux-next.vger.kernel.org archive mirror
* linux-next: build failure after merge of the percpu tree
@ 2014-08-27  4:22 Stephen Rothwell
  2014-08-27 15:26 ` [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses" Tejun Heo
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Rothwell @ 2014-08-27  4:22 UTC (permalink / raw)
  To: Tejun Heo, Rusty Russell, Christoph Lameter, Ingo Molnar
  Cc: linux-next, linux-kernel, Christoph Lameter,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman

Hi all,

After merging the percpu tree, today's linux-next build (powerpc
ppc64_defconfig) failed like this:

In file included from arch/powerpc/include/asm/xics.h:9:0,
                 from arch/powerpc/kernel/asm-offsets.c:47:
include/linux/interrupt.h:372:0: warning: "set_softirq_pending" redefined
 #define set_softirq_pending(x) (local_softirq_pending() = (x))
 ^
In file included from include/linux/hardirq.h:8:0,
                 from include/linux/memcontrol.h:24,
                 from include/linux/swap.h:8,
                 from include/linux/suspend.h:4,
                 from arch/powerpc/kernel/asm-offsets.c:24:
arch/powerpc/include/asm/hardirq.h:25:0: note: this is the location of the previous definition
 #define set_softirq_pending(x) __this_cpu_write(irq_stat._softirq_pending, (x))
 ^

I got lots (and lots :-() of these and some were considered errors
(powerpc is built with -Werror in arch/powerpc).

Caused by commit 5828f666c069 ("powerpc: Replace __get_cpu_var uses").
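
For context, the redefinition arises because include/linux/interrupt.h
carries generic fallbacks that an architecture is expected to opt out of.
A rough sketch follows; the guard macro name is taken from the fix later
in this thread, and the exact generic header contents are an assumption:

/* include/linux/interrupt.h: generic fallbacks, provided only when the
 * architecture has not claimed them via __ARCH_SET_SOFTIRQ_PENDING */
#ifndef __ARCH_SET_SOFTIRQ_PENDING
#define set_softirq_pending(x)	(local_softirq_pending() = (x))
#define or_softirq_pending(x)	(local_softirq_pending() |= (x))
#endif

/* arch/powerpc/include/asm/hardirq.h after 5828f666c069 defines all three
 * itself but never defines the opt-out macro, so both sets are seen.
 * (Note also the single-underscore _softirq_pending below, which looks
 * like a typo against the __softirq_pending field used in the read.) */
#define local_softirq_pending()	__this_cpu_read(irq_stat.__softirq_pending)
#define set_softirq_pending(x)	__this_cpu_write(irq_stat._softirq_pending, (x))
#define or_softirq_pending(x)	__this_cpu_or(irq_stat._softirq_pending, (x))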

I have used the percpu tree from next-20140826 for today.
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27  4:22 linux-next: build failure after merge of the percpu tree Stephen Rothwell
@ 2014-08-27 15:26 ` Tejun Heo
  2014-08-27 15:56   ` Christoph Lameter
  2014-08-27 21:43   ` Stephen Rothwell
  0 siblings, 2 replies; 17+ messages in thread
From: Tejun Heo @ 2014-08-27 15:26 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Rusty Russell, Christoph Lameter, Ingo Molnar, linux-next,
	linux-kernel, Christoph Lameter, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman

From 23f66e2d661b4d3226d16e25910a9e9472ce2410 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Wed, 27 Aug 2014 11:18:29 -0400

This reverts commit 5828f666c069af74e00db21559f1535103c9f79a due to
build failure after merging with pending powerpc changes.

Link: http://lkml.kernel.org/g/20140827142243.6277eaff@canb.auug.org.au

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
Christoph, let's route the updated patch through powerpc tree.

Thanks.
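
For readers skimming the hunks below: the reverted commit mechanically
mapped the old __get_cpu_var() forms onto the this_cpu accessors, and this
revert restores the former. A minimal sketch of that mapping, using a
made-up demo_counter per-cpu variable:

#include <linux/percpu.h>

DEFINE_PER_CPU(int, demo_counter);	/* hypothetical per-cpu variable */

static void demo_old_style(void)	/* the form this revert restores */
{
	int *p = &__get_cpu_var(demo_counter);	/* take a pointer */

	__get_cpu_var(demo_counter) = 0;	/* lvalue write */
	__get_cpu_var(demo_counter)++;		/* lvalue increment */
	(void)p;
}

static void demo_new_style(void)	/* what 5828f666c069 converted to */
{
	int *p = this_cpu_ptr(&demo_counter);

	__this_cpu_write(demo_counter, 0);
	__this_cpu_inc(demo_counter);
	(void)p;
}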

 arch/powerpc/include/asm/hardirq.h                |  4 +---
 arch/powerpc/include/asm/tlbflush.h               |  4 ++--
 arch/powerpc/include/asm/xics.h                   |  8 ++++----
 arch/powerpc/kernel/dbell.c                       |  2 +-
 arch/powerpc/kernel/hw_breakpoint.c               |  6 +++---
 arch/powerpc/kernel/iommu.c                       |  2 +-
 arch/powerpc/kernel/irq.c                         |  4 ++--
 arch/powerpc/kernel/kgdb.c                        |  2 +-
 arch/powerpc/kernel/kprobes.c                     |  6 +++---
 arch/powerpc/kernel/mce.c                         | 24 +++++++++++------------
 arch/powerpc/kernel/process.c                     | 10 +++++-----
 arch/powerpc/kernel/smp.c                         |  6 +++---
 arch/powerpc/kernel/sysfs.c                       |  4 ++--
 arch/powerpc/kernel/time.c                        | 22 ++++++++++-----------
 arch/powerpc/kernel/traps.c                       |  8 ++++----
 arch/powerpc/kvm/e500.c                           | 14 ++++++-------
 arch/powerpc/kvm/e500mc.c                         |  4 ++--
 arch/powerpc/mm/hash_native_64.c                  |  2 +-
 arch/powerpc/mm/hash_utils_64.c                   |  2 +-
 arch/powerpc/mm/hugetlbpage-book3e.c              |  6 +++---
 arch/powerpc/mm/hugetlbpage.c                     |  2 +-
 arch/powerpc/perf/core-book3s.c                   | 22 ++++++++++-----------
 arch/powerpc/perf/core-fsl-emb.c                  |  6 +++---
 arch/powerpc/platforms/cell/interrupt.c           |  6 +++---
 arch/powerpc/platforms/powernv/opal-tracepoints.c |  4 ++--
 arch/powerpc/platforms/ps3/interrupt.c            |  2 +-
 arch/powerpc/platforms/pseries/dtl.c              |  2 +-
 arch/powerpc/platforms/pseries/hvCall_inst.c      |  4 ++--
 arch/powerpc/platforms/pseries/iommu.c            |  8 ++++----
 arch/powerpc/platforms/pseries/lpar.c             |  6 +++---
 arch/powerpc/platforms/pseries/ras.c              |  4 ++--
 arch/powerpc/sysdev/xics/xics-common.c            |  2 +-
 32 files changed, 103 insertions(+), 105 deletions(-)

diff --git a/arch/powerpc/include/asm/hardirq.h b/arch/powerpc/include/asm/hardirq.h
index 8d907ba..1bbb301 100644
--- a/arch/powerpc/include/asm/hardirq.h
+++ b/arch/powerpc/include/asm/hardirq.h
@@ -21,9 +21,7 @@ DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
 
 #define __ARCH_IRQ_STAT
 
-#define local_softirq_pending()	__this_cpu_read(irq_stat.__softirq_pending)
-#define set_softirq_pending(x) __this_cpu_write(irq_stat._softirq_pending, (x))
-#define or_softirq_pending(x) __this_cpu_or(irq_stat._softirq_pending, (x))
+#define local_softirq_pending()	__get_cpu_var(irq_stat).__softirq_pending
 
 static inline void ack_bad_irq(unsigned int irq)
 {
diff --git a/arch/powerpc/include/asm/tlbflush.h b/arch/powerpc/include/asm/tlbflush.h
index cd7c271..2def01ed 100644
--- a/arch/powerpc/include/asm/tlbflush.h
+++ b/arch/powerpc/include/asm/tlbflush.h
@@ -107,14 +107,14 @@ extern void __flush_tlb_pending(struct ppc64_tlb_batch *batch);
 
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
 
 	batch->active = 1;
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
 {
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
 
 	if (batch->index)
 		__flush_tlb_pending(batch);
diff --git a/arch/powerpc/include/asm/xics.h b/arch/powerpc/include/asm/xics.h
index 5007ad0..282d43a 100644
--- a/arch/powerpc/include/asm/xics.h
+++ b/arch/powerpc/include/asm/xics.h
@@ -97,7 +97,7 @@ DECLARE_PER_CPU(struct xics_cppr, xics_cppr);
 
 static inline void xics_push_cppr(unsigned int vec)
 {
-	struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr);
+	struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr);
 
 	if (WARN_ON(os_cppr->index >= MAX_NUM_PRIORITIES - 1))
 		return;
@@ -110,7 +110,7 @@ static inline void xics_push_cppr(unsigned int vec)
 
 static inline unsigned char xics_pop_cppr(void)
 {
-	struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr);
+	struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr);
 
 	if (WARN_ON(os_cppr->index < 1))
 		return LOWEST_PRIORITY;
@@ -120,7 +120,7 @@ static inline unsigned char xics_pop_cppr(void)
 
 static inline void xics_set_base_cppr(unsigned char cppr)
 {
-	struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr);
+	struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr);
 
 	/* we only really want to set the priority when there's
 	 * just one cppr value on the stack
@@ -132,7 +132,7 @@ static inline void xics_set_base_cppr(unsigned char cppr)
 
 static inline unsigned char xics_cppr_top(void)
 {
-	struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr);
+	struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr);
 	
 	return os_cppr->stack[os_cppr->index];
 }
diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c
index f421781..d55c76c 100644
--- a/arch/powerpc/kernel/dbell.c
+++ b/arch/powerpc/kernel/dbell.c
@@ -41,7 +41,7 @@ void doorbell_exception(struct pt_regs *regs)
 
 	may_hard_irq_enable();
 
-	__this_cpu_inc(irq_stat.doorbell_irqs);
+	__get_cpu_var(irq_stat).doorbell_irqs++;
 
 	smp_ipi_demux();
 
diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
index b62f90e..0bb5918 100644
--- a/arch/powerpc/kernel/hw_breakpoint.c
+++ b/arch/powerpc/kernel/hw_breakpoint.c
@@ -63,7 +63,7 @@ int hw_breakpoint_slots(int type)
 int arch_install_hw_breakpoint(struct perf_event *bp)
 {
 	struct arch_hw_breakpoint *info = counter_arch_bp(bp);
-	struct perf_event **slot = this_cpu_ptr(&bp_per_reg);
+	struct perf_event **slot = &__get_cpu_var(bp_per_reg);
 
 	*slot = bp;
 
@@ -88,7 +88,7 @@ int arch_install_hw_breakpoint(struct perf_event *bp)
  */
 void arch_uninstall_hw_breakpoint(struct perf_event *bp)
 {
-	struct perf_event **slot = this_cpu_ptr(&bp_per_reg);
+	struct perf_event **slot = &__get_cpu_var(bp_per_reg);
 
 	if (*slot != bp) {
 		WARN_ONCE(1, "Can't find the breakpoint");
@@ -226,7 +226,7 @@ int __kprobes hw_breakpoint_handler(struct die_args *args)
 	 */
 	rcu_read_lock();
 
-	bp = __this_cpu_read(bp_per_reg);
+	bp = __get_cpu_var(bp_per_reg);
 	if (!bp)
 		goto out;
 	info = counter_arch_bp(bp);
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 71e60bf..a10642a 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -208,7 +208,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
 	 * We don't need to disable preemption here because any CPU can
 	 * safely use any IOMMU pool.
 	 */
-	pool_nr = __this_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);
+	pool_nr = __raw_get_cpu_var(iommu_pool_hash) & (tbl->nr_pools - 1);
 
 	if (largealloc)
 		pool = &(tbl->large_pool);
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 74d40c6..4c5891d 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -114,7 +114,7 @@ static inline notrace void set_soft_enabled(unsigned long enable)
 static inline notrace int decrementer_check_overflow(void)
 {
  	u64 now = get_tb_or_rtc();
-	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+ 	u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
  
 	return now >= *next_tb;
 }
@@ -499,7 +499,7 @@ void __do_irq(struct pt_regs *regs)
 
 	/* And finally process it */
 	if (unlikely(irq == NO_IRQ))
-		__this_cpu_inc(irq_stat.spurious_irqs);
+		__get_cpu_var(irq_stat).spurious_irqs++;
 	else
 		generic_handle_irq(irq);
 
diff --git a/arch/powerpc/kernel/kgdb.c b/arch/powerpc/kernel/kgdb.c
index e77c3cc..85046573 100644
--- a/arch/powerpc/kernel/kgdb.c
+++ b/arch/powerpc/kernel/kgdb.c
@@ -155,7 +155,7 @@ static int kgdb_singlestep(struct pt_regs *regs)
 {
 	struct thread_info *thread_info, *exception_thread_info;
 	struct thread_info *backup_current_thread_info =
-		this_cpu_ptr(&kgdb_thread_info);
+		&__get_cpu_var(kgdb_thread_info);
 
 	if (user_mode(regs))
 		return 0;
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 7c053f2..2f72af8 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -119,7 +119,7 @@ static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
 
 static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
 {
-	__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+	__get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
 	kcb->kprobe_status = kcb->prev_kprobe.status;
 	kcb->kprobe_saved_msr = kcb->prev_kprobe.saved_msr;
 }
@@ -127,7 +127,7 @@ static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
 static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
 				struct kprobe_ctlblk *kcb)
 {
-	__this_cpu_write(current_kprobe, p);
+	__get_cpu_var(current_kprobe) = p;
 	kcb->kprobe_saved_msr = regs->msr;
 }
 
@@ -192,7 +192,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
 				ret = 1;
 				goto no_kprobe;
 			}
-			p = __this_cpu_read(current_kprobe);
+			p = __get_cpu_var(current_kprobe);
 			if (p->break_handler && p->break_handler(p, regs)) {
 				goto ss_probe;
 			}
diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index 15c99b6..a7fd4cb 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -73,8 +73,8 @@ void save_mce_event(struct pt_regs *regs, long handled,
 		    uint64_t nip, uint64_t addr)
 {
 	uint64_t srr1;
-	int index = __this_cpu_inc_return(mce_nest_count);
-	struct machine_check_event *mce = this_cpu_ptr(&mce_event[index]);
+	int index = __get_cpu_var(mce_nest_count)++;
+	struct machine_check_event *mce = &__get_cpu_var(mce_event[index]);
 
 	/*
 	 * Return if we don't have enough space to log mce event.
@@ -143,7 +143,7 @@ void save_mce_event(struct pt_regs *regs, long handled,
  */
 int get_mce_event(struct machine_check_event *mce, bool release)
 {
-	int index = __this_cpu_read(mce_nest_count) - 1;
+	int index = __get_cpu_var(mce_nest_count) - 1;
 	struct machine_check_event *mc_evt;
 	int ret = 0;
 
@@ -153,7 +153,7 @@ int get_mce_event(struct machine_check_event *mce, bool release)
 
 	/* Check if we have MCE info to process. */
 	if (index < MAX_MC_EVT) {
-		mc_evt = this_cpu_ptr(&mce_event[index]);
+		mc_evt = &__get_cpu_var(mce_event[index]);
 		/* Copy the event structure and release the original */
 		if (mce)
 			*mce = *mc_evt;
@@ -163,7 +163,7 @@ int get_mce_event(struct machine_check_event *mce, bool release)
 	}
 	/* Decrement the count to free the slot. */
 	if (release)
-		__this_cpu_dec(mce_nest_count);
+		__get_cpu_var(mce_nest_count)--;
 
 	return ret;
 }
@@ -184,13 +184,13 @@ void machine_check_queue_event(void)
 	if (!get_mce_event(&evt, MCE_EVENT_RELEASE))
 		return;
 
-	index = __this_cpu_inc_return(mce_queue_count);
+	index = __get_cpu_var(mce_queue_count)++;
 	/* If queue is full, just return for now. */
 	if (index >= MAX_MC_EVT) {
-		__this_cpu_dec(mce_queue_count);
+		__get_cpu_var(mce_queue_count)--;
 		return;
 	}
-	memcpy(this_cpu_ptr(&mce_event_queue[index]), &evt, sizeof(evt));
+	__get_cpu_var(mce_event_queue[index]) = evt;
 
 	/* Queue irq work to process this event later. */
 	irq_work_queue(&mce_event_process_work);
@@ -208,11 +208,11 @@ static void machine_check_process_queued_event(struct irq_work *work)
 	 * For now just print it to console.
 	 * TODO: log this error event to FSP or nvram.
 	 */
-	while (__this_cpu_read(mce_queue_count) > 0) {
-		index = __this_cpu_read(mce_queue_count) - 1;
+	while (__get_cpu_var(mce_queue_count) > 0) {
+		index = __get_cpu_var(mce_queue_count) - 1;
 		machine_check_print_event_info(
-				this_cpu_ptr(&mce_event_queue[index]));
-		__this_cpu_dec(mce_queue_count);
+				&__get_cpu_var(mce_event_queue[index]));
+		__get_cpu_var(mce_queue_count)--;
 	}
 }
 
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 2df2f29..bf44ae9 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -498,7 +498,7 @@ static inline int set_dawr(struct arch_hw_breakpoint *brk)
 
 void __set_breakpoint(struct arch_hw_breakpoint *brk)
 {
-	__this_cpu_write(current_brk, *brk);
+	__get_cpu_var(current_brk) = *brk;
 
 	if (cpu_has_feature(CPU_FTR_DAWR))
 		set_dawr(brk);
@@ -841,7 +841,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
  * schedule DABR
  */
 #ifndef CONFIG_HAVE_HW_BREAKPOINT
-	if (unlikely(!hw_brk_match(this_cpu_ptr(&current_brk), &new->thread.hw_brk)))
+	if (unlikely(!hw_brk_match(&__get_cpu_var(current_brk), &new->thread.hw_brk)))
 		__set_breakpoint(&new->thread.hw_brk);
 #endif /* CONFIG_HAVE_HW_BREAKPOINT */
 #endif
@@ -855,7 +855,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	 * Collect processor utilization data per process
 	 */
 	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
-		struct cpu_usage *cu = this_cpu_ptr(&cpu_usage_array);
+		struct cpu_usage *cu = &__get_cpu_var(cpu_usage_array);
 		long unsigned start_tb, current_tb;
 		start_tb = old_thread->start_tb;
 		cu->current_tb = current_tb = mfspr(SPRN_PURR);
@@ -865,7 +865,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 #endif /* CONFIG_PPC64 */
 
 #ifdef CONFIG_PPC_BOOK3S_64
-	batch = this_cpu_ptr(&ppc64_tlb_batch);
+	batch = &__get_cpu_var(ppc64_tlb_batch);
 	if (batch->active) {
 		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
 		if (batch->index)
@@ -888,7 +888,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 #ifdef CONFIG_PPC_BOOK3S_64
 	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
 		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
-		batch = this_cpu_ptr(&ppc64_tlb_batch);
+		batch = &__get_cpu_var(ppc64_tlb_batch);
 		batch->active = 1;
 	}
 #endif /* CONFIG_PPC_BOOK3S_64 */
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 60391a5..a0738af 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -242,7 +242,7 @@ void smp_muxed_ipi_message_pass(int cpu, int msg)
 
 irqreturn_t smp_ipi_demux(void)
 {
-	struct cpu_messages *info = this_cpu_ptr(&ipi_message);
+	struct cpu_messages *info = &__get_cpu_var(ipi_message);
 	unsigned int all;
 
 	mb();	/* order any irq clear */
@@ -438,9 +438,9 @@ void generic_mach_cpu_die(void)
 	idle_task_exit();
 	cpu = smp_processor_id();
 	printk(KERN_DEBUG "CPU%d offline\n", cpu);
-	__this_cpu_write(cpu_state, CPU_DEAD);
+	__get_cpu_var(cpu_state) = CPU_DEAD;
 	smp_wmb();
-	while (__this_cpu_read(cpu_state) != CPU_UP_PREPARE)
+	while (__get_cpu_var(cpu_state) != CPU_UP_PREPARE)
 		cpu_relax();
 }
 
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index fa1fd8a..67fd2fd 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -394,10 +394,10 @@ void ppc_enable_pmcs(void)
 	ppc_set_pmu_inuse(1);
 
 	/* Only need to enable them once */
-	if (__this_cpu_read(pmcs_enabled))
+	if (__get_cpu_var(pmcs_enabled))
 		return;
 
-	__this_cpu_write(pmcs_enabled, 1);
+	__get_cpu_var(pmcs_enabled) = 1;
 
 	if (ppc_md.enable_pmcs)
 		ppc_md.enable_pmcs();
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 4769e5b..368ab37 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -458,9 +458,9 @@ static inline void clear_irq_work_pending(void)
 
 DEFINE_PER_CPU(u8, irq_work_pending);
 
-#define set_irq_work_pending_flag()	__this_cpu_write(irq_work_pending, 1)
-#define test_irq_work_pending()		__this_cpu_read(irq_work_pending)
-#define clear_irq_work_pending()	__this_cpu_write(irq_work_pending, 0)
+#define set_irq_work_pending_flag()	__get_cpu_var(irq_work_pending) = 1
+#define test_irq_work_pending()		__get_cpu_var(irq_work_pending)
+#define clear_irq_work_pending()	__get_cpu_var(irq_work_pending) = 0
 
 #endif /* 32 vs 64 bit */
 
@@ -482,8 +482,8 @@ void arch_irq_work_raise(void)
 void __timer_interrupt(void)
 {
 	struct pt_regs *regs = get_irq_regs();
-	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
-	struct clock_event_device *evt = this_cpu_ptr(&decrementers);
+	u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
+	struct clock_event_device *evt = &__get_cpu_var(decrementers);
 	u64 now;
 
 	trace_timer_interrupt_entry(regs);
@@ -498,7 +498,7 @@ void __timer_interrupt(void)
 		*next_tb = ~(u64)0;
 		if (evt->event_handler)
 			evt->event_handler(evt);
-		__this_cpu_inc(irq_stat.timer_irqs_event);
+		__get_cpu_var(irq_stat).timer_irqs_event++;
 	} else {
 		now = *next_tb - now;
 		if (now <= DECREMENTER_MAX)
@@ -506,13 +506,13 @@ void __timer_interrupt(void)
 		/* We may have raced with new irq work */
 		if (test_irq_work_pending())
 			set_dec(1);
-		__this_cpu_inc(irq_stat.timer_irqs_others);
+		__get_cpu_var(irq_stat).timer_irqs_others++;
 	}
 
 #ifdef CONFIG_PPC64
 	/* collect purr register values often, for accurate calculations */
 	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
-		struct cpu_usage *cu = this_cpu_ptr(&cpu_usage_array);
+		struct cpu_usage *cu = &__get_cpu_var(cpu_usage_array);
 		cu->current_tb = mfspr(SPRN_PURR);
 	}
 #endif
@@ -527,7 +527,7 @@ void __timer_interrupt(void)
 void timer_interrupt(struct pt_regs * regs)
 {
 	struct pt_regs *old_regs;
-	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+	u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
 
 	/* Ensure a positive value is written to the decrementer, or else
 	 * some CPUs will continue to take decrementer exceptions.
@@ -813,7 +813,7 @@ static void __init clocksource_init(void)
 static int decrementer_set_next_event(unsigned long evt,
 				      struct clock_event_device *dev)
 {
-	__this_cpu_write(decrementers_next_tb, get_tb_or_rtc() + evt);
+	__get_cpu_var(decrementers_next_tb) = get_tb_or_rtc() + evt;
 	set_dec(evt);
 
 	/* We may have raced with new irq work */
@@ -833,7 +833,7 @@ static void decrementer_set_mode(enum clock_event_mode mode,
 /* Interrupt handler for the timer broadcast IPI */
 void tick_broadcast_ipi_handler(void)
 {
-	u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
+	u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
 
 	*next_tb = get_tb_or_rtc();
 	__timer_interrupt();
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index e6595b7..0dc43f9 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -295,7 +295,7 @@ long machine_check_early(struct pt_regs *regs)
 {
 	long handled = 0;
 
-	__this_cpu_inc(irq_stat.mce_exceptions);
+	__get_cpu_var(irq_stat).mce_exceptions++;
 
 	if (cur_cpu_spec && cur_cpu_spec->machine_check_early)
 		handled = cur_cpu_spec->machine_check_early(regs);
@@ -304,7 +304,7 @@ long machine_check_early(struct pt_regs *regs)
 
 long hmi_exception_realmode(struct pt_regs *regs)
 {
-	__this_cpu_inc(irq_stat.hmi_exceptions);
+	__get_cpu_var(irq_stat).hmi_exceptions++;
 
 	if (ppc_md.hmi_exception_early)
 		ppc_md.hmi_exception_early(regs);
@@ -700,7 +700,7 @@ void machine_check_exception(struct pt_regs *regs)
 	enum ctx_state prev_state = exception_enter();
 	int recover = 0;
 
-	__this_cpu_inc(irq_stat.mce_exceptions);
+	__get_cpu_var(irq_stat).mce_exceptions++;
 
 	/* See if any machine dependent calls. In theory, we would want
 	 * to call the CPU first, and call the ppc_md. one if the CPU
@@ -1519,7 +1519,7 @@ void vsx_unavailable_tm(struct pt_regs *regs)
 
 void performance_monitor_exception(struct pt_regs *regs)
 {
-	__this_cpu_inc(irq_stat.pmu_irqs);
+	__get_cpu_var(irq_stat).pmu_irqs++;
 
 	perf_irq(regs);
 }
diff --git a/arch/powerpc/kvm/e500.c b/arch/powerpc/kvm/e500.c
index 1609584..2e02ed8 100644
--- a/arch/powerpc/kvm/e500.c
+++ b/arch/powerpc/kvm/e500.c
@@ -76,11 +76,11 @@ static inline int local_sid_setup_one(struct id *entry)
 	unsigned long sid;
 	int ret = -1;
 
-	sid = __this_cpu_inc_return(pcpu_last_used_sid);
+	sid = ++(__get_cpu_var(pcpu_last_used_sid));
 	if (sid < NUM_TIDS) {
-		__this_cpu_write(pcpu_sids.entry[sid], entry);
+		__get_cpu_var(pcpu_sids).entry[sid] = entry;
 		entry->val = sid;
-		entry->pentry = this_cpu_ptr(&pcpu_sids.entry[sid]);
+		entry->pentry = &__get_cpu_var(pcpu_sids).entry[sid];
 		ret = sid;
 	}
 
@@ -108,8 +108,8 @@ static inline int local_sid_setup_one(struct id *entry)
 static inline int local_sid_lookup(struct id *entry)
 {
 	if (entry && entry->val != 0 &&
-	    __this_cpu_read(pcpu_sids.entry[entry->val]) == entry &&
-	    entry->pentry == this_cpu_ptr(&pcpu_sids.entry[entry->val]))
+	    __get_cpu_var(pcpu_sids).entry[entry->val] == entry &&
+	    entry->pentry == &__get_cpu_var(pcpu_sids).entry[entry->val])
 		return entry->val;
 	return -1;
 }
@@ -117,8 +117,8 @@ static inline int local_sid_lookup(struct id *entry)
 /* Invalidate all id mappings on local core -- call with preempt disabled */
 static inline void local_sid_destroy_all(void)
 {
-	__this_cpu_write(pcpu_last_used_sid, 0);
-	memset(this_cpu_ptr(&pcpu_sids), 0, sizeof(pcpu_sids));
+	__get_cpu_var(pcpu_last_used_sid) = 0;
+	memset(&__get_cpu_var(pcpu_sids), 0, sizeof(__get_cpu_var(pcpu_sids)));
 }
 
 static void *kvmppc_e500_id_table_alloc(struct kvmppc_vcpu_e500 *vcpu_e500)
diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
index 6ef54e5..164bad2 100644
--- a/arch/powerpc/kvm/e500mc.c
+++ b/arch/powerpc/kvm/e500mc.c
@@ -141,9 +141,9 @@ static void kvmppc_core_vcpu_load_e500mc(struct kvm_vcpu *vcpu, int cpu)
 	mtspr(SPRN_GESR, vcpu->arch.shared->esr);
 
 	if (vcpu->arch.oldpir != mfspr(SPRN_PIR) ||
-	    __this_cpu_read(last_vcpu_of_lpid[vcpu->kvm->arch.lpid]) != vcpu) {
+	    __get_cpu_var(last_vcpu_of_lpid)[vcpu->kvm->arch.lpid] != vcpu) {
 		kvmppc_e500_tlbil_all(vcpu_e500);
-		__this_cpu_write(last_vcpu_of_lpid[vcpu->kvm->arch.lpid], vcpu);
+		__get_cpu_var(last_vcpu_of_lpid)[vcpu->kvm->arch.lpid] = vcpu;
 	}
 
 	kvmppc_load_guest_fp(vcpu);
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 504a16f..afc0a82 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -625,7 +625,7 @@ static void native_flush_hash_range(unsigned long number, int local)
 	unsigned long want_v;
 	unsigned long flags;
 	real_pte_t pte;
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
 	unsigned long psize = batch->psize;
 	int ssize = batch->ssize;
 	int i;
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 060d51f..daee7f4 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1314,7 +1314,7 @@ void flush_hash_range(unsigned long number, int local)
 	else {
 		int i;
 		struct ppc64_tlb_batch *batch =
-			this_cpu_ptr(&ppc64_tlb_batch);
+			&__get_cpu_var(ppc64_tlb_batch);
 
 		for (i = 0; i < number; i++)
 			flush_hash_page(batch->vpn[i], batch->pte[i],
diff --git a/arch/powerpc/mm/hugetlbpage-book3e.c b/arch/powerpc/mm/hugetlbpage-book3e.c
index ba47aaf..5e4ee25 100644
--- a/arch/powerpc/mm/hugetlbpage-book3e.c
+++ b/arch/powerpc/mm/hugetlbpage-book3e.c
@@ -33,13 +33,13 @@ static inline int tlb1_next(void)
 
 	ncams = mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY;
 
-	index = this_cpu_read(next_tlbcam_idx);
+	index = __get_cpu_var(next_tlbcam_idx);
 
 	/* Just round-robin the entries and wrap when we hit the end */
 	if (unlikely(index == ncams - 1))
-		__this_cpu_write(next_tlbcam_idx, tlbcam_index);
+		__get_cpu_var(next_tlbcam_idx) = tlbcam_index;
 	else
-		__this_cpu_inc(next_tlbcam_idx);
+		__get_cpu_var(next_tlbcam_idx)++;
 
 	return index;
 }
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 8aa04f0..7e70ae9 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -462,7 +462,7 @@ static void hugepd_free(struct mmu_gather *tlb, void *hugepte)
 {
 	struct hugepd_freelist **batchp;
 
-	batchp = this_cpu_ptr(&hugepd_freelist_cur);
+	batchp = &get_cpu_var(hugepd_freelist_cur);
 
 	if (atomic_read(&tlb->mm->mm_users) < 2 ||
 	    cpumask_equal(mm_cpumask(tlb->mm),
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 690f9c7..b7cd00b 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -339,7 +339,7 @@ static void power_pmu_bhrb_reset(void)
 
 static void power_pmu_bhrb_enable(struct perf_event *event)
 {
-	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
 
 	if (!ppmu->bhrb_nr)
 		return;
@@ -354,7 +354,7 @@ static void power_pmu_bhrb_enable(struct perf_event *event)
 
 static void power_pmu_bhrb_disable(struct perf_event *event)
 {
-	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
 
 	if (!ppmu->bhrb_nr)
 		return;
@@ -1144,7 +1144,7 @@ static void power_pmu_disable(struct pmu *pmu)
 	if (!ppmu)
 		return;
 	local_irq_save(flags);
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 
 	if (!cpuhw->disabled) {
 		/*
@@ -1211,7 +1211,7 @@ static void power_pmu_enable(struct pmu *pmu)
 		return;
 	local_irq_save(flags);
 
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 	if (!cpuhw->disabled)
 		goto out;
 
@@ -1403,7 +1403,7 @@ static int power_pmu_add(struct perf_event *event, int ef_flags)
 	 * Add the event to the list (if there is room)
 	 * and check whether the total set is still feasible.
 	 */
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 	n0 = cpuhw->n_events;
 	if (n0 >= ppmu->n_counter)
 		goto out;
@@ -1469,7 +1469,7 @@ static void power_pmu_del(struct perf_event *event, int ef_flags)
 
 	power_pmu_read(event);
 
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 	for (i = 0; i < cpuhw->n_events; ++i) {
 		if (event == cpuhw->event[i]) {
 			while (++i < cpuhw->n_events) {
@@ -1575,7 +1575,7 @@ static void power_pmu_stop(struct perf_event *event, int ef_flags)
  */
 void power_pmu_start_txn(struct pmu *pmu)
 {
-	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
 
 	perf_pmu_disable(pmu);
 	cpuhw->group_flag |= PERF_EVENT_TXN;
@@ -1589,7 +1589,7 @@ void power_pmu_start_txn(struct pmu *pmu)
  */
 void power_pmu_cancel_txn(struct pmu *pmu)
 {
-	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
 
 	cpuhw->group_flag &= ~PERF_EVENT_TXN;
 	perf_pmu_enable(pmu);
@@ -1607,7 +1607,7 @@ int power_pmu_commit_txn(struct pmu *pmu)
 
 	if (!ppmu)
 		return -EAGAIN;
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 	n = cpuhw->n_events;
 	if (check_excludes(cpuhw->event, cpuhw->flags, 0, n))
 		return -EAGAIN;
@@ -1964,7 +1964,7 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 
 		if (event->attr.sample_type & PERF_SAMPLE_BRANCH_STACK) {
 			struct cpu_hw_events *cpuhw;
-			cpuhw = this_cpu_ptr(&cpu_hw_events);
+			cpuhw = &__get_cpu_var(cpu_hw_events);
 			power_pmu_bhrb_read(cpuhw);
 			data.br_stack = &cpuhw->bhrb_stack;
 		}
@@ -2037,7 +2037,7 @@ static bool pmc_overflow(unsigned long val)
 static void perf_event_interrupt(struct pt_regs *regs)
 {
 	int i, j;
-	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
 	struct perf_event *event;
 	unsigned long val[8];
 	int found, active;
diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
index 4acaea0..d35ae52 100644
--- a/arch/powerpc/perf/core-fsl-emb.c
+++ b/arch/powerpc/perf/core-fsl-emb.c
@@ -210,7 +210,7 @@ static void fsl_emb_pmu_disable(struct pmu *pmu)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 
 	if (!cpuhw->disabled) {
 		cpuhw->disabled = 1;
@@ -249,7 +249,7 @@ static void fsl_emb_pmu_enable(struct pmu *pmu)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	cpuhw = &__get_cpu_var(cpu_hw_events);
 	if (!cpuhw->disabled)
 		goto out;
 
@@ -653,7 +653,7 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
 static void perf_event_interrupt(struct pt_regs *regs)
 {
 	int i;
-	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+	struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events);
 	struct perf_event *event;
 	unsigned long val;
 	int found = 0;
diff --git a/arch/powerpc/platforms/cell/interrupt.c b/arch/powerpc/platforms/cell/interrupt.c
index 4c11421..8a106b4 100644
--- a/arch/powerpc/platforms/cell/interrupt.c
+++ b/arch/powerpc/platforms/cell/interrupt.c
@@ -82,7 +82,7 @@ static void iic_unmask(struct irq_data *d)
 
 static void iic_eoi(struct irq_data *d)
 {
-	struct iic *iic = this_cpu_ptr(&cpu_iic);
+	struct iic *iic = &__get_cpu_var(cpu_iic);
 	out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]);
 	BUG_ON(iic->eoi_ptr < 0);
 }
@@ -148,7 +148,7 @@ static unsigned int iic_get_irq(void)
 	struct iic *iic;
 	unsigned int virq;
 
-	iic = this_cpu_ptr(&cpu_iic);
+	iic = &__get_cpu_var(cpu_iic);
 	*(unsigned long *) &pending =
 		in_be64((u64 __iomem *) &iic->regs->pending_destr);
 	if (!(pending.flags & CBE_IIC_IRQ_VALID))
@@ -163,7 +163,7 @@ static unsigned int iic_get_irq(void)
 
 void iic_setup_cpu(void)
 {
-	out_be64(this_cpu_ptr(&cpu_iic.regs->prio), 0xff);
+	out_be64(&__get_cpu_var(cpu_iic).regs->prio, 0xff);
 }
 
 u8 iic_get_target_id(int cpu)
diff --git a/arch/powerpc/platforms/powernv/opal-tracepoints.c b/arch/powerpc/platforms/powernv/opal-tracepoints.c
index 9527e2a..d8a000a 100644
--- a/arch/powerpc/platforms/powernv/opal-tracepoints.c
+++ b/arch/powerpc/platforms/powernv/opal-tracepoints.c
@@ -48,7 +48,7 @@ void __trace_opal_entry(unsigned long opcode, unsigned long *args)
 
 	local_irq_save(flags);
 
-	depth = this_cpu_ptr(&opal_trace_depth);
+	depth = &__get_cpu_var(opal_trace_depth);
 
 	if (*depth)
 		goto out;
@@ -69,7 +69,7 @@ void __trace_opal_exit(long opcode, unsigned long retval)
 
 	local_irq_save(flags);
 
-	depth = this_cpu_ptr(&opal_trace_depth);
+	depth = &__get_cpu_var(opal_trace_depth);
 
 	if (*depth)
 		goto out;
diff --git a/arch/powerpc/platforms/ps3/interrupt.c b/arch/powerpc/platforms/ps3/interrupt.c
index a6c42f3..5f3b232 100644
--- a/arch/powerpc/platforms/ps3/interrupt.c
+++ b/arch/powerpc/platforms/ps3/interrupt.c
@@ -711,7 +711,7 @@ void __init ps3_register_ipi_irq(unsigned int cpu, unsigned int virq)
 
 static unsigned int ps3_get_irq(void)
 {
-	struct ps3_private *pd = this_cpu_ptr(&ps3_private);
+	struct ps3_private *pd = &__get_cpu_var(ps3_private);
 	u64 x = (pd->bmp.status & pd->bmp.mask);
 	unsigned int plug;
 
diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
index 39049e4..1062f71 100644
--- a/arch/powerpc/platforms/pseries/dtl.c
+++ b/arch/powerpc/platforms/pseries/dtl.c
@@ -75,7 +75,7 @@ static atomic_t dtl_count;
  */
 static void consume_dtle(struct dtl_entry *dtle, u64 index)
 {
-	struct dtl_ring *dtlr = this_cpu_ptr(&dtl_rings);
+	struct dtl_ring *dtlr = &__get_cpu_var(dtl_rings);
 	struct dtl_entry *wp = dtlr->write_ptr;
 	struct lppaca *vpa = local_paca->lppaca_ptr;
 
diff --git a/arch/powerpc/platforms/pseries/hvCall_inst.c b/arch/powerpc/platforms/pseries/hvCall_inst.c
index f02ec3a..4575f0c 100644
--- a/arch/powerpc/platforms/pseries/hvCall_inst.c
+++ b/arch/powerpc/platforms/pseries/hvCall_inst.c
@@ -110,7 +110,7 @@ static void probe_hcall_entry(void *ignored, unsigned long opcode, unsigned long
 	if (opcode > MAX_HCALL_OPCODE)
 		return;
 
-	h = this_cpu_ptr(&hcall_stats[opcode / 4]);
+	h = &__get_cpu_var(hcall_stats)[opcode / 4];
 	h->tb_start = mftb();
 	h->purr_start = mfspr(SPRN_PURR);
 }
@@ -123,7 +123,7 @@ static void probe_hcall_exit(void *ignored, unsigned long opcode, unsigned long
 	if (opcode > MAX_HCALL_OPCODE)
 		return;
 
-	h = this_cpu_ptr(&hcall_stats[opcode / 4]);
+	h = &__get_cpu_var(hcall_stats)[opcode / 4];
 	h->num_calls++;
 	h->tb_total += mftb() - h->tb_start;
 	h->purr_total += mfspr(SPRN_PURR) - h->purr_start;
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 8c355ed..4642d6a 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -200,7 +200,7 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 
 	local_irq_save(flags);	/* to protect tcep and the page behind it */
 
-	tcep = __this_cpu_read(tce_page);
+	tcep = __get_cpu_var(tce_page);
 
 	/* This is safe to do since interrupts are off when we're called
 	 * from iommu_alloc{,_sg}()
@@ -213,7 +213,7 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 			return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr,
 					    direction, attrs);
 		}
-		__this_cpu_write(tce_page, tcep);
+		__get_cpu_var(tce_page) = tcep;
 	}
 
 	rpn = __pa(uaddr) >> TCE_SHIFT;
@@ -399,7 +399,7 @@ static int tce_setrange_multi_pSeriesLP(unsigned long start_pfn,
 	long l, limit;
 
 	local_irq_disable();	/* to protect tcep and the page behind it */
-	tcep = __this_cpu_read(tce_page);
+	tcep = __get_cpu_var(tce_page);
 
 	if (!tcep) {
 		tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
@@ -407,7 +407,7 @@ static int tce_setrange_multi_pSeriesLP(unsigned long start_pfn,
 			local_irq_enable();
 			return -ENOMEM;
 		}
-		__this_cpu_write(tce_page, tcep);
+		__get_cpu_var(tce_page) = tcep;
 	}
 
 	proto_tce = TCE_PCI_READ | TCE_PCI_WRITE;
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 56df72d..34e6423 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -507,7 +507,7 @@ static void pSeries_lpar_flush_hash_range(unsigned long number, int local)
 	unsigned long vpn;
 	unsigned long i, pix, rc;
 	unsigned long flags = 0;
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
 	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
 	unsigned long param[9];
 	unsigned long hash, index, shift, hidx, slot;
@@ -697,7 +697,7 @@ void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
 
 	local_irq_save(flags);
 
-	depth = this_cpu_ptr(&hcall_trace_depth);
+	depth = &__get_cpu_var(hcall_trace_depth);
 
 	if (*depth)
 		goto out;
@@ -722,7 +722,7 @@ void __trace_hcall_exit(long opcode, unsigned long retval,
 
 	local_irq_save(flags);
 
-	depth = this_cpu_ptr(&hcall_trace_depth);
+	depth = &__get_cpu_var(hcall_trace_depth);
 
 	if (*depth)
 		goto out;
diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
index 179a69f..dff05b9 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -302,8 +302,8 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
 	/* If it isn't an extended log we can use the per cpu 64bit buffer */
 	h = (struct rtas_error_log *)&savep[1];
 	if (!rtas_error_extended(h)) {
-		memcpy(this_cpu_ptr(&mce_data_buf), h, sizeof(__u64));
-		errhdr = (struct rtas_error_log *)this_cpu_ptr(&mce_data_buf);
+		memcpy(&__get_cpu_var(mce_data_buf), h, sizeof(__u64));
+		errhdr = (struct rtas_error_log *)&__get_cpu_var(mce_data_buf);
 	} else {
 		int len, error_log_length;
 
diff --git a/arch/powerpc/sysdev/xics/xics-common.c b/arch/powerpc/sysdev/xics/xics-common.c
index 365249c..fe0cca4 100644
--- a/arch/powerpc/sysdev/xics/xics-common.c
+++ b/arch/powerpc/sysdev/xics/xics-common.c
@@ -155,7 +155,7 @@ int __init xics_smp_probe(void)
 
 void xics_teardown_cpu(void)
 {
-	struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr);
+	struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr);
 
 	/*
 	 * we have to reset the cppr index to 0 because we're
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27 15:26 ` [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses" Tejun Heo
@ 2014-08-27 15:56   ` Christoph Lameter
  2014-08-27 21:49     ` Stephen Rothwell
  2014-08-27 21:43   ` Stephen Rothwell
  1 sibling, 1 reply; 17+ messages in thread
From: Christoph Lameter @ 2014-08-27 15:56 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Stephen Rothwell, Rusty Russell, Ingo Molnar, linux-next,
	linux-kernel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman

On Wed, 27 Aug 2014, Tejun Heo wrote:

> Christoph, let's route the updated patch through powerpc tree.

Ok. Once I figure out what went wrong. This went through Feng's build
system and I thought he did a powerpc build. So it must be a different
configuration on powerpc.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27 15:26 ` [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses" Tejun Heo
  2014-08-27 15:56   ` Christoph Lameter
@ 2014-08-27 21:43   ` Stephen Rothwell
  2014-08-27 21:49     ` Tejun Heo
  1 sibling, 1 reply; 17+ messages in thread
From: Stephen Rothwell @ 2014-08-27 21:43 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Rusty Russell, Christoph Lameter, Ingo Molnar, linux-next,
	linux-kernel, Christoph Lameter, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman

Hi Tejun,

On Wed, 27 Aug 2014 11:26:09 -0400 Tejun Heo <tj@kernel.org> wrote:
>
> From 23f66e2d661b4d3226d16e25910a9e9472ce2410 Mon Sep 17 00:00:00 2001
> From: Tejun Heo <tj@kernel.org>
> Date: Wed, 27 Aug 2014 11:18:29 -0400
> 
> This reverts commit 5828f666c069af74e00db21559f1535103c9f79a due to
> build failure after merging with pending powerpc changes.

I can't see any pending powerpc changes that would obviously affect
this ...

-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27 15:56   ` Christoph Lameter
@ 2014-08-27 21:49     ` Stephen Rothwell
  2014-08-27 23:23       ` Christoph Lameter
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Rothwell @ 2014-08-27 21:49 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, Rusty Russell, Ingo Molnar, linux-next, linux-kernel,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman

Hi Christoph,

On Wed, 27 Aug 2014 10:56:27 -0500 (CDT) Christoph Lameter <cl@linux.com> wrote:
>
> On Wed, 27 Aug 2014, Tejun Heo wrote:
> 
> > Christoph, let's route the updated patch through powerpc tree.
> 
> Ok. Once I figure out what went wrong. This went through Feng's build
> system and I though he did a powerpc build. So it must be a difference
> configuration on powerpc.

Were the patches tested on top of v3.17-rc1?  My build was a powerpc
ppc64_defconfig.  It is possible that some other change interacted with
this, but there are not many other arch/powerpc changes after v3.17-rc1
and none seem obvious.

-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27 21:43   ` Stephen Rothwell
@ 2014-08-27 21:49     ` Tejun Heo
  2014-08-27 21:54       ` Christoph Lameter
  0 siblings, 1 reply; 17+ messages in thread
From: Tejun Heo @ 2014-08-27 21:49 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Rusty Russell, Christoph Lameter, Ingo Molnar, linux-next,
	linux-kernel, Christoph Lameter, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman

On Thu, Aug 28, 2014 at 07:43:49AM +1000, Stephen Rothwell wrote:
> Hi Tejun,
> 
> On Wed, 27 Aug 2014 11:26:09 -0400 Tejun Heo <tj@kernel.org> wrote:
> >
> > From 23f66e2d661b4d3226d16e25910a9e9472ce2410 Mon Sep 17 00:00:00 2001
> > From: Tejun Heo <tj@kernel.org>
> > Date: Wed, 27 Aug 2014 11:18:29 -0400
> > 
> > This reverts commit 5828f666c069af74e00db21559f1535103c9f79a due to
> > build failure after merging with pending powerpc changes.
> 
> I can't see any pending powerpc changes that would obviously affect
> this ...

Weird, the build-bot reported success on all powerpc configs on the
percpu branches w/o the revert.  I wonder what's going on.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27 21:49     ` Tejun Heo
@ 2014-08-27 21:54       ` Christoph Lameter
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Lameter @ 2014-08-27 21:54 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Stephen Rothwell, Rusty Russell, Ingo Molnar, linux-next,
	linux-kernel, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman

On Wed, 27 Aug 2014, Tejun Heo wrote:

> Weird, the build-bot reported success on all powerpc configs on the
> percpu branches w/o the revert.  I wonder what's going on.

I think this will fix it but I have no way of verifying it.


Subject: Define __ARCH_SET_SOFTIRQ_PENDING to avoid duplicate defs

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/arch/powerpc/include/asm/hardirq.h
===================================================================
--- linux.orig/arch/powerpc/include/asm/hardirq.h
+++ linux/arch/powerpc/include/asm/hardirq.h
@@ -20,6 +20,7 @@ typedef struct {
 DECLARE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);

 #define __ARCH_IRQ_STAT
+#define __ARCH_SET_SOFTIRQ_PENDING

 #define local_softirq_pending()	__this_cpu_read(irq_stat.__softirq_pending)
 #define set_softirq_pending(x) __this_cpu_write(irq_stat._softirq_pending, (x))

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses"
  2014-08-27 21:49     ` Stephen Rothwell
@ 2014-08-27 23:23       ` Christoph Lameter
  0 siblings, 0 replies; 17+ messages in thread
From: Christoph Lameter @ 2014-08-27 23:23 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Tejun Heo, Rusty Russell, Ingo Molnar, linux-next, linux-kernel,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman

On Thu, 28 Aug 2014, Stephen Rothwell wrote:

> > Ok. Once I figure out what went wrong. This went through Feng's build
> > system and I though he did a powerpc build. So it must be a difference
> > configuration on powerpc.
>
> Were the patches tested on top of v3.17-rc1?  My build was a powerpc
> ppc64_defconfig.  It is possible that some other change interacted with
> this, but there are not many other arch/powerpc changes after v3.17-rc1
> and none seem obvious.

Yes, the latest build test by Feng was on rc1.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: linux-next: build failure after merge of the percpu tree
  2023-12-11  8:31 ` Alexandre Ghiti
@ 2023-12-11 19:25   ` Dennis Zhou
  0 siblings, 0 replies; 17+ messages in thread
From: Dennis Zhou @ 2023-12-11 19:25 UTC (permalink / raw)
  To: Alexandre Ghiti
  Cc: Stephen Rothwell, Tejun Heo, Christoph Lameter, Ingo Molnar,
	Linux Kernel Mailing List, Linux Next Mailing List

Hello,

On Mon, Dec 11, 2023 at 09:31:25AM +0100, Alexandre Ghiti wrote:
> Hi Stephen,
> 
> On Mon, Dec 11, 2023 at 7:14 AM Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> >
> > Hi all,
> >
> > After merging the percpu tree, today's linux-next build (sparc64
> > defconfig) failed like this:
> >
> > mm/percpu.c: In function 'pcpu_page_first_chunk':
> > mm/percpu.c:3336:17: error: implicit declaration of function 'flush_cache_vmap_early'; did you mean 'flush_cache_vmap'? [-Werror=implicit-function-declaration]
> >  3336 |                 flush_cache_vmap_early(unit_addr, unit_addr + ai->unit_size);
> >       |                 ^~~~~~~~~~~~~~~~~~~~~~
> >       |                 flush_cache_vmap
> > cc1: some warnings being treated as errors
> >
> > Caused by commit
> >
> >   a95c15a43f4a ("mm: Introduce flush_cache_vmap_early() and its riscv implementation")
> >
> > I have applied the following fix patch for today.  Are there other
> > archs that don't use asm-generic/cacheflush.h?
> 

I'm surprised automation didn't catch this, as this should have failed
for any sparc build. It passed `sparc allmodconfig gcc` on my branches.

> It seems like most archs do not include this file, I should have
> checked. As I'm a bit scared of the possible side-effects of including
> asm-generic/cacheflush.h, I'll define flush_cache_vmap_early() on all
> archs that do define flush_cache_vmap().
> 

Hmmm. That makes sense, but we'd still need to check that we have the
generic #ifndef definition included everywhere too.
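
The generic fallback being referred to would look roughly like the sketch
below; the header placement and exact guard are assumptions, and the stub
body matches the sparc patch quoted further down:

/* e.g. in asm-generic/cacheflush.h or another commonly included header;
 * an arch providing its own version would also do
 *   #define flush_cache_vmap_early flush_cache_vmap_early
 * so that this fallback is skipped. */
#ifndef flush_cache_vmap_early
static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
{
}
#endif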

> Stephen, do you want a patch fix? Or do you want me to send a new
> version of the current patches so that you can drop them for now?
> 

The for-next tree gets recreated from pulls of the maintainers' trees.
I'm going to drop the series from percpu and then we can go again with a
v2.

> Sorry for the oversight,
> 

All good it happens. It's why the automation is there.

Thanks,
Dennis

> Thanks,
> 
> Alex
> 
> >
> > From: Stephen Rothwell <sfr@canb.auug.org.au>
> > Date: Mon, 11 Dec 2023 16:57:00 +1100
> > Subject: [PATCH] fix up for "mm: Introduce flush_cache_vmap_early() and its riscv implementation"
> >
> > Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
> > ---
> >  arch/sparc/include/asm/cacheflush.h | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/arch/sparc/include/asm/cacheflush.h b/arch/sparc/include/asm/cacheflush.h
> > index 881ac76eab93..9d87b2bcb217 100644
> > --- a/arch/sparc/include/asm/cacheflush.h
> > +++ b/arch/sparc/include/asm/cacheflush.h
> > @@ -10,4 +10,11 @@
> >  #else
> >  #include <asm/cacheflush_32.h>
> >  #endif
> > +
> > +#ifndef __ASSEMBLY__
> > +static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
> > +{
> > +}
> > +#endif
> > +
> >  #endif
> > --
> > 2.40.1
> >
> > --
> > Cheers,
> > Stephen Rothwell

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: linux-next: build failure after merge of the percpu tree
  2023-12-11  6:14 linux-next: build failure after merge of the percpu tree Stephen Rothwell
@ 2023-12-11  8:31 ` Alexandre Ghiti
  2023-12-11 19:25   ` Dennis Zhou
  0 siblings, 1 reply; 17+ messages in thread
From: Alexandre Ghiti @ 2023-12-11  8:31 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Dennis Zhou, Tejun Heo, Christoph Lameter, Ingo Molnar,
	Linux Kernel Mailing List, Linux Next Mailing List

Hi Stephen,

On Mon, Dec 11, 2023 at 7:14 AM Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>
> Hi all,
>
> After merging the percpu tree, today's linux-next build (sparc64
> defconfig) failed like this:
>
> mm/percpu.c: In function 'pcpu_page_first_chunk':
> mm/percpu.c:3336:17: error: implicit declaration of function 'flush_cache_vmap_early'; did you mean 'flush_cache_vmap'? [-Werror=implicit-function-declaration]
>  3336 |                 flush_cache_vmap_early(unit_addr, unit_addr + ai->unit_size);
>       |                 ^~~~~~~~~~~~~~~~~~~~~~
>       |                 flush_cache_vmap
> cc1: some warnings being treated as errors
>
> Caused by commit
>
>   a95c15a43f4a ("mm: Introduce flush_cache_vmap_early() and its riscv implementation")
>
> I have applied the following fix patch for today.  Are there other
> archs that don't use asm-generic/cacheflush.h?

It seems like most archs do not include this file; I should have
checked. As I'm a bit scared of the possible side-effects of including
asm-generic/cacheflush.h, I'll define flush_cache_vmap_early() on all
archs that do define flush_cache_vmap().

Stephen, do you want a patch fix? Or do you want me to send a new
version of the current patches so that you can drop them for now?

Sorry for the oversight,

Thanks,

Alex

>
> From: Stephen Rothwell <sfr@canb.auug.org.au>
> Date: Mon, 11 Dec 2023 16:57:00 +1100
> Subject: [PATCH] fix up for "mm: Introduce flush_cache_vmap_early() and its riscv implementation"
>
> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
> ---
>  arch/sparc/include/asm/cacheflush.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/sparc/include/asm/cacheflush.h b/arch/sparc/include/asm/cacheflush.h
> index 881ac76eab93..9d87b2bcb217 100644
> --- a/arch/sparc/include/asm/cacheflush.h
> +++ b/arch/sparc/include/asm/cacheflush.h
> @@ -10,4 +10,11 @@
>  #else
>  #include <asm/cacheflush_32.h>
>  #endif
> +
> +#ifndef __ASSEMBLY__
> +static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
> +{
> +}
> +#endif
> +
>  #endif
> --
> 2.40.1
>
> --
> Cheers,
> Stephen Rothwell

^ permalink raw reply	[flat|nested] 17+ messages in thread

* linux-next: build failure after merge of the percpu tree
@ 2023-12-11  6:14 Stephen Rothwell
  2023-12-11  8:31 ` Alexandre Ghiti
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Rothwell @ 2023-12-11  6:14 UTC (permalink / raw)
  To: Dennis Zhou, Tejun Heo, Christoph Lameter, Ingo Molnar
  Cc: Alexandre Ghiti, Linux Kernel Mailing List, Linux Next Mailing List

Hi all,

After merging the percpu tree, today's linux-next build (sparc64
defconfig) failed like this:

mm/percpu.c: In function 'pcpu_page_first_chunk':
mm/percpu.c:3336:17: error: implicit declaration of function 'flush_cache_vmap_early'; did you mean 'flush_cache_vmap'? [-Werror=implicit-function-declaration]
 3336 |                 flush_cache_vmap_early(unit_addr, unit_addr + ai->unit_size); 
      |                 ^~~~~~~~~~~~~~~~~~~~~~
      |                 flush_cache_vmap
cc1: some warnings being treated as errors

Caused by commit

  a95c15a43f4a ("mm: Introduce flush_cache_vmap_early() and its riscv implementation")

I have applied the following fix patch for today.  Are there other
archs that don't use asm-generic/cacheflush.h?

From: Stephen Rothwell <sfr@canb.auug.org.au>
Date: Mon, 11 Dec 2023 16:57:00 +1100
Subject: [PATCH] fix up for "mm: Introduce flush_cache_vmap_early() and its riscv implementation"

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
---
 arch/sparc/include/asm/cacheflush.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/sparc/include/asm/cacheflush.h b/arch/sparc/include/asm/cacheflush.h
index 881ac76eab93..9d87b2bcb217 100644
--- a/arch/sparc/include/asm/cacheflush.h
+++ b/arch/sparc/include/asm/cacheflush.h
@@ -10,4 +10,11 @@
 #else
 #include <asm/cacheflush_32.h>
 #endif
+
+#ifndef __ASSEMBLY__
+static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
+{
+}
+#endif
+
 #endif
-- 
2.40.1

-- 
Cheers,
Stephen Rothwell

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* linux-next: build failure after merge of the percpu tree
@ 2018-02-26  3:57 Stephen Rothwell
  0 siblings, 0 replies; 17+ messages in thread
From: Stephen Rothwell @ 2018-02-26  3:57 UTC (permalink / raw)
  To: Tejun Heo, Christoph Lameter, Ingo Molnar
  Cc: Linux-Next Mailing List, Linux Kernel Mailing List, Eric Dumazet

Hi all,

After merging the percpu tree, today's linux-next build (sparc defconfig)
failed like this:

mm/percpu.c: In function 'pcpu_balance_workfn':
mm/percpu.c:1613:3: error: implicit declaration of function 'cond_resched'; did you mean 'should_resched'? [-Werror=implicit-function-declaration]
   cond_resched();
   ^~~~~~~~~~~~
   should_resched

Caused by commit

  accd4f36a7d1 ("percpu: add a schedule point in pcpu_balance_workfn()")
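
The offending commit adds a periodic schedule point along the lines of the
sketch below. cond_resched() is declared in <linux/sched.h>, which this
config apparently did not pull in transitively, hence the fix that
follows. The loop body here is illustrative, not the real
pcpu_balance_workfn():

#include <linux/sched.h>	/* declares cond_resched() -- the missing include */

static void balance_like_loop(int nr_chunks)
{
	int i;

	for (i = 0; i < nr_chunks; i++) {
		/* ... release or populate one chunk ... */
		cond_resched();	/* yield now and then so a large backlog
				 * does not monopolize the worker's CPU */
	}
}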

I have added this patch for today:

From: Stephen Rothwell <sfr@canb.auug.org.au>
Date: Mon, 26 Feb 2018 14:47:39 +1100
Subject: [PATCH] percpu: include sched.h for cond_resched()

Fixes: accd4f36a7d1 ("percpu: add a schedule point in pcpu_balance_workfn()")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
---
 mm/percpu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/percpu.c b/mm/percpu.c
index 36e7b65ba6cf..15a398c00791 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -80,6 +80,7 @@
 #include <linux/vmalloc.h>
 #include <linux/workqueue.h>
 #include <linux/kmemleak.h>
+#include <linux/sched.h>
 
 #include <asm/cacheflush.h>
 #include <asm/sections.h>
-- 
2.16.1

-- 
Cheers,
Stephen Rothwell

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: linux-next: build failure after merge of the percpu tree
  2011-01-06  4:18   ` Stephen Rothwell
@ 2011-01-06  5:11     ` Tejun Heo
  0 siblings, 0 replies; 17+ messages in thread
From: Tejun Heo @ 2011-01-06  5:11 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Rusty Russell, Christoph Lameter, Ingo Molnar, linux-next, linux-kernel

Hello, again.

On Thu, Jan 06, 2011 at 03:18:15PM +1100, Stephen Rothwell wrote:
> > My apologies.  I forgot an earlier patch to introduce this_cpu_has()
> > macro.  I've reverted the offending commit.
> 
> But that revert is not in your published for-next branch yet.  I have
> manually reverted that commit for today.

Ah, embarrassing.  I updated for-2.6.38 but forgot to update for-next
accordingly.  Fixed.

Thank you.

-- 
tejun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: linux-next: build failure after merge of the percpu tree
  2011-01-04  5:13 ` Tejun Heo
@ 2011-01-06  4:18   ` Stephen Rothwell
  2011-01-06  5:11     ` Tejun Heo
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Rothwell @ 2011-01-06  4:18 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Rusty Russell, Christoph Lameter, Ingo Molnar, linux-next, linux-kernel

Hi Tejun,

On Tue, 4 Jan 2011 06:13:14 +0100 Tejun Heo <tj@kernel.org> wrote:
>
> On Tue, Jan 04, 2011 at 03:21:05PM +1100, Stephen Rothwell wrote:
> > After merging the percpu tree, today's linux-next build (x86_64
> > allmodconfig) failed like this:
> > 
> > arch/x86/kernel/cpu/mcheck/therm_throt.c: In function 'intel_thermal_interrupt':
> > arch/x86/kernel/cpu/mcheck/therm_throt.c:368: error: implicit declaration of function 'this_cpu_has'
> > 
> > Caused by commit 6ac0bb7148b93fb40bccba5dff06d51a3e3ea283 ("x86: use
> > this_cpu_has for thermal_interrupt").
> > 
> > this_cpu_has() does not exist anywhere except in this introduced usage.
> > 
> > I have used the percpu tree from next-20101231 for today.
> 
> My apologies.  I forgot an earlier patch to introduce this_cpu_has()
> macro.  I've reverted the offending commit.

But that revert is not in your published for-next branch yet.  I have
manually reverted that commit for today.
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: linux-next: build failure after merge of the percpu tree
  2011-01-04  4:21 Stephen Rothwell
@ 2011-01-04  5:13 ` Tejun Heo
  2011-01-06  4:18   ` Stephen Rothwell
  0 siblings, 1 reply; 17+ messages in thread
From: Tejun Heo @ 2011-01-04  5:13 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Rusty Russell, Christoph Lameter, Ingo Molnar, linux-next, linux-kernel

Hello, Stephen.

On Tue, Jan 04, 2011 at 03:21:05PM +1100, Stephen Rothwell wrote:
> After merging the percpu tree, today's linux-next build (x86_64
> allmodconfig) failed like this:
> 
> arch/x86/kernel/cpu/mcheck/therm_throt.c: In function 'intel_thermal_interrupt':
> arch/x86/kernel/cpu/mcheck/therm_throt.c:368: error: implicit declaration of function 'this_cpu_has'
> 
> Caused by commit 6ac0bb7148b93fb40bccba5dff06d51a3e3ea283 ("x86: use
> this_cpu_has for thermal_interrupt").
> 
> this_cpu_has() does not exist anywhere except in this introduced usage.
> 
> I have used the percpu tree from next-20101231 for today.

My apologies.  I forgot an earlier patch to introduce this_cpu_has()
macro.  I've reverted the offending commit.

Thank you.

--
tejun

^ permalink raw reply	[flat|nested] 17+ messages in thread

* linux-next: build failure after merge of the percpu tree
@ 2011-01-04  4:21 Stephen Rothwell
  2011-01-04  5:13 ` Tejun Heo
  0 siblings, 1 reply; 17+ messages in thread
From: Stephen Rothwell @ 2011-01-04  4:21 UTC (permalink / raw)
  To: Tejun Heo, Rusty Russell, Christoph Lameter, Ingo Molnar
  Cc: linux-next, linux-kernel

Hi all,

After merging the percpu tree, today's linux-next build (x86_64
allmodconfig) failed like this:

arch/x86/kernel/cpu/mcheck/therm_throt.c: In function 'intel_thermal_interrupt':
arch/x86/kernel/cpu/mcheck/therm_throt.c:368: error: implicit declaration of function 'this_cpu_has'

Caused by commit 6ac0bb7148b93fb40bccba5dff06d51a3e3ea283 ("x86: use
this_cpu_has for thermal_interrupt").

this_cpu_has() does not exist anywhere except in this introduced usage.

I have used the percpu tree from next-20101231 for today.
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* linux-next: build failure after merge of the percpu tree
@ 2010-09-10  2:45 Stephen Rothwell
  0 siblings, 0 replies; 17+ messages in thread
From: Stephen Rothwell @ 2010-09-10  2:45 UTC (permalink / raw)
  To: Tejun Heo, Rusty Russell, Christoph Lameter, Ingo Molnar
  Cc: linux-next, linux-kernel, Brian Gerst

Hi all,

After merging the percpu tree, today's linux-next build (x86_64
allmodconfig) failed like this:

In file included from drivers/vhost/net.c:26:
include/linux/if_macvlan.h: In function 'macvlan_count_rx':
include/linux/if_macvlan.h:69: error: read-only variable 'tcp_ptr__' used as 'asm' output
In file included from drivers/net/macvlan.c:30:
include/linux/if_macvlan.h: In function 'macvlan_count_rx':
include/linux/if_macvlan.h:69: error: read-only variable 'tcp_ptr__' used as 'asm' output
In file included from drivers/net/macvtap.c:2:
include/linux/if_macvlan.h: In function 'macvlan_count_rx':
include/linux/if_macvlan.h:69: error: read-only variable 'tcp_ptr__' used as 'asm' output

Caused by commit 7f56c13698abfb1bdf6cf2b5cfcd80626d44c12e ("x86, percpu:
Optimize this_cpu_ptr").

That line in if_macvlan.h is:

	rx_stats = this_cpu_ptr(vlan->rx_stats);

Where vlan is a "const struct macvlan_dev *".
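
A rough reconstruction of the failure mode, compilable as a userspace
sketch: the macro body below is an assumption pieced together from the
error message, not the actual kernel this_cpu_ptr() of that commit, and
demo_this_cpu_ptr()/macvlan_count_rx_demo() are made-up names.

/* The optimized macro apparently declared its temporary with typeof(ptr).
 * Accessing rx_stats through a pointer-to-const object const-qualifies
 * the expression, so tcp_ptr__ becomes const, and gcc rejects a read-only
 * variable as an asm output operand -- the error quoted above. */
#define demo_this_cpu_ptr(ptr)				\
({							\
	typeof(ptr) tcp_ptr__;				\
	asm ("" : "=r" (tcp_ptr__) : "0" (ptr));	\
	tcp_ptr__;					\
})

struct rx_stats { unsigned long count; };
struct macvlan_dev { struct rx_stats *rx_stats; };

static void macvlan_count_rx_demo(const struct macvlan_dev *vlan)
{
	struct rx_stats *stats = demo_this_cpu_ptr(vlan->rx_stats);

	stats->count++;
}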

I have used the percpu tree from next-20100909 for today.
-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/

^ permalink raw reply	[flat|nested] 17+ messages in thread

Thread overview: 17+ messages
2014-08-27  4:22 linux-next: build failure after merge of the percpu tree Stephen Rothwell
2014-08-27 15:26 ` [PATCH percpu/for-3.18-consistent-ops] Revert "powerpc: Replace __get_cpu_var uses" Tejun Heo
2014-08-27 15:56   ` Christoph Lameter
2014-08-27 21:49     ` Stephen Rothwell
2014-08-27 23:23       ` Christoph Lameter
2014-08-27 21:43   ` Stephen Rothwell
2014-08-27 21:49     ` Tejun Heo
2014-08-27 21:54       ` Christoph Lameter
  -- strict thread matches above, loose matches on Subject: below --
2023-12-11  6:14 linux-next: build failure after merge of the percpu tree Stephen Rothwell
2023-12-11  8:31 ` Alexandre Ghiti
2023-12-11 19:25   ` Dennis Zhou
2018-02-26  3:57 Stephen Rothwell
2011-01-04  4:21 Stephen Rothwell
2011-01-04  5:13 ` Tejun Heo
2011-01-06  4:18   ` Stephen Rothwell
2011-01-06  5:11     ` Tejun Heo
2010-09-10  2:45 Stephen Rothwell
