* [PATCH 1/13] cpumask: Use accessors for cpu_*_mask: arm
@ 2009-06-12 13:03 Rusty Russell
  2009-06-12 19:03 ` Russell King
  0 siblings, 1 reply; 5+ messages in thread
From: Rusty Russell @ 2009-06-12 13:03 UTC (permalink / raw)
  To: Russell King; +Cc: linux-kernel, Mike Travis


Use the accessors rather than frobbing bits directly (the new versions
are const).
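
For illustration only (not part of the patch), the accessor style reads like
this; the helper name below is made up:

#include <linux/init.h>
#include <linux/cpumask.h>

/* Hypothetical example: mark the first 'ncores' CPUs as possible. */
static void __init mark_cores_possible(unsigned int ncores)
{
	unsigned int i;

	for (i = 0; i < ncores; i++)
		set_cpu_possible(i, true);	/* was: cpu_set(i, cpu_possible_map) */
}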

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/arm/mach-realview/platsmp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-2.6.28.orig/arch/arm/mach-realview/platsmp.c
+++ linux-2.6.28/arch/arm/mach-realview/platsmp.c
@@ -193,7 +193,7 @@ void __init smp_init_cpus(void)
 	unsigned int i, ncores = get_core_count();
 
 	for (i = 0; i < ncores; i++)
-		cpu_set(i, cpu_possible_map);
+		set_cpu_possible(i, true);
 }
 
 void __init smp_prepare_cpus(unsigned int max_cpus)



* Re: [PATCH 1/13] cpumask: Use accessors for cpu_*_mask: arm
  2009-06-12 13:03 [PATCH 1/13] cpumask: Use accessors for cpu_*_mask: arm Rusty Russell
@ 2009-06-12 19:03 ` Russell King
  2009-06-13 12:55   ` Rusty Russell
  0 siblings, 1 reply; 5+ messages in thread
From: Russell King @ 2009-06-12 19:03 UTC (permalink / raw)
  To: Rusty Russell; +Cc: linux-kernel, Mike Travis

On Fri, Jun 12, 2009 at 10:33:14PM +0930, Rusty Russell wrote:
> Use the accessors rather than frobbing bits directly (the new versions
> are const).

Please check linux-next - there are all kinds of SMP changes currently
queued up in the ARM tree, which include most of the updates for things
like this.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


* Re: [PATCH 1/13] cpumask: Use accessors for cpu_*_mask: arm
  2009-06-12 19:03 ` Russell King
@ 2009-06-13 12:55   ` Rusty Russell
  2009-06-13 13:49     ` Russell King
  0 siblings, 1 reply; 5+ messages in thread
From: Rusty Russell @ 2009-06-13 12:55 UTC (permalink / raw)
  To: Russell King; +Cc: linux-kernel, Mike Travis

On Sat, 13 Jun 2009 04:33:03 am Russell King wrote:
> On Fri, Jun 12, 2009 at 10:33:14PM +0930, Rusty Russell wrote:
> > Use the accessors rather than frobbing bits directly (the new versions
> > are const).
>
> Please check linux-next - there are all kinds of SMP changes currently
> queued up in the ARM tree, which include most of the updates for things
> like this.

OK, here's the revision based on latest linux-next, if you wouldn't mind
taking it.  (We should really make on_each_cpu_mask arch-independent, too.)
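
A rough sketch of what a generic version might look like (the argument order
follows the ARM callers in the patch below; taking a const struct cpumask
pointer rather than a cpumask_t by value is an assumption here, not an
existing signature):

#include <linux/smp.h>
#include <linux/preempt.h>
#include <linux/cpumask.h>

/* Sketch only: generic on_each_cpu_mask(), not from any tree. */
static void on_each_cpu_mask(void (*func)(void *), void *info, int wait,
			     const struct cpumask *mask)
{
	preempt_disable();
	/* run func on every other CPU in the mask... */
	smp_call_function_many(mask, func, info, wait);
	/* ...and on this CPU as well, if it is in the mask */
	if (cpumask_test_cpu(smp_processor_id(), mask))
		func(info);
	preempt_enable();
}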

Thanks,
Rusty.

Subject: cpumask: use mm_cpumask() wrapper: arm

Makes the code future-proof against the impending change to mm->cpu_vm_mask.

It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
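
For illustration only (not from the patch), the pointer-taking style looks
like this; the helper names are made up:

#include <linux/mm_types.h>
#include <linux/cpumask.h>

/* Hypothetical helpers showing the mm_cpumask() + cpumask_* style. */
static void example_mark_mm_seen(struct mm_struct *mm, unsigned int cpu)
{
	/* old style: cpu_set(cpu, mm->cpu_vm_mask); */
	cpumask_set_cpu(cpu, mm_cpumask(mm));
}

static int example_mm_seen_on(struct mm_struct *mm, unsigned int cpu)
{
	/* old style: return cpu_isset(cpu, mm->cpu_vm_mask); */
	return cpumask_test_cpu(cpu, mm_cpumask(mm));
}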

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 6cbd8fd..b425fb2 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -318,14 +318,14 @@ static inline void outer_flush_range(unsigned long start, unsigned long end)
 #ifndef CONFIG_CPU_CACHE_VIPT
 static inline void flush_cache_mm(struct mm_struct *mm)
 {
-	if (cpu_isset(smp_processor_id(), mm->cpu_vm_mask))
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
 		__cpuc_flush_user_all();
 }
 
 static inline void
 flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
 {
-	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask))
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm)))
 		__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
 					vma->vm_flags);
 }
@@ -333,7 +333,7 @@ flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long
 static inline void
 flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
 {
-	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
 		unsigned long addr = user_addr & PAGE_MASK;
 		__cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
 	}
@@ -344,7 +344,7 @@ flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 			 unsigned long uaddr, void *kaddr,
 			 unsigned long len, int write)
 {
-	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
 		unsigned long addr = (unsigned long)kaddr;
 		__cpuc_coherent_kern_range(addr, addr + len);
 	}
diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index 263fed0..c3fe09f 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -101,14 +101,15 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 
 #ifdef CONFIG_SMP
 	/* check for possible thread migration */
-	if (!cpus_empty(next->cpu_vm_mask) && !cpu_isset(cpu, next->cpu_vm_mask))
+	if (!cpumask_empty(mm_cpumask(next)) &&
+	    !cpumask_test_cpu(cpu, mm_cpumask(next)))
 		__flush_icache_all();
 #endif
-	if (!cpu_test_and_set(cpu, next->cpu_vm_mask) || prev != next) {
+	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) {
 		check_context(next);
 		cpu_switch_mm(next->pgd, next);
 		if (cache_is_vivt())
-			cpu_clear(cpu, prev->cpu_vm_mask);
+			cpumask_clear_cpu(cpu, mm_cpumask(prev));
 	}
 #endif
 }
diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
index b543a05..5d0fbb0 100644
--- a/arch/arm/include/asm/tlbflush.h
+++ b/arch/arm/include/asm/tlbflush.h
@@ -316,7 +316,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
 	if (tlb_flag(TLB_WB))
 		dsb();
 
-	if (cpu_isset(smp_processor_id(), mm->cpu_vm_mask)) {
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
 		if (tlb_flag(TLB_V3_FULL))
 			asm("mcr p15, 0, %0, c6, c0, 0" : : "r" (zero) : "cc");
 		if (tlb_flag(TLB_V4_U_FULL))
@@ -354,7 +354,7 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 	if (tlb_flag(TLB_WB))
 		dsb();
 
-	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
 		if (tlb_flag(TLB_V3_PAGE))
 			asm("mcr p15, 0, %0, c6, c0, 0" : : "r" (uaddr) : "cc");
 		if (tlb_flag(TLB_V4_U_PAGE))
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 55fa7ff..03cf0d5 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -183,7 +183,7 @@ int __cpuexit __cpu_disable(void)
 	read_lock(&tasklist_lock);
 	for_each_process(p) {
 		if (p->mm)
-			cpu_clear(cpu, p->mm->cpu_vm_mask);
+			cpumask_clear_cpu(cpu, mm_cpumask(p->mm));
 	}
 	read_unlock(&tasklist_lock);
 
@@ -251,7 +251,7 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
 	atomic_inc(&mm->mm_users);
 	atomic_inc(&mm->mm_count);
 	current->active_mm = mm;
-	cpu_set(cpu, mm->cpu_vm_mask);
+	cpumask_set_cpu(cpu, mm_cpumask(mm));
 	cpu_switch_mm(mm->pgd, mm);
 	enter_lazy_tlb(mm, current);
 	local_flush_tlb_all();
@@ -600,14 +600,14 @@ void flush_tlb_all(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	cpumask_t mask = mm->cpu_vm_mask;
+	cpumask_t mask = *mm_cpumask(mm);
 
 	on_each_cpu_mask(ipi_flush_tlb_mm, mm, 1, mask);
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 {
-	cpumask_t mask = vma->vm_mm->cpu_vm_mask;
+	cpumask_t mask = *mm_cpumask(vma->vm_mm);
 	struct tlb_args ta;
 
 	ta.ta_vma = vma;
@@ -628,7 +628,7 @@ void flush_tlb_kernel_page(unsigned long kaddr)
 void flush_tlb_range(struct vm_area_struct *vma,
                      unsigned long start, unsigned long end)
 {
-	cpumask_t mask = vma->vm_mm->cpu_vm_mask;
+	cpumask_t mask = *mm_cpumask(vma->vm_mm);
 	struct tlb_args ta;
 
 	ta.ta_vma = vma;
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index fc84fcc..6bda76a 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -59,6 +59,6 @@ void __new_context(struct mm_struct *mm)
 	}
 	spin_unlock(&cpu_asid_lock);
 
-	mm->cpu_vm_mask = cpumask_of_cpu(smp_processor_id());
+	cpumask_copy(mm_cpumask(mm), cpumask_of(smp_processor_id()));
 	mm->context.id = asid;
 }
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 0fa9bf3..527d816 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -41,7 +41,7 @@ static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
 void flush_cache_mm(struct mm_struct *mm)
 {
 	if (cache_is_vivt()) {
-		if (cpu_isset(smp_processor_id(), mm->cpu_vm_mask))
+		if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
 			__cpuc_flush_user_all();
 		return;
 	}
@@ -59,7 +59,7 @@ void flush_cache_mm(struct mm_struct *mm)
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
 {
 	if (cache_is_vivt()) {
-		if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask))
+		if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm)))
 			__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
 						vma->vm_flags);
 		return;
@@ -78,7 +78,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
 {
 	if (cache_is_vivt()) {
-		if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
+		if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
 			unsigned long addr = user_addr & PAGE_MASK;
 			__cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
 		}
@@ -94,7 +94,7 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 			 unsigned long len, int write)
 {
 	if (cache_is_vivt()) {
-		if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
+		if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
 			unsigned long addr = (unsigned long)kaddr;
 			__cpuc_coherent_kern_range(addr, addr + len);
 		}
@@ -107,7 +107,7 @@ void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 	}
 
 	/* VIPT non-aliasing cache */
-	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask) &&
+	if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm)) &&
 	    vma->vm_flags & VM_EXEC) {
 		unsigned long addr = (unsigned long)kaddr;
 		/* only flushing the kernel mapping on non-aliasing VIPT */



* Re: [PATCH 1/13] cpumask: Use accessors for cpu_*_mask: arm
  2009-06-13 12:55   ` Rusty Russell
@ 2009-06-13 13:49     ` Russell King
  2009-06-15  7:44       ` Russell King
  0 siblings, 1 reply; 5+ messages in thread
From: Russell King @ 2009-06-13 13:49 UTC (permalink / raw)
  To: Rusty Russell; +Cc: linux-kernel, Mike Travis

On Sat, Jun 13, 2009 at 10:25:12PM +0930, Rusty Russell wrote:
> On Sat, 13 Jun 2009 04:33:03 am Russell King wrote:
> > On Fri, Jun 12, 2009 at 10:33:14PM +0930, Rusty Russell wrote:
> > > Use the accessors rather than frobbing bits directly (the new versions
> > > are const).
> >
> > Please check linux-next - there are all kinds of SMP changes currently
> > queued up in the ARM tree, which include most of the updates for things
> > like this.
> 
> OK, here's the revision based on latest linux-next, if you wouldn't mind
> taking it.  (We should really make on_each_cpu_mask an arch-indep, too).

Hmm, weird.  The tree your diff is against does not correspond with my tree,
which supposedly is in linux-next.

Commit 8266810 "[ARM] smp: fix cpumask usage in ARM SMP code" (in my tree
since May 17th) fixes some of the areas which your patch touches.  See:

http://ftp.arm.linux.org.uk/git/gitweb.cgi?p=linux-2.6-arm.git;a=shortlog;h=refs/heads/smp

I suggest hanging fire on this until the ARM tree is merged into mainline.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:


* Re: [PATCH 1/13] cpumask: Use accessors for cpu_*_mask: arm
  2009-06-13 13:49     ` Russell King
@ 2009-06-15  7:44       ` Russell King
  0 siblings, 0 replies; 5+ messages in thread
From: Russell King @ 2009-06-15  7:44 UTC (permalink / raw)
  To: Rusty Russell; +Cc: linux-kernel, Mike Travis

On Sat, Jun 13, 2009 at 02:49:22PM +0100, Russell King wrote:
> Commit 8266810 "[ARM] smp: fix cpumask usage in ARM SMP code" (in my tree
> since May 17th) fixes some of the areas which your patch touches.  See:
> 
> http://ftp.arm.linux.org.uk/git/gitweb.cgi?p=linux-2.6-arm.git;a=shortlog;h=refs/heads/smp
> 
> I suggest hanging fire on this until the ARM tree is merged into mainline.

... which has now happened.

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:

