linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/31] cpumask: Provide new cpumask API
@ 2008-09-29 18:02 Mike Travis
  2008-09-29 18:02 ` [PATCH 01/31] cpumask: Documentation Mike Travis
                   ` (30 more replies)
  0 siblings, 31 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel


Ingo Molnar wrote:
>
> could you please send whatever .c changes you have already, so that we 
> can have a look at how the end result will look like? Doesnt have to 
> build, i'm just curious about how it looks like in practice, 
> semantically.
> 
> 	Ingo

Here's one(*) proposal for a new cpumask interface API, in a patchset
that compiles.  Obviously, with a change of this magnitude a large
number of files are affected.  I tried to group them by functional
change, but there are a lot of "clean up" type patches, so those are
grouped by section in the second half.  Hopefully this minimizes the
interdependencies between patches.

[* - Rusty has an alternative approach that he'll be submitting shortly.]

The patches in this patchset are:

  * Doc files and basic "cleanup" changes.

	docs
	send_IPI_mask
	remove-min

  * Base changes to cpumask.h and helper files.

	mv-cpumask-alloc
	cpumask-base
	lib-cpumask

  * Minimal changes to let init/main.c compile cleanly.

  	init_main

  * Mechanical changes, mostly automated.

	Modify cpu-maps
	Get-rid-of-_nr variations
	Fix cpumask_of_cpu refs
	Remove set_cpus_allowed_ptr
	Remove CPU_MASK_ALL_PTR
	Modify for_each_cpu_mask
	Change first_cpu/next_cpu to cpus_first/cpus_next
	Remove node_to_cpumask_ptr

  * Sectional changes, to agree with API changes:

	acpi
	apic
	cpu
	cpufreq
	irq
	kernel
	misc
	mm
	pci
	rcu
	sched
	smp
	time
	tlb
	trace
	xen

  * Allocation of temporary onstack cpumask variables.
    (To be supplied.  See cpumask_alloc.h for prelim info.)

  * Changes to non-x86 architecture files.
    (To be supplied.)


These changes should work with the current code.  The flag to turn on the
new code is "CONFIG_CPUMASKS_OFFSTACK".  It is not yet turned on, so this
code should be completely compatible with the current code and only lays
down the groundwork for when cpumasks become "offstack".  This should
allow testing to ensure that the basic interface is functionally complete.
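
For reference, the cpumask_var_t definitions in the documentation patch
below illustrate the small/large split that this config option would
presumably select between (a sketch only, not the final Kconfig wiring):

    /* Sketch: assumes CONFIG_CPUMASKS_OFFSTACK selects the "large" variant */
    struct __cpumask_data_s { DECLARE_BITMAP(bits, NR_CPUS); };

    #ifdef CONFIG_CPUMASKS_OFFSTACK
    typedef struct __cpumask_data_s *cpumask_var_t;    /* allocated off stack */
    #else
    typedef struct __cpumask_data_s cpumask_var_t[1];  /* stays on the stack  */
    #endif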

[Note: testing not yet started.]

-- 


* [PATCH 01/31] cpumask: Documentation
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-30 22:49   ` Rusty Russell
  2008-09-29 18:02 ` [PATCH 02/31] cpumask: modify send_IPI_mask interface to accept cpumask_t pointers Mike Travis
                   ` (29 subsequent siblings)
  30 siblings, 1 reply; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: docs --]
[-- Type: text/plain, Size: 3336 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 cpumask.txt |   73 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

--- /dev/null
+++ struct-cpumasks/cpumask.txt
@@ -0,0 +1,73 @@
+
+		CPUMASKS
+		--------
+
+	      [PRELIMINARY]
+
+Introduction
+
+Cpumask variables represent the CPUs in a system, one bit per CPU.
+With the prolific increase in the number of CPUs in a single system
+image (SSI), managing this large number of CPUs has become an ordeal.
+When the limit on CPUs in a system was small, the cpumask fit within an
+integer.  Even with the increase to 128, the bit pattern was still a
+manageable size.  Now systems are available with 2048 cores which, with
+hyperthreading, present 4096 CPU threads.  And the number of CPUs in an
+SSI keeps growing: 16,384 is right around the corner.  Even on desktop
+systems with only 2 or 4 sockets, a new generation of Intel processors
+with 128 cores per socket will increase the count of CPU threads
+tremendously.
+
+Thus a cpumask that covers this 4096-CPU limit needs 512 bytes, putting
+pressure on the amount of stack space needed to accommodate temporary
+cpumask variables.
+
+The primary goal of the cpumasks API is to accommodate this large number
+of CPUs without affecting the compactness of Linux for small, compact
+systems.
+
+
+The Changes
+
+Provide a new cpumask interface API.  The central change is that
+cpumask_t becomes an opaque object.  This should keep the required
+modifications to a minimum while still allowing the inline cpumask
+functions and the ability to declare static cpumask objects.
+
+
+    /* raw declaration */
+    struct __cpumask_data_s { DECLARE_BITMAP(bits, NR_CPUS); };
+
+    /* cpumask_map_t used for declaring static cpumask maps */
+    typedef struct __cpumask_data_s cpumask_map_t[1];
+
+    /* cpumask_t used for function args and return pointers */
+    typedef struct __cpumask_data_s *cpumask_t;
+    typedef const struct __cpumask_data_s *const_cpumask_t;
+
+    /* cpumask_var_t used for local variable, definition follows */
+    typedef struct __cpumask_data_s	cpumask_var_t[1]; /* SMALL NR_CPUS */
+    typedef struct __cpumask_data_s	*cpumask_var_t;	  /* LARGE NR_CPUS */
+
+    /* replaces cpumask_t dst = (cpumask_t)src */
+    void cpus_copy(cpumask_t dst, const cpumask_t src);
+
+This removes the '*' indirection from all references to cpumask_t
+objects.  You can change which cpumask object a reference points to,
+but to change the cpumask object itself you must use the functions that
+operate on cpumask objects (e.g. the cpu_* operators).  Functions can
+return a cpumask_t (which is a pointer to the cpumask object) and can
+only be passed a cpumask_t.
+
+All uses of a cpumask_t variable on the stack are changed to cpumask_var_t,
+except for pointers to static (read only) cpumask objects.  Allocation of
+local (temp) cpumask objects uses the functions available in
+cpumask_alloc.h.  (descriptions to be supplied.)
+
+All cpumask operators now operate on nr_cpu_ids bits instead of NR_CPUS.
+The separate "_nr" variants of the operators, which already used
+nr_cpu_ids instead of NR_CPUS, have therefore been deleted.
+
+All variants of functions which took the (old cpumask_t *) pointer have
+been deleted (e.g. set_cpus_allowed_ptr()).
+

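To make the above concrete, here is a minimal usage sketch of the proposed
API.  It is illustrative only: the allocation step required for an offstack
cpumask_var_t (via cpumask_alloc.h) is elided, and the operator signatures
are assumed to follow the pointer-based convention described above.

    static cpumask_map_t my_map;                /* static cpumask object      */

    static void fill_mask(cpumask_t dst, const_cpumask_t src)
    {
            cpus_copy(dst, src);                /* replaces "dst = src"       */
            cpu_set(smp_processor_id(), dst);   /* modify via cpu_* operators */
    }

    static void caller(void)
    {
            cpumask_var_t tmp;  /* an array on the stack for small NR_CPUS;
                                 * a pointer needing allocation when offstack */

            fill_mask(tmp, my_map);
            if (!cpus_empty(tmp))
                    cpus_copy(my_map, tmp);     /* and so on ...              */
    }
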
-- 


* [PATCH 02/31] cpumask: modify send_IPI_mask interface to accept cpumask_t pointers
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
  2008-09-29 18:02 ` [PATCH 01/31] cpumask: Documentation Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:02 ` [PATCH 03/31] cpumask: remove min from first_cpu/next_cpu Mike Travis
                   ` (28 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: send_IPI_mask --]
[-- Type: text/plain, Size: 48151 bytes --]

  * Change the genapic interfaces to accept cpumask_t pointers and to
    return masks through a pointer argument instead of by value.

  * Modify external callers to use these cpumask_t pointers in their calls.

  * Create a new send_IPI_mask_allbutself which is the same as the
    send_IPI_mask functions but excludes smp_processor_id() from the
    list (see the sketch after this list).  This removes another common
    need for a temporary cpumask_t variable.

  * Rewrite functions that used a temp cpumask_t variable for:

	cpumask_t allbutme = cpu_online_map;

	cpu_clear(smp_processor_id(), allbutme);
	if (!cpus_empty(allbutme))
		...

    It becomes:

	if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu)))
		...

  * Other minor code optimizations.

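For example, with the new helper the various send_IPI_allbutself()
wrappers collapse to a single call; the prototype and wrapper shown
here are taken from the patch below:

	void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);

	static inline void send_IPI_allbutself(int vector)
	{
		send_IPI_mask_allbutself(&cpu_online_map, vector);
	}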

Applies to linux-2.6.tip/master.

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/apic.c                   |    2 
 arch/x86/kernel/crash.c                  |    5 -
 arch/x86/kernel/genapic_flat_64.c        |   54 +++++++------
 arch/x86/kernel/genx2apic_cluster.c      |   38 +++++----
 arch/x86/kernel/genx2apic_phys.c         |   31 ++++---
 arch/x86/kernel/genx2apic_uv_x.c         |   30 +++----
 arch/x86/kernel/io_apic.c                |  124 ++++++++++++++++---------------
 arch/x86/kernel/ipi.c                    |   26 ++++--
 arch/x86/kernel/smp.c                    |   13 +--
 arch/x86/kernel/tlb_32.c                 |    2 
 arch/x86/kernel/tlb_64.c                 |    2 
 arch/x86/mach-generic/bigsmp.c           |    5 -
 arch/x86/mach-generic/es7000.c           |    5 -
 arch/x86/mach-generic/numaq.c            |    5 -
 arch/x86/mach-generic/summit.c           |    5 -
 arch/x86/xen/smp.c                       |   15 +--
 include/asm-x86/bigsmp/apic.h            |    4 -
 include/asm-x86/bigsmp/ipi.h             |   13 +--
 include/asm-x86/es7000/apic.h            |    8 +-
 include/asm-x86/es7000/ipi.h             |   12 +--
 include/asm-x86/genapic_32.h             |    6 -
 include/asm-x86/genapic_64.h             |    8 +-
 include/asm-x86/ipi.h                    |   21 ++++-
 include/asm-x86/mach-default/mach_apic.h |   20 ++---
 include/asm-x86/mach-default/mach_ipi.h  |   16 +---
 include/asm-x86/mach-generic/mach_apic.h |    2 
 include/asm-x86/numaq/apic.h             |    2 
 include/asm-x86/numaq/ipi.h              |   13 +--
 include/asm-x86/summit/apic.h            |    8 +-
 include/asm-x86/summit/ipi.h             |   13 +--
 30 files changed, 265 insertions(+), 243 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/apic.c
+++ struct-cpumasks/arch/x86/kernel/apic.c
@@ -460,7 +460,7 @@ static void lapic_timer_setup(enum clock
 static void lapic_timer_broadcast(cpumask_t mask)
 {
 #ifdef CONFIG_SMP
-	send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
+	send_IPI_mask(&mask, LOCAL_TIMER_VECTOR);
 #endif
 }
 
--- struct-cpumasks.orig/arch/x86/kernel/crash.c
+++ struct-cpumasks/arch/x86/kernel/crash.c
@@ -77,10 +77,7 @@ static int crash_nmi_callback(struct not
 
 static void smp_send_nmi_allbutself(void)
 {
-	cpumask_t mask = cpu_online_map;
-	cpu_clear(safe_smp_processor_id(), mask);
-	if (!cpus_empty(mask))
-		send_IPI_mask(mask, NMI_VECTOR);
+	send_IPI_allbutself(NMI_VECTOR);
 }
 
 static struct notifier_block crash_nmi_nb = {
--- struct-cpumasks.orig/arch/x86/kernel/genapic_flat_64.c
+++ struct-cpumasks/arch/x86/kernel/genapic_flat_64.c
@@ -30,12 +30,12 @@ static int __init flat_acpi_madt_oem_che
 	return 1;
 }
 
-static cpumask_t flat_target_cpus(void)
+static void flat_target_cpus(cpumask_t *retmask)
 {
-	return cpu_online_map;
+	*retmask = cpu_online_map;
 }
 
-static cpumask_t flat_vector_allocation_domain(int cpu)
+static void flat_vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
 	/* Careful. Some cpus do not strictly honor the set of cpus
 	 * specified in the interrupt destination when using lowest
@@ -45,8 +45,7 @@ static cpumask_t flat_vector_allocation_
 	 * deliver interrupts to the wrong hyperthread when only one
 	 * hyperthread was specified in the interrupt desitination.
 	 */
-	cpumask_t domain = { { [0] = APIC_ALL_CPUS, } };
-	return domain;
+	*retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS, } };
 }
 
 /*
@@ -69,9 +68,8 @@ static void flat_init_apic_ldr(void)
 	apic_write(APIC_LDR, val);
 }
 
-static void flat_send_IPI_mask(cpumask_t cpumask, int vector)
+static void inline _flat_send_IPI_mask(unsigned long mask, int vector)
 {
-	unsigned long mask = cpus_addr(cpumask)[0];
 	unsigned long flags;
 
 	local_irq_save(flags);
@@ -79,20 +77,30 @@ static void flat_send_IPI_mask(cpumask_t
 	local_irq_restore(flags);
 }
 
+static void flat_send_IPI_mask(const cpumask_t *cpumask, int vector)
+{
+	unsigned long mask = cpus_addr(*cpumask)[0];
+
+	_flat_send_IPI_mask(mask, vector);
+}
+
 static void flat_send_IPI_allbutself(int vector)
 {
+	int cpu = smp_processor_id();
 #ifdef	CONFIG_HOTPLUG_CPU
 	int hotplug = 1;
 #else
 	int hotplug = 0;
 #endif
 	if (hotplug || vector == NMI_VECTOR) {
-		cpumask_t allbutme = cpu_online_map;
+		if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu))) {
+			unsigned long mask = cpus_addr(cpu_online_map)[0];
 
-		cpu_clear(smp_processor_id(), allbutme);
+			if (cpu < BITS_PER_LONG)
+				clear_bit(cpu, &mask);
 
-		if (!cpus_empty(allbutme))
-			flat_send_IPI_mask(allbutme, vector);
+			_flat_send_IPI_mask(mask, vector);
+		}
 	} else if (num_online_cpus() > 1) {
 		__send_IPI_shortcut(APIC_DEST_ALLBUT, vector,APIC_DEST_LOGICAL);
 	}
@@ -101,7 +109,7 @@ static void flat_send_IPI_allbutself(int
 static void flat_send_IPI_all(int vector)
 {
 	if (vector == NMI_VECTOR)
-		flat_send_IPI_mask(cpu_online_map, vector);
+		flat_send_IPI_mask(&cpu_online_map, vector);
 	else
 		__send_IPI_shortcut(APIC_DEST_ALLINC, vector, APIC_DEST_LOGICAL);
 }
@@ -135,9 +143,9 @@ static int flat_apic_id_registered(void)
 	return physid_isset(read_xapic_id(), phys_cpu_present_map);
 }
 
-static unsigned int flat_cpu_mask_to_apicid(cpumask_t cpumask)
+static unsigned int flat_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
-	return cpus_addr(cpumask)[0] & APIC_ALL_CPUS;
+	return cpus_addr(*cpumask)[0] & APIC_ALL_CPUS;
 }
 
 static unsigned int phys_pkg_id(int index_msb)
@@ -193,30 +201,28 @@ static cpumask_t physflat_target_cpus(vo
 	return cpu_online_map;
 }
 
-static cpumask_t physflat_vector_allocation_domain(int cpu)
+static void physflat_vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
-	return cpumask_of_cpu(cpu);
+	cpus_clear(*retmask);
+	cpu_set(cpu, *retmask);
 }
 
-static void physflat_send_IPI_mask(cpumask_t cpumask, int vector)
+static void physflat_send_IPI_mask(const cpumask_t *cpumask, int vector)
 {
 	send_IPI_mask_sequence(cpumask, vector);
 }
 
 static void physflat_send_IPI_allbutself(int vector)
 {
-	cpumask_t allbutme = cpu_online_map;
-
-	cpu_clear(smp_processor_id(), allbutme);
-	physflat_send_IPI_mask(allbutme, vector);
+	send_IPI_mask_allbutself(&cpu_online_map, vector);
 }
 
 static void physflat_send_IPI_all(int vector)
 {
-	physflat_send_IPI_mask(cpu_online_map, vector);
+	physflat_send_IPI_mask(&cpu_online_map, vector);
 }
 
-static unsigned int physflat_cpu_mask_to_apicid(cpumask_t cpumask)
+static unsigned int physflat_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int cpu;
 
@@ -224,7 +230,7 @@ static unsigned int physflat_cpu_mask_to
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	if ((unsigned)cpu < nr_cpu_ids)
 		return per_cpu(x86_cpu_to_apicid, cpu);
 	else
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_cluster.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_cluster.c
@@ -30,11 +30,10 @@ static cpumask_t x2apic_target_cpus(void
 /*
  * for now each logical cpu is in its own vector allocation domain.
  */
-static cpumask_t x2apic_vector_allocation_domain(int cpu)
+static void x2apic_vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
-	cpumask_t domain = CPU_MASK_NONE;
-	cpu_set(cpu, domain);
-	return domain;
+	cpus_clear(*retmask);
+	cpu_set(cpu, *retmask);
 }
 
 static void __x2apic_send_IPI_dest(unsigned int apicid, int vector,
@@ -56,32 +55,37 @@ static void __x2apic_send_IPI_dest(unsig
  * at once. We have 16 cpu's in a cluster. This will minimize IPI register
  * writes.
  */
-static void x2apic_send_IPI_mask(cpumask_t mask, int vector)
+static void x2apic_send_IPI_mask(const cpumask_t *mask, int vector)
 {
 	unsigned long flags;
 	unsigned long query_cpu;
 
 	local_irq_save(flags);
-	for_each_cpu_mask(query_cpu, mask) {
-		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_logical_apicid, query_cpu),
-				       vector, APIC_DEST_LOGICAL);
-	}
+	for_each_cpu_mask_nr(query_cpu, *mask)
+		__x2apic_send_IPI_dest(
+			per_cpu(x86_cpu_to_logical_apicid, query_cpu),
+			vector, APIC_DEST_LOGICAL);
 	local_irq_restore(flags);
 }
 
 static void x2apic_send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-
-	cpu_clear(smp_processor_id(), mask);
+	unsigned long flags;
+	unsigned long query_cpu;
+	unsigned long this_cpu = smp_processor_id();
 
-	if (!cpus_empty(mask))
-		x2apic_send_IPI_mask(mask, vector);
+	local_irq_save(flags);
+	for_each_online_cpu(query_cpu)
+		if (query_cpu != this_cpu)
+			__x2apic_send_IPI_dest(
+				per_cpu(x86_cpu_to_logical_apicid, query_cpu),
+				vector, APIC_DEST_LOGICAL);
+	local_irq_restore(flags);
 }
 
 static void x2apic_send_IPI_all(int vector)
 {
-	x2apic_send_IPI_mask(cpu_online_map, vector);
+	x2apic_send_IPI_mask(&cpu_online_map, vector);
 }
 
 static int x2apic_apic_id_registered(void)
@@ -89,7 +93,7 @@ static int x2apic_apic_id_registered(voi
 	return 1;
 }
 
-static unsigned int x2apic_cpu_mask_to_apicid(cpumask_t cpumask)
+static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int cpu;
 
@@ -97,7 +101,7 @@ static unsigned int x2apic_cpu_mask_to_a
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	if ((unsigned)cpu < NR_CPUS)
 		return per_cpu(x86_cpu_to_logical_apicid, cpu);
 	else
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_phys.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_phys.c
@@ -34,11 +34,10 @@ static cpumask_t x2apic_target_cpus(void
 	return cpumask_of_cpu(0);
 }
 
-static cpumask_t x2apic_vector_allocation_domain(int cpu)
+static void x2apic_vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
-	cpumask_t domain = CPU_MASK_NONE;
-	cpu_set(cpu, domain);
-	return domain;
+	cpus_clear(*retmask);
+	cpu_set(cpu, *retmask);
 }
 
 static void __x2apic_send_IPI_dest(unsigned int apicid, int vector,
@@ -54,13 +53,13 @@ static void __x2apic_send_IPI_dest(unsig
 	x2apic_icr_write(cfg, apicid);
 }
 
-static void x2apic_send_IPI_mask(cpumask_t mask, int vector)
+static void x2apic_send_IPI_mask(const cpumask_t *mask, int vector)
 {
 	unsigned long flags;
 	unsigned long query_cpu;
 
 	local_irq_save(flags);
-	for_each_cpu_mask(query_cpu, mask) {
+	for_each_cpu_mask_nr(query_cpu, *mask) {
 		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
 				       vector, APIC_DEST_PHYSICAL);
 	}
@@ -69,17 +68,21 @@ static void x2apic_send_IPI_mask(cpumask
 
 static void x2apic_send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-
-	cpu_clear(smp_processor_id(), mask);
+	unsigned long flags;
+	unsigned long query_cpu;
+	unsigned long this_cpu = smp_processor_id();
 
-	if (!cpus_empty(mask))
-		x2apic_send_IPI_mask(mask, vector);
+	local_irq_save(flags);
+	for_each_online_cpu(query_cpu)
+		if (query_cpu != this_cpu)
+		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
+				       vector, APIC_DEST_PHYSICAL);
+	local_irq_restore(flags);
 }
 
 static void x2apic_send_IPI_all(int vector)
 {
-	x2apic_send_IPI_mask(cpu_online_map, vector);
+	x2apic_send_IPI_mask(&cpu_online_map, vector);
 }
 
 static int x2apic_apic_id_registered(void)
@@ -87,7 +90,7 @@ static int x2apic_apic_id_registered(voi
 	return 1;
 }
 
-static unsigned int x2apic_cpu_mask_to_apicid(cpumask_t cpumask)
+static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int cpu;
 
@@ -95,7 +98,7 @@ static unsigned int x2apic_cpu_mask_to_a
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	if ((unsigned)cpu < NR_CPUS)
 		return per_cpu(x86_cpu_to_apicid, cpu);
 	else
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_uv_x.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_uv_x.c
@@ -81,11 +81,10 @@ static cpumask_t uv_target_cpus(void)
 	return cpumask_of_cpu(0);
 }
 
-static cpumask_t uv_vector_allocation_domain(int cpu)
+static void uv_vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
-	cpumask_t domain = CPU_MASK_NONE;
-	cpu_set(cpu, domain);
-	return domain;
+	cpus_clear(*retmask);
+	cpu_set(cpu, *retmask);
 }
 
 int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip)
@@ -124,28 +123,27 @@ static void uv_send_IPI_one(int cpu, int
 	uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
 }
 
-static void uv_send_IPI_mask(cpumask_t mask, int vector)
+static void uv_send_IPI_mask(const cpumask_t *mask, int vector)
 {
 	unsigned int cpu;
 
-	for_each_possible_cpu(cpu)
-		if (cpu_isset(cpu, mask))
-			uv_send_IPI_one(cpu, vector);
+	for_each_cpu_mask_nr(cpu, *mask)
+		uv_send_IPI_one(cpu, vector);
 }
 
 static void uv_send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-
-	cpu_clear(smp_processor_id(), mask);
+	unsigned int cpu;
+	unsigned int this_cpu = smp_processor_id();
 
-	if (!cpus_empty(mask))
-		uv_send_IPI_mask(mask, vector);
+	for_each_online_cpu(cpu)
+		if (cpu != this_cpu)
+			uv_send_IPI_one(cpu, vector);
 }
 
 static void uv_send_IPI_all(int vector)
 {
-	uv_send_IPI_mask(cpu_online_map, vector);
+	uv_send_IPI_mask(&cpu_online_map, vector);
 }
 
 static int uv_apic_id_registered(void)
@@ -157,7 +155,7 @@ static void uv_init_apic_ldr(void)
 {
 }
 
-static unsigned int uv_cpu_mask_to_apicid(cpumask_t cpumask)
+static unsigned int uv_cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int cpu;
 
@@ -165,7 +163,7 @@ static unsigned int uv_cpu_mask_to_apici
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	if ((unsigned)cpu < nr_cpu_ids)
 		return per_cpu(x86_cpu_to_apicid, cpu);
 	else
--- struct-cpumasks.orig/arch/x86/kernel/io_apic.c
+++ struct-cpumasks/arch/x86/kernel/io_apic.c
@@ -529,7 +529,7 @@ static void __target_IO_APIC_irq(unsigne
 	}
 }
 
-static int assign_irq_vector(int irq, cpumask_t mask);
+static int assign_irq_vector(int irq, const cpumask_t *mask);
 
 static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t mask)
 {
@@ -544,11 +544,11 @@ static void set_ioapic_affinity_irq(unsi
 		return;
 
 	cfg = irq_cfg(irq);
-	if (assign_irq_vector(irq, mask))
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 	/*
 	 * Only the high 8 bits are valid.
 	 */
@@ -1205,7 +1205,7 @@ void unlock_vector_lock(void)
 	spin_unlock(&vector_lock);
 }
 
-static int __assign_irq_vector(int irq, cpumask_t mask)
+static int __assign_irq_vector(int irq, const cpumask_t *mask)
 {
 	/*
 	 * NOTE! The local APIC isn't very good at handling
@@ -1222,37 +1222,33 @@ static int __assign_irq_vector(int irq, 
 	unsigned int old_vector;
 	int cpu;
 	struct irq_cfg *cfg;
+	cpumask_t tmpmask;
 
 	cfg = irq_cfg(irq);
 
-	/* Only try and allocate irqs on cpus that are present */
-	cpus_and(mask, mask, cpu_online_map);
-
 	if ((cfg->move_in_progress) || cfg->move_cleanup_count)
 		return -EBUSY;
 
 	old_vector = cfg->vector;
 	if (old_vector) {
-		cpumask_t tmp;
-		cpus_and(tmp, cfg->domain, mask);
-		if (!cpus_empty(tmp))
+		cpus_and(tmpmask, *mask, cpu_online_map);
+		cpus_and(tmpmask, tmpmask, cfg->domain);
+		if (!cpus_empty(tmpmask))
 			return 0;
 	}
 
-	for_each_cpu_mask_nr(cpu, mask) {
-		cpumask_t domain, new_mask;
+	for_each_online_cpu_mask_nr(cpu, *mask) {
 		int new_cpu;
 		int vector, offset;
 
-		domain = vector_allocation_domain(cpu);
-		cpus_and(new_mask, domain, cpu_online_map);
+		vector_allocation_domain(cpu, &tmpmask);
 
 		vector = current_vector;
 		offset = current_offset;
 next:
 		vector += 8;
 		if (vector >= first_system_vector) {
-			/* If we run out of vectors on large boxen, must share them. */
+			/* If no more vectors on large boxen, must share them */
 			offset = (offset + 1) % 8;
 			vector = FIRST_DEVICE_VECTOR + offset;
 		}
@@ -1265,7 +1261,7 @@ next:
 		if (vector == SYSCALL_VECTOR)
 			goto next;
 #endif
-		for_each_cpu_mask_nr(new_cpu, new_mask)
+		for_each_online_cpu_mask_nr(new_cpu, tmpmask)
 			if (per_cpu(vector_irq, new_cpu)[vector] != -1)
 				goto next;
 		/* Found one! */
@@ -1275,16 +1271,16 @@ next:
 			cfg->move_in_progress = 1;
 			cfg->old_domain = cfg->domain;
 		}
-		for_each_cpu_mask_nr(new_cpu, new_mask)
+		for_each_cpu_mask_nr(new_cpu, tmpmask)
 			per_cpu(vector_irq, new_cpu)[vector] = irq;
 		cfg->vector = vector;
-		cfg->domain = domain;
+		cfg->domain = tmpmask;
 		return 0;
 	}
 	return -ENOSPC;
 }
 
-static int assign_irq_vector(int irq, cpumask_t mask)
+static int assign_irq_vector(int irq, const cpumask_t *mask)
 {
 	int err;
 	unsigned long flags;
@@ -1484,8 +1480,8 @@ static void setup_IO_APIC_irq(int apic, 
 
 	cfg = irq_cfg(irq);
 
-	mask = TARGET_CPUS;
-	if (assign_irq_vector(irq, mask))
+	TARGET_CPUS(&mask);
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cpus_and(mask, cfg->domain, mask);
@@ -1498,7 +1494,7 @@ static void setup_IO_APIC_irq(int apic, 
 
 
 	if (setup_ioapic_entry(mp_ioapics[apic].mp_apicid, irq, &entry,
-			       cpu_mask_to_apicid(mask), trigger, polarity,
+			       cpu_mask_to_apicid(&mask), trigger, polarity,
 			       cfg->vector)) {
 		printk("Failed to setup ioapic entry for ioapic  %d, pin %d\n",
 		       mp_ioapics[apic].mp_apicid, pin);
@@ -1567,6 +1563,7 @@ static void __init setup_timer_IRQ0_pin(
 					int vector)
 {
 	struct IO_APIC_route_entry entry;
+	cpumask_t mask;
 
 #ifdef CONFIG_INTR_REMAP
 	if (intr_remapping_enabled)
@@ -1574,6 +1571,7 @@ static void __init setup_timer_IRQ0_pin(
 #endif
 
 	memset(&entry, 0, sizeof(entry));
+	TARGET_CPUS(&mask);
 
 	/*
 	 * We use logical delivery to get the timer IRQ
@@ -1581,7 +1579,7 @@ static void __init setup_timer_IRQ0_pin(
 	 */
 	entry.dest_mode = INT_DEST_MODE;
 	entry.mask = 1;					/* mask IRQ now */
-	entry.dest = cpu_mask_to_apicid(TARGET_CPUS);
+	entry.dest = cpu_mask_to_apicid(&mask);
 	entry.delivery_mode = INT_DELIVERY_MODE;
 	entry.polarity = 0;
 	entry.trigger = 0;
@@ -2204,7 +2202,7 @@ static int ioapic_retrigger_irq(unsigned
 	unsigned long flags;
 
 	spin_lock_irqsave(&vector_lock, flags);
-	send_IPI_mask(cpumask_of_cpu(first_cpu(cfg->domain)), cfg->vector);
+	send_IPI_mask(&cpumask_of_cpu(first_cpu(cfg->domain)), cfg->vector);
 	spin_unlock_irqrestore(&vector_lock, flags);
 
 	return 1;
@@ -2253,17 +2251,17 @@ static DECLARE_DELAYED_WORK(ir_migration
  * as simple as edge triggered migration and we can do the irq migration
  * with a simple atomic update to IO-APIC RTE.
  */
-static void migrate_ioapic_irq(int irq, cpumask_t mask)
+static void migrate_ioapic_irq(int irq, const cpumask_t *mask)
 {
 	struct irq_cfg *cfg;
 	struct irq_desc *desc;
-	cpumask_t tmp, cleanup_mask;
+	cpumask_t tmp;
 	struct irte irte;
 	int modify_ioapic_rte;
 	unsigned int dest;
 	unsigned long flags;
 
-	cpus_and(tmp, mask, cpu_online_map);
+	cpus_and(tmp, *mask, cpu_online_map);
 	if (cpus_empty(tmp))
 		return;
 
@@ -2274,8 +2272,8 @@ static void migrate_ioapic_irq(int irq, 
 		return;
 
 	cfg = irq_cfg(irq);
-	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	cpus_and(tmp, cfg->domain, *mask);
+	dest = cpu_mask_to_apicid(&tmp);
 
 	desc = irq_to_desc(irq);
 	modify_ioapic_rte = desc->status & IRQ_LEVEL;
@@ -2294,13 +2292,13 @@ static void migrate_ioapic_irq(int irq, 
 	modify_irte(irq, &irte);
 
 	if (cfg->move_in_progress) {
-		cpus_and(cleanup_mask, cfg->old_domain, cpu_online_map);
-		cfg->move_cleanup_count = cpus_weight(cleanup_mask);
-		send_IPI_mask(cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+		cpus_and(tmp, cfg->old_domain, cpu_online_map);
+		cfg->move_cleanup_count = cpus_weight(tmp);
+		send_IPI_mask(&tmp, IRQ_MOVE_CLEANUP_VECTOR);
 		cfg->move_in_progress = 0;
 	}
 
-	desc->affinity = mask;
+	desc->affinity = *mask;
 }
 
 static int migrate_irq_remapped_level(int irq)
@@ -2322,7 +2320,7 @@ static int migrate_irq_remapped_level(in
 	}
 
 	/* everthing is clear. we have right of way */
-	migrate_ioapic_irq(irq, desc->pending_mask);
+	migrate_ioapic_irq(irq, &desc->pending_mask);
 
 	ret = 0;
 	desc->status &= ~IRQ_MOVE_PENDING;
@@ -2370,7 +2368,7 @@ static void set_ir_ioapic_affinity_irq(u
 		return;
 	}
 
-	migrate_ioapic_irq(irq, mask);
+	migrate_ioapic_irq(irq, &mask);
 }
 #endif
 
@@ -2426,7 +2424,7 @@ static void irq_complete_move(unsigned i
 
 		cpus_and(cleanup_mask, cfg->old_domain, cpu_online_map);
 		cfg->move_cleanup_count = cpus_weight(cleanup_mask);
-		send_IPI_mask(cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+		send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
 		cfg->move_in_progress = 0;
 	}
 }
@@ -2756,7 +2754,9 @@ static inline void __init check_timer(vo
 	unsigned long flags;
 	unsigned int ver;
 	int no_pin1 = 0;
+	cpumask_t mask;
 
+	TARGET_CPUS(&mask);
 	local_irq_save(flags);
 
         ver = apic_read(APIC_LVR);
@@ -2766,7 +2766,7 @@ static inline void __init check_timer(vo
 	 * get/set the timer IRQ vector:
 	 */
 	disable_8259A_irq(0);
-	assign_irq_vector(0, TARGET_CPUS);
+	assign_irq_vector(0, &mask);
 
 	/*
 	 * As IRQ0 is to be enabled in the 8259A, the virtual
@@ -3066,7 +3066,9 @@ unsigned int create_irq_nr(unsigned int 
 	unsigned int new;
 	unsigned long flags;
 	struct irq_cfg *cfg_new;
+	cpumask_t mask;
 
+	TARGET_CPUS(&mask);
 #ifndef CONFIG_HAVE_SPARSE_IRQ
 	irq_want = nr_irqs - 1;
 #endif
@@ -3082,7 +3084,7 @@ unsigned int create_irq_nr(unsigned int 
 		/* check if need to create one */
 		if (!cfg_new)
 			cfg_new = irq_cfg_alloc(new);
-		if (__assign_irq_vector(new, TARGET_CPUS) == 0)
+		if (__assign_irq_vector(new, &mask) == 0)
 			irq = new;
 		break;
 	}
@@ -3131,14 +3133,14 @@ static int msi_compose_msg(struct pci_de
 	unsigned dest;
 	cpumask_t tmp;
 
-	tmp = TARGET_CPUS;
-	err = assign_irq_vector(irq, tmp);
+	TARGET_CPUS(&tmp);
+	err = assign_irq_vector(irq, &tmp);
 	if (err)
 		return err;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, tmp);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 
 #ifdef CONFIG_INTR_REMAP
 	if (irq_remapped(irq)) {
@@ -3204,12 +3206,12 @@ static void set_msi_irq_affinity(unsigne
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, mask))
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 
 	read_msi_msg(irq, &msg);
 
@@ -3232,7 +3234,7 @@ static void ir_set_msi_irq_affinity(unsi
 {
 	struct irq_cfg *cfg;
 	unsigned int dest;
-	cpumask_t tmp, cleanup_mask;
+	cpumask_t tmp;
 	struct irte irte;
 	struct irq_desc *desc;
 
@@ -3243,12 +3245,12 @@ static void ir_set_msi_irq_affinity(unsi
 	if (get_irte(irq, &irte))
 		return;
 
-	if (assign_irq_vector(irq, mask))
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 
 	irte.vector = cfg->vector;
 	irte.dest_id = IRTE_DEST(dest);
@@ -3264,9 +3266,9 @@ static void ir_set_msi_irq_affinity(unsi
 	 * vector allocation.
 	 */
 	if (cfg->move_in_progress) {
-		cpus_and(cleanup_mask, cfg->old_domain, cpu_online_map);
-		cfg->move_cleanup_count = cpus_weight(cleanup_mask);
-		send_IPI_mask(cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+		cpus_and(tmp, cfg->old_domain, cpu_online_map);
+		cfg->move_cleanup_count = cpus_weight(tmp);
+		send_IPI_mask(&tmp, IRQ_MOVE_CLEANUP_VECTOR);
 		cfg->move_in_progress = 0;
 	}
 
@@ -3483,12 +3485,12 @@ static void dmar_msi_set_affinity(unsign
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, mask))
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 
 	dmar_msi_read(irq, &msg);
 
@@ -3544,12 +3546,12 @@ static void hpet_msi_set_affinity(unsign
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, mask))
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 
 	hpet_msi_read(irq, &msg);
 
@@ -3624,12 +3626,12 @@ static void set_ht_irq_affinity(unsigned
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, mask))
+	if (assign_irq_vector(irq, &mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(tmp);
+	dest = cpu_mask_to_apicid(&tmp);
 
 	target_ht_irq(irq, dest, cfg->vector);
 	desc = irq_to_desc(irq);
@@ -3654,15 +3656,15 @@ int arch_setup_ht_irq(unsigned int irq, 
 	int err;
 	cpumask_t tmp;
 
-	tmp = TARGET_CPUS;
-	err = assign_irq_vector(irq, tmp);
+	TARGET_CPUS(&tmp);
+	err = assign_irq_vector(irq, &tmp);
 	if (!err) {
 		struct ht_irq_msg msg;
 		unsigned dest;
 
 		cfg = irq_cfg(irq);
 		cpus_and(tmp, cfg->domain, tmp);
-		dest = cpu_mask_to_apicid(tmp);
+		dest = cpu_mask_to_apicid(&tmp);
 
 		msg.address_hi = HT_IRQ_HIGH_DEST_ID(dest);
 
@@ -3868,10 +3870,12 @@ void __init setup_ioapic_dest(void)
 {
 	int pin, ioapic, irq, irq_entry;
 	struct irq_cfg *cfg;
+	cpumask_t mask;
 
 	if (skip_ioapic_setup == 1)
 		return;
 
+	TARGET_CPUS(&mask);
 	for (ioapic = 0; ioapic < nr_ioapics; ioapic++) {
 		for (pin = 0; pin < nr_ioapic_registers[ioapic]; pin++) {
 			irq_entry = find_irq_entry(ioapic, pin, mp_INT);
@@ -3890,10 +3894,10 @@ void __init setup_ioapic_dest(void)
 						  irq_polarity(irq_entry));
 #ifdef CONFIG_INTR_REMAP
 			else if (intr_remapping_enabled)
-				set_ir_ioapic_affinity_irq(irq, TARGET_CPUS);
+				set_ir_ioapic_affinity_irq(irq, mask);
 #endif
 			else
-				set_ioapic_affinity_irq(irq, TARGET_CPUS);
+				set_ioapic_affinity_irq(irq, mask);
 		}
 
 	}
--- struct-cpumasks.orig/arch/x86/kernel/ipi.c
+++ struct-cpumasks/arch/x86/kernel/ipi.c
@@ -116,9 +116,9 @@ static inline void __send_IPI_dest_field
 /*
  * This is only used on smaller machines.
  */
-void send_IPI_mask_bitmask(cpumask_t cpumask, int vector)
+void send_IPI_mask_bitmask(const cpumask_t *cpumask, int vector)
 {
-	unsigned long mask = cpus_addr(cpumask)[0];
+	unsigned long mask = cpus_addr(*cpumask)[0];
 	unsigned long flags;
 
 	local_irq_save(flags);
@@ -127,7 +127,7 @@ void send_IPI_mask_bitmask(cpumask_t cpu
 	local_irq_restore(flags);
 }
 
-void send_IPI_mask_sequence(cpumask_t mask, int vector)
+void send_IPI_mask_sequence(const cpumask_t *mask, int vector)
 {
 	unsigned long flags;
 	unsigned int query_cpu;
@@ -139,12 +139,24 @@ void send_IPI_mask_sequence(cpumask_t ma
 	 */
 
 	local_irq_save(flags);
-	for_each_possible_cpu(query_cpu) {
-		if (cpu_isset(query_cpu, mask)) {
+	for_each_cpu_mask_nr(query_cpu, *mask)
+		__send_IPI_dest_field(cpu_to_logical_apicid(query_cpu), vector);
+	local_irq_restore(flags);
+}
+
+void send_IPI_mask_allbutself(const cpumask_t *mask, int vector)
+{
+	unsigned long flags;
+	unsigned int query_cpu;
+	unsigned int this_cpu = smp_processor_id();
+
+	/* See Hack comment above */
+
+	local_irq_save(flags);
+	for_each_cpu_mask_nr(query_cpu, *mask)
+		if (query_cpu != this_cpu)
 			__send_IPI_dest_field(cpu_to_logical_apicid(query_cpu),
 					      vector);
-		}
-	}
 	local_irq_restore(flags);
 }
 
--- struct-cpumasks.orig/arch/x86/kernel/smp.c
+++ struct-cpumasks/arch/x86/kernel/smp.c
@@ -118,26 +118,23 @@ static void native_smp_send_reschedule(i
 		WARN_ON(1);
 		return;
 	}
-	send_IPI_mask(cpumask_of_cpu(cpu), RESCHEDULE_VECTOR);
+	send_IPI_mask(&cpumask_of_cpu(cpu), RESCHEDULE_VECTOR);
 }
 
 void native_send_call_func_single_ipi(int cpu)
 {
-	send_IPI_mask(cpumask_of_cpu(cpu), CALL_FUNCTION_SINGLE_VECTOR);
+	send_IPI_mask(&cpumask_of_cpu(cpu), CALL_FUNCTION_SINGLE_VECTOR);
 }
 
 void native_send_call_func_ipi(const cpumask_t *mask)
 {
-	cpumask_t allbutself;
+	int cpu = smp_processor_id();
 
-	allbutself = cpu_online_map;
-	cpu_clear(smp_processor_id(), allbutself);
-
-	if (cpus_equal(*mask, allbutself) &&
+	if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu)) &&
 	    cpus_equal(cpu_online_map, cpu_callout_map))
 		send_IPI_allbutself(CALL_FUNCTION_VECTOR);
 	else
-		send_IPI_mask(*mask, CALL_FUNCTION_VECTOR);
+		send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
 }
 
 static void stop_this_cpu(void *dummy)
--- struct-cpumasks.orig/arch/x86/kernel/tlb_32.c
+++ struct-cpumasks/arch/x86/kernel/tlb_32.c
@@ -158,7 +158,7 @@ void native_flush_tlb_others(const cpuma
 	 * We have to send the IPI only to
 	 * CPUs affected.
 	 */
-	send_IPI_mask(cpumask, INVALIDATE_TLB_VECTOR);
+	send_IPI_mask(&cpumask, INVALIDATE_TLB_VECTOR);
 
 	while (!cpus_empty(flush_cpumask))
 		/* nothing. lockup detection does not belong here */
--- struct-cpumasks.orig/arch/x86/kernel/tlb_64.c
+++ struct-cpumasks/arch/x86/kernel/tlb_64.c
@@ -186,7 +186,7 @@ void native_flush_tlb_others(const cpuma
 	 * We have to send the IPI only to
 	 * CPUs affected.
 	 */
-	send_IPI_mask(cpumask, INVALIDATE_TLB_VECTOR_START + sender);
+	send_IPI_mask(&cpumask, INVALIDATE_TLB_VECTOR_START + sender);
 
 	while (!cpus_empty(f->flush_cpumask))
 		cpu_relax();
--- struct-cpumasks.orig/arch/x86/mach-generic/bigsmp.c
+++ struct-cpumasks/arch/x86/mach-generic/bigsmp.c
@@ -41,9 +41,10 @@ static const struct dmi_system_id bigsmp
 	 { }
 };
 
-static cpumask_t vector_allocation_domain(int cpu)
+static void vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
-        return cpumask_of_cpu(cpu);
+	cpus_clear(*retmask);
+	cpu_set(cpu, *retmask);
 }
 
 static int probe_bigsmp(void)
--- struct-cpumasks.orig/arch/x86/mach-generic/es7000.c
+++ struct-cpumasks/arch/x86/mach-generic/es7000.c
@@ -75,7 +75,7 @@ static int __init acpi_madt_oem_check(ch
 }
 #endif
 
-static cpumask_t vector_allocation_domain(int cpu)
+static void vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
 	/* Careful. Some cpus do not strictly honor the set of cpus
 	 * specified in the interrupt destination when using lowest
@@ -85,8 +85,7 @@ static cpumask_t vector_allocation_domai
 	 * deliver interrupts to the wrong hyperthread when only one
 	 * hyperthread was specified in the interrupt desitination.
 	 */
-	cpumask_t domain = { { [0] = APIC_ALL_CPUS, } };
-	return domain;
+	*retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS, } };
 }
 
 struct genapic __initdata_refok apic_es7000 = APIC_INIT("es7000", probe_es7000);
--- struct-cpumasks.orig/arch/x86/mach-generic/numaq.c
+++ struct-cpumasks/arch/x86/mach-generic/numaq.c
@@ -38,7 +38,7 @@ static int acpi_madt_oem_check(char *oem
 	return 0;
 }
 
-static cpumask_t vector_allocation_domain(int cpu)
+static void vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
 	/* Careful. Some cpus do not strictly honor the set of cpus
 	 * specified in the interrupt destination when using lowest
@@ -48,8 +48,7 @@ static cpumask_t vector_allocation_domai
 	 * deliver interrupts to the wrong hyperthread when only one
 	 * hyperthread was specified in the interrupt desitination.
 	 */
-	cpumask_t domain = { { [0] = APIC_ALL_CPUS, } };
-	return domain;
+	*retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS, } };
 }
 
 struct genapic apic_numaq = APIC_INIT("NUMAQ", probe_numaq);
--- struct-cpumasks.orig/arch/x86/mach-generic/summit.c
+++ struct-cpumasks/arch/x86/mach-generic/summit.c
@@ -23,7 +23,7 @@ static int probe_summit(void)
 	return 0;
 }
 
-static cpumask_t vector_allocation_domain(int cpu)
+static void vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
 	/* Careful. Some cpus do not strictly honor the set of cpus
 	 * specified in the interrupt destination when using lowest
@@ -33,8 +33,7 @@ static cpumask_t vector_allocation_domai
 	 * deliver interrupts to the wrong hyperthread when only one
 	 * hyperthread was specified in the interrupt desitination.
 	 */
-	cpumask_t domain = { { [0] = APIC_ALL_CPUS, } };
-	return domain;
+	*retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS, } };
 }
 
 struct genapic apic_summit = APIC_INIT("summit", probe_summit);
--- struct-cpumasks.orig/arch/x86/xen/smp.c
+++ struct-cpumasks/arch/x86/xen/smp.c
@@ -408,24 +408,22 @@ static void xen_smp_send_reschedule(int 
 	xen_send_IPI_one(cpu, XEN_RESCHEDULE_VECTOR);
 }
 
-static void xen_send_IPI_mask(cpumask_t mask, enum ipi_vector vector)
+static void xen_send_IPI_mask(const cpumask_t *mask, enum ipi_vector vector)
 {
 	unsigned cpu;
 
-	cpus_and(mask, mask, cpu_online_map);
-
-	for_each_cpu_mask_nr(cpu, mask)
+	for_each_online_cpu_mask_nr(cpu, *mask)
 		xen_send_IPI_one(cpu, vector);
 }
 
-static void xen_smp_send_call_function_ipi(const cpumask_t *mask)
+static void xen_smp_send_call_function_ipi(const cpumask_t mask)
 {
 	int cpu;
 
-	xen_send_IPI_mask(*mask, XEN_CALL_FUNCTION_VECTOR);
+	xen_send_IPI_mask(&mask, XEN_CALL_FUNCTION_VECTOR);
 
 	/* Make sure other vcpus get a chance to run if they need to. */
-	for_each_cpu_mask_nr(cpu, *mask) {
+	for_each_cpu_mask_nr(cpu, mask) {
 		if (xen_vcpu_stolen(cpu)) {
 			HYPERVISOR_sched_op(SCHEDOP_yield, 0);
 			break;
@@ -435,7 +433,8 @@ static void xen_smp_send_call_function_i
 
 static void xen_smp_send_call_function_single_ipi(int cpu)
 {
-	xen_send_IPI_mask(cpumask_of_cpu(cpu), XEN_CALL_FUNCTION_SINGLE_VECTOR);
+	xen_send_IPI_mask(&cpumask_of_cpu(cpu),
+			  XEN_CALL_FUNCTION_SINGLE_VECTOR);
 }
 
 static irqreturn_t xen_call_function_interrupt(int irq, void *dev_id)
--- struct-cpumasks.orig/include/asm-x86/bigsmp/apic.h
+++ struct-cpumasks/include/asm-x86/bigsmp/apic.h
@@ -121,12 +121,12 @@ static inline int check_phys_apicid_pres
 }
 
 /* As we are using single CPU as destination, pick only one CPU here */
-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int cpu;
 	int apicid;	
 
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	apicid = cpu_to_logical_apicid(cpu);
 	return apicid;
 }
--- struct-cpumasks.orig/include/asm-x86/bigsmp/ipi.h
+++ struct-cpumasks/include/asm-x86/bigsmp/ipi.h
@@ -1,25 +1,22 @@
 #ifndef __ASM_MACH_IPI_H
 #define __ASM_MACH_IPI_H
 
-void send_IPI_mask_sequence(cpumask_t mask, int vector);
+void send_IPI_mask_sequence(cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(cpumask_t *mask, int vector);
 
-static inline void send_IPI_mask(cpumask_t mask, int vector)
+static inline void send_IPI_mask(cpumask_t *mask, int vector)
 {
 	send_IPI_mask_sequence(mask, vector);
 }
 
 static inline void send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-	cpu_clear(smp_processor_id(), mask);
-
-	if (!cpus_empty(mask))
-		send_IPI_mask(mask, vector);
+	send_IPI_mask_allbutself(&cpu_online_map, vector);
 }
 
 static inline void send_IPI_all(int vector)
 {
-	send_IPI_mask(cpu_online_map, vector);
+	send_IPI_mask(&cpu_online_map, vector);
 }
 
 #endif /* __ASM_MACH_IPI_H */
--- struct-cpumasks.orig/include/asm-x86/es7000/apic.h
+++ struct-cpumasks/include/asm-x86/es7000/apic.h
@@ -144,14 +144,14 @@ static inline int check_phys_apicid_pres
 	return (1);
 }
 
-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
 	int cpu;
 	int apicid;
 
-	num_bits_set = cpus_weight(cpumask);
+	num_bits_set = cpus_weight(*cpumask);
 	/* Return id to all */
 	if (num_bits_set == NR_CPUS)
 #if defined CONFIG_ES7000_CLUSTERED_APIC
@@ -163,10 +163,10 @@ static inline unsigned int cpu_mask_to_a
 	 * The cpus in the mask must all be on the apic cluster.  If are not
 	 * on the same apicid cluster return default value of TARGET_CPUS.
 	 */
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	apicid = cpu_to_logical_apicid(cpu);
 	while (cpus_found < num_bits_set) {
-		if (cpu_isset(cpu, cpumask)) {
+		if (cpu_isset(cpu, *cpumask)) {
 			int new_apicid = cpu_to_logical_apicid(cpu);
 			if (apicid_cluster(apicid) !=
 					apicid_cluster(new_apicid)){
--- struct-cpumasks.orig/include/asm-x86/es7000/ipi.h
+++ struct-cpumasks/include/asm-x86/es7000/ipi.h
@@ -1,24 +1,22 @@
 #ifndef __ASM_ES7000_IPI_H
 #define __ASM_ES7000_IPI_H
 
-void send_IPI_mask_sequence(cpumask_t mask, int vector);
+void send_IPI_mask_sequence(cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(cpumask_t *mask, int vector);
 
-static inline void send_IPI_mask(cpumask_t mask, int vector)
+static inline void send_IPI_mask(cpumask_t *mask, int vector)
 {
 	send_IPI_mask_sequence(mask, vector);
 }
 
 static inline void send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-	cpu_clear(smp_processor_id(), mask);
-	if (!cpus_empty(mask))
-		send_IPI_mask(mask, vector);
+	send_IPI_mask_allbutself(&cpu_online_map, vector);
 }
 
 static inline void send_IPI_all(int vector)
 {
-	send_IPI_mask(cpu_online_map, vector);
+	send_IPI_mask(&cpu_online_map, vector);
 }
 
 #endif /* __ASM_ES7000_IPI_H */
--- struct-cpumasks.orig/include/asm-x86/genapic_32.h
+++ struct-cpumasks/include/asm-x86/genapic_32.h
@@ -56,12 +56,12 @@ struct genapic {
 
 	unsigned (*get_apic_id)(unsigned long x);
 	unsigned long apic_id_mask;
-	unsigned int (*cpu_mask_to_apicid)(cpumask_t cpumask);
-	cpumask_t (*vector_allocation_domain)(int cpu);
+	unsigned int (*cpu_mask_to_apicid)(const cpumask_t *cpumask);
+	void (*vector_allocation_domain)(int cpu, cpumask_t *retmask);
 
 #ifdef CONFIG_SMP
 	/* ipi */
-	void (*send_IPI_mask)(cpumask_t mask, int vector);
+	void (*send_IPI_mask)(const cpumask_t *mask, int vector);
 	void (*send_IPI_allbutself)(int vector);
 	void (*send_IPI_all)(int vector);
 #endif
--- struct-cpumasks.orig/include/asm-x86/genapic_64.h
+++ struct-cpumasks/include/asm-x86/genapic_64.h
@@ -18,16 +18,16 @@ struct genapic {
 	u32 int_delivery_mode;
 	u32 int_dest_mode;
 	int (*apic_id_registered)(void);
-	cpumask_t (*target_cpus)(void);
-	cpumask_t (*vector_allocation_domain)(int cpu);
+	void (*target_cpus)(cpumask_t *retmask);
+	void (*vector_allocation_domain)(int cpu, cpumask_t *retmask);
 	void (*init_apic_ldr)(void);
 	/* ipi */
-	void (*send_IPI_mask)(cpumask_t mask, int vector);
+	void (*send_IPI_mask)(const cpumask_t *mask, int vector);
 	void (*send_IPI_allbutself)(int vector);
 	void (*send_IPI_all)(int vector);
 	void (*send_IPI_self)(int vector);
 	/* */
-	unsigned int (*cpu_mask_to_apicid)(cpumask_t cpumask);
+	unsigned int (*cpu_mask_to_apicid)(const cpumask_t *cpumask);
 	unsigned int (*phys_pkg_id)(int index_msb);
 	unsigned int (*get_apic_id)(unsigned long x);
 	unsigned long (*set_apic_id)(unsigned int id);
--- struct-cpumasks.orig/include/asm-x86/ipi.h
+++ struct-cpumasks/include/asm-x86/ipi.h
@@ -117,7 +117,7 @@ static inline void __send_IPI_dest_field
 	native_apic_mem_write(APIC_ICR, cfg);
 }
 
-static inline void send_IPI_mask_sequence(cpumask_t mask, int vector)
+static inline void send_IPI_mask_sequence(const cpumask_t *mask, int vector)
 {
 	unsigned long flags;
 	unsigned long query_cpu;
@@ -128,11 +128,28 @@ static inline void send_IPI_mask_sequenc
 	 * - mbligh
 	 */
 	local_irq_save(flags);
-	for_each_cpu_mask_nr(query_cpu, mask) {
+	for_each_cpu_mask_nr(query_cpu, *mask) {
 		__send_IPI_dest_field(per_cpu(x86_cpu_to_apicid, query_cpu),
 				      vector, APIC_DEST_PHYSICAL);
 	}
 	local_irq_restore(flags);
 }
 
+static inline void send_IPI_mask_allbutself(cpumask_t *mask, int vector)
+{
+	unsigned long flags;
+	unsigned int query_cpu;
+	unsigned int this_cpu = smp_processor_id();
+
+	/* See Hack comment above */
+
+	local_irq_save(flags);
+	for_each_cpu_mask_nr(query_cpu, *mask)
+		if (query_cpu != this_cpu)
+			__send_IPI_dest_field(
+				per_cpu(x86_cpu_to_apicid, query_cpu),
+				vector, APIC_DEST_PHYSICAL);
+	local_irq_restore(flags);
+}
+
 #endif /* ASM_X86__IPI_H */
--- struct-cpumasks.orig/include/asm-x86/mach-default/mach_apic.h
+++ struct-cpumasks/include/asm-x86/mach-default/mach_apic.h
@@ -8,12 +8,13 @@
 
 #define APIC_DFR_VALUE	(APIC_DFR_FLAT)
 
-static inline cpumask_t target_cpus(void)
+static inline void target_cpus(cpumask_t *retmask)
 { 
 #ifdef CONFIG_SMP
-	return cpu_online_map;
+	*retmask = cpu_online_map;
 #else
-	return cpumask_of_cpu(0);
+	cpus_clear(*retmask);
+	cpu_set(0, *retmask);
 #endif
 } 
 
@@ -24,7 +25,7 @@ static inline cpumask_t target_cpus(void
 #include <asm/genapic.h>
 #define INT_DELIVERY_MODE (genapic->int_delivery_mode)
 #define INT_DEST_MODE (genapic->int_dest_mode)
-#define TARGET_CPUS	  (genapic->target_cpus())
+#define TARGET_CPUS	  (genapic->target_cpus)
 #define apic_id_registered (genapic->apic_id_registered)
 #define init_apic_ldr (genapic->init_apic_ldr)
 #define cpu_mask_to_apicid (genapic->cpu_mask_to_apicid)
@@ -36,7 +37,7 @@ extern void setup_apic_routing(void);
 #else
 #define INT_DELIVERY_MODE dest_LowestPrio
 #define INT_DEST_MODE 1     /* logical delivery broadcast to all procs */
-#define TARGET_CPUS (target_cpus())
+#define TARGET_CPUS (target_cpus)
 /*
  * Set up the logical destination ID.
  *
@@ -59,9 +60,9 @@ static inline int apic_id_registered(voi
 	return physid_isset(read_apic_id(), phys_cpu_present_map);
 }
 
-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
-	return cpus_addr(cpumask)[0];
+	return cpus_addr(*cpumask)[0];
 }
 
 static inline u32 phys_pkg_id(u32 cpuid_apic, int index_msb)
@@ -86,7 +87,7 @@ static inline int apicid_to_node(int log
 #endif
 }
 
-static inline cpumask_t vector_allocation_domain(int cpu)
+static inline void vector_allocation_domain(int cpu, cpumask_t *retmask)
 {
         /* Careful. Some cpus do not strictly honor the set of cpus
          * specified in the interrupt destination when using lowest
@@ -96,8 +97,7 @@ static inline cpumask_t vector_allocatio
          * deliver interrupts to the wrong hyperthread when only one
          * hyperthread was specified in the interrupt desitination.
          */
-        cpumask_t domain = { { [0] = APIC_ALL_CPUS, } };
-        return domain;
+        *retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS, } };
 }
 #endif
 
--- struct-cpumasks.orig/include/asm-x86/mach-default/mach_ipi.h
+++ struct-cpumasks/include/asm-x86/mach-default/mach_ipi.h
@@ -4,7 +4,8 @@
 /* Avoid include hell */
 #define NMI_VECTOR 0x02
 
-void send_IPI_mask_bitmask(cpumask_t mask, int vector);
+void send_IPI_mask_bitmask(const cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);
 void __send_IPI_shortcut(unsigned int shortcut, int vector);
 
 extern int no_broadcast;
@@ -13,7 +14,7 @@ extern int no_broadcast;
 #include <asm/genapic.h>
 #define send_IPI_mask (genapic->send_IPI_mask)
 #else
-static inline void send_IPI_mask(cpumask_t mask, int vector)
+static inline void send_IPI_mask(const cpumask_t *mask, int vector)
 {
 	send_IPI_mask_bitmask(mask, vector);
 }
@@ -21,19 +22,16 @@ static inline void send_IPI_mask(cpumask
 
 static inline void __local_send_IPI_allbutself(int vector)
 {
-	if (no_broadcast || vector == NMI_VECTOR) {
-		cpumask_t mask = cpu_online_map;
-
-		cpu_clear(smp_processor_id(), mask);
-		send_IPI_mask(mask, vector);
-	} else
+	if (no_broadcast || vector == NMI_VECTOR)
+		send_IPI_mask_allbutself(&cpu_online_map, vector);
+	else
 		__send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
 }
 
 static inline void __local_send_IPI_all(int vector)
 {
 	if (no_broadcast || vector == NMI_VECTOR)
-		send_IPI_mask(cpu_online_map, vector);
+		send_IPI_mask(&cpu_online_map, vector);
 	else
 		__send_IPI_shortcut(APIC_DEST_ALLINC, vector);
 }
--- struct-cpumasks.orig/include/asm-x86/mach-generic/mach_apic.h
+++ struct-cpumasks/include/asm-x86/mach-generic/mach_apic.h
@@ -9,7 +9,7 @@
 #define INT_DEST_MODE (genapic->int_dest_mode)
 #undef APIC_DEST_LOGICAL
 #define APIC_DEST_LOGICAL (genapic->apic_destination_logical)
-#define TARGET_CPUS	  (genapic->target_cpus())
+#define TARGET_CPUS	  (genapic->target_cpus)
 #define apic_id_registered (genapic->apic_id_registered)
 #define init_apic_ldr (genapic->init_apic_ldr)
 #define ioapic_phys_id_map (genapic->ioapic_phys_id_map)
--- struct-cpumasks.orig/include/asm-x86/numaq/apic.h
+++ struct-cpumasks/include/asm-x86/numaq/apic.h
@@ -122,7 +122,7 @@ static inline void enable_apic_mode(void
  * We use physical apicids here, not logical, so just return the default
  * physical broadcast to stop people from breaking us
  */
-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	return (int) 0xF;
 }
--- struct-cpumasks.orig/include/asm-x86/numaq/ipi.h
+++ struct-cpumasks/include/asm-x86/numaq/ipi.h
@@ -1,25 +1,22 @@
 #ifndef __ASM_NUMAQ_IPI_H
 #define __ASM_NUMAQ_IPI_H
 
-void send_IPI_mask_sequence(cpumask_t, int vector);
+void send_IPI_mask_sequence(const cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);
 
-static inline void send_IPI_mask(cpumask_t mask, int vector)
+static inline void send_IPI_mask(const cpumask_t *mask, int vector)
 {
 	send_IPI_mask_sequence(mask, vector);
 }
 
 static inline void send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-	cpu_clear(smp_processor_id(), mask);
-
-	if (!cpus_empty(mask))
-		send_IPI_mask(mask, vector);
+	send_IPI_mask_allbutself(&cpu_online_map, vector);
 }
 
 static inline void send_IPI_all(int vector)
 {
-	send_IPI_mask(cpu_online_map, vector);
+	send_IPI_mask(&cpu_online_map, vector);
 }
 
 #endif /* __ASM_NUMAQ_IPI_H */
--- struct-cpumasks.orig/include/asm-x86/summit/apic.h
+++ struct-cpumasks/include/asm-x86/summit/apic.h
@@ -137,14 +137,14 @@ static inline void enable_apic_mode(void
 {
 }
 
-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
 	int cpu;
 	int apicid;
 
-	num_bits_set = cpus_weight(cpumask);
+	num_bits_set = cpus_weight(*cpumask);
 	/* Return id to all */
 	if (num_bits_set == NR_CPUS)
 		return (int) 0xFF;
@@ -152,10 +152,10 @@ static inline unsigned int cpu_mask_to_a
 	 * The cpus in the mask must all be on the apic cluster.  If are not
 	 * on the same apicid cluster return default value of TARGET_CPUS.
 	 */
-	cpu = first_cpu(cpumask);
+	cpu = first_cpu(*cpumask);
 	apicid = cpu_to_logical_apicid(cpu);
 	while (cpus_found < num_bits_set) {
-		if (cpu_isset(cpu, cpumask)) {
+		if (cpu_isset(cpu, *cpumask)) {
 			int new_apicid = cpu_to_logical_apicid(cpu);
 			if (apicid_cluster(apicid) !=
 					apicid_cluster(new_apicid)){
--- struct-cpumasks.orig/include/asm-x86/summit/ipi.h
+++ struct-cpumasks/include/asm-x86/summit/ipi.h
@@ -1,25 +1,22 @@
 #ifndef __ASM_SUMMIT_IPI_H
 #define __ASM_SUMMIT_IPI_H
 
-void send_IPI_mask_sequence(cpumask_t mask, int vector);
+void send_IPI_mask_sequence(const cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);
 
-static inline void send_IPI_mask(cpumask_t mask, int vector)
+static inline void send_IPI_mask(const cpumask_t *mask, int vector)
 {
 	send_IPI_mask_sequence(mask, vector);
 }
 
 static inline void send_IPI_allbutself(int vector)
 {
-	cpumask_t mask = cpu_online_map;
-	cpu_clear(smp_processor_id(), mask);
-
-	if (!cpus_empty(mask))
-		send_IPI_mask(mask, vector);
+	send_IPI_mask_allbutself(&cpu_online_map, vector);
 }
 
 static inline void send_IPI_all(int vector)
 {
-	send_IPI_mask(cpu_online_map, vector);
+	send_IPI_mask(&cpu_online_map, vector);
 }
 
 #endif /* __ASM_SUMMIT_IPI_H */

-- 


* [PATCH 03/31] cpumask: remove min from first_cpu/next_cpu
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
  2008-09-29 18:02 ` [PATCH 01/31] cpumask: Documentation Mike Travis
  2008-09-29 18:02 ` [PATCH 02/31] cpumask: modify send_IPI_mask interface to accept cpumask_t pointers Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:02 ` [PATCH 04/31] cpumask: move cpu_alloc to separate file Mike Travis
                   ` (27 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: remove-min --]
[-- Type: text/plain, Size: 1037 bytes --]

The min_t() clamp seems like it has been here forever, but I can't see
why: find_first_bit() and find_next_bit() already return a value
>= NR_CPUS on failure.
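
Callers already range-check the result against nr_cpu_ids (or NR_CPUS)
rather than relying on the clamp; e.g. the common pattern seen in the
genapic code earlier in this series:

	cpu = first_cpu(*cpumask);
	if ((unsigned)cpu < nr_cpu_ids)
		return per_cpu(x86_cpu_to_apicid, cpu);
	/* else: no cpu was set in the mask */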

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>

---
 lib/cpumask.c |    7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

--- masklongs.orig/lib/cpumask.c
+++ masklongs/lib/cpumask.c
@@ -5,21 +5,20 @@
 
 int __first_cpu(const cpumask_t *srcp)
 {
-	return min_t(int, NR_CPUS, find_first_bit(srcp->bits, NR_CPUS));
+	return find_first_bit(srcp->bits, NR_CPUS);
 }
 EXPORT_SYMBOL(__first_cpu);
 
 int __next_cpu(int n, const cpumask_t *srcp)
 {
-	return min_t(int, NR_CPUS, find_next_bit(srcp->bits, NR_CPUS, n+1));
+	return find_next_bit(srcp->bits, NR_CPUS, n+1);
 }
 EXPORT_SYMBOL(__next_cpu);
 
 #if NR_CPUS > 64
 int __next_cpu_nr(int n, const cpumask_t *srcp)
 {
-	return min_t(int, nr_cpu_ids,
-				find_next_bit(srcp->bits, nr_cpu_ids, n+1));
+	return find_next_bit(srcp->bits, nr_cpu_ids, n+1);
 }
 EXPORT_SYMBOL(__next_cpu_nr);
 #endif

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 04/31] cpumask: move cpu_alloc to separate file
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (2 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 03/31] cpumask: remove min from first_cpu/next_cpu Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:02 ` [PATCH 05/31] cpumask: Provide new cpumask API Mike Travis
                   ` (26 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: mv-cpumask-alloc --]
[-- Type: text/plain, Size: 5181 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 include/linux/cpumask.h       |   40 -------------
 include/linux/cpumask_alloc.h |  127 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 127 insertions(+), 40 deletions(-)

--- struct-cpumasks.orig/include/linux/cpumask.h
+++ struct-cpumasks/include/linux/cpumask.h
@@ -67,37 +67,6 @@
  * CPU_MASK_NONE			Initializer - no bits set
  * unsigned long *cpus_addr(mask)	Array of unsigned long's in mask
  *
- * CPUMASK_ALLOC kmalloc's a structure that is a composite of many cpumask_t
- * variables, and CPUMASK_PTR provides pointers to each field.
- *
- * The structure should be defined something like this:
- * struct my_cpumasks {
- *	cpumask_t mask1;
- *	cpumask_t mask2;
- * };
- *
- * Usage is then:
- *	CPUMASK_ALLOC(my_cpumasks);
- *	CPUMASK_PTR(mask1, my_cpumasks);
- *	CPUMASK_PTR(mask2, my_cpumasks);
- *
- *	--- DO NOT reference cpumask_t pointers until this check ---
- *	if (my_cpumasks == NULL)
- *		"kmalloc failed"...
- *
- * References are now pointers to the cpumask_t variables (*mask1, ...)
- *
- *if NR_CPUS > BITS_PER_LONG
- *   CPUMASK_ALLOC(m)			Declares and allocates struct m *m =
- *						kmalloc(sizeof(*m), GFP_KERNEL)
- *   CPUMASK_FREE(m)			Macro for kfree(m)
- *else
- *   CPUMASK_ALLOC(m)			Declares struct m _m, *m = &_m
- *   CPUMASK_FREE(m)			Nop
- *endif
- *   CPUMASK_PTR(v, m)			Declares cpumask_t *v = &(m->v)
- * ------------------------------------------------------------------------
- *
  * int cpumask_scnprintf(buf, len, mask) Format cpumask for printing
  * int cpumask_parse_user(ubuf, ulen, mask)	Parse ascii string as cpumask
  * int cpulist_scnprintf(buf, len, mask) Format cpumask as list for printing
@@ -327,15 +296,6 @@ extern cpumask_t cpu_mask_all;
 
 #define cpus_addr(src) ((src).bits)
 
-#if NR_CPUS > BITS_PER_LONG
-#define	CPUMASK_ALLOC(m)	struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
-#define	CPUMASK_FREE(m)		kfree(m)
-#else
-#define	CPUMASK_ALLOC(m)	struct m _m, *m = &_m
-#define	CPUMASK_FREE(m)
-#endif
-#define	CPUMASK_PTR(v, m) 	cpumask_t *v = &(m->v)
-
 #define cpumask_scnprintf(buf, len, src) \
 			__cpumask_scnprintf((buf), (len), &(src), NR_CPUS)
 static inline int __cpumask_scnprintf(char *buf, int len,
--- /dev/null
+++ struct-cpumasks/include/linux/cpumask_alloc.h
@@ -0,0 +1,127 @@
+#ifndef __LINUX_CPUMASK_ALLOC_H
+#define __LINUX_CPUMASK_ALLOC_H
+
+/*
+ * Simple alloc/free of cpumask structs
+ */
+
+#if NR_CPUS > BITS_PER_LONG
+
+#include <linux/slab.h>
+
+/* Allocates a cpumask large enough to contain nr_cpu_ids cpus */
+static inline int cpumask_alloc(cpumask_t *m)
+{
+	cpumask_t d = kmalloc(CPUMASK_SIZE, GFP_KERNEL);
+
+	*m = d;
+	return (d != NULL);
+}
+
+static inline void cpumask_free_(cpumask_t *m)
+{
+	kfree(*m);
+}
+
+#else
+static inline cpumask_val cpumask_alloc(cpumask_t *m)
+{
+}
+
+static inline void cpumask_free(cpumask_t *m)
+{
+}
+
+#endif
+
+/*
+ * Manage a pool of percpu temp cpumask's
+ */
+struct cpumask_pool_s {
+	unsigned long	length;
+	unsigned long	allocated;
+	cpumask_fixed	pool;
+};
+
+#if 0
+#define DEFINE_PER_CPUMASK_POOL(name, size)		\
+	struct __pool_##name##_s {			\
+		unsigned long	allocated;		\
+		cpumask_data	pool[size];		\
+	}						\
+	DEFINE_PER_CPU(struct __pool_##name##_s, name)
+#else
+#define DEFINE_PER_CPUMASK_POOL(name, size)		\
+	DEFINE_PER_CPU(					\
+		struct {				\
+			unsigned long	length;		\
+			unsigned long	allocated;	\
+			cpumask_data	pool[size];	\
+		}, name ) = { .length = size, }
+#endif
+
+#define	cpumask_pool_get(m, p)	__cpumask_pool_get(m, &__get_cpu_var(p))
+#define	cpumask_pool_put(m, p)	__cpumask_pool_put(m, &__get_cpu_var(p));
+
+static inline int __cpumask_pool_get(cpumask_t *m,
+				     struct cpumask_pool_s *p)
+{
+	int n;
+
+	preempt_disable();
+	while ((n = find_first_bit(&p->allocated, p->length)) < p->length &&
+		!test_and_set_bit(n, &p->allocated)) {
+
+		*m = &(p->pool[n]);
+		preempt_enable();
+		return 1;
+	}
+	preempt_enable();
+	return 0;
+}
+
+static inline void __cpumask_pool_put(cpumask_t *m,
+				      struct cpumask_pool_s *p)
+{
+	int n = *m - p->pool;
+
+	BUG_ON(n >= p->length);
+
+	preempt_disable();
+	clear_bit(n, &p->allocated);
+	preempt_enable();
+}
+
+/*
+ * CPUMASK_ALLOC kmalloc's a structure that is a composite of many cpumask_t
+ * variables, and CPUMASK_PTR provides pointers to each field.
+ *
+ * The structure should be defined something like this:
+ * struct my_cpumasks {
+ *	cpumask_fixed mask1;
+ *	cpumask_fixed mask2;
+ * };
+ *
+ * Usage is then:
+ *	CPUMASK_ALLOC(my_cpumasks);
+ *	CPUMASK_PTR(mask1, my_cpumasks);
+ *	CPUMASK_PTR(mask2, my_cpumasks);
+ *
+ *	--- DO NOT reference cpumask_t pointers until this check ---
+ *	if (my_cpumasks == NULL)
+ *		"kmalloc failed"...
+ *
+ * References are now cpumask_var variables (*mask1, ...)
+ */
+
+#if NR_CPUS > BITS_PER_LONG
+#define	CPUMASK_ALLOC(m)	struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
+#define	CPUMASK_FREE(m)		kfree(m)
+#define	CPUMASK_PTR(v, m) 	cpumask_var v = (m->v)
+#else
+#define	CPUMASK_ALLOC(m)	struct m m
+#define	CPUMASK_FREE(m)
+#define	CPUMASK_PTR(v, m) 	cpumask_var v
+#endif
+
+#endif /* __LINUX_CPUMASK_ALLOC_H */

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 05/31] cpumask: Provide new cpumask API
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (3 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 04/31] cpumask: move cpu_alloc to separate file Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-30  9:11   ` Ingo Molnar
  2008-09-29 18:02 ` [PATCH 06/31] cpumask: new lib/cpumask.c Mike Travis
                   ` (25 subsequent siblings)
  30 siblings, 1 reply; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: cpumask-base --]
[-- Type: text/plain, Size: 29257 bytes --]

Provide the new cpumask API.  The central change is that cpumask_t
becomes an opaque object (a pointer to the underlying cpumask data).  I
believe this results in the minimum amount of editing while still
allowing the inline cpumask functions, and the ability to declare static
cpumask objects.


    /* raw declaration */
    struct __cpumask_data_s { DECLARE_BITMAP(bits, NR_CPUS); };

    /* cpumask_map_t used for declaring static cpumask maps */
    typedef struct __cpumask_data_s cpumask_map_t[1];

    /* cpumask_t used for function args and return pointers */
    typedef struct __cpumask_data_s *cpumask_t;
    typedef const struct __cpumask_data_s *const_cpumask_t;

    /* cpumask_var_t used for local variable, definition follows */
    typedef struct __cpumask_data_s	cpumask_var_t[1]; /* SMALL NR_CPUS */
    typedef struct __cpumask_data_s	*cpumask_var_t;	  /* LARGE NR_CPUS */

    /* replaces cpumask_t dst = (cpumask_t)src */
    void cpus_copy(cpumask_t dst, const cpumask_t src);

Remove the '*' indirection in all references to cpumask_t objects.  You can
change which cpumask object a reference points to, but not the cpumask object
itself, except through the functions that operate on cpumask objects (e.g.
the cpu_* operators).  Functions can return a cpumask_t (which is a pointer
to the cpumask object) and can only be passed a cpumask_t.

All uses of cpumask_t on the stack are changed to be cpumask_var_t except
for pointers to static cpumask objects.  Allocation of local (temp) cpumask
objects will follow...

All cpumask operators now operate using nr_cpu_ids instead of NR_CPUS, so
the separate "_nr" variants of the cpumask operators (which existed only to
use nr_cpu_ids instead of NR_CPUS) are deleted.

All variants of functions which take a pointer to the old cpumask_t are
deleted (e.g. set_cpus_allowed_ptr()).
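
As a rough illustration (not taken from the tree -- the function and its
kick_one() helper are made up for the example), a typical conversion
under the new API looks like:

    /* old API: the cpumask is passed and copied by value */
    void kick_cpus(cpumask_t mask)
    {
            int cpu;

            for_each_cpu_mask_nr(cpu, mask)
                    if (cpu_online(cpu))
                            kick_one(cpu);  /* kick_one() is a placeholder */
    }

    /* new API: cpumask_t is a pointer, so no struct copy is made */
    void kick_cpus(const_cpumask_t mask)
    {
            int cpu;

            for_each_cpu_in(cpu, mask, cpu_online_map)
                    kick_one(cpu);
    }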

Based on code from Rusty Russell <rusty@rustcorp.com.au> (THANKS!!)

Signed-off-by: Mike Travis <travis@sgi.com>

---
 include/linux/cpumask.h       |  445 +++++++++++++++++++++++-------------------
 include/linux/cpumask_alloc.h |   55 ++---
 2 files changed, 280 insertions(+), 220 deletions(-)

--- struct-cpumasks.orig/include/linux/cpumask.h
+++ struct-cpumasks/include/linux/cpumask.h
@@ -3,7 +3,8 @@
 
 /*
  * Cpumasks provide a bitmap suitable for representing the
- * set of CPU's in a system, one bit position per CPU number.
+ * set of CPU's in a system, one bit position per CPU number up to
+ * nr_cpu_ids (<= NR_CPUS).
  *
  * See detailed comments in the file linux/bitmap.h describing the
  * data type on which these cpumasks are based.
@@ -18,18 +19,6 @@
  * For details of cpus_fold(), see bitmap_fold in lib/bitmap.c.
  *
  * . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
- * Note: The alternate operations with the suffix "_nr" are used
- *       to limit the range of the loop to nr_cpu_ids instead of
- *       NR_CPUS when NR_CPUS > 64 for performance reasons.
- *       If NR_CPUS is <= 64 then most assembler bitmask
- *       operators execute faster with a constant range, so
- *       the operator will continue to use NR_CPUS.
- *
- *       Another consideration is that nr_cpu_ids is initialized
- *       to NR_CPUS and isn't lowered until the possible cpus are
- *       discovered (including any disabled cpus).  So early uses
- *       will span the entire range of NR_CPUS.
- * . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
  *
  * The available cpumask operations are:
  *
@@ -37,6 +26,7 @@
  * void cpu_clear(cpu, mask)		turn off bit 'cpu' in mask
  * void cpus_setall(mask)		set all bits
  * void cpus_clear(mask)		clear all bits
+ * void cpus_copy(dst, src)		copies cpumask bits from src to dst
  * int cpu_isset(cpu, mask)		true iff bit 'cpu' set in mask
  * int cpu_test_and_set(cpu, mask)	test and set bit 'cpu' in mask
  *
@@ -52,19 +42,21 @@
  * int cpus_empty(mask)			Is mask empty (no bits sets)?
  * int cpus_full(mask)			Is mask full (all bits sets)?
  * int cpus_weight(mask)		Hamming weigh - number of set bits
- * int cpus_weight_nr(mask)		Same using nr_cpu_ids instead of NR_CPUS
  *
  * void cpus_shift_right(dst, src, n)	Shift right
  * void cpus_shift_left(dst, src, n)	Shift left
  *
- * int first_cpu(mask)			Number lowest set bit, or NR_CPUS
- * int next_cpu(cpu, mask)		Next cpu past 'cpu', or NR_CPUS
- * int next_cpu_nr(cpu, mask)		Next cpu past 'cpu', or nr_cpu_ids
- *
- * cpumask_t cpumask_of_cpu(cpu)	Return cpumask with bit 'cpu' set
- *					(can be used as an lvalue)
- * CPU_MASK_ALL				Initializer - all bits set
- * CPU_MASK_NONE			Initializer - no bits set
+ * int cpus_first(mask)			Number lowest set bit, or nr_cpu_ids
+ * int cpus_next(cpu, mask)		Next cpu past 'cpu', or nr_cpu_ids
+ * int cpus_next_in(cpu, mask, andmask)	Next cpu in mask & andmask or nr_cpu_ids
+ *
+ * cpumask_t cpumask_of_cpu(cpu)	Return pointer to cpumask with bit
+ *					'cpu' set
+ *
+ * cpu_mask_all				cpumask_map_t of all bits set
+ * CPU_MASK_ALL				Initializer only - all bits set
+ * CPU_MASK_NONE			Initializer only - no bits set
+ * CPU_MASK_CPU0			Initializer only - cpu 0 bit set
  * unsigned long *cpus_addr(mask)	Array of unsigned long's in mask
  *
  * int cpumask_scnprintf(buf, len, mask) Format cpumask for printing
@@ -76,8 +68,8 @@
  * void cpus_onto(dst, orig, relmap)	*dst = orig relative to relmap
  * void cpus_fold(dst, orig, sz)	dst bits = orig bits mod sz
  *
- * for_each_cpu_mask(cpu, mask)		for-loop cpu over mask using NR_CPUS
- * for_each_cpu_mask_nr(cpu, mask)	for-loop cpu over mask using nr_cpu_ids
+ * for_each_cpu(cpu, mask)		for-loop cpu over mask
+ * for_each_cpu_in(cpu, mask, andmask)	for-loop cpu over mask & andmask
  *
  * int num_online_cpus()		Number of online CPUs
  * int num_possible_cpus()		Number of all possible CPUs
@@ -87,6 +79,7 @@
  * int cpu_possible(cpu)		Is some cpu possible?
  * int cpu_present(cpu)			Is some cpu present (can schedule)?
  *
+ * int any_cpu_in(mask, andmask)	First cpu in mask & andmask
  * int any_online_cpu(mask)		First online cpu in mask
  *
  * for_each_possible_cpu(cpu)		for-loop cpu over cpu_possible_map
@@ -107,129 +100,259 @@
 #include <linux/threads.h>
 #include <linux/bitmap.h>
 
-typedef struct { DECLARE_BITMAP(bits, NR_CPUS); } cpumask_t;
-extern cpumask_t _unused_cpumask_arg_;
+/* raw declaration */
+struct __cpumask_data_s { DECLARE_BITMAP(bits, NR_CPUS); };
+
+/* cpumask_map_t used for declaring static cpumask maps */
+typedef struct __cpumask_data_s cpumask_map_t[1];
+
+/* cpumask_t used for function args and return pointers */
+typedef struct __cpumask_data_s *cpumask_t;
+typedef const struct __cpumask_data_s *const_cpumask_t;
+
+/* cpumask_var_t used for local variable, definition follows */
+
+#if NR_CPUS == 1
+
+/* cpumask_var_t used for local variable */
+typedef struct __cpumask_data_s	cpumask_var_t[1];
+
+#define nr_cpu_ids			1
+#define cpus_first(src)			({ (void)(src); 0; })
+#define cpus_next(n, src)		({ (void)(src); 1; })
+#define cpus_next_in(n, src, andsrc)	({ (void)(src); 1; })
+#define any_online_cpu(mask)		0
+#define for_each_cpu(cpu, mask)	\
+	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
+#define for_each_cpu_in(cpu, mask, andmask) \
+	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask, (void)andmask)
+
+#define num_online_cpus()		1
+#define num_possible_cpus()		1
+#define num_present_cpus()		1
+#define cpu_online(cpu)			((cpu) == 0)
+#define cpu_possible(cpu)		((cpu) == 0)
+#define cpu_present(cpu)		((cpu) == 0)
+#define cpu_active(cpu)			((cpu) == 0)
+
+#else /* ... NR_CPUS > 1 */
+
+#ifndef CONFIG_CPUMASKS_OFFSTACK
+
+/* Constant is usually more efficient than a variable for small NR_CPUS */
+#define nr_cpu_ids		NR_CPUS
+
+/* cpumask_var_t used for local variable */
+typedef struct __cpumask_data_s	cpumask_var_t[1];
+static inline int cpumask_size(void)
+{
+	return sizeof(struct __cpumask_data_s);
+}
+
+#else
+
+/* Starts at NR_CPUS until acpi code discovers actual number. */
+extern int nr_cpu_ids;
 
-#define cpu_set(cpu, dst) __cpu_set((cpu), &(dst))
-static inline void __cpu_set(int cpu, volatile cpumask_t *dstp)
+/* cpumask_var_t used for local variable */
+typedef struct __cpumask_data_s	*cpumask_var_t;
+static inline int cpumask_size(void)
+{
+	return BITS_TO_LONGS(nr_cpu_ids) * sizeof(long);
+}
+
+#endif /* CONFIG_CPUMASKS_OFFSTACK */
+
+extern int cpus_first(const_cpumask_t srcp);
+extern int cpus_next(int n, const_cpumask_t srcp);
+extern int cpus_next_in(int n, const_cpumask_t srcp, const_cpumask_t andsrc);
+extern int any_cpu_in(const_cpumask_t mask, const_cpumask_t andmask);
+
+#define any_online_cpu(mask)	any_cpu_in((const_cpumask_t)(mask), \
+					   (const_cpumask_t)cpu_online_map)
+
+#define for_each_cpu(cpu, mask)				\
+	for ((cpu) = -1;				\
+		(cpu) = cpus_next((cpu), (mask)),	\
+		(cpu) < nr_cpu_ids; )
+
+#define for_each_cpu_in(cpu, mask, andmask)			\
+	for ((cpu) = -1;					\
+		(cpu) = cpus_next_in((cpu), (mask), (andmask)),	\
+		(cpu) < nr_cpu_ids; )
+
+
+#define num_online_cpus()	cpus_weight(cpu_online_map)
+#define num_possible_cpus()	cpus_weight(cpu_possible_map)
+#define num_present_cpus()	cpus_weight(cpu_present_map)
+#define cpu_online(cpu)		cpu_isset((cpu), cpu_online_map)
+#define cpu_possible(cpu)	cpu_isset((cpu), cpu_possible_map)
+#define cpu_present(cpu)	cpu_isset((cpu), cpu_present_map)
+#define cpu_active(cpu)		cpu_isset((cpu), cpu_active_map)
+
+/* Deprecated: use for_each_cpu() */
+#define for_each_cpu_mask(cpu, mask)	\
+	for_each_cpu(cpu, mask)
+/*
+ * XXX - how to add this to the above macro?
+ * #warning "for_each_cpu_mask is deprecated, use for_each_cpu"
+ */
+
+/* Deprecated: use cpus_first() */
+static inline int __deprecated first_cpu(const_cpumask_t srcp)
+{
+	return cpus_first(srcp);
+}
+
+/* Deprecated: use cpus_next() */
+static inline int __deprecated next_cpu(int n, const_cpumask_t srcp)
+{
+	return cpus_next(n, srcp);
+}
+#endif /* NR_CPUS > 1 */
+
+#define cpu_set(cpu, dst) __cpu_set((cpu), (dst))
+static inline void __cpu_set(int cpu, volatile cpumask_t dstp)
 {
 	set_bit(cpu, dstp->bits);
 }
 
-#define cpu_clear(cpu, dst) __cpu_clear((cpu), &(dst))
-static inline void __cpu_clear(int cpu, volatile cpumask_t *dstp)
+#define cpu_clear(cpu, dst) __cpu_clear((cpu), (dst))
+static inline void __cpu_clear(int cpu, volatile cpumask_t dstp)
 {
 	clear_bit(cpu, dstp->bits);
 }
 
-#define cpus_setall(dst) __cpus_setall(&(dst), NR_CPUS)
-static inline void __cpus_setall(cpumask_t *dstp, int nbits)
+#define cpus_setall(dst) __cpus_setall((dst), nr_cpu_ids)
+static inline void __cpus_setall(cpumask_t dstp, int nbits)
 {
 	bitmap_fill(dstp->bits, nbits);
 }
 
-#define cpus_clear(dst) __cpus_clear(&(dst), NR_CPUS)
-static inline void __cpus_clear(cpumask_t *dstp, int nbits)
+#define cpus_clear(dst) __cpus_clear((dst), nr_cpu_ids)
+static inline void __cpus_clear(cpumask_t dstp, int nbits)
 {
 	bitmap_zero(dstp->bits, nbits);
 }
 
+#define cpus_copy(dst, src) \
+	__cpus_copy((dst), (const_cpumask_t)(src), nr_cpu_ids)
+static inline void __cpus_copy(cpumask_t dstp, const_cpumask_t srcp, int nbits)
+{
+	bitmap_copy(dstp->bits, srcp->bits, nbits);
+}
+
 /* No static inline type checking - see Subtlety (1) above. */
-#define cpu_isset(cpu, cpumask) test_bit((cpu), (cpumask).bits)
+#define cpu_isset(cpu, cpumask) test_bit((cpu), (cpumask)->bits)
 
-#define cpu_test_and_set(cpu, cpumask) __cpu_test_and_set((cpu), &(cpumask))
-static inline int __cpu_test_and_set(int cpu, cpumask_t *addr)
+#define cpu_test_and_set(cpu, cpumask) __cpu_test_and_set((cpu), (cpumask))
+static inline int __cpu_test_and_set(int cpu, cpumask_t addr)
 {
 	return test_and_set_bit(cpu, addr->bits);
 }
 
-#define cpus_and(dst, src1, src2) __cpus_and(&(dst), &(src1), &(src2), NR_CPUS)
-static inline void __cpus_and(cpumask_t *dstp, const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_and(dst, src1, src2)			\
+	__cpus_and((dst), (const_cpumask_t)(src1),	\
+		          (const_cpumask_t)(src2), nr_cpu_ids)
+static inline void __cpus_and(cpumask_t dstp, const_cpumask_t src1p,
+					      const_cpumask_t src2p, int nbits)
 {
 	bitmap_and(dstp->bits, src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_or(dst, src1, src2) __cpus_or(&(dst), &(src1), &(src2), NR_CPUS)
-static inline void __cpus_or(cpumask_t *dstp, const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_or(dst, src1, src2) 			\
+	__cpus_or((dst), (const_cpumask_t)(src1),	\
+			 (const_cpumask_t)(src2), nr_cpu_ids)
+static inline void __cpus_or(cpumask_t dstp, const_cpumask_t src1p,
+					     const_cpumask_t src2p, int nbits)
 {
 	bitmap_or(dstp->bits, src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_xor(dst, src1, src2) __cpus_xor(&(dst), &(src1), &(src2), NR_CPUS)
-static inline void __cpus_xor(cpumask_t *dstp, const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_xor(dst, src1, src2)			\
+	__cpus_xor((dst), (const_cpumask_t)(src1),	\
+			  (const_cpumask_t)(src2), nr_cpu_ids)
+static inline void __cpus_xor(cpumask_t dstp, const_cpumask_t src1p,
+					      const_cpumask_t src2p, int nbits)
 {
 	bitmap_xor(dstp->bits, src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_andnot(dst, src1, src2) \
-				__cpus_andnot(&(dst), &(src1), &(src2), NR_CPUS)
-static inline void __cpus_andnot(cpumask_t *dstp, const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_andnot(dst, src1, src2)			\
+	__cpus_andnot((dst), (const_cpumask_t)(src1),	\
+			     (const_cpumask_t)(src2), nr_cpu_ids)
+static inline void __cpus_andnot(cpumask_t dstp, const_cpumask_t src1p,
+					const_cpumask_t src2p, int nbits)
 {
 	bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_complement(dst, src) __cpus_complement(&(dst), &(src), NR_CPUS)
-static inline void __cpus_complement(cpumask_t *dstp,
-					const cpumask_t *srcp, int nbits)
+#define cpus_complement(dst, src)			\
+	__cpus_complement((dst), (const_cpumask_t)(src), nr_cpu_ids)
+static inline void __cpus_complement(cpumask_t dstp,
+					const_cpumask_t srcp, int nbits)
 {
 	bitmap_complement(dstp->bits, srcp->bits, nbits);
 }
 
-#define cpus_equal(src1, src2) __cpus_equal(&(src1), &(src2), NR_CPUS)
-static inline int __cpus_equal(const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_equal(src1, src2)				\
+	__cpus_equal((const_cpumask_t)(src1),		\
+		     (const_cpumask_t)(src2), nr_cpu_ids)
+static inline int __cpus_equal(const_cpumask_t src1p,
+					const_cpumask_t src2p, int nbits)
 {
 	return bitmap_equal(src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_intersects(src1, src2) __cpus_intersects(&(src1), &(src2), NR_CPUS)
-static inline int __cpus_intersects(const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_intersects(src1, src2)			\
+	__cpus_intersects((const_cpumask_t)(src1),	\
+			  (const_cpumask_t)(src2), nr_cpu_ids)
+static inline int __cpus_intersects(const_cpumask_t src1p,
+					const_cpumask_t src2p, int nbits)
 {
 	return bitmap_intersects(src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_subset(src1, src2) __cpus_subset(&(src1), &(src2), NR_CPUS)
-static inline int __cpus_subset(const cpumask_t *src1p,
-					const cpumask_t *src2p, int nbits)
+#define cpus_subset(src1, src2)				\
+	__cpus_subset((const_cpumask_t)(src1),		\
+		      (const_cpumask_t)(src2), nr_cpu_ids)
+static inline int __cpus_subset(const_cpumask_t src1p,
+					const_cpumask_t src2p, int nbits)
 {
 	return bitmap_subset(src1p->bits, src2p->bits, nbits);
 }
 
-#define cpus_empty(src) __cpus_empty(&(src), NR_CPUS)
-static inline int __cpus_empty(const cpumask_t *srcp, int nbits)
+#define cpus_empty(src) __cpus_empty((const_cpumask_t)(src), nr_cpu_ids)
+static inline int __cpus_empty(const_cpumask_t srcp, int nbits)
 {
 	return bitmap_empty(srcp->bits, nbits);
 }
 
-#define cpus_full(cpumask) __cpus_full(&(cpumask), NR_CPUS)
-static inline int __cpus_full(const cpumask_t *srcp, int nbits)
+#define cpus_full(cpumask) __cpus_full((const_cpumask_t)(cpumask), nr_cpu_ids)
+static inline int __cpus_full(const_cpumask_t srcp, int nbits)
 {
 	return bitmap_full(srcp->bits, nbits);
 }
 
-#define cpus_weight(cpumask) __cpus_weight(&(cpumask), NR_CPUS)
-static inline int __cpus_weight(const cpumask_t *srcp, int nbits)
+#define cpus_weight(cpumask)				\
+	__cpus_weight((const_cpumask_t)(cpumask), nr_cpu_ids)
+static inline int __cpus_weight(const_cpumask_t srcp, int nbits)
 {
 	return bitmap_weight(srcp->bits, nbits);
 }
 
-#define cpus_shift_right(dst, src, n) \
-			__cpus_shift_right(&(dst), &(src), (n), NR_CPUS)
-static inline void __cpus_shift_right(cpumask_t *dstp,
-					const cpumask_t *srcp, int n, int nbits)
+#define cpus_shift_right(dst, src, n)			\
+	__cpus_shift_right((dst), (const_cpumask_t)(src), (n), nr_cpu_ids)
+static inline void __cpus_shift_right(cpumask_t dstp,
+					const_cpumask_t srcp, int n, int nbits)
 {
 	bitmap_shift_right(dstp->bits, srcp->bits, n, nbits);
 }
 
-#define cpus_shift_left(dst, src, n) \
-			__cpus_shift_left(&(dst), &(src), (n), NR_CPUS)
-static inline void __cpus_shift_left(cpumask_t *dstp,
-					const cpumask_t *srcp, int n, int nbits)
+#define cpus_shift_left(dst, src, n)			\
+	__cpus_shift_left((dst), (const_cpumask_t)(src), (n), nr_cpu_ids)
+static inline void __cpus_shift_left(cpumask_t dstp,
+					const_cpumask_t srcp, int n, int nbits)
 {
 	bitmap_shift_left(dstp->bits, srcp->bits, n, nbits);
 }
@@ -244,11 +367,11 @@ static inline void __cpus_shift_left(cpu
 extern const unsigned long
 	cpu_bit_bitmap[BITS_PER_LONG+1][BITS_TO_LONGS(NR_CPUS)];
 
-static inline const cpumask_t *get_cpu_mask(unsigned int cpu)
+static inline const_cpumask_t get_cpu_mask(unsigned int cpu)
 {
 	const unsigned long *p = cpu_bit_bitmap[1 + cpu % BITS_PER_LONG];
 	p -= cpu / BITS_PER_LONG;
-	return (const cpumask_t *)p;
+	return (const_cpumask_t)p;
 }
 
 /*
@@ -256,151 +379,98 @@ static inline const cpumask_t *get_cpu_m
  * gcc optimizes it out (it's a constant) and there's no huge stack
  * variable created:
  */
-#define cpumask_of_cpu(cpu) (*get_cpu_mask(cpu))
-
+#define cpumask_of_cpu(cpu) ((const_cpumask_t)get_cpu_mask(cpu))
 
 #define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(NR_CPUS)
+#define	CPU_MASK_INIT(value) { [0] = { {				\
+	[0 ... BITS_TO_LONGS(NR_CPUS)-1] =  value			\
+} } }
 
 #if NR_CPUS <= BITS_PER_LONG
 
-#define CPU_MASK_ALL							\
-(cpumask_t) { {								\
-	[BITS_TO_LONGS(NR_CPUS)-1] = CPU_MASK_LAST_WORD			\
-} }
-
-#define CPU_MASK_ALL_PTR	(&CPU_MASK_ALL)
+/* Initializer only, use cpu_mask_all in function arguments */
+#define CPU_MASK_ALL CPU_MASK_INIT(CPU_MASK_LAST_WORD)
 
 #else
 
-#define CPU_MASK_ALL							\
-(cpumask_t) { {								\
+/* Initializer only, use cpu_mask_all in function arguments */
+#define CPU_MASK_ALL { [0] = { {					\
 	[0 ... BITS_TO_LONGS(NR_CPUS)-2] = ~0UL,			\
 	[BITS_TO_LONGS(NR_CPUS)-1] = CPU_MASK_LAST_WORD			\
-} }
-
-/* cpu_mask_all is in init/main.c */
-extern cpumask_t cpu_mask_all;
-#define CPU_MASK_ALL_PTR	(&cpu_mask_all)
+} } }
 
 #endif
 
-#define CPU_MASK_NONE							\
-(cpumask_t) { {								\
-	[0 ... BITS_TO_LONGS(NR_CPUS)-1] =  0UL				\
-} }
-
-#define CPU_MASK_CPU0							\
-(cpumask_t) { {								\
-	[0] =  1UL							\
-} }
+/* Initializers only */
+#define CPU_MASK_NONE CPU_MASK_INIT(0UL)
+#define CPU_MASK_CPU0 CPU_MASK_INIT(1UL)
 
-#define cpus_addr(src) ((src).bits)
+#define cpus_addr(src) ((unsigned long *)((src)->bits))
 
-#define cpumask_scnprintf(buf, len, src) \
-			__cpumask_scnprintf((buf), (len), &(src), NR_CPUS)
+#define cpumask_scnprintf(buf, len, src)		\
+	__cpumask_scnprintf((buf), (len), (const_cpumask_t)(src), nr_cpu_ids)
 static inline int __cpumask_scnprintf(char *buf, int len,
-					const cpumask_t *srcp, int nbits)
+					const_cpumask_t srcp, int nbits)
 {
 	return bitmap_scnprintf(buf, len, srcp->bits, nbits);
 }
 
-#define cpumask_parse_user(ubuf, ulen, dst) \
-			__cpumask_parse_user((ubuf), (ulen), &(dst), NR_CPUS)
+#define cpumask_parse_user(ubuf, ulen, dst)		\
+	__cpumask_parse_user((ubuf), (ulen), (dst), nr_cpu_ids)
 static inline int __cpumask_parse_user(const char __user *buf, int len,
-					cpumask_t *dstp, int nbits)
+					cpumask_t dstp, int nbits)
 {
 	return bitmap_parse_user(buf, len, dstp->bits, nbits);
 }
 
-#define cpulist_scnprintf(buf, len, src) \
-			__cpulist_scnprintf((buf), (len), &(src), NR_CPUS)
+#define cpulist_scnprintf(buf, len, src)		\
+	__cpulist_scnprintf((buf), (len), (const_cpumask_t)(src), nr_cpu_ids)
 static inline int __cpulist_scnprintf(char *buf, int len,
-					const cpumask_t *srcp, int nbits)
+					const_cpumask_t srcp, int nbits)
 {
 	return bitmap_scnlistprintf(buf, len, srcp->bits, nbits);
 }
 
-#define cpulist_parse(buf, dst) __cpulist_parse((buf), &(dst), NR_CPUS)
-static inline int __cpulist_parse(const char *buf, cpumask_t *dstp, int nbits)
+#define cpulist_parse(buf, dst) __cpulist_parse((buf), (dst), nr_cpu_ids)
+static inline int __cpulist_parse(const char *buf, cpumask_t dstp, int nbits)
 {
 	return bitmap_parselist(buf, dstp->bits, nbits);
 }
 
-#define cpu_remap(oldbit, old, new) \
-		__cpu_remap((oldbit), &(old), &(new), NR_CPUS)
+#define cpu_remap(oldbit, old, new)			\
+	__cpu_remap((oldbit), (const_cpumask_t)(old),	\
+			      (const_cpumask_t)(new), nr_cpu_ids)
 static inline int __cpu_remap(int oldbit,
-		const cpumask_t *oldp, const cpumask_t *newp, int nbits)
+		const_cpumask_t oldp, const_cpumask_t newp, int nbits)
 {
 	return bitmap_bitremap(oldbit, oldp->bits, newp->bits, nbits);
 }
 
-#define cpus_remap(dst, src, old, new) \
-		__cpus_remap(&(dst), &(src), &(old), &(new), NR_CPUS)
-static inline void __cpus_remap(cpumask_t *dstp, const cpumask_t *srcp,
-		const cpumask_t *oldp, const cpumask_t *newp, int nbits)
+#define cpus_remap(dst, src, old, new)			\
+	__cpus_remap((dst), (const_cpumask_t)(src),	\
+			    (const_cpumask_t)(old), (new), nr_cpu_ids)
+static inline void __cpus_remap(cpumask_t dstp, const_cpumask_t srcp,
+		const_cpumask_t oldp, const_cpumask_t newp, int nbits)
 {
 	bitmap_remap(dstp->bits, srcp->bits, oldp->bits, newp->bits, nbits);
 }
 
-#define cpus_onto(dst, orig, relmap) \
-		__cpus_onto(&(dst), &(orig), &(relmap), NR_CPUS)
-static inline void __cpus_onto(cpumask_t *dstp, const cpumask_t *origp,
-		const cpumask_t *relmapp, int nbits)
+#define cpus_onto(dst, orig, relmap)			\
+	__cpus_onto((dst), (const_cpumask_t)(orig), (relmap), nr_cpu_ids)
+static inline void __cpus_onto(cpumask_t dstp, const_cpumask_t origp,
+		const_cpumask_t relmapp, int nbits)
 {
 	bitmap_onto(dstp->bits, origp->bits, relmapp->bits, nbits);
 }
 
-#define cpus_fold(dst, orig, sz) \
-		__cpus_fold(&(dst), &(orig), sz, NR_CPUS)
-static inline void __cpus_fold(cpumask_t *dstp, const cpumask_t *origp,
+#define cpus_fold(dst, orig, sz)			\
+	__cpus_fold((dst), (const_cpumask_t)(orig), sz, nr_cpu_ids)
+static inline void __cpus_fold(cpumask_t dstp, const_cpumask_t origp,
 		int sz, int nbits)
 {
 	bitmap_fold(dstp->bits, origp->bits, sz, nbits);
 }
 
-#if NR_CPUS == 1
-
-#define nr_cpu_ids		1
-#define first_cpu(src)		({ (void)(src); 0; })
-#define next_cpu(n, src)	({ (void)(src); 1; })
-#define any_online_cpu(mask)	0
-#define for_each_cpu_mask(cpu, mask)	\
-	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
-
-#else /* NR_CPUS > 1 */
-
-extern int nr_cpu_ids;
-int __first_cpu(const cpumask_t *srcp);
-int __next_cpu(int n, const cpumask_t *srcp);
-int __any_online_cpu(const cpumask_t *mask);
-
-#define first_cpu(src)		__first_cpu(&(src))
-#define next_cpu(n, src)	__next_cpu((n), &(src))
-#define any_online_cpu(mask) __any_online_cpu(&(mask))
-#define for_each_cpu_mask(cpu, mask)			\
-	for ((cpu) = -1;				\
-		(cpu) = next_cpu((cpu), (mask)),	\
-		(cpu) < NR_CPUS; )
-#endif
-
-#if NR_CPUS <= 64
-
-#define next_cpu_nr(n, src)		next_cpu(n, src)
-#define cpus_weight_nr(cpumask)		cpus_weight(cpumask)
-#define for_each_cpu_mask_nr(cpu, mask)	for_each_cpu_mask(cpu, mask)
-
-#else /* NR_CPUS > 64 */
-
-int __next_cpu_nr(int n, const cpumask_t *srcp);
-#define next_cpu_nr(n, src)	__next_cpu_nr((n), &(src))
-#define cpus_weight_nr(cpumask)	__cpus_weight(&(cpumask), nr_cpu_ids)
-#define for_each_cpu_mask_nr(cpu, mask)			\
-	for ((cpu) = -1;				\
-		(cpu) = next_cpu_nr((cpu), (mask)),	\
-		(cpu) < nr_cpu_ids; )
-
-#endif /* NR_CPUS > 64 */
-
 /*
  * The following particular system cpumasks and operations manage
  * possible, present, active and online cpus.  Each of them is a fixed size
@@ -458,33 +528,16 @@ int __next_cpu_nr(int n, const cpumask_t
  *        main(){ set1(3); set2(5); }
  */
 
-extern cpumask_t cpu_possible_map;
-extern cpumask_t cpu_online_map;
-extern cpumask_t cpu_present_map;
-extern cpumask_t cpu_active_map;
-
-#if NR_CPUS > 1
-#define num_online_cpus()	cpus_weight_nr(cpu_online_map)
-#define num_possible_cpus()	cpus_weight_nr(cpu_possible_map)
-#define num_present_cpus()	cpus_weight_nr(cpu_present_map)
-#define cpu_online(cpu)		cpu_isset((cpu), cpu_online_map)
-#define cpu_possible(cpu)	cpu_isset((cpu), cpu_possible_map)
-#define cpu_present(cpu)	cpu_isset((cpu), cpu_present_map)
-#define cpu_active(cpu)		cpu_isset((cpu), cpu_active_map)
-#else
-#define num_online_cpus()	1
-#define num_possible_cpus()	1
-#define num_present_cpus()	1
-#define cpu_online(cpu)		((cpu) == 0)
-#define cpu_possible(cpu)	((cpu) == 0)
-#define cpu_present(cpu)	((cpu) == 0)
-#define cpu_active(cpu)		((cpu) == 0)
-#endif
+extern cpumask_map_t cpu_possible_map;
+extern cpumask_map_t cpu_online_map;
+extern cpumask_map_t cpu_present_map;
+extern cpumask_map_t cpu_active_map;
+extern cpumask_map_t cpu_mask_all;
 
 #define cpu_is_offline(cpu)	unlikely(!cpu_online(cpu))
 
-#define for_each_possible_cpu(cpu) for_each_cpu_mask_nr((cpu), cpu_possible_map)
-#define for_each_online_cpu(cpu)   for_each_cpu_mask_nr((cpu), cpu_online_map)
-#define for_each_present_cpu(cpu)  for_each_cpu_mask_nr((cpu), cpu_present_map)
+#define for_each_possible_cpu(cpu) for_each_cpu((cpu), cpu_possible_map)
+#define for_each_online_cpu(cpu)   for_each_cpu((cpu), cpu_online_map)
+#define for_each_present_cpu(cpu)  for_each_cpu((cpu), cpu_present_map)
 
 #endif /* __LINUX_CPUMASK_H */
--- struct-cpumasks.orig/include/linux/cpumask_alloc.h
+++ struct-cpumasks/include/linux/cpumask_alloc.h
@@ -5,33 +5,36 @@
  * Simple alloc/free of cpumask structs
  */
 
-#if NR_CPUS > BITS_PER_LONG
-
-#include <linux/slab.h>
-
-/* Allocates a cpumask large enough to contain nr_cpu_ids cpus */
-static inline int cpumask_alloc(cpumask_t *m)
+#ifndef CONFIG_CPUMASKS_OFFSTACK
+static inline int cpumask_alloc(cpumask_var_t *m)
 {
-	cpumask_t d = kmalloc(CPUMASK_SIZE, GFP_KERNEL);
-
-	*m = d;
-	return (d != NULL);
+	return 1;
 }
 
-static inline void cpumask_free_(cpumask_t *m)
+static inline void cpumask_free(cpumask_var_t *m)
 {
-	kfree(*m);
 }
 
 #else
-static inline cpumask_val cpumask_alloc(cpumask_t *m)
+
+#include <linux/slab.h>
+
+/*
+ * Allocates a cpumask large enough to contain nr_cpu_ids cpus
+ * Returns true if allocation succeeded.
+ */
+static inline int cpumask_alloc(cpumask_var_t *m)
 {
+	cpumask_var_t d = kmalloc(cpumask_size(), GFP_KERNEL);
+
+	*m = d;
+	return (d != NULL);
 }
 
-static inline void cpumask_free(cpumask_t *m)
+static inline void cpumask_free_(cpumask_var_t *m)
 {
+	kfree(*m);
 }
-
 #endif
 
 /*
@@ -40,23 +43,27 @@ static inline void cpumask_free(cpumask_
 struct cpumask_pool_s {
 	unsigned long	length;
 	unsigned long	allocated;
-	cpumask_fixed	pool;
+	cpumask_map_t	pool;
 };
 
 #if 0
+
+/* Using fixed map size so length == sizeof(pool)/sizeof(cpumask_map_t) */
 #define DEFINE_PER_CPUMASK_POOL(name, size)		\
 	struct __pool_##name##_s {			\
 		unsigned long	allocated;		\
-		cpumask_data	pool[size];		\
+		cpumask_map_t	pool[size];		\
 	}						\
 	DEFINE_PER_CPU(struct __pool_##name##_s, name)
 #else
+
+/* Ideally here pool would be allocated early to be (cpumask_size() * size) */
 #define DEFINE_PER_CPUMASK_POOL(name, size)		\
 	DEFINE_PER_CPU(					\
 		struct {				\
 			unsigned long	length;		\
 			unsigned long	allocated;	\
-			cpumask_data	pool[size];	\
+			cpumask_map_t	pool[size];	\
 		}, name ) = { .length = size, }
 #endif
 
@@ -69,8 +76,8 @@ static inline int __cpumask_pool_get(cpu
 	int n;
 
 	preempt_disable();
-	while ((n = find_first_bit(&p->allocated, p->length)) < p->length &&
-		!test_and_set_bit(n, &p->allocated)) {
+	while ((n = find_first_zero_bit(&p->allocated, p->length))
+		< p->length && !test_and_set_bit(n, &p->allocated)) {
 
 		*m = &(p->pool[n]);
 		preempt_enable();
@@ -108,20 +115,20 @@ static inline void __cpumask_pool_put(cp
  *	CPUMASK_PTR(mask2, my_cpumasks);
  *
  *	--- DO NOT reference cpumask_t pointers until this check ---
- *	if (my_cpumasks == NULL)
+ *	if (!my_cpumasks)
  *		"kmalloc failed"...
  *
- * References are now cpumask_var variables (*mask1, ...)
+ * References are now cpumask_t variables
  */
 
 #if NR_CPUS > BITS_PER_LONG
 #define	CPUMASK_ALLOC(m)	struct m *m = kmalloc(sizeof(*m), GFP_KERNEL)
 #define	CPUMASK_FREE(m)		kfree(m)
-#define	CPUMASK_PTR(v, m) 	cpumask_var v = (m->v)
+#define	CPUMASK_PTR(v, m) 	cpumask_t v = (m->v)
 #else
 #define	CPUMASK_ALLOC(m)	struct m m
 #define	CPUMASK_FREE(m)
-#define	CPUMASK_PTR(v, m) 	cpumask_var v
+#define	CPUMASK_PTR(v, m) 	cpumask_t v = (m->v)
 #endif
 
 #endif /* __LINUX_CPUMASK_ALLOC_H */

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 06/31] cpumask: new lib/cpumask.c
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (4 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 05/31] cpumask: Provide new cpumask API Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:02 ` [PATCH 07/31] cpumask: changes to compile init/main.c Mike Travis
                   ` (24 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: lib-cpumask --]
[-- Type: text/plain, Size: 1612 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 lib/cpumask.c |   36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

--- struct-cpumasks.orig/lib/cpumask.c
+++ struct-cpumasks/lib/cpumask.c
@@ -3,34 +3,40 @@
 #include <linux/cpumask.h>
 #include <linux/module.h>
 
-int __first_cpu(const cpumask_t *srcp)
+int cpus_first(const_cpumask_t srcp)
 {
-	return find_first_bit(srcp->bits, NR_CPUS);
+	return find_first_bit(srcp->bits, nr_cpu_ids);
 }
-EXPORT_SYMBOL(__first_cpu);
+EXPORT_SYMBOL(cpus_first);
 
-int __next_cpu(int n, const cpumask_t *srcp)
+int cpus_next(int n, const_cpumask_t srcp)
 {
-	return find_next_bit(srcp->bits, NR_CPUS, n+1);
+	return find_next_bit(srcp->bits, nr_cpu_ids, n+1);
 }
-EXPORT_SYMBOL(__next_cpu);
+EXPORT_SYMBOL(cpus_next);
 
-#if NR_CPUS > 64
-int __next_cpu_nr(int n, const cpumask_t *srcp)
+int cpus_next_in(int n, const_cpumask_t srcp, const_cpumask_t andp)
 {
-	return find_next_bit(srcp->bits, nr_cpu_ids, n+1);
+	int cpu;
+
+	for (cpu = n + 1; cpu < nr_cpu_ids; cpu++) {
+		cpu = find_next_bit(srcp->bits, nr_cpu_ids, cpu);
+
+		if (cpu < nr_cpu_ids && cpu_isset(cpu, andp))
+			return cpu;
+	}
+	return nr_cpu_ids;
 }
-EXPORT_SYMBOL(__next_cpu_nr);
-#endif
+EXPORT_SYMBOL(cpus_next_in);
 
-int __any_online_cpu(const cpumask_t *mask)
+int any_cpu_in(const_cpumask_t mask, const_cpumask_t andmask)
 {
 	int cpu;
 
-	for_each_cpu_mask(cpu, *mask) {
-		if (cpu_online(cpu))
+	for_each_cpu(cpu, mask) {
+		if (cpu_isset(cpu, andmask))
 			break;
 	}
 	return cpu;
 }
-EXPORT_SYMBOL(__any_online_cpu);
+EXPORT_SYMBOL(any_cpu_in);

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 07/31] cpumask: changes to compile init/main.c
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (5 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 06/31] cpumask: new lib/cpumask.c Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:02 ` [PATCH 08/31] cpumask: Change cpumask maps Mike Travis
                   ` (23 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: init_main --]
[-- Type: text/plain, Size: 2674 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 include/linux/sched.h    |    6 +++---
 include/linux/seq_file.h |    4 ++--
 init/main.c              |   11 ++++++-----
 kernel/cpu.c             |    5 +++++
 4 files changed, 16 insertions(+), 10 deletions(-)

--- struct-cpumasks.orig/include/linux/sched.h
+++ struct-cpumasks/include/linux/sched.h
@@ -1583,10 +1583,10 @@ extern cputime_t task_gtime(struct task_
 
 #ifdef CONFIG_SMP
 extern int set_cpus_allowed_ptr(struct task_struct *p,
-				const cpumask_t *new_mask);
+				const cpumask_t new_mask);
 #else
 static inline int set_cpus_allowed_ptr(struct task_struct *p,
-				       const cpumask_t *new_mask)
+				       const cpumask_t new_mask)
 {
 	if (!cpu_isset(0, *new_mask))
 		return -EINVAL;
@@ -1595,7 +1595,7 @@ static inline int set_cpus_allowed_ptr(s
 #endif
 static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
 {
-	return set_cpus_allowed_ptr(p, &new_mask);
+	return set_cpus_allowed_ptr(p, new_mask);
 }
 
 extern unsigned long long sched_clock(void);
--- struct-cpumasks.orig/include/linux/seq_file.h
+++ struct-cpumasks/include/linux/seq_file.h
@@ -50,9 +50,9 @@ int seq_dentry(struct seq_file *, struct
 int seq_path_root(struct seq_file *m, struct path *path, struct path *root,
 		  char *esc);
 int seq_bitmap(struct seq_file *m, unsigned long *bits, unsigned int nr_bits);
-static inline int seq_cpumask(struct seq_file *m, cpumask_t *mask)
+static inline int seq_cpumask(struct seq_file *m, cpumask_t mask)
 {
-	return seq_bitmap(m, mask->bits, NR_CPUS);
+	return seq_bitmap(m, mask->bits, nr_cpu_ids);
 }
 
 static inline int seq_nodemask(struct seq_file *m, nodemask_t *mask)
--- struct-cpumasks.orig/init/main.c
+++ struct-cpumasks/init/main.c
@@ -367,12 +367,8 @@ static inline void smp_prepare_cpus(unsi
 
 #else
 
-#if NR_CPUS > BITS_PER_LONG
-cpumask_t cpu_mask_all __read_mostly = CPU_MASK_ALL;
-EXPORT_SYMBOL(cpu_mask_all);
-#endif
-
 /* Setup number of possible processor ids */
+#ifndef nr_cpu_ids
 int nr_cpu_ids __read_mostly = NR_CPUS;
 EXPORT_SYMBOL(nr_cpu_ids);
 
@@ -386,6 +382,11 @@ static void __init setup_nr_cpu_ids(void
 
 	nr_cpu_ids = highest_cpu + 1;
 }
+#else
+static inline void setup_nr_cpu_ids(void)
+{
+}
+#endif
 
 #ifndef CONFIG_HAVE_SETUP_PER_CPU_AREA
 unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
--- struct-cpumasks.orig/kernel/cpu.c
+++ struct-cpumasks/kernel/cpu.c
@@ -24,6 +24,11 @@
 cpumask_t cpu_present_map __read_mostly;
 EXPORT_SYMBOL(cpu_present_map);
 
+#if NR_CPUS > BITS_PER_LONG
+cpumask_map_t cpu_mask_all __read_mostly = CPU_MASK_ALL;
+EXPORT_SYMBOL(cpu_mask_all);
+#endif
+
 #ifndef CONFIG_SMP
 
 /*

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 08/31] cpumask: Change cpumask maps
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (6 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 07/31] cpumask: changes to compile init/main.c Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:02 ` [PATCH 09/31] cpumask: get rid of _nr functions Mike Travis
                   ` (22 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: cpu-maps --]
[-- Type: text/plain, Size: 4009 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/smpboot.c |   20 ++++++++++----------
 include/asm-x86/smp.h     |   20 ++++++++++----------
 2 files changed, 20 insertions(+), 20 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/smpboot.c
+++ struct-cpumasks/arch/x86/kernel/smpboot.c
@@ -102,20 +102,20 @@ EXPORT_SYMBOL(smp_num_siblings);
 DEFINE_PER_CPU(u16, cpu_llc_id) = BAD_APICID;
 
 /* bitmap of online cpus */
-cpumask_t cpu_online_map __read_mostly;
+cpumask_map_t cpu_online_map __read_mostly;
 EXPORT_SYMBOL(cpu_online_map);
 
-cpumask_t cpu_callin_map;
-cpumask_t cpu_callout_map;
-cpumask_t cpu_possible_map;
+cpumask_map_t cpu_callin_map;
+cpumask_map_t cpu_callout_map;
+cpumask_map_t cpu_possible_map;
 EXPORT_SYMBOL(cpu_possible_map);
 
 /* representing HT siblings of each logical CPU */
-DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);
+DEFINE_PER_CPU(cpumask_map_t, cpu_sibling_map);
 EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
 
 /* representing HT and core siblings of each logical CPU */
-DEFINE_PER_CPU(cpumask_t, cpu_core_map);
+DEFINE_PER_CPU(cpumask_map_t, cpu_core_map);
 EXPORT_PER_CPU_SYMBOL(cpu_core_map);
 
 /* Per CPU bogomips and other parameters */
@@ -126,7 +126,7 @@ static atomic_t init_deasserted;
 
 
 /* representing cpus for which sibling maps can be computed */
-static cpumask_t cpu_sibling_setup_map;
+static cpumask_map_t cpu_sibling_setup_map;
 
 /* Set if we find a B stepping CPU */
 static int __cpuinitdata smp_b_stepping;
@@ -503,7 +503,7 @@ void __cpuinit set_cpu_sibling_map(int c
 }
 
 /* maps the cpu to the sched domain representing multi-core */
-cpumask_t cpu_coregroup_map(int cpu)
+const cpumask_t cpu_coregroup_map(int cpu)
 {
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 	/*
@@ -511,9 +511,9 @@ cpumask_t cpu_coregroup_map(int cpu)
 	 * And for power savings, we return cpu_core_map
 	 */
 	if (sched_mc_power_savings || sched_smt_power_savings)
-		return per_cpu(cpu_core_map, cpu);
+		return (const cpumask_t)per_cpu(cpu_core_map, cpu);
 	else
-		return c->llc_shared_map;
+		return (const cpumask_t)c->llc_shared_map;
 }
 
 static void impress_friends(void)
--- struct-cpumasks.orig/include/asm-x86/smp.h
+++ struct-cpumasks/include/asm-x86/smp.h
@@ -18,9 +18,9 @@
 #include <asm/pda.h>
 #include <asm/thread_info.h>
 
-extern cpumask_t cpu_callout_map;
-extern cpumask_t cpu_initialized;
-extern cpumask_t cpu_callin_map;
+extern cpumask_map_t cpu_callout_map;
+extern cpumask_map_t cpu_initialized;
+extern cpumask_map_t cpu_callin_map;
 
 extern void (*mtrr_hook)(void);
 extern void zap_low_mappings(void);
@@ -29,10 +29,10 @@ extern int __cpuinit get_local_pda(int c
 
 extern int smp_num_siblings;
 extern unsigned int num_processors;
-extern cpumask_t cpu_initialized;
+extern cpumask_map_t cpu_initialized;
 
-DECLARE_PER_CPU(cpumask_t, cpu_sibling_map);
-DECLARE_PER_CPU(cpumask_t, cpu_core_map);
+DECLARE_PER_CPU(cpumask_map_t, cpu_sibling_map);
+DECLARE_PER_CPU(cpumask_map_t, cpu_core_map);
 DECLARE_PER_CPU(u16, cpu_llc_id);
 #ifdef CONFIG_X86_32
 DECLARE_PER_CPU(int, cpu_number);
@@ -60,7 +60,7 @@ struct smp_ops {
 	void (*cpu_die)(unsigned int cpu);
 	void (*play_dead)(void);
 
-	void (*send_call_func_ipi)(const cpumask_t *mask);
+	void (*send_call_func_ipi)(const cpumask_t mask);
 	void (*send_call_func_single_ipi)(int cpu);
 };
 
@@ -123,9 +123,9 @@ static inline void arch_send_call_functi
 	smp_ops.send_call_func_single_ipi(cpu);
 }
 
-static inline void arch_send_call_function_ipi(cpumask_t mask)
+static inline void arch_send_call_function_ipi(const cpumask_t mask)
 {
-	smp_ops.send_call_func_ipi(&mask);
+	smp_ops.send_call_func_ipi(mask);
 }
 
 void cpu_disable_common(void);
@@ -138,7 +138,7 @@ void native_cpu_die(unsigned int cpu);
 void native_play_dead(void);
 void play_dead_common(void);
 
-void native_send_call_func_ipi(const cpumask_t *mask);
+void native_send_call_func_ipi(const cpumask_t mask);
 void native_send_call_func_single_ipi(int cpu);
 
 void smp_store_cpu_info(int id);

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 09/31] cpumask: get rid of _nr functions
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (7 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 08/31] cpumask: Change cpumask maps Mike Travis
@ 2008-09-29 18:02 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 10/31] cpumask: clean cpumask_of_cpu refs Mike Travis
                   ` (21 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:02 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: get-rid-of-_nr --]
[-- Type: text/plain, Size: 31744 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |    6 +--
 arch/x86/kernel/cpu/cpufreq/p4-clockmod.c        |    6 +--
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |    8 ++---
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |   10 +++---
 arch/x86/kernel/cpu/cpufreq/speedstep-ich.c      |    4 +-
 arch/x86/kernel/cpu/intel_cacheinfo.c            |    2 -
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c          |    4 +-
 arch/x86/kernel/io_apic.c                        |    8 ++---
 arch/x86/kernel/smpboot.c                        |    8 ++---
 arch/x86/xen/smp.c                               |    4 +-
 drivers/acpi/processor_throttling.c              |    6 +--
 drivers/cpufreq/cpufreq.c                        |   14 ++++----
 drivers/cpufreq/cpufreq_conservative.c           |    2 -
 drivers/cpufreq/cpufreq_ondemand.c               |    4 +-
 drivers/infiniband/hw/ehca/ehca_irq.c            |    2 -
 include/asm-x86/ipi.h                            |    4 +-
 kernel/cpu.c                                     |    2 -
 kernel/rcuclassic.c                              |    2 -
 kernel/rcupreempt.c                              |   10 +++---
 kernel/sched.c                                   |   36 +++++++++++------------
 kernel/sched_fair.c                              |    2 -
 kernel/sched_rt.c                                |    4 +-
 kernel/smp.c                                     |    2 -
 kernel/taskstats.c                               |    4 +-
 kernel/time/clocksource.c                        |    2 -
 kernel/time/tick-broadcast.c                     |    4 +-
 kernel/workqueue.c                               |    6 +--
 mm/allocpercpu.c                                 |    4 +-
 mm/quicklist.c                                   |    2 -
 mm/vmstat.c                                      |    2 -
 net/core/dev.c                                   |    4 +-
 net/iucv/iucv.c                                  |    2 -
 32 files changed, 90 insertions(+), 90 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -202,7 +202,7 @@ static void drv_write(struct drv_cmd *cm
 	cpumask_t saved_mask = current->cpus_allowed;
 	unsigned int i;
 
-	for_each_cpu_mask_nr(i, cmd->mask) {
+	for_each_cpu_mask(i, cmd->mask) {
 		set_cpus_allowed_ptr(current, &cpumask_of_cpu(i));
 		do_drv_write(cmd);
 	}
@@ -451,7 +451,7 @@ static int acpi_cpufreq_target(struct cp
 
 	freqs.old = perf->states[perf->state].core_frequency * 1000;
 	freqs.new = data->freq_table[next_state].frequency;
-	for_each_cpu_mask_nr(i, cmd.mask) {
+	for_each_cpu_mask(i, cmd.mask) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -466,7 +466,7 @@ static int acpi_cpufreq_target(struct cp
 		}
 	}
 
-	for_each_cpu_mask_nr(i, cmd.mask) {
+	for_each_cpu_mask(i, cmd.mask) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
@@ -122,7 +122,7 @@ static int cpufreq_p4_target(struct cpuf
 		return 0;
 
 	/* notifiers */
-	for_each_cpu_mask_nr(i, policy->cpus) {
+	for_each_cpu_mask(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -130,11 +130,11 @@ static int cpufreq_p4_target(struct cpuf
 	/* run on each logical CPU, see section 13.15.3 of IA32 Intel Architecture Software
 	 * Developer's Manual, Volume 3
 	 */
-	for_each_cpu_mask_nr(i, policy->cpus)
+	for_each_cpu_mask(i, policy->cpus)
 		cpufreq_p4_setdc(i, p4clockmod_table[newstate].index);
 
 	/* notifiers */
-	for_each_cpu_mask_nr(i, policy->cpus) {
+	for_each_cpu_mask(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -963,7 +963,7 @@ static int transition_frequency_fidvid(s
 	freqs.old = find_khz_freq_from_fid(data->currfid);
 	freqs.new = find_khz_freq_from_fid(fid);
 
-	for_each_cpu_mask_nr(i, *(data->available_cores)) {
+	for_each_cpu_mask(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -971,7 +971,7 @@ static int transition_frequency_fidvid(s
 	res = transition_fid_vid(data, fid, vid);
 	freqs.new = find_khz_freq_from_fid(data->currfid);
 
-	for_each_cpu_mask_nr(i, *(data->available_cores)) {
+	for_each_cpu_mask(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -994,7 +994,7 @@ static int transition_frequency_pstate(s
 	freqs.old = find_khz_freq_from_pstate(data->powernow_table, data->currpstate);
 	freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate);
 
-	for_each_cpu_mask_nr(i, *(data->available_cores)) {
+	for_each_cpu_mask(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -1002,7 +1002,7 @@ static int transition_frequency_pstate(s
 	res = transition_pstate(data, pstate);
 	freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate);
 
-	for_each_cpu_mask_nr(i, *(data->available_cores)) {
+	for_each_cpu_mask(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
@@ -600,7 +600,7 @@ static int centrino_target (struct cpufr
 	*saved_mask = current->cpus_allowed;
 	first_cpu = 1;
 	cpus_clear(*covered_cpus);
-	for_each_cpu_mask_nr(j, *online_policy_cpus) {
+	for_each_cpu_mask(j, *online_policy_cpus) {
 		/*
 		 * Support for SMP systems.
 		 * Make sure we are running on CPU that wants to change freq
@@ -641,7 +641,7 @@ static int centrino_target (struct cpufr
 			dprintk("target=%dkHz old=%d new=%d msr=%04x\n",
 				target_freq, freqs.old, freqs.new, msr);
 
-			for_each_cpu_mask_nr(k, *online_policy_cpus) {
+			for_each_cpu_mask(k, *online_policy_cpus) {
 				freqs.cpu = k;
 				cpufreq_notify_transition(&freqs,
 					CPUFREQ_PRECHANGE);
@@ -664,7 +664,7 @@ static int centrino_target (struct cpufr
 		preempt_enable();
 	}
 
-	for_each_cpu_mask_nr(k, *online_policy_cpus) {
+	for_each_cpu_mask(k, *online_policy_cpus) {
 		freqs.cpu = k;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -678,7 +678,7 @@ static int centrino_target (struct cpufr
 		 */
 
 		if (!cpus_empty(*covered_cpus))
-			for_each_cpu_mask_nr(j, *covered_cpus) {
+			for_each_cpu_mask(j, *covered_cpus) {
 				set_cpus_allowed_ptr(current,
 						     &cpumask_of_cpu(j));
 				wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
@@ -687,7 +687,7 @@ static int centrino_target (struct cpufr
 		tmp = freqs.new;
 		freqs.new = freqs.old;
 		freqs.old = tmp;
-		for_each_cpu_mask_nr(j, *online_policy_cpus) {
+		for_each_cpu_mask(j, *online_policy_cpus) {
 			freqs.cpu = j;
 			cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 			cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
@@ -279,7 +279,7 @@ static int speedstep_target (struct cpuf
 
 	cpus_allowed = current->cpus_allowed;
 
-	for_each_cpu_mask_nr(i, policy->cpus) {
+	for_each_cpu_mask(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -292,7 +292,7 @@ static int speedstep_target (struct cpuf
 	/* allow to be run on all CPUs */
 	set_cpus_allowed_ptr(current, &cpus_allowed);
 
-	for_each_cpu_mask_nr(i, policy->cpus) {
+	for_each_cpu_mask(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ struct-cpumasks/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -513,7 +513,7 @@ static void __cpuinit cache_remove_share
 	int sibling;
 
 	this_leaf = CPUID4_INFO_IDX(cpu, index);
-	for_each_cpu_mask_nr(sibling, this_leaf->shared_cpu_map) {
+	for_each_cpu_mask(sibling, this_leaf->shared_cpu_map) {
 		sibling_leaf = CPUID4_INFO_IDX(sibling, index);
 		cpu_clear(cpu, sibling_leaf->shared_cpu_map);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -527,7 +527,7 @@ static __cpuinit int threshold_create_ba
 	if (err)
 		goto out_free;
 
-	for_each_cpu_mask_nr(i, b->cpus) {
+	for_each_cpu_mask(i, b->cpus) {
 		if (i == cpu)
 			continue;
 
@@ -617,7 +617,7 @@ static void threshold_remove_bank(unsign
 #endif
 
 	/* remove all sibling symlinks before unregistering */
-	for_each_cpu_mask_nr(i, b->cpus) {
+	for_each_cpu_mask(i, b->cpus) {
 		if (i == cpu)
 			continue;
 
--- struct-cpumasks.orig/arch/x86/kernel/io_apic.c
+++ struct-cpumasks/arch/x86/kernel/io_apic.c
@@ -1237,7 +1237,7 @@ static int __assign_irq_vector(int irq, 
 			return 0;
 	}
 
-	for_each_online_cpu_mask_nr(cpu, *mask) {
+	for_each_cpu_in(cpu, mask, cpu_online_map) {
 		int new_cpu;
 		int vector, offset;
 
@@ -1261,7 +1261,7 @@ next:
 		if (vector == SYSCALL_VECTOR)
 			goto next;
 #endif
-		for_each_online_cpu_mask_nr(new_cpu, tmpmask)
+		for_each_cpu_in(new_cpu, tmpmask, cpu_online_map)
 			if (per_cpu(vector_irq, new_cpu)[vector] != -1)
 				goto next;
 		/* Found one! */
@@ -1271,7 +1271,7 @@ next:
 			cfg->move_in_progress = 1;
 			cfg->old_domain = cfg->domain;
 		}
-		for_each_cpu_mask_nr(new_cpu, tmpmask)
+		for_each_cpu_in(new_cpu, tmpmask, cpu_online_map)
 			per_cpu(vector_irq, new_cpu)[vector] = irq;
 		cfg->vector = vector;
 		cfg->domain = tmpmask;
@@ -1302,7 +1302,7 @@ static void __clear_irq_vector(int irq)
 
 	vector = cfg->vector;
 	cpus_and(mask, cfg->domain, cpu_online_map);
-	for_each_cpu_mask_nr(cpu, mask)
+	for_each_cpu_mask(cpu, mask)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 
 	cfg->vector = 0;
--- struct-cpumasks.orig/arch/x86/kernel/smpboot.c
+++ struct-cpumasks/arch/x86/kernel/smpboot.c
@@ -448,7 +448,7 @@ void __cpuinit set_cpu_sibling_map(int c
 	cpu_set(cpu, cpu_sibling_setup_map);
 
 	if (smp_num_siblings > 1) {
-		for_each_cpu_mask_nr(i, cpu_sibling_setup_map) {
+		for_each_cpu_mask(i, cpu_sibling_setup_map) {
 			if (c->phys_proc_id == cpu_data(i).phys_proc_id &&
 			    c->cpu_core_id == cpu_data(i).cpu_core_id) {
 				cpu_set(i, per_cpu(cpu_sibling_map, cpu));
@@ -471,7 +471,7 @@ void __cpuinit set_cpu_sibling_map(int c
 		return;
 	}
 
-	for_each_cpu_mask_nr(i, cpu_sibling_setup_map) {
+	for_each_cpu_mask(i, cpu_sibling_setup_map) {
 		if (per_cpu(cpu_llc_id, cpu) != BAD_APICID &&
 		    per_cpu(cpu_llc_id, cpu) == per_cpu(cpu_llc_id, i)) {
 			cpu_set(i, c->llc_shared_map);
@@ -1268,7 +1268,7 @@ static void remove_siblinginfo(int cpu)
 	int sibling;
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
-	for_each_cpu_mask_nr(sibling, per_cpu(cpu_core_map, cpu)) {
+	for_each_cpu_mask(sibling, per_cpu(cpu_core_map, cpu)) {
 		cpu_clear(cpu, per_cpu(cpu_core_map, sibling));
 		/*/
 		 * last thread sibling in this cpu core going down
@@ -1277,7 +1277,7 @@ static void remove_siblinginfo(int cpu)
 			cpu_data(sibling).booted_cores--;
 	}
 
-	for_each_cpu_mask_nr(sibling, per_cpu(cpu_sibling_map, cpu))
+	for_each_cpu_mask(sibling, per_cpu(cpu_sibling_map, cpu))
 		cpu_clear(cpu, per_cpu(cpu_sibling_map, sibling));
 	cpus_clear(per_cpu(cpu_sibling_map, cpu));
 	cpus_clear(per_cpu(cpu_core_map, cpu));
--- struct-cpumasks.orig/arch/x86/xen/smp.c
+++ struct-cpumasks/arch/x86/xen/smp.c
@@ -412,7 +412,7 @@ static void xen_send_IPI_mask(const cpum
 {
 	unsigned cpu;
 
-	for_each_online_cpu_mask_nr(cpu, *mask)
+	for_each_cpu_mask(cpu, mask)
 		xen_send_IPI_one(cpu, vector);
 }
 
@@ -423,7 +423,7 @@ static void xen_smp_send_call_function_i
 	xen_send_IPI_mask(&mask, XEN_CALL_FUNCTION_VECTOR);
 
 	/* Make sure other vcpus get a chance to run if they need to. */
-	for_each_cpu_mask_nr(cpu, mask) {
+	for_each_cpu_mask(cpu, mask) {
 		if (xen_vcpu_stolen(cpu)) {
 			HYPERVISOR_sched_op(SCHEDOP_yield, 0);
 			break;
--- struct-cpumasks.orig/drivers/acpi/processor_throttling.c
+++ struct-cpumasks/drivers/acpi/processor_throttling.c
@@ -1013,7 +1013,7 @@ int acpi_processor_set_throttling(struct
 	 * affected cpu in order to get one proper T-state.
 	 * The notifier event is THROTTLING_PRECHANGE.
 	 */
-	for_each_cpu_mask_nr(i, online_throttling_cpus) {
+	for_each_cpu_mask(i, online_throttling_cpus) {
 		t_state.cpu = i;
 		acpi_processor_throttling_notifier(THROTTLING_PRECHANGE,
 							&t_state);
@@ -1034,7 +1034,7 @@ int acpi_processor_set_throttling(struct
 		 * it is necessary to set T-state for every affected
 		 * cpus.
 		 */
-		for_each_cpu_mask_nr(i, online_throttling_cpus) {
+		for_each_cpu_mask(i, online_throttling_cpus) {
 			match_pr = per_cpu(processors, i);
 			/*
 			 * If the pointer is invalid, we will report the
@@ -1068,7 +1068,7 @@ int acpi_processor_set_throttling(struct
 	 * affected cpu to update the T-states.
 	 * The notifier event is THROTTLING_POSTCHANGE
 	 */
-	for_each_cpu_mask_nr(i, online_throttling_cpus) {
+	for_each_cpu_mask(i, online_throttling_cpus) {
 		t_state.cpu = i;
 		acpi_processor_throttling_notifier(THROTTLING_POSTCHANGE,
 							&t_state);
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq.c
@@ -589,7 +589,7 @@ static ssize_t show_cpus(cpumask_t mask,
 	ssize_t i = 0;
 	unsigned int cpu;
 
-	for_each_cpu_mask_nr(cpu, mask) {
+	for_each_cpu_mask(cpu, mask) {
 		if (i)
 			i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), " ");
 		i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), "%u", cpu);
@@ -838,7 +838,7 @@ static int cpufreq_add_dev(struct sys_de
 	}
 #endif
 
-	for_each_cpu_mask_nr(j, policy->cpus) {
+	for_each_cpu_mask(j, policy->cpus) {
 		if (cpu == j)
 			continue;
 
@@ -901,14 +901,14 @@ static int cpufreq_add_dev(struct sys_de
 	}
 
 	spin_lock_irqsave(&cpufreq_driver_lock, flags);
-	for_each_cpu_mask_nr(j, policy->cpus) {
+	for_each_cpu_mask(j, policy->cpus) {
 		per_cpu(cpufreq_cpu_data, j) = policy;
 		per_cpu(policy_cpu, j) = policy->cpu;
 	}
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
 	/* symlink affected CPUs */
-	for_each_cpu_mask_nr(j, policy->cpus) {
+	for_each_cpu_mask(j, policy->cpus) {
 		if (j == cpu)
 			continue;
 		if (!cpu_online(j))
@@ -948,7 +948,7 @@ static int cpufreq_add_dev(struct sys_de
 
 err_out_unregister:
 	spin_lock_irqsave(&cpufreq_driver_lock, flags);
-	for_each_cpu_mask_nr(j, policy->cpus)
+	for_each_cpu_mask(j, policy->cpus)
 		per_cpu(cpufreq_cpu_data, j) = NULL;
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
@@ -1031,7 +1031,7 @@ static int __cpufreq_remove_dev(struct s
 	 * the sysfs links afterwards.
 	 */
 	if (unlikely(cpus_weight(data->cpus) > 1)) {
-		for_each_cpu_mask_nr(j, data->cpus) {
+		for_each_cpu_mask(j, data->cpus) {
 			if (j == cpu)
 				continue;
 			per_cpu(cpufreq_cpu_data, j) = NULL;
@@ -1041,7 +1041,7 @@ static int __cpufreq_remove_dev(struct s
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
 	if (unlikely(cpus_weight(data->cpus) > 1)) {
-		for_each_cpu_mask_nr(j, data->cpus) {
+		for_each_cpu_mask(j, data->cpus) {
 			if (j == cpu)
 				continue;
 			dprintk("removing link for cpu %u\n", j);
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq_conservative.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq_conservative.c
@@ -497,7 +497,7 @@ static int cpufreq_governor_dbs(struct c
 			return rc;
 		}
 
-		for_each_cpu_mask_nr(j, policy->cpus) {
+		for_each_cpu_mask(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
 			j_dbs_info = &per_cpu(cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq_ondemand.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq_ondemand.c
@@ -367,7 +367,7 @@ static void dbs_check_cpu(struct cpu_dbs
 
 	/* Get Idle Time */
 	idle_ticks = UINT_MAX;
-	for_each_cpu_mask_nr(j, policy->cpus) {
+	for_each_cpu_mask(j, policy->cpus) {
 		cputime64_t total_idle_ticks;
 		unsigned int tmp_idle_ticks;
 		struct cpu_dbs_info_s *j_dbs_info;
@@ -521,7 +521,7 @@ static int cpufreq_governor_dbs(struct c
 			return rc;
 		}
 
-		for_each_cpu_mask_nr(j, policy->cpus) {
+		for_each_cpu_mask(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
 			j_dbs_info = &per_cpu(cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
--- struct-cpumasks.orig/drivers/infiniband/hw/ehca/ehca_irq.c
+++ struct-cpumasks/drivers/infiniband/hw/ehca/ehca_irq.c
@@ -650,7 +650,7 @@ static inline int find_next_online_cpu(s
 		ehca_dmp(&cpu_online_map, sizeof(cpumask_t), "");
 
 	spin_lock_irqsave(&pool->last_cpu_lock, flags);
-	cpu = next_cpu_nr(pool->last_cpu, cpu_online_map);
+	cpu = next_cpu(pool->last_cpu, cpu_online_map);
 	if (cpu >= nr_cpu_ids)
 		cpu = first_cpu(cpu_online_map);
 	pool->last_cpu = cpu;
--- struct-cpumasks.orig/include/asm-x86/ipi.h
+++ struct-cpumasks/include/asm-x86/ipi.h
@@ -128,7 +128,7 @@ static inline void send_IPI_mask_sequenc
 	 * - mbligh
 	 */
 	local_irq_save(flags);
-	for_each_cpu_mask_nr(query_cpu, *mask) {
+	for_each_cpu_mask(query_cpu, *mask) {
 		__send_IPI_dest_field(per_cpu(x86_cpu_to_apicid, query_cpu),
 				      vector, APIC_DEST_PHYSICAL);
 	}
@@ -144,7 +144,7 @@ static inline void send_IPI_mask_allbuts
 	/* See Hack comment above */
 
 	local_irq_save(flags);
-	for_each_cpu_mask_nr(query_cpu, *mask)
+	for_each_cpu_mask(query_cpu, *mask)
 		if (query_cpu != this_cpu)
 			__send_IPI_dest_field(
 				per_cpu(x86_cpu_to_apicid, query_cpu),
--- struct-cpumasks.orig/kernel/cpu.c
+++ struct-cpumasks/kernel/cpu.c
@@ -445,7 +445,7 @@ void __ref enable_nonboot_cpus(void)
 		goto out;
 
 	printk("Enabling non-boot CPUs ...\n");
-	for_each_cpu_mask_nr(cpu, frozen_cpus) {
+	for_each_cpu_mask(cpu, frozen_cpus) {
 		error = _cpu_up(cpu, 1);
 		if (!error) {
 			printk("CPU%d is up\n", cpu);
--- struct-cpumasks.orig/kernel/rcuclassic.c
+++ struct-cpumasks/kernel/rcuclassic.c
@@ -112,7 +112,7 @@ static void force_quiescent_state(struct
 		 */
 		cpus_and(cpumask, rcp->cpumask, cpu_online_map);
 		cpu_clear(rdp->cpu, cpumask);
-		for_each_cpu_mask_nr(cpu, cpumask)
+		for_each_cpu_mask(cpu, cpumask)
 			smp_send_reschedule(cpu);
 	}
 	spin_unlock_irqrestore(&rcp->lock, flags);
--- struct-cpumasks.orig/kernel/rcupreempt.c
+++ struct-cpumasks/kernel/rcupreempt.c
@@ -748,7 +748,7 @@ rcu_try_flip_idle(void)
 
 	/* Now ask each CPU for acknowledgement of the flip. */
 
-	for_each_cpu_mask_nr(cpu, rcu_cpu_online_map) {
+	for_each_cpu_mask(cpu, rcu_cpu_online_map) {
 		per_cpu(rcu_flip_flag, cpu) = rcu_flipped;
 		dyntick_save_progress_counter(cpu);
 	}
@@ -766,7 +766,7 @@ rcu_try_flip_waitack(void)
 	int cpu;
 
 	RCU_TRACE_ME(rcupreempt_trace_try_flip_a1);
-	for_each_cpu_mask_nr(cpu, rcu_cpu_online_map)
+	for_each_cpu_mask(cpu, rcu_cpu_online_map)
 		if (rcu_try_flip_waitack_needed(cpu) &&
 		    per_cpu(rcu_flip_flag, cpu) != rcu_flip_seen) {
 			RCU_TRACE_ME(rcupreempt_trace_try_flip_ae1);
@@ -798,7 +798,7 @@ rcu_try_flip_waitzero(void)
 	/* Check to see if the sum of the "last" counters is zero. */
 
 	RCU_TRACE_ME(rcupreempt_trace_try_flip_z1);
-	for_each_cpu_mask_nr(cpu, rcu_cpu_online_map)
+	for_each_cpu_mask(cpu, rcu_cpu_online_map)
 		sum += RCU_DATA_CPU(cpu)->rcu_flipctr[lastidx];
 	if (sum != 0) {
 		RCU_TRACE_ME(rcupreempt_trace_try_flip_ze1);
@@ -813,7 +813,7 @@ rcu_try_flip_waitzero(void)
 	smp_mb();  /*  ^^^^^^^^^^^^ */
 
 	/* Call for a memory barrier from each CPU. */
-	for_each_cpu_mask_nr(cpu, rcu_cpu_online_map) {
+	for_each_cpu_mask(cpu, rcu_cpu_online_map) {
 		per_cpu(rcu_mb_flag, cpu) = rcu_mb_needed;
 		dyntick_save_progress_counter(cpu);
 	}
@@ -833,7 +833,7 @@ rcu_try_flip_waitmb(void)
 	int cpu;
 
 	RCU_TRACE_ME(rcupreempt_trace_try_flip_m1);
-	for_each_cpu_mask_nr(cpu, rcu_cpu_online_map)
+	for_each_cpu_mask(cpu, rcu_cpu_online_map)
 		if (rcu_try_flip_waitmb_needed(cpu) &&
 		    per_cpu(rcu_mb_flag, cpu) != rcu_mb_done) {
 			RCU_TRACE_ME(rcupreempt_trace_try_flip_me1);
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -2069,7 +2069,7 @@ find_idlest_group(struct sched_domain *s
 		/* Tally up the load of all CPUs in the group */
 		avg_load = 0;
 
-		for_each_cpu_mask_nr(i, group->cpumask) {
+		for_each_cpu_mask(i, group->cpumask) {
 			/* Bias balancing toward cpus of our domain */
 			if (local_group)
 				load = source_load(i, load_idx);
@@ -2111,7 +2111,7 @@ find_idlest_cpu(struct sched_group *grou
 	/* Traverse only the allowed CPUs */
 	cpus_and(*tmp, group->cpumask, p->cpus_allowed);
 
-	for_each_cpu_mask_nr(i, *tmp) {
+	for_each_cpu_mask(i, *tmp) {
 		load = weighted_cpuload(i);
 
 		if (load < min_load || (load == min_load && i == this_cpu)) {
@@ -3129,7 +3129,7 @@ find_busiest_group(struct sched_domain *
 		max_cpu_load = 0;
 		min_cpu_load = ~0UL;
 
-		for_each_cpu_mask_nr(i, group->cpumask) {
+		for_each_cpu_mask(i, group->cpumask) {
 			struct rq *rq;
 
 			if (!cpu_isset(i, *cpus))
@@ -3408,7 +3408,7 @@ find_busiest_queue(struct sched_group *g
 	unsigned long max_load = 0;
 	int i;
 
-	for_each_cpu_mask_nr(i, group->cpumask) {
+	for_each_cpu_mask(i, group->cpumask) {
 		unsigned long wl;
 
 		if (!cpu_isset(i, *cpus))
@@ -3950,7 +3950,7 @@ static void run_rebalance_domains(struct
 		int balance_cpu;
 
 		cpu_clear(this_cpu, cpus);
-		for_each_cpu_mask_nr(balance_cpu, cpus) {
+		for_each_cpu_mask(balance_cpu, cpus) {
 			/*
 			 * If this cpu gets work to do, stop the load balancing
 			 * work being done for other cpus. Next load
@@ -6961,7 +6961,7 @@ init_sched_build_groups(const cpumask_t 
 
 	cpus_clear(*covered);
 
-	for_each_cpu_mask_nr(i, *span) {
+	for_each_cpu_mask(i, *span) {
 		struct sched_group *sg;
 		int group = group_fn(i, cpu_map, &sg, tmpmask);
 		int j;
@@ -6972,7 +6972,7 @@ init_sched_build_groups(const cpumask_t 
 		cpus_clear(sg->cpumask);
 		sg->__cpu_power = 0;
 
-		for_each_cpu_mask_nr(j, *span) {
+		for_each_cpu_mask(j, *span) {
 			if (group_fn(j, cpu_map, NULL, tmpmask) != group)
 				continue;
 
@@ -7172,7 +7172,7 @@ static void init_numa_sched_groups_power
 	if (!sg)
 		return;
 	do {
-		for_each_cpu_mask_nr(j, sg->cpumask) {
+		for_each_cpu_mask(j, sg->cpumask) {
 			struct sched_domain *sd;
 
 			sd = &per_cpu(phys_domains, j);
@@ -7197,7 +7197,7 @@ static void free_sched_groups(const cpum
 {
 	int cpu, i;
 
-	for_each_cpu_mask_nr(cpu, *cpu_map) {
+	for_each_cpu_mask(cpu, *cpu_map) {
 		struct sched_group **sched_group_nodes
 			= sched_group_nodes_bycpu[cpu];
 
@@ -7436,7 +7436,7 @@ static int __build_sched_domains(const c
 	/*
 	 * Set up domains for cpus specified by the cpu_map.
 	 */
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		struct sched_domain *sd = NULL, *p;
 		SCHED_CPUMASK_VAR(nodemask, allmasks);
 
@@ -7503,7 +7503,7 @@ static int __build_sched_domains(const c
 
 #ifdef CONFIG_SCHED_SMT
 	/* Set up CPU (sibling) groups */
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
 		SCHED_CPUMASK_VAR(send_covered, allmasks);
 
@@ -7520,7 +7520,7 @@ static int __build_sched_domains(const c
 
 #ifdef CONFIG_SCHED_MC
 	/* Set up multi-core groups */
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		SCHED_CPUMASK_VAR(this_core_map, allmasks);
 		SCHED_CPUMASK_VAR(send_covered, allmasks);
 
@@ -7587,7 +7587,7 @@ static int __build_sched_domains(const c
 			goto error;
 		}
 		sched_group_nodes[i] = sg;
-		for_each_cpu_mask_nr(j, *nodemask) {
+		for_each_cpu_mask(j, *nodemask) {
 			struct sched_domain *sd;
 
 			sd = &per_cpu(node_domains, j);
@@ -7633,21 +7633,21 @@ static int __build_sched_domains(const c
 
 	/* Calculate CPU power for physical packages and nodes */
 #ifdef CONFIG_SCHED_SMT
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		struct sched_domain *sd = &per_cpu(cpu_domains, i);
 
 		init_sched_groups_power(i, sd);
 	}
 #endif
 #ifdef CONFIG_SCHED_MC
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		struct sched_domain *sd = &per_cpu(core_domains, i);
 
 		init_sched_groups_power(i, sd);
 	}
 #endif
 
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		struct sched_domain *sd = &per_cpu(phys_domains, i);
 
 		init_sched_groups_power(i, sd);
@@ -7667,7 +7667,7 @@ static int __build_sched_domains(const c
 #endif
 
 	/* Attach the domains */
-	for_each_cpu_mask_nr(i, *cpu_map) {
+	for_each_cpu_mask(i, *cpu_map) {
 		struct sched_domain *sd;
 #ifdef CONFIG_SCHED_SMT
 		sd = &per_cpu(cpu_domains, i);
@@ -7750,7 +7750,7 @@ static void detach_destroy_domains(const
 
 	unregister_sched_domain_sysctl();
 
-	for_each_cpu_mask_nr(i, *cpu_map)
+	for_each_cpu_mask(i, *cpu_map)
 		cpu_attach_domain(NULL, &def_root_domain, i);
 	synchronize_sched();
 	arch_destroy_sched_domains(cpu_map, &tmpmask);
--- struct-cpumasks.orig/kernel/sched_fair.c
+++ struct-cpumasks/kernel/sched_fair.c
@@ -978,7 +978,7 @@ static int wake_idle(int cpu, struct tas
 			&& !task_hot(p, task_rq(p)->clock, sd))) {
 			cpus_and(tmp, sd->span, p->cpus_allowed);
 			cpus_and(tmp, tmp, cpu_active_map);
-			for_each_cpu_mask_nr(i, tmp) {
+			for_each_cpu_mask(i, tmp) {
 				if (idle_cpu(i)) {
 					if (i != task_cpu(p)) {
 						schedstat_inc(p,
--- struct-cpumasks.orig/kernel/sched_rt.c
+++ struct-cpumasks/kernel/sched_rt.c
@@ -245,7 +245,7 @@ static int do_balance_runtime(struct rt_
 
 	spin_lock(&rt_b->rt_runtime_lock);
 	rt_period = ktime_to_ns(rt_b->rt_period);
-	for_each_cpu_mask_nr(i, rd->span) {
+	for_each_cpu_mask(i, rd->span) {
 		struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
 		s64 diff;
 
@@ -1179,7 +1179,7 @@ static int pull_rt_task(struct rq *this_
 
 	next = pick_next_task_rt(this_rq);
 
-	for_each_cpu_mask_nr(cpu, this_rq->rd->rto_mask) {
+	for_each_cpu_mask(cpu, this_rq->rd->rto_mask) {
 		if (this_cpu == cpu)
 			continue;
 
--- struct-cpumasks.orig/kernel/smp.c
+++ struct-cpumasks/kernel/smp.c
@@ -295,7 +295,7 @@ static void smp_call_function_mask_quies
 	data.func = quiesce_dummy;
 	data.info = NULL;
 
-	for_each_cpu_mask_nr(cpu, *mask) {
+	for_each_cpu_mask(cpu, *mask) {
 		data.flags = CSD_FLAG_WAIT;
 		generic_exec_single(cpu, &data);
 	}
--- struct-cpumasks.orig/kernel/taskstats.c
+++ struct-cpumasks/kernel/taskstats.c
@@ -301,7 +301,7 @@ static int add_del_listener(pid_t pid, c
 		return -EINVAL;
 
 	if (isadd == REGISTER) {
-		for_each_cpu_mask_nr(cpu, mask) {
+		for_each_cpu_mask(cpu, mask) {
 			s = kmalloc_node(sizeof(struct listener), GFP_KERNEL,
 					 cpu_to_node(cpu));
 			if (!s)
@@ -320,7 +320,7 @@ static int add_del_listener(pid_t pid, c
 
 	/* Deregister or cleanup */
 cleanup:
-	for_each_cpu_mask_nr(cpu, mask) {
+	for_each_cpu_mask(cpu, mask) {
 		listeners = &per_cpu(listener_array, cpu);
 		down_write(&listeners->sem);
 		list_for_each_entry_safe(s, tmp, &listeners->list, list) {
--- struct-cpumasks.orig/kernel/time/clocksource.c
+++ struct-cpumasks/kernel/time/clocksource.c
@@ -151,7 +151,7 @@ static void clocksource_watchdog(unsigne
 		 * Cycle through CPUs to check if the CPUs stay
 		 * synchronized to each other.
 		 */
-		int next_cpu = next_cpu_nr(raw_smp_processor_id(), cpu_online_map);
+		int next_cpu = next_cpu(raw_smp_processor_id(), cpu_online_map);
 
 		if (next_cpu >= nr_cpu_ids)
 			next_cpu = first_cpu(cpu_online_map);
--- struct-cpumasks.orig/kernel/time/tick-broadcast.c
+++ struct-cpumasks/kernel/time/tick-broadcast.c
@@ -399,7 +399,7 @@ again:
 	mask = CPU_MASK_NONE;
 	now = ktime_get();
 	/* Find all expired events */
-	for_each_cpu_mask_nr(cpu, tick_broadcast_oneshot_mask) {
+	for_each_cpu_mask(cpu, tick_broadcast_oneshot_mask) {
 		td = &per_cpu(tick_cpu_device, cpu);
 		if (td->evtdev->next_event.tv64 <= now.tv64)
 			cpu_set(cpu, mask);
@@ -496,7 +496,7 @@ static void tick_broadcast_init_next_eve
 	struct tick_device *td;
 	int cpu;
 
-	for_each_cpu_mask_nr(cpu, *mask) {
+	for_each_cpu_mask(cpu, *mask) {
 		td = &per_cpu(tick_cpu_device, cpu);
 		if (td->evtdev)
 			td->evtdev->next_event = expires;
--- struct-cpumasks.orig/kernel/workqueue.c
+++ struct-cpumasks/kernel/workqueue.c
@@ -415,7 +415,7 @@ void flush_workqueue(struct workqueue_st
 	might_sleep();
 	lock_map_acquire(&wq->lockdep_map);
 	lock_map_release(&wq->lockdep_map);
-	for_each_cpu_mask_nr(cpu, *cpu_map)
+	for_each_cpu_mask(cpu, *cpu_map)
 		flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
 }
 EXPORT_SYMBOL_GPL(flush_workqueue);
@@ -546,7 +546,7 @@ static void wait_on_work(struct work_str
 	wq = cwq->wq;
 	cpu_map = wq_cpu_map(wq);
 
-	for_each_cpu_mask_nr(cpu, *cpu_map)
+	for_each_cpu_mask(cpu, *cpu_map)
 		wait_on_cpu_work(per_cpu_ptr(wq->cpu_wq, cpu), work);
 }
 
@@ -906,7 +906,7 @@ void destroy_workqueue(struct workqueue_
 	list_del(&wq->list);
 	spin_unlock(&workqueue_lock);
 
-	for_each_cpu_mask_nr(cpu, *cpu_map)
+	for_each_cpu_mask(cpu, *cpu_map)
 		cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu));
  	cpu_maps_update_done();
 
--- struct-cpumasks.orig/mm/allocpercpu.c
+++ struct-cpumasks/mm/allocpercpu.c
@@ -34,7 +34,7 @@ static void percpu_depopulate(void *__pd
 static void __percpu_depopulate_mask(void *__pdata, cpumask_t *mask)
 {
 	int cpu;
-	for_each_cpu_mask_nr(cpu, *mask)
+	for_each_cpu_mask(cpu, *mask)
 		percpu_depopulate(__pdata, cpu);
 }
 
@@ -86,7 +86,7 @@ static int __percpu_populate_mask(void *
 	int cpu;
 
 	cpus_clear(populated);
-	for_each_cpu_mask_nr(cpu, *mask)
+	for_each_cpu_mask(cpu, *mask)
 		if (unlikely(!percpu_populate(__pdata, size, gfp, cpu))) {
 			__percpu_depopulate_mask(__pdata, &populated);
 			return -ENOMEM;
--- struct-cpumasks.orig/mm/quicklist.c
+++ struct-cpumasks/mm/quicklist.c
@@ -42,7 +42,7 @@ static unsigned long max_pages(unsigned 
 
 	max = node_free_pages / FRACTION_OF_NODE_MEM;
 
-	num_cpus_on_node = cpus_weight_nr(*cpumask_on_node);
+	num_cpus_on_node = cpus_weight(*cpumask_on_node);
 	max /= num_cpus_on_node;
 
 	return max(max, min_pages);
--- struct-cpumasks.orig/mm/vmstat.c
+++ struct-cpumasks/mm/vmstat.c
@@ -27,7 +27,7 @@ static void sum_vm_events(unsigned long 
 
 	memset(ret, 0, NR_VM_EVENT_ITEMS * sizeof(unsigned long));
 
-	for_each_cpu_mask_nr(cpu, *cpumask) {
+	for_each_cpu_mask(cpu, *cpumask) {
 		struct vm_event_state *this = &per_cpu(vm_event_states, cpu);
 
 		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
--- struct-cpumasks.orig/net/core/dev.c
+++ struct-cpumasks/net/core/dev.c
@@ -2410,7 +2410,7 @@ out:
 	 */
 	if (!cpus_empty(net_dma.channel_mask)) {
 		int chan_idx;
-		for_each_cpu_mask_nr(chan_idx, net_dma.channel_mask) {
+		for_each_cpu_mask(chan_idx, net_dma.channel_mask) {
 			struct dma_chan *chan = net_dma.channels[chan_idx];
 			if (chan)
 				dma_async_memcpy_issue_pending(chan);
@@ -4552,7 +4552,7 @@ static void net_dma_rebalance(struct net
 	i = 0;
 	cpu = first_cpu(cpu_online_map);
 
-	for_each_cpu_mask_nr(chan_idx, net_dma->channel_mask) {
+	for_each_cpu_mask(chan_idx, net_dma->channel_mask) {
 		chan = net_dma->channels[chan_idx];
 
 		n = ((num_online_cpus() / cpus_weight(net_dma->channel_mask))
--- struct-cpumasks.orig/net/iucv/iucv.c
+++ struct-cpumasks/net/iucv/iucv.c
@@ -497,7 +497,7 @@ static void iucv_setmask_up(void)
 	/* Disable all cpu but the first in cpu_irq_cpumask. */
 	cpumask = iucv_irq_cpumask;
 	cpu_clear(first_cpu(iucv_irq_cpumask), cpumask);
-	for_each_cpu_mask_nr(cpu, cpumask)
+	for_each_cpu_mask(cpu, cpumask)
 		smp_call_function_single(cpu, iucv_block_cpu, NULL, 1);
 }
 

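For reference, every conversion above is a mechanical s/_nr// on the mask
iterators; a minimal sketch of the resulting call pattern (count_set_cpus()
is a made-up example, not code from this series), assuming the base
for_each_cpu_mask() now stops at nr_cpu_ids, which is what makes the _nr
variants redundant:

	#include <linux/cpumask.h>

	/* Hypothetical helper, only to illustrate the s/_nr// change. */
	static int count_set_cpus(const cpumask_t *mask)
	{
		int cpu, count = 0;

		/* was: for_each_cpu_mask_nr(cpu, *mask) */
		for_each_cpu_mask(cpu, *mask)	/* visits set bits 0..nr_cpu_ids-1 */
			count++;

		return count;
	}
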
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 10/31] cpumask: clean cpumask_of_cpu refs
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (8 preceding siblings ...)
  2008-09-29 18:02 ` [PATCH 09/31] cpumask: get rid of _nr functions Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 11/31] cpumask: remove set_cpus_allowed_ptr Mike Travis
                   ` (20 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: cpumask_of_cpu --]
[-- Type: text/plain, Size: 10700 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/acpi/cstate.c                    |    2 +-
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |    6 +++---
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |    8 ++++----
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |    4 ++--
 arch/x86/kernel/cpu/cpufreq/speedstep-ich.c      |    2 +-
 arch/x86/kernel/cpu/intel_cacheinfo.c            |    2 +-
 arch/x86/kernel/microcode_core.c                 |    6 +++---
 arch/x86/kernel/reboot.c                         |    2 +-
 drivers/acpi/processor_throttling.c              |    6 +++---
 drivers/firmware/dcdbas.c                        |    2 +-
 drivers/misc/sgi-xp/xpc_main.c                   |    2 +-
 drivers/xen/manage.c                             |    2 +-
 kernel/time/tick-common.c                        |    2 +-
 kernel/trace/trace_sysprof.c                     |    2 +-
 net/sunrpc/svc.c                                 |    2 +-
 15 files changed, 25 insertions(+), 25 deletions(-)
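
The hunks below only drop the address-of operator at the call sites; a
minimal sketch of the intended usage (probe_on_cpu() is a made-up example,
not code from this series), assuming cpumask_of_cpu(cpu) now evaluates to
something that can be passed directly as a const cpumask argument:

	#include <linux/cpumask.h>
	#include <linux/sched.h>

	/* Hypothetical example: temporarily bind the current task to one CPU. */
	static int probe_on_cpu(unsigned int cpu)
	{
		cpumask_t saved_mask = current->cpus_allowed;
		int ret;

		/* was: set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu)); */
		ret = set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
		if (ret)
			return ret;

		/* ... per-cpu work runs here ... */

		set_cpus_allowed_ptr(current, &saved_mask);
		return 0;
	}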

--- struct-cpumasks.orig/arch/x86/kernel/acpi/cstate.c
+++ struct-cpumasks/arch/x86/kernel/acpi/cstate.c
@@ -91,7 +91,7 @@ int acpi_processor_ffh_cstate_probe(unsi
 
 	/* Make sure we are running on right CPU */
 	saved_mask = current->cpus_allowed;
-	retval = set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	retval = set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 	if (retval)
 		return -1;
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -203,7 +203,7 @@ static void drv_write(struct drv_cmd *cm
 	unsigned int i;
 
 	for_each_cpu_mask(i, cmd->mask) {
-		set_cpus_allowed_ptr(current, &cpumask_of_cpu(i));
+		set_cpus_allowed_ptr(current, cpumask_of_cpu(i));
 		do_drv_write(cmd);
 	}
 
@@ -271,7 +271,7 @@ static unsigned int get_measured_perf(un
 	unsigned int retval;
 
 	saved_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 	if (get_cpu() != cpu) {
 		/* We were not able to run on requested processor */
 		put_cpu();
@@ -349,7 +349,7 @@ static unsigned int get_cur_freq_on_cpu(
 	}
 
 	cached_freq = data->freq_table[data->acpi_data->state].frequency;
-	freq = extract_freq(get_cur_val(&cpumask_of_cpu(cpu)), data);
+	freq = extract_freq(get_cur_val(cpumask_of_cpu(cpu)), data);
 	if (freq != cached_freq) {
 		/*
 		 * The dreaded BIOS frequency change behind our back.
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -480,7 +480,7 @@ static int check_supported_cpu(unsigned 
 	unsigned int rc = 0;
 
 	oldmask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 
 	if (smp_processor_id() != cpu) {
 		printk(KERN_ERR PFX "limiting to cpu %u failed\n", cpu);
@@ -1027,7 +1027,7 @@ static int powernowk8_target(struct cpuf
 
 	/* only run on specific CPU from here on */
 	oldmask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(pol->cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(pol->cpu));
 
 	if (smp_processor_id() != pol->cpu) {
 		printk(KERN_ERR PFX "limiting to cpu %u failed\n", pol->cpu);
@@ -1153,7 +1153,7 @@ static int __cpuinit powernowk8_cpu_init
 
 	/* only run on specific CPU from here on */
 	oldmask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(pol->cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(pol->cpu));
 
 	if (smp_processor_id() != pol->cpu) {
 		printk(KERN_ERR PFX "limiting to cpu %u failed\n", pol->cpu);
@@ -1250,7 +1250,7 @@ static unsigned int powernowk8_get (unsi
 	if (!data)
 		return -EINVAL;
 
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 	if (smp_processor_id() != cpu) {
 		printk(KERN_ERR PFX
 			"limiting to CPU %d failed in powernowk8_get\n", cpu);
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
@@ -419,7 +419,7 @@ static unsigned int get_cur_freq(unsigne
 	cpumask_t saved_mask;
 
 	saved_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 	if (smp_processor_id() != cpu)
 		return 0;
 
@@ -680,7 +680,7 @@ static int centrino_target (struct cpufr
 		if (!cpus_empty(*covered_cpus))
 			for_each_cpu_mask(j, *covered_cpus) {
 				set_cpus_allowed_ptr(current,
-						     &cpumask_of_cpu(j));
+						     cpumask_of_cpu(j));
 				wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
 			}
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
@@ -244,7 +244,7 @@ static unsigned int _speedstep_get(const
 
 static unsigned int speedstep_get(unsigned int cpu)
 {
-	return _speedstep_get(&cpumask_of_cpu(cpu));
+	return _speedstep_get(cpumask_of_cpu(cpu));
 }
 
 /**
--- struct-cpumasks.orig/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ struct-cpumasks/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -550,7 +550,7 @@ static int __cpuinit detect_cache_attrib
 		return -ENOMEM;
 
 	oldmask = current->cpus_allowed;
-	retval = set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	retval = set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 	if (retval)
 		goto out;
 
--- struct-cpumasks.orig/arch/x86/kernel/microcode_core.c
+++ struct-cpumasks/arch/x86/kernel/microcode_core.c
@@ -122,7 +122,7 @@ static int do_microcode_update(const voi
 		if (!uci->valid)
 			continue;
 
-		set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+		set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 		error = microcode_ops->request_microcode_user(cpu, buf, size);
 		if (error < 0)
 			goto out;
@@ -222,7 +222,7 @@ static ssize_t reload_store(struct sys_d
 
 		get_online_cpus();
 		if (cpu_online(cpu)) {
-			set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+			set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 			mutex_lock(&microcode_mutex);
 			if (uci->valid) {
 				err = microcode_ops->request_microcode_fw(cpu,
@@ -351,7 +351,7 @@ static void microcode_init_cpu(int cpu)
 {
 	cpumask_t old = current->cpus_allowed;
 
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 	microcode_update_cpu(cpu);
 	set_cpus_allowed_ptr(current, &old);
 }
--- struct-cpumasks.orig/arch/x86/kernel/reboot.c
+++ struct-cpumasks/arch/x86/kernel/reboot.c
@@ -431,7 +431,7 @@ void native_machine_shutdown(void)
 		reboot_cpu_id = smp_processor_id();
 
 	/* Make certain I only run on the appropriate processor */
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(reboot_cpu_id));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(reboot_cpu_id));
 
 	/* O.K Now that I'm on the appropriate processor,
 	 * stop all of the others.
--- struct-cpumasks.orig/drivers/acpi/processor_throttling.c
+++ struct-cpumasks/drivers/acpi/processor_throttling.c
@@ -838,7 +838,7 @@ static int acpi_processor_get_throttling
 	 * Migrate task to the cpu pointed by pr.
 	 */
 	saved_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(pr->id));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(pr->id));
 	ret = pr->throttling.acpi_processor_get_throttling(pr);
 	/* restore the previous state */
 	set_cpus_allowed_ptr(current, &saved_mask);
@@ -1025,7 +1025,7 @@ int acpi_processor_set_throttling(struct
 	 * it can be called only for the cpu pointed by pr.
 	 */
 	if (p_throttling->shared_type == DOMAIN_COORD_TYPE_SW_ANY) {
-		set_cpus_allowed_ptr(current, &cpumask_of_cpu(pr->id));
+		set_cpus_allowed_ptr(current, cpumask_of_cpu(pr->id));
 		ret = p_throttling->acpi_processor_set_throttling(pr,
 						t_state.target_state);
 	} else {
@@ -1056,7 +1056,7 @@ int acpi_processor_set_throttling(struct
 				continue;
 			}
 			t_state.cpu = i;
-			set_cpus_allowed_ptr(current, &cpumask_of_cpu(i));
+			set_cpus_allowed_ptr(current, cpumask_of_cpu(i));
 			ret = match_pr->throttling.
 				acpi_processor_set_throttling(
 				match_pr, t_state.target_state);
--- struct-cpumasks.orig/drivers/firmware/dcdbas.c
+++ struct-cpumasks/drivers/firmware/dcdbas.c
@@ -255,7 +255,7 @@ static int smi_request(struct smi_cmd *s
 
 	/* SMI requires CPU 0 */
 	old_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(0));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(0));
 	if (smp_processor_id() != 0) {
 		dev_dbg(&dcdbas_pdev->dev, "%s: failed to get CPU 0\n",
 			__func__);
--- struct-cpumasks.orig/drivers/misc/sgi-xp/xpc_main.c
+++ struct-cpumasks/drivers/misc/sgi-xp/xpc_main.c
@@ -318,7 +318,7 @@ xpc_hb_checker(void *ignore)
 
 	/* this thread was marked active by xpc_hb_init() */
 
-	set_cpus_allowed_ptr(current, &cpumask_of_cpu(XPC_HB_CHECK_CPU));
+	set_cpus_allowed_ptr(current, cpumask_of_cpu(XPC_HB_CHECK_CPU));
 
 	/* set our heartbeating to other partitions into motion */
 	xpc_hb_check_timeout = jiffies + (xpc_hb_check_interval * HZ);
--- struct-cpumasks.orig/drivers/xen/manage.c
+++ struct-cpumasks/drivers/xen/manage.c
@@ -102,7 +102,7 @@ static void do_suspend(void)
 	/* XXX use normal device tree? */
 	xenbus_suspend();
 
-	err = stop_machine(xen_suspend, &cancelled, &cpumask_of_cpu(0));
+	err = stop_machine(xen_suspend, &cancelled, cpumask_of_cpu(0));
 	if (err) {
 		printk(KERN_ERR "failed to start xen_suspend: %d\n", err);
 		goto out;
--- struct-cpumasks.orig/kernel/time/tick-common.c
+++ struct-cpumasks/kernel/time/tick-common.c
@@ -254,7 +254,7 @@ static int tick_check_new_device(struct 
 		curdev = NULL;
 	}
 	clockevents_exchange_device(curdev, newdev);
-	tick_setup_device(td, newdev, cpu, &cpumask_of_cpu(cpu));
+	tick_setup_device(td, newdev, cpu, cpumask_of_cpu(cpu));
 	if (newdev->features & CLOCK_EVT_FEAT_ONESHOT)
 		tick_oneshot_notify();
 
--- struct-cpumasks.orig/kernel/trace/trace_sysprof.c
+++ struct-cpumasks/kernel/trace/trace_sysprof.c
@@ -213,7 +213,7 @@ static void start_stack_timers(void)
 	int cpu;
 
 	for_each_online_cpu(cpu) {
-		set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+		set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
 		start_stack_timer(cpu);
 	}
 	set_cpus_allowed_ptr(current, &saved_mask);
--- struct-cpumasks.orig/net/sunrpc/svc.c
+++ struct-cpumasks/net/sunrpc/svc.c
@@ -310,7 +310,7 @@ svc_pool_map_set_cpumask(struct task_str
 	switch (m->mode) {
 	case SVC_POOL_PERCPU:
 	{
-		set_cpus_allowed_ptr(task, &cpumask_of_cpu(node));
+		set_cpus_allowed_ptr(task, cpumask_of_cpu(node));
 		break;
 	}
 	case SVC_POOL_PERNODE:

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 11/31] cpumask: remove set_cpus_allowed_ptr
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (9 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 10/31] cpumask: clean cpumask_of_cpu refs Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 12/31] cpumask: remove CPU_MASK_ALL_PTR Mike Travis
                   ` (19 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: set_cpus_allowed_ptr --]
[-- Type: text/plain, Size: 24804 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/acpi/cstate.c                    |    4 ++--
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |   12 ++++++------
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |   20 ++++++++++----------
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |   12 ++++++------
 arch/x86/kernel/cpu/cpufreq/speedstep-ich.c      |   12 ++++++------
 arch/x86/kernel/cpu/intel_cacheinfo.c            |    4 ++--
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c          |    4 ++--
 arch/x86/kernel/microcode_core.c                 |   12 ++++++------
 arch/x86/kernel/reboot.c                         |    2 +-
 drivers/acpi/processor_throttling.c              |   10 +++++-----
 drivers/firmware/dcdbas.c                        |    4 ++--
 drivers/misc/sgi-xp/xpc_main.c                   |    2 +-
 drivers/pci/pci-driver.c                         |    4 ++--
 include/linux/sched.h                            |   12 ++++--------
 init/main.c                                      |    2 +-
 kernel/cpu.c                                     |    4 ++--
 kernel/cpuset.c                                  |    4 ++--
 kernel/kmod.c                                    |    2 +-
 kernel/kthread.c                                 |    4 ++--
 kernel/rcutorture.c                              |   10 +++++-----
 kernel/sched.c                                   |   12 ++++++------
 kernel/trace/trace_sysprof.c                     |    4 ++--
 mm/pdflush.c                                     |    2 +-
 mm/vmscan.c                                      |    4 ++--
 net/sunrpc/svc.c                                 |    4 ++--
 25 files changed, 81 insertions(+), 85 deletions(-)
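
The hunks below rename set_cpus_allowed_ptr() to set_cpus_allowed() at every
caller and in the declarations; a minimal sketch of the resulting
save/iterate/restore idiom (run_on_each() is a made-up example, not code
from this series), mirroring the drv_write() style conversions:

	#include <linux/cpumask.h>
	#include <linux/sched.h>

	/* Hypothetical example: run a callback on each CPU in a mask. */
	static void run_on_each(const cpumask_t *mask, void (*fn)(void *), void *arg)
	{
		cpumask_t saved_mask = current->cpus_allowed;
		unsigned int cpu;

		for_each_cpu_mask(cpu, *mask) {
			/* was: set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu)); */
			set_cpus_allowed(current, cpumask_of_cpu(cpu));
			fn(arg);
		}

		set_cpus_allowed(current, &saved_mask);	/* restore caller's affinity */
	}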

--- struct-cpumasks.orig/arch/x86/kernel/acpi/cstate.c
+++ struct-cpumasks/arch/x86/kernel/acpi/cstate.c
@@ -91,7 +91,7 @@ int acpi_processor_ffh_cstate_probe(unsi
 
 	/* Make sure we are running on right CPU */
 	saved_mask = current->cpus_allowed;
-	retval = set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	retval = set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (retval)
 		return -1;
 
@@ -128,7 +128,7 @@ int acpi_processor_ffh_cstate_probe(unsi
 		 cx->address);
 
 out:
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 	return retval;
 }
 EXPORT_SYMBOL_GPL(acpi_processor_ffh_cstate_probe);
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -192,9 +192,9 @@ static void drv_read(struct drv_cmd *cmd
 	cpumask_t saved_mask = current->cpus_allowed;
 	cmd->val = 0;
 
-	set_cpus_allowed_ptr(current, &cmd->mask);
+	set_cpus_allowed(current, &cmd->mask);
 	do_drv_read(cmd);
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 }
 
 static void drv_write(struct drv_cmd *cmd)
@@ -203,11 +203,11 @@ static void drv_write(struct drv_cmd *cm
 	unsigned int i;
 
 	for_each_cpu_mask(i, cmd->mask) {
-		set_cpus_allowed_ptr(current, cpumask_of_cpu(i));
+		set_cpus_allowed(current, cpumask_of_cpu(i));
 		do_drv_write(cmd);
 	}
 
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 	return;
 }
 
@@ -271,7 +271,7 @@ static unsigned int get_measured_perf(un
 	unsigned int retval;
 
 	saved_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (get_cpu() != cpu) {
 		/* We were not able to run on requested processor */
 		put_cpu();
@@ -329,7 +329,7 @@ static unsigned int get_measured_perf(un
 	retval = per_cpu(drv_data, cpu)->max_freq * perf_percent / 100;
 
 	put_cpu();
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 
 	dprintk("cpu %d: performance percent %d\n", cpu, perf_percent);
 	return retval;
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -480,7 +480,7 @@ static int check_supported_cpu(unsigned 
 	unsigned int rc = 0;
 
 	oldmask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 
 	if (smp_processor_id() != cpu) {
 		printk(KERN_ERR PFX "limiting to cpu %u failed\n", cpu);
@@ -525,7 +525,7 @@ static int check_supported_cpu(unsigned 
 	rc = 1;
 
 out:
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 	return rc;
 }
 
@@ -1027,7 +1027,7 @@ static int powernowk8_target(struct cpuf
 
 	/* only run on specific CPU from here on */
 	oldmask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(pol->cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(pol->cpu));
 
 	if (smp_processor_id() != pol->cpu) {
 		printk(KERN_ERR PFX "limiting to cpu %u failed\n", pol->cpu);
@@ -1082,7 +1082,7 @@ static int powernowk8_target(struct cpuf
 	ret = 0;
 
 err_out:
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 	return ret;
 }
 
@@ -1153,7 +1153,7 @@ static int __cpuinit powernowk8_cpu_init
 
 	/* only run on specific CPU from here on */
 	oldmask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(pol->cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(pol->cpu));
 
 	if (smp_processor_id() != pol->cpu) {
 		printk(KERN_ERR PFX "limiting to cpu %u failed\n", pol->cpu);
@@ -1172,7 +1172,7 @@ static int __cpuinit powernowk8_cpu_init
 		fidvid_msr_init();
 
 	/* run on any CPU again */
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 
 	if (cpu_family == CPU_HW_PSTATE)
 		pol->cpus = cpumask_of_cpu(pol->cpu);
@@ -1213,7 +1213,7 @@ static int __cpuinit powernowk8_cpu_init
 	return 0;
 
 err_out:
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 	powernow_k8_cpu_exit_acpi(data);
 
 	kfree(data);
@@ -1250,11 +1250,11 @@ static unsigned int powernowk8_get (unsi
 	if (!data)
 		return -EINVAL;
 
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (smp_processor_id() != cpu) {
 		printk(KERN_ERR PFX
 			"limiting to CPU %d failed in powernowk8_get\n", cpu);
-		set_cpus_allowed_ptr(current, &oldmask);
+		set_cpus_allowed(current, &oldmask);
 		return 0;
 	}
 
@@ -1269,7 +1269,7 @@ static unsigned int powernowk8_get (unsi
 
 
 out:
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 	return khz;
 }
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
@@ -419,7 +419,7 @@ static unsigned int get_cur_freq(unsigne
 	cpumask_t saved_mask;
 
 	saved_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (smp_processor_id() != cpu)
 		return 0;
 
@@ -437,7 +437,7 @@ static unsigned int get_cur_freq(unsigne
 		clock_freq = extract_clock(l, cpu, 1);
 	}
 
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 	return clock_freq;
 }
 
@@ -611,7 +611,7 @@ static int centrino_target (struct cpufr
 		else
 			cpu_set(j, *set_mask);
 
-		set_cpus_allowed_ptr(current, set_mask);
+		set_cpus_allowed(current, set_mask);
 		preempt_disable();
 		if (unlikely(!cpu_isset(smp_processor_id(), *set_mask))) {
 			dprintk("couldn't limit to CPUs in this domain\n");
@@ -679,7 +679,7 @@ static int centrino_target (struct cpufr
 
 		if (!cpus_empty(*covered_cpus))
 			for_each_cpu_mask(j, *covered_cpus) {
-				set_cpus_allowed_ptr(current,
+				set_cpus_allowed(current,
 						     cpumask_of_cpu(j));
 				wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
 			}
@@ -693,13 +693,13 @@ static int centrino_target (struct cpufr
 			cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 		}
 	}
-	set_cpus_allowed_ptr(current, saved_mask);
+	set_cpus_allowed(current, saved_mask);
 	retval = 0;
 	goto out;
 
 migrate_end:
 	preempt_enable();
-	set_cpus_allowed_ptr(current, saved_mask);
+	set_cpus_allowed(current, saved_mask);
 out:
 	CPUMASK_FREE(allmasks);
 	return retval;
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
@@ -235,9 +235,9 @@ static unsigned int _speedstep_get(const
 	cpumask_t cpus_allowed;
 
 	cpus_allowed = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpus);
+	set_cpus_allowed(current, cpus);
 	speed = speedstep_get_processor_frequency(speedstep_processor);
-	set_cpus_allowed_ptr(current, &cpus_allowed);
+	set_cpus_allowed(current, &cpus_allowed);
 	dprintk("detected %u kHz as current frequency\n", speed);
 	return speed;
 }
@@ -285,12 +285,12 @@ static int speedstep_target (struct cpuf
 	}
 
 	/* switch to physical CPU where state is to be changed */
-	set_cpus_allowed_ptr(current, &policy->cpus);
+	set_cpus_allowed(current, &policy->cpus);
 
 	speedstep_set_state(newstate);
 
 	/* allow to be run on all CPUs */
-	set_cpus_allowed_ptr(current, &cpus_allowed);
+	set_cpus_allowed(current, &cpus_allowed);
 
 	for_each_cpu_mask(i, policy->cpus) {
 		freqs.cpu = i;
@@ -326,7 +326,7 @@ static int speedstep_cpu_init(struct cpu
 #endif
 
 	cpus_allowed = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, &policy->cpus);
+	set_cpus_allowed(current, &policy->cpus);
 
 	/* detect low and high frequency and transition latency */
 	result = speedstep_get_freqs(speedstep_processor,
@@ -334,7 +334,7 @@ static int speedstep_cpu_init(struct cpu
 				     &speedstep_freqs[SPEEDSTEP_HIGH].frequency,
 				     &policy->cpuinfo.transition_latency,
 				     &speedstep_set_state);
-	set_cpus_allowed_ptr(current, &cpus_allowed);
+	set_cpus_allowed(current, &cpus_allowed);
 	if (result)
 		return result;
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ struct-cpumasks/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -550,7 +550,7 @@ static int __cpuinit detect_cache_attrib
 		return -ENOMEM;
 
 	oldmask = current->cpus_allowed;
-	retval = set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	retval = set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (retval)
 		goto out;
 
@@ -567,7 +567,7 @@ static int __cpuinit detect_cache_attrib
 		}
 		cache_shared_cpu_map_setup(cpu, j);
 	}
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 
 out:
 	if (retval) {
--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -257,12 +257,12 @@ static void affinity_set(unsigned int cp
 	*oldmask = current->cpus_allowed;
 	cpus_clear(*newmask);
 	cpu_set(cpu, *newmask);
-	set_cpus_allowed_ptr(current, newmask);
+	set_cpus_allowed(current, newmask);
 }
 
 static void affinity_restore(const cpumask_t *oldmask)
 {
-	set_cpus_allowed_ptr(current, oldmask);
+	set_cpus_allowed(current, oldmask);
 }
 
 #define SHOW_FIELDS(name)                                           \
--- struct-cpumasks.orig/arch/x86/kernel/microcode_core.c
+++ struct-cpumasks/arch/x86/kernel/microcode_core.c
@@ -122,7 +122,7 @@ static int do_microcode_update(const voi
 		if (!uci->valid)
 			continue;
 
-		set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+		set_cpus_allowed(current, cpumask_of_cpu(cpu));
 		error = microcode_ops->request_microcode_user(cpu, buf, size);
 		if (error < 0)
 			goto out;
@@ -130,7 +130,7 @@ static int do_microcode_update(const voi
 			microcode_ops->apply_microcode(cpu);
 	}
 out:
-	set_cpus_allowed_ptr(current, &old);
+	set_cpus_allowed(current, &old);
 	return error;
 }
 
@@ -222,7 +222,7 @@ static ssize_t reload_store(struct sys_d
 
 		get_online_cpus();
 		if (cpu_online(cpu)) {
-			set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+			set_cpus_allowed(current, cpumask_of_cpu(cpu));
 			mutex_lock(&microcode_mutex);
 			if (uci->valid) {
 				err = microcode_ops->request_microcode_fw(cpu,
@@ -231,7 +231,7 @@ static ssize_t reload_store(struct sys_d
 					microcode_ops->apply_microcode(cpu);
 			}
 			mutex_unlock(&microcode_mutex);
-			set_cpus_allowed_ptr(current, &old);
+			set_cpus_allowed(current, &old);
 		}
 		put_online_cpus();
 	}
@@ -351,9 +351,9 @@ static void microcode_init_cpu(int cpu)
 {
 	cpumask_t old = current->cpus_allowed;
 
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	microcode_update_cpu(cpu);
-	set_cpus_allowed_ptr(current, &old);
+	set_cpus_allowed(current, &old);
 }
 
 static int mc_sysdev_add(struct sys_device *sys_dev)
--- struct-cpumasks.orig/arch/x86/kernel/reboot.c
+++ struct-cpumasks/arch/x86/kernel/reboot.c
@@ -431,7 +431,7 @@ void native_machine_shutdown(void)
 		reboot_cpu_id = smp_processor_id();
 
 	/* Make certain I only run on the appropriate processor */
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(reboot_cpu_id));
+	set_cpus_allowed(current, cpumask_of_cpu(reboot_cpu_id));
 
 	/* O.K Now that I'm on the appropriate processor,
 	 * stop all of the others.
--- struct-cpumasks.orig/drivers/acpi/processor_throttling.c
+++ struct-cpumasks/drivers/acpi/processor_throttling.c
@@ -838,10 +838,10 @@ static int acpi_processor_get_throttling
 	 * Migrate task to the cpu pointed by pr.
 	 */
 	saved_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(pr->id));
+	set_cpus_allowed(current, cpumask_of_cpu(pr->id));
 	ret = pr->throttling.acpi_processor_get_throttling(pr);
 	/* restore the previous state */
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 
 	return ret;
 }
@@ -1025,7 +1025,7 @@ int acpi_processor_set_throttling(struct
 	 * it can be called only for the cpu pointed by pr.
 	 */
 	if (p_throttling->shared_type == DOMAIN_COORD_TYPE_SW_ANY) {
-		set_cpus_allowed_ptr(current, cpumask_of_cpu(pr->id));
+		set_cpus_allowed(current, cpumask_of_cpu(pr->id));
 		ret = p_throttling->acpi_processor_set_throttling(pr,
 						t_state.target_state);
 	} else {
@@ -1056,7 +1056,7 @@ int acpi_processor_set_throttling(struct
 				continue;
 			}
 			t_state.cpu = i;
-			set_cpus_allowed_ptr(current, cpumask_of_cpu(i));
+			set_cpus_allowed(current, cpumask_of_cpu(i));
 			ret = match_pr->throttling.
 				acpi_processor_set_throttling(
 				match_pr, t_state.target_state);
@@ -1074,7 +1074,7 @@ int acpi_processor_set_throttling(struct
 							&t_state);
 	}
 	/* restore the previous state */
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 	return ret;
 }
 
--- struct-cpumasks.orig/drivers/firmware/dcdbas.c
+++ struct-cpumasks/drivers/firmware/dcdbas.c
@@ -255,7 +255,7 @@ static int smi_request(struct smi_cmd *s
 
 	/* SMI requires CPU 0 */
 	old_mask = current->cpus_allowed;
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(0));
+	set_cpus_allowed(current, cpumask_of_cpu(0));
 	if (smp_processor_id() != 0) {
 		dev_dbg(&dcdbas_pdev->dev, "%s: failed to get CPU 0\n",
 			__func__);
@@ -275,7 +275,7 @@ static int smi_request(struct smi_cmd *s
 	);
 
 out:
-	set_cpus_allowed_ptr(current, &old_mask);
+	set_cpus_allowed(current, &old_mask);
 	return ret;
 }
 
--- struct-cpumasks.orig/drivers/misc/sgi-xp/xpc_main.c
+++ struct-cpumasks/drivers/misc/sgi-xp/xpc_main.c
@@ -318,7 +318,7 @@ xpc_hb_checker(void *ignore)
 
 	/* this thread was marked active by xpc_hb_init() */
 
-	set_cpus_allowed_ptr(current, cpumask_of_cpu(XPC_HB_CHECK_CPU));
+	set_cpus_allowed(current, cpumask_of_cpu(XPC_HB_CHECK_CPU));
 
 	/* set our heartbeating to other partitions into motion */
 	xpc_hb_check_timeout = jiffies + (xpc_hb_check_interval * HZ);
--- struct-cpumasks.orig/drivers/pci/pci-driver.c
+++ struct-cpumasks/drivers/pci/pci-driver.c
@@ -185,7 +185,7 @@ static int pci_call_probe(struct pci_dri
 
 	if (node >= 0) {
 		node_to_cpumask_ptr(nodecpumask, node);
-		set_cpus_allowed_ptr(current, nodecpumask);
+		set_cpus_allowed(current, nodecpumask);
 	}
 	/* And set default memory allocation policy */
 	oldpol = current->mempolicy;
@@ -193,7 +193,7 @@ static int pci_call_probe(struct pci_dri
 #endif
 	error = drv->probe(dev, id);
 #ifdef CONFIG_NUMA
-	set_cpus_allowed_ptr(current, &oldmask);
+	set_cpus_allowed(current, &oldmask);
 	current->mempolicy = oldpol;
 #endif
 	return error;
--- struct-cpumasks.orig/include/linux/sched.h
+++ struct-cpumasks/include/linux/sched.h
@@ -960,7 +960,7 @@ struct sched_class {
 	void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
 	void (*task_new) (struct rq *rq, struct task_struct *p);
 	void (*set_cpus_allowed)(struct task_struct *p,
-				 const cpumask_t *newmask);
+				 const cpumask_t newmask);
 
 	void (*rq_online)(struct rq *rq);
 	void (*rq_offline)(struct rq *rq);
@@ -1582,21 +1582,17 @@ extern cputime_t task_gtime(struct task_
 #define used_math() tsk_used_math(current)
 
 #ifdef CONFIG_SMP
-extern int set_cpus_allowed_ptr(struct task_struct *p,
+extern int set_cpus_allowed(struct task_struct *p,
 				const cpumask_t new_mask);
 #else
-static inline int set_cpus_allowed_ptr(struct task_struct *p,
+static inline int set_cpus_allowed(struct task_struct *p,
 				       const cpumask_t new_mask)
 {
-	if (!cpu_isset(0, *new_mask))
+	if (!cpu_isset(0, new_mask))
 		return -EINVAL;
 	return 0;
 }
 #endif
-static inline int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
-{
-	return set_cpus_allowed_ptr(p, new_mask);
-}
 
 extern unsigned long long sched_clock(void);
 
--- struct-cpumasks.orig/init/main.c
+++ struct-cpumasks/init/main.c
@@ -937,7 +937,7 @@ static int __init kernel_init(void * unu
 	/*
 	 * init can run on any cpu.
 	 */
-	set_cpus_allowed_ptr(current, CPU_MASK_ALL_PTR);
+	set_cpus_allowed(current, CPU_MASK_ALL_PTR);
 	/*
 	 * Tell the world that we're going to be the grim
 	 * reaper of innocent orphaned children.
--- struct-cpumasks.orig/kernel/cpu.c
+++ struct-cpumasks/kernel/cpu.c
@@ -253,7 +253,7 @@ static int __ref _cpu_down(unsigned int 
 	old_allowed = current->cpus_allowed;
 	cpus_setall(tmp);
 	cpu_clear(cpu, tmp);
-	set_cpus_allowed_ptr(current, &tmp);
+	set_cpus_allowed(current, &tmp);
 	tmp = cpumask_of_cpu(cpu);
 
 	err = __stop_machine(take_cpu_down, &tcd_param, &tmp);
@@ -282,7 +282,7 @@ static int __ref _cpu_down(unsigned int 
 	check_for_tasks(cpu);
 
 out_allowed:
-	set_cpus_allowed_ptr(current, &old_allowed);
+	set_cpus_allowed(current, &old_allowed);
 out_release:
 	cpu_hotplug_done();
 	if (!err) {
--- struct-cpumasks.orig/kernel/cpuset.c
+++ struct-cpumasks/kernel/cpuset.c
@@ -837,7 +837,7 @@ static int cpuset_test_cpumask(struct ta
 static void cpuset_change_cpumask(struct task_struct *tsk,
 				  struct cgroup_scanner *scan)
 {
-	set_cpus_allowed_ptr(tsk, &((cgroup_cs(scan->cg))->cpus_allowed));
+	set_cpus_allowed(tsk, &((cgroup_cs(scan->cg))->cpus_allowed));
 }
 
 /**
@@ -1330,7 +1330,7 @@ static void cpuset_attach(struct cgroup_
 
 	mutex_lock(&callback_mutex);
 	guarantee_online_cpus(cs, &cpus);
-	err = set_cpus_allowed_ptr(tsk, &cpus);
+	err = set_cpus_allowed(tsk, &cpus);
 	mutex_unlock(&callback_mutex);
 	if (err)
 		return;
--- struct-cpumasks.orig/kernel/kmod.c
+++ struct-cpumasks/kernel/kmod.c
@@ -166,7 +166,7 @@ static int ____call_usermodehelper(void 
 	}
 
 	/* We can run anywhere, unlike our parent keventd(). */
-	set_cpus_allowed_ptr(current, CPU_MASK_ALL_PTR);
+	set_cpus_allowed(current, CPU_MASK_ALL_PTR);
 
 	/*
 	 * Our parent is keventd, which runs with elevated scheduling priority.
--- struct-cpumasks.orig/kernel/kthread.c
+++ struct-cpumasks/kernel/kthread.c
@@ -107,7 +107,7 @@ static void create_kthread(struct kthrea
 		 */
 		sched_setscheduler(create->result, SCHED_NORMAL, &param);
 		set_user_nice(create->result, KTHREAD_NICE_LEVEL);
-		set_cpus_allowed_ptr(create->result, CPU_MASK_ALL_PTR);
+		set_cpus_allowed(create->result, CPU_MASK_ALL_PTR);
 	}
 	complete(&create->done);
 }
@@ -238,7 +238,7 @@ int kthreadd(void *unused)
 	set_task_comm(tsk, "kthreadd");
 	ignore_signals(tsk);
 	set_user_nice(tsk, KTHREAD_NICE_LEVEL);
-	set_cpus_allowed_ptr(tsk, CPU_MASK_ALL_PTR);
+	set_cpus_allowed(tsk, CPU_MASK_ALL_PTR);
 
 	current->flags |= PF_NOFREEZE | PF_FREEZER_NOSIG;
 
--- struct-cpumasks.orig/kernel/rcutorture.c
+++ struct-cpumasks/kernel/rcutorture.c
@@ -858,27 +858,27 @@ static void rcu_torture_shuffle_tasks(vo
 	if (rcu_idle_cpu != -1)
 		cpu_clear(rcu_idle_cpu, tmp_mask);
 
-	set_cpus_allowed_ptr(current, &tmp_mask);
+	set_cpus_allowed(current, &tmp_mask);
 
 	if (reader_tasks) {
 		for (i = 0; i < nrealreaders; i++)
 			if (reader_tasks[i])
-				set_cpus_allowed_ptr(reader_tasks[i],
+				set_cpus_allowed(reader_tasks[i],
 						     &tmp_mask);
 	}
 
 	if (fakewriter_tasks) {
 		for (i = 0; i < nfakewriters; i++)
 			if (fakewriter_tasks[i])
-				set_cpus_allowed_ptr(fakewriter_tasks[i],
+				set_cpus_allowed(fakewriter_tasks[i],
 						     &tmp_mask);
 	}
 
 	if (writer_task)
-		set_cpus_allowed_ptr(writer_task, &tmp_mask);
+		set_cpus_allowed(writer_task, &tmp_mask);
 
 	if (stats_task)
-		set_cpus_allowed_ptr(stats_task, &tmp_mask);
+		set_cpus_allowed(stats_task, &tmp_mask);
 
 	if (rcu_idle_cpu == -1)
 		rcu_idle_cpu = num_online_cpus() - 1;
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -5453,7 +5453,7 @@ long sched_setaffinity(pid_t pid, const 
 	cpuset_cpus_allowed(p, &cpus_allowed);
 	cpus_and(new_mask, new_mask, cpus_allowed);
  again:
-	retval = set_cpus_allowed_ptr(p, &new_mask);
+	retval = set_cpus_allowed(p, &new_mask);
 
 	if (!retval) {
 		cpuset_cpus_allowed(p, &cpus_allowed);
@@ -5970,7 +5970,7 @@ static inline void sched_init_granularit
  * task must not exit() & deallocate itself prematurely. The
  * call is not atomic; no spinlocks may be held.
  */
-int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
+int set_cpus_allowed(struct task_struct *p, const cpumask_t new_mask)
 {
 	struct migration_req req;
 	unsigned long flags;
@@ -5978,13 +5978,13 @@ int set_cpus_allowed_ptr(struct task_str
 	int ret = 0;
 
 	rq = task_rq_lock(p, &flags);
-	if (!cpus_intersects(*new_mask, cpu_online_map)) {
+	if (!cpus_intersects(new_mask, cpu_online_map)) {
 		ret = -EINVAL;
 		goto out;
 	}
 
 	if (unlikely((p->flags & PF_THREAD_BOUND) && p != current &&
-		     !cpus_equal(p->cpus_allowed, *new_mask))) {
+		     !cpus_equal(p->cpus_allowed, new_mask))) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -6013,7 +6013,7 @@ out:
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
+EXPORT_SYMBOL_GPL(set_cpus_allowed);
 
 /*
  * Move (not current) task off this cpu, onto dest cpu. We're doing
@@ -8011,7 +8011,7 @@ void __init sched_init_smp(void)
 	init_hrtick();
 
 	/* Move init over to a non-isolated CPU */
-	if (set_cpus_allowed_ptr(current, &non_isolated_cpus) < 0)
+	if (set_cpus_allowed(current, &non_isolated_cpus) < 0)
 		BUG();
 	sched_init_granularity();
 }
--- struct-cpumasks.orig/kernel/trace/trace_sysprof.c
+++ struct-cpumasks/kernel/trace/trace_sysprof.c
@@ -213,10 +213,10 @@ static void start_stack_timers(void)
 	int cpu;
 
 	for_each_online_cpu(cpu) {
-		set_cpus_allowed_ptr(current, cpumask_of_cpu(cpu));
+		set_cpus_allowed(current, cpumask_of_cpu(cpu));
 		start_stack_timer(cpu);
 	}
-	set_cpus_allowed_ptr(current, &saved_mask);
+	set_cpus_allowed(current, &saved_mask);
 }
 
 static void stop_stack_timer(int cpu)
--- struct-cpumasks.orig/mm/pdflush.c
+++ struct-cpumasks/mm/pdflush.c
@@ -188,7 +188,7 @@ static int pdflush(void *dummy)
 	 * The boottime pdflush's are easily placed w/o these 2 lines.
 	 */
 	cpuset_cpus_allowed(current, &cpus_allowed);
-	set_cpus_allowed_ptr(current, &cpus_allowed);
+	set_cpus_allowed(current, &cpus_allowed);
 
 	return __pdflush(&my_work);
 }
--- struct-cpumasks.orig/mm/vmscan.c
+++ struct-cpumasks/mm/vmscan.c
@@ -1690,7 +1690,7 @@ static int kswapd(void *p)
 	node_to_cpumask_ptr(cpumask, pgdat->node_id);
 
 	if (!cpus_empty(*cpumask))
-		set_cpus_allowed_ptr(tsk, cpumask);
+		set_cpus_allowed(tsk, cpumask);
 	current->reclaim_state = &reclaim_state;
 
 	/*
@@ -1928,7 +1928,7 @@ static int __devinit cpu_callback(struct
 
 			if (any_online_cpu(*mask) < nr_cpu_ids)
 				/* One of our CPUs online: restore mask */
-				set_cpus_allowed_ptr(pgdat->kswapd, mask);
+				set_cpus_allowed(pgdat->kswapd, mask);
 		}
 	}
 	return NOTIFY_OK;
--- struct-cpumasks.orig/net/sunrpc/svc.c
+++ struct-cpumasks/net/sunrpc/svc.c
@@ -310,13 +310,13 @@ svc_pool_map_set_cpumask(struct task_str
 	switch (m->mode) {
 	case SVC_POOL_PERCPU:
 	{
-		set_cpus_allowed_ptr(task, cpumask_of_cpu(node));
+		set_cpus_allowed(task, cpumask_of_cpu(node));
 		break;
 	}
 	case SVC_POOL_PERNODE:
 	{
 		node_to_cpumask_ptr(nodecpumask, node);
-		set_cpus_allowed_ptr(task, nodecpumask);
+		set_cpus_allowed(task, nodecpumask);
 		break;
 	}
 	}

-- 

* [PATCH 12/31] cpumask: remove CPU_MASK_ALL_PTR
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (10 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 11/31] cpumask: remove set_cpus_allowed_ptr Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 13/31] cpumask: modify for_each_cpu_mask Mike Travis
                   ` (18 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: CPU_MASK_ALL_PTR --]
[-- Type: text/plain, Size: 2972 bytes --]
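
The conversion direction, for reviewers scanning the hunks: call sites that
passed the CPU_MASK_ALL_PTR macro now name a single all-CPUs mask object,
cpu_mask_all.  Below is a minimal stand-alone sketch of that call-site change
only; the toy cpumask type, the cpu_mask_all initializer and the
set_cpus_allowed() body are illustrative stand-ins rather than the
cpumask-base definitions, and the sketch keeps an explicit '&' that the new
cpumask_t is meant to hide.

#include <stdio.h>
#include <string.h>

#define NR_CPUS 64

/* toy cpumask: one word standing in for the kernel bitmap */
typedef struct cpumask { unsigned long bits[(NR_CPUS + 63) / 64]; } cpumask_t;

/* named all-CPUs mask object replacing the CPU_MASK_ALL_PTR macro */
static const cpumask_t cpu_mask_all = { { ~0UL } };

struct task { cpumask_t cpus_allowed; };

/* model only: record the mask; the real set_cpus_allowed() also migrates */
static int set_cpus_allowed(struct task *p, const cpumask_t *new_mask)
{
	memcpy(&p->cpus_allowed, new_mask, sizeof(p->cpus_allowed));
	return 0;
}

int main(void)
{
	struct task kthreadd_task = { { { 0 } } };

	/* was: set_cpus_allowed_ptr(tsk, CPU_MASK_ALL_PTR); */
	set_cpus_allowed(&kthreadd_task, &cpu_mask_all);
	printf("allowed mask word 0: %#lx\n", kthreadd_task.cpus_allowed.bits[0]);
	return 0;
}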

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c |    4 ++--
 include/asm-generic/topology.h          |    2 +-
 init/main.c                             |    2 +-
 kernel/kmod.c                           |    2 +-
 kernel/kthread.c                        |    4 ++--
 kernel/sched.c                          |    2 +-
 6 files changed, 8 insertions(+), 8 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -512,9 +512,9 @@ static __cpuinit int threshold_create_ba
 		goto out_free;
 
 #ifndef CONFIG_SMP
-	b->cpus = CPU_MASK_ALL;
+	cpus_copy(b->cpus, CPU_MASK_ALL);
 #else
-	b->cpus = per_cpu(cpu_core_map, cpu);
+	cpus_copy(b->cpus, per_cpu(cpu_core_map, cpu));
 #endif
 
 	per_cpu(threshold_banks, cpu)[bank] = b;
--- struct-cpumasks.orig/include/asm-generic/topology.h
+++ struct-cpumasks/include/asm-generic/topology.h
@@ -49,7 +49,7 @@
 
 #ifndef pcibus_to_cpumask
 #define pcibus_to_cpumask(bus)	(pcibus_to_node(bus) == -1 ? \
-					CPU_MASK_ALL : \
+					cpu_mask_all : \
 					node_to_cpumask(pcibus_to_node(bus)) \
 				)
 #endif
--- struct-cpumasks.orig/init/main.c
+++ struct-cpumasks/init/main.c
@@ -937,7 +937,7 @@ static int __init kernel_init(void * unu
 	/*
 	 * init can run on any cpu.
 	 */
-	set_cpus_allowed(current, CPU_MASK_ALL_PTR);
+	set_cpus_allowed(current, cpu_mask_all);
 	/*
 	 * Tell the world that we're going to be the grim
 	 * reaper of innocent orphaned children.
--- struct-cpumasks.orig/kernel/kmod.c
+++ struct-cpumasks/kernel/kmod.c
@@ -166,7 +166,7 @@ static int ____call_usermodehelper(void 
 	}
 
 	/* We can run anywhere, unlike our parent keventd(). */
-	set_cpus_allowed(current, CPU_MASK_ALL_PTR);
+	set_cpus_allowed(current, cpu_mask_all);
 
 	/*
 	 * Our parent is keventd, which runs with elevated scheduling priority.
--- struct-cpumasks.orig/kernel/kthread.c
+++ struct-cpumasks/kernel/kthread.c
@@ -107,7 +107,7 @@ static void create_kthread(struct kthrea
 		 */
 		sched_setscheduler(create->result, SCHED_NORMAL, &param);
 		set_user_nice(create->result, KTHREAD_NICE_LEVEL);
-		set_cpus_allowed(create->result, CPU_MASK_ALL_PTR);
+		set_cpus_allowed(create->result, cpu_mask_all);
 	}
 	complete(&create->done);
 }
@@ -238,7 +238,7 @@ int kthreadd(void *unused)
 	set_task_comm(tsk, "kthreadd");
 	ignore_signals(tsk);
 	set_user_nice(tsk, KTHREAD_NICE_LEVEL);
-	set_cpus_allowed(tsk, CPU_MASK_ALL_PTR);
+	set_cpus_allowed(tsk, cpu_mask_all);
 
 	current->flags |= PF_NOFREEZE | PF_FREEZER_NOSIG;
 
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -6195,7 +6195,7 @@ static void move_task_off_dead_cpu(int d
  */
 static void migrate_nr_uninterruptible(struct rq *rq_src)
 {
-	struct rq *rq_dest = cpu_rq(any_online_cpu(*CPU_MASK_ALL_PTR));
+	struct rq *rq_dest = cpu_rq(any_online_cpu(cpu_mask_all));
 	unsigned long flags;
 
 	local_irq_save(flags);

-- 

* [PATCH 13/31] cpumask: modify for_each_cpu_mask
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (11 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 12/31] cpumask: remove CPU_MASK_ALL_PTR Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 14/31] cpumask: change first/next_cpu to cpus_first/next Mike Travis
                   ` (17 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: for_each_cpu_mask --]
[-- Type: text/plain, Size: 34910 bytes --]
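
For reference, the semantics the converted call sites rely on: the new
for_each_cpu(cpu, mask) visits each set bit of the mask in ascending order and
stops at nr_cpu_ids.  The stand-alone sketch below models only that behaviour;
the single-word cpumask type and the macro body are illustrative stand-ins,
not the definitions from the cpumask-base patch, and the model passes the mask
object directly as most of the hunks below now do.

#include <stdio.h>

#define NR_CPUS 8
static const int nr_cpu_ids = NR_CPUS;

/* toy cpumask: one word is enough for this model */
typedef struct cpumask { unsigned long bits; } cpumask_t;

/*
 * Model of the new iterator: visits every set bit below nr_cpu_ids in
 * ascending order.  Note the mask is named directly, with no '*'.
 */
#define for_each_cpu(cpu, mask)					\
	for ((cpu) = 0; (cpu) < nr_cpu_ids; (cpu)++)		\
		if (!((mask).bits & (1UL << (cpu))))		\
			continue;				\
		else

int main(void)
{
	cpumask_t policy_cpus = { 0x2d };	/* cpus 0, 2, 3 and 5 */
	int cpu;

	for_each_cpu(cpu, policy_cpus)
		printf("notify cpu %d\n", cpu);
	return 0;
}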

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |    6 +--
 arch/x86/kernel/cpu/cpufreq/p4-clockmod.c        |    6 +--
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |    8 ++--
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |   10 ++---
 arch/x86/kernel/cpu/cpufreq/speedstep-ich.c      |    4 +-
 arch/x86/kernel/cpu/intel_cacheinfo.c            |    2 -
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c          |    4 +-
 arch/x86/kernel/genx2apic_cluster.c              |    2 -
 arch/x86/kernel/genx2apic_phys.c                 |    2 -
 arch/x86/kernel/io_apic.c                        |    4 +-
 arch/x86/kernel/smpboot.c                        |    8 ++--
 arch/x86/kernel/tlb_uv.c                         |    4 +-
 arch/x86/mm/mmio-mod.c                           |    4 +-
 arch/x86/xen/smp.c                               |    4 +-
 drivers/acpi/processor_throttling.c              |    6 +--
 drivers/cpufreq/cpufreq.c                        |   14 ++++----
 drivers/cpufreq/cpufreq_conservative.c           |    2 -
 drivers/cpufreq/cpufreq_ondemand.c               |    4 +-
 include/asm-x86/ipi.h                            |    4 +-
 kernel/cpu.c                                     |    2 -
 kernel/rcuclassic.c                              |    4 +-
 kernel/rcupreempt.c                              |   10 ++---
 kernel/sched.c                                   |   40 +++++++++++------------
 kernel/sched_fair.c                              |    2 -
 kernel/sched_rt.c                                |    8 ++--
 kernel/smp.c                                     |    2 -
 kernel/taskstats.c                               |    4 +-
 kernel/time/tick-broadcast.c                     |    4 +-
 kernel/trace/trace.c                             |   14 ++++----
 kernel/trace/trace_boot.c                        |    2 -
 kernel/workqueue.c                               |    6 +--
 mm/allocpercpu.c                                 |    4 +-
 mm/vmstat.c                                      |    2 -
 net/core/dev.c                                   |    4 +-
 net/iucv/iucv.c                                  |    2 -
 35 files changed, 104 insertions(+), 104 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -202,7 +202,7 @@ static void drv_write(struct drv_cmd *cm
 	cpumask_t saved_mask = current->cpus_allowed;
 	unsigned int i;
 
-	for_each_cpu_mask(i, cmd->mask) {
+	for_each_cpu(i, cmd->mask) {
 		set_cpus_allowed(current, cpumask_of_cpu(i));
 		do_drv_write(cmd);
 	}
@@ -451,7 +451,7 @@ static int acpi_cpufreq_target(struct cp
 
 	freqs.old = perf->states[perf->state].core_frequency * 1000;
 	freqs.new = data->freq_table[next_state].frequency;
-	for_each_cpu_mask(i, cmd.mask) {
+	for_each_cpu(i, cmd.mask) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -466,7 +466,7 @@ static int acpi_cpufreq_target(struct cp
 		}
 	}
 
-	for_each_cpu_mask(i, cmd.mask) {
+	for_each_cpu(i, cmd.mask) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
@@ -122,7 +122,7 @@ static int cpufreq_p4_target(struct cpuf
 		return 0;
 
 	/* notifiers */
-	for_each_cpu_mask(i, policy->cpus) {
+	for_each_cpu(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -130,11 +130,11 @@ static int cpufreq_p4_target(struct cpuf
 	/* run on each logical CPU, see section 13.15.3 of IA32 Intel Architecture Software
 	 * Developer's Manual, Volume 3
 	 */
-	for_each_cpu_mask(i, policy->cpus)
+	for_each_cpu(i, policy->cpus)
 		cpufreq_p4_setdc(i, p4clockmod_table[newstate].index);
 
 	/* notifiers */
-	for_each_cpu_mask(i, policy->cpus) {
+	for_each_cpu(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -963,7 +963,7 @@ static int transition_frequency_fidvid(s
 	freqs.old = find_khz_freq_from_fid(data->currfid);
 	freqs.new = find_khz_freq_from_fid(fid);
 
-	for_each_cpu_mask(i, *(data->available_cores)) {
+	for_each_cpu(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -971,7 +971,7 @@ static int transition_frequency_fidvid(s
 	res = transition_fid_vid(data, fid, vid);
 	freqs.new = find_khz_freq_from_fid(data->currfid);
 
-	for_each_cpu_mask(i, *(data->available_cores)) {
+	for_each_cpu(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -994,7 +994,7 @@ static int transition_frequency_pstate(s
 	freqs.old = find_khz_freq_from_pstate(data->powernow_table, data->currpstate);
 	freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate);
 
-	for_each_cpu_mask(i, *(data->available_cores)) {
+	for_each_cpu(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -1002,7 +1002,7 @@ static int transition_frequency_pstate(s
 	res = transition_pstate(data, pstate);
 	freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate);
 
-	for_each_cpu_mask(i, *(data->available_cores)) {
+	for_each_cpu(i, *(data->available_cores)) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
@@ -600,7 +600,7 @@ static int centrino_target (struct cpufr
 	*saved_mask = current->cpus_allowed;
 	first_cpu = 1;
 	cpus_clear(*covered_cpus);
-	for_each_cpu_mask(j, *online_policy_cpus) {
+	for_each_cpu(j, *online_policy_cpus) {
 		/*
 		 * Support for SMP systems.
 		 * Make sure we are running on CPU that wants to change freq
@@ -641,7 +641,7 @@ static int centrino_target (struct cpufr
 			dprintk("target=%dkHz old=%d new=%d msr=%04x\n",
 				target_freq, freqs.old, freqs.new, msr);
 
-			for_each_cpu_mask(k, *online_policy_cpus) {
+			for_each_cpu(k, *online_policy_cpus) {
 				freqs.cpu = k;
 				cpufreq_notify_transition(&freqs,
 					CPUFREQ_PRECHANGE);
@@ -664,7 +664,7 @@ static int centrino_target (struct cpufr
 		preempt_enable();
 	}
 
-	for_each_cpu_mask(k, *online_policy_cpus) {
+	for_each_cpu(k, *online_policy_cpus) {
 		freqs.cpu = k;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -678,7 +678,7 @@ static int centrino_target (struct cpufr
 		 */
 
 		if (!cpus_empty(*covered_cpus))
-			for_each_cpu_mask(j, *covered_cpus) {
+			for_each_cpu(j, *covered_cpus) {
 				set_cpus_allowed(current,
 						     cpumask_of_cpu(j));
 				wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
@@ -687,7 +687,7 @@ static int centrino_target (struct cpufr
 		tmp = freqs.new;
 		freqs.new = freqs.old;
 		freqs.old = tmp;
-		for_each_cpu_mask(j, *online_policy_cpus) {
+		for_each_cpu(j, *online_policy_cpus) {
 			freqs.cpu = j;
 			cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 			cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
@@ -279,7 +279,7 @@ static int speedstep_target (struct cpuf
 
 	cpus_allowed = current->cpus_allowed;
 
-	for_each_cpu_mask(i, policy->cpus) {
+	for_each_cpu(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -292,7 +292,7 @@ static int speedstep_target (struct cpuf
 	/* allow to be run on all CPUs */
 	set_cpus_allowed(current, &cpus_allowed);
 
-	for_each_cpu_mask(i, policy->cpus) {
+	for_each_cpu(i, policy->cpus) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ struct-cpumasks/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -513,7 +513,7 @@ static void __cpuinit cache_remove_share
 	int sibling;
 
 	this_leaf = CPUID4_INFO_IDX(cpu, index);
-	for_each_cpu_mask(sibling, this_leaf->shared_cpu_map) {
+	for_each_cpu(sibling, this_leaf->shared_cpu_map) {
 		sibling_leaf = CPUID4_INFO_IDX(sibling, index);
 		cpu_clear(cpu, sibling_leaf->shared_cpu_map);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -527,7 +527,7 @@ static __cpuinit int threshold_create_ba
 	if (err)
 		goto out_free;
 
-	for_each_cpu_mask(i, b->cpus) {
+	for_each_cpu(i, b->cpus) {
 		if (i == cpu)
 			continue;
 
@@ -617,7 +617,7 @@ static void threshold_remove_bank(unsign
 #endif
 
 	/* remove all sibling symlinks before unregistering */
-	for_each_cpu_mask(i, b->cpus) {
+	for_each_cpu(i, b->cpus) {
 		if (i == cpu)
 			continue;
 
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_cluster.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_cluster.c
@@ -61,7 +61,7 @@ static void x2apic_send_IPI_mask(const c
 	unsigned long query_cpu;
 
 	local_irq_save(flags);
-	for_each_cpu_mask_nr(query_cpu, *mask)
+	for_each_cpu(query_cpu, mask)
 		__x2apic_send_IPI_dest(
 			per_cpu(x86_cpu_to_logical_apicid, query_cpu),
 			vector, APIC_DEST_LOGICAL);
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_phys.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_phys.c
@@ -59,7 +59,7 @@ static void x2apic_send_IPI_mask(const c
 	unsigned long query_cpu;
 
 	local_irq_save(flags);
-	for_each_cpu_mask_nr(query_cpu, *mask) {
+	for_each_cpu(query_cpu, mask) {
 		__x2apic_send_IPI_dest(per_cpu(x86_cpu_to_apicid, query_cpu),
 				       vector, APIC_DEST_PHYSICAL);
 	}
--- struct-cpumasks.orig/arch/x86/kernel/io_apic.c
+++ struct-cpumasks/arch/x86/kernel/io_apic.c
@@ -1241,7 +1241,7 @@ static int __assign_irq_vector(int irq, 
 		int new_cpu;
 		int vector, offset;
 
-		vector_allocation_domain(cpu, &tmpmask);
+		vector_allocation_domain(cpu, tmpmask);
 
 		vector = current_vector;
 		offset = current_offset;
@@ -1302,7 +1302,7 @@ static void __clear_irq_vector(int irq)
 
 	vector = cfg->vector;
 	cpus_and(mask, cfg->domain, cpu_online_map);
-	for_each_cpu_mask(cpu, mask)
+	for_each_cpu(cpu, mask)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 
 	cfg->vector = 0;
--- struct-cpumasks.orig/arch/x86/kernel/smpboot.c
+++ struct-cpumasks/arch/x86/kernel/smpboot.c
@@ -448,7 +448,7 @@ void __cpuinit set_cpu_sibling_map(int c
 	cpu_set(cpu, cpu_sibling_setup_map);
 
 	if (smp_num_siblings > 1) {
-		for_each_cpu_mask(i, cpu_sibling_setup_map) {
+		for_each_cpu(i, cpu_sibling_setup_map) {
 			if (c->phys_proc_id == cpu_data(i).phys_proc_id &&
 			    c->cpu_core_id == cpu_data(i).cpu_core_id) {
 				cpu_set(i, per_cpu(cpu_sibling_map, cpu));
@@ -471,7 +471,7 @@ void __cpuinit set_cpu_sibling_map(int c
 		return;
 	}
 
-	for_each_cpu_mask(i, cpu_sibling_setup_map) {
+	for_each_cpu(i, cpu_sibling_setup_map) {
 		if (per_cpu(cpu_llc_id, cpu) != BAD_APICID &&
 		    per_cpu(cpu_llc_id, cpu) == per_cpu(cpu_llc_id, i)) {
 			cpu_set(i, c->llc_shared_map);
@@ -1268,7 +1268,7 @@ static void remove_siblinginfo(int cpu)
 	int sibling;
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
-	for_each_cpu_mask(sibling, per_cpu(cpu_core_map, cpu)) {
+	for_each_cpu(sibling, per_cpu(cpu_core_map, cpu)) {
 		cpu_clear(cpu, per_cpu(cpu_core_map, sibling));
 		/*/
 		 * last thread sibling in this cpu core going down
@@ -1277,7 +1277,7 @@ static void remove_siblinginfo(int cpu)
 			cpu_data(sibling).booted_cores--;
 	}
 
-	for_each_cpu_mask(sibling, per_cpu(cpu_sibling_map, cpu))
+	for_each_cpu(sibling, per_cpu(cpu_sibling_map, cpu))
 		cpu_clear(cpu, per_cpu(cpu_sibling_map, sibling));
 	cpus_clear(per_cpu(cpu_sibling_map, cpu));
 	cpus_clear(per_cpu(cpu_core_map, cpu));
--- struct-cpumasks.orig/arch/x86/kernel/tlb_uv.c
+++ struct-cpumasks/arch/x86/kernel/tlb_uv.c
@@ -263,7 +263,7 @@ int uv_flush_send_and_wait(int cpu, int 
 	 * Success, so clear the remote cpu's from the mask so we don't
 	 * use the IPI method of shootdown on them.
 	 */
-	for_each_cpu_mask(bit, *cpumaskp) {
+	for_each_cpu(bit, *cpumaskp) {
 		blade = uv_cpu_to_blade_id(bit);
 		if (blade == this_blade)
 			continue;
@@ -315,7 +315,7 @@ int uv_flush_tlb_others(cpumask_t *cpuma
 	bau_nodes_clear(&bau_desc->distribution, UV_DISTRIBUTION_SIZE);
 
 	i = 0;
-	for_each_cpu_mask(bit, *cpumaskp) {
+	for_each_cpu(bit, *cpumaskp) {
 		blade = uv_cpu_to_blade_id(bit);
 		BUG_ON(blade > (UV_DISTRIBUTION_SIZE - 1));
 		if (blade == this_blade) {
--- struct-cpumasks.orig/arch/x86/mm/mmio-mod.c
+++ struct-cpumasks/arch/x86/mm/mmio-mod.c
@@ -392,7 +392,7 @@ static void enter_uniprocessor(void)
 		pr_notice(NAME "Disabling non-boot CPUs...\n");
 	put_online_cpus();
 
-	for_each_cpu_mask(cpu, downed_cpus) {
+	for_each_cpu(cpu, downed_cpus) {
 		err = cpu_down(cpu);
 		if (!err)
 			pr_info(NAME "CPU%d is down.\n", cpu);
@@ -414,7 +414,7 @@ static void __ref leave_uniprocessor(voi
 	if (cpus_weight(downed_cpus) == 0)
 		return;
 	pr_notice(NAME "Re-enabling CPUs...\n");
-	for_each_cpu_mask(cpu, downed_cpus) {
+	for_each_cpu(cpu, downed_cpus) {
 		err = cpu_up(cpu);
 		if (!err)
 			pr_info(NAME "enabled CPU%d.\n", cpu);
--- struct-cpumasks.orig/arch/x86/xen/smp.c
+++ struct-cpumasks/arch/x86/xen/smp.c
@@ -412,7 +412,7 @@ static void xen_send_IPI_mask(const cpum
 {
 	unsigned cpu;
 
-	for_each_cpu_mask(cpu, mask)
+	for_each_cpu(cpu, mask)
 		xen_send_IPI_one(cpu, vector);
 }
 
@@ -423,7 +423,7 @@ static void xen_smp_send_call_function_i
 	xen_send_IPI_mask(&mask, XEN_CALL_FUNCTION_VECTOR);
 
 	/* Make sure other vcpus get a chance to run if they need to. */
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		if (xen_vcpu_stolen(cpu)) {
 			HYPERVISOR_sched_op(SCHEDOP_yield, 0);
 			break;
--- struct-cpumasks.orig/drivers/acpi/processor_throttling.c
+++ struct-cpumasks/drivers/acpi/processor_throttling.c
@@ -1013,7 +1013,7 @@ int acpi_processor_set_throttling(struct
 	 * affected cpu in order to get one proper T-state.
 	 * The notifier event is THROTTLING_PRECHANGE.
 	 */
-	for_each_cpu_mask(i, online_throttling_cpus) {
+	for_each_cpu(i, online_throttling_cpus) {
 		t_state.cpu = i;
 		acpi_processor_throttling_notifier(THROTTLING_PRECHANGE,
 							&t_state);
@@ -1034,7 +1034,7 @@ int acpi_processor_set_throttling(struct
 		 * it is necessary to set T-state for every affected
 		 * cpus.
 		 */
-		for_each_cpu_mask(i, online_throttling_cpus) {
+		for_each_cpu(i, online_throttling_cpus) {
 			match_pr = per_cpu(processors, i);
 			/*
 			 * If the pointer is invalid, we will report the
@@ -1068,7 +1068,7 @@ int acpi_processor_set_throttling(struct
 	 * affected cpu to update the T-states.
 	 * The notifier event is THROTTLING_POSTCHANGE
 	 */
-	for_each_cpu_mask(i, online_throttling_cpus) {
+	for_each_cpu(i, online_throttling_cpus) {
 		t_state.cpu = i;
 		acpi_processor_throttling_notifier(THROTTLING_POSTCHANGE,
 							&t_state);
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq.c
@@ -589,7 +589,7 @@ static ssize_t show_cpus(cpumask_t mask,
 	ssize_t i = 0;
 	unsigned int cpu;
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		if (i)
 			i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), " ");
 		i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), "%u", cpu);
@@ -838,7 +838,7 @@ static int cpufreq_add_dev(struct sys_de
 	}
 #endif
 
-	for_each_cpu_mask(j, policy->cpus) {
+	for_each_cpu(j, policy->cpus) {
 		if (cpu == j)
 			continue;
 
@@ -901,14 +901,14 @@ static int cpufreq_add_dev(struct sys_de
 	}
 
 	spin_lock_irqsave(&cpufreq_driver_lock, flags);
-	for_each_cpu_mask(j, policy->cpus) {
+	for_each_cpu(j, policy->cpus) {
 		per_cpu(cpufreq_cpu_data, j) = policy;
 		per_cpu(policy_cpu, j) = policy->cpu;
 	}
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
 	/* symlink affected CPUs */
-	for_each_cpu_mask(j, policy->cpus) {
+	for_each_cpu(j, policy->cpus) {
 		if (j == cpu)
 			continue;
 		if (!cpu_online(j))
@@ -948,7 +948,7 @@ static int cpufreq_add_dev(struct sys_de
 
 err_out_unregister:
 	spin_lock_irqsave(&cpufreq_driver_lock, flags);
-	for_each_cpu_mask(j, policy->cpus)
+	for_each_cpu(j, policy->cpus)
 		per_cpu(cpufreq_cpu_data, j) = NULL;
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
@@ -1031,7 +1031,7 @@ static int __cpufreq_remove_dev(struct s
 	 * the sysfs links afterwards.
 	 */
 	if (unlikely(cpus_weight(data->cpus) > 1)) {
-		for_each_cpu_mask(j, data->cpus) {
+		for_each_cpu(j, data->cpus) {
 			if (j == cpu)
 				continue;
 			per_cpu(cpufreq_cpu_data, j) = NULL;
@@ -1041,7 +1041,7 @@ static int __cpufreq_remove_dev(struct s
 	spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
 	if (unlikely(cpus_weight(data->cpus) > 1)) {
-		for_each_cpu_mask(j, data->cpus) {
+		for_each_cpu(j, data->cpus) {
 			if (j == cpu)
 				continue;
 			dprintk("removing link for cpu %u\n", j);
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq_conservative.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq_conservative.c
@@ -497,7 +497,7 @@ static int cpufreq_governor_dbs(struct c
 			return rc;
 		}
 
-		for_each_cpu_mask(j, policy->cpus) {
+		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
 			j_dbs_info = &per_cpu(cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq_ondemand.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq_ondemand.c
@@ -367,7 +367,7 @@ static void dbs_check_cpu(struct cpu_dbs
 
 	/* Get Idle Time */
 	idle_ticks = UINT_MAX;
-	for_each_cpu_mask(j, policy->cpus) {
+	for_each_cpu(j, policy->cpus) {
 		cputime64_t total_idle_ticks;
 		unsigned int tmp_idle_ticks;
 		struct cpu_dbs_info_s *j_dbs_info;
@@ -521,7 +521,7 @@ static int cpufreq_governor_dbs(struct c
 			return rc;
 		}
 
-		for_each_cpu_mask(j, policy->cpus) {
+		for_each_cpu(j, policy->cpus) {
 			struct cpu_dbs_info_s *j_dbs_info;
 			j_dbs_info = &per_cpu(cpu_dbs_info, j);
 			j_dbs_info->cur_policy = policy;
--- struct-cpumasks.orig/include/asm-x86/ipi.h
+++ struct-cpumasks/include/asm-x86/ipi.h
@@ -128,7 +128,7 @@ static inline void send_IPI_mask_sequenc
 	 * - mbligh
 	 */
 	local_irq_save(flags);
-	for_each_cpu_mask(query_cpu, *mask) {
+	for_each_cpu(query_cpu, mask) {
 		__send_IPI_dest_field(per_cpu(x86_cpu_to_apicid, query_cpu),
 				      vector, APIC_DEST_PHYSICAL);
 	}
@@ -144,7 +144,7 @@ static inline void send_IPI_mask_allbuts
 	/* See Hack comment above */
 
 	local_irq_save(flags);
-	for_each_cpu_mask(query_cpu, *mask)
+	for_each_cpu(query_cpu, mask)
 		if (query_cpu != this_cpu)
 			__send_IPI_dest_field(
 				per_cpu(x86_cpu_to_apicid, query_cpu),
--- struct-cpumasks.orig/kernel/cpu.c
+++ struct-cpumasks/kernel/cpu.c
@@ -445,7 +445,7 @@ void __ref enable_nonboot_cpus(void)
 		goto out;
 
 	printk("Enabling non-boot CPUs ...\n");
-	for_each_cpu_mask(cpu, frozen_cpus) {
+	for_each_cpu(cpu, frozen_cpus) {
 		error = _cpu_up(cpu, 1);
 		if (!error) {
 			printk("CPU%d is up\n", cpu);
--- struct-cpumasks.orig/kernel/rcuclassic.c
+++ struct-cpumasks/kernel/rcuclassic.c
@@ -112,7 +112,7 @@ static void force_quiescent_state(struct
 		 */
 		cpus_and(cpumask, rcp->cpumask, cpu_online_map);
 		cpu_clear(rdp->cpu, cpumask);
-		for_each_cpu_mask(cpu, cpumask)
+		for_each_cpu(cpu, cpumask)
 			smp_send_reschedule(cpu);
 	}
 	spin_unlock_irqrestore(&rcp->lock, flags);
@@ -320,7 +320,7 @@ static void print_other_cpu_stall(struct
 	/* OK, time to rat on our buddy... */
 
 	printk(KERN_ERR "RCU detected CPU stalls:");
-	for_each_cpu_mask(cpu, rcp->cpumask)
+	for_each_cpu(cpu, rcp->cpumask)
 		printk(" %d", cpu);
 	printk(" (detected by %d, t=%lu/%lu)\n",
 	       smp_processor_id(), get_seconds(), rcp->gp_check);
--- struct-cpumasks.orig/kernel/rcupreempt.c
+++ struct-cpumasks/kernel/rcupreempt.c
@@ -748,7 +748,7 @@ rcu_try_flip_idle(void)
 
 	/* Now ask each CPU for acknowledgement of the flip. */
 
-	for_each_cpu_mask(cpu, rcu_cpu_online_map) {
+	for_each_cpu(cpu, rcu_cpu_online_map) {
 		per_cpu(rcu_flip_flag, cpu) = rcu_flipped;
 		dyntick_save_progress_counter(cpu);
 	}
@@ -766,7 +766,7 @@ rcu_try_flip_waitack(void)
 	int cpu;
 
 	RCU_TRACE_ME(rcupreempt_trace_try_flip_a1);
-	for_each_cpu_mask(cpu, rcu_cpu_online_map)
+	for_each_cpu(cpu, rcu_cpu_online_map)
 		if (rcu_try_flip_waitack_needed(cpu) &&
 		    per_cpu(rcu_flip_flag, cpu) != rcu_flip_seen) {
 			RCU_TRACE_ME(rcupreempt_trace_try_flip_ae1);
@@ -798,7 +798,7 @@ rcu_try_flip_waitzero(void)
 	/* Check to see if the sum of the "last" counters is zero. */
 
 	RCU_TRACE_ME(rcupreempt_trace_try_flip_z1);
-	for_each_cpu_mask(cpu, rcu_cpu_online_map)
+	for_each_cpu(cpu, rcu_cpu_online_map)
 		sum += RCU_DATA_CPU(cpu)->rcu_flipctr[lastidx];
 	if (sum != 0) {
 		RCU_TRACE_ME(rcupreempt_trace_try_flip_ze1);
@@ -813,7 +813,7 @@ rcu_try_flip_waitzero(void)
 	smp_mb();  /*  ^^^^^^^^^^^^ */
 
 	/* Call for a memory barrier from each CPU. */
-	for_each_cpu_mask(cpu, rcu_cpu_online_map) {
+	for_each_cpu(cpu, rcu_cpu_online_map) {
 		per_cpu(rcu_mb_flag, cpu) = rcu_mb_needed;
 		dyntick_save_progress_counter(cpu);
 	}
@@ -833,7 +833,7 @@ rcu_try_flip_waitmb(void)
 	int cpu;
 
 	RCU_TRACE_ME(rcupreempt_trace_try_flip_m1);
-	for_each_cpu_mask(cpu, rcu_cpu_online_map)
+	for_each_cpu(cpu, rcu_cpu_online_map)
 		if (rcu_try_flip_waitmb_needed(cpu) &&
 		    per_cpu(rcu_mb_flag, cpu) != rcu_mb_done) {
 			RCU_TRACE_ME(rcupreempt_trace_try_flip_me1);
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -1512,7 +1512,7 @@ static int tg_shares_up(struct task_grou
 	struct sched_domain *sd = data;
 	int i;
 
-	for_each_cpu_mask(i, sd->span) {
+	for_each_cpu(i, sd->span) {
 		rq_weight += tg->cfs_rq[i]->load.weight;
 		shares += tg->cfs_rq[i]->shares;
 	}
@@ -1526,7 +1526,7 @@ static int tg_shares_up(struct task_grou
 	if (!rq_weight)
 		rq_weight = cpus_weight(sd->span) * NICE_0_LOAD;
 
-	for_each_cpu_mask(i, sd->span) {
+	for_each_cpu(i, sd->span) {
 		struct rq *rq = cpu_rq(i);
 		unsigned long flags;
 
@@ -2069,7 +2069,7 @@ find_idlest_group(struct sched_domain *s
 		/* Tally up the load of all CPUs in the group */
 		avg_load = 0;
 
-		for_each_cpu_mask(i, group->cpumask) {
+		for_each_cpu(i, group->cpumask) {
 			/* Bias balancing toward cpus of our domain */
 			if (local_group)
 				load = source_load(i, load_idx);
@@ -2111,7 +2111,7 @@ find_idlest_cpu(struct sched_group *grou
 	/* Traverse only the allowed CPUs */
 	cpus_and(*tmp, group->cpumask, p->cpus_allowed);
 
-	for_each_cpu_mask(i, *tmp) {
+	for_each_cpu(i, *tmp) {
 		load = weighted_cpuload(i);
 
 		if (load < min_load || (load == min_load && i == this_cpu)) {
@@ -3129,7 +3129,7 @@ find_busiest_group(struct sched_domain *
 		max_cpu_load = 0;
 		min_cpu_load = ~0UL;
 
-		for_each_cpu_mask(i, group->cpumask) {
+		for_each_cpu(i, group->cpumask) {
 			struct rq *rq;
 
 			if (!cpu_isset(i, *cpus))
@@ -3408,7 +3408,7 @@ find_busiest_queue(struct sched_group *g
 	unsigned long max_load = 0;
 	int i;
 
-	for_each_cpu_mask(i, group->cpumask) {
+	for_each_cpu(i, group->cpumask) {
 		unsigned long wl;
 
 		if (!cpu_isset(i, *cpus))
@@ -3950,7 +3950,7 @@ static void run_rebalance_domains(struct
 		int balance_cpu;
 
 		cpu_clear(this_cpu, cpus);
-		for_each_cpu_mask(balance_cpu, cpus) {
+		for_each_cpu(balance_cpu, cpus) {
 			/*
 			 * If this cpu gets work to do, stop the load balancing
 			 * work being done for other cpus. Next load
@@ -6961,7 +6961,7 @@ init_sched_build_groups(const cpumask_t 
 
 	cpus_clear(*covered);
 
-	for_each_cpu_mask(i, *span) {
+	for_each_cpu(i, *span) {
 		struct sched_group *sg;
 		int group = group_fn(i, cpu_map, &sg, tmpmask);
 		int j;
@@ -6972,7 +6972,7 @@ init_sched_build_groups(const cpumask_t 
 		cpus_clear(sg->cpumask);
 		sg->__cpu_power = 0;
 
-		for_each_cpu_mask(j, *span) {
+		for_each_cpu(j, *span) {
 			if (group_fn(j, cpu_map, NULL, tmpmask) != group)
 				continue;
 
@@ -7172,7 +7172,7 @@ static void init_numa_sched_groups_power
 	if (!sg)
 		return;
 	do {
-		for_each_cpu_mask(j, sg->cpumask) {
+		for_each_cpu(j, sg->cpumask) {
 			struct sched_domain *sd;
 
 			sd = &per_cpu(phys_domains, j);
@@ -7197,7 +7197,7 @@ static void free_sched_groups(const cpum
 {
 	int cpu, i;
 
-	for_each_cpu_mask(cpu, *cpu_map) {
+	for_each_cpu(cpu, *cpu_map) {
 		struct sched_group **sched_group_nodes
 			= sched_group_nodes_bycpu[cpu];
 
@@ -7436,7 +7436,7 @@ static int __build_sched_domains(const c
 	/*
 	 * Set up domains for cpus specified by the cpu_map.
 	 */
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		struct sched_domain *sd = NULL, *p;
 		SCHED_CPUMASK_VAR(nodemask, allmasks);
 
@@ -7503,7 +7503,7 @@ static int __build_sched_domains(const c
 
 #ifdef CONFIG_SCHED_SMT
 	/* Set up CPU (sibling) groups */
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
 		SCHED_CPUMASK_VAR(send_covered, allmasks);
 
@@ -7520,7 +7520,7 @@ static int __build_sched_domains(const c
 
 #ifdef CONFIG_SCHED_MC
 	/* Set up multi-core groups */
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		SCHED_CPUMASK_VAR(this_core_map, allmasks);
 		SCHED_CPUMASK_VAR(send_covered, allmasks);
 
@@ -7587,7 +7587,7 @@ static int __build_sched_domains(const c
 			goto error;
 		}
 		sched_group_nodes[i] = sg;
-		for_each_cpu_mask(j, *nodemask) {
+		for_each_cpu(j, *nodemask) {
 			struct sched_domain *sd;
 
 			sd = &per_cpu(node_domains, j);
@@ -7633,21 +7633,21 @@ static int __build_sched_domains(const c
 
 	/* Calculate CPU power for physical packages and nodes */
 #ifdef CONFIG_SCHED_SMT
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		struct sched_domain *sd = &per_cpu(cpu_domains, i);
 
 		init_sched_groups_power(i, sd);
 	}
 #endif
 #ifdef CONFIG_SCHED_MC
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		struct sched_domain *sd = &per_cpu(core_domains, i);
 
 		init_sched_groups_power(i, sd);
 	}
 #endif
 
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		struct sched_domain *sd = &per_cpu(phys_domains, i);
 
 		init_sched_groups_power(i, sd);
@@ -7667,7 +7667,7 @@ static int __build_sched_domains(const c
 #endif
 
 	/* Attach the domains */
-	for_each_cpu_mask(i, *cpu_map) {
+	for_each_cpu(i, *cpu_map) {
 		struct sched_domain *sd;
 #ifdef CONFIG_SCHED_SMT
 		sd = &per_cpu(cpu_domains, i);
@@ -7750,7 +7750,7 @@ static void detach_destroy_domains(const
 
 	unregister_sched_domain_sysctl();
 
-	for_each_cpu_mask(i, *cpu_map)
+	for_each_cpu(i, *cpu_map)
 		cpu_attach_domain(NULL, &def_root_domain, i);
 	synchronize_sched();
 	arch_destroy_sched_domains(cpu_map, &tmpmask);
--- struct-cpumasks.orig/kernel/sched_fair.c
+++ struct-cpumasks/kernel/sched_fair.c
@@ -978,7 +978,7 @@ static int wake_idle(int cpu, struct tas
 			&& !task_hot(p, task_rq(p)->clock, sd))) {
 			cpus_and(tmp, sd->span, p->cpus_allowed);
 			cpus_and(tmp, tmp, cpu_active_map);
-			for_each_cpu_mask(i, tmp) {
+			for_each_cpu(i, tmp) {
 				if (idle_cpu(i)) {
 					if (i != task_cpu(p)) {
 						schedstat_inc(p,
--- struct-cpumasks.orig/kernel/sched_rt.c
+++ struct-cpumasks/kernel/sched_rt.c
@@ -245,7 +245,7 @@ static int do_balance_runtime(struct rt_
 
 	spin_lock(&rt_b->rt_runtime_lock);
 	rt_period = ktime_to_ns(rt_b->rt_period);
-	for_each_cpu_mask(i, rd->span) {
+	for_each_cpu(i, rd->span) {
 		struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
 		s64 diff;
 
@@ -324,7 +324,7 @@ static void __disable_runtime(struct rq 
 		/*
 		 * Greedy reclaim, take back as much as we can.
 		 */
-		for_each_cpu_mask(i, rd->span) {
+		for_each_cpu(i, rd->span) {
 			struct rt_rq *iter = sched_rt_period_rt_rq(rt_b, i);
 			s64 diff;
 
@@ -435,7 +435,7 @@ static int do_sched_rt_period_timer(stru
 		return 1;
 
 	span = sched_rt_period_mask();
-	for_each_cpu_mask(i, span) {
+	for_each_cpu(i, span) {
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
 		struct rq *rq = rq_of_rt_rq(rt_rq);
@@ -1179,7 +1179,7 @@ static int pull_rt_task(struct rq *this_
 
 	next = pick_next_task_rt(this_rq);
 
-	for_each_cpu_mask(cpu, this_rq->rd->rto_mask) {
+	for_each_cpu(cpu, this_rq->rd->rto_mask) {
 		if (this_cpu == cpu)
 			continue;
 
--- struct-cpumasks.orig/kernel/smp.c
+++ struct-cpumasks/kernel/smp.c
@@ -295,7 +295,7 @@ static void smp_call_function_mask_quies
 	data.func = quiesce_dummy;
 	data.info = NULL;
 
-	for_each_cpu_mask(cpu, *mask) {
+	for_each_cpu(cpu, *mask) {
 		data.flags = CSD_FLAG_WAIT;
 		generic_exec_single(cpu, &data);
 	}
--- struct-cpumasks.orig/kernel/taskstats.c
+++ struct-cpumasks/kernel/taskstats.c
@@ -301,7 +301,7 @@ static int add_del_listener(pid_t pid, c
 		return -EINVAL;
 
 	if (isadd == REGISTER) {
-		for_each_cpu_mask(cpu, mask) {
+		for_each_cpu(cpu, mask) {
 			s = kmalloc_node(sizeof(struct listener), GFP_KERNEL,
 					 cpu_to_node(cpu));
 			if (!s)
@@ -320,7 +320,7 @@ static int add_del_listener(pid_t pid, c
 
 	/* Deregister or cleanup */
 cleanup:
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		listeners = &per_cpu(listener_array, cpu);
 		down_write(&listeners->sem);
 		list_for_each_entry_safe(s, tmp, &listeners->list, list) {
--- struct-cpumasks.orig/kernel/time/tick-broadcast.c
+++ struct-cpumasks/kernel/time/tick-broadcast.c
@@ -399,7 +399,7 @@ again:
 	mask = CPU_MASK_NONE;
 	now = ktime_get();
 	/* Find all expired events */
-	for_each_cpu_mask(cpu, tick_broadcast_oneshot_mask) {
+	for_each_cpu(cpu, tick_broadcast_oneshot_mask) {
 		td = &per_cpu(tick_cpu_device, cpu);
 		if (td->evtdev->next_event.tv64 <= now.tv64)
 			cpu_set(cpu, mask);
@@ -496,7 +496,7 @@ static void tick_broadcast_init_next_eve
 	struct tick_device *td;
 	int cpu;
 
-	for_each_cpu_mask(cpu, *mask) {
+	for_each_cpu(cpu, *mask) {
 		td = &per_cpu(tick_cpu_device, cpu);
 		if (td->evtdev)
 			td->evtdev->next_event = expires;
--- struct-cpumasks.orig/kernel/trace/trace.c
+++ struct-cpumasks/kernel/trace/trace.c
@@ -43,7 +43,7 @@ static unsigned long __read_mostly	traci
 static cpumask_t __read_mostly		tracing_buffer_mask;
 
 #define for_each_tracing_cpu(cpu)	\
-	for_each_cpu_mask(cpu, tracing_buffer_mask)
+	for_each_cpu(cpu, tracing_buffer_mask)
 
 static int trace_alloc_page(void);
 static int trace_free_page(void);
@@ -2711,7 +2711,7 @@ tracing_read_pipe(struct file *filp, cha
 		cpu_set(cpu, mask);
 	}
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		data = iter->tr->data[cpu];
 		__raw_spin_lock(&data->lock);
 
@@ -2738,12 +2738,12 @@ tracing_read_pipe(struct file *filp, cha
 			break;
 	}
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		data = iter->tr->data[cpu];
 		__raw_spin_unlock(&data->lock);
 	}
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		data = iter->tr->data[cpu];
 		atomic_dec(&data->disabled);
 	}
@@ -3275,7 +3275,7 @@ void ftrace_dump(void)
 		cpu_set(cpu, mask);
 	}
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		data = iter.tr->data[cpu];
 		__raw_spin_lock(&data->lock);
 
@@ -3312,12 +3312,12 @@ void ftrace_dump(void)
 	else
 		printk(KERN_TRACE "---------------------------------\n");
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		data = iter.tr->data[cpu];
 		__raw_spin_unlock(&data->lock);
 	}
 
-	for_each_cpu_mask(cpu, mask) {
+	for_each_cpu(cpu, mask) {
 		data = iter.tr->data[cpu];
 		atomic_dec(&data->disabled);
 	}
--- struct-cpumasks.orig/kernel/trace/trace_boot.c
+++ struct-cpumasks/kernel/trace/trace_boot.c
@@ -33,7 +33,7 @@ static void boot_trace_init(struct trace
 
 	trace_boot_enabled = 0;
 
-	for_each_cpu_mask(cpu, cpu_possible_map)
+	for_each_cpu(cpu, cpu_possible_map)
 		tracing_reset(tr->data[cpu]);
 }
 
--- struct-cpumasks.orig/kernel/workqueue.c
+++ struct-cpumasks/kernel/workqueue.c
@@ -415,7 +415,7 @@ void flush_workqueue(struct workqueue_st
 	might_sleep();
 	lock_map_acquire(&wq->lockdep_map);
 	lock_map_release(&wq->lockdep_map);
-	for_each_cpu_mask(cpu, *cpu_map)
+	for_each_cpu(cpu, *cpu_map)
 		flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
 }
 EXPORT_SYMBOL_GPL(flush_workqueue);
@@ -546,7 +546,7 @@ static void wait_on_work(struct work_str
 	wq = cwq->wq;
 	cpu_map = wq_cpu_map(wq);
 
-	for_each_cpu_mask(cpu, *cpu_map)
+	for_each_cpu(cpu, *cpu_map)
 		wait_on_cpu_work(per_cpu_ptr(wq->cpu_wq, cpu), work);
 }
 
@@ -906,7 +906,7 @@ void destroy_workqueue(struct workqueue_
 	list_del(&wq->list);
 	spin_unlock(&workqueue_lock);
 
-	for_each_cpu_mask(cpu, *cpu_map)
+	for_each_cpu(cpu, *cpu_map)
 		cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu));
  	cpu_maps_update_done();
 
--- struct-cpumasks.orig/mm/allocpercpu.c
+++ struct-cpumasks/mm/allocpercpu.c
@@ -34,7 +34,7 @@ static void percpu_depopulate(void *__pd
 static void __percpu_depopulate_mask(void *__pdata, cpumask_t *mask)
 {
 	int cpu;
-	for_each_cpu_mask(cpu, *mask)
+	for_each_cpu(cpu, *mask)
 		percpu_depopulate(__pdata, cpu);
 }
 
@@ -86,7 +86,7 @@ static int __percpu_populate_mask(void *
 	int cpu;
 
 	cpus_clear(populated);
-	for_each_cpu_mask(cpu, *mask)
+	for_each_cpu(cpu, *mask)
 		if (unlikely(!percpu_populate(__pdata, size, gfp, cpu))) {
 			__percpu_depopulate_mask(__pdata, &populated);
 			return -ENOMEM;
--- struct-cpumasks.orig/mm/vmstat.c
+++ struct-cpumasks/mm/vmstat.c
@@ -27,7 +27,7 @@ static void sum_vm_events(unsigned long 
 
 	memset(ret, 0, NR_VM_EVENT_ITEMS * sizeof(unsigned long));
 
-	for_each_cpu_mask(cpu, *cpumask) {
+	for_each_cpu(cpu, *cpumask) {
 		struct vm_event_state *this = &per_cpu(vm_event_states, cpu);
 
 		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
--- struct-cpumasks.orig/net/core/dev.c
+++ struct-cpumasks/net/core/dev.c
@@ -2410,7 +2410,7 @@ out:
 	 */
 	if (!cpus_empty(net_dma.channel_mask)) {
 		int chan_idx;
-		for_each_cpu_mask(chan_idx, net_dma.channel_mask) {
+		for_each_cpu(chan_idx, net_dma.channel_mask) {
 			struct dma_chan *chan = net_dma.channels[chan_idx];
 			if (chan)
 				dma_async_memcpy_issue_pending(chan);
@@ -4552,7 +4552,7 @@ static void net_dma_rebalance(struct net
 	i = 0;
 	cpu = first_cpu(cpu_online_map);
 
-	for_each_cpu_mask(chan_idx, net_dma->channel_mask) {
+	for_each_cpu(chan_idx, net_dma->channel_mask) {
 		chan = net_dma->channels[chan_idx];
 
 		n = ((num_online_cpus() / cpus_weight(net_dma->channel_mask))
--- struct-cpumasks.orig/net/iucv/iucv.c
+++ struct-cpumasks/net/iucv/iucv.c
@@ -497,7 +497,7 @@ static void iucv_setmask_up(void)
 	/* Disable all cpu but the first in cpu_irq_cpumask. */
 	cpumask = iucv_irq_cpumask;
 	cpu_clear(first_cpu(iucv_irq_cpumask), cpumask);
-	for_each_cpu_mask(cpu, cpumask)
+	for_each_cpu(cpu, cpumask)
 		smp_call_function_single(cpu, iucv_block_cpu, NULL, 1);
 }
 

-- 

* [PATCH 14/31] cpumask: change first/next_cpu to cpus_first/next
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (12 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 13/31] cpumask: modify for_each_cpu_mask Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 15/31] cpumask: remove node_to_cpumask_ptr Mike Travis
                   ` (16 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: first-next_cpu --]
[-- Type: text/plain, Size: 25455 bytes --]
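
The rename also pins down a return convention that several hunks rely on:
cpus_first(mask) is the lowest set cpu, cpus_next(n, mask) is the next one
above n, and both return nr_cpu_ids when nothing is left, matching the
nr_cpu_ids comparisons in the genx2apic and ehca hunks below.  The stand-alone
sketch that follows models only that convention; the single-word cpumask type,
the helper bodies and the explicit pointers are illustrative stand-ins, not
the cpumask-base implementation.

#include <stdio.h>

#define NR_CPUS 8
static const int nr_cpu_ids = NR_CPUS;

/* toy cpumask: one word is enough for this model */
typedef struct cpumask { unsigned long bits; } cpumask_t;

/* next set cpu strictly above n, or nr_cpu_ids when none is left */
static int cpus_next(int n, const cpumask_t *mask)
{
	int cpu;

	for (cpu = n + 1; cpu < nr_cpu_ids; cpu++)
		if (mask->bits & (1UL << cpu))
			return cpu;
	return nr_cpu_ids;
}

/* lowest set cpu in the mask, or nr_cpu_ids for an empty mask */
static int cpus_first(const cpumask_t *mask)
{
	return cpus_next(-1, mask);
}

int main(void)
{
	cpumask_t online = { 0x50 };		/* cpus 4 and 6 online */
	int cpu;

	for (cpu = cpus_first(&online); cpu < nr_cpu_ids;
	     cpu = cpus_next(cpu, &online))
		printf("cpu %d\n", cpu);
	return 0;
}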

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/acpi/boot.c                |    2 -
 arch/x86/kernel/apic.c                     |    2 -
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c |    4 +--
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c  |    6 ++---
 arch/x86/kernel/cpu/intel_cacheinfo.c      |    4 +--
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c    |    2 -
 arch/x86/kernel/cpu/proc.c                 |    4 +--
 arch/x86/kernel/genapic_flat_64.c          |    4 +--
 arch/x86/kernel/genx2apic_cluster.c        |    6 ++---
 arch/x86/kernel/genx2apic_phys.c           |    6 ++---
 arch/x86/kernel/genx2apic_uv_x.c           |    4 +--
 arch/x86/kernel/io_apic.c                  |    2 -
 arch/x86/kernel/smpboot.c                  |    2 -
 arch/x86/mm/mmio-mod.c                     |    2 -
 arch/x86/oprofile/op_model_p4.c            |    2 -
 drivers/infiniband/hw/ehca/ehca_irq.c      |    4 +--
 drivers/parisc/iosapic.c                   |    2 -
 drivers/xen/events.c                       |    2 -
 include/asm-x86/bigsmp/apic.h              |    4 +--
 include/asm-x86/es7000/apic.h              |    6 ++---
 include/asm-x86/summit/apic.h              |    8 +++----
 include/asm-x86/topology.h                 |    4 +--
 kernel/cpu.c                               |    6 ++---
 kernel/power/poweroff.c                    |    2 -
 kernel/sched.c                             |   32 ++++++++++++++---------------
 kernel/sched_rt.c                          |    2 -
 kernel/smp.c                               |    2 -
 kernel/stop_machine.c                      |    2 -
 kernel/time/clocksource.c                  |   11 +++++----
 kernel/time/tick-broadcast.c               |    2 -
 kernel/time/tick-common.c                  |    2 -
 kernel/workqueue.c                         |    2 -
 net/core/dev.c                             |    4 +--
 net/iucv/iucv.c                            |    4 +--
 34 files changed, 77 insertions(+), 76 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/acpi/boot.c
+++ struct-cpumasks/arch/x86/kernel/acpi/boot.c
@@ -564,7 +564,7 @@ static int __cpuinit _acpi_map_lsapic(ac
 		return -EINVAL;
 	}
 
-	cpu = first_cpu(new_map);
+	cpu = cpus_first(new_map);
 
 	*pcpu = cpu;
 	return 0;
--- struct-cpumasks.orig/arch/x86/kernel/apic.c
+++ struct-cpumasks/arch/x86/kernel/apic.c
@@ -1857,7 +1857,7 @@ void __cpuinit generic_processor_info(in
 
 	num_processors++;
 	cpus_complement(tmp_map, cpu_present_map);
-	cpu = first_cpu(tmp_map);
+	cpu = cpus_first(tmp_map);
 
 	physid_set(apicid, phys_cpu_present_map);
 	if (apicid == boot_cpu_physical_apicid) {
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -219,14 +219,14 @@ static u32 get_cur_val(const cpumask_t *
 	if (unlikely(cpus_empty(*mask)))
 		return 0;
 
-	switch (per_cpu(drv_data, first_cpu(*mask))->cpu_feature) {
+	switch (per_cpu(drv_data, cpus_first(*mask))->cpu_feature) {
 	case SYSTEM_INTEL_MSR_CAPABLE:
 		cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
 		cmd.addr.msr.reg = MSR_IA32_PERF_STATUS;
 		break;
 	case SYSTEM_IO_CAPABLE:
 		cmd.type = SYSTEM_IO_CAPABLE;
-		perf = per_cpu(drv_data, first_cpu(*mask))->acpi_data;
+		perf = per_cpu(drv_data, cpus_first(*mask))->acpi_data;
 		cmd.addr.io.port = perf->control_register.address;
 		cmd.addr.io.bit_width = perf->control_register.bit_width;
 		break;
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -639,7 +639,7 @@ static int fill_powernow_table(struct po
 
 	dprintk("cfid 0x%x, cvid 0x%x\n", data->currfid, data->currvid);
 	data->powernow_table = powernow_table;
-	if (first_cpu(per_cpu(cpu_core_map, data->cpu)) == data->cpu)
+	if (cpus_first(per_cpu(cpu_core_map, data->cpu)) == data->cpu)
 		print_basics(data);
 
 	for (j = 0; j < data->numps; j++)
@@ -793,7 +793,7 @@ static int powernow_k8_cpu_init_acpi(str
 
 	/* fill in data */
 	data->numps = data->acpi_data.state_count;
-	if (first_cpu(per_cpu(cpu_core_map, data->cpu)) == data->cpu)
+	if (cpus_first(per_cpu(cpu_core_map, data->cpu)) == data->cpu)
 		print_basics(data);
 	powernow_k8_acpi_pst_values(data, 0);
 
@@ -1244,7 +1244,7 @@ static unsigned int powernowk8_get (unsi
 	unsigned int khz = 0;
 	unsigned int first;
 
-	first = first_cpu(per_cpu(cpu_core_map, cpu));
+	first = cpus_first(per_cpu(cpu_core_map, cpu));
 	data = per_cpu(powernow_data, first);
 
 	if (!data)
--- struct-cpumasks.orig/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ struct-cpumasks/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -690,7 +690,7 @@ static struct pci_dev *get_k8_northbridg
 
 static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf)
 {
-	int node = cpu_to_node(first_cpu(this_leaf->shared_cpu_map));
+	int node = cpu_to_node(cpus_first(this_leaf->shared_cpu_map));
 	struct pci_dev *dev = NULL;
 	ssize_t ret = 0;
 	int i;
@@ -724,7 +724,7 @@ static ssize_t
 store_cache_disable(struct _cpuid4_info *this_leaf, const char *buf,
 		    size_t count)
 {
-	int node = cpu_to_node(first_cpu(this_leaf->shared_cpu_map));
+	int node = cpu_to_node(cpus_first(this_leaf->shared_cpu_map));
 	struct pci_dev *dev = NULL;
 	unsigned int ret, index, val;
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -475,7 +475,7 @@ static __cpuinit int threshold_create_ba
 
 #ifdef CONFIG_SMP
 	if (cpu_data(cpu).cpu_core_id && shared_bank[bank]) {	/* symlink */
-		i = first_cpu(per_cpu(cpu_core_map, cpu));
+		i = cpus_first(per_cpu(cpu_core_map, cpu));
 
 		/* first core not up yet */
 		if (cpu_data(i).cpu_core_id)
--- struct-cpumasks.orig/arch/x86/kernel/cpu/proc.c
+++ struct-cpumasks/arch/x86/kernel/cpu/proc.c
@@ -159,7 +159,7 @@ static int show_cpuinfo(struct seq_file 
 static void *c_start(struct seq_file *m, loff_t *pos)
 {
 	if (*pos == 0)	/* just in case, cpu 0 is not the first */
-		*pos = first_cpu(cpu_online_map);
+		*pos = cpus_first(cpu_online_map);
 	if ((*pos) < nr_cpu_ids && cpu_online(*pos))
 		return &cpu_data(*pos);
 	return NULL;
@@ -167,7 +167,7 @@ static void *c_start(struct seq_file *m,
 
 static void *c_next(struct seq_file *m, void *v, loff_t *pos)
 {
-	*pos = next_cpu(*pos, cpu_online_map);
+	*pos = cpus_next(*pos, cpu_online_map);
 	return c_start(m, pos);
 }
 
--- struct-cpumasks.orig/arch/x86/kernel/genapic_flat_64.c
+++ struct-cpumasks/arch/x86/kernel/genapic_flat_64.c
@@ -222,7 +222,7 @@ static void physflat_send_IPI_all(int ve
 	physflat_send_IPI_mask(&cpu_online_map, vector);
 }
 
-static unsigned int physflat_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int physflat_cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int cpu;
 
@@ -230,7 +230,7 @@ static unsigned int physflat_cpu_mask_to
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(*cpumask);
+	cpu = cpus_first(cpumask);
 	if ((unsigned)cpu < nr_cpu_ids)
 		return per_cpu(x86_cpu_to_apicid, cpu);
 	else
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_cluster.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_cluster.c
@@ -93,7 +93,7 @@ static int x2apic_apic_id_registered(voi
 	return 1;
 }
 
-static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int cpu;
 
@@ -101,8 +101,8 @@ static unsigned int x2apic_cpu_mask_to_a
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(*cpumask);
-	if ((unsigned)cpu < NR_CPUS)
+	cpu = cpus_first(cpumask);
+	if ((unsigned)cpu < nr_cpu_ids)
 		return per_cpu(x86_cpu_to_logical_apicid, cpu);
 	else
 		return BAD_APICID;
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_phys.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_phys.c
@@ -90,7 +90,7 @@ static int x2apic_apic_id_registered(voi
 	return 1;
 }
 
-static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int cpu;
 
@@ -98,8 +98,8 @@ static unsigned int x2apic_cpu_mask_to_a
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(*cpumask);
-	if ((unsigned)cpu < NR_CPUS)
+	cpu = cpus_first(cpumask);
+	if ((unsigned)cpu < nr_cpu_ids)
 		return per_cpu(x86_cpu_to_apicid, cpu);
 	else
 		return BAD_APICID;
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_uv_x.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_uv_x.c
@@ -155,7 +155,7 @@ static void uv_init_apic_ldr(void)
 {
 }
 
-static unsigned int uv_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int uv_cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int cpu;
 
@@ -163,7 +163,7 @@ static unsigned int uv_cpu_mask_to_apici
 	 * We're using fixed IRQ delivery, can only return one phys APIC ID.
 	 * May as well be the first.
 	 */
-	cpu = first_cpu(*cpumask);
+	cpu = cpus_first(cpumask);
 	if ((unsigned)cpu < nr_cpu_ids)
 		return per_cpu(x86_cpu_to_apicid, cpu);
 	else
--- struct-cpumasks.orig/arch/x86/kernel/io_apic.c
+++ struct-cpumasks/arch/x86/kernel/io_apic.c
@@ -2202,7 +2202,7 @@ static int ioapic_retrigger_irq(unsigned
 	unsigned long flags;
 
 	spin_lock_irqsave(&vector_lock, flags);
-	send_IPI_mask(&cpumask_of_cpu(first_cpu(cfg->domain)), cfg->vector);
+	send_IPI_mask(cpumask_of_cpu(cpus_first(cfg->domain)), cfg->vector);
 	spin_unlock_irqrestore(&vector_lock, flags);
 
 	return 1;
--- struct-cpumasks.orig/arch/x86/kernel/smpboot.c
+++ struct-cpumasks/arch/x86/kernel/smpboot.c
@@ -488,7 +488,7 @@ void __cpuinit set_cpu_sibling_map(int c
 				 * for each core in package, increment
 				 * the booted_cores for this new cpu
 				 */
-				if (first_cpu(per_cpu(cpu_sibling_map, i)) == i)
+				if (cpus_first(per_cpu(cpu_sibling_map, i)) == i)
 					c->booted_cores++;
 				/*
 				 * increment the core count for all
--- struct-cpumasks.orig/arch/x86/mm/mmio-mod.c
+++ struct-cpumasks/arch/x86/mm/mmio-mod.c
@@ -387,7 +387,7 @@ static void enter_uniprocessor(void)
 
 	get_online_cpus();
 	downed_cpus = cpu_online_map;
-	cpu_clear(first_cpu(cpu_online_map), downed_cpus);
+	cpu_clear(cpus_first(cpu_online_map), downed_cpus);
 	if (num_online_cpus() > 1)
 		pr_notice(NAME "Disabling non-boot CPUs...\n");
 	put_online_cpus();
--- struct-cpumasks.orig/arch/x86/oprofile/op_model_p4.c
+++ struct-cpumasks/arch/x86/oprofile/op_model_p4.c
@@ -380,7 +380,7 @@ static unsigned int get_stagger(void)
 {
 #ifdef CONFIG_SMP
 	int cpu = smp_processor_id();
-	return (cpu != first_cpu(per_cpu(cpu_sibling_map, cpu)));
+	return (cpu != cpus_first(per_cpu(cpu_sibling_map, cpu)));
 #endif
 	return 0;
 }
--- struct-cpumasks.orig/drivers/infiniband/hw/ehca/ehca_irq.c
+++ struct-cpumasks/drivers/infiniband/hw/ehca/ehca_irq.c
@@ -650,9 +650,9 @@ static inline int find_next_online_cpu(s
 		ehca_dmp(&cpu_online_map, sizeof(cpumask_t), "");
 
 	spin_lock_irqsave(&pool->last_cpu_lock, flags);
-	cpu = next_cpu(pool->last_cpu, cpu_online_map);
+	cpu = cpus_next(pool->last_cpu, cpu_online_map);
 	if (cpu >= nr_cpu_ids)
-		cpu = first_cpu(cpu_online_map);
+		cpu = cpus_first(cpu_online_map);
 	pool->last_cpu = cpu;
 	spin_unlock_irqrestore(&pool->last_cpu_lock, flags);
 
--- struct-cpumasks.orig/drivers/parisc/iosapic.c
+++ struct-cpumasks/drivers/parisc/iosapic.c
@@ -713,7 +713,7 @@ static void iosapic_set_affinity_irq(uns
 	if (cpu_check_affinity(irq, &dest))
 		return;
 
-	vi->txn_addr = txn_affinity_addr(irq, first_cpu(dest));
+	vi->txn_addr = txn_affinity_addr(irq, cpus_first(dest));
 
 	spin_lock_irqsave(&iosapic_lock, flags);
 	/* d1 contains the destination CPU, so only want to set that
--- struct-cpumasks.orig/drivers/xen/events.c
+++ struct-cpumasks/drivers/xen/events.c
@@ -612,7 +612,7 @@ static void rebind_irq_to_cpu(unsigned i
 
 static void set_affinity_irq(unsigned irq, cpumask_t dest)
 {
-	unsigned tcpu = first_cpu(dest);
+	unsigned tcpu = cpus_first(dest);
 	rebind_irq_to_cpu(irq, tcpu);
 }
 
--- struct-cpumasks.orig/include/asm-x86/bigsmp/apic.h
+++ struct-cpumasks/include/asm-x86/bigsmp/apic.h
@@ -121,12 +121,12 @@ static inline int check_phys_apicid_pres
 }
 
 /* As we are using single CPU as destination, pick only one CPU here */
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int cpu;
 	int apicid;	
 
-	cpu = first_cpu(*cpumask);
+	cpu = cpus_first(cpumask);
 	apicid = cpu_to_logical_apicid(cpu);
 	return apicid;
 }
--- struct-cpumasks.orig/include/asm-x86/es7000/apic.h
+++ struct-cpumasks/include/asm-x86/es7000/apic.h
@@ -144,7 +144,7 @@ static inline int check_phys_apicid_pres
 	return (1);
 }
 
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
@@ -163,10 +163,10 @@ static inline unsigned int cpu_mask_to_a
 	 * The cpus in the mask must all be on the apic cluster.  If are not
 	 * on the same apicid cluster return default value of TARGET_CPUS.
 	 */
-	cpu = first_cpu(*cpumask);
+	cpu = cpus_first(cpumask);
 	apicid = cpu_to_logical_apicid(cpu);
 	while (cpus_found < num_bits_set) {
-		if (cpu_isset(cpu, *cpumask)) {
+		if (cpu_isset(cpu, cpumask)) {
 			int new_apicid = cpu_to_logical_apicid(cpu);
 			if (apicid_cluster(apicid) !=
 					apicid_cluster(new_apicid)){
--- struct-cpumasks.orig/include/asm-x86/summit/apic.h
+++ struct-cpumasks/include/asm-x86/summit/apic.h
@@ -137,14 +137,14 @@ static inline void enable_apic_mode(void
 {
 }
 
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
+static inline unsigned int cpu_mask_to_apicid(const cpumask_t cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
 	int cpu;
 	int apicid;
 
-	num_bits_set = cpus_weight(*cpumask);
+	num_bits_set = cpus_weight(cpumask);
 	/* Return id to all */
 	if (num_bits_set == NR_CPUS)
 		return (int) 0xFF;
@@ -152,10 +152,10 @@ static inline unsigned int cpu_mask_to_a
 	 * The cpus in the mask must all be on the apic cluster.  If are not
 	 * on the same apicid cluster return default value of TARGET_CPUS.
 	 */
-	cpu = first_cpu(*cpumask);
+	cpu = cpus_first(cpumask);
 	apicid = cpu_to_logical_apicid(cpu);
 	while (cpus_found < num_bits_set) {
-		if (cpu_isset(cpu, *cpumask)) {
+		if (cpu_isset(cpu, cpumask)) {
 			int new_apicid = cpu_to_logical_apicid(cpu);
 			if (apicid_cluster(apicid) !=
 					apicid_cluster(new_apicid)){
--- struct-cpumasks.orig/include/asm-x86/topology.h
+++ struct-cpumasks/include/asm-x86/topology.h
@@ -196,7 +196,7 @@ static inline cpumask_t node_to_cpumask(
 }
 static inline int node_to_first_cpu(int node)
 {
-	return first_cpu(cpu_online_map);
+	return cpus_first(cpu_online_map);
 }
 
 /* Replace default node_to_cpumask_ptr with optimized version */
@@ -214,7 +214,7 @@ static inline int node_to_first_cpu(int 
 static inline int node_to_first_cpu(int node)
 {
 	node_to_cpumask_ptr(mask, node);
-	return first_cpu(*mask);
+	return cpus_first(*mask);
 }
 #endif
 
--- struct-cpumasks.orig/kernel/cpu.c
+++ struct-cpumasks/kernel/cpu.c
@@ -401,17 +401,17 @@ static cpumask_t frozen_cpus;
 
 int disable_nonboot_cpus(void)
 {
-	int cpu, first_cpu, error = 0;
+	int cpu, cpus_first, error = 0;
 
 	cpu_maps_update_begin();
-	first_cpu = first_cpu(cpu_online_map);
+	cpus_first = cpus_first(cpu_online_map);
 	/* We take down all of the non-boot CPUs in one shot to avoid races
 	 * with the userspace trying to use the CPU hotplug at the same time
 	 */
 	cpus_clear(frozen_cpus);
 	printk("Disabling non-boot CPUs ...\n");
 	for_each_online_cpu(cpu) {
-		if (cpu == first_cpu)
+		if (cpu == cpus_first)
 			continue;
 		error = _cpu_down(cpu, 1);
 		if (!error) {
--- struct-cpumasks.orig/kernel/power/poweroff.c
+++ struct-cpumasks/kernel/power/poweroff.c
@@ -27,7 +27,7 @@ static DECLARE_WORK(poweroff_work, do_po
 static void handle_poweroff(int key, struct tty_struct *tty)
 {
 	/* run sysrq poweroff on boot cpu */
-	schedule_work_on(first_cpu(cpu_online_map), &poweroff_work);
+	schedule_work_on(cpus_first(cpu_online_map), &poweroff_work);
 }
 
 static struct sysrq_key_op	sysrq_poweroff_op = {
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -3120,7 +3120,7 @@ find_busiest_group(struct sched_domain *
 		local_group = cpu_isset(this_cpu, group->cpumask);
 
 		if (local_group)
-			balance_cpu = first_cpu(group->cpumask);
+			balance_cpu = cpus_first(group->cpumask);
 
 		/* Tally up the load of all CPUs in the group */
 		sum_weighted_load = sum_nr_running = avg_load = 0;
@@ -3246,8 +3246,8 @@ find_busiest_group(struct sched_domain *
 		 */
 		if ((sum_nr_running < min_nr_running) ||
 		    (sum_nr_running == min_nr_running &&
-		     first_cpu(group->cpumask) <
-		     first_cpu(group_min->cpumask))) {
+		     cpus_first(group->cpumask) <
+		     cpus_first(group_min->cpumask))) {
 			group_min = group;
 			min_nr_running = sum_nr_running;
 			min_load_per_task = sum_weighted_load /
@@ -3262,8 +3262,8 @@ find_busiest_group(struct sched_domain *
 		if (sum_nr_running <= group_capacity - 1) {
 			if (sum_nr_running > leader_nr_running ||
 			    (sum_nr_running == leader_nr_running &&
-			     first_cpu(group->cpumask) >
-			      first_cpu(group_leader->cpumask))) {
+			     cpus_first(group->cpumask) >
+			      cpus_first(group_leader->cpumask))) {
 				group_leader = group;
 				leader_nr_running = sum_nr_running;
 			}
@@ -4001,7 +4001,7 @@ static inline void trigger_load_balance(
 			 * TBD: Traverse the sched domains and nominate
 			 * the nearest cpu in the nohz.cpu_mask.
 			 */
-			int ilb = first_cpu(nohz.cpu_mask);
+			int ilb = cpus_first(nohz.cpu_mask);
 
 			if (ilb < nr_cpu_ids)
 				resched_cpu(ilb);
@@ -7098,7 +7098,7 @@ cpu_to_core_group(int cpu, const cpumask
 
 	*mask = per_cpu(cpu_sibling_map, cpu);
 	cpus_and(*mask, *mask, *cpu_map);
-	group = first_cpu(*mask);
+	group = cpus_first(*mask);
 	if (sg)
 		*sg = &per_cpu(sched_group_core, group);
 	return group;
@@ -7125,11 +7125,11 @@ cpu_to_phys_group(int cpu, const cpumask
 #ifdef CONFIG_SCHED_MC
 	*mask = cpu_coregroup_map(cpu);
 	cpus_and(*mask, *mask, *cpu_map);
-	group = first_cpu(*mask);
+	group = cpus_first(*mask);
 #elif defined(CONFIG_SCHED_SMT)
 	*mask = per_cpu(cpu_sibling_map, cpu);
 	cpus_and(*mask, *mask, *cpu_map);
-	group = first_cpu(*mask);
+	group = cpus_first(*mask);
 #else
 	group = cpu;
 #endif
@@ -7157,7 +7157,7 @@ static int cpu_to_allnodes_group(int cpu
 
 	*nodemask = node_to_cpumask(cpu_to_node(cpu));
 	cpus_and(*nodemask, *nodemask, *cpu_map);
-	group = first_cpu(*nodemask);
+	group = cpus_first(*nodemask);
 
 	if (sg)
 		*sg = &per_cpu(sched_group_allnodes, group);
@@ -7176,7 +7176,7 @@ static void init_numa_sched_groups_power
 			struct sched_domain *sd;
 
 			sd = &per_cpu(phys_domains, j);
-			if (j != first_cpu(sd->groups->cpumask)) {
+			if (j != cpus_first(sd->groups->cpumask)) {
 				/*
 				 * Only add "power" once for each
 				 * physical package.
@@ -7253,7 +7253,7 @@ static void init_sched_groups_power(int 
 
 	WARN_ON(!sd || !sd->groups);
 
-	if (cpu != first_cpu(sd->groups->cpumask))
+	if (cpu != cpus_first(sd->groups->cpumask))
 		return;
 
 	child = sd->child;
@@ -7430,7 +7430,7 @@ static int __build_sched_domains(const c
 
 
 #ifdef CONFIG_NUMA
-	sched_group_nodes_bycpu[first_cpu(*cpu_map)] = sched_group_nodes;
+	sched_group_nodes_bycpu[cpus_first(*cpu_map)] = sched_group_nodes;
 #endif
 
 	/*
@@ -7509,7 +7509,7 @@ static int __build_sched_domains(const c
 
 		*this_sibling_map = per_cpu(cpu_sibling_map, i);
 		cpus_and(*this_sibling_map, *this_sibling_map, *cpu_map);
-		if (i != first_cpu(*this_sibling_map))
+		if (i != cpus_first(*this_sibling_map))
 			continue;
 
 		init_sched_build_groups(this_sibling_map, cpu_map,
@@ -7526,7 +7526,7 @@ static int __build_sched_domains(const c
 
 		*this_core_map = cpu_coregroup_map(i);
 		cpus_and(*this_core_map, *this_core_map, *cpu_map);
-		if (i != first_cpu(*this_core_map))
+		if (i != cpus_first(*this_core_map))
 			continue;
 
 		init_sched_build_groups(this_core_map, cpu_map,
@@ -7660,7 +7660,7 @@ static int __build_sched_domains(const c
 	if (sd_allnodes) {
 		struct sched_group *sg;
 
-		cpu_to_allnodes_group(first_cpu(*cpu_map), cpu_map, &sg,
+		cpu_to_allnodes_group(cpus_first(*cpu_map), cpu_map, &sg,
 								tmpmask);
 		init_numa_sched_groups_power(sg);
 	}
--- struct-cpumasks.orig/kernel/sched_rt.c
+++ struct-cpumasks/kernel/sched_rt.c
@@ -966,7 +966,7 @@ static inline int pick_optimal_cpu(int t
 	if ((this_cpu != -1) && cpu_isset(this_cpu, *mask))
 		return this_cpu;
 
-	first = first_cpu(*mask);
+	first = cpus_first(*mask);
 	if (first != NR_CPUS)
 		return first;
 
--- struct-cpumasks.orig/kernel/smp.c
+++ struct-cpumasks/kernel/smp.c
@@ -342,7 +342,7 @@ int smp_call_function_mask(cpumask_t mas
 	if (!num_cpus)
 		return 0;
 	else if (num_cpus == 1) {
-		cpu = first_cpu(mask);
+		cpu = cpus_first(mask);
 		return smp_call_function_single(cpu, func, info, wait);
 	}
 
--- struct-cpumasks.orig/kernel/stop_machine.c
+++ struct-cpumasks/kernel/stop_machine.c
@@ -127,7 +127,7 @@ int __stop_machine(int (*fn)(void *), vo
 		struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
 
 		if (!cpus) {
-			if (i == first_cpu(cpu_online_map))
+			if (i == cpus_first(cpu_online_map))
 				smdata = &active;
 		} else {
 			if (cpu_isset(i, *cpus))
--- struct-cpumasks.orig/kernel/time/clocksource.c
+++ struct-cpumasks/kernel/time/clocksource.c
@@ -151,12 +151,13 @@ static void clocksource_watchdog(unsigne
 		 * Cycle through CPUs to check if the CPUs stay
 		 * synchronized to each other.
 		 */
-		int next_cpu = next_cpu(raw_smp_processor_id(), cpu_online_map);
+		int next_cpu = cpus_next(raw_smp_processor_id(),
+					 cpu_online_map);
 
 		if (next_cpu >= nr_cpu_ids)
-			next_cpu = first_cpu(cpu_online_map);
+			next_cpu = cpus_first(cpu_online_map);
 		watchdog_timer.expires += WATCHDOG_INTERVAL;
 		add_timer_on(&watchdog_timer, next_cpu);
 	}
 	spin_unlock(&watchdog_lock);
 }
@@ -179,7 +180,7 @@ static void clocksource_check_watchdog(s
 			watchdog_last = watchdog->read();
 			watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
 			add_timer_on(&watchdog_timer,
-				     first_cpu(cpu_online_map));
+				     cpus_first(cpu_online_map));
 		}
 	} else {
 		if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS)
@@ -201,7 +202,7 @@ static void clocksource_check_watchdog(s
 				watchdog_timer.expires =
 					jiffies + WATCHDOG_INTERVAL;
 				add_timer_on(&watchdog_timer,
-					     first_cpu(cpu_online_map));
+					     cpus_first(cpu_online_map));
 			}
 		}
 	}
--- struct-cpumasks.orig/kernel/time/tick-broadcast.c
+++ struct-cpumasks/kernel/time/tick-broadcast.c
@@ -148,7 +148,7 @@ static void tick_do_broadcast(cpumask_t 
 		 * one of the first device. This works as long as we have this
 		 * misfeature only on x86 (lapic)
 		 */
-		cpu = first_cpu(mask);
+		cpu = cpus_first(mask);
 		td = &per_cpu(tick_cpu_device, cpu);
 		td->evtdev->broadcast(mask);
 	}
--- struct-cpumasks.orig/kernel/time/tick-common.c
+++ struct-cpumasks/kernel/time/tick-common.c
@@ -299,7 +299,7 @@ static void tick_shutdown(unsigned int *
 	}
 	/* Transfer the do_timer job away from this cpu */
 	if (*cpup == tick_do_timer_cpu) {
-		int cpu = first_cpu(cpu_online_map);
+		int cpu = cpus_first(cpu_online_map);
 
 		tick_do_timer_cpu = (cpu != NR_CPUS) ? cpu :
 			TICK_DO_TIMER_NONE;
--- struct-cpumasks.orig/kernel/workqueue.c
+++ struct-cpumasks/kernel/workqueue.c
@@ -968,7 +968,7 @@ undo:
 void __init init_workqueues(void)
 {
 	cpu_populated_map = cpu_online_map;
-	singlethread_cpu = first_cpu(cpu_possible_map);
+	singlethread_cpu = cpus_first(cpu_possible_map);
 	cpu_singlethread_map = cpumask_of_cpu(singlethread_cpu);
 	hotcpu_notifier(workqueue_cpu_callback, 0);
 	keventd_wq = create_workqueue("events");
--- struct-cpumasks.orig/net/core/dev.c
+++ struct-cpumasks/net/core/dev.c
@@ -4550,7 +4550,7 @@ static void net_dma_rebalance(struct net
 	}
 
 	i = 0;
-	cpu = first_cpu(cpu_online_map);
+	cpu = cpus_first(cpu_online_map);
 
 	for_each_cpu(chan_idx, net_dma->channel_mask) {
 		chan = net_dma->channels[chan_idx];
@@ -4561,7 +4561,7 @@ static void net_dma_rebalance(struct net
 
 		while(n) {
 			per_cpu(softnet_data, cpu).net_dma = chan;
-			cpu = next_cpu(cpu, cpu_online_map);
+			cpu = cpus_next(cpu, cpu_online_map);
 			n--;
 		}
 		i++;
--- struct-cpumasks.orig/net/iucv/iucv.c
+++ struct-cpumasks/net/iucv/iucv.c
@@ -496,7 +496,7 @@ static void iucv_setmask_up(void)
 
 	/* Disable all cpu but the first in cpu_irq_cpumask. */
 	cpumask = iucv_irq_cpumask;
-	cpu_clear(first_cpu(iucv_irq_cpumask), cpumask);
+	cpu_clear(cpus_first(iucv_irq_cpumask), cpumask);
 	for_each_cpu(cpu, cpumask)
 		smp_call_function_single(cpu, iucv_block_cpu, NULL, 1);
 }
@@ -596,7 +596,7 @@ static int __cpuinit iucv_cpu_notify(str
 			return NOTIFY_BAD;
 		smp_call_function_single(cpu, iucv_retrieve_cpu, NULL, 1);
 		if (cpus_empty(iucv_irq_cpumask))
-			smp_call_function_single(first_cpu(iucv_buffer_cpumask),
+			smp_call_function_single(cpus_first(iucv_buffer_cpumask),
 						 iucv_allow_cpu, NULL, 1);
 		break;
 	}
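
For reference, the substitution applied throughout this patch, shown on
an illustrative caller (hypothetical code, not taken from any file
above); arguments and return values are unchanged, only the helper
names move to the cpus_* prefix:

	/* old names */
	cpu = first_cpu(cpu_online_map);	/* lowest cpu set in the mask */
	cpu = next_cpu(cpu, cpu_online_map);	/* next cpu set after 'cpu' */

	/* new names */
	cpu = cpus_first(cpu_online_map);
	cpu = cpus_next(cpu, cpu_online_map);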

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 15/31] cpumask: remove node_to_cpumask_ptr
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (13 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 14/31] cpumask: change first/next_cpu to cpus_first/next Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 16/31] cpumask: clean apic files Mike Travis
                   ` (15 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: node_to_cpumask_ptr --]
[-- Type: text/plain, Size: 14573 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/setup_percpu.c |   65 +++++++++++++----------------------------
 drivers/base/node.c            |    2 -
 drivers/pci/pci-driver.c       |    7 +---
 include/asm-generic/topology.h |   13 --------
 include/asm-x86/topology.h     |   56 ++++++++++-------------------------
 include/linux/topology.h       |    3 -
 kernel/sched.c                 |   10 +++---
 mm/page_alloc.c                |    2 -
 mm/quicklist.c                 |    2 -
 mm/slab.c                      |    2 -
 mm/vmscan.c                    |    4 +-
 net/sunrpc/svc.c               |    7 ----
 12 files changed, 55 insertions(+), 118 deletions(-)
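
Every file below is converted with the same pattern; a condensed
before/after sketch, modelled on the mm/vmscan.c hunk (illustrative
only; 'tsk' and 'pgdat' stand for the surrounding code):

	/* old: node_to_cpumask_ptr() declared a pointer, and in the
	 * asm-generic fallback also a hidden on-stack copy */
	node_to_cpumask_ptr(cpumask, pgdat->node_id);
	if (!cpus_empty(*cpumask))
		set_cpus_allowed(tsk, cpumask);

	/* new: node_to_cpumask() hands back a const cpumask directly,
	 * with no on-stack copy */
	const cpumask_t cpumask = node_to_cpumask(pgdat->node_id);
	if (!cpus_empty(*cpumask))
		set_cpus_allowed(tsk, cpumask);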

--- struct-cpumasks.orig/arch/x86/kernel/setup_percpu.c
+++ struct-cpumasks/arch/x86/kernel/setup_percpu.c
@@ -41,7 +41,7 @@ DEFINE_EARLY_PER_CPU(int, x86_cpu_to_nod
 EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_node_map);
 
 /* which logical CPUs are on which nodes */
-cpumask_t *node_to_cpumask_map;
+const cpumask_t node_to_cpumask_map;
 EXPORT_SYMBOL(node_to_cpumask_map);
 
 /* setup node_to_cpumask_map */
@@ -210,7 +210,8 @@ void __init setup_per_cpu_areas(void)
 static void __init setup_node_to_cpumask_map(void)
 {
 	unsigned int node, num = 0;
-	cpumask_t *map;
+	cpumask_t *map = (cpumask_t *)&node_to_cpumask_map;
+	cpumask_t newmap;
 
 	/* setup nr_node_ids if not done yet */
 	if (nr_node_ids == MAX_NUMNODES) {
@@ -220,13 +221,13 @@ static void __init setup_node_to_cpumask
 	}
 
 	/* allocate the map */
-	map = alloc_bootmem_low(nr_node_ids * sizeof(cpumask_t));
+	newmap = alloc_bootmem_low(nr_node_ids * cpumask_size());
 
 	pr_debug(KERN_DEBUG "Node to cpumask map at %p for %d nodes\n",
-		 map, nr_node_ids);
+		 newmap, nr_node_ids);
 
 	/* node_to_cpumask() will now work */
-	node_to_cpumask_map = map;
+	*map = (cpumask_t)newmap;
 }
 
 void __cpuinit numa_set_node(int cpu, int node)
@@ -255,12 +256,16 @@ void __cpuinit numa_clear_node(int cpu)
 
 void __cpuinit numa_add_cpu(int cpu)
 {
-	cpu_set(cpu, node_to_cpumask_map[early_cpu_to_node(cpu)]);
+	cpumask_t map = (cpumask_t)node_to_cpumask(early_cpu_to_node(cpu));
+
+	cpu_set(cpu, map);
 }
 
 void __cpuinit numa_remove_cpu(int cpu)
 {
-	cpu_clear(cpu, node_to_cpumask_map[cpu_to_node(cpu)]);
+	cpumask_t map = (cpumask_t)node_to_cpumask(early_cpu_to_node(cpu));
+
+	cpu_clear(cpu, map);
 }
 
 #else /* CONFIG_DEBUG_PER_CPU_MAPS */
@@ -271,7 +276,7 @@ void __cpuinit numa_remove_cpu(int cpu)
 static void __cpuinit numa_set_cpumask(int cpu, int enable)
 {
 	int node = cpu_to_node(cpu);
-	cpumask_t *mask;
+	cpumask_t mask;
 	char buf[64];
 
 	if (node_to_cpumask_map == NULL) {
@@ -280,13 +285,13 @@ static void __cpuinit numa_set_cpumask(i
 		return;
 	}
 
-	mask = &node_to_cpumask_map[node];
+	mask = (cpumask_t)node_to_cpumask(early_cpu_to_node(cpu));
 	if (enable)
-		cpu_set(cpu, *mask);
+		cpu_set(cpu, mask);
 	else
-		cpu_clear(cpu, *mask);
+		cpu_clear(cpu, mask);
 
-	cpulist_scnprintf(buf, sizeof(buf), *mask);
+	cpulist_scnprintf(buf, sizeof(buf), mask);
 	printk(KERN_DEBUG "%s cpu %d node %d: mask now %s\n",
 		enable? "numa_add_cpu":"numa_remove_cpu", cpu, node, buf);
  }
@@ -333,54 +338,28 @@ int early_cpu_to_node(int cpu)
 
 
 /* empty cpumask */
-static const cpumask_t cpu_mask_none;
+static const cpumask_map_t cpu_mask_none;
 
 /*
  * Returns a pointer to the bitmask of CPUs on Node 'node'.
  */
-const cpumask_t *_node_to_cpumask_ptr(int node)
+const_cpumask_t node_to_cpumask(int node)
 {
 	if (node_to_cpumask_map == NULL) {
 		printk(KERN_WARNING
 			"_node_to_cpumask_ptr(%d): no node_to_cpumask_map!\n",
 			node);
 		dump_stack();
-		return (const cpumask_t *)&cpu_online_map;
+		return (const cpumask_t)cpu_online_map;
 	}
 	if (node >= nr_node_ids) {
 		printk(KERN_WARNING
 			"_node_to_cpumask_ptr(%d): node > nr_node_ids(%d)\n",
 			node, nr_node_ids);
 		dump_stack();
-		return &cpu_mask_none;
-	}
-	return &node_to_cpumask_map[node];
-}
-EXPORT_SYMBOL(_node_to_cpumask_ptr);
-
-/*
- * Returns a bitmask of CPUs on Node 'node'.
- *
- * Side note: this function creates the returned cpumask on the stack
- * so with a high NR_CPUS count, excessive stack space is used.  The
- * node_to_cpumask_ptr function should be used whenever possible.
- */
-cpumask_t node_to_cpumask(int node)
-{
-	if (node_to_cpumask_map == NULL) {
-		printk(KERN_WARNING
-			"node_to_cpumask(%d): no node_to_cpumask_map!\n", node);
-		dump_stack();
-		return cpu_online_map;
-	}
-	if (node >= nr_node_ids) {
-		printk(KERN_WARNING
-			"node_to_cpumask(%d): node > nr_node_ids(%d)\n",
-			node, nr_node_ids);
-		dump_stack();
-		return cpu_mask_none;
+		return (const cpumask_t)cpu_mask_none;
 	}
-	return node_to_cpumask_map[node];
+	return (const cpumask_t)&node_to_cpumask_map[node];
 }
 EXPORT_SYMBOL(node_to_cpumask);
 
--- struct-cpumasks.orig/drivers/base/node.c
+++ struct-cpumasks/drivers/base/node.c
@@ -22,7 +22,7 @@ static struct sysdev_class node_class = 
 static ssize_t node_read_cpumap(struct sys_device *dev, int type, char *buf)
 {
 	struct node *node_dev = to_node(dev);
-	node_to_cpumask_ptr(mask, node_dev->sysdev.id);
+	const cpumask_t mask = node_to_cpumask(node_dev->sysdev.id);
 	int len;
 
 	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
--- struct-cpumasks.orig/drivers/pci/pci-driver.c
+++ struct-cpumasks/drivers/pci/pci-driver.c
@@ -183,10 +183,9 @@ static int pci_call_probe(struct pci_dri
 	cpumask_t oldmask = current->cpus_allowed;
 	int node = dev_to_node(&dev->dev);
 
-	if (node >= 0) {
-		node_to_cpumask_ptr(nodecpumask, node);
-		set_cpus_allowed(current, nodecpumask);
-	}
+	if (node >= 0)
+		set_cpus_allowed(current, node_to_cpumask(node));
+
 	/* And set default memory allocation policy */
 	oldpol = current->mempolicy;
 	current->mempolicy = NULL;	/* fall back to system default policy */
--- struct-cpumasks.orig/include/asm-generic/topology.h
+++ struct-cpumasks/include/asm-generic/topology.h
@@ -50,21 +50,10 @@
 #ifndef pcibus_to_cpumask
 #define pcibus_to_cpumask(bus)	(pcibus_to_node(bus) == -1 ? \
 					cpu_mask_all : \
 					node_to_cpumask(pcibus_to_node(bus)) \
 				)
 #endif
 
 #endif	/* CONFIG_NUMA */
 
-/* returns pointer to cpumask for specified node */
-#ifndef node_to_cpumask_ptr
-
-#define	node_to_cpumask_ptr(v, node) 					\
-		cpumask_t _##v = node_to_cpumask(node);			\
-		const cpumask_t *v = &_##v
-
-#define node_to_cpumask_ptr_next(v, node)				\
-			  _##v = node_to_cpumask(node)
-#endif
-
 #endif /* _ASM_GENERIC_TOPOLOGY_H */
--- struct-cpumasks.orig/include/asm-x86/topology.h
+++ struct-cpumasks/include/asm-x86/topology.h
@@ -45,7 +45,7 @@
 #ifdef CONFIG_X86_32
 
 /* Mappings between node number and cpus on that node. */
-extern cpumask_t node_to_cpumask_map[];
+extern const cpumask_map_t node_to_cpumask_map[NR_CPUS];
 
 /* Mappings between logical cpu number and node number */
 extern int cpu_to_node_map[];
@@ -57,21 +57,16 @@ static inline int cpu_to_node(int cpu)
 }
 #define early_cpu_to_node(cpu)	cpu_to_node(cpu)
 
-/* Returns a bitmask of CPUs on Node 'node'.
- *
- * Side note: this function creates the returned cpumask on the stack
- * so with a high NR_CPUS count, excessive stack space is used.  The
- * node_to_cpumask_ptr function should be used whenever possible.
- */
-static inline cpumask_t node_to_cpumask(int node)
+/* Returns a bitmask of CPUs on Node 'node'. */
+static inline const cpumask_t node_to_cpumask(int node)
 {
-	return node_to_cpumask_map[node];
+	return (const cpumask_t)&node_to_cpumask_map[node];
 }
 
 #else /* CONFIG_X86_64 */
 
 /* Mappings between node number and cpus on that node. */
-extern cpumask_t *node_to_cpumask_map;
+extern const cpumask_t node_to_cpumask_map;
 
 /* Mappings between logical cpu number and node number */
 DECLARE_EARLY_PER_CPU(int, x86_cpu_to_node_map);
@@ -82,8 +77,10 @@ DECLARE_EARLY_PER_CPU(int, x86_cpu_to_no
 #ifdef CONFIG_DEBUG_PER_CPU_MAPS
 extern int cpu_to_node(int cpu);
 extern int early_cpu_to_node(int cpu);
-extern const cpumask_t *_node_to_cpumask_ptr(int node);
-extern cpumask_t node_to_cpumask(int node);
+/* XXX - "const" causes:
+ * "warning: type qualifiers ignored on function return type" */
+//extern const struct __cpumask_s *node_to_cpumask(int node);
+extern const_cpumask_t node_to_cpumask(int node);
 
 #else	/* !CONFIG_DEBUG_PER_CPU_MAPS */
 
@@ -103,26 +100,16 @@ static inline int early_cpu_to_node(int 
 }
 
 /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
-static inline const cpumask_t *_node_to_cpumask_ptr(int node)
+static inline const_cpumask_t node_to_cpumask(int node)
 {
-	return &node_to_cpumask_map[node];
-}
+	char *map = (char *)node_to_cpumask_map;
 
-/* Returns a bitmask of CPUs on Node 'node'. */
-static inline cpumask_t node_to_cpumask(int node)
-{
-	return node_to_cpumask_map[node];
+	map += node * BITS_TO_LONGS(nr_cpu_ids) * sizeof(long);
+	return (const cpumask_t)map;
 }
 
 #endif /* !CONFIG_DEBUG_PER_CPU_MAPS */
 
-/* Replace default node_to_cpumask_ptr with optimized version */
-#define node_to_cpumask_ptr(v, node)		\
-		const cpumask_t *v = _node_to_cpumask_ptr(node)
-
-#define node_to_cpumask_ptr_next(v, node)	\
-			   v = _node_to_cpumask_ptr(node)
-
 #endif /* CONFIG_X86_64 */
 
 /*
@@ -186,25 +173,15 @@ extern int __node_distance(int, int);
 #define	cpu_to_node(cpu)	0
 #define	early_cpu_to_node(cpu)	0
 
-static inline const cpumask_t *_node_to_cpumask_ptr(int node)
+static inline const_cpumask_t node_to_cpumask(int node)
 {
-	return &cpu_online_map;
-}
-static inline cpumask_t node_to_cpumask(int node)
-{
-	return cpu_online_map;
+	return (const cpumask_t)cpu_online_map;
 }
 static inline int node_to_first_cpu(int node)
 {
 	return cpus_first(cpu_online_map);
 }
 
-/* Replace default node_to_cpumask_ptr with optimized version */
-#define node_to_cpumask_ptr(v, node)		\
-		const cpumask_t *v = _node_to_cpumask_ptr(node)
-
-#define node_to_cpumask_ptr_next(v, node)	\
-			   v = _node_to_cpumask_ptr(node)
 #endif
 
 #include <asm-generic/topology.h>
@@ -213,8 +190,7 @@ static inline int node_to_first_cpu(int 
 /* Returns the number of the first CPU on Node 'node'. */
 static inline int node_to_first_cpu(int node)
 {
-	node_to_cpumask_ptr(mask, node);
-	return cpus_first(*mask);
+	return cpus_first((const cpumask_t)node_to_cpumask(node));
 }
 #endif
 
--- struct-cpumasks.orig/include/linux/topology.h
+++ struct-cpumasks/include/linux/topology.h
@@ -40,8 +40,7 @@
 #ifndef nr_cpus_node
 #define nr_cpus_node(node)				\
 	({						\
-		node_to_cpumask_ptr(__tmp__, node);	\
-		cpus_weight(*__tmp__);			\
+		cpus_weight(node_to_cpumask(node));	\
 	})
 #endif
 
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -6147,7 +6147,7 @@ static void move_task_off_dead_cpu(int d
 
 	do {
 		/* On same node? */
 		mask = node_to_cpumask(cpu_to_node(dead_cpu));
 		cpus_and(mask, mask, p->cpus_allowed);
 		dest_cpu = any_online_cpu(mask);
 
@@ -7044,7 +7044,7 @@ static int find_next_best_node(int node,
 static void sched_domain_node_span(int node, cpumask_t *span)
 {
 	nodemask_t used_nodes;
-	node_to_cpumask_ptr(nodemask, node);
+	cpumask_t nodemask = node_to_cpumask(node);
 	int i;
 
 	cpus_clear(*span);
@@ -7056,7 +7056,7 @@ static void sched_domain_node_span(int n
 	for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
 		int next_node = find_next_best_node(node, &used_nodes);
 
-		node_to_cpumask_ptr_next(nodemask, next_node);
+		nodemask = node_to_cpumask(next_node);
 		cpus_or(*span, *span, *nodemask);
 	}
 }
@@ -7155,7 +7155,7 @@ static int cpu_to_allnodes_group(int cpu
 {
 	int group;
 
-	*nodemask = node_to_cpumask(cpu_to_node(cpu));
+	nodemask = node_to_cpumask(cpu_to_node(cpu));
 	cpus_and(*nodemask, *nodemask, *cpu_map);
 	group = cpus_first(*nodemask);
 
@@ -7602,7 +7602,7 @@ static int __build_sched_domains(const c
 		for (j = 0; j < nr_node_ids; j++) {
 			SCHED_CPUMASK_VAR(notcovered, allmasks);
 			int n = (i + j) % nr_node_ids;
-			node_to_cpumask_ptr(pnodemask, n);
+			const cpumask_t pnodemask = node_to_cpumask(n);
 
 			cpus_complement(*notcovered, *covered);
 			cpus_and(*tmpmask, *notcovered, *cpu_map);
--- struct-cpumasks.orig/mm/page_alloc.c
+++ struct-cpumasks/mm/page_alloc.c
@@ -2080,7 +2080,7 @@ static int find_next_best_node(int node,
 	int n, val;
 	int min_val = INT_MAX;
 	int best_node = -1;
-	node_to_cpumask_ptr(tmp, 0);
+	const cpumask_t tmp = node_to_cpumask(0);
 
 	/* Use the local node if we haven't already */
 	if (!node_isset(node, *used_node_mask)) {
--- struct-cpumasks.orig/mm/quicklist.c
+++ struct-cpumasks/mm/quicklist.c
@@ -29,7 +29,7 @@ static unsigned long max_pages(unsigned 
 	int node = numa_node_id();
 	struct zone *zones = NODE_DATA(node)->node_zones;
 	int num_cpus_on_node;
-	node_to_cpumask_ptr(cpumask_on_node, node);
+	const cpumask_t cpumask_on_node = node_to_cpumask(node);
 
 	node_free_pages =
 #ifdef CONFIG_ZONE_DMA
--- struct-cpumasks.orig/mm/slab.c
+++ struct-cpumasks/mm/slab.c
@@ -1079,7 +1079,7 @@ static void __cpuinit cpuup_canceled(lon
 	struct kmem_cache *cachep;
 	struct kmem_list3 *l3 = NULL;
 	int node = cpu_to_node(cpu);
-	node_to_cpumask_ptr(mask, node);
+	const cpumask_t mask = node_to_cpumask(node);
 
 	list_for_each_entry(cachep, &cache_chain, next) {
 		struct array_cache *nc;
--- struct-cpumasks.orig/mm/vmscan.c
+++ struct-cpumasks/mm/vmscan.c
@@ -1687,7 +1687,7 @@ static int kswapd(void *p)
 	struct reclaim_state reclaim_state = {
 		.reclaimed_slab = 0,
 	};
-	node_to_cpumask_ptr(cpumask, pgdat->node_id);
+	const cpumask_t cpumask = node_to_cpumask(pgdat->node_id);
 
 	if (!cpus_empty(*cpumask))
 		set_cpus_allowed(tsk, cpumask);
@@ -1924,7 +1924,7 @@ static int __devinit cpu_callback(struct
 	if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN) {
 		for_each_node_state(nid, N_HIGH_MEMORY) {
 			pg_data_t *pgdat = NODE_DATA(nid);
-			node_to_cpumask_ptr(mask, pgdat->node_id);
+			const cpumask_t mask = node_to_cpumask(pgdat->node_id);
 
 			if (any_online_cpu(*mask) < nr_cpu_ids)
 				/* One of our CPUs online: restore mask */
--- struct-cpumasks.orig/net/sunrpc/svc.c
+++ struct-cpumasks/net/sunrpc/svc.c
@@ -309,17 +309,12 @@ svc_pool_map_set_cpumask(struct task_str
 
 	switch (m->mode) {
 	case SVC_POOL_PERCPU:
-	{
 		set_cpus_allowed(task, cpumask_of_cpu(node));
 		break;
-	}
 	case SVC_POOL_PERNODE:
-	{
-		node_to_cpumask_ptr(nodecpumask, node);
-		set_cpus_allowed(task, nodecpumask);
+		set_cpus_allowed(task, node_to_cpumask(node));
 		break;
 	}
-	}
 }
 
 /*

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 16/31] cpumask: clean apic files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (14 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 15/31] cpumask: remove node_to_cpumask_ptr Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 17/31] cpumask: clean cpufreq files Mike Travis
                   ` (14 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-apic --]
[-- Type: text/plain, Size: 33628 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/apic.c                   |   10 -
 arch/x86/kernel/genapic_flat_64.c        |   38 +++----
 arch/x86/kernel/genx2apic_cluster.c      |   16 +--
 arch/x86/kernel/genx2apic_phys.c         |   16 +--
 arch/x86/kernel/genx2apic_uv_x.c         |   18 +--
 arch/x86/kernel/io_apic.c                |  157 +++++++++++++++----------------
 arch/x86/kernel/ipi.c                    |    6 -
 include/asm-x86/bigsmp/apic.h            |    2 
 include/asm-x86/es7000/apic.h            |    2 
 include/asm-x86/genapic_32.h             |    4 
 include/asm-x86/genapic_64.h             |    8 -
 include/asm-x86/ipi.h                    |    4 
 include/asm-x86/mach-default/mach_apic.h |    2 
 include/asm-x86/mach-default/mach_ipi.h  |   10 -
 include/asm-x86/numaq/apic.h             |    2 
 include/asm-x86/numaq/ipi.h              |    6 -
 include/asm-x86/summit/apic.h            |    2 
 include/asm-x86/summit/ipi.h             |    6 -
 18 files changed, 156 insertions(+), 153 deletions(-)
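
The recurring change in the files below, condensed into one
before/after sketch modelled on the io_apic.c hunks (illustrative only;
'cfg', 'irq' and 'dest' stand for the surrounding code):

	/* old: on-stack cpumask_t, passed around by address */
	cpumask_t tmp;

	TARGET_CPUS(&tmp);
	if (assign_irq_vector(irq, &tmp))
		return;
	cpus_and(tmp, cfg->domain, tmp);
	dest = cpu_mask_to_apicid(&tmp);
	send_IPI_mask(&tmp, IRQ_MOVE_CLEANUP_VECTOR);

	/* new: cpumask_var_t temporary, masks passed as plain
	 * cpumask_t/const_cpumask_t arguments */
	cpumask_var_t tmp;

	TARGET_CPUS(tmp);
	if (assign_irq_vector(irq, tmp))
		return;
	cpus_and(tmp, cfg->domain, tmp);
	dest = cpu_mask_to_apicid(tmp);
	send_IPI_mask(tmp, IRQ_MOVE_CLEANUP_VECTOR);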

--- struct-cpumasks.orig/arch/x86/kernel/apic.c
+++ struct-cpumasks/arch/x86/kernel/apic.c
@@ -141,7 +141,7 @@ static int lapic_next_event(unsigned lon
 			    struct clock_event_device *evt);
 static void lapic_timer_setup(enum clock_event_mode mode,
 			      struct clock_event_device *evt);
-static void lapic_timer_broadcast(cpumask_t mask);
+static void lapic_timer_broadcast(const_cpumask_t mask);
 static void apic_pm_activate(void);
 
 /*
@@ -457,10 +457,10 @@ static void lapic_timer_setup(enum clock
 /*
  * Local APIC timer broadcast function
  */
-static void lapic_timer_broadcast(cpumask_t mask)
+static void lapic_timer_broadcast(const_cpumask_t mask)
 {
 #ifdef CONFIG_SMP
-	send_IPI_mask(&mask, LOCAL_TIMER_VECTOR);
+	send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
 #endif
 }
 
@@ -473,7 +473,7 @@ static void __cpuinit setup_APIC_timer(v
 	struct clock_event_device *levt = &__get_cpu_var(lapic_events);
 
 	memcpy(levt, &lapic_clockevent, sizeof(*levt));
-	levt->cpumask = cpumask_of_cpu(smp_processor_id());
+	cpus_copy(levt->cpumask, cpumask_of_cpu(smp_processor_id()));
 
 	clockevents_register_device(levt);
 }
@@ -1836,7 +1836,7 @@ void disconnect_bsp_APIC(int virt_wire_s
 void __cpuinit generic_processor_info(int apicid, int version)
 {
 	int cpu;
-	cpumask_t tmp_map;
+	cpumask_var_t tmp_map;
 
 	/*
 	 * Validate version
--- struct-cpumasks.orig/arch/x86/kernel/genapic_flat_64.c
+++ struct-cpumasks/arch/x86/kernel/genapic_flat_64.c
@@ -30,12 +30,12 @@ static int __init flat_acpi_madt_oem_che
 	return 1;
 }
 
-static void flat_target_cpus(cpumask_t *retmask)
+static void flat_target_cpus(cpumask_t retmask)
 {
-	*retmask = cpu_online_map;
+	cpus_copy(retmask, cpu_online_map);
 }
 
-static void flat_vector_allocation_domain(int cpu, cpumask_t *retmask)
+static void flat_vector_allocation_domain(int cpu, cpumask_t retmask)
 {
 	/* Careful. Some cpus do not strictly honor the set of cpus
 	 * specified in the interrupt destination when using lowest
@@ -45,7 +45,9 @@ static void flat_vector_allocation_domai
 	 * deliver interrupts to the wrong hyperthread when only one
 	 * hyperthread was specified in the interrupt desitination.
 	 */
-	*retmask = (cpumask_t) { { [0] = APIC_ALL_CPUS, } };
+	static cpumask_map_t all_cpus = CPU_MASK_INIT(APIC_ALL_CPUS);
+
+	cpus_copy(retmask, all_cpus);
 }
 
 /*
@@ -77,9 +79,9 @@ static void inline _flat_send_IPI_mask(u
 	local_irq_restore(flags);
 }
 
-static void flat_send_IPI_mask(const cpumask_t *cpumask, int vector)
+static void flat_send_IPI_mask(const_cpumask_t cpumask, int vector)
 {
-	unsigned long mask = cpus_addr(*cpumask)[0];
+	unsigned long mask = cpus_addr(cpumask)[0];
 
 	_flat_send_IPI_mask(mask, vector);
 }
@@ -109,7 +111,7 @@ static void flat_send_IPI_allbutself(int
 static void flat_send_IPI_all(int vector)
 {
 	if (vector == NMI_VECTOR)
-		flat_send_IPI_mask(&cpu_online_map, vector);
+		flat_send_IPI_mask(cpu_online_map, vector);
 	else
 		__send_IPI_shortcut(APIC_DEST_ALLINC, vector, APIC_DEST_LOGICAL);
 }
@@ -143,9 +145,9 @@ static int flat_apic_id_registered(void)
 	return physid_isset(read_xapic_id(), phys_cpu_present_map);
 }
 
-static unsigned int flat_cpu_mask_to_apicid(const cpumask_t *cpumask)
+static unsigned int flat_cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
-	return cpus_addr(*cpumask)[0] & APIC_ALL_CPUS;
+	return cpus_addr(cpumask)[0] & APIC_ALL_CPUS;
 }
 
 static unsigned int phys_pkg_id(int index_msb)
@@ -196,33 +198,33 @@ static int __init physflat_acpi_madt_oem
 	return 0;
 }
 
-static cpumask_t physflat_target_cpus(void)
+static void physflat_target_cpus(cpumask_t retmask)
 {
-	return cpu_online_map;
+	cpus_copy(retmask, cpu_online_map);
 }
 
-static void physflat_vector_allocation_domain(int cpu, cpumask_t *retmask)
+static void physflat_vector_allocation_domain(int cpu, cpumask_t retmask)
 {
-	cpus_clear(*retmask);
-	cpu_set(cpu, *retmask);
+	cpus_clear(retmask);
+	cpu_set(cpu, retmask);
 }
 
-static void physflat_send_IPI_mask(const cpumask_t *cpumask, int vector)
+static void physflat_send_IPI_mask(const_cpumask_t cpumask, int vector)
 {
 	send_IPI_mask_sequence(cpumask, vector);
 }
 
 static void physflat_send_IPI_allbutself(int vector)
 {
-	send_IPI_mask_allbutself(&cpu_online_map, vector);
+	send_IPI_mask_allbutself(cpu_online_map, vector);
 }
 
 static void physflat_send_IPI_all(int vector)
 {
-	physflat_send_IPI_mask(&cpu_online_map, vector);
+	physflat_send_IPI_mask(cpu_online_map, vector);
 }
 
-static unsigned int physflat_cpu_mask_to_apicid(const cpumask_t cpumask)
+static unsigned int physflat_cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int cpu;
 
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_cluster.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_cluster.c
@@ -22,18 +22,18 @@ static int __init x2apic_acpi_madt_oem_c
 
 /* Start with all IRQs pointing to boot CPU.  IRQ balancing will shift them. */
 
-static cpumask_t x2apic_target_cpus(void)
+static void x2apic_target_cpus(cpumask_t retmask)
 {
-	return cpumask_of_cpu(0);
+	cpus_copy(retmask, cpumask_of_cpu(0));
 }
 
 /*
  * for now each logical cpu is in its own vector allocation domain.
  */
-static void x2apic_vector_allocation_domain(int cpu, cpumask_t *retmask)
+static void x2apic_vector_allocation_domain(int cpu, cpumask_t retmask)
 {
-	cpus_clear(*retmask);
-	cpu_set(cpu, *retmask);
+	cpus_clear(retmask);
+	cpu_set(cpu, retmask);
 }
 
 static void __x2apic_send_IPI_dest(unsigned int apicid, int vector,
@@ -55,7 +55,7 @@ static void __x2apic_send_IPI_dest(unsig
  * at once. We have 16 cpu's in a cluster. This will minimize IPI register
  * writes.
  */
-static void x2apic_send_IPI_mask(const cpumask_t *mask, int vector)
+static void x2apic_send_IPI_mask(const_cpumask_t mask, int vector)
 {
 	unsigned long flags;
 	unsigned long query_cpu;
@@ -85,7 +85,7 @@ static void x2apic_send_IPI_allbutself(i
 
 static void x2apic_send_IPI_all(int vector)
 {
-	x2apic_send_IPI_mask(&cpu_online_map, vector);
+	x2apic_send_IPI_mask(cpu_online_map, vector);
 }
 
 static int x2apic_apic_id_registered(void)
@@ -93,7 +93,7 @@ static int x2apic_apic_id_registered(voi
 	return 1;
 }
 
-static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t cpumask)
+static unsigned int x2apic_cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int cpu;
 
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_phys.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_phys.c
@@ -29,15 +29,15 @@ static int __init x2apic_acpi_madt_oem_c
 
 /* Start with all IRQs pointing to boot CPU.  IRQ balancing will shift them. */
 
-static cpumask_t x2apic_target_cpus(void)
+static void x2apic_target_cpus(cpumask_t retmask)
 {
-	return cpumask_of_cpu(0);
+	cpus_copy(retmask, cpumask_of_cpu(0));
 }
 
-static void x2apic_vector_allocation_domain(int cpu, cpumask_t *retmask)
+static void x2apic_vector_allocation_domain(int cpu, cpumask_t retmask)
 {
-	cpus_clear(*retmask);
-	cpu_set(cpu, *retmask);
+	cpus_clear(retmask);
+	cpu_set(cpu, retmask);
 }
 
 static void __x2apic_send_IPI_dest(unsigned int apicid, int vector,
@@ -53,7 +53,7 @@ static void __x2apic_send_IPI_dest(unsig
 	x2apic_icr_write(cfg, apicid);
 }
 
-static void x2apic_send_IPI_mask(const cpumask_t *mask, int vector)
+static void x2apic_send_IPI_mask(const_cpumask_t mask, int vector)
 {
 	unsigned long flags;
 	unsigned long query_cpu;
@@ -82,7 +82,7 @@ static void x2apic_send_IPI_allbutself(i
 
 static void x2apic_send_IPI_all(int vector)
 {
-	x2apic_send_IPI_mask(&cpu_online_map, vector);
+	x2apic_send_IPI_mask(cpu_online_map, vector);
 }
 
 static int x2apic_apic_id_registered(void)
@@ -90,7 +90,7 @@ static int x2apic_apic_id_registered(voi
 	return 1;
 }
 
-static unsigned int x2apic_cpu_mask_to_apicid(const cpumask_t cpumask)
+static unsigned int x2apic_cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int cpu;
 
--- struct-cpumasks.orig/arch/x86/kernel/genx2apic_uv_x.c
+++ struct-cpumasks/arch/x86/kernel/genx2apic_uv_x.c
@@ -76,15 +76,15 @@ EXPORT_SYMBOL(sn_rtc_cycles_per_second);
 
 /* Start with all IRQs pointing to boot CPU.  IRQ balancing will shift them. */
 
-static cpumask_t uv_target_cpus(void)
+static void uv_target_cpus(cpumask_t retmask)
 {
-	return cpumask_of_cpu(0);
+	cpus_copy(retmask, cpumask_of_cpu(0));
 }
 
-static void uv_vector_allocation_domain(int cpu, cpumask_t *retmask)
+static void uv_vector_allocation_domain(int cpu, cpumask_t retmask)
 {
-	cpus_clear(*retmask);
-	cpu_set(cpu, *retmask);
+	cpus_clear(retmask);
+	cpu_set(cpu, retmask);
 }
 
 int uv_wakeup_secondary(int phys_apicid, unsigned int start_rip)
@@ -123,11 +123,11 @@ static void uv_send_IPI_one(int cpu, int
 	uv_write_global_mmr64(pnode, UVH_IPI_INT, val);
 }
 
-static void uv_send_IPI_mask(const cpumask_t *mask, int vector)
+static void uv_send_IPI_mask(const_cpumask_t mask, int vector)
 {
 	unsigned int cpu;
 
-	for_each_cpu_mask_nr(cpu, *mask)
+	for_each_cpu(cpu, mask)
 		uv_send_IPI_one(cpu, vector);
 }
 
@@ -143,7 +143,7 @@ static void uv_send_IPI_allbutself(int v
 
 static void uv_send_IPI_all(int vector)
 {
-	uv_send_IPI_mask(&cpu_online_map, vector);
+	uv_send_IPI_mask(cpu_online_map, vector);
 }
 
 static int uv_apic_id_registered(void)
@@ -155,7 +155,7 @@ static void uv_init_apic_ldr(void)
 {
 }
 
-static unsigned int uv_cpu_mask_to_apicid(const cpumask_t cpumask)
+static unsigned int uv_cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int cpu;
 
--- struct-cpumasks.orig/arch/x86/kernel/io_apic.c
+++ struct-cpumasks/arch/x86/kernel/io_apic.c
@@ -113,8 +113,8 @@ struct irq_cfg {
 	struct irq_cfg *next;
 #endif
 	struct irq_pin_list *irq_2_pin;
-	cpumask_t domain;
-	cpumask_t old_domain;
+	cpumask_map_t domain;
+	cpumask_map_t old_domain;
 	unsigned move_cleanup_count;
 	u8 vector;
 	u8 move_in_progress : 1;
@@ -529,14 +529,14 @@ static void __target_IO_APIC_irq(unsigne
 	}
 }
 
-static int assign_irq_vector(int irq, const cpumask_t *mask);
+static int assign_irq_vector(int irq, const_cpumask_t mask);
 
-static void set_ioapic_affinity_irq(unsigned int irq, cpumask_t mask)
+static void set_ioapic_affinity_irq(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	unsigned long flags;
 	unsigned int dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct irq_desc *desc;
 
 	cpus_and(tmp, mask, cpu_online_map);
@@ -544,11 +544,11 @@ static void set_ioapic_affinity_irq(unsi
 		return;
 
 	cfg = irq_cfg(irq);
-	if (assign_irq_vector(irq, &mask))
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 	/*
 	 * Only the high 8 bits are valid.
 	 */
@@ -557,7 +557,7 @@ static void set_ioapic_affinity_irq(unsi
 	desc = irq_to_desc(irq);
 	spin_lock_irqsave(&ioapic_lock, flags);
 	__target_IO_APIC_irq(irq, dest, cfg->vector);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 	spin_unlock_irqrestore(&ioapic_lock, flags);
 }
 #endif /* CONFIG_SMP */
@@ -1205,7 +1205,7 @@ void unlock_vector_lock(void)
 	spin_unlock(&vector_lock);
 }
 
-static int __assign_irq_vector(int irq, const cpumask_t *mask)
+static int __assign_irq_vector(int irq, const_cpumask_t mask)
 {
 	/*
 	 * NOTE! The local APIC isn't very good at handling
@@ -1222,7 +1222,7 @@ static int __assign_irq_vector(int irq, 
 	unsigned int old_vector;
 	int cpu;
 	struct irq_cfg *cfg;
-	cpumask_t tmpmask;
+	cpumask_var_t tmpmask;
 
 	cfg = irq_cfg(irq);
 
@@ -1231,7 +1231,7 @@ static int __assign_irq_vector(int irq, 
 
 	old_vector = cfg->vector;
 	if (old_vector) {
-		cpus_and(tmpmask, *mask, cpu_online_map);
+		cpus_and(tmpmask, mask, cpu_online_map);
 		cpus_and(tmpmask, tmpmask, cfg->domain);
 		if (!cpus_empty(tmpmask))
 			return 0;
@@ -1269,18 +1269,18 @@ next:
 		current_offset = offset;
 		if (old_vector) {
 			cfg->move_in_progress = 1;
-			cfg->old_domain = cfg->domain;
+			cpus_copy(cfg->old_domain, cfg->domain);
 		}
 		for_each_cpu_in(new_cpu, tmpmask, cpu_online_map)
 			per_cpu(vector_irq, new_cpu)[vector] = irq;
 		cfg->vector = vector;
-		cfg->domain = tmpmask;
+		cpus_copy(cfg->domain, tmpmask);
 		return 0;
 	}
 	return -ENOSPC;
 }
 
-static int assign_irq_vector(int irq, const cpumask_t *mask)
+static int assign_irq_vector(int irq, const_cpumask_t mask)
 {
 	int err;
 	unsigned long flags;
@@ -1294,7 +1294,7 @@ static int assign_irq_vector(int irq, co
 static void __clear_irq_vector(int irq)
 {
 	struct irq_cfg *cfg;
-	cpumask_t mask;
+	cpumask_var_t mask;
 	int cpu, vector;
 
 	cfg = irq_cfg(irq);
@@ -1473,15 +1473,15 @@ static void setup_IO_APIC_irq(int apic, 
 {
 	struct irq_cfg *cfg;
 	struct IO_APIC_route_entry entry;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
 	if (!IO_APIC_IRQ(irq))
 		return;
 
 	cfg = irq_cfg(irq);
 
-	TARGET_CPUS(&mask);
-	if (assign_irq_vector(irq, &mask))
+	TARGET_CPUS(mask);
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cpus_and(mask, cfg->domain, mask);
@@ -1494,7 +1494,7 @@ static void setup_IO_APIC_irq(int apic, 
 
 
 	if (setup_ioapic_entry(mp_ioapics[apic].mp_apicid, irq, &entry,
-			       cpu_mask_to_apicid(&mask), trigger, polarity,
+			       cpu_mask_to_apicid(mask), trigger, polarity,
 			       cfg->vector)) {
 		printk("Failed to setup ioapic entry for ioapic  %d, pin %d\n",
 		       mp_ioapics[apic].mp_apicid, pin);
@@ -1563,7 +1563,7 @@ static void __init setup_timer_IRQ0_pin(
 					int vector)
 {
 	struct IO_APIC_route_entry entry;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
 #ifdef CONFIG_INTR_REMAP
 	if (intr_remapping_enabled)
@@ -1571,7 +1571,7 @@ static void __init setup_timer_IRQ0_pin(
 #endif
 
 	memset(&entry, 0, sizeof(entry));
-	TARGET_CPUS(&mask);
+	TARGET_CPUS(mask);
 
 	/*
 	 * We use logical delivery to get the timer IRQ
@@ -1579,7 +1579,7 @@ static void __init setup_timer_IRQ0_pin(
 	 */
 	entry.dest_mode = INT_DEST_MODE;
 	entry.mask = 1;					/* mask IRQ now */
-	entry.dest = cpu_mask_to_apicid(&mask);
+	entry.dest = cpu_mask_to_apicid(mask);
 	entry.delivery_mode = INT_DELIVERY_MODE;
 	entry.polarity = 0;
 	entry.trigger = 0;
@@ -1919,7 +1919,8 @@ void __init enable_IO_APIC(void)
 			/* If the interrupt line is enabled and in ExtInt mode
 			 * I have found the pin where the i8259 is connected.
 			 */
-			if ((entry.mask == 0) && (entry.delivery_mode == dest_ExtINT)) {
+			if ((entry.mask == 0) &&
+				(entry.delivery_mode == dest_ExtINT)) {
 				ioapic_i8259.apic = apic;
 				ioapic_i8259.pin  = pin;
 				goto found_i8259;
@@ -2251,17 +2252,17 @@ static DECLARE_DELAYED_WORK(ir_migration
  * as simple as edge triggered migration and we can do the irq migration
  * with a simple atomic update to IO-APIC RTE.
  */
-static void migrate_ioapic_irq(int irq, const cpumask_t *mask)
+static void migrate_ioapic_irq(int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	struct irq_desc *desc;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct irte irte;
 	int modify_ioapic_rte;
 	unsigned int dest;
 	unsigned long flags;
 
-	cpus_and(tmp, *mask, cpu_online_map);
+	cpus_and(tmp, mask, cpu_online_map);
 	if (cpus_empty(tmp))
 		return;
 
@@ -2272,8 +2273,8 @@ static void migrate_ioapic_irq(int irq, 
 		return;
 
 	cfg = irq_cfg(irq);
-	cpus_and(tmp, cfg->domain, *mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	cpus_and(tmp, cfg->domain, mask);
+	dest = cpu_mask_to_apicid(tmp);
 
 	desc = irq_to_desc(irq);
 	modify_ioapic_rte = desc->status & IRQ_LEVEL;
@@ -2294,11 +2295,11 @@ static void migrate_ioapic_irq(int irq, 
 	if (cfg->move_in_progress) {
 		cpus_and(tmp, cfg->old_domain, cpu_online_map);
 		cfg->move_cleanup_count = cpus_weight(tmp);
-		send_IPI_mask(&tmp, IRQ_MOVE_CLEANUP_VECTOR);
+		send_IPI_mask(tmp, IRQ_MOVE_CLEANUP_VECTOR);
 		cfg->move_in_progress = 0;
 	}
 
-	desc->affinity = *mask;
+	cpus_copy(desc->affinity, mask);
 }
 
 static int migrate_irq_remapped_level(int irq)
@@ -2320,7 +2321,7 @@ static int migrate_irq_remapped_level(in
 	}
 
 	/* everthing is clear. we have right of way */
-	migrate_ioapic_irq(irq, &desc->pending_mask);
+	migrate_ioapic_irq(irq, desc->pending_mask);
 
 	ret = 0;
 	desc->status &= ~IRQ_MOVE_PENDING;
@@ -2357,18 +2358,18 @@ static void ir_irq_migration(struct work
 /*
  * Migrates the IRQ destination in the process context.
  */
-static void set_ir_ioapic_affinity_irq(unsigned int irq, cpumask_t mask)
+static void set_ir_ioapic_affinity_irq(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 
 	if (desc->status & IRQ_LEVEL) {
 		desc->status |= IRQ_MOVE_PENDING;
-		desc->pending_mask = mask;
+		cpus_copy(desc->pending_mask, mask);
 		migrate_irq_remapped_level(irq);
 		return;
 	}
 
-	migrate_ioapic_irq(irq, &mask);
+	migrate_ioapic_irq(irq, mask);
 }
 #endif
 
@@ -2420,11 +2421,11 @@ static void irq_complete_move(unsigned i
 	vector = ~get_irq_regs()->orig_ax;
 	me = smp_processor_id();
 	if ((vector == cfg->vector) && cpu_isset(me, cfg->domain)) {
-		cpumask_t cleanup_mask;
+		cpumask_var_t cleanup_mask;
 
 		cpus_and(cleanup_mask, cfg->old_domain, cpu_online_map);
 		cfg->move_cleanup_count = cpus_weight(cleanup_mask);
-		send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
+		send_IPI_mask(cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
 		cfg->move_in_progress = 0;
 	}
 }
@@ -2754,9 +2755,9 @@ static inline void __init check_timer(vo
 	unsigned long flags;
 	unsigned int ver;
 	int no_pin1 = 0;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
-	TARGET_CPUS(&mask);
+	TARGET_CPUS(mask);
 	local_irq_save(flags);
 
         ver = apic_read(APIC_LVR);
@@ -2766,7 +2767,7 @@ static inline void __init check_timer(vo
 	 * get/set the timer IRQ vector:
 	 */
 	disable_8259A_irq(0);
-	assign_irq_vector(0, &mask);
+	assign_irq_vector(0, mask);
 
 	/*
 	 * As IRQ0 is to be enabled in the 8259A, the virtual
@@ -3066,9 +3067,9 @@ unsigned int create_irq_nr(unsigned int 
 	unsigned int new;
 	unsigned long flags;
 	struct irq_cfg *cfg_new;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
-	TARGET_CPUS(&mask);
+	TARGET_CPUS(mask);
 #ifndef CONFIG_HAVE_SPARSE_IRQ
 	irq_want = nr_irqs - 1;
 #endif
@@ -3084,7 +3085,7 @@ unsigned int create_irq_nr(unsigned int 
 		/* check if need to create one */
 		if (!cfg_new)
 			cfg_new = irq_cfg_alloc(new);
-		if (__assign_irq_vector(new, &mask) == 0)
+		if (__assign_irq_vector(new, mask) == 0)
 			irq = new;
 		break;
 	}
@@ -3131,16 +3132,16 @@ static int msi_compose_msg(struct pci_de
 	struct irq_cfg *cfg;
 	int err;
 	unsigned dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 
-	TARGET_CPUS(&tmp);
-	err = assign_irq_vector(irq, &tmp);
+	TARGET_CPUS(tmp);
+	err = assign_irq_vector(irq, tmp);
 	if (err)
 		return err;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, tmp);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 
 #ifdef CONFIG_INTR_REMAP
 	if (irq_remapped(irq)) {
@@ -3194,24 +3195,24 @@ static int msi_compose_msg(struct pci_de
 }
 
 #ifdef CONFIG_SMP
-static void set_msi_irq_affinity(unsigned int irq, cpumask_t mask)
+static void set_msi_irq_affinity(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	struct msi_msg msg;
 	unsigned int dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct irq_desc *desc;
 
 	cpus_and(tmp, mask, cpu_online_map);
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, &mask))
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 
 	read_msi_msg(irq, &msg);
 
@@ -3222,7 +3223,7 @@ static void set_msi_irq_affinity(unsigne
 
 	write_msi_msg(irq, &msg);
 	desc = irq_to_desc(irq);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 }
 
 #ifdef CONFIG_INTR_REMAP
@@ -3230,11 +3231,11 @@ static void set_msi_irq_affinity(unsigne
  * Migrate the MSI irq to another cpumask. This migration is
  * done in the process context using interrupt-remapping hardware.
  */
-static void ir_set_msi_irq_affinity(unsigned int irq, cpumask_t mask)
+static void ir_set_msi_irq_affinity(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	unsigned int dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct irte irte;
 	struct irq_desc *desc;
 
@@ -3245,12 +3246,12 @@ static void ir_set_msi_irq_affinity(unsi
 	if (get_irte(irq, &irte))
 		return;
 
-	if (assign_irq_vector(irq, &mask))
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 
 	irte.vector = cfg->vector;
 	irte.dest_id = IRTE_DEST(dest);
@@ -3268,12 +3269,12 @@ static void ir_set_msi_irq_affinity(unsi
 	if (cfg->move_in_progress) {
 		cpus_and(tmp, cfg->old_domain, cpu_online_map);
 		cfg->move_cleanup_count = cpus_weight(tmp);
-		send_IPI_mask(&tmp, IRQ_MOVE_CLEANUP_VECTOR);
+		send_IPI_mask(tmp, IRQ_MOVE_CLEANUP_VECTOR);
 		cfg->move_in_progress = 0;
 	}
 
 	desc = irq_to_desc(irq);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 }
 #endif
 #endif /* CONFIG_SMP */
@@ -3473,24 +3474,24 @@ void arch_teardown_msi_irq(unsigned int 
 
 #ifdef CONFIG_DMAR
 #ifdef CONFIG_SMP
-static void dmar_msi_set_affinity(unsigned int irq, cpumask_t mask)
+static void dmar_msi_set_affinity(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	struct msi_msg msg;
 	unsigned int dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct irq_desc *desc;
 
 	cpus_and(tmp, mask, cpu_online_map);
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, &mask))
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 
 	dmar_msi_read(irq, &msg);
 
@@ -3501,7 +3502,7 @@ static void dmar_msi_set_affinity(unsign
 
 	dmar_msi_write(irq, &msg);
 	desc = irq_to_desc(irq);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 }
 #endif /* CONFIG_SMP */
 
@@ -3534,24 +3535,24 @@ int arch_setup_dmar_msi(unsigned int irq
 #ifdef CONFIG_HPET_TIMER
 
 #ifdef CONFIG_SMP
-static void hpet_msi_set_affinity(unsigned int irq, cpumask_t mask)
+static void hpet_msi_set_affinity(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	struct irq_desc *desc;
 	struct msi_msg msg;
 	unsigned int dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 
 	cpus_and(tmp, mask, cpu_online_map);
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, &mask))
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 
 	hpet_msi_read(irq, &msg);
 
@@ -3562,7 +3563,7 @@ static void hpet_msi_set_affinity(unsign
 
 	hpet_msi_write(irq, &msg);
 	desc = irq_to_desc(irq);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 }
 #endif /* CONFIG_SMP */
 
@@ -3615,27 +3616,27 @@ static void target_ht_irq(unsigned int i
 	write_ht_irq_msg(irq, &msg);
 }
 
-static void set_ht_irq_affinity(unsigned int irq, cpumask_t mask)
+static void set_ht_irq_affinity(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_cfg *cfg;
 	unsigned int dest;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct irq_desc *desc;
 
 	cpus_and(tmp, mask, cpu_online_map);
 	if (cpus_empty(tmp))
 		return;
 
-	if (assign_irq_vector(irq, &mask))
+	if (assign_irq_vector(irq, mask))
 		return;
 
 	cfg = irq_cfg(irq);
 	cpus_and(tmp, cfg->domain, mask);
-	dest = cpu_mask_to_apicid(&tmp);
+	dest = cpu_mask_to_apicid(tmp);
 
 	target_ht_irq(irq, dest, cfg->vector);
 	desc = irq_to_desc(irq);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 }
 #endif
 
@@ -3654,17 +3655,17 @@ int arch_setup_ht_irq(unsigned int irq, 
 {
 	struct irq_cfg *cfg;
 	int err;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 
-	TARGET_CPUS(&tmp);
-	err = assign_irq_vector(irq, &tmp);
+	TARGET_CPUS(tmp);
+	err = assign_irq_vector(irq, tmp);
 	if (!err) {
 		struct ht_irq_msg msg;
 		unsigned dest;
 
 		cfg = irq_cfg(irq);
 		cpus_and(tmp, cfg->domain, tmp);
-		dest = cpu_mask_to_apicid(&tmp);
+		dest = cpu_mask_to_apicid(tmp);
 
 		msg.address_hi = HT_IRQ_HIGH_DEST_ID(dest);
 
@@ -3870,12 +3871,12 @@ void __init setup_ioapic_dest(void)
 {
 	int pin, ioapic, irq, irq_entry;
 	struct irq_cfg *cfg;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
 	if (skip_ioapic_setup == 1)
 		return;
 
-	TARGET_CPUS(&mask);
+	TARGET_CPUS(mask);
 	for (ioapic = 0; ioapic < nr_ioapics; ioapic++) {
 		for (pin = 0; pin < nr_ioapic_registers[ioapic]; pin++) {
 			irq_entry = find_irq_entry(ioapic, pin, mp_INT);
--- struct-cpumasks.orig/arch/x86/kernel/ipi.c
+++ struct-cpumasks/arch/x86/kernel/ipi.c
@@ -116,7 +116,7 @@ static inline void __send_IPI_dest_field
 /*
  * This is only used on smaller machines.
  */
-void send_IPI_mask_bitmask(const cpumask_t *cpumask, int vector)
+void send_IPI_mask_bitmask(const_cpumask_t *cpumask, int vector)
 {
 	unsigned long mask = cpus_addr(*cpumask)[0];
 	unsigned long flags;
@@ -127,7 +127,7 @@ void send_IPI_mask_bitmask(const cpumask
 	local_irq_restore(flags);
 }
 
-void send_IPI_mask_sequence(const cpumask_t *mask, int vector)
+void send_IPI_mask_sequence(const_cpumask_t *mask, int vector)
 {
 	unsigned long flags;
 	unsigned int query_cpu;
@@ -144,7 +144,7 @@ void send_IPI_mask_sequence(const cpumas
 	local_irq_restore(flags);
 }
 
-void send_IPI_mask_allbutself(const cpumask_t *mask, int vector)
+void send_IPI_mask_allbutself(const_cpumask_t *mask, int vector)
 {
 	unsigned long flags;
 	unsigned int query_cpu;
--- struct-cpumasks.orig/include/asm-x86/bigsmp/apic.h
+++ struct-cpumasks/include/asm-x86/bigsmp/apic.h
@@ -121,7 +121,7 @@ static inline int check_phys_apicid_pres
 }
 
 /* As we are using single CPU as destination, pick only one CPU here */
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int cpu;
 	int apicid;	
--- struct-cpumasks.orig/include/asm-x86/es7000/apic.h
+++ struct-cpumasks/include/asm-x86/es7000/apic.h
@@ -144,7 +144,7 @@ static inline int check_phys_apicid_pres
 	return (1);
 }
 
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
--- struct-cpumasks.orig/include/asm-x86/genapic_32.h
+++ struct-cpumasks/include/asm-x86/genapic_32.h
@@ -56,12 +56,12 @@ struct genapic {
 
 	unsigned (*get_apic_id)(unsigned long x);
 	unsigned long apic_id_mask;
-	unsigned int (*cpu_mask_to_apicid)(const cpumask_t *cpumask);
+	unsigned int (*cpu_mask_to_apicid)(const_cpumask_t *cpumask);
 	void (*vector_allocation_domain)(int cpu, cpumask_t *retmask);
 
 #ifdef CONFIG_SMP
 	/* ipi */
-	void (*send_IPI_mask)(const cpumask_t *mask, int vector);
+	void (*send_IPI_mask)(const_cpumask_t *mask, int vector);
 	void (*send_IPI_allbutself)(int vector);
 	void (*send_IPI_all)(int vector);
 #endif
--- struct-cpumasks.orig/include/asm-x86/genapic_64.h
+++ struct-cpumasks/include/asm-x86/genapic_64.h
@@ -18,16 +18,16 @@ struct genapic {
 	u32 int_delivery_mode;
 	u32 int_dest_mode;
 	int (*apic_id_registered)(void);
-	void (*target_cpus)(cpumask_t *retmask);
-	void (*vector_allocation_domain)(int cpu, cpumask_t *retmask);
+	void (*target_cpus)(cpumask_t retmask);
+	void (*vector_allocation_domain)(int cpu, cpumask_t retmask);
 	void (*init_apic_ldr)(void);
 	/* ipi */
-	void (*send_IPI_mask)(const cpumask_t *mask, int vector);
+	void (*send_IPI_mask)(const_cpumask_t mask, int vector);
 	void (*send_IPI_allbutself)(int vector);
 	void (*send_IPI_all)(int vector);
 	void (*send_IPI_self)(int vector);
 	/* */
-	unsigned int (*cpu_mask_to_apicid)(const cpumask_t *cpumask);
+	unsigned int (*cpu_mask_to_apicid)(const_cpumask_t cpumask);
 	unsigned int (*phys_pkg_id)(int index_msb);
 	unsigned int (*get_apic_id)(unsigned long x);
 	unsigned long (*set_apic_id)(unsigned int id);
--- struct-cpumasks.orig/include/asm-x86/ipi.h
+++ struct-cpumasks/include/asm-x86/ipi.h
@@ -117,7 +117,7 @@ static inline void __send_IPI_dest_field
 	native_apic_mem_write(APIC_ICR, cfg);
 }
 
-static inline void send_IPI_mask_sequence(const cpumask_t *mask, int vector)
+static inline void send_IPI_mask_sequence(const_cpumask_t mask, int vector)
 {
 	unsigned long flags;
 	unsigned long query_cpu;
@@ -135,7 +135,7 @@ static inline void send_IPI_mask_sequenc
 	local_irq_restore(flags);
 }
 
-static inline void send_IPI_mask_allbutself(cpumask_t *mask, int vector)
+static inline void send_IPI_mask_allbutself(const_cpumask_t mask, int vector)
 {
 	unsigned long flags;
 	unsigned int query_cpu;
--- struct-cpumasks.orig/include/asm-x86/mach-default/mach_apic.h
+++ struct-cpumasks/include/asm-x86/mach-default/mach_apic.h
@@ -60,7 +60,7 @@ static inline int apic_id_registered(voi
 	return physid_isset(read_apic_id(), phys_cpu_present_map);
 }
 
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
+static inline unsigned int cpu_mask_to_apicid(const_cpumask_t *cpumask)
 {
 	return cpus_addr(*cpumask)[0];
 }
--- struct-cpumasks.orig/include/asm-x86/mach-default/mach_ipi.h
+++ struct-cpumasks/include/asm-x86/mach-default/mach_ipi.h
@@ -4,8 +4,8 @@
 /* Avoid include hell */
 #define NMI_VECTOR 0x02
 
-void send_IPI_mask_bitmask(const cpumask_t *mask, int vector);
-void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);
+void send_IPI_mask_bitmask(const_cpumask_t mask, int vector);
+void send_IPI_mask_allbutself(const_cpumask_t mask, int vector);
 void __send_IPI_shortcut(unsigned int shortcut, int vector);
 
 extern int no_broadcast;
@@ -14,7 +14,7 @@ extern int no_broadcast;
 #include <asm/genapic.h>
 #define send_IPI_mask (genapic->send_IPI_mask)
 #else
-static inline void send_IPI_mask(const cpumask_t *mask, int vector)
+static inline void send_IPI_mask(const_cpumask_t mask, int vector)
 {
 	send_IPI_mask_bitmask(mask, vector);
 }
@@ -23,7 +23,7 @@ static inline void send_IPI_mask(const c
 static inline void __local_send_IPI_allbutself(int vector)
 {
 	if (no_broadcast || vector == NMI_VECTOR)
-		send_IPI_mask_allbutself(&cpu_online_map, vector);
+		send_IPI_mask_allbutself(cpu_online_map, vector);
 	else
 		__send_IPI_shortcut(APIC_DEST_ALLBUT, vector);
 }
@@ -31,7 +31,7 @@ static inline void __local_send_IPI_allb
 static inline void __local_send_IPI_all(int vector)
 {
 	if (no_broadcast || vector == NMI_VECTOR)
-		send_IPI_mask(&cpu_online_map, vector);
+		send_IPI_mask(cpu_online_map, vector);
 	else
 		__send_IPI_shortcut(APIC_DEST_ALLINC, vector);
 }
--- struct-cpumasks.orig/include/asm-x86/numaq/apic.h
+++ struct-cpumasks/include/asm-x86/numaq/apic.h
@@ -122,7 +122,7 @@ static inline void enable_apic_mode(void
  * We use physical apicids here, not logical, so just return the default
  * physical broadcast to stop people from breaking us
  */
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t *cpumask)
+static inline unsigned int cpu_mask_to_apicid(const_cpumask_t *cpumask)
 {
 	return (int) 0xF;
 }
--- struct-cpumasks.orig/include/asm-x86/numaq/ipi.h
+++ struct-cpumasks/include/asm-x86/numaq/ipi.h
@@ -1,10 +1,10 @@
 #ifndef __ASM_NUMAQ_IPI_H
 #define __ASM_NUMAQ_IPI_H
 
-void send_IPI_mask_sequence(const cpumask_t *mask, int vector);
-void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);
+void send_IPI_mask_sequence(const_cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(const_cpumask_t *mask, int vector);
 
-static inline void send_IPI_mask(const cpumask_t *mask, int vector)
+static inline void send_IPI_mask(const_cpumask_t *mask, int vector)
 {
 	send_IPI_mask_sequence(mask, vector);
 }
--- struct-cpumasks.orig/include/asm-x86/summit/apic.h
+++ struct-cpumasks/include/asm-x86/summit/apic.h
@@ -137,7 +137,7 @@ static inline void enable_apic_mode(void
 {
 }
 
-static inline unsigned int cpu_mask_to_apicid(const cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid(const_cpumask_t cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
--- struct-cpumasks.orig/include/asm-x86/summit/ipi.h
+++ struct-cpumasks/include/asm-x86/summit/ipi.h
@@ -1,10 +1,10 @@
 #ifndef __ASM_SUMMIT_IPI_H
 #define __ASM_SUMMIT_IPI_H
 
-void send_IPI_mask_sequence(const cpumask_t *mask, int vector);
-void send_IPI_mask_allbutself(const cpumask_t *mask, int vector);
+void send_IPI_mask_sequence(const_cpumask_t *mask, int vector);
+void send_IPI_mask_allbutself(const_cpumask_t *mask, int vector);
 
-static inline void send_IPI_mask(const cpumask_t *mask, int vector)
+static inline void send_IPI_mask(const_cpumask_t *mask, int vector)
 {
 	send_IPI_mask_sequence(mask, vector);
 }

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 17/31] cpumask: clean cpufreq files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (15 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 16/31] cpumask: clean apic files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 18/31] cpumask: clean sched files Mike Travis
                   ` (13 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-cpufreq --]
[-- Type: text/plain, Size: 17551 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c       |   46 +++++++++++-----------
 arch/x86/kernel/cpu/cpufreq/p4-clockmod.c        |    2 
 arch/x86/kernel/cpu/cpufreq/powernow-k8.c        |   43 ++++++++++----------
 arch/x86/kernel/cpu/cpufreq/powernow-k8.h        |    2 
 arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c |   48 +++++++++++------------
 arch/x86/kernel/cpu/cpufreq/speedstep-ich.c      |    2 
 drivers/cpufreq/cpufreq.c                        |    4 -
 include/linux/cpufreq.h                          |    4 -
 8 files changed, 77 insertions(+), 74 deletions(-)
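
The conversion below is almost entirely mechanical: on-stack cpumask_t
temporaries become cpumask_var_t, struct assignment of masks becomes
cpus_copy(), and masks are passed to set_cpus_allowed(), for_each_cpu()
and friends without '&' or '*'.  A minimal sketch of the before/after
shape, assuming the cpumask_var_t / cpus_copy() / set_cpus_allowed()
semantics proposed in this series; run_work_on_cpu() is a hypothetical
stand-in for do_drv_read() and similar callers:

#include <linux/sched.h>

/* Sketch only: the save/restore pattern used throughout this patch. */
static void run_work_on_cpu(unsigned int cpu)
{
	cpumask_var_t saved_mask;		/* was: cpumask_t saved_mask; */

	cpus_copy(saved_mask, current->cpus_allowed);
						/* was: saved_mask = current->cpus_allowed; */
	set_cpus_allowed(current, cpumask_of_cpu(cpu));
						/* mask passed directly, no '&' */

	/* ... work that must run on 'cpu' ... */

	set_cpus_allowed(current, saved_mask);	/* was: set_cpus_allowed(current, &saved_mask); */
}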

--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
@@ -144,7 +144,7 @@ typedef union {
 
 struct drv_cmd {
 	unsigned int type;
-	cpumask_t mask;
+	cpumask_map_t mask;
 	drv_addr_union addr;
 	u32 val;
 };
@@ -189,44 +189,46 @@ static void do_drv_write(struct drv_cmd 
 
 static void drv_read(struct drv_cmd *cmd)
 {
-	cpumask_t saved_mask = current->cpus_allowed;
+	cpumask_var_t saved_mask;
 	cmd->val = 0;
 
-	set_cpus_allowed(current, &cmd->mask);
+	cpus_copy(saved_mask, current->cpus_allowed);
+	set_cpus_allowed(current, cmd->mask);
 	do_drv_read(cmd);
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 }
 
 static void drv_write(struct drv_cmd *cmd)
 {
-	cpumask_t saved_mask = current->cpus_allowed;
+	cpumask_var_t saved_mask;
 	unsigned int i;
 
+	cpus_copy(saved_mask, current->cpus_allowed);
 	for_each_cpu(i, cmd->mask) {
 		set_cpus_allowed(current, cpumask_of_cpu(i));
 		do_drv_write(cmd);
 	}
 
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 	return;
 }
 
-static u32 get_cur_val(const cpumask_t *mask)
+static u32 get_cur_val(const_cpumask_t mask)
 {
 	struct acpi_processor_performance *perf;
 	struct drv_cmd cmd;
 
-	if (unlikely(cpus_empty(*mask)))
+	if (unlikely(cpus_empty(mask)))
 		return 0;
 
-	switch (per_cpu(drv_data, cpus_first(*mask))->cpu_feature) {
+	switch (per_cpu(drv_data, cpus_first(mask))->cpu_feature) {
 	case SYSTEM_INTEL_MSR_CAPABLE:
 		cmd.type = SYSTEM_INTEL_MSR_CAPABLE;
 		cmd.addr.msr.reg = MSR_IA32_PERF_STATUS;
 		break;
 	case SYSTEM_IO_CAPABLE:
 		cmd.type = SYSTEM_IO_CAPABLE;
-		perf = per_cpu(drv_data, cpus_first(*mask))->acpi_data;
+		perf = per_cpu(drv_data, cpus_first(mask))->acpi_data;
 		cmd.addr.io.port = perf->control_register.address;
 		cmd.addr.io.bit_width = perf->control_register.bit_width;
 		break;
@@ -234,7 +236,7 @@ static u32 get_cur_val(const cpumask_t *
 		return 0;
 	}
 
-	cmd.mask = *mask;
+	cpus_copy(cmd.mask, mask);
 
 	drv_read(&cmd);
 
@@ -266,11 +268,11 @@ static unsigned int get_measured_perf(un
 		u64 whole;
 	} aperf_cur, mperf_cur;
 
-	cpumask_t saved_mask;
+	cpumask_var_t saved_mask;
 	unsigned int perf_percent;
 	unsigned int retval;
 
-	saved_mask = current->cpus_allowed;
+	cpus_copy(saved_mask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (get_cpu() != cpu) {
 		/* We were not able to run on requested processor */
@@ -329,7 +331,7 @@ static unsigned int get_measured_perf(un
 	retval = per_cpu(drv_data, cpu)->max_freq * perf_percent / 100;
 
 	put_cpu();
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 
 	dprintk("cpu %d: performance percent %d\n", cpu, perf_percent);
 	return retval;
@@ -363,7 +365,7 @@ static unsigned int get_cur_freq_on_cpu(
 	return freq;
 }
 
-static unsigned int check_freqs(const cpumask_t *mask, unsigned int freq,
+static unsigned int check_freqs(const_cpumask_t mask, unsigned int freq,
 				struct acpi_cpufreq_data *data)
 {
 	unsigned int cur_freq;
@@ -384,7 +386,7 @@ static int acpi_cpufreq_target(struct cp
 	struct acpi_cpufreq_data *data = per_cpu(drv_data, policy->cpu);
 	struct acpi_processor_performance *perf;
 	struct cpufreq_freqs freqs;
-	cpumask_t online_policy_cpus;
+	cpumask_var_t online_policy_cpus;
 	struct drv_cmd cmd;
 	unsigned int next_state = 0; /* Index into freq_table */
 	unsigned int next_perf_state = 0; /* Index into perf table */
@@ -410,7 +412,7 @@ static int acpi_cpufreq_target(struct cp
 	/* cpufreq holds the hotplug lock, so we are safe from here on */
 	cpus_and(online_policy_cpus, cpu_online_map, policy->cpus);
 #else
-	online_policy_cpus = policy->cpus;
+	cpus_copy(online_policy_cpus, policy->cpus);
 #endif
 
 	next_perf_state = data->freq_table[next_state].index;
@@ -445,7 +447,7 @@ static int acpi_cpufreq_target(struct cp
 	cpus_clear(cmd.mask);
 
 	if (policy->shared_type != CPUFREQ_SHARED_TYPE_ANY)
-		cmd.mask = online_policy_cpus;
+		cpus_copy(cmd.mask, online_policy_cpus);
 	else
 		cpu_set(policy->cpu, cmd.mask);
 
@@ -459,7 +461,7 @@ static int acpi_cpufreq_target(struct cp
 	drv_write(&cmd);
 
 	if (acpi_pstate_strict) {
-		if (!check_freqs(&cmd.mask, freqs.new, data)) {
+		if (!check_freqs(cmd.mask, freqs.new, data)) {
 			dprintk("acpi_cpufreq_target failed (%d)\n",
 				policy->cpu);
 			return -EAGAIN;
@@ -599,15 +601,15 @@ static int acpi_cpufreq_cpu_init(struct 
 	 */
 	if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
 	    policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
-		policy->cpus = perf->shared_cpu_map;
+		cpus_copy(policy->cpus, perf->shared_cpu_map);
 	}
-	policy->related_cpus = perf->shared_cpu_map;
+	cpus_copy(policy->related_cpus, perf->shared_cpu_map);
 
 #ifdef CONFIG_SMP
 	dmi_check_system(sw_any_bug_dmi_table);
 	if (bios_with_sw_any_bug && cpus_weight(policy->cpus) == 1) {
 		policy->shared_type = CPUFREQ_SHARED_TYPE_ALL;
-		policy->cpus = per_cpu(cpu_core_map, cpu);
+		cpus_copy(policy->cpus, per_cpu(cpu_core_map, cpu));
 	}
 #endif
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
@@ -200,7 +200,7 @@ static int cpufreq_p4_cpu_init(struct cp
 	unsigned int i;
 
 #ifdef CONFIG_SMP
-	policy->cpus = per_cpu(cpu_sibling_map, policy->cpu);
+	cpus_copy(policy->cpus, per_cpu(cpu_sibling_map, policy->cpu));
 #endif
 
 	/* Errata workaround */
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.c
@@ -57,7 +57,7 @@ static DEFINE_PER_CPU(struct powernow_k8
 static int cpu_family = CPU_OPTERON;
 
 #ifndef CONFIG_SMP
-DEFINE_PER_CPU(cpumask_t, cpu_core_map);
+DEFINE_PER_CPU(cpumask_map_t, cpu_core_map);
 #endif
 
 /* Return a frequency in MHz, given an input fid */
@@ -475,11 +475,11 @@ static int core_voltage_post_transition(
 
 static int check_supported_cpu(unsigned int cpu)
 {
-	cpumask_t oldmask;
+	cpumask_var_t oldmask;
 	u32 eax, ebx, ecx, edx;
 	unsigned int rc = 0;
 
-	oldmask = current->cpus_allowed;
+	cpus_copy(oldmask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 
 	if (smp_processor_id() != cpu) {
@@ -525,7 +525,7 @@ static int check_supported_cpu(unsigned 
 	rc = 1;
 
 out:
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 	return rc;
 }
 
@@ -963,7 +963,7 @@ static int transition_frequency_fidvid(s
 	freqs.old = find_khz_freq_from_fid(data->currfid);
 	freqs.new = find_khz_freq_from_fid(fid);
 
-	for_each_cpu(i, *(data->available_cores)) {
+	for_each_cpu(i, data->available_cores) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -971,7 +971,7 @@ static int transition_frequency_fidvid(s
 	res = transition_fid_vid(data, fid, vid);
 	freqs.new = find_khz_freq_from_fid(data->currfid);
 
-	for_each_cpu(i, *(data->available_cores)) {
+	for_each_cpu(i, data->available_cores) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -994,7 +994,7 @@ static int transition_frequency_pstate(s
 	freqs.old = find_khz_freq_from_pstate(data->powernow_table, data->currpstate);
 	freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate);
 
-	for_each_cpu(i, *(data->available_cores)) {
+	for_each_cpu(i, data->available_cores) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 	}
@@ -1002,7 +1002,7 @@ static int transition_frequency_pstate(s
 	res = transition_pstate(data, pstate);
 	freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate);
 
-	for_each_cpu(i, *(data->available_cores)) {
+	for_each_cpu(i, data->available_cores) {
 		freqs.cpu = i;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -1012,7 +1012,7 @@ static int transition_frequency_pstate(s
 /* Driver entry point to switch to the target frequency */
 static int powernowk8_target(struct cpufreq_policy *pol, unsigned targfreq, unsigned relation)
 {
-	cpumask_t oldmask;
+	cpumask_var_t oldmask;
 	struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
 	u32 checkfid;
 	u32 checkvid;
@@ -1026,7 +1026,7 @@ static int powernowk8_target(struct cpuf
 	checkvid = data->currvid;
 
 	/* only run on specific CPU from here on */
-	oldmask = current->cpus_allowed;
+	cpus_copy(oldmask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(pol->cpu));
 
 	if (smp_processor_id() != pol->cpu) {
@@ -1082,7 +1082,7 @@ static int powernowk8_target(struct cpuf
 	ret = 0;
 
 err_out:
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 	return ret;
 }
 
@@ -1101,7 +1101,7 @@ static int powernowk8_verify(struct cpuf
 static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol)
 {
 	struct powernow_k8_data *data;
-	cpumask_t oldmask;
+	cpumask_var_t oldmask;
 	int rc;
 
 	if (!cpu_online(pol->cpu))
@@ -1152,7 +1152,7 @@ static int __cpuinit powernowk8_cpu_init
 	}
 
 	/* only run on specific CPU from here on */
-	oldmask = current->cpus_allowed;
+	cpus_copy(oldmask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(pol->cpu));
 
 	if (smp_processor_id() != pol->cpu) {
@@ -1172,13 +1172,13 @@ static int __cpuinit powernowk8_cpu_init
 		fidvid_msr_init();
 
 	/* run on any CPU again */
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 
 	if (cpu_family == CPU_HW_PSTATE)
-		pol->cpus = cpumask_of_cpu(pol->cpu);
+		cpus_copy(pol->cpus, cpumask_of_cpu(pol->cpu));
 	else
-		pol->cpus = per_cpu(cpu_core_map, pol->cpu);
-	data->available_cores = &(pol->cpus);
+		cpus_copy(pol->cpus, per_cpu(cpu_core_map, pol->cpu));
+	data->available_cores = pol->cpus;
 
 	/* Take a crude guess here.
 	 * That guess was in microseconds, so multiply with 1000 */
@@ -1213,7 +1213,7 @@ static int __cpuinit powernowk8_cpu_init
 	return 0;
 
 err_out:
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 	powernow_k8_cpu_exit_acpi(data);
 
 	kfree(data);
@@ -1240,7 +1240,7 @@ static int __devexit powernowk8_cpu_exit
 static unsigned int powernowk8_get (unsigned int cpu)
 {
 	struct powernow_k8_data *data;
-	cpumask_t oldmask = current->cpus_allowed;
+	cpumask_var_t oldmask;
 	unsigned int khz = 0;
 	unsigned int first;
 
@@ -1250,11 +1250,12 @@ static unsigned int powernowk8_get (unsi
 	if (!data)
 		return -EINVAL;
 
+	cpus_copy(oldmask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (smp_processor_id() != cpu) {
 		printk(KERN_ERR PFX
 			"limiting to CPU %d failed in powernowk8_get\n", cpu);
-		set_cpus_allowed(current, &oldmask);
+		set_cpus_allowed(current, oldmask);
 		return 0;
 	}
 
@@ -1269,7 +1270,7 @@ static unsigned int powernowk8_get (unsi
 
 
 out:
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 	return khz;
 }
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/powernow-k8.h
@@ -38,7 +38,7 @@ struct powernow_k8_data {
 	/* we need to keep track of associated cores, but let cpufreq
 	 * handle hotplug events - so just point at cpufreq pol->cpus
 	 * structure */
-	cpumask_t *available_cores;
+	const_cpumask_t available_cores;
 };
 
 
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-centrino.c
@@ -20,6 +20,7 @@
 #include <linux/sched.h>	/* current */
 #include <linux/delay.h>
 #include <linux/compiler.h>
+#include <linux/cpumask_alloc.h>
 
 #include <asm/msr.h>
 #include <asm/processor.h>
@@ -416,9 +417,9 @@ static unsigned int get_cur_freq(unsigne
 {
 	unsigned l, h;
 	unsigned clock_freq;
-	cpumask_t saved_mask;
+	cpumask_var_t saved_mask;
 
-	saved_mask = current->cpus_allowed;
+	cpus_copy(saved_mask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (smp_processor_id() != cpu)
 		return 0;
@@ -437,7 +438,7 @@ static unsigned int get_cur_freq(unsigne
 		clock_freq = extract_clock(l, cpu, 1);
 	}
 
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 	return clock_freq;
 }
 
@@ -552,10 +553,10 @@ static int centrino_verify (struct cpufr
  * Sets a new CPUFreq policy.
  */
 struct allmasks {
-	cpumask_t		online_policy_cpus;
-	cpumask_t		saved_mask;
-	cpumask_t		set_mask;
-	cpumask_t		covered_cpus;
+	cpumask_var_t		online_policy_cpus;
+	cpumask_var_t		saved_mask;
+	cpumask_var_t		set_mask;
+	cpumask_var_t		covered_cpus;
 };
 
 static int centrino_target (struct cpufreq_policy *policy,
@@ -592,28 +593,28 @@ static int centrino_target (struct cpufr
 
 #ifdef CONFIG_HOTPLUG_CPU
 	/* cpufreq holds the hotplug lock, so we are safe from here on */
-	cpus_and(*online_policy_cpus, cpu_online_map, policy->cpus);
+	cpus_and(online_policy_cpus, cpu_online_map, policy->cpus);
 #else
-	*online_policy_cpus = policy->cpus;
+	cpus_copy(online_policy_cpus, policy->cpus);
 #endif
 
-	*saved_mask = current->cpus_allowed;
+	cpus_copy(saved_mask, current->cpus_allowed);
 	first_cpu = 1;
-	cpus_clear(*covered_cpus);
-	for_each_cpu(j, *online_policy_cpus) {
+	cpus_clear(covered_cpus);
+	for_each_cpu(j, online_policy_cpus) {
 		/*
 		 * Support for SMP systems.
 		 * Make sure we are running on CPU that wants to change freq
 		 */
-		cpus_clear(*set_mask);
+		cpus_clear(set_mask);
 		if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY)
-			cpus_or(*set_mask, *set_mask, *online_policy_cpus);
+			cpus_or(set_mask, set_mask, online_policy_cpus);
 		else
-			cpu_set(j, *set_mask);
+			cpu_set(j, set_mask);
 
 		set_cpus_allowed(current, set_mask);
 		preempt_disable();
-		if (unlikely(!cpu_isset(smp_processor_id(), *set_mask))) {
+		if (unlikely(!cpu_isset(smp_processor_id(), set_mask))) {
 			dprintk("couldn't limit to CPUs in this domain\n");
 			retval = -EAGAIN;
 			if (first_cpu) {
@@ -641,7 +642,7 @@ static int centrino_target (struct cpufr
 			dprintk("target=%dkHz old=%d new=%d msr=%04x\n",
 				target_freq, freqs.old, freqs.new, msr);
 
-			for_each_cpu(k, *online_policy_cpus) {
+			for_each_cpu(k, online_policy_cpus) {
 				freqs.cpu = k;
 				cpufreq_notify_transition(&freqs,
 					CPUFREQ_PRECHANGE);
@@ -660,11 +661,11 @@ static int centrino_target (struct cpufr
 			break;
 		}
 
-		cpu_set(j, *covered_cpus);
+		cpu_set(j, covered_cpus);
 		preempt_enable();
 	}
 
-	for_each_cpu(k, *online_policy_cpus) {
+	for_each_cpu(k, online_policy_cpus) {
 		freqs.cpu = k;
 		cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
 	}
@@ -677,17 +678,16 @@ static int centrino_target (struct cpufr
 		 * Best effort undo..
 		 */
 
-		if (!cpus_empty(*covered_cpus))
-			for_each_cpu(j, *covered_cpus) {
-				set_cpus_allowed(current,
-						     cpumask_of_cpu(j));
+		if (!cpus_empty(covered_cpus))
+			for_each_cpu(j, covered_cpus) {
+				set_cpus_allowed(current, cpumask_of_cpu(j));
 				wrmsr(MSR_IA32_PERF_CTL, oldmsr, h);
 			}
 
 		tmp = freqs.new;
 		freqs.new = freqs.old;
 		freqs.old = tmp;
-		for_each_cpu(j, *online_policy_cpus) {
+		for_each_cpu(j, online_policy_cpus) {
 			freqs.cpu = j;
 			cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
 			cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
--- struct-cpumasks.orig/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
+++ struct-cpumasks/arch/x86/kernel/cpu/cpufreq/speedstep-ich.c
@@ -229,7 +229,7 @@ static unsigned int speedstep_detect_chi
 	return 0;
 }
 
-static unsigned int _speedstep_get(const cpumask_t *cpus)
+static unsigned int _speedstep_get(const_cpumask_t cpus)
 {
 	unsigned int speed;
 	cpumask_t cpus_allowed;
--- struct-cpumasks.orig/drivers/cpufreq/cpufreq.c
+++ struct-cpumasks/drivers/cpufreq/cpufreq.c
@@ -803,7 +803,7 @@ static int cpufreq_add_dev(struct sys_de
 	}
 
 	policy->cpu = cpu;
-	policy->cpus = cpumask_of_cpu(cpu);
+	cpus_copy(policy->cpus, cpumask_of_cpu(cpu));
 
 	/* Initially set CPU itself as the policy_cpu */
 	per_cpu(policy_cpu, cpu) = cpu;
@@ -856,7 +856,7 @@ static int cpufreq_add_dev(struct sys_de
 				goto err_out_driver_exit;
 
 			spin_lock_irqsave(&cpufreq_driver_lock, flags);
-			managed_policy->cpus = policy->cpus;
+			cpus_copy(managed_policy->cpus, policy->cpus);
 			per_cpu(cpufreq_cpu_data, cpu) = managed_policy;
 			spin_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
--- struct-cpumasks.orig/include/linux/cpufreq.h
+++ struct-cpumasks/include/linux/cpufreq.h
@@ -80,8 +80,8 @@ struct cpufreq_real_policy {
 };
 
 struct cpufreq_policy {
-	cpumask_t		cpus;	/* CPUs requiring sw coordination */
-	cpumask_t		related_cpus; /* CPUs with any coordination */
+	cpumask_map_t		cpus;	/* CPUs requiring sw coordination */
+	cpumask_map_t		related_cpus; /* CPUs with any coordination */
 	unsigned int		shared_type; /* ANY or ALL affected CPUs
 						should set cpufreq */
 	unsigned int		cpu;    /* cpu nr of registered CPU */

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 18/31] cpumask: clean sched files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (16 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 17/31] cpumask: clean cpufreq files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 19/31] cpumask: clean xen files Mike Travis
                   ` (12 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-sched --]
[-- Type: text/plain, Size: 46529 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 include/linux/sched.h |   18 +-
 kernel/sched.c        |  450 +++++++++++++++++++++++---------------------------
 kernel/sched_cpupri.c |    6 
 kernel/sched_cpupri.h |    8 
 kernel/sched_fair.c   |    2 
 kernel/sched_rt.c     |   50 ++---
 6 files changed, 252 insertions(+), 282 deletions(-)
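
The sched.c changes follow the same rules, with one addition: the
open-coded SCHED_CPUMASK_ALLOC/SCHED_CPUMASK_VAR scratch-mask macros in
__build_sched_domains() are replaced by CPUMASK_ALLOC()/CPUMASK_PTR()/
CPUMASK_FREE() from <linux/cpumask_alloc.h>.  A rough sketch of the
intended usage, assuming (by analogy with the SCHED_CPUMASK_* macros
they replace) that CPUMASK_ALLOC(name) declares a pointer 'name' to
'struct name' and CPUMASK_PTR(field, name) yields a cpumask handle for
that member; 'struct scratch' and example_masks() are hypothetical:

/* Sketch only: temporary cpumasks grouped into one allocation, as in
 * __build_sched_domains() below. */
struct scratch {
	cpumask_map_t tmpmask;
	cpumask_map_t nodemask;
};

static int example_masks(const_cpumask_t cpu_map)
{
	CPUMASK_ALLOC(scratch);		/* declares/allocates 'scratch' */
	CPUMASK_PTR(tmpmask, scratch);	/* handle to scratch->tmpmask */
	CPUMASK_PTR(nodemask, scratch);

	if (!scratch)			/* may fail once masks go offstack */
		return -ENOMEM;

	cpus_and(nodemask, node_to_cpumask(0), cpu_map);
	cpus_copy(tmpmask, nodemask);	/* no '&' or '*' on mask arguments */
	/* ... */

	CPUMASK_FREE(scratch);
	return 0;
}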

--- struct-cpumasks.orig/include/linux/sched.h
+++ struct-cpumasks/include/linux/sched.h
@@ -248,7 +248,7 @@ extern void init_idle_bootup_task(struct
 
 extern int runqueue_is_locked(void);
 
-extern cpumask_t nohz_cpu_mask;
+extern cpumask_map_t nohz_cpu_mask;
 #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ)
 extern int select_nohz_load_balancer(int cpu);
 #else
@@ -866,7 +866,7 @@ struct sched_domain {
 #endif
 };
 
-extern void partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
+extern void partition_sched_domains(int ndoms_new, cpumask_t doms_new,
 				    struct sched_domain_attr *dattr_new);
 extern int arch_reinit_sched_domains(void);
 
@@ -875,7 +875,7 @@ extern int arch_reinit_sched_domains(voi
 struct sched_domain_attr;
 
 static inline void
-partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
+partition_sched_domains(int ndoms_new, cpumask_t doms_new,
 			struct sched_domain_attr *dattr_new)
 {
 }
@@ -960,7 +960,7 @@ struct sched_class {
 	void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
 	void (*task_new) (struct rq *rq, struct task_struct *p);
 	void (*set_cpus_allowed)(struct task_struct *p,
-				 const cpumask_t newmask);
+				 const_cpumask_t newmask);
 
 	void (*rq_online)(struct rq *rq);
 	void (*rq_offline)(struct rq *rq);
@@ -1105,7 +1105,7 @@ struct task_struct {
 #endif
 
 	unsigned int policy;
-	cpumask_t cpus_allowed;
+	cpumask_map_t cpus_allowed;
 
 #ifdef CONFIG_PREEMPT_RCU
 	int rcu_read_lock_nesting;
@@ -1583,10 +1583,10 @@ extern cputime_t task_gtime(struct task_
 
 #ifdef CONFIG_SMP
 extern int set_cpus_allowed(struct task_struct *p,
-				const cpumask_t new_mask);
+				const_cpumask_t new_mask);
 #else
 static inline int set_cpus_allowed(struct task_struct *p,
-				       const cpumask_t new_mask)
+				       const_cpumask_t new_mask)
 {
 	if (!cpu_isset(0, new_mask))
 		return -EINVAL;
@@ -2377,8 +2377,8 @@ __trace_special(void *__tr, void *__data
 }
 #endif
 
-extern long sched_setaffinity(pid_t pid, const cpumask_t *new_mask);
-extern long sched_getaffinity(pid_t pid, cpumask_t *mask);
+extern long sched_setaffinity(pid_t pid, const_cpumask_t new_mask);
+extern long sched_getaffinity(pid_t pid, cpumask_t mask);
 
 extern int sched_mc_power_savings, sched_smt_power_savings;
 
--- struct-cpumasks.orig/kernel/sched.c
+++ struct-cpumasks/kernel/sched.c
@@ -54,6 +54,7 @@
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
 #include <linux/percpu.h>
+#include <linux/cpumask_alloc.h>
 #include <linux/kthread.h>
 #include <linux/seq_file.h>
 #include <linux/sysctl.h>
@@ -481,14 +482,14 @@ struct rt_rq {
  */
 struct root_domain {
 	atomic_t refcount;
-	cpumask_t span;
-	cpumask_t online;
+	cpumask_map_t span;
+	cpumask_map_t online;
 
 	/*
 	 * The "RT overload" flag: it gets set if a CPU has more than
 	 * one runnable RT task.
 	 */
-	cpumask_t rto_mask;
+	cpumask_map_t rto_mask;
 	atomic_t rto_count;
 #ifdef CONFIG_SMP
 	struct cpupri cpupri;
@@ -2102,16 +2103,16 @@ find_idlest_group(struct sched_domain *s
  */
 static int
 find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu,
-		cpumask_t *tmp)
+		cpumask_t tmp)
 {
 	unsigned long load, min_load = ULONG_MAX;
 	int idlest = -1;
 	int i;
 
 	/* Traverse only the allowed CPUs */
-	cpus_and(*tmp, group->cpumask, p->cpus_allowed);
+	cpus_and(tmp, group->cpumask, p->cpus_allowed);
 
-	for_each_cpu(i, *tmp) {
+	for_each_cpu(i, tmp) {
 		load = weighted_cpuload(i);
 
 		if (load < min_load || (load == min_load && i == this_cpu)) {
@@ -2138,6 +2139,7 @@ static int sched_balance_self(int cpu, i
 {
 	struct task_struct *t = current;
 	struct sched_domain *tmp, *sd = NULL;
+	cpumask_var_t span, tmpmask;
 
 	for_each_domain(cpu, tmp) {
 		/*
@@ -2153,7 +2155,6 @@ static int sched_balance_self(int cpu, i
 		update_shares(sd);
 
 	while (sd) {
-		cpumask_t span, tmpmask;
 		struct sched_group *group;
 		int new_cpu, weight;
 
@@ -2162,14 +2163,14 @@ static int sched_balance_self(int cpu, i
 			continue;
 		}
 
-		span = sd->span;
+		cpus_copy(span, sd->span);
 		group = find_idlest_group(sd, t, cpu);
 		if (!group) {
 			sd = sd->child;
 			continue;
 		}
 
-		new_cpu = find_idlest_cpu(group, t, cpu, &tmpmask);
+		new_cpu = find_idlest_cpu(group, t, cpu, tmpmask);
 		if (new_cpu == -1 || new_cpu == cpu) {
 			/* Now try balancing at a lower domain level of cpu */
 			sd = sd->child;
@@ -3081,7 +3082,7 @@ static int move_one_task(struct rq *this
 static struct sched_group *
 find_busiest_group(struct sched_domain *sd, int this_cpu,
 		   unsigned long *imbalance, enum cpu_idle_type idle,
-		   int *sd_idle, const cpumask_t *cpus, int *balance)
+		   int *sd_idle, const_cpumask_t cpus, int *balance)
 {
 	struct sched_group *busiest = NULL, *this = NULL, *group = sd->groups;
 	unsigned long max_load, avg_load, total_load, this_load, total_pwr;
@@ -3132,7 +3133,7 @@ find_busiest_group(struct sched_domain *
 		for_each_cpu(i, group->cpumask) {
 			struct rq *rq;
 
-			if (!cpu_isset(i, *cpus))
+			if (!cpu_isset(i, cpus))
 				continue;
 
 			rq = cpu_rq(i);
@@ -3402,7 +3403,7 @@ ret:
  */
 static struct rq *
 find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
-		   unsigned long imbalance, const cpumask_t *cpus)
+		   unsigned long imbalance, const_cpumask_t cpus)
 {
 	struct rq *busiest = NULL, *rq;
 	unsigned long max_load = 0;
@@ -3411,7 +3412,7 @@ find_busiest_queue(struct sched_group *g
 	for_each_cpu(i, group->cpumask) {
 		unsigned long wl;
 
-		if (!cpu_isset(i, *cpus))
+		if (!cpu_isset(i, cpus))
 			continue;
 
 		rq = cpu_rq(i);
@@ -3441,7 +3442,7 @@ find_busiest_queue(struct sched_group *g
  */
 static int load_balance(int this_cpu, struct rq *this_rq,
 			struct sched_domain *sd, enum cpu_idle_type idle,
-			int *balance, cpumask_t *cpus)
+			int *balance, cpumask_t cpus)
 {
 	int ld_moved, all_pinned = 0, active_balance = 0, sd_idle = 0;
 	struct sched_group *group;
@@ -3449,7 +3450,7 @@ static int load_balance(int this_cpu, st
 	struct rq *busiest;
 	unsigned long flags;
 
-	cpus_setall(*cpus);
+	cpus_setall(cpus);
 
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -3509,8 +3510,8 @@ redo:
 
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(all_pinned)) {
-			cpu_clear(cpu_of(busiest), *cpus);
-			if (!cpus_empty(*cpus))
+			cpu_clear(cpu_of(busiest), cpus);
+			if (!cpus_empty(cpus))
 				goto redo;
 			goto out_balanced;
 		}
@@ -3602,7 +3603,7 @@ out:
  */
 static int
 load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd,
-			cpumask_t *cpus)
+			cpumask_t cpus)
 {
 	struct sched_group *group;
 	struct rq *busiest = NULL;
@@ -3611,7 +3612,7 @@ load_balance_newidle(int this_cpu, struc
 	int sd_idle = 0;
 	int all_pinned = 0;
 
-	cpus_setall(*cpus);
+	cpus_setall(cpus);
 
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -3655,8 +3656,8 @@ redo:
 		double_unlock_balance(this_rq, busiest);
 
 		if (unlikely(all_pinned)) {
-			cpu_clear(cpu_of(busiest), *cpus);
-			if (!cpus_empty(*cpus))
+			cpu_clear(cpu_of(busiest), cpus);
+			if (!cpus_empty(cpus))
 				goto redo;
 		}
 	}
@@ -3691,7 +3692,7 @@ static void idle_balance(int this_cpu, s
 	struct sched_domain *sd;
 	int pulled_task = -1;
 	unsigned long next_balance = jiffies + HZ;
-	cpumask_t tmpmask;
+	cpumask_var_t tmpmask;
 
 	for_each_domain(this_cpu, sd) {
 		unsigned long interval;
@@ -3702,7 +3703,7 @@ static void idle_balance(int this_cpu, s
 		if (sd->flags & SD_BALANCE_NEWIDLE)
 			/* If we've pulled tasks over stop searching: */
 			pulled_task = load_balance_newidle(this_cpu, this_rq,
-							   sd, &tmpmask);
+							   sd, tmpmask);
 
 		interval = msecs_to_jiffies(sd->balance_interval);
 		if (time_after(next_balance, sd->last_balance + interval))
@@ -3773,7 +3774,7 @@ static void active_load_balance(struct r
 #ifdef CONFIG_NO_HZ
 static struct {
 	atomic_t load_balancer;
-	cpumask_t cpu_mask;
+	cpumask_map_t cpu_mask;
 } nohz ____cacheline_aligned = {
 	.load_balancer = ATOMIC_INIT(-1),
 	.cpu_mask = CPU_MASK_NONE,
@@ -3862,7 +3863,7 @@ static void rebalance_domains(int cpu, e
 	unsigned long next_balance = jiffies + 60*HZ;
 	int update_next_balance = 0;
 	int need_serialize;
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 
 	for_each_domain(cpu, sd) {
 		if (!(sd->flags & SD_LOAD_BALANCE))
@@ -3887,7 +3888,7 @@ static void rebalance_domains(int cpu, e
 		}
 
 		if (time_after_eq(jiffies, sd->last_balance + interval)) {
-			if (load_balance(cpu, rq, sd, idle, &balance, &tmp)) {
+			if (load_balance(cpu, rq, sd, idle, &balance, tmp)) {
 				/*
 				 * We've pulled tasks over so either we're no
 				 * longer idle, or one of our SMT siblings is
@@ -3945,10 +3946,11 @@ static void run_rebalance_domains(struct
 	 */
 	if (this_rq->idle_at_tick &&
 	    atomic_read(&nohz.load_balancer) == this_cpu) {
-		cpumask_t cpus = nohz.cpu_mask;
+		cpumask_var_t cpus;
 		struct rq *rq;
 		int balance_cpu;
 
+		cpus_copy(cpus, nohz.cpu_mask);
 		cpu_clear(this_cpu, cpus);
 		for_each_cpu(balance_cpu, cpus) {
 			/*
@@ -5416,16 +5418,17 @@ out_unlock:
 	return retval;
 }
 
-long sched_setaffinity(pid_t pid, const cpumask_t *in_mask)
+long sched_setaffinity(pid_t pid, const_cpumask_t in_mask)
 {
-	cpumask_t cpus_allowed;
-	cpumask_t new_mask = *in_mask;
+	cpumask_var_t cpus_allowed;
+	cpumask_var_t new_mask;
 	struct task_struct *p;
 	int retval;
 
 	get_online_cpus();
 	read_lock(&tasklist_lock);
 
+	cpus_copy(new_mask, in_mask);
 	p = find_process_by_pid(pid);
 	if (!p) {
 		read_unlock(&tasklist_lock);
@@ -5450,20 +5453,20 @@ long sched_setaffinity(pid_t pid, const 
 	if (retval)
 		goto out_unlock;
 
-	cpuset_cpus_allowed(p, &cpus_allowed);
+	cpuset_cpus_allowed(p, cpus_allowed);
 	cpus_and(new_mask, new_mask, cpus_allowed);
  again:
-	retval = set_cpus_allowed(p, &new_mask);
+	retval = set_cpus_allowed(p, new_mask);
 
 	if (!retval) {
-		cpuset_cpus_allowed(p, &cpus_allowed);
+		cpuset_cpus_allowed(p, cpus_allowed);
 		if (!cpus_subset(new_mask, cpus_allowed)) {
 			/*
 			 * We must have raced with a concurrent cpuset
 			 * update. Just reset the cpus_allowed to the
 			 * cpuset's cpus_allowed
 			 */
-			new_mask = cpus_allowed;
+			cpus_copy(new_mask, cpus_allowed);
 			goto again;
 		}
 	}
@@ -5474,12 +5477,12 @@ out_unlock:
 }
 
 static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len,
-			     cpumask_t *new_mask)
+			     cpumask_t new_mask)
 {
-	if (len < sizeof(cpumask_t)) {
-		memset(new_mask, 0, sizeof(cpumask_t));
-	} else if (len > sizeof(cpumask_t)) {
-		len = sizeof(cpumask_t);
+	if (len < cpumask_size()) {
+		memset(new_mask, 0, cpumask_size());
+	} else if (len > cpumask_size()) {
+		len = cpumask_size();
 	}
 	return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0;
 }
@@ -5493,17 +5496,17 @@ static int get_user_cpu_mask(unsigned lo
 asmlinkage long sys_sched_setaffinity(pid_t pid, unsigned int len,
 				      unsigned long __user *user_mask_ptr)
 {
-	cpumask_t new_mask;
+	cpumask_var_t new_mask;
 	int retval;
 
-	retval = get_user_cpu_mask(user_mask_ptr, len, &new_mask);
+	retval = get_user_cpu_mask(user_mask_ptr, len, new_mask);
 	if (retval)
 		return retval;
 
-	return sched_setaffinity(pid, &new_mask);
+	return sched_setaffinity(pid, new_mask);
 }
 
-long sched_getaffinity(pid_t pid, cpumask_t *mask)
+long sched_getaffinity(pid_t pid, cpumask_t mask)
 {
 	struct task_struct *p;
 	int retval;
@@ -5520,7 +5523,7 @@ long sched_getaffinity(pid_t pid, cpumas
 	if (retval)
 		goto out_unlock;
 
-	cpus_and(*mask, p->cpus_allowed, cpu_online_map);
+	cpus_and(mask, p->cpus_allowed, cpu_online_map);
 
 out_unlock:
 	read_unlock(&tasklist_lock);
@@ -5539,19 +5542,19 @@ asmlinkage long sys_sched_getaffinity(pi
 				      unsigned long __user *user_mask_ptr)
 {
 	int ret;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
-	if (len < sizeof(cpumask_t))
+	if (len < cpumask_size())
 		return -EINVAL;
 
-	ret = sched_getaffinity(pid, &mask);
+	ret = sched_getaffinity(pid, mask);
 	if (ret < 0)
 		return ret;
 
-	if (copy_to_user(user_mask_ptr, &mask, sizeof(cpumask_t)))
+	if (copy_to_user(user_mask_ptr, mask, cpumask_size()))
 		return -EFAULT;
 
-	return sizeof(cpumask_t);
+	return cpumask_size();
 }
 
 /**
@@ -5886,7 +5889,7 @@ void __cpuinit init_idle(struct task_str
 	idle->se.exec_start = sched_clock();
 
 	idle->prio = idle->normal_prio = MAX_PRIO;
-	idle->cpus_allowed = cpumask_of_cpu(cpu);
+	cpus_copy(idle->cpus_allowed, cpumask_of_cpu(cpu));
 	__set_task_cpu(idle, cpu);
 
 	spin_lock_irqsave(&rq->lock, flags);
@@ -5915,7 +5918,7 @@ void __cpuinit init_idle(struct task_str
  * which do not switch off the HZ timer nohz_cpu_mask should
  * always be CPU_MASK_NONE.
  */
-cpumask_t nohz_cpu_mask = CPU_MASK_NONE;
+cpumask_map_t nohz_cpu_mask = CPU_MASK_NONE;
 
 /*
  * Increase the granularity value when there are more CPUs,
@@ -5970,7 +5973,7 @@ static inline void sched_init_granularit
  * task must not exit() & deallocate itself prematurely. The
  * call is not atomic; no spinlocks may be held.
  */
-int set_cpus_allowed(struct task_struct *p, const cpumask_t new_mask)
+int set_cpus_allowed(struct task_struct *p, const_cpumask_t new_mask)
 {
 	struct migration_req req;
 	unsigned long flags;
@@ -5992,15 +5995,15 @@ int set_cpus_allowed(struct task_struct 
 	if (p->sched_class->set_cpus_allowed)
 		p->sched_class->set_cpus_allowed(p, new_mask);
 	else {
-		p->cpus_allowed = *new_mask;
-		p->rt.nr_cpus_allowed = cpus_weight(*new_mask);
+		cpus_copy(p->cpus_allowed, new_mask);
+		p->rt.nr_cpus_allowed = cpus_weight(new_mask);
 	}
 
 	/* Can the task run on the task's current CPU? If so, we're done */
-	if (cpu_isset(task_cpu(p), *new_mask))
+	if (cpu_isset(task_cpu(p), new_mask))
 		goto out;
 
-	if (migrate_task(p, any_online_cpu(*new_mask), &req)) {
+	if (migrate_task(p, any_online_cpu(new_mask), &req)) {
 		/* Need help from migration thread: drop lock and wait. */
 		task_rq_unlock(rq, &flags);
 		wake_up_process(rq->migration_thread);
@@ -6141,14 +6144,15 @@ static int __migrate_task_irq(struct tas
 static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
 {
 	unsigned long flags;
-	cpumask_t mask;
+	cpumask_var_t mask;
 	struct rq *rq;
 	int dest_cpu;
+	cpumask_var_t cpus_allowed;
 
 	do {
 		/* On same node? */
-		mask = node_to_cpumask(cpu_to_node(dead_cpu);
-		cpus_and(mask, mask, p->cpus_allowed);
+		cpus_and(mask, node_to_cpumask(cpu_to_node(dead_cpu)),
+							p->cpus_allowed);
 		dest_cpu = any_online_cpu(mask);
 
 		/* On any allowed CPU? */
@@ -6157,9 +6161,7 @@ static void move_task_off_dead_cpu(int d
 
 		/* No more Mr. Nice Guy. */
 		if (dest_cpu >= nr_cpu_ids) {
-			cpumask_t cpus_allowed;
-
-			cpuset_cpus_allowed_locked(p, &cpus_allowed);
+			cpuset_cpus_allowed_locked(p, cpus_allowed);
 			/*
 			 * Try to stay on the same cpuset, where the
 			 * current cpuset may be a subset of all cpus.
@@ -6168,7 +6170,7 @@ static void move_task_off_dead_cpu(int d
 			 * called within calls to cpuset_lock/cpuset_unlock.
 			 */
 			rq = task_rq_lock(p, &flags);
-			p->cpus_allowed = cpus_allowed;
+			cpus_copy(p->cpus_allowed, cpus_allowed);
 			dest_cpu = any_online_cpu(p->cpus_allowed);
 			task_rq_unlock(rq, &flags);
 
@@ -6667,13 +6669,13 @@ static inline const char *sd_level_to_st
 }
 
 static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level,
-				  cpumask_t *groupmask)
+				  cpumask_t groupmask)
 {
 	struct sched_group *group = sd->groups;
 	char str[256];
 
 	cpulist_scnprintf(str, sizeof(str), sd->span);
-	cpus_clear(*groupmask);
+	cpus_clear(groupmask);
 
 	printk(KERN_DEBUG "%*s domain %d: ", level, "", level);
 
@@ -6718,13 +6720,13 @@ static int sched_domain_debug_one(struct
 			break;
 		}
 
-		if (cpus_intersects(*groupmask, group->cpumask)) {
+		if (cpus_intersects(groupmask, group->cpumask)) {
 			printk(KERN_CONT "\n");
 			printk(KERN_ERR "ERROR: repeated CPUs\n");
 			break;
 		}
 
-		cpus_or(*groupmask, *groupmask, group->cpumask);
+		cpus_or(groupmask, groupmask, group->cpumask);
 
 		cpulist_scnprintf(str, sizeof(str), group->cpumask);
 		printk(KERN_CONT " %s", str);
@@ -6733,10 +6735,10 @@ static int sched_domain_debug_one(struct
 	} while (group != sd->groups);
 	printk(KERN_CONT "\n");
 
-	if (!cpus_equal(sd->span, *groupmask))
+	if (!cpus_equal(sd->span, groupmask))
 		printk(KERN_ERR "ERROR: groups don't span domain->span\n");
 
-	if (sd->parent && !cpus_subset(*groupmask, sd->parent->span))
+	if (sd->parent && !cpus_subset(groupmask, sd->parent->span))
 		printk(KERN_ERR "ERROR: parent span is not a superset "
 			"of domain->span\n");
 	return 0;
@@ -6744,7 +6746,7 @@ static int sched_domain_debug_one(struct
 
 static void sched_domain_debug(struct sched_domain *sd, int cpu)
 {
-	cpumask_t *groupmask;
+	cpumask_var_t groupmask;
 	int level = 0;
 
 	if (!sd) {
@@ -6754,8 +6756,7 @@ static void sched_domain_debug(struct sc
 
 	printk(KERN_DEBUG "CPU%d attaching sched-domain:\n", cpu);
 
-	groupmask = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
-	if (!groupmask) {
+	if (!cpumask_alloc(&groupmask)) {
 		printk(KERN_DEBUG "Cannot load-balance (out of memory)\n");
 		return;
 	}
@@ -6768,7 +6769,7 @@ static void sched_domain_debug(struct sc
 		if (!sd)
 			break;
 	}
-	kfree(groupmask);
+	cpumask_free(&groupmask);
 }
 #else /* !CONFIG_SCHED_DEBUG */
 # define sched_domain_debug(sd, cpu) do { } while (0)
@@ -6921,7 +6922,7 @@ cpu_attach_domain(struct sched_domain *s
 }
 
 /* cpus with isolated domains */
-static cpumask_t cpu_isolated_map = CPU_MASK_NONE;
+static cpumask_map_t cpu_isolated_map = CPU_MASK_NONE;
 
 /* Setup the mask of cpus configured for isolated domains */
 static int __init isolated_cpu_setup(char *str)
@@ -6950,33 +6951,33 @@ __setup("isolcpus=", isolated_cpu_setup)
  * and ->cpu_power to 0.
  */
 static void
-init_sched_build_groups(const cpumask_t *span, const cpumask_t *cpu_map,
-			int (*group_fn)(int cpu, const cpumask_t *cpu_map,
+init_sched_build_groups(const_cpumask_t span, const_cpumask_t cpu_map,
+			int (*group_fn)(int cpu, const_cpumask_t cpu_map,
 					struct sched_group **sg,
-					cpumask_t *tmpmask),
-			cpumask_t *covered, cpumask_t *tmpmask)
+					cpumask_t tmpmask),
+			cpumask_t covered, cpumask_t tmpmask)
 {
 	struct sched_group *first = NULL, *last = NULL;
 	int i;
 
-	cpus_clear(*covered);
+	cpus_clear(covered);
 
-	for_each_cpu(i, *span) {
+	for_each_cpu(i, span) {
 		struct sched_group *sg;
 		int group = group_fn(i, cpu_map, &sg, tmpmask);
 		int j;
 
-		if (cpu_isset(i, *covered))
+		if (cpu_isset(i, covered))
 			continue;
 
 		cpus_clear(sg->cpumask);
 		sg->__cpu_power = 0;
 
-		for_each_cpu(j, *span) {
+		for_each_cpu(j, span) {
 			if (group_fn(j, cpu_map, NULL, tmpmask) != group)
 				continue;
 
-			cpu_set(j, *covered);
+			cpu_set(j, covered);
 			cpu_set(j, sg->cpumask);
 		}
 		if (!first)
@@ -7041,23 +7042,23 @@ static int find_next_best_node(int node,
  * should be one that prevents unnecessary balancing, but also spreads tasks
  * out optimally.
  */
-static void sched_domain_node_span(int node, cpumask_t *span)
+static void sched_domain_node_span(int node, cpumask_t span)
 {
 	nodemask_t used_nodes;
-	const cpumask_t nodemask = node_to_cpumask(node);
+	const_cpumask_t nodemask = node_to_cpumask(node);
 	int i;
 
-	cpus_clear(*span);
+	cpus_clear(span);
 	nodes_clear(used_nodes);
 
-	cpus_or(*span, *span, *nodemask);
+	cpus_or(span, span, nodemask);
 	node_set(node, used_nodes);
 
 	for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
 		int next_node = find_next_best_node(node, &used_nodes);
 
 		nodemask = node_to_cpumask(next_node);
-		cpus_or(*span, *span, *nodemask);
+		cpus_or(span, span, nodemask);
 	}
 }
 #endif /* CONFIG_NUMA */
@@ -7072,8 +7073,8 @@ static DEFINE_PER_CPU(struct sched_domai
 static DEFINE_PER_CPU(struct sched_group, sched_group_cpus);
 
 static int
-cpu_to_cpu_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
-		 cpumask_t *unused)
+cpu_to_cpu_group(int cpu, const_cpumask_t cpu_map, struct sched_group **sg,
+		 cpumask_t unused)
 {
 	if (sg)
 		*sg = &per_cpu(sched_group_cpus, cpu);
@@ -7091,22 +7092,22 @@ static DEFINE_PER_CPU(struct sched_group
 
 #if defined(CONFIG_SCHED_MC) && defined(CONFIG_SCHED_SMT)
 static int
-cpu_to_core_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
-		  cpumask_t *mask)
+cpu_to_core_group(int cpu, const_cpumask_t cpu_map, struct sched_group **sg,
+		  cpumask_t mask)
 {
 	int group;
 
-	*mask = per_cpu(cpu_sibling_map, cpu);
-	cpus_and(*mask, *mask, *cpu_map);
-	group = cpus_first(*mask);
+	cpus_copy(mask, per_cpu(cpu_sibling_map, cpu));
+	cpus_and(mask, mask, cpu_map);
+	group = cpus_first(mask);
 	if (sg)
 		*sg = &per_cpu(sched_group_core, group);
 	return group;
 }
 #elif defined(CONFIG_SCHED_MC)
 static int
-cpu_to_core_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
-		  cpumask_t *unused)
+cpu_to_core_group(int cpu, const_cpumask_t cpu_map, struct sched_group **sg,
+		  cpumask_t unused)
 {
 	if (sg)
 		*sg = &per_cpu(sched_group_core, cpu);
@@ -7118,18 +7119,16 @@ static DEFINE_PER_CPU(struct sched_domai
 static DEFINE_PER_CPU(struct sched_group, sched_group_phys);
 
 static int
-cpu_to_phys_group(int cpu, const cpumask_t *cpu_map, struct sched_group **sg,
-		  cpumask_t *mask)
+cpu_to_phys_group(int cpu, const_cpumask_t cpu_map, struct sched_group **sg,
+		  cpumask_t mask)
 {
 	int group;
 #ifdef CONFIG_SCHED_MC
-	*mask = cpu_coregroup_map(cpu);
-	cpus_and(*mask, *mask, *cpu_map);
-	group = cpus_first(*mask);
+	cpus_and(mask, cpu_coregroup_map(cpu), cpu_map);
+	group = cpus_first(mask);
 #elif defined(CONFIG_SCHED_SMT)
-	*mask = per_cpu(cpu_sibling_map, cpu);
-	cpus_and(*mask, *mask, *cpu_map);
-	group = cpus_first(*mask);
+	cpus_and(mask, per_cpu(cpu_sibling_map, cpu), cpu_map);
+	group = cpus_first(mask);
 #else
 	group = cpu;
 #endif
@@ -7150,14 +7149,13 @@ static struct sched_group ***sched_group
 static DEFINE_PER_CPU(struct sched_domain, allnodes_domains);
 static DEFINE_PER_CPU(struct sched_group, sched_group_allnodes);
 
-static int cpu_to_allnodes_group(int cpu, const cpumask_t *cpu_map,
-				 struct sched_group **sg, cpumask_t *nodemask)
+static int cpu_to_allnodes_group(int cpu, const_cpumask_t cpu_map,
+				 struct sched_group **sg, cpumask_t nodemask)
 {
 	int group;
 
-	nodemask = node_to_cpumask(cpu_to_node(cpu));
-	cpus_and(*nodemask, *nodemask, *cpu_map);
-	group = cpus_first(*nodemask);
+	cpus_and(nodemask, node_to_cpumask(cpu_to_node(cpu)), cpu_map);
+	group = cpus_first(nodemask);
 
 	if (sg)
 		*sg = &per_cpu(sched_group_allnodes, group);
@@ -7193,11 +7191,11 @@ static void init_numa_sched_groups_power
 
 #ifdef CONFIG_NUMA
 /* Free memory allocated for various sched_group structures */
-static void free_sched_groups(const cpumask_t *cpu_map, cpumask_t *nodemask)
+static void free_sched_groups(const_cpumask_t cpu_map, cpumask_t nodemask)
 {
 	int cpu, i;
 
-	for_each_cpu(cpu, *cpu_map) {
+	for_each_cpu(cpu, cpu_map) {
 		struct sched_group **sched_group_nodes
 			= sched_group_nodes_bycpu[cpu];
 
@@ -7207,9 +7205,8 @@ static void free_sched_groups(const cpum
 		for (i = 0; i < nr_node_ids; i++) {
 			struct sched_group *oldsg, *sg = sched_group_nodes[i];
 
-			*nodemask = node_to_cpumask(i);
-			cpus_and(*nodemask, *nodemask, *cpu_map);
-			if (cpus_empty(*nodemask))
+			cpus_and(nodemask, node_to_cpumask(i), cpu_map);
+			if (cpus_empty(nodemask))
 				continue;
 
 			if (sg == NULL)
@@ -7227,7 +7224,7 @@ next_sg:
 	}
 }
 #else /* !CONFIG_NUMA */
-static void free_sched_groups(const cpumask_t *cpu_map, cpumask_t *nodemask)
+static void free_sched_groups(const_cpumask_t cpu_map, cpumask_t nodemask)
 {
 }
 #endif /* CONFIG_NUMA */
@@ -7316,34 +7313,21 @@ SD_INIT_FUNC(CPU)
  * if the amount of space is significant.
  */
 struct allmasks {
-	cpumask_t tmpmask;			/* make this one first */
+	cpumask_map_t tmpmask;			/* make this one first */
 	union {
-		cpumask_t nodemask;
-		cpumask_t this_sibling_map;
-		cpumask_t this_core_map;
+		cpumask_map_t nodemask;
+		cpumask_map_t this_sibling_map;
+		cpumask_map_t this_core_map;
 	};
-	cpumask_t send_covered;
+	cpumask_map_t send_covered;
 
 #ifdef CONFIG_NUMA
-	cpumask_t domainspan;
-	cpumask_t covered;
-	cpumask_t notcovered;
+	cpumask_map_t domainspan;
+	cpumask_map_t covered;
+	cpumask_map_t notcovered;
 #endif
 };
 
-#if	NR_CPUS > 128
-#define	SCHED_CPUMASK_ALLOC		1
-#define	SCHED_CPUMASK_FREE(v)		kfree(v)
-#define	SCHED_CPUMASK_DECLARE(v)	struct allmasks *v
-#else
-#define	SCHED_CPUMASK_ALLOC		0
-#define	SCHED_CPUMASK_FREE(v)
-#define	SCHED_CPUMASK_DECLARE(v)	struct allmasks _v, *v = &_v
-#endif
-
-#define	SCHED_CPUMASK_VAR(v, a) 	cpumask_t *v = (cpumask_t *) \
-			((unsigned long)(a) + offsetof(struct allmasks, v))
-
 static int default_relax_domain_level = -1;
 
 static int __init setup_relax_domain_level(char *str)
@@ -7383,17 +7367,23 @@ static void set_domain_attribute(struct 
  * Build sched domains for a given set of cpus and attach the sched domains
  * to the individual cpus
  */
-static int __build_sched_domains(const cpumask_t *cpu_map,
+static int __build_sched_domains(const_cpumask_t cpu_map,
 				 struct sched_domain_attr *attr)
 {
 	int i;
 	struct root_domain *rd;
-	SCHED_CPUMASK_DECLARE(allmasks);
-	cpumask_t *tmpmask;
+	CPUMASK_ALLOC(allmasks);
+	CPUMASK_PTR(tmpmask, allmasks);
 #ifdef CONFIG_NUMA
 	struct sched_group **sched_group_nodes = NULL;
 	int sd_allnodes = 0;
 
+	/* check whether the scratch cpumask space was allocated */
+	if (!allmasks) {
+		printk(KERN_WARNING "Cannot alloc cpumask array\n");
+		return -ENOMEM;
+	}
+
 	/*
 	 * Allocate the per-node list of sched groups
 	 */
@@ -7401,6 +7391,7 @@ static int __build_sched_domains(const c
 				    GFP_KERNEL);
 	if (!sched_group_nodes) {
 		printk(KERN_WARNING "Can not alloc sched group node list\n");
+		CPUMASK_FREE(allmasks);
 		return -ENOMEM;
 	}
 #endif
@@ -7411,45 +7402,30 @@ static int __build_sched_domains(const c
 #ifdef CONFIG_NUMA
 		kfree(sched_group_nodes);
 #endif
+		CPUMASK_FREE(allmasks);
 		return -ENOMEM;
 	}
 
-#if SCHED_CPUMASK_ALLOC
-	/* get space for all scratch cpumask variables */
-	allmasks = kmalloc(sizeof(*allmasks), GFP_KERNEL);
-	if (!allmasks) {
-		printk(KERN_WARNING "Cannot alloc cpumask array\n");
-		kfree(rd);
-#ifdef CONFIG_NUMA
-		kfree(sched_group_nodes);
-#endif
-		return -ENOMEM;
-	}
-#endif
-	tmpmask = (cpumask_t *)allmasks;
-
-
 #ifdef CONFIG_NUMA
-	sched_group_nodes_bycpu[cpus_first(*cpu_map)] = sched_group_nodes;
+	sched_group_nodes_bycpu[cpus_first(cpu_map)] = sched_group_nodes;
 #endif
 
 	/*
 	 * Set up domains for cpus specified by the cpu_map.
 	 */
-	for_each_cpu(i, *cpu_map) {
+	for_each_cpu(i, cpu_map) {
 		struct sched_domain *sd = NULL, *p;
-		SCHED_CPUMASK_VAR(nodemask, allmasks);
+		CPUMASK_PTR(nodemask, allmasks);
 
-		*nodemask = node_to_cpumask(cpu_to_node(i));
-		cpus_and(*nodemask, *nodemask, *cpu_map);
+		cpus_and(nodemask, node_to_cpumask(cpu_to_node(i)), cpu_map);
 
 #ifdef CONFIG_NUMA
-		if (cpus_weight(*cpu_map) >
-				SD_NODES_PER_DOMAIN*cpus_weight(*nodemask)) {
+		if (cpus_weight(cpu_map) >
+				SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
 			sd = &per_cpu(allnodes_domains, i);
 			SD_INIT(sd, ALLNODES);
 			set_domain_attribute(sd, attr);
-			sd->span = *cpu_map;
+			cpus_copy(sd->span, cpu_map);
 			cpu_to_allnodes_group(i, cpu_map, &sd->groups, tmpmask);
 			p = sd;
 			sd_allnodes = 1;
@@ -7459,18 +7435,18 @@ static int __build_sched_domains(const c
 		sd = &per_cpu(node_domains, i);
 		SD_INIT(sd, NODE);
 		set_domain_attribute(sd, attr);
-		sched_domain_node_span(cpu_to_node(i), &sd->span);
+		sched_domain_node_span(cpu_to_node(i), sd->span);
 		sd->parent = p;
 		if (p)
 			p->child = sd;
-		cpus_and(sd->span, sd->span, *cpu_map);
+		cpus_and(sd->span, sd->span, cpu_map);
 #endif
 
 		p = sd;
 		sd = &per_cpu(phys_domains, i);
 		SD_INIT(sd, CPU);
 		set_domain_attribute(sd, attr);
-		sd->span = *nodemask;
+		cpus_copy(sd->span, nodemask);
 		sd->parent = p;
 		if (p)
 			p->child = sd;
@@ -7481,8 +7457,7 @@ static int __build_sched_domains(const c
 		sd = &per_cpu(core_domains, i);
 		SD_INIT(sd, MC);
 		set_domain_attribute(sd, attr);
-		sd->span = cpu_coregroup_map(i);
-		cpus_and(sd->span, sd->span, *cpu_map);
+		cpus_and(sd->span, cpu_coregroup_map(i), cpu_map);
 		sd->parent = p;
 		p->child = sd;
 		cpu_to_core_group(i, cpu_map, &sd->groups, tmpmask);
@@ -7493,8 +7468,7 @@ static int __build_sched_domains(const c
 		sd = &per_cpu(cpu_domains, i);
 		SD_INIT(sd, SIBLING);
 		set_domain_attribute(sd, attr);
-		sd->span = per_cpu(cpu_sibling_map, i);
-		cpus_and(sd->span, sd->span, *cpu_map);
+		cpus_and(sd->span, per_cpu(cpu_sibling_map, i), cpu_map);
 		sd->parent = p;
 		p->child = sd;
 		cpu_to_cpu_group(i, cpu_map, &sd->groups, tmpmask);
@@ -7503,13 +7477,13 @@ static int __build_sched_domains(const c
 
 #ifdef CONFIG_SCHED_SMT
 	/* Set up CPU (sibling) groups */
-	for_each_cpu(i, *cpu_map) {
-		SCHED_CPUMASK_VAR(this_sibling_map, allmasks);
-		SCHED_CPUMASK_VAR(send_covered, allmasks);
-
-		*this_sibling_map = per_cpu(cpu_sibling_map, i);
-		cpus_and(*this_sibling_map, *this_sibling_map, *cpu_map);
-		if (i != cpus_first(*this_sibling_map))
+	for_each_cpu(i, cpu_map) {
+		CPUMASK_PTR(this_sibling_map, allmasks);
+		CPUMASK_PTR(send_covered, allmasks);
+
+		cpus_and(this_sibling_map, per_cpu(cpu_sibling_map, i),
+								cpu_map);
+		if (i != cpus_first(this_sibling_map))
 			continue;
 
 		init_sched_build_groups(this_sibling_map, cpu_map,
@@ -7520,13 +7494,12 @@ static int __build_sched_domains(const c
 
 #ifdef CONFIG_SCHED_MC
 	/* Set up multi-core groups */
-	for_each_cpu(i, *cpu_map) {
-		SCHED_CPUMASK_VAR(this_core_map, allmasks);
-		SCHED_CPUMASK_VAR(send_covered, allmasks);
-
-		*this_core_map = cpu_coregroup_map(i);
-		cpus_and(*this_core_map, *this_core_map, *cpu_map);
-		if (i != cpus_first(*this_core_map))
+	for_each_cpu(i, cpu_map) {
+		CPUMASK_PTR(this_core_map, allmasks);
+		CPUMASK_PTR(send_covered, allmasks);
+
+		cpus_and(this_core_map, cpu_coregroup_map(i), cpu_map);
+		if (i != cpus_first(this_core_map))
 			continue;
 
 		init_sched_build_groups(this_core_map, cpu_map,
@@ -7537,12 +7510,11 @@ static int __build_sched_domains(const c
 
 	/* Set up physical groups */
 	for (i = 0; i < nr_node_ids; i++) {
-		SCHED_CPUMASK_VAR(nodemask, allmasks);
-		SCHED_CPUMASK_VAR(send_covered, allmasks);
+		CPUMASK_PTR(nodemask, allmasks);
+		CPUMASK_PTR(send_covered, allmasks);
 
-		*nodemask = node_to_cpumask(i);
-		cpus_and(*nodemask, *nodemask, *cpu_map);
-		if (cpus_empty(*nodemask))
+		cpus_and(nodemask, node_to_cpumask(i), cpu_map);
+		if (cpus_empty(nodemask))
 			continue;
 
 		init_sched_build_groups(nodemask, cpu_map,
@@ -7553,7 +7525,7 @@ static int __build_sched_domains(const c
 #ifdef CONFIG_NUMA
 	/* Set up node groups */
 	if (sd_allnodes) {
-		SCHED_CPUMASK_VAR(send_covered, allmasks);
+		CPUMASK_PTR(send_covered, allmasks);
 
 		init_sched_build_groups(cpu_map, cpu_map,
 					&cpu_to_allnodes_group,
@@ -7563,22 +7535,21 @@ static int __build_sched_domains(const c
 	for (i = 0; i < nr_node_ids; i++) {
 		/* Set up node groups */
 		struct sched_group *sg, *prev;
-		SCHED_CPUMASK_VAR(nodemask, allmasks);
-		SCHED_CPUMASK_VAR(domainspan, allmasks);
-		SCHED_CPUMASK_VAR(covered, allmasks);
+		CPUMASK_PTR(nodemask, allmasks);
+		CPUMASK_PTR(domainspan, allmasks);
+		CPUMASK_PTR(covered, allmasks);
 		int j;
 
-		*nodemask = node_to_cpumask(i);
-		cpus_clear(*covered);
+		cpus_clear(covered);
 
-		cpus_and(*nodemask, *nodemask, *cpu_map);
-		if (cpus_empty(*nodemask)) {
+		cpus_and(nodemask, node_to_cpumask(i), cpu_map);
+		if (cpus_empty(nodemask)) {
 			sched_group_nodes[i] = NULL;
 			continue;
 		}
 
 		sched_domain_node_span(i, domainspan);
-		cpus_and(*domainspan, *domainspan, *cpu_map);
+		cpus_and(domainspan, domainspan, cpu_map);
 
 		sg = kmalloc_node(sizeof(struct sched_group), GFP_KERNEL, i);
 		if (!sg) {
@@ -7587,31 +7558,30 @@ static int __build_sched_domains(const c
 			goto error;
 		}
 		sched_group_nodes[i] = sg;
-		for_each_cpu(j, *nodemask) {
+		for_each_cpu(j, nodemask) {
 			struct sched_domain *sd;
 
 			sd = &per_cpu(node_domains, j);
 			sd->groups = sg;
 		}
 		sg->__cpu_power = 0;
-		sg->cpumask = *nodemask;
+		cpus_copy(sg->cpumask, nodemask);
 		sg->next = sg;
-		cpus_or(*covered, *covered, *nodemask);
+		cpus_or(covered, covered, nodemask);
 		prev = sg;
 
 		for (j = 0; j < nr_node_ids; j++) {
-			SCHED_CPUMASK_VAR(notcovered, allmasks);
+			CPUMASK_PTR(notcovered, allmasks);
 			int n = (i + j) % nr_node_ids;
-			const cpumask_t pnodemask = node_to_cpumask(n);
 
-			cpus_complement(*notcovered, *covered);
-			cpus_and(*tmpmask, *notcovered, *cpu_map);
-			cpus_and(*tmpmask, *tmpmask, *domainspan);
-			if (cpus_empty(*tmpmask))
+			cpus_complement(notcovered, covered);
+			cpus_and(tmpmask, notcovered, cpu_map);
+			cpus_and(tmpmask, tmpmask, domainspan);
+			if (cpus_empty(tmpmask))
 				break;
 
-			cpus_and(*tmpmask, *tmpmask, *pnodemask);
-			if (cpus_empty(*tmpmask))
+			cpus_and(tmpmask, tmpmask, node_to_cpumask(n));
+			if (cpus_empty(tmpmask))
 				continue;
 
 			sg = kmalloc_node(sizeof(struct sched_group),
@@ -7622,9 +7592,9 @@ static int __build_sched_domains(const c
 				goto error;
 			}
 			sg->__cpu_power = 0;
-			sg->cpumask = *tmpmask;
+			cpus_copy(sg->cpumask, tmpmask);
 			sg->next = prev->next;
-			cpus_or(*covered, *covered, *tmpmask);
+			cpus_or(covered, covered, tmpmask);
 			prev->next = sg;
 			prev = sg;
 		}
@@ -7633,21 +7603,21 @@ static int __build_sched_domains(const c
 
 	/* Calculate CPU power for physical packages and nodes */
 #ifdef CONFIG_SCHED_SMT
-	for_each_cpu(i, *cpu_map) {
+	for_each_cpu(i, cpu_map) {
 		struct sched_domain *sd = &per_cpu(cpu_domains, i);
 
 		init_sched_groups_power(i, sd);
 	}
 #endif
 #ifdef CONFIG_SCHED_MC
-	for_each_cpu(i, *cpu_map) {
+	for_each_cpu(i, cpu_map) {
 		struct sched_domain *sd = &per_cpu(core_domains, i);
 
 		init_sched_groups_power(i, sd);
 	}
 #endif
 
-	for_each_cpu(i, *cpu_map) {
+	for_each_cpu(i, cpu_map) {
 		struct sched_domain *sd = &per_cpu(phys_domains, i);
 
 		init_sched_groups_power(i, sd);
@@ -7660,14 +7630,14 @@ static int __build_sched_domains(const c
 	if (sd_allnodes) {
 		struct sched_group *sg;
 
-		cpu_to_allnodes_group(cpus_first(*cpu_map), cpu_map, &sg,
+		cpu_to_allnodes_group(cpus_first(cpu_map), cpu_map, &sg,
 								tmpmask);
 		init_numa_sched_groups_power(sg);
 	}
 #endif
 
 	/* Attach the domains */
-	for_each_cpu(i, *cpu_map) {
+	for_each_cpu(i, cpu_map) {
 		struct sched_domain *sd;
 #ifdef CONFIG_SCHED_SMT
 		sd = &per_cpu(cpu_domains, i);
@@ -7679,23 +7649,23 @@ static int __build_sched_domains(const c
 		cpu_attach_domain(sd, rd, i);
 	}
 
-	SCHED_CPUMASK_FREE((void *)allmasks);
+	CPUMASK_FREE(allmasks);
 	return 0;
 
 #ifdef CONFIG_NUMA
 error:
 	free_sched_groups(cpu_map, tmpmask);
-	SCHED_CPUMASK_FREE((void *)allmasks);
+	CPUMASK_FREE(allmasks);
 	return -ENOMEM;
 #endif
 }
 
-static int build_sched_domains(const cpumask_t *cpu_map)
+static int build_sched_domains(const_cpumask_t cpu_map)
 {
 	return __build_sched_domains(cpu_map, NULL);
 }
 
-static cpumask_t *doms_cur;	/* current sched domains */
+static cpumask_t doms_cur;	/* current sched domains */
 static int ndoms_cur;		/* number of sched domains in 'doms_cur' */
 static struct sched_domain_attr *dattr_cur;
 				/* attribues of custom domains in 'doms_cur' */
@@ -7705,7 +7675,7 @@ static struct sched_domain_attr *dattr_c
  * cpumask_t) fails, then fallback to a single sched domain,
  * as determined by the single cpumask_t fallback_doms.
  */
-static cpumask_t fallback_doms;
+static cpumask_map_t fallback_doms;
 
 void __attribute__((weak)) arch_update_cpu_topology(void)
 {
@@ -7716,16 +7686,16 @@ void __attribute__((weak)) arch_update_c
  * For now this just excludes isolated cpus, but could be used to
  * exclude other special cases in the future.
  */
-static int arch_init_sched_domains(const cpumask_t *cpu_map)
+static int arch_init_sched_domains(const_cpumask_t cpu_map)
 {
 	int err;
 
 	arch_update_cpu_topology();
 	ndoms_cur = 1;
-	doms_cur = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
+	doms_cur = kmalloc(cpumask_size(), GFP_KERNEL);
 	if (!doms_cur)
-		doms_cur = &fallback_doms;
-	cpus_andnot(*doms_cur, *cpu_map, cpu_isolated_map);
+		doms_cur = fallback_doms;
+	cpus_andnot(doms_cur, cpu_map, cpu_isolated_map);
 	dattr_cur = NULL;
 	err = build_sched_domains(doms_cur);
 	register_sched_domain_sysctl();
@@ -7733,8 +7703,8 @@ static int arch_init_sched_domains(const
 	return err;
 }
 
-static void arch_destroy_sched_domains(const cpumask_t *cpu_map,
-				       cpumask_t *tmpmask)
+static void arch_destroy_sched_domains(const_cpumask_t cpu_map,
+				       cpumask_t tmpmask)
 {
 	free_sched_groups(cpu_map, tmpmask);
 }
@@ -7743,17 +7713,17 @@ static void arch_destroy_sched_domains(c
  * Detach sched domains from a group of cpus specified in cpu_map
  * These cpus will now be attached to the NULL domain
  */
-static void detach_destroy_domains(const cpumask_t *cpu_map)
+static void detach_destroy_domains(const_cpumask_t cpu_map)
 {
-	cpumask_t tmpmask;
+	cpumask_var_t tmpmask;
 	int i;
 
 	unregister_sched_domain_sysctl();
 
-	for_each_cpu(i, *cpu_map)
+	for_each_cpu(i, cpu_map)
 		cpu_attach_domain(NULL, &def_root_domain, i);
 	synchronize_sched();
-	arch_destroy_sched_domains(cpu_map, &tmpmask);
+	arch_destroy_sched_domains(cpu_map, tmpmask);
 }
 
 /* handle null as "default" */
@@ -7797,7 +7767,7 @@ static int dattrs_equal(struct sched_dom
  *
  * Call with hotplug lock held
  */
-void partition_sched_domains(int ndoms_new, cpumask_t *doms_new,
+void partition_sched_domains(int ndoms_new, cpumask_t doms_new,
 			     struct sched_domain_attr *dattr_new)
 {
 	int i, j, n;
@@ -7812,7 +7782,7 @@ void partition_sched_domains(int ndoms_n
 	/* Destroy deleted domains */
 	for (i = 0; i < ndoms_cur; i++) {
 		for (j = 0; j < n; j++) {
-			if (cpus_equal(doms_cur[i], doms_new[j])
+			if (cpus_equal(&doms_cur[i], &doms_new[j])
 			    && dattrs_equal(dattr_cur, i, dattr_new, j))
 				goto match1;
 		}
@@ -7824,15 +7794,15 @@ match1:
 
 	if (doms_new == NULL) {
 		ndoms_cur = 0;
-		doms_new = &fallback_doms;
-		cpus_andnot(doms_new[0], cpu_online_map, cpu_isolated_map);
+		doms_new = fallback_doms;
+		cpus_andnot(doms_new, cpu_online_map, cpu_isolated_map);
 		dattr_new = NULL;
 	}
 
 	/* Build new domains */
 	for (i = 0; i < ndoms_new; i++) {
 		for (j = 0; j < ndoms_cur; j++) {
-			if (cpus_equal(doms_new[i], doms_cur[j])
+			if (cpus_equal(&doms_new[i], &doms_cur[j])
 			    && dattrs_equal(dattr_new, i, dattr_cur, j))
 				goto match2;
 		}
@@ -7844,7 +7814,7 @@ match2:
 	}
 
 	/* Remember the new sched domains */
-	if (doms_cur != &fallback_doms)
+	if (doms_cur != fallback_doms)
 		kfree(doms_cur);
 	kfree(dattr_cur);	/* kfree(NULL) is safe */
 	doms_cur = doms_new;
@@ -7984,7 +7954,7 @@ static int update_runtime(struct notifie
 
 void __init sched_init_smp(void)
 {
-	cpumask_t non_isolated_cpus;
+	cpumask_var_t non_isolated_cpus;
 
 #if defined(CONFIG_NUMA)
 	sched_group_nodes_bycpu = kzalloc(nr_cpu_ids * sizeof(void **),
@@ -7993,7 +7963,7 @@ void __init sched_init_smp(void)
 #endif
 	get_online_cpus();
 	mutex_lock(&sched_domains_mutex);
-	arch_init_sched_domains(&cpu_online_map);
+	arch_init_sched_domains(cpu_online_map);
 	cpus_andnot(non_isolated_cpus, cpu_possible_map, cpu_isolated_map);
 	if (cpus_empty(non_isolated_cpus))
 		cpu_set(smp_processor_id(), non_isolated_cpus);
@@ -8011,7 +7981,7 @@ void __init sched_init_smp(void)
 	init_hrtick();
 
 	/* Move init over to a non-isolated CPU */
-	if (set_cpus_allowed(current, &non_isolated_cpus) < 0)
+	if (set_cpus_allowed(current, non_isolated_cpus) < 0)
 		BUG();
 	sched_init_granularity();
 }
--- struct-cpumasks.orig/kernel/sched_cpupri.c
+++ struct-cpumasks/kernel/sched_cpupri.c
@@ -67,14 +67,14 @@ static int convert_prio(int prio)
  * Returns: (int)bool - CPUs were found
  */
 int cpupri_find(struct cpupri *cp, struct task_struct *p,
-		cpumask_t *lowest_mask)
+		cpumask_t lowest_mask)
 {
 	int                  idx      = 0;
 	int                  task_pri = convert_prio(p->prio);
 
 	for_each_cpupri_active(cp->pri_active, idx) {
 		struct cpupri_vec *vec  = &cp->pri_to_cpu[idx];
-		cpumask_t mask;
+		cpumask_var_t mask;
 
 		if (idx >= task_pri)
 			break;
@@ -84,7 +84,7 @@ int cpupri_find(struct cpupri *cp, struc
 		if (cpus_empty(mask))
 			continue;
 
-		*lowest_mask = mask;
+		cpus_copy(lowest_mask, mask);
 		return 1;
 	}
 
--- struct-cpumasks.orig/kernel/sched_cpupri.h
+++ struct-cpumasks/kernel/sched_cpupri.h
@@ -12,9 +12,9 @@
 /* values 2-101 are RT priorities 0-99 */
 
 struct cpupri_vec {
-	spinlock_t lock;
-	int        count;
-	cpumask_t  mask;
+	spinlock_t     lock;
+	int            count;
+	cpumask_map_t  mask;
 };
 
 struct cpupri {
@@ -25,7 +25,7 @@ struct cpupri {
 
 #ifdef CONFIG_SMP
 int  cpupri_find(struct cpupri *cp,
-		 struct task_struct *p, cpumask_t *lowest_mask);
+		 struct task_struct *p, cpumask_t lowest_mask);
 void cpupri_set(struct cpupri *cp, int cpu, int pri);
 void cpupri_init(struct cpupri *cp);
 #else
--- struct-cpumasks.orig/kernel/sched_fair.c
+++ struct-cpumasks/kernel/sched_fair.c
@@ -956,7 +956,7 @@ static void yield_task_fair(struct rq *r
 #if defined(ARCH_HAS_SCHED_WAKE_IDLE)
 static int wake_idle(int cpu, struct task_struct *p)
 {
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 	struct sched_domain *sd;
 	int i;
 
--- struct-cpumasks.orig/kernel/sched_rt.c
+++ struct-cpumasks/kernel/sched_rt.c
@@ -139,14 +139,14 @@ static int rt_se_boosted(struct sched_rt
 }
 
 #ifdef CONFIG_SMP
-static inline cpumask_t sched_rt_period_mask(void)
+static inline const_cpumask_t sched_rt_period_mask(void)
 {
-	return cpu_rq(smp_processor_id())->rd->span;
+	return (const_cpumask_t)cpu_rq(smp_processor_id())->rd->span;
 }
 #else
-static inline cpumask_t sched_rt_period_mask(void)
+static inline const_cpumask_t sched_rt_period_mask(void)
 {
-	return cpu_online_map;
+	return (const_cpumask_t)cpu_online_map;
 }
 #endif
 
@@ -212,9 +212,9 @@ static inline int rt_rq_throttled(struct
 	return rt_rq->rt_throttled;
 }
 
-static inline cpumask_t sched_rt_period_mask(void)
+static inline const_cpumask_t sched_rt_period_mask(void)
 {
-	return cpu_online_map;
+	return (const_cpumask_t)cpu_online_map;
 }
 
 static inline
@@ -429,12 +429,12 @@ static inline int balance_runtime(struct
 static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
 {
 	int i, idle = 1;
-	cpumask_t span;
+	cpumask_var_t span;
 
 	if (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF)
 		return 1;
 
-	span = sched_rt_period_mask();
+	cpus_copy(span, sched_rt_period_mask());
 	for_each_cpu(i, span) {
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
@@ -805,16 +805,16 @@ static int select_task_rq_rt(struct task
 
 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 {
-	cpumask_t mask;
+	cpumask_var_t mask;
 
 	if (rq->curr->rt.nr_cpus_allowed == 1)
 		return;
 
 	if (p->rt.nr_cpus_allowed != 1
-	    && cpupri_find(&rq->rd->cpupri, p, &mask))
+	    && cpupri_find(&rq->rd->cpupri, p, mask))
 		return;
 
-	if (!cpupri_find(&rq->rd->cpupri, rq->curr, &mask))
+	if (!cpupri_find(&rq->rd->cpupri, rq->curr, mask))
 		return;
 
 	/*
@@ -956,18 +956,18 @@ static struct task_struct *pick_next_hig
 	return next;
 }
 
-static DEFINE_PER_CPU(cpumask_t, local_cpu_mask);
+static DEFINE_PER_CPU(cpumask_map_t, local_cpu_mask);
 
-static inline int pick_optimal_cpu(int this_cpu, cpumask_t *mask)
+static inline int pick_optimal_cpu(int this_cpu, const_cpumask_t mask)
 {
 	int first;
 
 	/* "this_cpu" is cheaper to preempt than a remote processor */
-	if ((this_cpu != -1) && cpu_isset(this_cpu, *mask))
+	if ((this_cpu != -1) && cpu_isset(this_cpu, mask))
 		return this_cpu;
 
-	first = cpus_first(*mask);
-	if (first != NR_CPUS)
+	first = cpus_first(mask);
+	if (first != nr_cpu_ids)
 		return first;
 
 	return -1;
@@ -976,9 +976,10 @@ static inline int pick_optimal_cpu(int t
 static int find_lowest_rq(struct task_struct *task)
 {
 	struct sched_domain *sd;
-	cpumask_t *lowest_mask = &__get_cpu_var(local_cpu_mask);
+	cpumask_t lowest_mask = __get_cpu_var(local_cpu_mask);
 	int this_cpu = smp_processor_id();
 	int cpu      = task_cpu(task);
+	cpumask_var_t domain_mask;
 
 	if (task->rt.nr_cpus_allowed == 1)
 		return -1; /* No other targets possible */
@@ -991,7 +992,7 @@ static int find_lowest_rq(struct task_st
 	 * I guess we might want to change cpupri_find() to ignore those
 	 * in the first place.
 	 */
-	cpus_and(*lowest_mask, *lowest_mask, cpu_active_map);
+	cpus_and(lowest_mask, lowest_mask, cpu_active_map);
 
 	/*
 	 * At this point we have built a mask of cpus representing the
@@ -1001,7 +1002,7 @@ static int find_lowest_rq(struct task_st
 	 * We prioritize the last cpu that the task executed on since
 	 * it is most likely cache-hot in that location.
 	 */
-	if (cpu_isset(cpu, *lowest_mask))
+	if (cpu_isset(cpu, lowest_mask))
 		return cpu;
 
 	/*
@@ -1013,13 +1014,12 @@ static int find_lowest_rq(struct task_st
 
 	for_each_domain(cpu, sd) {
 		if (sd->flags & SD_WAKE_AFFINE) {
-			cpumask_t domain_mask;
 			int       best_cpu;
 
-			cpus_and(domain_mask, sd->span, *lowest_mask);
+			cpus_and(domain_mask, sd->span, lowest_mask);
 
 			best_cpu = pick_optimal_cpu(this_cpu,
-						    &domain_mask);
+						    domain_mask);
 			if (best_cpu != -1)
 				return best_cpu;
 		}
@@ -1308,9 +1308,9 @@ move_one_task_rt(struct rq *this_rq, int
 }
 
 static void set_cpus_allowed_rt(struct task_struct *p,
-				const cpumask_t *new_mask)
+				const_cpumask_t new_mask)
 {
-	int weight = cpus_weight(*new_mask);
+	int weight = cpus_weight(new_mask);
 
 	BUG_ON(!rt_task(p));
 
@@ -1331,7 +1331,7 @@ static void set_cpus_allowed_rt(struct t
 		update_rt_migration(rq);
 	}
 
-	p->cpus_allowed    = *new_mask;
+	cpus_copy(p->cpus_allowed, new_mask);
 	p->rt.nr_cpus_allowed = weight;
 }
 

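For anyone skimming the sched conversion above, the recurring pattern is: the
SCHED_CPUMASK_VAR() helpers become the generic CPUMASK_PTR()/CPUMASK_FREE()
helpers, cpumask arguments are passed as handles rather than dereferenced
pointers, and the old "copy, then and" two-step collapses into a single
three-operand cpus_and().  A rough before/after fragment (illustration only,
not an actual hunk):

	/* old: value-style cpumask_t, explicit dereferences */
	SCHED_CPUMASK_VAR(nodemask, allmasks);
	*nodemask = node_to_cpumask(i);
	cpus_and(*nodemask, *nodemask, *cpu_map);
	if (cpus_empty(*nodemask))
		continue;

	/* new: handle into the preallocated allmasks block, no derefs */
	CPUMASK_PTR(nodemask, allmasks);
	cpus_and(nodemask, node_to_cpumask(i), cpu_map);
	if (cpus_empty(nodemask))
		continue;
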
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 19/31] cpumask: clean xen files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (17 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 18/31] cpumask: clean sched files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 20/31] cpumask: clean mm files Mike Travis
                   ` (11 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-xen --]
[-- Type: text/plain, Size: 4824 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/xen/enlighten.c |    9 +++++----
 arch/x86/xen/smp.c       |   12 ++++++------
 arch/x86/xen/suspend.c   |    2 +-
 arch/x86/xen/time.c      |    2 +-
 arch/x86/xen/xen-ops.h   |    2 +-
 drivers/xen/events.c     |    6 +++---
 6 files changed, 17 insertions(+), 16 deletions(-)

--- struct-cpumasks.orig/arch/x86/xen/enlighten.c
+++ struct-cpumasks/arch/x86/xen/enlighten.c
@@ -626,16 +626,17 @@ static void xen_flush_tlb_single(unsigne
 	preempt_enable();
 }
 
-static void xen_flush_tlb_others(const cpumask_t *cpus, struct mm_struct *mm,
+static void xen_flush_tlb_others(const_cpumask_t cpus, struct mm_struct *mm,
 				 unsigned long va)
 {
 	struct {
 		struct mmuext_op op;
-		cpumask_t mask;
+		cpumask_map_t mask;
 	} *args;
-	cpumask_t cpumask = *cpus;
+	cpumask_var_t cpumask;
 	struct multicall_space mcs;
 
+	cpus_copy(cpumask, cpus);
 	/*
 	 * A couple of (to be removed) sanity checks:
 	 *
@@ -653,7 +654,7 @@ static void xen_flush_tlb_others(const c
 
 	mcs = xen_mc_entry(sizeof(*args));
 	args = mcs.args;
-	args->mask = cpumask;
+	cpus_copy(args->mask, cpumask);
 	args->op.arg2.vcpumask = &args->mask;
 
 	if (va == TLB_FLUSH_ALL) {
--- struct-cpumasks.orig/arch/x86/xen/smp.c
+++ struct-cpumasks/arch/x86/xen/smp.c
@@ -33,7 +33,7 @@
 #include "xen-ops.h"
 #include "mmu.h"
 
-cpumask_t xen_cpu_initialized_map;
+cpumask_map_t xen_cpu_initialized_map;
 
 static DEFINE_PER_CPU(int, resched_irq);
 static DEFINE_PER_CPU(int, callfunc_irq);
@@ -192,7 +192,7 @@ static void __init xen_smp_prepare_cpus(
 	if (xen_smp_intr_init(0))
 		BUG();
 
-	xen_cpu_initialized_map = cpumask_of_cpu(0);
+	cpus_copy(xen_cpu_initialized_map, cpumask_of_cpu(0));
 
 	/* Restrict the possible_map according to max_cpus. */
 	while ((num_possible_cpus() > 1) && (num_possible_cpus() > max_cpus)) {
@@ -408,7 +408,7 @@ static void xen_smp_send_reschedule(int 
 	xen_send_IPI_one(cpu, XEN_RESCHEDULE_VECTOR);
 }
 
-static void xen_send_IPI_mask(const cpumask_t *mask, enum ipi_vector vector)
+static void xen_send_IPI_mask(const_cpumask_t mask, enum ipi_vector vector)
 {
 	unsigned cpu;
 
@@ -416,11 +416,11 @@ static void xen_send_IPI_mask(const cpum
 		xen_send_IPI_one(cpu, vector);
 }
 
-static void xen_smp_send_call_function_ipi(const cpumask_t mask)
+static void xen_smp_send_call_function_ipi(const_cpumask_t mask)
 {
 	int cpu;
 
-	xen_send_IPI_mask(&mask, XEN_CALL_FUNCTION_VECTOR);
+	xen_send_IPI_mask(mask, XEN_CALL_FUNCTION_VECTOR);
 
 	/* Make sure other vcpus get a chance to run if they need to. */
 	for_each_cpu(cpu, mask) {
@@ -433,7 +433,7 @@ static void xen_smp_send_call_function_i
 
 static void xen_smp_send_call_function_single_ipi(int cpu)
 {
-	xen_send_IPI_mask(&cpumask_of_cpu(cpu),
+	xen_send_IPI_mask(cpumask_of_cpu(cpu),
 			  XEN_CALL_FUNCTION_SINGLE_VECTOR);
 }
 
--- struct-cpumasks.orig/arch/x86/xen/suspend.c
+++ struct-cpumasks/arch/x86/xen/suspend.c
@@ -35,7 +35,7 @@ void xen_post_suspend(int suspend_cancel
 			pfn_to_mfn(xen_start_info->console.domU.mfn);
 	} else {
 #ifdef CONFIG_SMP
-		xen_cpu_initialized_map = cpu_online_map;
+		cpus_copy(xen_cpu_initialized_map, cpu_online_map);
 #endif
 		xen_vcpu_restore();
 	}
--- struct-cpumasks.orig/arch/x86/xen/time.c
+++ struct-cpumasks/arch/x86/xen/time.c
@@ -444,7 +444,7 @@ void xen_setup_timer(int cpu)
 	evt = &per_cpu(xen_clock_events, cpu);
 	memcpy(evt, xen_clockevent, sizeof(*evt));
 
-	evt->cpumask = cpumask_of_cpu(cpu);
+	cpus_copy(evt->cpumask, cpumask_of_cpu(cpu));
 	evt->irq = irq;
 
 	setup_runstate_info(cpu);
--- struct-cpumasks.orig/arch/x86/xen/xen-ops.h
+++ struct-cpumasks/arch/x86/xen/xen-ops.h
@@ -58,7 +58,7 @@ void __init xen_init_spinlocks(void);
 __cpuinit void xen_init_lock_cpu(int cpu);
 void xen_uninit_lock_cpu(int cpu);
 
-extern cpumask_t xen_cpu_initialized_map;
+extern cpumask_map_t xen_cpu_initialized_map;
 #else
 static inline void xen_smp_init(void) {}
 #endif
--- struct-cpumasks.orig/drivers/xen/events.c
+++ struct-cpumasks/drivers/xen/events.c
@@ -125,7 +125,7 @@ static void bind_evtchn_to_cpu(unsigned 
 
 	BUG_ON(irq == -1);
 #ifdef CONFIG_SMP
-	irq_to_desc(irq)->affinity = cpumask_of_cpu(cpu);
+	cpus_copy(irq_to_desc(irq)->affinity, cpumask_of_cpu(cpu));
 #endif
 
 	__clear_bit(chn, cpu_evtchn_mask[cpu_evtchn[chn]]);
@@ -143,7 +143,7 @@ static void init_evtchn_cpu_bindings(voi
 		struct irq_desc *desc = irq_to_desc(i);
 		if (!desc)
 			continue;
-		desc->affinity = cpumask_of_cpu(0);
+		cpus_copy(desc->affinity, cpumask_of_cpu(0));
 	}
 #endif
 
@@ -610,7 +610,7 @@ static void rebind_irq_to_cpu(unsigned i
 }
 
 
-static void set_affinity_irq(unsigned irq, cpumask_t dest)
+static void set_affinity_irq(unsigned irq, const_cpumask_t dest)
 {
 	unsigned tcpu = cpus_first(dest);
 	rebind_irq_to_cpu(irq, tcpu);

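The xen changes show the two declaration forms side by side: cpumask_map_t is
used where the mask is real storage (a struct member or a file-scope map such
as xen_cpu_initialized_map), while cpumask_var_t declares a local temporary,
and plain assignment of masks becomes an explicit cpus_copy().  Condensed
sketch of the xen_flush_tlb_others() shape after this patch (fragment only,
the sanity checks and multicall issue are elided):

	struct {
		struct mmuext_op op;
		cpumask_map_t mask;		/* storage embedded in the multicall args */
	} *args;
	cpumask_var_t cpumask;			/* local temporary */

	cpus_copy(cpumask, cpus);		/* was: cpumask_t cpumask = *cpus; */
	/* ... sanity checks, xen_mc_entry() ... */
	cpus_copy(args->mask, cpumask);		/* was: args->mask = cpumask; */
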
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 20/31] cpumask: clean mm files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (18 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 19/31] cpumask: clean xen files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 21/31] cpumask: clean acpi files Mike Travis
                   ` (10 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-mm --]
[-- Type: text/plain, Size: 8138 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 include/linux/mm_types.h |    2 +-
 mm/allocpercpu.c         |   18 +++++++++---------
 mm/page_alloc.c          |    6 +++---
 mm/pdflush.c             |    6 +++---
 mm/quicklist.c           |    4 ++--
 mm/slab.c                |    4 ++--
 mm/slub.c                |    4 ++--
 mm/vmscan.c              |    8 ++++----
 mm/vmstat.c              |    6 +++---
 9 files changed, 29 insertions(+), 29 deletions(-)

--- struct-cpumasks.orig/include/linux/mm_types.h
+++ struct-cpumasks/include/linux/mm_types.h
@@ -218,7 +218,7 @@ struct mm_struct {
 
 	unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
 
-	cpumask_t cpu_vm_mask;
+	cpumask_map_t cpu_vm_mask;
 
 	/* Architecture-specific MM context */
 	mm_context_t context;
--- struct-cpumasks.orig/mm/allocpercpu.c
+++ struct-cpumasks/mm/allocpercpu.c
@@ -31,10 +31,10 @@ static void percpu_depopulate(void *__pd
  * @__pdata: per-cpu data to depopulate
  * @mask: depopulate per-cpu data for cpu's selected through mask bits
  */
-static void __percpu_depopulate_mask(void *__pdata, cpumask_t *mask)
+static void __percpu_depopulate_mask(void *__pdata, const_cpumask_t mask)
 {
 	int cpu;
-	for_each_cpu(cpu, *mask)
+	for_each_cpu(cpu, mask)
 		percpu_depopulate(__pdata, cpu);
 }
 
@@ -80,15 +80,15 @@ static void *percpu_populate(void *__pda
  * Per-cpu objects are populated with zeroed buffers.
  */
 static int __percpu_populate_mask(void *__pdata, size_t size, gfp_t gfp,
-				  cpumask_t *mask)
+				  const_cpumask_t mask)
 {
-	cpumask_t populated;
+	cpumask_var_t populated;
 	int cpu;
 
 	cpus_clear(populated);
-	for_each_cpu(cpu, *mask)
+	for_each_cpu(cpu, mask)
 		if (unlikely(!percpu_populate(__pdata, size, gfp, cpu))) {
-			__percpu_depopulate_mask(__pdata, &populated);
+			__percpu_depopulate_mask(__pdata, populated);
 			return -ENOMEM;
 		} else
 			cpu_set(cpu, populated);
@@ -96,7 +96,7 @@ static int __percpu_populate_mask(void *
 }
 
 #define percpu_populate_mask(__pdata, size, gfp, mask) \
-	__percpu_populate_mask((__pdata), (size), (gfp), &(mask))
+	__percpu_populate_mask((__pdata), (size), (gfp), (mask))
 
 /**
  * percpu_alloc_mask - initial setup of per-cpu data
@@ -108,7 +108,7 @@ static int __percpu_populate_mask(void *
  * which is simplified by the percpu_alloc() wrapper.
  * Per-cpu objects are populated with zeroed buffers.
  */
-void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask)
+void *__percpu_alloc_mask(size_t size, gfp_t gfp, const_cpumask_t mask)
 {
 	/*
 	 * We allocate whole cache lines to avoid false sharing
@@ -137,7 +137,7 @@ void percpu_free(void *__pdata)
 {
 	if (unlikely(!__pdata))
 		return;
-	__percpu_depopulate_mask(__pdata, &cpu_possible_map);
+	__percpu_depopulate_mask(__pdata, cpu_possible_map);
 	kfree(__percpu_disguise(__pdata));
 }
 EXPORT_SYMBOL_GPL(percpu_free);
--- struct-cpumasks.orig/mm/page_alloc.c
+++ struct-cpumasks/mm/page_alloc.c
@@ -2080,7 +2080,7 @@ static int find_next_best_node(int node,
 	int n, val;
 	int min_val = INT_MAX;
 	int best_node = -1;
-	const cpumask_t tmp = node_to_cpumask(0);
+	const_cpumask_t tmp = node_to_cpumask(0);
 
 	/* Use the local node if we haven't already */
 	if (!node_isset(node, *used_node_mask)) {
@@ -2101,8 +2101,8 @@ static int find_next_best_node(int node,
 		val += (n < node);
 
 		/* Give preference to headless and unused nodes */
-		node_to_cpumask_ptr_next(tmp, n);
-		if (!cpus_empty(*tmp))
+		tmp = node_to_cpumask(n);
+		if (!cpus_empty(tmp))
 			val += PENALTY_FOR_NODE_WITH_CPUS;
 
 		/* Slight preference for less loaded node */
--- struct-cpumasks.orig/mm/pdflush.c
+++ struct-cpumasks/mm/pdflush.c
@@ -172,7 +172,7 @@ static int __pdflush(struct pdflush_work
 static int pdflush(void *dummy)
 {
 	struct pdflush_work my_work;
-	cpumask_t cpus_allowed;
+	cpumask_var_t cpus_allowed;
 
 	/*
 	 * pdflush can spend a lot of time doing encryption via dm-crypt.  We
@@ -187,8 +187,8 @@ static int pdflush(void *dummy)
 	 * This is needed as pdflush's are dynamically created and destroyed.
 	 * The boottime pdflush's are easily placed w/o these 2 lines.
 	 */
-	cpuset_cpus_allowed(current, &cpus_allowed);
-	set_cpus_allowed(current, &cpus_allowed);
+	cpuset_cpus_allowed(current, cpus_allowed);
+	set_cpus_allowed(current, cpus_allowed);
 
 	return __pdflush(&my_work);
 }
--- struct-cpumasks.orig/mm/quicklist.c
+++ struct-cpumasks/mm/quicklist.c
@@ -29,7 +29,7 @@ static unsigned long max_pages(unsigned 
 	int node = numa_node_id();
 	struct zone *zones = NODE_DATA(node)->node_zones;
 	int num_cpus_on_node;
-	const cpumask_t cpumask_on_node = node_to_cpumask(node);
+	const_cpumask_t cpumask_on_node = node_to_cpumask(node);
 
 	node_free_pages =
 #ifdef CONFIG_ZONE_DMA
@@ -42,7 +42,7 @@ static unsigned long max_pages(unsigned 
 
 	max = node_free_pages / FRACTION_OF_NODE_MEM;
 
-	num_cpus_on_node = cpus_weight(*cpumask_on_node);
+	num_cpus_on_node = cpus_weight(cpumask_on_node);
 	max /= num_cpus_on_node;
 
 	return max(max, min_pages);
--- struct-cpumasks.orig/mm/slab.c
+++ struct-cpumasks/mm/slab.c
@@ -1079,7 +1079,7 @@ static void __cpuinit cpuup_canceled(lon
 	struct kmem_cache *cachep;
 	struct kmem_list3 *l3 = NULL;
 	int node = cpu_to_node(cpu);
-	const cpumask_t mask = node_to_cpumask(node);
+	const_cpumask_t mask = node_to_cpumask(node);
 
 	list_for_each_entry(cachep, &cache_chain, next) {
 		struct array_cache *nc;
@@ -1101,7 +1101,7 @@ static void __cpuinit cpuup_canceled(lon
 		if (nc)
 			free_block(cachep, nc->entry, nc->avail, node);
 
-		if (!cpus_empty(*mask)) {
+		if (!cpus_empty(mask)) {
 			spin_unlock_irq(&l3->list_lock);
 			goto free_array_cache;
 		}
--- struct-cpumasks.orig/mm/slub.c
+++ struct-cpumasks/mm/slub.c
@@ -1972,7 +1972,7 @@ static DEFINE_PER_CPU(struct kmem_cache_
 				kmem_cache_cpu)[NR_KMEM_CACHE_CPU];
 
 static DEFINE_PER_CPU(struct kmem_cache_cpu *, kmem_cache_cpu_free);
-static cpumask_t kmem_cach_cpu_free_init_once = CPU_MASK_NONE;
+static cpumask_map_t kmem_cach_cpu_free_init_once = CPU_MASK_NONE;
 
 static struct kmem_cache_cpu *alloc_kmem_cache_cpu(struct kmem_cache *s,
 							int cpu, gfp_t flags)
@@ -3446,7 +3446,7 @@ struct location {
 	long max_time;
 	long min_pid;
 	long max_pid;
-	cpumask_t cpus;
+	cpumask_map_t cpus;
 	nodemask_t nodes;
 };
 
--- struct-cpumasks.orig/mm/vmscan.c
+++ struct-cpumasks/mm/vmscan.c
@@ -1687,9 +1687,9 @@ static int kswapd(void *p)
 	struct reclaim_state reclaim_state = {
 		.reclaimed_slab = 0,
 	};
-	const cpumask_t cpumask = node_to_cpumask(pgdat->node_id);
+	const_cpumask_t cpumask = node_to_cpumask(pgdat->node_id);
 
-	if (!cpus_empty(*cpumask))
+	if (!cpus_empty(cpumask))
 		set_cpus_allowed(tsk, cpumask);
 	current->reclaim_state = &reclaim_state;
 
@@ -1924,9 +1924,9 @@ static int __devinit cpu_callback(struct
 	if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN) {
 		for_each_node_state(nid, N_HIGH_MEMORY) {
 			pg_data_t *pgdat = NODE_DATA(nid);
-			const cpumask_t mask = node_to_cpumask(pgdat->node_id);
+			const_cpumask_t mask = node_to_cpumask(pgdat->node_id);
 
-			if (any_online_cpu(*mask) < nr_cpu_ids)
+			if (any_online_cpu(mask) < nr_cpu_ids)
 				/* One of our CPUs online: restore mask */
 				set_cpus_allowed(pgdat->kswapd, mask);
 		}
--- struct-cpumasks.orig/mm/vmstat.c
+++ struct-cpumasks/mm/vmstat.c
@@ -20,14 +20,14 @@
 DEFINE_PER_CPU(struct vm_event_state, vm_event_states) = {{0}};
 EXPORT_PER_CPU_SYMBOL(vm_event_states);
 
-static void sum_vm_events(unsigned long *ret, cpumask_t *cpumask)
+static void sum_vm_events(unsigned long *ret, const_cpumask_t cpumask)
 {
 	int cpu;
 	int i;
 
 	memset(ret, 0, NR_VM_EVENT_ITEMS * sizeof(unsigned long));
 
-	for_each_cpu(cpu, *cpumask) {
+	for_each_cpu(cpu, cpumask) {
 		struct vm_event_state *this = &per_cpu(vm_event_states, cpu);
 
 		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
@@ -43,7 +43,7 @@ static void sum_vm_events(unsigned long 
 void all_vm_events(unsigned long *ret)
 {
 	get_online_cpus();
-	sum_vm_events(ret, &cpu_online_map);
+	sum_vm_events(ret, cpu_online_map);
 	put_online_cpus();
 }
 EXPORT_SYMBOL_GPL(all_vm_events);

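The mm side is mostly about node_to_cpumask(): it now hands back a
const_cpumask_t that is used directly, so the *mask dereferences and the
node_to_cpumask_ptr_next() iterator go away.  Typical shape after this patch,
taken from the kswapd direction (illustration only):

	const_cpumask_t cpumask = node_to_cpumask(pgdat->node_id);

	if (!cpus_empty(cpumask))		/* was: !cpus_empty(*cpumask) */
		set_cpus_allowed(tsk, cpumask);	/* handle passed straight through */
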
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 21/31] cpumask: clean acpi files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (19 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 20/31] cpumask: clean mm files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 22/31] cpumask: clean irq files Mike Travis
                   ` (9 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-acpi --]
[-- Type: text/plain, Size: 4225 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/acpi/boot.c         |    4 ++--
 arch/x86/kernel/acpi/cstate.c       |    2 +-
 drivers/acpi/processor_perflib.c    |    6 +++---
 drivers/acpi/processor_throttling.c |   12 ++++++------
 include/acpi/processor.h            |    2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/acpi/boot.c
+++ struct-cpumasks/arch/x86/kernel/acpi/boot.c
@@ -520,7 +520,7 @@ static int __cpuinit _acpi_map_lsapic(ac
 	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
 	union acpi_object *obj;
 	struct acpi_madt_local_apic *lapic;
-	cpumask_t tmp_map, new_map;
+	cpumask_var_t tmp_map, new_map;
 	u8 physid;
 	int cpu;
 
@@ -551,7 +551,7 @@ static int __cpuinit _acpi_map_lsapic(ac
 	buffer.length = ACPI_ALLOCATE_BUFFER;
 	buffer.pointer = NULL;
 
-	tmp_map = cpu_present_map;
+	cpus_copy(tmp_map, cpu_present_map);
 	acpi_register_lapic(physid, lapic->lapic_flags & ACPI_MADT_ENABLED);
 
 	/*
--- struct-cpumasks.orig/arch/x86/kernel/acpi/cstate.c
+++ struct-cpumasks/arch/x86/kernel/acpi/cstate.c
@@ -128,7 +128,7 @@ int acpi_processor_ffh_cstate_probe(unsi
 		 cx->address);
 
 out:
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 	return retval;
 }
 EXPORT_SYMBOL_GPL(acpi_processor_ffh_cstate_probe);
--- struct-cpumasks.orig/drivers/acpi/processor_perflib.c
+++ struct-cpumasks/drivers/acpi/processor_perflib.c
@@ -571,7 +571,7 @@ int acpi_processor_preregister_performan
 	int count, count_target;
 	int retval = 0;
 	unsigned int i, j;
-	cpumask_t covered_cpus;
+	cpumask_var_t covered_cpus;
 	struct acpi_processor *pr;
 	struct acpi_psd_package *pdomain;
 	struct acpi_processor *match_pr;
@@ -701,8 +701,8 @@ int acpi_processor_preregister_performan
 
 			match_pr->performance->shared_type = 
 					pr->performance->shared_type;
-			match_pr->performance->shared_cpu_map =
-				pr->performance->shared_cpu_map;
+			cpus_copy(match_pr->performance->shared_cpu_map,
+					pr->performance->shared_cpu_map);
 		}
 	}
 
--- struct-cpumasks.orig/drivers/acpi/processor_throttling.c
+++ struct-cpumasks/drivers/acpi/processor_throttling.c
@@ -61,7 +61,7 @@ static int acpi_processor_update_tsd_coo
 	int count, count_target;
 	int retval = 0;
 	unsigned int i, j;
-	cpumask_t covered_cpus;
+	cpumask_var_t covered_cpus;
 	struct acpi_processor *pr, *match_pr;
 	struct acpi_tsd_package *pdomain, *match_pdomain;
 	struct acpi_processor_throttling *pthrottling, *match_pthrottling;
@@ -841,7 +841,7 @@ static int acpi_processor_get_throttling
 	set_cpus_allowed(current, cpumask_of_cpu(pr->id));
 	ret = pr->throttling.acpi_processor_get_throttling(pr);
 	/* restore the previous state */
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 
 	return ret;
 }
@@ -986,13 +986,13 @@ static int acpi_processor_set_throttling
 
 int acpi_processor_set_throttling(struct acpi_processor *pr, int state)
 {
-	cpumask_t saved_mask;
+	cpumask_var_t saved_mask;
 	int ret = 0;
 	unsigned int i;
 	struct acpi_processor *match_pr;
 	struct acpi_processor_throttling *p_throttling;
 	struct throttling_tstate t_state;
-	cpumask_t online_throttling_cpus;
+	cpumask_var_t online_throttling_cpus;
 
 	if (!pr)
 		return -EINVAL;
@@ -1003,7 +1003,7 @@ int acpi_processor_set_throttling(struct
 	if ((state < 0) || (state > (pr->throttling.state_count - 1)))
 		return -EINVAL;
 
-	saved_mask = current->cpus_allowed;
+	cpus_copy(saved_mask, current->cpus_allowed);
 	t_state.target_state = state;
 	p_throttling = &(pr->throttling);
 	cpus_and(online_throttling_cpus, cpu_online_map,
@@ -1074,7 +1074,7 @@ int acpi_processor_set_throttling(struct
 							&t_state);
 	}
 	/* restore the previous state */
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 	return ret;
 }
 
--- struct-cpumasks.orig/include/acpi/processor.h
+++ struct-cpumasks/include/acpi/processor.h
@@ -127,7 +127,7 @@ struct acpi_processor_performance {
 	unsigned int state_count;
 	struct acpi_processor_px *states;
 	struct acpi_psd_package domain_info;
-	cpumask_t shared_cpu_map;
+	cpumask_map_t shared_cpu_map;
 	unsigned int shared_type;
 };
 

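Most of the acpi churn is the save/restore idiom around a temporary affinity
change, which after this patch reads as below (sketch only; the surrounding
throttling/cstate logic is unchanged):

	cpumask_var_t saved_mask;

	cpus_copy(saved_mask, current->cpus_allowed);	/* was: saved_mask = current->cpus_allowed; */
	set_cpus_allowed(current, cpumask_of_cpu(pr->id));
	/* ... per-cpu throttling/MSR work ... */
	set_cpus_allowed(current, saved_mask);		/* was: set_cpus_allowed(current, &saved_mask); */
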
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 22/31] cpumask: clean irq files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (20 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 21/31] cpumask: clean acpi files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 23/31] cpumask: clean pci files Mike Travis
                   ` (8 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-irq --]
[-- Type: text/plain, Size: 6609 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/irq_64.c |    6 +++---
 include/asm-x86/irq.h    |    2 +-
 include/linux/irq.h      |   10 +++++-----
 kernel/irq/manage.c      |   12 ++++++------
 kernel/irq/migration.c   |    6 +++---
 kernel/irq/proc.c        |   13 +++++++------
 6 files changed, 25 insertions(+), 24 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/irq_64.c
+++ struct-cpumasks/arch/x86/kernel/irq_64.c
@@ -242,14 +242,14 @@ asmlinkage unsigned int do_IRQ(struct pt
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-void fixup_irqs(cpumask_t map)
+void fixup_irqs(const_cpumask_t map)
 {
 	unsigned int irq;
 	static int warned;
 	struct irq_desc *desc;
+	cpumask_var_t mask;
 
 	for_each_irq_desc(irq, desc) {
-		cpumask_t mask;
 		int break_affinity = 0;
 		int set_affinity = 1;
 
@@ -268,7 +268,7 @@ void fixup_irqs(cpumask_t map)
 		cpus_and(mask, desc->affinity, map);
 		if (cpus_empty(mask)) {
 			break_affinity = 1;
-			mask = map;
+			cpus_copy(mask, map);
 		}
 
 		if (desc->chip->mask)
--- struct-cpumasks.orig/include/asm-x86/irq.h
+++ struct-cpumasks/include/asm-x86/irq.h
@@ -37,7 +37,7 @@ extern int irqbalance_disable(char *str)
 
 #ifdef CONFIG_HOTPLUG_CPU
 #include <linux/cpumask.h>
-extern void fixup_irqs(cpumask_t map);
+extern void fixup_irqs(const_cpumask_t map);
 #endif
 
 extern unsigned int do_IRQ(struct pt_regs *regs);
--- struct-cpumasks.orig/include/linux/irq.h
+++ struct-cpumasks/include/linux/irq.h
@@ -111,7 +111,7 @@ struct irq_chip {
 	void		(*eoi)(unsigned int irq);
 
 	void		(*end)(unsigned int irq);
-	void		(*set_affinity)(unsigned int irq, cpumask_t dest);
+	void		(*set_affinity)(unsigned int irq, const_cpumask_t dest);
 	int		(*retrigger)(unsigned int irq);
 	int		(*set_type)(unsigned int irq, unsigned int flow_type);
 	int		(*set_wake)(unsigned int irq, unsigned int on);
@@ -180,11 +180,11 @@ struct irq_desc {
 	unsigned long		last_unhandled;	/* Aging timer for unhandled count */
 	spinlock_t		lock;
 #ifdef CONFIG_SMP
-	cpumask_t		affinity;
+	cpumask_map_t		affinity;
 	unsigned int		cpu;
 #endif
 #ifdef CONFIG_GENERIC_PENDING_IRQ
-	cpumask_t		pending_mask;
+	cpumask_map_t		pending_mask;
 #endif
 #ifdef CONFIG_PROC_FS
 	struct proc_dir_entry	*dir;
@@ -243,7 +243,7 @@ extern int setup_irq(unsigned int irq, s
 
 #ifdef CONFIG_GENERIC_PENDING_IRQ
 
-void set_pending_irq(unsigned int irq, cpumask_t mask);
+void set_pending_irq(unsigned int irq, const_cpumask_t mask);
 void move_native_irq(int irq);
 void move_masked_irq(int irq);
 
@@ -261,7 +261,7 @@ static inline void move_masked_irq(int i
 {
 }
 
-static inline void set_pending_irq(unsigned int irq, cpumask_t mask)
+static inline void set_pending_irq(unsigned int irq, const_cpumask_t mask)
 {
 }
 
--- struct-cpumasks.orig/kernel/irq/manage.c
+++ struct-cpumasks/kernel/irq/manage.c
@@ -17,7 +17,7 @@
 
 #ifdef CONFIG_SMP
 
-cpumask_t irq_default_affinity = CPU_MASK_ALL;
+cpumask_map_t irq_default_affinity = CPU_MASK_ALL;
 
 /**
  *	synchronize_irq - wait for pending IRQ handlers (on other CPUs)
@@ -79,7 +79,7 @@ int irq_can_set_affinity(unsigned int ir
  *	@cpumask:	cpumask
  *
  */
-int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
+int irq_set_affinity(unsigned int irq, const_cpumask_t cpumask)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 
@@ -91,13 +91,13 @@ int irq_set_affinity(unsigned int irq, c
 		unsigned long flags;
 
 		spin_lock_irqsave(&desc->lock, flags);
-		desc->affinity = cpumask;
+		cpus_copy(desc->affinity, cpumask);
 		desc->chip->set_affinity(irq, cpumask);
 		spin_unlock_irqrestore(&desc->lock, flags);
 	} else
 		set_pending_irq(irq, cpumask);
 #else
-	desc->affinity = cpumask;
+	cpus_copy(desc->affinity, cpumask);
 	desc->chip->set_affinity(irq, cpumask);
 #endif
 	return 0;
@@ -109,7 +109,7 @@ int irq_set_affinity(unsigned int irq, c
  */
 int irq_select_affinity(unsigned int irq)
 {
-	cpumask_t mask;
+	cpumask_var_t mask;
 	struct irq_desc *desc;
 
 	if (!irq_can_set_affinity(irq))
@@ -118,7 +118,7 @@ int irq_select_affinity(unsigned int irq
 	cpus_and(mask, cpu_online_map, irq_default_affinity);
 
 	desc = irq_to_desc(irq);
-	desc->affinity = mask;
+	cpus_copy(desc->affinity, mask);
 	desc->chip->set_affinity(irq, mask);
 
 	return 0;
--- struct-cpumasks.orig/kernel/irq/migration.c
+++ struct-cpumasks/kernel/irq/migration.c
@@ -1,21 +1,21 @@
 
 #include <linux/irq.h>
 
-void set_pending_irq(unsigned int irq, cpumask_t mask)
+void set_pending_irq(unsigned int irq, const_cpumask_t mask)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 	unsigned long flags;
 
 	spin_lock_irqsave(&desc->lock, flags);
 	desc->status |= IRQ_MOVE_PENDING;
-	desc->pending_mask = mask;
+	cpus_copy(desc->pending_mask, mask);
 	spin_unlock_irqrestore(&desc->lock, flags);
 }
 
 void move_masked_irq(int irq)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
-	cpumask_t tmp;
+	cpumask_var_t tmp;
 
 	if (likely(!(desc->status & IRQ_MOVE_PENDING)))
 		return;
--- struct-cpumasks.orig/kernel/irq/proc.c
+++ struct-cpumasks/kernel/irq/proc.c
@@ -20,11 +20,12 @@ static struct proc_dir_entry *root_irq_d
 static int irq_affinity_proc_show(struct seq_file *m, void *v)
 {
 	struct irq_desc *desc = irq_to_desc((long)m->private);
-	cpumask_t *mask = &desc->affinity;
+	cpumask_var_t mask;
 
+	cpus_copy(mask, desc->affinity);
 #ifdef CONFIG_GENERIC_PENDING_IRQ
 	if (desc->status & IRQ_MOVE_PENDING)
-		mask = &desc->pending_mask;
+		cpus_copy(mask, desc->pending_mask);
 #endif
 	seq_cpumask(m, mask);
 	seq_putc(m, '\n');
@@ -40,7 +41,7 @@ static ssize_t irq_affinity_proc_write(s
 		const char __user *buffer, size_t count, loff_t *pos)
 {
 	unsigned int irq = (int)(long)PDE(file->f_path.dentry->d_inode)->data;
-	cpumask_t new_value;
+	cpumask_var_t new_value;
 	int err;
 
 	if (!irq_to_desc(irq)->chip->set_affinity || no_irq_affinity ||
@@ -84,7 +85,7 @@ static const struct file_operations irq_
 
 static int default_affinity_show(struct seq_file *m, void *v)
 {
-	seq_cpumask(m, &irq_default_affinity);
+	seq_cpumask(m, irq_default_affinity);
 	seq_putc(m, '\n');
 	return 0;
 }
@@ -92,7 +93,7 @@ static int default_affinity_show(struct 
 static ssize_t default_affinity_write(struct file *file,
 		const char __user *buffer, size_t count, loff_t *ppos)
 {
-	cpumask_t new_value;
+	cpumask_var_t new_value;
 	int err;
 
 	err = cpumask_parse_user(buffer, count, new_value);
@@ -110,7 +111,7 @@ static ssize_t default_affinity_write(st
 	if (!cpus_intersects(new_value, cpu_online_map))
 		return -EINVAL;
 
-	irq_default_affinity = new_value;
+	cpus_copy(irq_default_affinity, new_value);
 
 	return count;
 }

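For irq, the visible API change is that irq_set_affinity() and the chip
->set_affinity() callback take a const_cpumask_t handle, and the descriptor's
affinity/pending_mask fields become cpumask_map_t storage updated via
cpus_copy().  Condensed sketch of irq_set_affinity() after this patch
(capability checks, locking and the GENERIC_PENDING_IRQ branch elided):

	int irq_set_affinity(unsigned int irq, const_cpumask_t cpumask)
	{
		struct irq_desc *desc = irq_to_desc(irq);

		/* (checks and locking elided) */
		cpus_copy(desc->affinity, cpumask);	/* was: desc->affinity = cpumask; */
		desc->chip->set_affinity(irq, cpumask);	/* callback now takes const_cpumask_t */
		return 0;
	}
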
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 23/31] cpumask: clean pci files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (21 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 22/31] cpumask: clean irq files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 24/31] cpumask: clean cpu files Mike Travis
                   ` (7 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-pci --]
[-- Type: text/plain, Size: 2887 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 drivers/pci/pci-driver.c |    7 ++++---
 drivers/pci/pci-sysfs.c  |    8 ++++----
 drivers/pci/probe.c      |    4 ++--
 include/asm-x86/pci.h    |    2 +-
 4 files changed, 11 insertions(+), 10 deletions(-)

--- struct-cpumasks.orig/drivers/pci/pci-driver.c
+++ struct-cpumasks/drivers/pci/pci-driver.c
@@ -180,11 +180,12 @@ static int pci_call_probe(struct pci_dri
 	   allocates its local memory on the right node without
 	   any need to change it. */
 	struct mempolicy *oldpol;
-	cpumask_t oldmask = current->cpus_allowed;
+	cpumask_var_t oldmask;
 	int node = dev_to_node(&dev->dev);
 
+	cpus_copy(oldmask, current->cpus_allowed);
 	if (node >= 0)
-		set_cpus_allowed(current, node_to_cpumask(node);
+		set_cpus_allowed(current, node_to_cpumask(node));
 
 	/* And set default memory allocation policy */
 	oldpol = current->mempolicy;
@@ -192,7 +193,7 @@ static int pci_call_probe(struct pci_dri
 #endif
 	error = drv->probe(dev, id);
 #ifdef CONFIG_NUMA
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 	current->mempolicy = oldpol;
 #endif
 	return error;
--- struct-cpumasks.orig/drivers/pci/pci-sysfs.c
+++ struct-cpumasks/drivers/pci/pci-sysfs.c
@@ -69,10 +69,10 @@ static ssize_t broken_parity_status_stor
 static ssize_t local_cpus_show(struct device *dev,
 			struct device_attribute *attr, char *buf)
 {		
-	cpumask_t mask;
+	cpumask_var_t mask;
 	int len;
 
-	mask = pcibus_to_cpumask(to_pci_dev(dev)->bus);
+	cpus_copy(mask, pcibus_to_cpumask(to_pci_dev(dev)->bus));
 	len = cpumask_scnprintf(buf, PAGE_SIZE-2, mask);
 	buf[len++] = '\n';
 	buf[len] = '\0';
@@ -83,10 +83,10 @@ static ssize_t local_cpus_show(struct de
 static ssize_t local_cpulist_show(struct device *dev,
 			struct device_attribute *attr, char *buf)
 {
-	cpumask_t mask;
+	cpumask_var_t mask;
 	int len;
 
-	mask = pcibus_to_cpumask(to_pci_dev(dev)->bus);
+	cpus_copy(mask, pcibus_to_cpumask(to_pci_dev(dev)->bus));
 	len = cpulist_scnprintf(buf, PAGE_SIZE-2, mask);
 	buf[len++] = '\n';
 	buf[len] = '\0';
--- struct-cpumasks.orig/drivers/pci/probe.c
+++ struct-cpumasks/drivers/pci/probe.c
@@ -119,9 +119,9 @@ static ssize_t pci_bus_show_cpuaffinity(
 					char *buf)
 {
 	int ret;
-	cpumask_t cpumask;
+	cpumask_var_t cpumask;
 
-	cpumask = pcibus_to_cpumask(to_pci_bus(dev));
+	cpus_copy(cpumask, pcibus_to_cpumask(to_pci_bus(dev)));
 	ret = type?
 		cpulist_scnprintf(buf, PAGE_SIZE-2, cpumask):
 		cpumask_scnprintf(buf, PAGE_SIZE-2, cpumask);
--- struct-cpumasks.orig/include/asm-x86/pci.h
+++ struct-cpumasks/include/asm-x86/pci.h
@@ -107,7 +107,7 @@ static inline int __pcibus_to_node(struc
 	return sd->node;
 }
 
-static inline cpumask_t __pcibus_to_cpumask(struct pci_bus *bus)
+static inline const_cpumask_t __pcibus_to_cpumask(struct pci_bus *bus)
 {
 	return node_to_cpumask(__pcibus_to_node(bus));
 }

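The pci piece is small: __pcibus_to_cpumask() now returns a const_cpumask_t
handle instead of a cpumask_t by value, and callers that want a scratch copy
take one explicitly before printing.  Fragment (illustration only):

	cpumask_var_t mask;
	int len;

	cpus_copy(mask, pcibus_to_cpumask(to_pci_dev(dev)->bus));
	len = cpumask_scnprintf(buf, PAGE_SIZE-2, mask);	/* scnprintf takes the handle directly */
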
-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 24/31] cpumask: clean cpu files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (22 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 23/31] cpumask: clean pci files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 25/31] cpumask: clean rcu files Mike Travis
                   ` (6 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-cpu --]
[-- Type: text/plain, Size: 22258 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/cpu/common.c            |    2 
 arch/x86/kernel/cpu/intel_cacheinfo.c   |   16 +++----
 arch/x86/kernel/cpu/mcheck/mce_64.c     |    4 -
 arch/x86/kernel/cpu/mcheck/mce_amd_64.c |   44 +++++++++++----------
 arch/x86/kernel/setup_percpu.c          |    8 +--
 drivers/base/cpu.c                      |    6 +-
 include/linux/cpuset.h                  |   12 ++---
 include/linux/percpu.h                  |    7 +--
 kernel/cpu.c                            |   28 ++++++-------
 kernel/cpuset.c                         |   66 ++++++++++++++++----------------
 10 files changed, 98 insertions(+), 95 deletions(-)

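One note on the cpuset/cpu conversions below: functions that used to fill a
caller-supplied cpumask_t * now take a plain cpumask_t handle as the output
argument and write it with cpus_copy()/cpus_and() rather than *mask
assignment.  Sketch of the pattern, taken from the UP stub of
cpuset_cpus_allowed() further down (illustration only):

	static inline void cpuset_cpus_allowed(struct task_struct *p, cpumask_t mask)
	{
		cpus_copy(mask, cpu_possible_map);	/* was: *mask = cpu_possible_map; */
	}
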
--- struct-cpumasks.orig/arch/x86/kernel/cpu/common.c
+++ struct-cpumasks/arch/x86/kernel/cpu/common.c
@@ -842,7 +842,7 @@ static __init int setup_disablecpuid(cha
 }
 __setup("clearcpuid=", setup_disablecpuid);
 
-cpumask_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
+cpumask_map_t cpu_initialized __cpuinitdata = CPU_MASK_NONE;
 
 #ifdef CONFIG_X86_64
 struct x8664_pda **_cpu_pda __read_mostly;
--- struct-cpumasks.orig/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ struct-cpumasks/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -132,7 +132,7 @@ struct _cpuid4_info {
 	union _cpuid4_leaf_ecx ecx;
 	unsigned long size;
 	unsigned long can_disable;
-	cpumask_t shared_cpu_map;	/* future?: only cpus/node is needed */
+	cpumask_map_t shared_cpu_map;	/* future?: only cpus/node is needed */
 };
 
 #ifdef CONFIG_PCI
@@ -539,7 +539,7 @@ static int __cpuinit detect_cache_attrib
 	struct _cpuid4_info	*this_leaf;
 	unsigned long		j;
 	int			retval;
-	cpumask_t		oldmask;
+	cpumask_var_t		oldmask;
 
 	if (num_cache_leaves == 0)
 		return -ENOENT;
@@ -549,7 +549,7 @@ static int __cpuinit detect_cache_attrib
 	if (per_cpu(cpuid4_info, cpu) == NULL)
 		return -ENOMEM;
 
-	oldmask = current->cpus_allowed;
+	cpus_copy(oldmask, current->cpus_allowed);
 	retval = set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	if (retval)
 		goto out;
@@ -567,7 +567,7 @@ static int __cpuinit detect_cache_attrib
 		}
 		cache_shared_cpu_map_setup(cpu, j);
 	}
-	set_cpus_allowed(current, &oldmask);
+	set_cpus_allowed(current, oldmask);
 
 out:
 	if (retval) {
@@ -623,11 +623,11 @@ static ssize_t show_shared_cpu_map_func(
 	int n = 0;
 
 	if (len > 1) {
-		cpumask_t *mask = &this_leaf->shared_cpu_map;
+		const_cpumask_t mask = this_leaf->shared_cpu_map;
 
 		n = type?
-			cpulist_scnprintf(buf, len-2, *mask):
-			cpumask_scnprintf(buf, len-2, *mask);
+			cpulist_scnprintf(buf, len-2, mask):
+			cpumask_scnprintf(buf, len-2, mask);
 		buf[n++] = '\n';
 		buf[n] = '\0';
 	}
@@ -869,7 +869,7 @@ err_out:
 	return -ENOMEM;
 }
 
-static cpumask_t cache_dev_map = CPU_MASK_NONE;
+static cpumask_map_t cache_dev_map = CPU_MASK_NONE;
 
 /* Add/Remove cache interface for CPU device */
 static int __cpuinit cache_add_dev(struct sys_device * sys_dev)
--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_64.c
@@ -510,7 +510,7 @@ static void __cpuinit mce_cpu_features(s
  */
 void __cpuinit mcheck_init(struct cpuinfo_x86 *c)
 {
-	static cpumask_t mce_cpus = CPU_MASK_NONE;
+	static cpumask_map_t mce_cpus = CPU_MASK_NONE;
 
 	mce_cpu_quirks(c);
 
@@ -822,7 +822,7 @@ static struct sysdev_attribute *mce_attr
 	NULL
 };
 
-static cpumask_t mce_device_initialized = CPU_MASK_NONE;
+static cpumask_map_t mce_device_initialized = CPU_MASK_NONE;
 
 /* Per cpu sysdev init.  All of the cpus still share the same ctl bank */
 static __cpuinit int mce_create_device(unsigned int cpu)
--- struct-cpumasks.orig/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
+++ struct-cpumasks/arch/x86/kernel/cpu/mcheck/mce_amd_64.c
@@ -251,16 +251,16 @@ struct threshold_attr {
 	ssize_t(*store) (struct threshold_block *, const char *, size_t count);
 };
 
-static void affinity_set(unsigned int cpu, cpumask_t *oldmask,
-					   cpumask_t *newmask)
+static void affinity_set(unsigned int cpu, cpumask_t oldmask,
+					   cpumask_t newmask)
 {
-	*oldmask = current->cpus_allowed;
-	cpus_clear(*newmask);
-	cpu_set(cpu, *newmask);
+	cpus_copy(oldmask, current->cpus_allowed);
+	cpus_clear(newmask);
+	cpu_set(cpu, newmask);
 	set_cpus_allowed(current, newmask);
 }
 
-static void affinity_restore(const cpumask_t *oldmask)
+static void affinity_restore(const_cpumask_t oldmask)
 {
 	set_cpus_allowed(current, oldmask);
 }
@@ -277,15 +277,15 @@ static ssize_t store_interrupt_enable(st
 				      const char *buf, size_t count)
 {
 	char *end;
-	cpumask_t oldmask, newmask;
+	cpumask_var_t oldmask, newmask;
 	unsigned long new = simple_strtoul(buf, &end, 0);
 	if (end == buf)
 		return -EINVAL;
 	b->interrupt_enable = !!new;
 
-	affinity_set(b->cpu, &oldmask, &newmask);
+	affinity_set(b->cpu, oldmask, newmask);
 	threshold_restart_bank(b, 0, 0);
-	affinity_restore(&oldmask);
+	affinity_restore(oldmask);
 
 	return end - buf;
 }
@@ -294,7 +294,7 @@ static ssize_t store_threshold_limit(str
 				     const char *buf, size_t count)
 {
 	char *end;
-	cpumask_t oldmask, newmask;
+	cpumask_var_t oldmask, newmask;
 	u16 old;
 	unsigned long new = simple_strtoul(buf, &end, 0);
 	if (end == buf)
@@ -306,9 +306,9 @@ static ssize_t store_threshold_limit(str
 	old = b->threshold_limit;
 	b->threshold_limit = new;
 
-	affinity_set(b->cpu, &oldmask, &newmask);
+	affinity_set(b->cpu, oldmask, newmask);
 	threshold_restart_bank(b, 0, old);
-	affinity_restore(&oldmask);
+	affinity_restore(oldmask);
 
 	return end - buf;
 }
@@ -316,10 +316,11 @@ static ssize_t store_threshold_limit(str
 static ssize_t show_error_count(struct threshold_block *b, char *buf)
 {
 	u32 high, low;
-	cpumask_t oldmask, newmask;
-	affinity_set(b->cpu, &oldmask, &newmask);
+	cpumask_var_t oldmask, newmask;
+
+	affinity_set(b->cpu, oldmask, newmask);
 	rdmsr(b->address, low, high);
-	affinity_restore(&oldmask);
+	affinity_restore(oldmask);
 	return sprintf(buf, "%x\n",
 		       (high & 0xFFF) - (THRESHOLD_MAX - b->threshold_limit));
 }
@@ -327,10 +328,11 @@ static ssize_t show_error_count(struct t
 static ssize_t store_error_count(struct threshold_block *b,
 				 const char *buf, size_t count)
 {
-	cpumask_t oldmask, newmask;
-	affinity_set(b->cpu, &oldmask, &newmask);
+	cpumask_var_t oldmask, newmask;
+
+	affinity_set(b->cpu, oldmask, newmask);
 	threshold_restart_bank(b, 1, 0);
-	affinity_restore(&oldmask);
+	affinity_restore(oldmask);
 	return 1;
 }
 
@@ -468,7 +470,7 @@ static __cpuinit int threshold_create_ba
 {
 	int i, err = 0;
 	struct threshold_bank *b = NULL;
-	cpumask_t oldmask, newmask;
+	cpumask_var_t oldmask, newmask;
 	char name[32];
 
 	sprintf(name, "threshold_bank%i", bank);
@@ -519,10 +521,10 @@ static __cpuinit int threshold_create_ba
 
 	per_cpu(threshold_banks, cpu)[bank] = b;
 
-	affinity_set(cpu, &oldmask, &newmask);
+	affinity_set(cpu, oldmask, newmask);
 	err = allocate_threshold_blocks(cpu, bank, 0,
 					MSR_IA32_MC0_MISC + bank * 4);
-	affinity_restore(&oldmask);
+	affinity_restore(oldmask);
 
 	if (err)
 		goto out_free;
--- struct-cpumasks.orig/arch/x86/kernel/setup_percpu.c
+++ struct-cpumasks/arch/x86/kernel/setup_percpu.c
@@ -41,7 +41,7 @@ DEFINE_EARLY_PER_CPU(int, x86_cpu_to_nod
 EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_node_map);
 
 /* which logical CPUs are on which nodes */
-const cpumask_t node_to_cpumask_map;
+const_cpumask_t node_to_cpumask_map;
 EXPORT_SYMBOL(node_to_cpumask_map);
 
 /* setup node_to_cpumask_map */
@@ -350,16 +350,16 @@ const_cpumask_t node_to_cpumask(int node
 			"_node_to_cpumask_ptr(%d): no node_to_cpumask_map!\n",
 			node);
 		dump_stack();
-		return (const cpumask_t)cpu_online_map;
+		return (const_cpumask_t)cpu_online_map;
 	}
 	if (node >= nr_node_ids) {
 		printk(KERN_WARNING
 			"_node_to_cpumask_ptr(%d): node > nr_node_ids(%d)\n",
 			node, nr_node_ids);
 		dump_stack();
-		return (const cpumask_t)cpu_mask_none;
+		return (const_cpumask_t)cpu_mask_none;
 	}
-	return (const cpumask_t)&node_to_cpumask_map[node];
+	return (const_cpumask_t)&node_to_cpumask_map[node];
 }
 EXPORT_SYMBOL(node_to_cpumask);
 
--- struct-cpumasks.orig/drivers/base/cpu.c
+++ struct-cpumasks/drivers/base/cpu.c
@@ -107,9 +107,9 @@ static SYSDEV_ATTR(crash_notes, 0400, sh
 /*
  * Print cpu online, possible, present, and system maps
  */
-static ssize_t print_cpus_map(char *buf, cpumask_t *map)
+static ssize_t print_cpus_map(char *buf, const_cpumask_t map)
 {
-	int n = cpulist_scnprintf(buf, PAGE_SIZE-2, *map);
+	int n = cpulist_scnprintf(buf, PAGE_SIZE-2, map);
 
 	buf[n++] = '\n';
 	buf[n] = '\0';
@@ -119,7 +119,7 @@ static ssize_t print_cpus_map(char *buf,
 #define	print_cpus_func(type) \
 static ssize_t print_cpus_##type(struct sysdev_class *class, char *buf)	\
 {									\
-	return print_cpus_map(buf, &cpu_##type##_map);			\
+	return print_cpus_map(buf, cpu_##type##_map);			\
 }									\
 static struct sysdev_class_attribute attr_##type##_map = 		\
 	_SYSDEV_CLASS_ATTR(type, 0444, print_cpus_##type, NULL)
--- struct-cpumasks.orig/include/linux/cpuset.h
+++ struct-cpumasks/include/linux/cpuset.h
@@ -20,8 +20,8 @@ extern int number_of_cpusets;	/* How man
 extern int cpuset_init_early(void);
 extern int cpuset_init(void);
 extern void cpuset_init_smp(void);
-extern void cpuset_cpus_allowed(struct task_struct *p, cpumask_t *mask);
-extern void cpuset_cpus_allowed_locked(struct task_struct *p, cpumask_t *mask);
+extern void cpuset_cpus_allowed(struct task_struct *p, cpumask_t mask);
+extern void cpuset_cpus_allowed_locked(struct task_struct *p, cpumask_t mask);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 #define cpuset_current_mems_allowed (current->mems_allowed)
 void cpuset_init_current_mems_allowed(void);
@@ -86,14 +86,14 @@ static inline int cpuset_init_early(void
 static inline int cpuset_init(void) { return 0; }
 static inline void cpuset_init_smp(void) {}
 
-static inline void cpuset_cpus_allowed(struct task_struct *p, cpumask_t *mask)
+static inline void cpuset_cpus_allowed(struct task_struct *p, cpumask_t mask)
 {
-	*mask = cpu_possible_map;
+	cpus_copy(mask, cpu_possible_map);
 }
 static inline void cpuset_cpus_allowed_locked(struct task_struct *p,
-								cpumask_t *mask)
+								cpumask_t mask)
 {
-	*mask = cpu_possible_map;
+	cpus_copy(mask, cpu_possible_map);
 }
 
 static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
--- struct-cpumasks.orig/include/linux/percpu.h
+++ struct-cpumasks/include/linux/percpu.h
@@ -96,14 +96,15 @@ struct percpu_data {
         (__typeof__(ptr))__p->ptrs[(cpu)];	          \
 })
 
-extern void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask);
+extern void *__percpu_alloc_mask(size_t size, gfp_t gfp, const_cpumask_t mask);
 extern void percpu_free(void *__pdata);
 
 #else /* CONFIG_SMP */
 
 #define percpu_ptr(ptr, cpu) ({ (void)(cpu); (ptr); })
 
-static __always_inline void *__percpu_alloc_mask(size_t size, gfp_t gfp, cpumask_t *mask)
+static __always_inline void *__percpu_alloc_mask(size_t size, gfp_t gfp,
+						const_cpumask_t mask)
 {
 	return kzalloc(size, gfp);
 }
@@ -116,7 +117,7 @@ static inline void percpu_free(void *__p
 #endif /* CONFIG_SMP */
 
 #define percpu_alloc_mask(size, gfp, mask) \
-	__percpu_alloc_mask((size), (gfp), &(mask))
+	__percpu_alloc_mask((size), (gfp), mask)
 
 #define percpu_alloc(size, gfp) percpu_alloc_mask((size), (gfp), cpu_online_map)
 
--- struct-cpumasks.orig/kernel/cpu.c
+++ struct-cpumasks/kernel/cpu.c
@@ -21,7 +21,7 @@
  * as new cpu's are detected in the system via any platform specific
  * method, such as ACPI for e.g.
  */
-cpumask_t cpu_present_map __read_mostly;
+cpumask_map_t cpu_present_map __read_mostly;
 EXPORT_SYMBOL(cpu_present_map);
 
 #if NR_CPUS > BITS_PER_LONG
@@ -34,10 +34,10 @@ EXPORT_SYMBOL(cpu_mask_all);
 /*
  * Represents all cpu's that are currently online.
  */
-cpumask_t cpu_online_map __read_mostly = CPU_MASK_ALL;
+cpumask_map_t cpu_online_map __read_mostly = CPU_MASK_ALL;
 EXPORT_SYMBOL(cpu_online_map);
 
-cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
+cpumask_map_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
 EXPORT_SYMBOL(cpu_possible_map);
 
 #else /* CONFIG_SMP */
@@ -69,7 +69,7 @@ void __init cpu_hotplug_init(void)
 	cpu_hotplug.refcount = 0;
 }
 
-cpumask_t cpu_active_map;
+cpumask_map_t cpu_active_map;
 
 #ifdef CONFIG_HOTPLUG_CPU
 
@@ -222,7 +222,7 @@ static int __ref take_cpu_down(void *_pa
 static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 {
 	int err, nr_calls = 0;
-	cpumask_t old_allowed, tmp;
+	cpumask_var_t old_allowed, tmp;
 	void *hcpu = (void *)(long)cpu;
 	unsigned long mod = tasks_frozen ? CPU_TASKS_FROZEN : 0;
 	struct take_cpu_down_param tcd_param = {
@@ -250,13 +250,13 @@ static int __ref _cpu_down(unsigned int 
 	}
 
 	/* Ensure that we are not runnable on dying cpu */
-	old_allowed = current->cpus_allowed;
+	cpus_copy(old_allowed, current->cpus_allowed);
 	cpus_setall(tmp);
 	cpu_clear(cpu, tmp);
-	set_cpus_allowed(current, &tmp);
-	tmp = cpumask_of_cpu(cpu);
+	set_cpus_allowed(current, tmp);
+	cpus_copy(tmp, cpumask_of_cpu(cpu));
 
-	err = __stop_machine(take_cpu_down, &tcd_param, &tmp);
+	err = __stop_machine(take_cpu_down, &tcd_param, tmp);
 	if (err) {
 		/* CPU didn't die: tell everyone.  Can't complain. */
 		if (raw_notifier_call_chain(&cpu_chain, CPU_DOWN_FAILED | mod,
@@ -282,7 +282,7 @@ static int __ref _cpu_down(unsigned int 
 	check_for_tasks(cpu);
 
 out_allowed:
-	set_cpus_allowed(current, &old_allowed);
+	set_cpus_allowed(current, old_allowed);
 out_release:
 	cpu_hotplug_done();
 	if (!err) {
@@ -397,21 +397,21 @@ out:
 }
 
 #ifdef CONFIG_PM_SLEEP_SMP
-static cpumask_t frozen_cpus;
+static cpumask_map_t frozen_cpus;
 
 int disable_nonboot_cpus(void)
 {
-	int cpu, cpus_first, error = 0;
+	int cpu, first_cpu, error = 0;
 
 	cpu_maps_update_begin();
-	cpus_first = cpus_first(cpu_online_map);
+	first_cpu = cpus_first(cpu_online_map);
 	/* We take down all of the non-boot CPUs in one shot to avoid races
 	 * with the userspace trying to use the CPU hotplug at the same time
 	 */
 	cpus_clear(frozen_cpus);
 	printk("Disabling non-boot CPUs ...\n");
 	for_each_online_cpu(cpu) {
-		if (cpu == cpus_first)
+		if (cpu == first_cpu)
 			continue;
 		error = _cpu_down(cpu, 1);
 		if (!error) {
--- struct-cpumasks.orig/kernel/cpuset.c
+++ struct-cpumasks/kernel/cpuset.c
@@ -83,7 +83,7 @@ struct cpuset {
 	struct cgroup_subsys_state css;
 
 	unsigned long flags;		/* "unsigned long" so bitops work */
-	cpumask_t cpus_allowed;		/* CPUs allowed to tasks in cpuset */
+	cpumask_map_t cpus_allowed;	/* CPUs allowed to tasks in cpuset */
 	nodemask_t mems_allowed;	/* Memory Nodes allowed to tasks */
 
 	struct cpuset *parent;		/* my parent */
@@ -279,15 +279,15 @@ static struct file_system_type cpuset_fs
  * Call with callback_mutex held.
  */
 
-static void guarantee_online_cpus(const struct cpuset *cs, cpumask_t *pmask)
+static void guarantee_online_cpus(const struct cpuset *cs, cpumask_t pmask)
 {
 	while (cs && !cpus_intersects(cs->cpus_allowed, cpu_online_map))
 		cs = cs->parent;
 	if (cs)
-		cpus_and(*pmask, cs->cpus_allowed, cpu_online_map);
+		cpus_and(pmask, cs->cpus_allowed, cpu_online_map);
 	else
-		*pmask = cpu_online_map;
-	BUG_ON(!cpus_intersects(*pmask, cpu_online_map));
+		cpus_copy(pmask, cpu_online_map);
+	BUG_ON(!cpus_intersects(pmask, cpu_online_map));
 }
 
 /*
@@ -574,7 +574,7 @@ update_domain_attr_tree(struct sched_dom
  *	element of the partition (one sched domain) to be passed to
  *	partition_sched_domains().
  */
-static int generate_sched_domains(cpumask_t **domains,
+static int generate_sched_domains(cpumask_t domains,
 			struct sched_domain_attr **attributes)
 {
 	LIST_HEAD(q);		/* queue of cpusets to be scanned */
@@ -582,7 +582,7 @@ static int generate_sched_domains(cpumas
 	struct cpuset **csa;	/* array of all cpuset ptrs */
 	int csn;		/* how many cpuset ptrs in csa so far */
 	int i, j, k;		/* indices for partition finding loops */
-	cpumask_t *doms;	/* resulting partition; i.e. sched domains */
+	cpumask_t doms;		/* resulting partition; i.e. sched domains */
 	struct sched_domain_attr *dattr;  /* attributes for custom domains */
 	int ndoms;		/* number of sched domains in result */
 	int nslot;		/* next empty doms[] cpumask_t slot */
@@ -594,7 +594,7 @@ static int generate_sched_domains(cpumas
 
 	/* Special case for the 99% of systems with one, full, sched domain */
 	if (is_sched_load_balance(&top_cpuset)) {
-		doms = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
+		doms = kmalloc(cpumask_size(), GFP_KERNEL);
 		if (!doms)
 			goto done;
 
@@ -603,7 +603,7 @@ static int generate_sched_domains(cpumas
 			*dattr = SD_ATTR_INIT;
 			update_domain_attr_tree(dattr, &top_cpuset);
 		}
-		*doms = top_cpuset.cpus_allowed;
+		cpus_copy(doms, top_cpuset.cpus_allowed);
 
 		ndoms = 1;
 		goto done;
@@ -673,7 +673,7 @@ restart:
 	 * Now we know how many domains to create.
 	 * Convert <csn, csa> to <ndoms, doms> and populate cpu masks.
 	 */
-	doms = kmalloc(ndoms * sizeof(cpumask_t), GFP_KERNEL);
+	doms = kmalloc(ndoms * cpumask_size(), GFP_KERNEL);
 	if (!doms) {
 		ndoms = 0;
 		goto done;
@@ -687,7 +687,7 @@ restart:
 
 	for (nslot = 0, i = 0; i < csn; i++) {
 		struct cpuset *a = csa[i];
-		cpumask_t *dp;
+		cpumask_var_t dp;
 		int apn = a->pn;
 
 		if (apn < 0) {
@@ -695,7 +695,7 @@ restart:
 			continue;
 		}
 
-		dp = doms + nslot;
+		cpus_copy(dp, doms + nslot);
 
 		if (nslot == ndoms) {
 			static int warnings = 10;
@@ -710,14 +710,14 @@ restart:
 			continue;
 		}
 
-		cpus_clear(*dp);
+		cpus_clear(dp);
 		if (dattr)
 			*(dattr + nslot) = SD_ATTR_INIT;
 		for (j = i; j < csn; j++) {
 			struct cpuset *b = csa[j];
 
 			if (apn == b->pn) {
-				cpus_or(*dp, *dp, b->cpus_allowed);
+				cpus_or(dp, dp, b->cpus_allowed);
 				if (dattr)
 					update_domain_attr_tree(dattr + nslot, b);
 
@@ -732,7 +732,7 @@ restart:
 done:
 	kfree(csa);
 
-	*domains    = doms;
+	cpus_copy(domains, doms);
 	*attributes = dattr;
 	return ndoms;
 }
@@ -750,14 +750,14 @@ done:
 static void do_rebuild_sched_domains(struct work_struct *unused)
 {
 	struct sched_domain_attr *attr;
-	cpumask_t *doms;
+	cpumask_var_t doms;
 	int ndoms;
 
 	get_online_cpus();
 
 	/* Generate domain masks and attrs */
 	cgroup_lock();
-	ndoms = generate_sched_domains(&doms, &attr);
+	ndoms = generate_sched_domains(doms, &attr);
 	cgroup_unlock();
 
 	/* Have scheduler rebuild the domains */
@@ -837,7 +837,7 @@ static int cpuset_test_cpumask(struct ta
 static void cpuset_change_cpumask(struct task_struct *tsk,
 				  struct cgroup_scanner *scan)
 {
-	set_cpus_allowed(tsk, &((cgroup_cs(scan->cg))->cpus_allowed));
+	set_cpus_allowed(tsk, ((cgroup_cs(scan->cg))->cpus_allowed));
 }
 
 /**
@@ -913,7 +913,7 @@ static int update_cpumask(struct cpuset 
 	is_load_balanced = is_sched_load_balance(&trialcs);
 
 	mutex_lock(&callback_mutex);
-	cs->cpus_allowed = trialcs.cpus_allowed;
+	cpus_copy(cs->cpus_allowed, trialcs.cpus_allowed);
 	mutex_unlock(&callback_mutex);
 
 	/*
@@ -1305,10 +1305,10 @@ static int cpuset_can_attach(struct cgro
 	if (cpus_empty(cs->cpus_allowed) || nodes_empty(cs->mems_allowed))
 		return -ENOSPC;
 	if (tsk->flags & PF_THREAD_BOUND) {
-		cpumask_t mask;
+		cpumask_var_t mask;
 
 		mutex_lock(&callback_mutex);
-		mask = cs->cpus_allowed;
+		cpus_copy(mask, cs->cpus_allowed);
 		mutex_unlock(&callback_mutex);
 		if (!cpus_equal(tsk->cpus_allowed, mask))
 			return -EINVAL;
@@ -1321,7 +1321,7 @@ static void cpuset_attach(struct cgroup_
 			  struct cgroup *cont, struct cgroup *oldcont,
 			  struct task_struct *tsk)
 {
-	cpumask_t cpus;
+	cpumask_var_t cpus;
 	nodemask_t from, to;
 	struct mm_struct *mm;
 	struct cpuset *cs = cgroup_cs(cont);
@@ -1329,8 +1329,8 @@ static void cpuset_attach(struct cgroup_
 	int err;
 
 	mutex_lock(&callback_mutex);
-	guarantee_online_cpus(cs, &cpus);
-	err = set_cpus_allowed(tsk, &cpus);
+	guarantee_online_cpus(cs, cpus);
+	err = set_cpus_allowed(tsk, cpus);
 	mutex_unlock(&callback_mutex);
 	if (err)
 		return;
@@ -1472,10 +1472,10 @@ static int cpuset_write_resmask(struct c
 
 static int cpuset_sprintf_cpulist(char *page, struct cpuset *cs)
 {
-	cpumask_t mask;
+	cpumask_var_t mask;
 
 	mutex_lock(&callback_mutex);
-	mask = cs->cpus_allowed;
+	cpus_copy(mask, cs->cpus_allowed);
 	mutex_unlock(&callback_mutex);
 
 	return cpulist_scnprintf(page, PAGE_SIZE, mask);
@@ -1714,7 +1714,7 @@ static void cpuset_post_clone(struct cgr
 	parent_cs = cgroup_cs(parent);
 
 	cs->mems_allowed = parent_cs->mems_allowed;
-	cs->cpus_allowed = parent_cs->cpus_allowed;
+	cpus_copy(cs->cpus_allowed, parent_cs->cpus_allowed);
 	return;
 }
 
@@ -1980,7 +1980,7 @@ static int cpuset_track_online_cpus(stru
 				unsigned long phase, void *unused_cpu)
 {
 	struct sched_domain_attr *attr;
-	cpumask_t *doms;
+	cpumask_var_t doms;
 	int ndoms;
 
 	switch (phase) {
@@ -1995,9 +1995,9 @@ static int cpuset_track_online_cpus(stru
 	}
 
 	cgroup_lock();
-	top_cpuset.cpus_allowed = cpu_online_map;
+	cpus_copy(top_cpuset.cpus_allowed, cpu_online_map);
 	scan_for_empty_cpusets(&top_cpuset);
-	ndoms = generate_sched_domains(&doms, &attr);
+	ndoms = generate_sched_domains(doms, &attr);
 	cgroup_unlock();
 
 	/* Have scheduler rebuild the domains */
@@ -2029,7 +2029,7 @@ void cpuset_track_online_nodes(void)
 
 void __init cpuset_init_smp(void)
 {
-	top_cpuset.cpus_allowed = cpu_online_map;
+	cpus_copy(top_cpuset.cpus_allowed, cpu_online_map);
 	top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
 
 	hotcpu_notifier(cpuset_track_online_cpus, 0);
@@ -2046,7 +2046,7 @@ void __init cpuset_init_smp(void)
  * tasks cpuset.
  **/
 
-void cpuset_cpus_allowed(struct task_struct *tsk, cpumask_t *pmask)
+void cpuset_cpus_allowed(struct task_struct *tsk, cpumask_t pmask)
 {
 	mutex_lock(&callback_mutex);
 	cpuset_cpus_allowed_locked(tsk, pmask);
@@ -2057,7 +2057,7 @@ void cpuset_cpus_allowed(struct task_str
  * cpuset_cpus_allowed_locked - return cpus_allowed mask from a tasks cpuset.
  * Must be called with callback_mutex held.
  **/
-void cpuset_cpus_allowed_locked(struct task_struct *tsk, cpumask_t *pmask)
+void cpuset_cpus_allowed_locked(struct task_struct *tsk, cpumask_t pmask)
 {
 	task_lock(tsk);
 	guarantee_online_cpus(task_cs(tsk), pmask);
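
To make the calling convention explicit: the UP stubs above carry the whole
change in miniature.  As a sketch only (assuming the cpumask_var_t and
cpus_copy() definitions from the cpumask-base patch; example_caller() is an
illustrative name, not code from this patch):

/* Was: void cpuset_cpus_allowed(struct task_struct *p, cpumask_t *mask)
 * with "*mask = cpu_possible_map;" in the body.  The same stub becomes:
 */
static inline void cpuset_cpus_allowed(struct task_struct *p, cpumask_t mask)
{
        cpus_copy(mask, cpu_possible_map);
}

/* ...and callers pass their mask straight through, with no '&': */
static void example_caller(struct task_struct *p)       /* illustrative only */
{
        cpumask_var_t mask;

        cpuset_cpus_allowed(p, mask);
        if (!cpus_empty(mask))
                set_cpus_allowed(p, mask);
}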

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 25/31] cpumask: clean rcu files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (23 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 24/31] cpumask: clean cpu files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 26/31] cpumask: clean tlb files Mike Travis
                   ` (5 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-rcu --]
[-- Type: text/plain, Size: 2567 bytes --]
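
The conversions here follow the pattern used for the rest of the series:
masks that own their storage become cpumask_map_t, function-local
temporaries become cpumask_var_t, and call sites lose the '&'.  A minimal
sketch, reusing the helpers from the cpumask-base patch (the names below
are illustrative, not code from this patch):

/* File-scope mask that owns its storage: */
static cpumask_map_t rcu_example_online_map __read_mostly = CPU_MASK_NONE;

static void shuffle_example(int idle_cpu)       /* illustrative only */
{
        cpumask_var_t tmp_mask;                 /* local temporary */

        cpus_setall(tmp_mask);
        if (idle_cpu != -1)
                cpu_clear(idle_cpu, tmp_mask);
        set_cpus_allowed(current, tmp_mask);    /* no '&' any more */
}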

Signed-off-by: Mike Travis <travis@sgi.com>
---
 include/linux/rcuclassic.h |    2 +-
 kernel/rcuclassic.c        |    2 +-
 kernel/rcupreempt.c        |    2 +-
 kernel/rcutorture.c        |   12 ++++++------
 4 files changed, 9 insertions(+), 9 deletions(-)

--- struct-cpumasks.orig/include/linux/rcuclassic.h
+++ struct-cpumasks/include/linux/rcuclassic.h
@@ -53,7 +53,7 @@ struct rcu_ctrlblk {
 	int	signaled;
 
 	spinlock_t	lock	____cacheline_internodealigned_in_smp;
-	cpumask_t	cpumask; /* CPUs that need to switch in order    */
+	cpumask_map_t	cpumask; /* CPUs that need to switch in order    */
 				 /* for current batch to proceed.        */
 } ____cacheline_internodealigned_in_smp;
 
--- struct-cpumasks.orig/kernel/rcuclassic.c
+++ struct-cpumasks/kernel/rcuclassic.c
@@ -85,7 +85,7 @@ static void force_quiescent_state(struct
 			struct rcu_ctrlblk *rcp)
 {
 	int cpu;
-	cpumask_t cpumask;
+	cpumask_var_t cpumask;
 	unsigned long flags;
 
 	set_need_resched();
--- struct-cpumasks.orig/kernel/rcupreempt.c
+++ struct-cpumasks/kernel/rcupreempt.c
@@ -164,7 +164,7 @@ static char *rcu_try_flip_state_names[] 
 	{ "idle", "waitack", "waitzero", "waitmb" };
 #endif /* #ifdef CONFIG_RCU_TRACE */
 
-static cpumask_t rcu_cpu_online_map __read_mostly = CPU_MASK_NONE;
+static cpumask_map_t rcu_cpu_online_map __read_mostly = CPU_MASK_NONE;
 
 /*
  * Enum and per-CPU flag to determine when each CPU has seen
--- struct-cpumasks.orig/kernel/rcutorture.c
+++ struct-cpumasks/kernel/rcutorture.c
@@ -843,7 +843,7 @@ static int rcu_idle_cpu;	/* Force all to
  */
 static void rcu_torture_shuffle_tasks(void)
 {
-	cpumask_t tmp_mask;
+	cpumask_var_t tmp_mask;
 	int i;
 
 	cpus_setall(tmp_mask);
@@ -858,27 +858,27 @@ static void rcu_torture_shuffle_tasks(vo
 	if (rcu_idle_cpu != -1)
 		cpu_clear(rcu_idle_cpu, tmp_mask);
 
-	set_cpus_allowed(current, &tmp_mask);
+	set_cpus_allowed(current, tmp_mask);
 
 	if (reader_tasks) {
 		for (i = 0; i < nrealreaders; i++)
 			if (reader_tasks[i])
 				set_cpus_allowed(reader_tasks[i],
-						     &tmp_mask);
+						     tmp_mask);
 	}
 
 	if (fakewriter_tasks) {
 		for (i = 0; i < nfakewriters; i++)
 			if (fakewriter_tasks[i])
 				set_cpus_allowed(fakewriter_tasks[i],
-						     &tmp_mask);
+						     tmp_mask);
 	}
 
 	if (writer_task)
-		set_cpus_allowed(writer_task, &tmp_mask);
+		set_cpus_allowed(writer_task, tmp_mask);
 
 	if (stats_task)
-		set_cpus_allowed(stats_task, &tmp_mask);
+		set_cpus_allowed(stats_task, tmp_mask);
 
 	if (rcu_idle_cpu == -1)
 		rcu_idle_cpu = num_online_cpus() - 1;

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 26/31] cpumask: clean tlb files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (24 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 25/31] cpumask: clean rcu files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 27/31] cpumask: clean time files Mike Travis
                   ` (4 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-tlb --]
[-- Type: text/plain, Size: 7417 bytes --]
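
native_flush_tlb_others() is the interesting case here: it now takes a
const_cpumask_t and, because it may have to strip offline cpus, it builds
its own writable copy rather than scribbling on the caller's mask.  In
sketch form (example_flush_others() is an illustrative name; the helpers
are the ones from the cpumask-base patch):

static void example_flush_others(const_cpumask_t cpumaskp,      /* illustrative only */
                                 struct mm_struct *mm, unsigned long va)
{
        cpumask_var_t cpumask;          /* private, writable copy */

        cpus_and(cpumask, cpumaskp, cpu_online_map);
        if (cpus_empty(cpumask))
                return;

        /* ... queue mm/va and send the INVALIDATE IPI to 'cpumask' ... */
}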

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/tlb_32.c   |   24 ++++++++++++------------
 arch/x86/kernel/tlb_64.c   |   23 ++++++++++++-----------
 arch/x86/kernel/tlb_uv.c   |   12 ++++++------
 include/asm-x86/tlbflush.h |    6 +++---
 4 files changed, 33 insertions(+), 32 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/tlb_32.c
+++ struct-cpumasks/arch/x86/kernel/tlb_32.c
@@ -20,7 +20,7 @@ DEFINE_PER_CPU(struct tlb_state, cpu_tlb
  *	Optimizations Manfred Spraul <manfred@colorfullife.com>
  */
 
-static cpumask_t flush_cpumask;
+static cpumask_map_t flush_cpumask;
 static struct mm_struct *flush_mm;
 static unsigned long flush_va;
 static DEFINE_SPINLOCK(tlbstate_lock);
@@ -122,10 +122,10 @@ out:
 	__get_cpu_var(irq_stat).irq_tlb_count++;
 }
 
-void native_flush_tlb_others(const cpumask_t *cpumaskp, struct mm_struct *mm,
+void native_flush_tlb_others(const_cpumask_t cpumaskp, struct mm_struct *mm,
 			     unsigned long va)
 {
-	cpumask_t cpumask = *cpumaskp;
+	cpumask_var_t cpumask;
 
 	/*
 	 * A couple of (to be removed) sanity checks:
@@ -133,13 +133,13 @@ void native_flush_tlb_others(const cpuma
 	 * - current CPU must not be in mask
 	 * - mask must exist :)
 	 */
-	BUG_ON(cpus_empty(cpumask));
-	BUG_ON(cpu_isset(smp_processor_id(), cpumask));
+	BUG_ON(cpus_empty(cpumaskp));
+	BUG_ON(cpu_isset(smp_processor_id(), cpumaskp));
 	BUG_ON(!mm);
 
 #ifdef CONFIG_HOTPLUG_CPU
 	/* If a CPU which we ran on has gone down, OK. */
-	cpus_and(cpumask, cpumask, cpu_online_map);
+	cpus_and(cpumask, cpumaskp, cpu_online_map);
 	if (unlikely(cpus_empty(cpumask)))
 		return;
 #endif
@@ -172,10 +172,10 @@ void native_flush_tlb_others(const cpuma
 void flush_tlb_current_task(void)
 {
 	struct mm_struct *mm = current->mm;
-	cpumask_t cpu_mask;
+	cpumask_var_t cpu_mask;
 
 	preempt_disable();
-	cpu_mask = mm->cpu_vm_mask;
+	cpus_copy(cpu_mask, mm->cpu_vm_mask);
 	cpu_clear(smp_processor_id(), cpu_mask);
 
 	local_flush_tlb();
@@ -186,10 +186,10 @@ void flush_tlb_current_task(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	cpumask_t cpu_mask;
+	cpumask_var_t cpu_mask;
 
 	preempt_disable();
-	cpu_mask = mm->cpu_vm_mask;
+	cpus_copy(cpu_mask, mm->cpu_vm_mask);
 	cpu_clear(smp_processor_id(), cpu_mask);
 
 	if (current->active_mm == mm) {
@@ -207,10 +207,10 @@ void flush_tlb_mm(struct mm_struct *mm)
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	cpumask_t cpu_mask;
+	cpumask_var_t cpu_mask;
 
 	preempt_disable();
-	cpu_mask = mm->cpu_vm_mask;
+	cpus_copy(cpu_mask, mm->cpu_vm_mask);
 	cpu_clear(smp_processor_id(), cpu_mask);
 
 	if (current->active_mm == mm) {
--- struct-cpumasks.orig/arch/x86/kernel/tlb_64.c
+++ struct-cpumasks/arch/x86/kernel/tlb_64.c
@@ -43,7 +43,7 @@
 
 union smp_flush_state {
 	struct {
-		cpumask_t flush_cpumask;
+		cpumask_map_t flush_cpumask;
 		struct mm_struct *flush_mm;
 		unsigned long flush_va;
 		spinlock_t tlbstate_lock;
@@ -157,14 +157,15 @@ out:
 	add_pda(irq_tlb_count, 1);
 }
 
-void native_flush_tlb_others(const cpumask_t *cpumaskp, struct mm_struct *mm,
+void native_flush_tlb_others(const_cpumask_t cpumaskp, struct mm_struct *mm,
 			     unsigned long va)
 {
 	int sender;
 	union smp_flush_state *f;
-	cpumask_t cpumask = *cpumaskp;
+	cpumask_var_t cpumask;
 
-	if (is_uv_system() && uv_flush_tlb_others(&cpumask, mm, va))
+	cpus_copy(cpumask, cpumaskp);
+	if (is_uv_system() && uv_flush_tlb_others(cpumask, mm, va))
 		return;
 
 	/* Caller has disabled preemption */
@@ -186,7 +187,7 @@ void native_flush_tlb_others(const cpuma
 	 * We have to send the IPI only to
 	 * CPUs affected.
 	 */
-	send_IPI_mask(&cpumask, INVALIDATE_TLB_VECTOR_START + sender);
+	send_IPI_mask(cpumask, INVALIDATE_TLB_VECTOR_START + sender);
 
 	while (!cpus_empty(f->flush_cpumask))
 		cpu_relax();
@@ -210,10 +211,10 @@ core_initcall(init_smp_flush);
 void flush_tlb_current_task(void)
 {
 	struct mm_struct *mm = current->mm;
-	cpumask_t cpu_mask;
+	cpumask_var_t cpu_mask;
 
 	preempt_disable();
-	cpu_mask = mm->cpu_vm_mask;
+	cpus_copy(cpu_mask, mm->cpu_vm_mask);
 	cpu_clear(smp_processor_id(), cpu_mask);
 
 	local_flush_tlb();
@@ -224,10 +225,10 @@ void flush_tlb_current_task(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	cpumask_t cpu_mask;
+	cpumask_var_t cpu_mask;
 
 	preempt_disable();
-	cpu_mask = mm->cpu_vm_mask;
+	cpus_copy(cpu_mask, mm->cpu_vm_mask);
 	cpu_clear(smp_processor_id(), cpu_mask);
 
 	if (current->active_mm == mm) {
@@ -245,10 +246,10 @@ void flush_tlb_mm(struct mm_struct *mm)
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	cpumask_t cpu_mask;
+	cpumask_var_t cpu_mask;
 
 	preempt_disable();
-	cpu_mask = mm->cpu_vm_mask;
+	cpus_copy(cpu_mask, mm->cpu_vm_mask);
 	cpu_clear(smp_processor_id(), cpu_mask);
 
 	if (current->active_mm == mm) {
--- struct-cpumasks.orig/arch/x86/kernel/tlb_uv.c
+++ struct-cpumasks/arch/x86/kernel/tlb_uv.c
@@ -216,7 +216,7 @@ static int uv_wait_completion(struct bau
  * unchanged.
  */
 int uv_flush_send_and_wait(int cpu, int this_blade, struct bau_desc *bau_desc,
-			   cpumask_t *cpumaskp)
+			   cpumask_t cpumaskp)
 {
 	int completion_status = 0;
 	int right_shift;
@@ -263,13 +263,13 @@ int uv_flush_send_and_wait(int cpu, int 
 	 * Success, so clear the remote cpu's from the mask so we don't
 	 * use the IPI method of shootdown on them.
 	 */
-	for_each_cpu(bit, *cpumaskp) {
+	for_each_cpu(bit, cpumaskp) {
 		blade = uv_cpu_to_blade_id(bit);
 		if (blade == this_blade)
 			continue;
-		cpu_clear(bit, *cpumaskp);
+		cpu_clear(bit, cpumaskp);
 	}
-	if (!cpus_empty(*cpumaskp))
+	if (!cpus_empty(cpumaskp))
 		return 0;
 	return 1;
 }
@@ -296,7 +296,7 @@ int uv_flush_send_and_wait(int cpu, int 
  * Returns 1 if all remote flushing was done.
  * Returns 0 if some remote flushing remains to be done.
  */
-int uv_flush_tlb_others(cpumask_t *cpumaskp, struct mm_struct *mm,
+int uv_flush_tlb_others(cpumask_t cpumaskp, struct mm_struct *mm,
 			unsigned long va)
 {
 	int i;
@@ -315,7 +315,7 @@ int uv_flush_tlb_others(cpumask_t *cpuma
 	bau_nodes_clear(&bau_desc->distribution, UV_DISTRIBUTION_SIZE);
 
 	i = 0;
-	for_each_cpu(bit, *cpumaskp) {
+	for_each_cpu(bit, cpumaskp) {
 		blade = uv_cpu_to_blade_id(bit);
 		BUG_ON(blade > (UV_DISTRIBUTION_SIZE - 1));
 		if (blade == this_blade) {
--- struct-cpumasks.orig/include/asm-x86/tlbflush.h
+++ struct-cpumasks/include/asm-x86/tlbflush.h
@@ -113,7 +113,7 @@ static inline void flush_tlb_range(struc
 		__flush_tlb();
 }
 
-static inline void native_flush_tlb_others(const cpumask_t *cpumask,
+static inline void native_flush_tlb_others(const_cpumask_t cpumask,
 					   struct mm_struct *mm,
 					   unsigned long va)
 {
@@ -142,7 +142,7 @@ static inline void flush_tlb_range(struc
 	flush_tlb_mm(vma->vm_mm);
 }
 
-void native_flush_tlb_others(const cpumask_t *cpumask, struct mm_struct *mm,
+void native_flush_tlb_others(const_cpumask_t cpumask, struct mm_struct *mm,
 			     unsigned long va);
 
 #define TLBSTATE_OK	1
@@ -166,7 +166,7 @@ static inline void reset_lazy_tlbstate(v
 #endif	/* SMP */
 
 #ifndef CONFIG_PARAVIRT
-#define flush_tlb_others(mask, mm, va)	native_flush_tlb_others(&mask, mm, va)
+#define flush_tlb_others(mask, mm, va)	native_flush_tlb_others(mask, mm, va)
 #endif
 
 static inline void flush_tlb_kernel_range(unsigned long start,

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 27/31] cpumask: clean time files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (25 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 26/31] cpumask: clean tlb files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 28/31] cpumask: clean smp files Mike Travis
                   ` (3 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-time --]
[-- Type: text/plain, Size: 7620 bytes --]
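
tick_get_broadcast_mask() shows how a read-only view of a private mask
gets handed out now: the accessor returns const_cpumask_t (a cast of the
cpumask_map_t) instead of a cpumask_t pointer, and consumers iterate the
result directly.  Sketch only; the example_* names are illustrative:

static cpumask_map_t example_broadcast_mask;            /* illustrative only */

const_cpumask_t example_get_broadcast_mask(void)
{
        return (const_cpumask_t)example_broadcast_mask;
}

static void example_dump_broadcast_cpus(void)           /* illustrative only */
{
        const_cpumask_t mask = example_get_broadcast_mask();
        int cpu;

        for_each_cpu(cpu, mask)
                printk(KERN_DEBUG "cpu %d is in the broadcast mask\n", cpu);
}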

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/hpet.c       |    4 ++--
 arch/x86/kernel/i8253.c      |    2 +-
 arch/x86/kernel/time_64.c    |    2 +-
 include/linux/clockchips.h   |    4 ++--
 include/linux/tick.h         |    4 ++--
 kernel/time/clocksource.c    |    2 +-
 kernel/time/tick-broadcast.c |   26 +++++++++++++-------------
 kernel/time/tick-common.c    |    6 +++---
 8 files changed, 25 insertions(+), 25 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/hpet.c
+++ struct-cpumasks/arch/x86/kernel/hpet.c
@@ -265,7 +265,7 @@ static void hpet_legacy_clockevent_regis
 	 * Start hpet with the boot cpu mask and make it
 	 * global after the IO_APIC has been initialized.
 	 */
-	hpet_clockevent.cpumask = cpumask_of_cpu(smp_processor_id());
+	cpus_copy(hpet_clockevent.cpumask, cpumask_of_cpu(smp_processor_id()));
 	clockevents_register_device(&hpet_clockevent);
 	global_clock_event = &hpet_clockevent;
 	printk(KERN_DEBUG "hpet clockevent registered\n");
@@ -512,7 +512,7 @@ static void init_one_hpet_msi_clockevent
 	/* 5 usec minimum reprogramming delta. */
 	evt->min_delta_ns = 5000;
 
-	evt->cpumask = cpumask_of_cpu(hdev->cpu);
+	cpus_copy(evt->cpumask, cpumask_of_cpu(hdev->cpu));
 	clockevents_register_device(evt);
 }
 
--- struct-cpumasks.orig/arch/x86/kernel/i8253.c
+++ struct-cpumasks/arch/x86/kernel/i8253.c
@@ -114,7 +114,7 @@ void __init setup_pit_timer(void)
 	 * Start pit with the boot cpu mask and make it global after the
 	 * IO_APIC has been initialized.
 	 */
-	pit_clockevent.cpumask = cpumask_of_cpu(smp_processor_id());
+	cpus_copy(pit_clockevent.cpumask, cpumask_of_cpu(smp_processor_id()));
 	pit_clockevent.mult = div_sc(CLOCK_TICK_RATE, NSEC_PER_SEC,
 				     pit_clockevent.shift);
 	pit_clockevent.max_delta_ns =
--- struct-cpumasks.orig/arch/x86/kernel/time_64.c
+++ struct-cpumasks/arch/x86/kernel/time_64.c
@@ -125,7 +125,7 @@ void __init hpet_time_init(void)
 		setup_pit_timer();
 	}
 
-	irq0.mask = cpumask_of_cpu(0);
+	cpus_copy(irq0.mask, cpumask_of_cpu(0));
 	setup_irq(0, &irq0);
 }
 
--- struct-cpumasks.orig/include/linux/clockchips.h
+++ struct-cpumasks/include/linux/clockchips.h
@@ -82,13 +82,13 @@ struct clock_event_device {
 	int			shift;
 	int			rating;
 	int			irq;
-	cpumask_t		cpumask;
+	cpumask_map_t		cpumask;
 	int			(*set_next_event)(unsigned long evt,
 						  struct clock_event_device *);
 	void			(*set_mode)(enum clock_event_mode mode,
 					    struct clock_event_device *);
 	void			(*event_handler)(struct clock_event_device *);
-	void			(*broadcast)(cpumask_t mask);
+	void			(*broadcast)(const_cpumask_t mask);
 	struct list_head	list;
 	enum clock_event_mode	mode;
 	ktime_t			next_event;
--- struct-cpumasks.orig/include/linux/tick.h
+++ struct-cpumasks/include/linux/tick.h
@@ -84,10 +84,10 @@ static inline void tick_cancel_sched_tim
 
 # ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 extern struct tick_device *tick_get_broadcast_device(void);
-extern cpumask_t *tick_get_broadcast_mask(void);
+extern const_cpumask_t tick_get_broadcast_mask(void);
 
 #  ifdef CONFIG_TICK_ONESHOT
-extern cpumask_t *tick_get_broadcast_oneshot_mask(void);
+extern const_cpumask_t tick_get_broadcast_oneshot_mask(void);
 #  endif
 
 # endif /* BROADCAST */
--- struct-cpumasks.orig/kernel/time/clocksource.c
+++ struct-cpumasks/kernel/time/clocksource.c
@@ -157,7 +157,7 @@ static void clocksource_watchdog(unsigne
 		if (next_cpu >= nr_cpu_ids)
 			next_cpu = cpus_first(cpu_online_map);
 		watchdog_timer.expires += WATCHDOG_INTERVAL;
-		add_timer_on(&watchdog_timer, cpus_next);
+		add_timer_on(&watchdog_timer, next_cpu);
 	}
 	spin_unlock(&watchdog_lock);
 }
--- struct-cpumasks.orig/kernel/time/tick-broadcast.c
+++ struct-cpumasks/kernel/time/tick-broadcast.c
@@ -28,7 +28,7 @@
  */
 
 struct tick_device tick_broadcast_device;
-static cpumask_t tick_broadcast_mask;
+static cpumask_map_t tick_broadcast_mask;
 static DEFINE_SPINLOCK(tick_broadcast_lock);
 static int tick_broadcast_force;
 
@@ -46,9 +46,9 @@ struct tick_device *tick_get_broadcast_d
 	return &tick_broadcast_device;
 }
 
-cpumask_t *tick_get_broadcast_mask(void)
+const_cpumask_t tick_get_broadcast_mask(void)
 {
-	return &tick_broadcast_mask;
+	return (const_cpumask_t)tick_broadcast_mask;
 }
 
 /*
@@ -160,7 +160,7 @@ static void tick_do_broadcast(cpumask_t 
  */
 static void tick_do_periodic_broadcast(void)
 {
-	cpumask_t mask;
+	cpumask_var_t mask;
 
 	spin_lock(&tick_broadcast_lock);
 
@@ -364,9 +364,9 @@ static cpumask_t tick_broadcast_oneshot_
 /*
  * Debugging: see timer_list.c
  */
-cpumask_t *tick_get_broadcast_oneshot_mask(void)
+const_cpumask_t tick_get_broadcast_oneshot_mask(void)
 {
-	return &tick_broadcast_oneshot_mask;
+	return (const_cpumask_t)tick_broadcast_oneshot_mask;
 }
 
 static int tick_broadcast_set_event(ktime_t expires, int force)
@@ -388,7 +388,7 @@ int tick_resume_broadcast_oneshot(struct
 static void tick_handle_oneshot_broadcast(struct clock_event_device *dev)
 {
 	struct tick_device *td;
-	cpumask_t mask;
+	cpumask_var_t mask;
 	ktime_t now, next_event;
 	int cpu;
 
@@ -396,7 +396,7 @@ static void tick_handle_oneshot_broadcas
 again:
 	dev->next_event.tv64 = KTIME_MAX;
 	next_event.tv64 = KTIME_MAX;
-	mask = CPU_MASK_NONE;
+	cpus_clear(mask);
 	now = ktime_get();
 	/* Find all expired events */
 	for_each_cpu(cpu, tick_broadcast_oneshot_mask) {
@@ -491,12 +491,12 @@ static void tick_broadcast_clear_oneshot
 	cpu_clear(cpu, tick_broadcast_oneshot_mask);
 }
 
-static void tick_broadcast_init_next_event(cpumask_t *mask, ktime_t expires)
+static void tick_broadcast_init_next_event(const_cpumask_t mask, ktime_t expires)
 {
 	struct tick_device *td;
 	int cpu;
 
-	for_each_cpu(cpu, *mask) {
+	for_each_cpu(cpu, mask) {
 		td = &per_cpu(tick_cpu_device, cpu);
 		if (td->evtdev)
 			td->evtdev->next_event = expires;
@@ -512,7 +512,7 @@ void tick_broadcast_setup_oneshot(struct
 	if (bc->event_handler != tick_handle_oneshot_broadcast) {
 		int was_periodic = bc->mode == CLOCK_EVT_MODE_PERIODIC;
 		int cpu = smp_processor_id();
-		cpumask_t mask;
+		cpumask_var_t mask;
 
 		bc->event_handler = tick_handle_oneshot_broadcast;
 		clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT);
@@ -526,13 +526,13 @@ void tick_broadcast_setup_oneshot(struct
 		 * oneshot_mask bits for those and program the
 		 * broadcast device to fire.
 		 */
-		mask = tick_broadcast_mask;
+		cpus_copy(mask, tick_broadcast_mask);
 		cpu_clear(cpu, mask);
 		cpus_or(tick_broadcast_oneshot_mask,
 			tick_broadcast_oneshot_mask, mask);
 
 		if (was_periodic && !cpus_empty(mask)) {
-			tick_broadcast_init_next_event(&mask, tick_next_period);
+			tick_broadcast_init_next_event(mask, tick_next_period);
 			tick_broadcast_set_event(tick_next_period, 1);
 		} else
 			bc->next_event.tv64 = KTIME_MAX;
--- struct-cpumasks.orig/kernel/time/tick-common.c
+++ struct-cpumasks/kernel/time/tick-common.c
@@ -136,7 +136,7 @@ void tick_setup_periodic(struct clock_ev
  */
 static void tick_setup_device(struct tick_device *td,
 			      struct clock_event_device *newdev, int cpu,
-			      const cpumask_t *cpumask)
+			      const_cpumask_t cpumask)
 {
 	ktime_t next_event;
 	void (*handler)(struct clock_event_device *) = NULL;
@@ -171,8 +171,8 @@ static void tick_setup_device(struct tic
 	 * When the device is not per cpu, pin the interrupt to the
 	 * current cpu:
 	 */
-	if (!cpus_equal(newdev->cpumask, *cpumask))
-		irq_set_affinity(newdev->irq, *cpumask);
+	if (!cpus_equal(newdev->cpumask, cpumask))
+		irq_set_affinity(newdev->irq, cpumask);
 
 	/*
 	 * When global broadcasting is active, check if the current

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 28/31] cpumask: clean smp files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (26 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 27/31] cpumask: clean time files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 29/31] cpumask: clean trace files Mike Travis
                   ` (2 subsequent siblings)
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-smp --]
[-- Type: text/plain, Size: 7160 bytes --]
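
smp_call_function_mask() is the model for any function that takes a
caller's mask but has to restrict it: the parameter becomes
const_cpumask_t and the restriction happens in a local cpumask_var_t.
Roughly (a sketch, not the final code; example_call_mask() is an
illustrative name):

static int example_call_mask(const_cpumask_t inmask,    /* illustrative only */
                             void (*func)(void *), void *info, int wait)
{
        cpumask_var_t mask;
        int cpu = smp_processor_id();

        cpus_and(mask, inmask, cpu_online_map);
        cpu_clear(cpu, mask);                   /* never IPI ourselves */
        if (cpus_empty(mask))
                return 0;

        /* queue func/info for the remaining cpus, honour 'wait', then: */
        arch_send_call_function_ipi(mask);      /* takes const_cpumask_t */
        return 0;
}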

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/smp.c     |    6 +++---
 arch/x86/kernel/smpboot.c |   20 ++++++++++++--------
 include/asm-x86/smp.h     |    6 +++---
 include/linux/smp.h       |    8 ++++----
 kernel/smp.c              |   15 ++++++++-------
 5 files changed, 30 insertions(+), 25 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/smp.c
+++ struct-cpumasks/arch/x86/kernel/smp.c
@@ -118,15 +118,15 @@ static void native_smp_send_reschedule(i
 		WARN_ON(1);
 		return;
 	}
-	send_IPI_mask(&cpumask_of_cpu(cpu), RESCHEDULE_VECTOR);
+	send_IPI_mask(cpumask_of_cpu(cpu), RESCHEDULE_VECTOR);
 }
 
 void native_send_call_func_single_ipi(int cpu)
 {
-	send_IPI_mask(&cpumask_of_cpu(cpu), CALL_FUNCTION_SINGLE_VECTOR);
+	send_IPI_mask(cpumask_of_cpu(cpu), CALL_FUNCTION_SINGLE_VECTOR);
 }
 
-void native_send_call_func_ipi(const cpumask_t *mask)
+void native_send_call_func_ipi(const_cpumask_t mask)
 {
 	int cpu = smp_processor_id();
 
--- struct-cpumasks.orig/arch/x86/kernel/smpboot.c
+++ struct-cpumasks/arch/x86/kernel/smpboot.c
@@ -466,7 +466,8 @@ void __cpuinit set_cpu_sibling_map(int c
 	cpu_set(cpu, c->llc_shared_map);
 
 	if (current_cpu_data.x86_max_cores == 1) {
-		per_cpu(cpu_core_map, cpu) = per_cpu(cpu_sibling_map, cpu);
+		cpus_copy(per_cpu(cpu_core_map, cpu),
+			  per_cpu(cpu_sibling_map, cpu));
 		c->booted_cores = 1;
 		return;
 	}
@@ -503,7 +504,7 @@ void __cpuinit set_cpu_sibling_map(int c
 }
 
 /* maps the cpu to the sched domain representing multi-core */
-const cpumask_t cpu_coregroup_map(int cpu)
+const_cpumask_t cpu_coregroup_map(int cpu)
 {
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 	/*
@@ -511,9 +512,9 @@ const cpumask_t cpu_coregroup_map(int cp
 	 * And for power savings, we return cpu_core_map
 	 */
 	if (sched_mc_power_savings || sched_smt_power_savings)
-		return (const cpumask_t)per_cpu(cpu_core_map, cpu);
+		return (const_cpumask_t)per_cpu(cpu_core_map, cpu);
 	else
-		return (const cpumask_t)c->llc_shared_map;
+		return (const_cpumask_t)c->llc_shared_map;
 }
 
 static void impress_friends(void)
@@ -1036,12 +1037,13 @@ int __cpuinit native_cpu_up(unsigned int
  */
 static __init void disable_smp(void)
 {
-	cpu_present_map = cpumask_of_cpu(0);
-	cpu_possible_map = cpumask_of_cpu(0);
+	cpus_copy(cpu_present_map, cpumask_of_cpu(0));
+	cpus_copy(cpu_possible_map, cpumask_of_cpu(0));
 	smpboot_clear_io_apic_irqs();
 
 	if (smp_found_config)
-		physid_set_mask_of_physid(boot_cpu_physical_apicid, &phys_cpu_present_map);
+		physid_set_mask_of_physid(boot_cpu_physical_apicid,
+					  &phys_cpu_present_map);
 	else
 		physid_set_mask_of_physid(0, &phys_cpu_present_map);
 	map_cpu_to_logical_apicid();
@@ -1169,7 +1171,7 @@ void __init native_smp_prepare_cpus(unsi
 	preempt_disable();
 	smp_cpu_index_default();
 	current_cpu_data = boot_cpu_data;
-	cpu_callin_map = cpumask_of_cpu(0);
+	cpus_copy(cpu_callin_map, cpumask_of_cpu(0));
 	mb();
 	/*
 	 * Setup boot CPU information
@@ -1337,7 +1339,9 @@ __init void prefill_possible_map(void)
 	for (i = 0; i < possible; i++)
 		cpu_set(i, cpu_possible_map);
 
+#ifndef nr_cpu_ids
 	nr_cpu_ids = possible;
+#endif
 }
 
 static void __ref remove_cpu_from_maps(int cpu)
--- struct-cpumasks.orig/include/asm-x86/smp.h
+++ struct-cpumasks/include/asm-x86/smp.h
@@ -60,7 +60,7 @@ struct smp_ops {
 	void (*cpu_die)(unsigned int cpu);
 	void (*play_dead)(void);
 
-	void (*send_call_func_ipi)(const cpumask_t mask);
+	void (*send_call_func_ipi)(const_cpumask_t mask);
 	void (*send_call_func_single_ipi)(int cpu);
 };
 
@@ -123,7 +123,7 @@ static inline void arch_send_call_functi
 	smp_ops.send_call_func_single_ipi(cpu);
 }
 
-static inline void arch_send_call_function_ipi(const cpumask_t mask)
+static inline void arch_send_call_function_ipi(const_cpumask_t mask)
 {
 	smp_ops.send_call_func_ipi(mask);
 }
@@ -138,7 +138,7 @@ void native_cpu_die(unsigned int cpu);
 void native_play_dead(void);
 void play_dead_common(void);
 
-void native_send_call_func_ipi(const cpumask_t mask);
+void native_send_call_func_ipi(const_cpumask_t mask);
 void native_send_call_func_single_ipi(int cpu);
 
 void smp_store_cpu_info(int id);
--- struct-cpumasks.orig/include/linux/smp.h
+++ struct-cpumasks/include/linux/smp.h
@@ -62,10 +62,10 @@ extern void smp_cpus_done(unsigned int m
  * Call a function on all other processors
  */
 int smp_call_function(void(*func)(void *info), void *info, int wait);
-int smp_call_function_mask(cpumask_t mask, void(*func)(void *info), void *info,
-				int wait);
-int smp_call_function_single(int cpuid, void (*func) (void *info), void *info,
-				int wait);
+int smp_call_function_mask(const_cpumask_t mask,
+				void (*func)(void *info), void *info, int wait);
+int smp_call_function_single(int cpuid,
+				void (*func)(void *info), void *info, int wait);
 void __smp_call_function_single(int cpuid, struct call_single_data *data);
 
 /*
--- struct-cpumasks.orig/kernel/smp.c
+++ struct-cpumasks/kernel/smp.c
@@ -24,7 +24,7 @@ struct call_function_data {
 	struct call_single_data csd;
 	spinlock_t lock;
 	unsigned int refs;
-	cpumask_t cpumask;
+	cpumask_map_t cpumask;
 	struct rcu_head rcu_head;
 };
 
@@ -287,7 +287,7 @@ static void quiesce_dummy(void *unused)
  * If a faster scheme can be made, we could go back to preferring stack based
  * data -- the data allocation/free is non-zero cost.
  */
-static void smp_call_function_mask_quiesce_stack(const cpumask_t *mask)
+static void smp_call_function_mask_quiesce_stack(const_cpumask_t mask)
 {
 	struct call_single_data data;
 	int cpu;
@@ -295,7 +295,7 @@ static void smp_call_function_mask_quies
 	data.func = quiesce_dummy;
 	data.info = NULL;
 
-	for_each_cpu(cpu, *mask) {
+	for_each_cpu(cpu, mask) {
 		data.flags = CSD_FLAG_WAIT;
 		generic_exec_single(cpu, &data);
 	}
@@ -318,7 +318,7 @@ static void smp_call_function_mask_quies
  * hardware interrupt handler or from a bottom half handler. Preemption
  * must be disabled when calling this function.
  */
-int smp_call_function_mask(cpumask_t mask, void (*func)(void *), void *info,
+int smp_call_function_mask(const_cpumask_t inmask, void (*func)(void *), void *info,
 			   int wait)
 {
 	struct call_function_data d;
@@ -326,12 +326,13 @@ int smp_call_function_mask(cpumask_t mas
 	unsigned long flags;
 	int cpu, num_cpus;
 	int slowpath = 0;
+	cpumask_var_t mask;
 
 	/* Can deadlock when called with interrupts disabled */
 	WARN_ON(irqs_disabled());
 
 	cpu = smp_processor_id();
-	cpus_and(mask, mask, cpu_online_map);
+	cpus_and(mask, inmask, cpu_online_map);
 	cpu_clear(cpu, mask);
 	num_cpus = cpus_weight(mask);
 
@@ -362,7 +363,7 @@ int smp_call_function_mask(cpumask_t mas
 	data->csd.func = func;
 	data->csd.info = info;
 	data->refs = num_cpus;
-	data->cpumask = mask;
+	cpus_copy(data->cpumask, mask);
 
 	spin_lock_irqsave(&call_function_lock, flags);
 	list_add_tail_rcu(&data->csd.list, &call_function_queue);
@@ -375,7 +376,7 @@ int smp_call_function_mask(cpumask_t mas
 	if (wait) {
 		csd_flag_wait(&data->csd);
 		if (unlikely(slowpath))
-			smp_call_function_mask_quiesce_stack(&mask);
+			smp_call_function_mask_quiesce_stack(mask);
 	}
 
 	return 0;

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 29/31] cpumask: clean trace files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (27 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 28/31] cpumask: clean smp files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 30/31] cpumask: clean kernel files Mike Travis
  2008-09-29 18:03 ` [PATCH 31/31] cpumask: clean misc files Mike Travis
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-trace --]
[-- Type: text/plain, Size: 2678 bytes --]
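
Nothing new in this one beyond the usual substitutions; the only idiom
worth calling out is that the iterators now take the mask directly, so
wrappers such as for_each_tracing_cpu() stay trivial.  Sketch (the
example_* names are illustrative, not code from this patch):

static cpumask_map_t example_buffer_mask;       /* illustrative only */

#define for_each_example_cpu(cpu)       \
        for_each_cpu(cpu, example_buffer_mask)

static void example_init_buffers(void)          /* illustrative only */
{
        int i;

        cpus_copy(example_buffer_mask, cpu_possible_map);
        for_each_example_cpu(i)
                printk(KERN_DEBUG "would allocate buffer for cpu %d\n", i);
}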

Signed-off-by: Mike Travis <travis@sgi.com>
---
 kernel/trace/trace.c         |   14 +++++++-------
 kernel/trace/trace_sysprof.c |    2 +-
 2 files changed, 8 insertions(+), 8 deletions(-)

--- struct-cpumasks.orig/kernel/trace/trace.c
+++ struct-cpumasks/kernel/trace/trace.c
@@ -40,7 +40,7 @@ unsigned long __read_mostly	tracing_max_
 unsigned long __read_mostly	tracing_thresh;
 
 static unsigned long __read_mostly	tracing_nr_buffers;
-static cpumask_t __read_mostly		tracing_buffer_mask;
+static cpumask_map_t __read_mostly	tracing_buffer_mask;
 
 #define for_each_tracing_cpu(cpu)	\
 	for_each_cpu(cpu, tracing_buffer_mask)
@@ -2163,13 +2163,13 @@ static struct file_operations show_trace
 /*
  * Only trace on a CPU if the bitmask is set:
  */
-static cpumask_t tracing_cpumask = CPU_MASK_ALL;
+static cpumask_map_t tracing_cpumask = CPU_MASK_ALL;
 
 /*
  * When tracing/tracing_cpu_mask is modified then this holds
  * the new bitmask we are about to install:
  */
-static cpumask_t tracing_cpumask_new;
+static cpumask_map_t tracing_cpumask_new;
 
 /*
  * The tracer itself will not take this lock, but still we want
@@ -2235,7 +2235,7 @@ tracing_cpumask_write(struct file *filp,
 	__raw_spin_unlock(&ftrace_max_lock);
 	raw_local_irq_enable();
 
-	tracing_cpumask = tracing_cpumask_new;
+	cpus_copy(tracing_cpumask, tracing_cpumask_new);
 
 	mutex_unlock(&tracing_cpumask_update_lock);
 
@@ -2600,7 +2600,7 @@ tracing_read_pipe(struct file *filp, cha
 {
 	struct trace_iterator *iter = filp->private_data;
 	struct trace_array_cpu *data;
-	static cpumask_t mask;
+	static cpumask_var_t mask;
 	unsigned long flags;
 #ifdef CONFIG_FTRACE
 	int ftrace_save;
@@ -3235,7 +3235,7 @@ void ftrace_dump(void)
 	/* use static because iter can be a bit big for the stack */
 	static struct trace_iterator iter;
 	struct trace_array_cpu *data;
-	static cpumask_t mask;
+	static cpumask_var_t mask;
 	static int dump_ran;
 	unsigned long flags;
 	int cnt = 0;
@@ -3454,7 +3454,7 @@ __init static int tracer_alloc_buffers(v
 
 	/* TODO: make the number of buffers hot pluggable with CPUS */
 	tracing_nr_buffers = num_possible_cpus();
-	tracing_buffer_mask = cpu_possible_map;
+	cpus_copy(tracing_buffer_mask, cpu_possible_map);
 
 	/* Allocate the first page for all buffers */
 	for_each_tracing_cpu(i) {
--- struct-cpumasks.orig/kernel/trace/trace_sysprof.c
+++ struct-cpumasks/kernel/trace/trace_sysprof.c
@@ -216,7 +216,7 @@ static void start_stack_timers(void)
 		set_cpus_allowed(current, cpumask_of_cpu(cpu));
 		start_stack_timer(cpu);
 	}
-	set_cpus_allowed(current, &saved_mask);
+	set_cpus_allowed(current, saved_mask);
 }
 
 static void stop_stack_timer(int cpu)

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 30/31] cpumask: clean kernel files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (28 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 29/31] cpumask: clean trace files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  2008-09-29 18:03 ` [PATCH 31/31] cpumask: clean misc files Mike Travis
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-kernel --]
[-- Type: text/plain, Size: 14207 bytes --]
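
The apm change below is the one spot where a function used to return a
cpumask_t by value; it turns into an out-parameter, with NULL meaning
"nothing to save, just pin to CPU 0" (and with the void return type the
leftover 'return x;' in the SMP version wants to go away too).  A sketch
of the intended shape (example_bios_call() is an illustrative name):

static void apm_save_cpus(cpumask_t x)
{
        if (x)
                cpus_copy(x, current->cpus_allowed);
        /* some BIOSes only like being called from CPU 0 */
        set_cpus_allowed(current, cpumask_of_cpu(0));
        BUG_ON(smp_processor_id() != 0);
}

static u8 example_bios_call(void)               /* illustrative only */
{
        cpumask_var_t cpus;

        apm_save_cpus(cpus);            /* save mask and pin to CPU 0 */
        /* ... the actual BIOS call goes here ... */
        apm_restore_cpus(cpus);
        return 0;
}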

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/apm_32.c         |   24 ++++++++++++++----------
 arch/x86/kernel/microcode_core.c |    6 +++---
 arch/x86/kernel/nmi.c            |    4 ++--
 arch/x86/kernel/process.c        |    2 +-
 kernel/compat.c                  |   24 ++++++++++++------------
 kernel/fork.c                    |    2 +-
 kernel/kthread.c                 |    2 +-
 kernel/profile.c                 |   10 +++++-----
 kernel/stop_machine.c            |    6 +++---
 kernel/taskstats.c               |   17 ++++++++---------
 kernel/workqueue.c               |   27 +++++++++++++++------------
 11 files changed, 65 insertions(+), 59 deletions(-)

--- struct-cpumasks.orig/arch/x86/kernel/apm_32.c
+++ struct-cpumasks/arch/x86/kernel/apm_32.c
@@ -492,17 +492,17 @@ static void apm_error(char *str, int err
  */
 
 #ifdef CONFIG_SMP
-
-static cpumask_t apm_save_cpus(void)
+static void apm_save_cpus(cpumask_t x)
 {
-	cpumask_t x = current->cpus_allowed;
+	if (x)
+		cpus_copy(x, current->cpus_allowed);
 	/* Some bioses don't like being called from CPU != 0 */
 	set_cpus_allowed(current, cpumask_of_cpu(0));
 	BUG_ON(smp_processor_id() != 0);
 	return x;
 }
 
-static inline void apm_restore_cpus(cpumask_t mask)
+static inline void apm_restore_cpus(const_cpumask_t mask)
 {
 	set_cpus_allowed(current, mask);
 }
@@ -513,7 +513,11 @@ static inline void apm_restore_cpus(cpum
  *	No CPU lockdown needed on a uniprocessor
  */
 
-#define apm_save_cpus()		(current->cpus_allowed)
+static void apm_save_cpus(cpumask_t x)
+{
+	if (x)
+		cpus_copy(x, current->cpus_allowed);
+}
 #define apm_restore_cpus(x)	(void)(x)
 
 #endif
@@ -597,12 +601,12 @@ static u8 apm_bios_call(u32 func, u32 eb
 {
 	APM_DECL_SEGS
 	unsigned long		flags;
-	cpumask_t		cpus;
+	cpumask_var_t		cpus;
 	int			cpu;
 	struct desc_struct	save_desc_40;
 	struct desc_struct	*gdt;
 
-	cpus = apm_save_cpus();
+	apm_save_cpus(cpus);
 
 	cpu = get_cpu();
 	gdt = get_cpu_gdt_table(cpu);
@@ -640,12 +644,12 @@ static u8 apm_bios_call_simple(u32 func,
 	u8			error;
 	APM_DECL_SEGS
 	unsigned long		flags;
-	cpumask_t		cpus;
+	cpumask_var_t		cpus;
 	int			cpu;
 	struct desc_struct	save_desc_40;
 	struct desc_struct	*gdt;
 
-	cpus = apm_save_cpus();
+	apm_save_cpus(cpus);
 
 	cpu = get_cpu();
 	gdt = get_cpu_gdt_table(cpu);
@@ -941,7 +945,7 @@ static void apm_power_off(void)
 
 	/* Some bioses don't like being called from CPU != 0 */
 	if (apm_info.realmode_power_off) {
-		(void)apm_save_cpus();
+		(void)apm_save_cpus(NULL);
 		machine_real_restart(po_bios_call, sizeof(po_bios_call));
 	} else {
 		(void)set_system_power_state(APM_STATE_OFF);
--- struct-cpumasks.orig/arch/x86/kernel/microcode_core.c
+++ struct-cpumasks/arch/x86/kernel/microcode_core.c
@@ -130,7 +130,7 @@ static int do_microcode_update(const voi
 			microcode_ops->apply_microcode(cpu);
 	}
 out:
-	set_cpus_allowed(current, &old);
+	set_cpus_allowed(current, old);
 	return error;
 }
 
@@ -231,7 +231,7 @@ static ssize_t reload_store(struct sys_d
 					microcode_ops->apply_microcode(cpu);
 			}
 			mutex_unlock(&microcode_mutex);
-			set_cpus_allowed(current, &old);
+			set_cpus_allowed(current, old);
 		}
 		put_online_cpus();
 	}
@@ -353,7 +353,7 @@ static void microcode_init_cpu(int cpu)
 
 	set_cpus_allowed(current, cpumask_of_cpu(cpu));
 	microcode_update_cpu(cpu);
-	set_cpus_allowed(current, &old);
+	set_cpus_allowed(current, old);
 }
 
 static int mc_sysdev_add(struct sys_device *sys_dev)
--- struct-cpumasks.orig/arch/x86/kernel/nmi.c
+++ struct-cpumasks/arch/x86/kernel/nmi.c
@@ -41,7 +41,7 @@
 int unknown_nmi_panic;
 int nmi_watchdog_enabled;
 
-static cpumask_t backtrace_mask = CPU_MASK_NONE;
+static cpumask_map_t backtrace_mask = CPU_MASK_NONE;
 
 /* nmi_active:
  * >0: the lapic NMI watchdog is active, but can be disabled
@@ -530,7 +530,7 @@ void __trigger_all_cpu_backtrace(void)
 {
 	int i;
 
-	backtrace_mask = cpu_online_map;
+	cpus_copy(backtrace_mask, cpu_online_map);
 	/* Wait for up to 10 seconds for all CPUs to do the backtrace */
 	for (i = 0; i < 10 * 1000; i++) {
 		if (cpus_empty(backtrace_mask))
--- struct-cpumasks.orig/arch/x86/kernel/process.c
+++ struct-cpumasks/arch/x86/kernel/process.c
@@ -246,7 +246,7 @@ static int __cpuinit check_c1e_idle(cons
 	return 1;
 }
 
-static cpumask_t c1e_mask = CPU_MASK_NONE;
+static cpumask_map_t c1e_mask = CPU_MASK_NONE;
 static int c1e_detected;
 
 void c1e_remove_cpu(int cpu)
--- struct-cpumasks.orig/kernel/compat.c
+++ struct-cpumasks/kernel/compat.c
@@ -396,16 +396,16 @@ asmlinkage long compat_sys_waitid(int wh
 }
 
 static int compat_get_user_cpu_mask(compat_ulong_t __user *user_mask_ptr,
-				    unsigned len, cpumask_t *new_mask)
+				    unsigned len, cpumask_t new_mask)
 {
 	unsigned long *k;
 
-	if (len < sizeof(cpumask_t))
-		memset(new_mask, 0, sizeof(cpumask_t));
-	else if (len > sizeof(cpumask_t))
-		len = sizeof(cpumask_t);
+	if (len < cpumask_size())
+		memset(new_mask, 0, cpumask_size());
+	else if (len > cpumask_size())
+		len = cpumask_size();
 
-	k = cpus_addr(*new_mask);
+	k = cpus_addr(new_mask);
 	return compat_get_bitmap(k, user_mask_ptr, len * 8);
 }
 
@@ -413,23 +413,23 @@ asmlinkage long compat_sys_sched_setaffi
 					     unsigned int len,
 					     compat_ulong_t __user *user_mask_ptr)
 {
-	cpumask_t new_mask;
+	cpumask_var_t new_mask;
 	int retval;
 
-	retval = compat_get_user_cpu_mask(user_mask_ptr, len, &new_mask);
+	retval = compat_get_user_cpu_mask(user_mask_ptr, len, new_mask);
 	if (retval)
 		return retval;
 
-	return sched_setaffinity(pid, &new_mask);
+	return sched_setaffinity(pid, new_mask);
 }
 
 asmlinkage long compat_sys_sched_getaffinity(compat_pid_t pid, unsigned int len,
 					     compat_ulong_t __user *user_mask_ptr)
 {
 	int ret;
-	cpumask_t mask;
+	cpumask_var_t mask;
 	unsigned long *k;
-	unsigned int min_length = sizeof(cpumask_t);
+	unsigned int min_length = cpumask_size();
 
 	if (NR_CPUS <= BITS_PER_COMPAT_LONG)
 		min_length = sizeof(compat_ulong_t);
@@ -437,7 +437,7 @@ asmlinkage long compat_sys_sched_getaffi
 	if (len < min_length)
 		return -EINVAL;
 
-	ret = sched_getaffinity(pid, &mask);
+	ret = sched_getaffinity(pid, mask);
 	if (ret < 0)
 		return ret;
 
--- struct-cpumasks.orig/kernel/fork.c
+++ struct-cpumasks/kernel/fork.c
@@ -1202,7 +1202,7 @@ static struct task_struct *copy_process(
 	 * to ensure it is on a valid CPU (and if not, just force it back to
 	 * parent's CPU). This avoids alot of nasty races.
 	 */
-	p->cpus_allowed = current->cpus_allowed;
+	cpus_copy(p->cpus_allowed, current->cpus_allowed);
 	p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed;
 	if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) ||
 			!cpu_online(task_cpu(p))))
--- struct-cpumasks.orig/kernel/kthread.c
+++ struct-cpumasks/kernel/kthread.c
@@ -179,7 +179,7 @@ void kthread_bind(struct task_struct *k,
 	/* Must have done schedule() in kthread() before we set_task_cpu */
 	wait_task_inactive(k, 0);
 	set_task_cpu(k, cpu);
-	k->cpus_allowed = cpumask_of_cpu(cpu);
+	cpus_copy(k->cpus_allowed, cpumask_of_cpu(cpu));
 	k->rt.nr_cpus_allowed = 1;
 	k->flags |= PF_THREAD_BOUND;
 }
--- struct-cpumasks.orig/kernel/profile.c
+++ struct-cpumasks/kernel/profile.c
@@ -43,7 +43,7 @@ static unsigned long prof_len, prof_shif
 int prof_on __read_mostly;
 EXPORT_SYMBOL_GPL(prof_on);
 
-static cpumask_t prof_cpu_mask = CPU_MASK_ALL;
+static cpumask_map_t prof_cpu_mask = CPU_MASK_ALL;
 #ifdef CONFIG_SMP
 static DEFINE_PER_CPU(struct profile_hit *[2], cpu_profile_hits);
 static DEFINE_PER_CPU(int, cpu_profile_flip);
@@ -421,7 +421,7 @@ void profile_tick(int type)
 static int prof_cpu_mask_read_proc(char *page, char **start, off_t off,
 			int count, int *eof, void *data)
 {
-	int len = cpumask_scnprintf(page, count, *(cpumask_t *)data);
+	int len = cpumask_scnprintf(page, count, (const_cpumask_t)data);
 	if (count - len < 2)
 		return -EINVAL;
 	len += sprintf(page + len, "\n");
@@ -431,15 +431,15 @@ static int prof_cpu_mask_read_proc(char 
 static int prof_cpu_mask_write_proc(struct file *file,
 	const char __user *buffer,  unsigned long count, void *data)
 {
-	cpumask_t *mask = (cpumask_t *)data;
+	cpumask_t mask = (cpumask_t)data;
 	unsigned long full_count = count, err;
-	cpumask_t new_value;
+	cpumask_var_t new_value;
 
 	err = cpumask_parse_user(buffer, count, new_value);
 	if (err)
 		return err;
 
-	*mask = new_value;
+	cpus_copy(mask, new_value);
 	return full_count;
 }
 
--- struct-cpumasks.orig/kernel/stop_machine.c
+++ struct-cpumasks/kernel/stop_machine.c
@@ -99,7 +99,7 @@ static int chill(void *unused)
 	return 0;
 }
 
-int __stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus)
+int __stop_machine(int (*fn)(void *), void *data, const_cpumask_t cpus)
 {
 	int i, err;
 	struct stop_machine_data active, idle;
@@ -130,7 +130,7 @@ int __stop_machine(int (*fn)(void *), vo
 			if (i == cpus_first(cpu_online_map))
 				smdata = &active;
 		} else {
-			if (cpu_isset(i, *cpus))
+			if (cpu_isset(i, cpus))
 				smdata = &active;
 		}
 
@@ -175,7 +175,7 @@ kill_threads:
 	return err;
 }
 
-int stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus)
+int stop_machine(int (*fn)(void *), void *data, const_cpumask_t cpus)
 {
 	int ret;
 
--- struct-cpumasks.orig/kernel/taskstats.c
+++ struct-cpumasks/kernel/taskstats.c
@@ -290,12 +290,11 @@ ret:
 	return;
 }
 
-static int add_del_listener(pid_t pid, cpumask_t *maskp, int isadd)
+static int add_del_listener(pid_t pid, const_cpumask_t mask, int isadd)
 {
 	struct listener_list *listeners;
 	struct listener *s, *tmp;
 	unsigned int cpu;
-	cpumask_t mask = *maskp;
 
 	if (!cpus_subset(mask, cpu_possible_map))
 		return -EINVAL;
@@ -335,7 +334,7 @@ cleanup:
 	return 0;
 }
 
-static int parse(struct nlattr *na, cpumask_t *mask)
+static int parse(struct nlattr *na, cpumask_t mask)
 {
 	char *data;
 	int len;
@@ -352,7 +351,7 @@ static int parse(struct nlattr *na, cpum
 	if (!data)
 		return -ENOMEM;
 	nla_strlcpy(data, na, len);
-	ret = cpulist_parse(data, *mask);
+	ret = cpulist_parse(data, mask);
 	kfree(data);
 	return ret;
 }
@@ -432,19 +431,19 @@ static int taskstats_user_cmd(struct sk_
 	struct sk_buff *rep_skb;
 	struct taskstats *stats;
 	size_t size;
-	cpumask_t mask;
+	cpumask_var_t mask;
 
-	rc = parse(info->attrs[TASKSTATS_CMD_ATTR_REGISTER_CPUMASK], &mask);
+	rc = parse(info->attrs[TASKSTATS_CMD_ATTR_REGISTER_CPUMASK], mask);
 	if (rc < 0)
 		return rc;
 	if (rc == 0)
-		return add_del_listener(info->snd_pid, &mask, REGISTER);
+		return add_del_listener(info->snd_pid, mask, REGISTER);
 
-	rc = parse(info->attrs[TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK], &mask);
+	rc = parse(info->attrs[TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK], mask);
 	if (rc < 0)
 		return rc;
 	if (rc == 0)
-		return add_del_listener(info->snd_pid, &mask, DEREGISTER);
+		return add_del_listener(info->snd_pid, mask, DEREGISTER);
 
 	/*
 	 * Size includes space for nested attributes
--- struct-cpumasks.orig/kernel/workqueue.c
+++ struct-cpumasks/kernel/workqueue.c
@@ -72,7 +72,7 @@ static DEFINE_SPINLOCK(workqueue_lock);
 static LIST_HEAD(workqueues);
 
 static int singlethread_cpu __read_mostly;
-static cpumask_t cpu_singlethread_map __read_mostly;
+static cpumask_map_t cpu_singlethread_map __read_mostly;
 /*
  * _cpu_down() first removes CPU from cpu_online_map, then CPU_DEAD
  * flushes cwq->worklist. This means that flush_workqueue/wait_on_work
@@ -80,7 +80,7 @@ static cpumask_t cpu_singlethread_map __
  * use cpu_possible_map, the cpumask below is more a documentation
  * than optimization.
  */
-static cpumask_t cpu_populated_map __read_mostly;
+static cpumask_map_t cpu_populated_map __read_mostly;
 
 /* If it's single threaded, it isn't in the list of workqueues. */
 static inline int is_single_threaded(struct workqueue_struct *wq)
@@ -88,10 +88,11 @@ static inline int is_single_threaded(str
 	return wq->singlethread;
 }
 
-static const cpumask_t *wq_cpu_map(struct workqueue_struct *wq)
+static const_cpumask_t wq_cpu_map(struct workqueue_struct *wq)
 {
 	return is_single_threaded(wq)
-		? &cpu_singlethread_map : &cpu_populated_map;
+		? (const_cpumask_t)cpu_singlethread_map
+		: (const_cpumask_t)cpu_populated_map;
 }
 
 static
@@ -409,13 +410,15 @@ static int flush_cpu_workqueue(struct cp
  */
 void flush_workqueue(struct workqueue_struct *wq)
 {
-	const cpumask_t *cpu_map = wq_cpu_map(wq);
+	cpumask_var_t cpu_map;	/* XXX - if wq_cpu_map(wq) changes? */
+				/* XXX - otherwise can be const_cpumask_t */
 	int cpu;
 
+	cpus_copy(cpu_map, wq_cpu_map(wq));
 	might_sleep();
 	lock_map_acquire(&wq->lockdep_map);
 	lock_map_release(&wq->lockdep_map);
-	for_each_cpu(cpu, *cpu_map)
+	for_each_cpu(cpu, cpu_map)
 		flush_cpu_workqueue(per_cpu_ptr(wq->cpu_wq, cpu));
 }
 EXPORT_SYMBOL_GPL(flush_workqueue);
@@ -531,7 +534,7 @@ static void wait_on_work(struct work_str
 {
 	struct cpu_workqueue_struct *cwq;
 	struct workqueue_struct *wq;
-	const cpumask_t *cpu_map;
+	const_cpumask_t cpu_map;
 	int cpu;
 
 	might_sleep();
@@ -546,7 +549,7 @@ static void wait_on_work(struct work_str
 	wq = cwq->wq;
 	cpu_map = wq_cpu_map(wq);
 
-	for_each_cpu(cpu, *cpu_map)
+	for_each_cpu(cpu, cpu_map)
 		wait_on_cpu_work(per_cpu_ptr(wq->cpu_wq, cpu), work);
 }
 
@@ -898,7 +901,7 @@ static void cleanup_workqueue_thread(str
  */
 void destroy_workqueue(struct workqueue_struct *wq)
 {
-	const cpumask_t *cpu_map = wq_cpu_map(wq);
+	const_cpumask_t cpu_map = wq_cpu_map(wq);
 	int cpu;
 
 	cpu_maps_update_begin();
@@ -906,7 +909,7 @@ void destroy_workqueue(struct workqueue_
 	list_del(&wq->list);
 	spin_unlock(&workqueue_lock);
 
-	for_each_cpu(cpu, *cpu_map)
+	for_each_cpu(cpu, cpu_map)
 		cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu));
  	cpu_maps_update_done();
 
@@ -967,9 +970,9 @@ undo:
 
 void __init init_workqueues(void)
 {
-	cpu_populated_map = cpu_online_map;
+	cpus_copy(cpu_populated_map, cpu_online_map);
 	singlethread_cpu = cpus_first(cpu_possible_map);
-	cpu_singlethread_map = cpumask_of_cpu(singlethread_cpu);
+	cpus_copy(cpu_singlethread_map, cpumask_of_cpu(singlethread_cpu));
 	hotcpu_notifier(workqueue_cpu_callback, 0);
 	keventd_wq = create_workqueue("events");
 	BUG_ON(!keventd_wq);

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH 31/31] cpumask: clean misc files
  2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
                   ` (29 preceding siblings ...)
  2008-09-29 18:03 ` [PATCH 30/31] cpumask: clean kernel files Mike Travis
@ 2008-09-29 18:03 ` Mike Travis
  30 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-09-29 18:03 UTC (permalink / raw)
  To: Ingo Molnar, Rusty Russell
  Cc: Linus Torvalds, Andrew Morton, David Miller, Yinghai Lu,
	Thomas Gleixner, Jack Steiner, linux-kernel

[-- Attachment #1: clean-misc --]
[-- Type: text/plain, Size: 13911 bytes --]

Signed-off-by: Mike Travis <travis@sgi.com>
---
 drivers/base/node.c            |    6 +++---
 drivers/base/topology.c        |   20 +++++++++++---------
 drivers/firmware/dcdbas.c      |    6 +++---
 drivers/net/sfc/efx.c          |    2 +-
 drivers/oprofile/buffer_sync.c |    2 +-
 include/asm-x86/paravirt.h     |    6 +++---
 include/asm-x86/processor.h    |    2 +-
 include/asm-x86/topology.h     |   14 +++++++-------
 include/asm-x86/uv/uv_bau.h    |    2 +-
 include/linux/interrupt.h      |    8 ++++----
 include/linux/seq_file.h       |    4 ++--
 include/linux/stop_machine.h   |    6 +++---
 net/core/dev.c                 |    2 +-
 net/iucv/iucv.c                |   12 ++++++------
 virt/kvm/kvm_main.c            |    6 +++---
 15 files changed, 50 insertions(+), 48 deletions(-)

--- struct-cpumasks.orig/drivers/base/node.c
+++ struct-cpumasks/drivers/base/node.c
@@ -22,15 +22,15 @@ static struct sysdev_class node_class = 
 static ssize_t node_read_cpumap(struct sys_device *dev, int type, char *buf)
 {
 	struct node *node_dev = to_node(dev);
-	const cpumask_t mask = node_to_cpumask(node_dev->sysdev.id);
+	const_cpumask_t mask = node_to_cpumask(node_dev->sysdev.id);
 	int len;
 
 	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
 	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));
 
 	len = type?
-		cpulist_scnprintf(buf, PAGE_SIZE-2, *mask):
-		cpumask_scnprintf(buf, PAGE_SIZE-2, *mask);
+		cpulist_scnprintf(buf, PAGE_SIZE-2, mask):
+		cpumask_scnprintf(buf, PAGE_SIZE-2, mask);
  	buf[len++] = '\n';
  	buf[len] = '\0';
 	return len;
--- struct-cpumasks.orig/drivers/base/topology.c
+++ struct-cpumasks/drivers/base/topology.c
@@ -42,15 +42,15 @@ static ssize_t show_##name(struct sys_de
 }
 
 #if defined(topology_thread_siblings) || defined(topology_core_siblings)
-static ssize_t show_cpumap(int type, cpumask_t *mask, char *buf)
+static ssize_t show_cpumap(int type, const_cpumask_t mask, char *buf)
 {
 	ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf;
 	int n = 0;
 
 	if (len > 1) {
 		n = type?
-			cpulist_scnprintf(buf, len-2, *mask):
-			cpumask_scnprintf(buf, len-2, *mask);
+			cpulist_scnprintf(buf, len-2, mask):
+			cpumask_scnprintf(buf, len-2, mask);
 		buf[n++] = '\n';
 		buf[n] = '\0';
 	}
@@ -64,7 +64,7 @@ static ssize_t show_##name(struct sys_de
 			   struct sysdev_attribute *attr, char *buf)	\
 {									\
 	unsigned int cpu = dev->id;					\
-	return show_cpumap(0, &(topology_##name(cpu)), buf);		\
+	return show_cpumap(0, topology_##name(cpu), buf);		\
 }
 
 #define define_siblings_show_list(name)					\
@@ -73,7 +73,7 @@ static ssize_t show_##name##_list(struct
 				  char *buf)				\
 {									\
 	unsigned int cpu = dev->id;					\
-	return show_cpumap(1, &(topology_##name(cpu)), buf);		\
+	return show_cpumap(1, topology_##name(cpu), buf);		\
 }
 
 #else
@@ -82,8 +82,9 @@ static ssize_t show_##name(struct sys_de
 			   struct sysdev_attribute *attr, char *buf)	\
 {									\
 	unsigned int cpu = dev->id;					\
-	cpumask_t mask = topology_##name(cpu);				\
-	return show_cpumap(0, &mask, buf);				\
+	cpumask_var_t mask;						\
+	cpus_copy(mask, topology_##name(cpu));				\
+	return show_cpumap(0, mask, buf);				\
 }
 
 #define define_siblings_show_list(name)					\
@@ -92,8 +93,9 @@ static ssize_t show_##name##_list(struct
 				  char *buf)				\
 {									\
 	unsigned int cpu = dev->id;					\
-	cpumask_t mask = topology_##name(cpu);				\
-	return show_cpumap(1, &mask, buf);				\
+	cpumask_var_t mask;						\
+	cpus_copy(mask, topology_##name(cpu));				\
+	return show_cpumap(1, mask, buf);				\
 }
 #endif
 
--- struct-cpumasks.orig/drivers/firmware/dcdbas.c
+++ struct-cpumasks/drivers/firmware/dcdbas.c
@@ -244,7 +244,7 @@ static ssize_t host_control_on_shutdown_
  */
 static int smi_request(struct smi_cmd *smi_cmd)
 {
-	cpumask_t old_mask;
+	cpumask_var_t old_mask;
 	int ret = 0;
 
 	if (smi_cmd->magic != SMI_CMD_MAGIC) {
@@ -254,7 +254,7 @@ static int smi_request(struct smi_cmd *s
 	}
 
 	/* SMI requires CPU 0 */
-	old_mask = current->cpus_allowed;
+	cpus_copy(old_mask, current->cpus_allowed);
 	set_cpus_allowed(current, cpumask_of_cpu(0));
 	if (smp_processor_id() != 0) {
 		dev_dbg(&dcdbas_pdev->dev, "%s: failed to get CPU 0\n",
@@ -275,7 +275,7 @@ static int smi_request(struct smi_cmd *s
 	);
 
 out:
-	set_cpus_allowed(current, &old_mask);
+	set_cpus_allowed(current, old_mask);
 	return ret;
 }
 
--- struct-cpumasks.orig/drivers/net/sfc/efx.c
+++ struct-cpumasks/drivers/net/sfc/efx.c
@@ -834,7 +834,7 @@ static void efx_probe_interrupts(struct 
 		BUG_ON(!pci_find_capability(efx->pci_dev, PCI_CAP_ID_MSIX));
 
 		if (rss_cpus == 0) {
-			cpumask_t core_mask;
+			cpumask_var_t core_mask;
 			int cpu;
 
 			cpus_clear(core_mask);
--- struct-cpumasks.orig/drivers/oprofile/buffer_sync.c
+++ struct-cpumasks/drivers/oprofile/buffer_sync.c
@@ -37,7 +37,7 @@
 
 static LIST_HEAD(dying_tasks);
 static LIST_HEAD(dead_tasks);
-static cpumask_t marked_cpus = CPU_MASK_NONE;
+static cpumask_map_t marked_cpus = CPU_MASK_NONE;
 static DEFINE_SPINLOCK(task_mortuary);
 static void process_task_mortuary(void);
 
--- struct-cpumasks.orig/include/asm-x86/paravirt.h
+++ struct-cpumasks/include/asm-x86/paravirt.h
@@ -245,7 +245,7 @@ struct pv_mmu_ops {
 	void (*flush_tlb_user)(void);
 	void (*flush_tlb_kernel)(void);
 	void (*flush_tlb_single)(unsigned long addr);
-	void (*flush_tlb_others)(const cpumask_t *cpus, struct mm_struct *mm,
+	void (*flush_tlb_others)(const_cpumask_t cpus, struct mm_struct *mm,
 				 unsigned long va);
 
 	/* Hooks for allocating and freeing a pagetable top-level */
@@ -985,10 +985,10 @@ static inline void __flush_tlb_single(un
 	PVOP_VCALL1(pv_mmu_ops.flush_tlb_single, addr);
 }
 
-static inline void flush_tlb_others(cpumask_t cpumask, struct mm_struct *mm,
+static inline void flush_tlb_others(const_cpumask_t cpumask, struct mm_struct *mm,
 				    unsigned long va)
 {
-	PVOP_VCALL3(pv_mmu_ops.flush_tlb_others, &cpumask, mm, va);
+	PVOP_VCALL3(pv_mmu_ops.flush_tlb_others, cpumask, mm, va);
 }
 
 static inline int paravirt_pgd_alloc(struct mm_struct *mm)
--- struct-cpumasks.orig/include/asm-x86/processor.h
+++ struct-cpumasks/include/asm-x86/processor.h
@@ -93,7 +93,7 @@ struct cpuinfo_x86 {
 	unsigned long		loops_per_jiffy;
 #ifdef CONFIG_SMP
 	/* cpus sharing the last level cache: */
-	cpumask_t		llc_shared_map;
+	cpumask_var_t		llc_shared_map;
 #endif
 	/* cpuid returned max cores value: */
 	u16			 x86_max_cores;
--- struct-cpumasks.orig/include/asm-x86/topology.h
+++ struct-cpumasks/include/asm-x86/topology.h
@@ -58,15 +58,15 @@ static inline int cpu_to_node(int cpu)
 #define early_cpu_to_node(cpu)	cpu_to_node(cpu)
 
 /* Returns a bitmask of CPUs on Node 'node'. */
-static inline const cpumask_t node_to_cpumask(int node)
+static inline const_cpumask_t node_to_cpumask(int node)
 {
-	return (const cpumask_t)&node_to_cpumask_map[node];
+	return (const_cpumask_t)&node_to_cpumask_map[node];
 }
 
 #else /* CONFIG_X86_64 */
 
 /* Mappings between node number and cpus on that node. */
-extern const cpumask_t node_to_cpumask_map;
+extern const_cpumask_t node_to_cpumask_map;
 
 /* Mappings between logical cpu number and node number */
 DECLARE_EARLY_PER_CPU(int, x86_cpu_to_node_map);
@@ -105,7 +105,7 @@ static inline const_cpumask_t node_to_cp
 	char *map = (char *)node_to_cpumask_map;
 
 	map += BITS_TO_LONGS(node * nr_cpu_ids);
-	return (const cpumask_t)map;
+	return (const_cpumask_t)map;
 }
 
 #endif /* !CONFIG_DEBUG_PER_CPU_MAPS */
@@ -175,7 +175,7 @@ extern int __node_distance(int, int);
 
 static inline const_cpumask_t node_to_cpumask(int node)
 {
-	return (const cpumask_t)cpu_online_map;
+	return (const_cpumask_t)cpu_online_map;
 }
 static inline int node_to_first_cpu(int node)
 {
@@ -190,11 +190,11 @@ static inline int node_to_first_cpu(int 
 /* Returns the number of the first CPU on Node 'node'. */
 static inline int node_to_first_cpu(int node)
 {
-	return cpus_first((const cpumask_t)node_to_cpumask(node));
+	return cpus_first((const_cpumask_t)node_to_cpumask(node));
 }
 #endif
 
-extern cpumask_t cpu_coregroup_map(int cpu);
+extern const_cpumask_t cpu_coregroup_map(int cpu);
 
 #ifdef ENABLE_TOPO_DEFINES
 #define topology_physical_package_id(cpu)	(cpu_data(cpu).phys_proc_id)
--- struct-cpumasks.orig/include/asm-x86/uv/uv_bau.h
+++ struct-cpumasks/include/asm-x86/uv/uv_bau.h
@@ -325,7 +325,7 @@ static inline void bau_cpubits_clear(str
 #define cpubit_isset(cpu, bau_local_cpumask) \
 	test_bit((cpu), (bau_local_cpumask).bits)
 
-extern int uv_flush_tlb_others(cpumask_t *, struct mm_struct *, unsigned long);
+extern int uv_flush_tlb_others(cpumask_t, struct mm_struct *, unsigned long);
 extern void uv_bau_message_intr1(void);
 extern void uv_bau_timeout_intr1(void);
 
--- struct-cpumasks.orig/include/linux/interrupt.h
+++ struct-cpumasks/include/linux/interrupt.h
@@ -67,7 +67,7 @@ typedef irqreturn_t (*irq_handler_t)(int
 struct irqaction {
 	irq_handler_t handler;
 	unsigned long flags;
-	cpumask_t mask;
+	cpumask_map_t mask;
 	const char *name;
 	void *dev_id;
 	struct irqaction *next;
@@ -111,15 +111,15 @@ extern void enable_irq(unsigned int irq)
 
 #if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_HARDIRQS)
 
-extern cpumask_t irq_default_affinity;
+extern cpumask_map_t irq_default_affinity;
 
-extern int irq_set_affinity(unsigned int irq, cpumask_t cpumask);
+extern int irq_set_affinity(unsigned int irq, const_cpumask_t cpumask);
 extern int irq_can_set_affinity(unsigned int irq);
 extern int irq_select_affinity(unsigned int irq);
 
 #else /* CONFIG_SMP */
 
-static inline int irq_set_affinity(unsigned int irq, cpumask_t cpumask)
+static inline int irq_set_affinity(unsigned int irq, const_cpumask_t cpumask)
 {
 	return -EINVAL;
 }
--- struct-cpumasks.orig/include/linux/seq_file.h
+++ struct-cpumasks/include/linux/seq_file.h
@@ -50,9 +50,9 @@ int seq_dentry(struct seq_file *, struct
 int seq_path_root(struct seq_file *m, struct path *path, struct path *root,
 		  char *esc);
 int seq_bitmap(struct seq_file *m, unsigned long *bits, unsigned int nr_bits);
-static inline int seq_cpumask(struct seq_file *m, cpumask_t mask)
+static inline int seq_cpumask(struct seq_file *m, const_cpumask_t mask)
 {
-	return seq_bitmap(m, mask->bits, nr_cpu_ids);
+	return seq_bitmap(m, cpus_addr(mask), nr_cpu_ids);
 }
 
 static inline int seq_nodemask(struct seq_file *m, nodemask_t *mask)
--- struct-cpumasks.orig/include/linux/stop_machine.h
+++ struct-cpumasks/include/linux/stop_machine.h
@@ -23,7 +23,7 @@
  *
  * This can be thought of as a very heavy write lock, equivalent to
  * grabbing every spinlock in the kernel. */
-int stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus);
+int stop_machine(int (*fn)(void *), void *data, const_cpumask_t cpus);
 
 /**
  * __stop_machine: freeze the machine on all CPUs and run this function
@@ -34,11 +34,11 @@ int stop_machine(int (*fn)(void *), void
  * Description: This is a special version of the above, which assumes cpus
  * won't come or go while it's being called.  Used by hotplug cpu.
  */
-int __stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus);
+int __stop_machine(int (*fn)(void *), void *data, const_cpumask_t cpus);
 #else
 
 static inline int stop_machine(int (*fn)(void *), void *data,
-			       const cpumask_t *cpus)
+			       const_cpumask_t cpus)
 {
 	int ret;
 	local_irq_disable();
--- struct-cpumasks.orig/net/core/dev.c
+++ struct-cpumasks/net/core/dev.c
@@ -169,7 +169,7 @@ static struct list_head ptype_all __read
 struct net_dma {
 	struct dma_client client;
 	spinlock_t lock;
-	cpumask_t channel_mask;
+	cpumask_map_t channel_mask;
 	struct dma_chan **channels;
 };
 
--- struct-cpumasks.orig/net/iucv/iucv.c
+++ struct-cpumasks/net/iucv/iucv.c
@@ -98,8 +98,8 @@ struct iucv_irq_list {
 };
 
 static struct iucv_irq_data *iucv_irq_data[NR_CPUS];
-static cpumask_t iucv_buffer_cpumask = CPU_MASK_NONE;
-static cpumask_t iucv_irq_cpumask = CPU_MASK_NONE;
+static cpumask_map_t iucv_buffer_cpumask = CPU_MASK_NONE;
+static cpumask_map_t iucv_irq_cpumask = CPU_MASK_NONE;
 
 /*
  * Queue of interrupt buffers lock for delivery via the tasklet
@@ -491,11 +491,11 @@ static void iucv_setmask_mp(void)
  */
 static void iucv_setmask_up(void)
 {
-	cpumask_t cpumask;
+	cpumask_var_t cpumask;
 	int cpu;
 
 	/* Disable all cpu but the first in cpu_irq_cpumask. */
-	cpumask = iucv_irq_cpumask;
+	cpus_copy(cpumask, iucv_irq_cpumask);
 	cpu_clear(cpus_first(iucv_irq_cpumask), cpumask);
 	for_each_cpu(cpu, cpumask)
 		smp_call_function_single(cpu, iucv_block_cpu, NULL, 1);
@@ -554,7 +554,7 @@ static void iucv_disable(void)
 static int __cpuinit iucv_cpu_notify(struct notifier_block *self,
 				     unsigned long action, void *hcpu)
 {
-	cpumask_t cpumask;
+	cpumask_var_t cpumask;
 	long cpu = (long) hcpu;
 
 	switch (action) {
@@ -589,7 +589,7 @@ static int __cpuinit iucv_cpu_notify(str
 		break;
 	case CPU_DOWN_PREPARE:
 	case CPU_DOWN_PREPARE_FROZEN:
-		cpumask = iucv_buffer_cpumask;
+		cpus_copy(cpumask, iucv_buffer_cpumask);
 		cpu_clear(cpu, cpumask);
 		if (cpus_empty(cpumask))
 			/* Can't offline last IUCV enabled cpu. */
--- struct-cpumasks.orig/virt/kvm/kvm_main.c
+++ struct-cpumasks/virt/kvm/kvm_main.c
@@ -57,7 +57,7 @@ MODULE_LICENSE("GPL");
 DEFINE_SPINLOCK(kvm_lock);
 LIST_HEAD(vm_list);
 
-static cpumask_t cpus_hardware_enabled;
+static cpumask_map_t cpus_hardware_enabled;
 
 struct kmem_cache *kvm_vcpu_cache;
 EXPORT_SYMBOL_GPL(kvm_vcpu_cache);
@@ -106,7 +106,7 @@ static void ack_flush(void *_completed)
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
 	int i, cpu, me;
-	cpumask_t cpus;
+	cpumask_var_t cpus;
 	struct kvm_vcpu *vcpu;
 
 	me = get_cpu();
@@ -132,7 +132,7 @@ out:
 void kvm_reload_remote_mmus(struct kvm *kvm)
 {
 	int i, cpu, me;
-	cpumask_t cpus;
+	cpumask_var_t cpus;
 	struct kvm_vcpu *vcpu;
 
 	me = get_cpu();

-- 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 05/31] cpumask: Provide new cpumask API
  2008-09-29 18:02 ` [PATCH 05/31] cpumask: Provide new cpumask API Mike Travis
@ 2008-09-30  9:11   ` Ingo Molnar
  2008-09-30 15:42     ` Mike Travis
  0 siblings, 1 reply; 43+ messages in thread
From: Ingo Molnar @ 2008-09-30  9:11 UTC (permalink / raw)
  To: Mike Travis
  Cc: Rusty Russell, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel


* Mike Travis <travis@sgi.com> wrote:

>     /* replaces cpumask_t dst = (cpumask_t)src */
>     void cpus_copy(cpumask_t dst, const cpumask_t src);

minor namespace nit i noticed while looking at actual usage of 
cpus_copy(): could you please rename it cpumask_set(dst, src)?

That streamlines it to have the same naming concept as atomic_set(), 
node_set(), zero_fd_set(), etc.

the patch-set looks quite nice otherwise already, the changes are 
straightforward and the end result already looks a lot more maintainable 
and not fragile at all.

In what way will Rusty's changes differ? Since you incorporate some of 
Rusty's changes already, could you please iterate towards a single 
patchset which we could then start testing?

	Ingo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 05/31] cpumask: Provide new cpumask API
  2008-09-30  9:11   ` Ingo Molnar
@ 2008-09-30 15:42     ` Mike Travis
  2008-09-30 16:17       ` Mike Travis
  0 siblings, 1 reply; 43+ messages in thread
From: Mike Travis @ 2008-09-30 15:42 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Rusty Russell, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel

Ingo Molnar wrote:
> * Mike Travis <travis@sgi.com> wrote:
> 
>>     /* replaces cpumask_t dst = (cpumask_t)src */
>>     void cpus_copy(cpumask_t dst, const cpumask_t src);
> 
> minor namespace nit i noticed while looking at actual usage of 
> cpus_copy(): could you please rename it cpumask_set(dst, src)?
> 
> That streamlines it to have the same naming concept as atomic_set(), 
> node_set(), zero_fd_set(), etc.

Cpus_copy came from its underlying function: bits_copy().  Cpumask_set
would deviate from the current naming convention of cpu_XXX for single
cpu ops and cpus_XXX for all cpus ops.  Do we want that?
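
To illustrate the existing convention (sketch only; cpu/mask/dst/src are
placeholders, and the cpus_copy() line is the new proposal, the rest are
existing cpumask ops):

	/* cpu_XXX: single-cpu bit operations */
	cpu_set(cpu, mask);
	cpu_clear(cpu, mask);

	/* cpus_XXX: whole-mask operations */
	cpus_and(dst, src1, src2);
	cpus_copy(dst, src);	/* proposed; would become cpumask_set()? */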

> 
> the patch-set looks quite nice otherwise already, the changes are 
> straightforward and the end result already looks a lot more maintainable 
> and not fragile at all.

I was hoping for a stronger compiler error to indicate incorrect usage;
currently it just says "may be used before it's initialized" if you mistakenly
have cpumask_t as the local variable declaration.  I'm testing it now.
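
For reference, a minimal sketch of that mistake (not from the patchset
itself; frob() is a made-up example):

	void frob(const_cpumask_t src)
	{
		cpumask_t tmp;		/* wrong: cpumask_t is now just a pointer type */

		cpus_copy(tmp, src);	/* gcc only warns that 'tmp' may be used uninitialized */
	}

	void frob_fixed(const_cpumask_t src)
	{
		cpumask_var_t tmp;	/* right: declares (or will allocate) real storage */

		cpus_copy(tmp, src);
	}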

> 
> In what way will Rusty's changes differ? Since you incorporate some of 
> Rusty's changes already, could you please iterate towards a single 
> patchset which we could then start testing?

Our timezones are not very conducive to a lot of email exchanges (and he's moving.)
From what I've seen I believe he's leaning towards using struct cpumask * and
less trickery than I have.

The other alternative that is very easy to implement with the new code is
using a simple unsigned long list for cpumask_t (as Linus first suggested):

	typedef unsigned long cpumask_var_t[1];	/* small NR_CPUS */
	typedef unsigned long *cpumask_var_t;	/* large NR_CPUS */

This simplifies things quite a bit and I can get rid of some trickery (and
it's a pointer already without having to invent a pointer to a struct.)  The
downside is arrays of cpumask_t's are less clean, but doable.  The best thing
about the changes in this patchset is the context of the cpumask is more well
known, and changes to the underlying type for cpumask are confined to only a
few files (cpumask.h, lib/cpumask.c, etc.)
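
As a rough sketch of that downside (nothing like this is in the patchset;
it only uses the existing BITS_TO_LONGS() and nr_cpu_ids symbols), an array
of masks under the flat unsigned-long scheme has to be laid out by hand:

	/* one contiguous allocation holding 'count' masks of nr_cpu_ids bits each */
	static unsigned long *alloc_mask_array(int count)
	{
		return kcalloc(count * BITS_TO_LONGS(nr_cpu_ids),
			       sizeof(unsigned long), GFP_KERNEL);
	}

	/* the i-th mask is just an offset into the flat allocation */
	#define nth_mask(p, i)	((p) + (i) * BITS_TO_LONGS(nr_cpu_ids))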

Thanks,
Mike

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 05/31] cpumask: Provide new cpumask API
  2008-09-30 15:42     ` Mike Travis
@ 2008-09-30 16:17       ` Mike Travis
  2008-10-01  9:08         ` Ingo Molnar
  0 siblings, 1 reply; 43+ messages in thread
From: Mike Travis @ 2008-09-30 16:17 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Rusty Russell, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel

Mike Travis wrote:
> Ingo Molnar wrote:
...
> 
>> In what way will Rusty's changes differ? Since you incorporate some of 
>> Rusty's changes already, could you please iterate towards a single 
>> patchset which we could then start testing?
> 
> Our timezones are not very conducive to a lot of email exchanges (and he's moving.)
> From what I've seen I believe he's leaning towards using struct cpumask * and
> less trickery than I have.

Oh yeah, I forgot the other major point of Rusty's approach.  He wants the
patchset to be completely bisectable.  That's far from true in my version.  

Thanks,
Mike

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/31] cpumask: Documentation
  2008-09-29 18:02 ` [PATCH 01/31] cpumask: Documentation Mike Travis
@ 2008-09-30 22:49   ` Rusty Russell
  2008-10-01  9:13     ` Ingo Molnar
  0 siblings, 1 reply; 43+ messages in thread
From: Rusty Russell @ 2008-09-30 22:49 UTC (permalink / raw)
  To: Mike Travis
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel

On Tuesday 30 September 2008 04:02:51 Mike Travis wrote:
> +The Changes
> +
> +Provide new cpumask interface API.  The relevant change is basically
> +cpumask_t becomes an opaque object.  This should result in the minimum
> +amount of modifications while still allowing the inline cpumask functions,
> +and the ability to declare static cpumask objects.
> +
> +
> +    /* raw declaration */
> +    struct __cpumask_data_s { DECLARE_BITMAP(bits, NR_CPUS); };
> +
> +    /* cpumask_map_t used for declaring static cpumask maps */
> +    typedef struct __cpumask_data_s cpumask_map_t[1];
> +
> +    /* cpumask_t used for function args and return pointers */
> +    typedef struct __cpumask_data_s *cpumask_t;
> +    typedef const struct __cpumask_data_s *const_cpumask_t;
> +
> +    /* cpumask_var_t used for local variable, definition follows */
> +    typedef struct __cpumask_data_s	cpumask_var_t[1]; /* SMALL NR_CPUS */
> +    typedef struct __cpumask_data_s	*cpumask_var_t;	  /* LARGE NR_CPUS */
> +
> +    /* replaces cpumask_t dst = (cpumask_t)src */
> +    void cpus_copy(cpumask_t dst, const cpumask_t src);

Hi Mike,

    I have several problems with this patch series.  First, it's a flag day 
change, which means it isn't bisectable and can't go through linux-next.  
Secondly, we still can't hide the definition of the cpumask struct as long as 
they're passed as cpumask_t, so it's going to be hard to find assignments 
(illegal once we allocate nr_cpu_ids bits rather than NR_CPUS), and on-stack 
users.

    Finally, we end up with code which is slightly more opaque than the 
current code, with two new typedefs.  And that's an ongoing problem.

    I took a slightly divergent line with my patch series, and introduced a 
parallel cpumask system which always passes and returns masks by pointer:

	cpumask_t -> struct cpumask
	on-stack cpumask_t -> cpumask_var_t (same as your patch)
	cpus_and(dst, src1, src2) etc -> cpumask_and(&dst, &src1, &src2)
	cpumask_t cpumask_of_cpu(cpu) -> const struct cpumask *cpumask_of(cpu)
	cpumask_t cpu_online_map etc -> const struct cpumask *cpu_online_mask etc.

The old ops are expressed in terms of the new ops, and can be phased out over 
time.
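
As a rough usage sketch (tsk and do_work() are just placeholders here, not
from the series):

	/* old style: masks passed and assigned by value */
	cpumask_t tmp;
	int cpu;

	cpus_and(tmp, tsk->cpus_allowed, cpu_online_map);
	for_each_cpu_mask(cpu, tmp)
		do_work(cpu);

	/* new style: everything by pointer, locals are cpumask_var_t */
	cpumask_var_t tmp;
	int cpu;

	cpumask_and(tmp, &tsk->cpus_allowed, cpu_online_mask);
	for_each_cpu(cpu, tmp)
		do_work(cpu);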

In addition, I added some new twists:

	static cpumasks and cpumasks in structures
		-> DECLARE_BITMAP(foo, NR_CPUS) and to_cpumask()

This means we can eventually obscure the actual definition of struct cpumask, 
to catch abuse.
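
For example (struct and function names here are made up, purely for
illustration):

	struct my_domain {
		DECLARE_BITMAP(span, NR_CPUS);	/* raw bitmap, not a struct cpumask */
	};

	static void init_my_domain(struct my_domain *d)
	{
		/* all access goes through the pointer-based ops */
		cpumask_copy(to_cpumask(d->span), cpu_online_mask);
	}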

	cpus_and(tmp, mask, online_mask); for_each_cpu(i, tmp)
		-> for_each_cpu_both(i, mask, online_mask)

This helper saves numerous on-stack temporaries.

	NR_CPUS -> CONFIG_NR_CPUS

The config option is now valid for UP as well.  This cleanup allows us to 
audit users of NR_CPUS (which might be used incorrectly now that cpumask_
iterators only go to nr_cpu_ids).

The patches are fairly uninteresting, but here is the summary:

x86: remove noop cpus_and() with CPU_MASK_ALL.
x86: clean up speedctep-centrino and reduce cpumask_t usage
cpumask: remove min from first_cpu/next_cpu
cpumask: introduce struct cpumask.
cpumask: change cpumask_scnprintf, cpumask_parse_user, cpulist_parse, and
	 cpulist_scnprintf to take pointers.
cpumask: add cpumask_copy()
cpumask: introduce cpumask_var_t for local cpumask vars
cpumask: make CONFIG_NR_CPUS always valid.
cpumask: use setup_nr_cpu_ids() instead of direct assignment.
cpumask: make nr_cpu_ids valid in all configurations.
cpumask: prepare for iterators to only go to nr_cpu_ids.
cpumask: make nr_cpu_ids the actual limit on bitmap size
cpumask: replace for_each_cpu_mask_nr with for_each_cpu_mask everywhere
cpumask: use cpumask_bits() everywhere.
cpumask: Use &cpu_mask_all instead of CPU_MASK_ALL_PTR.
cpumask: cpumask_of(): cpumask_of_cpu() which returns a pointer.
cpumask: for_each_cpu(): for_each_cpu_mask which takes a pointer
cpumask: cpumask_first/cpumask_next
cpumask: for_each_cpu_both() / cpumask_first_both() / cpumask_next_both()
cpumask: deprecate any_online_cpu() in favour of cpumask_any/cpumask_any_both
cpumask: Replace CPUMASK_ALLOC etc with cpumask_var_t.
cpumask: get rid of boutique sched.c allocations, use cpumask_var_t.
cpumask: reorder header to minimize separate #ifdefs
cpumask: accessors to manipulate possible/present/online/active maps
cpumask: Use accessors code.
cpumask: switch over to cpu_online/possible/active/present_mask
cpumask: to_cpumask()
cpumask: cpu_all_mask: pointer version of cpu_mask_all.
cpumask: remove any_online_cpu() users.
cpumask: smp_call_function_many()
cpumask: Use smp_call_function_many()
cpumask: make irq_set_affinity() take a const struct cpumask *
x86: make TARGET_CPUS/target_cpus take a const struct cpumask *

I'll commit these to my quilt series today.

Thanks,
Rusty.

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 05/31] cpumask: Provide new cpumask API
  2008-09-30 16:17       ` Mike Travis
@ 2008-10-01  9:08         ` Ingo Molnar
  0 siblings, 0 replies; 43+ messages in thread
From: Ingo Molnar @ 2008-10-01  9:08 UTC (permalink / raw)
  To: Mike Travis
  Cc: Rusty Russell, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel


* Mike Travis <travis@sgi.com> wrote:

> Mike Travis wrote:
> > Ingo Molnar wrote:
> ...
> > 
> >> In what way will Rusty's changes differ? Since you incorporate some of 
> >> Rusty's changes already, could you please iterate towards a single 
> >> patchset which we could then start testing?
> > 
> > Our timezones are not very conducive to a lot of email exchanges 
> > (and he's moving.) From what I've seen I believe he's leaning 
> > towards using struct cpumask * and less trickery than I have.

actually, that's quite sane to do. const_cpumask_t looked a bit weird to 
me.

the extra indirection to a cpumask_t is not a big issue IMO, so in that 
sense whether we pass by value or pass by reference is not a _big_ 
performance item.

The complications (both present and expected ones) all come from the 
allocations.
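
to make that concrete (helper names below are hypothetical, purely for
illustration): once masks really live off-stack, every local gets an
allocate/use/free pattern plus an error path:

	int example(const struct cpumask *src)
	{
		cpumask_var_t tmp;

		if (!alloc_cpumask_var(&tmp, GFP_KERNEL))	/* hypothetical helper */
			return -ENOMEM;

		cpumask_and(tmp, src, cpu_online_mask);
		/* ... use tmp ... */

		free_cpumask_var(tmp);				/* hypothetical helper */
		return 0;
	}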

> Oh yeah, I forgot the other major point of Rusty's approach.  He wants 
> the patchset to be completely bisectable.  That's far from true in my 
> version.

well, it should be a smooth transition and completely bisectable; 
there are hundreds of usages of cpumask_t and quite many in the pipeline. 
It's far easier for _you_ to get this stuff to work if it's all gradual 
and is expected to work all across. Have a default-off debug mode that 
turns off compatible cpumask_t perhaps - we can remove that later on.

with 'struct cpumask' we could keep cpumask_t as the compatible API, and 
could see the impact of these changes in a very finegrained and gradual 
way. Seems like a fundamentally sane approach to me ...

	Ingo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/31] cpumask: Documentation
  2008-09-30 22:49   ` Rusty Russell
@ 2008-10-01  9:13     ` Ingo Molnar
  2008-10-02  0:36       ` Rusty Russell
  0 siblings, 1 reply; 43+ messages in thread
From: Ingo Molnar @ 2008-10-01  9:13 UTC (permalink / raw)
  To: Rusty Russell
  Cc: Mike Travis, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel


* Rusty Russell <rusty@rustcorp.com.au> wrote:

> On Tuesday 30 September 2008 04:02:51 Mike Travis wrote:
> > +The Changes
> > +
> > +Provide new cpumask interface API.  The relevant change is basically
> > +cpumask_t becomes an opaque object.  This should result in the minimum
> > +amount of modifications while still allowing the inline cpumask functions,
> > +and the ability to declare static cpumask objects.
> > +
> > +
> > +    /* raw declaration */
> > +    struct __cpumask_data_s { DECLARE_BITMAP(bits, NR_CPUS); };
> > +
> > +    /* cpumask_map_t used for declaring static cpumask maps */
> > +    typedef struct __cpumask_data_s cpumask_map_t[1];
> > +
> > +    /* cpumask_t used for function args and return pointers */
> > +    typedef struct __cpumask_data_s *cpumask_t;
> > +    typedef const struct __cpumask_data_s *const_cpumask_t;
> > +
> > +    /* cpumask_var_t used for local variable, definition follows */
> > +    typedef struct __cpumask_data_s	cpumask_var_t[1]; /* SMALL NR_CPUS */
> > +    typedef struct __cpumask_data_s	*cpumask_var_t;	  /* LARGE NR_CPUS */
> > +
> > +    /* replaces cpumask_t dst = (cpumask_t)src */
> > +    void cpus_copy(cpumask_t dst, const cpumask_t src);
> 
> Hi Mike,
> 
>     I have several problems with this patch series.  First, it's a flag day 
> change, which means it isn't bisectable and can't go through linux-next.  
> Secondly, we still can't hide the definition of the cpumask struct as long as 
> they're passed as cpumask_t, so it's going to be hard to find assignments 
> (illegal once we allocate nr_cpu_ids bits rather than NR_CPUS), and on-stack 
> users.
> 
>     Finally, we end up with code which is slightly more opaque than the 
> current code, with two new typedefs.  And that's an ongoing problem.
> 
>     I took a slightly divergent line with my patch series, and introduced a 
> parallel cpumask system which always passes and returns masks by pointer:
> 
> 	cpumask_t -> struct cpumask
> 	on-stack cpumask_t -> cpumask_var_t (same as your patch)
> 	cpus_and(dst, src1, src2) etc -> cpumask_and(&dst, &src1, &src2)
> 	cpumask_t cpumask_of_cpu(cpu) -> const struct cpumask *cpumask_of(cpu)
> 	cpumask_t cpu_online_map etc -> const struct cpumask *cpu_online_mask etc.
> 
> The old ops are expressed in terms of the new ops, and can be phased out over 
> time.

that looks very sane to me.

one small request:

> I'll commit these to my quilt series today.

IMHO, an infrastructure change of this magnitude should absolutely be 
done via the Git space. This needs a ton of testing and needs bisection, 
a real Git track record, etc.

	Ingo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/31] cpumask: Documentation
  2008-10-01  9:13     ` Ingo Molnar
@ 2008-10-02  0:36       ` Rusty Russell
  2008-10-02  9:32         ` Ingo Molnar
  2008-10-02 12:54         ` Mike Travis
  0 siblings, 2 replies; 43+ messages in thread
From: Rusty Russell @ 2008-10-02  0:36 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mike Travis, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel

On Wednesday 01 October 2008 19:13:25 Ingo Molnar wrote:
> that looks very sane to me.

Thanks, it's reasonably nice.  The task of hitting all those cpumask_t users 
is big, and I don't think we can do it in one hit.

> one small request:
> > I'll commit these to my quilt series today.
>
> IMHO, an infrastructure change of this magnitude should absolutely be
> done via the Git space. This needs a ton of testing and needs bisection,
> a real Git track record, etc.

Not yet.  Committing untested patches into git is the enemy of bisection; if 
one of my patches breaks an architecture, they lose the ability to bisect 
until it's fixed.  If it's a series of patches, we can go back and fix it.

Now, once it's been tested a little, it's better for you to git-ize it and 
I'll send you patches instead.  But I want some more people banging on it, 
and a run through linux-next first...

If Mike's happy to work on these as a basis, we should be able to get there 
soon; the patches are sitting in my tree at http://ozlabs.org/~rusty/kernel/ 
(see rr-latest symlink).

Thanks,
Rusty.
PS.  To emphasize, I haven't actually *booted* this kernel.  My test machines 
are still in transit as I move (and ADSL not connected yet... Grr...)

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/31] cpumask: Documentation
  2008-10-02  0:36       ` Rusty Russell
@ 2008-10-02  9:32         ` Ingo Molnar
  2008-10-02 12:54         ` Mike Travis
  1 sibling, 0 replies; 43+ messages in thread
From: Ingo Molnar @ 2008-10-02  9:32 UTC (permalink / raw)
  To: Rusty Russell
  Cc: Mike Travis, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel


* Rusty Russell <rusty@rustcorp.com.au> wrote:

> > IMHO, an infrastructure change of this magnitude should absolutely 
> > be done via the Git space. This needs a ton of testing and needs 
> > bisection, a real Git track record, etc.
> 
> Not yet.  Committing untested patches into git is the enemy of 
> bisection; if one of my patches breaks an architecture, they lose the 
> ability to bisect until its fixed.  If it's a series of patches, we 
> can go back and fix it.

while the initial series might be rebased once or twice, beyond the 1-2 
days of initial integration and testing i dont think that's true, and 
i'm doing up to 3-4 bisections a day just fine, on an append-mostly 
tree.

if you have trouble turning a Git tree into a bisectable tree then your 
testing-fu is not strong enough ;-)

[ the only plausible danger is to architectures that are not used by 
  testers all that much (so that breakages can linger a lot longer 
  unnoticed) - but why should the other 99% of Linux users be put at a 
  disadvantage for them? ]

	Ingo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/31] cpumask: Documentation
  2008-10-02  0:36       ` Rusty Russell
  2008-10-02  9:32         ` Ingo Molnar
@ 2008-10-02 12:54         ` Mike Travis
  2008-10-03  9:04           ` Ingo Molnar
  1 sibling, 1 reply; 43+ messages in thread
From: Mike Travis @ 2008-10-02 12:54 UTC (permalink / raw)
  To: Rusty Russell
  Cc: Ingo Molnar, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel

Rusty Russell wrote:
> On Wednesday 01 October 2008 19:13:25 Ingo Molnar wrote:
>> that looks very sane to me.
> 
> Thanks, it's reasonably nice.  The task of hitting all those cpumask_t users 
> is big, and I don't think we can do it in one hit.
> 
>> one small request:
>>> I'll commit these to my quilt series today.
>> IMHO, an infrastructure change of this magnitude should absolutely be
>> done via the Git space. This needs a ton of testing and needs bisection,
>> a real Git track record, etc.
> 
> Not yet.  Committing untested patches into git is the enemy of bisection; if 
> one of my patches breaks an architecture, they lose the ability to bisect 
> until its fixed.  If it's a series of patches, we can go back and fix it.
> 
> Now, once it's been tested a little, it's better for you to git-ize it and 
> I'll send you patches instead.  But I want some more people banging on it, 
> and a run through linux-next first...
> 
> If Mike's happy to work on these as a basis, we should be able to get there 
> soon; the patches are sitting in my tree at http://ozlabs.org/~rusty/kernel/ 
> (see rr-latest symlink).

Absolutely!  I may have my own concerns and preferences but the end goal is
far more important.  I'll take a look at it today.  [My only other pressing
matter is convincing Ingo to accept the SCIR driver (or tell me how I need
to change it so it is acceptable), so my management is happy... ;-)]

> 
> Thanks,
> Rusty.
> PS.  To emphasize, I haven't actually *booted* this kernel.  My test machines 
> are still in transit as I move (and ADSL not connected yet... Grr...)

Since our approaches are not different in concept, I can assure you that it
works... ;-)  And as Ingo and others have noted, the infrastructure is easy
to verify; it's the allocation of the temporary cpumasks that will be more
difficult to test.

Cheers,
Mike


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH 01/31] cpumask: Documentation
  2008-10-02 12:54         ` Mike Travis
@ 2008-10-03  9:04           ` Ingo Molnar
  2008-10-06 15:02             ` Pretty blinking lights vs. monitoring system activity from a system controller Mike Travis
  0 siblings, 1 reply; 43+ messages in thread
From: Ingo Molnar @ 2008-10-03  9:04 UTC (permalink / raw)
  To: Mike Travis
  Cc: Rusty Russell, Linus Torvalds, Andrew Morton, David Miller,
	Yinghai Lu, Thomas Gleixner, Jack Steiner, linux-kernel,
	Pavel Machek, H. Peter Anvin


* Mike Travis <travis@sgi.com> wrote:

> Absolutely!  I may have my own concerns and preferences but the end 
> goal is far more important.  I'll take a look at it today.  [My only 
> other pressing matter is convincing Ingo to accept the SCIR driver (or 
> tell me how I need to change it so it is acceptable), so my management 
> is happy... ;-)]

it's getting off topic, but i really dont get it why you cannot go via 
the standard LEDS framework, and why you have to hook into the x86 idle 
notifiers. (which we are hoping to get rid of)

RAS does not need that precise accounting. It just needs a heartbeat 
timer that tells it how to do the pretty lights and to report whether 
the CPU is still alive. Something that seems to be fully within the 
scope of LEDS. What am i missing?

	Ingo

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Pretty blinking lights vs. monitoring system activity from a system controller
  2008-10-03  9:04           ` Ingo Molnar
@ 2008-10-06 15:02             ` Mike Travis
  0 siblings, 0 replies; 43+ messages in thread
From: Mike Travis @ 2008-10-06 15:02 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Rusty Russell, Andrew Morton, Thomas Gleixner, Jack Steiner,
	linux-kernel, Pavel Machek, H. Peter Anvin, Richard Purdie

could you please bring these arguments up in the public thread, with 
LEDS people Cc:-ed?

	Ingo

[Changed the Cc list to those I think may be interested, particularly
Richard Purdie <rpurdie@rpsys.net> for comments on the LED system,
and Thomas Gleixner <tglx@linutronix.de> for comments on using
the hi-res timer to interrupt each cpu every second.]

Ingo Molnar wrote:
> > 
> > it's getting off topic, but i really dont get it why you cannot go via 
> > the standard LEDS framework,

Hi Ingo,

The LED framework is fine for monitoring system activity with a few
LEDs.  It can quantify system activity to provide a variably lit LED
and display disk activity.  Each LED requires registration similar to:

/* For the leds-gpio driver */
struct gpio_led {
        const char *name;
        char *default_trigger;
        unsigned        gpio;
        u8              active_low;
};

struct gpio_led_platform_data {
        int             num_leds;
        struct gpio_led *leds;
        int             (*gpio_blink_set)(unsigned gpio,
                                        unsigned long *delay_on,
                                        unsigned long *delay_off);
};

I would need an array of up to 4096 of the led_info structs allocated
on Node 0 at bootup time based on the number of cpus.  Registration of
these 4096 LEDs will allocate another array of (up to) 4096 entries similar
to this struct on Node 0:

struct gpio_led_data {
        struct led_classdev cdev;
        unsigned gpio;
        struct work_struct work;
        u8 new_level;
        u8 can_sleep;
        u8 active_low;
        int (*platform_gpio_blink_set)(unsigned gpio,
                        unsigned long *delay_on, unsigned long *delay_off);
};

After registration there will be (up to) 4096 nodes in /sys/class/leds/
using the naming convention: "devicename:colour:function".  I'm not sure
of the total number of sysfs leaves but there's at least a brightness
and a trigger leaf under each.  This would add up to 12288 new entries
created in the sysfs filesystem.  (And none of these are useful.)
                                                                                                                                          
Servicing the trigger would require passing data over the system bus
each second for each LED.  In total this adds to the amount of memory
needed and unnecessarily reduces the available system bandwidth.

The current heartbeat trigger only quantifies the total system activity;
it does not precisely indicate which cpus are active or not.  There is
no way to associate the heartbeat trigger with a specific LED, and no
way to associate a specific LED with a specific cpu.

In contrast, my overhead is:

+struct uv_scir_s {
+       struct timer_list timer;
+       unsigned long   offset;
+       unsigned long   last;
+       unsigned long   idle_on;
+       unsigned long   idle_off;
+       unsigned char   state;
+       unsigned char   enabled;
+};

which is allocated in the UV hub info block in node local memory.  This
UV hub info block contains all the information needed to service the
UV hub for that node:

/*
 * The following defines attributes of the HUB chip. These attributes are
 * frequently referenced and are kept in the per-cpu data areas of each cpu.
 * They are kept together in a struct to minimize cache misses.
 */
struct uv_hub_info_s {
        unsigned long   global_mmr_base;
        unsigned long   gpa_mask;
        unsigned long   gnode_upper;
        unsigned long   lowmem_remap_top;
        unsigned long   lowmem_remap_base;
        unsigned short  pnode;
        unsigned short  pnode_mask;
        unsigned short  coherency_domain_number;
        unsigned short  numa_blade_id;
        unsigned char   blade_processor_id;
        unsigned char   m_val;
        unsigned char   n_val;
        struct uv_scir_s scir;
};


> > ...  and why you have to hook into the x86 idle 
> > notifiers. (which we are hoping to get rid of)

Is there any other instantaneous indication of whether the cpu is
currently idle prior to waking up to service the 1 second timer
interrupt?  I'd be glad to use something else, but I do not know what
that is.

The Altix (IA64) actually wrote to the HUB reg on each idle enter/exit,
and that was not considered excessive overhead (the write overhead is
extremely low and is "posted" in parallel with the instruction read stream).
I've toned this down (at your request) to only indicate whether the cpu "is
more idle than not during the last second" (much less accurate, but it at
least provides some indication of "idleness").

> > 
> > RAS does not need that precise accounting. It just needs a heartbeat 
> > timer that tells it how to do the pretty lights and to report whether 
> > the CPU is still alive. Something that seems to be fully within the 
> > scope of LEDS. What am i missing?

Each rack containing a UV system chassis has a system controller which
connects to each node board via the BMC bus.  If you're familiar with
the IPMI tool, then you know some of the capabilities of this backend
bus; suffice it to say, it has access to many internal registers in the
UV hub whether that node is functioning or not.

The service console attaches to these system controllers and is used
for hardware troubleshooting in the lab as well as in the field.
Some of the information is in the form of logs (memory/bus/cpu/IO errors,
etc.) and some of it indicates the state of the cpus during the last 64
seconds of operation (whether a cpu is handling interrupts and whether it
was idle or not).  There are RAS programs to analyze this information to
provide a system activity summary as well as to highlight potential causes
of system stoppage.

Once again, there are no LEDs.  This is not to provide pretty blinking
lights, but is a real part of SGI's RAS story.  I bring this up because
I'm stuck between a rock and a hard place.  I'm trying to provide what
our hardware engineers have requested for supporting our systems,
something at least as capable as our Altix product line (actually it's not,
as noted above).  And I would understand your objections if this overhead
were being imposed on all x86_64 systems, but this is specifically only
for SGI UV systems, and it's a trade-off that SGI is willing to make.

Thanks,
Mike

[patch attached for review.]
--
Subject: SGI X86 UV: Provide a System Activity Indicator driver

The SGI UV system has no LEDs but uses one of the system controller
regs to indicate the online internal state of the cpu.  There is a
heartbeat bit indicating that the cpu is responding to interrupts,
and an idle bit indicating whether the cpu has been more or less than
50% idle each heartbeat period.  The current period is one second.

When a cpu panics, an error code is written by BIOS to this same reg.

So the reg has been renamed the "System Controller Interface Reg".

This patchset provides the following:

  * x86_64: Add base functionality for writing to the specific SCIR's
    for each cpu.

  * idle: Add an idle callback to measure the idle "on" and "off" times.

  * heartbeat: Invert "heartbeat" bit to indicate the cpu is "active".

  * if hotplug is enabled, all bits are set (0xff) when the cpu is disabled.

Based on linux-2.6.tip/master.

Signed-off-by: Mike Travis <travis@sgi.com>
---
 arch/x86/kernel/genx2apic_uv_x.c |  138 +++++++++++++++++++++++++++++++++++++++
 include/asm-x86/uv/uv_hub.h      |   62 +++++++++++++++++
 2 files changed, 200 insertions(+)

--- linux-2.6.tip.orig/arch/x86/kernel/genx2apic_uv_x.c
+++ linux-2.6.tip/arch/x86/kernel/genx2apic_uv_x.c
@@ -10,6 +10,7 @@
 
 #include <linux/kernel.h>
 #include <linux/threads.h>
+#include <linux/cpu.h>
 #include <linux/cpumask.h>
 #include <linux/string.h>
 #include <linux/ctype.h>
@@ -18,6 +19,8 @@
 #include <linux/bootmem.h>
 #include <linux/module.h>
 #include <linux/hardirq.h>
+#include <linux/timer.h>
+#include <asm/idle.h>
 #include <asm/smp.h>
 #include <asm/ipi.h>
 #include <asm/genapic.h>
@@ -357,6 +360,139 @@ static __init void uv_rtc_init(void)
 		sn_rtc_cycles_per_second = ticks_per_sec;
 }
 
+/*
+ * percpu heartbeat timer
+ */
+static void uv_heartbeat(unsigned long ignored)
+{
+	struct timer_list *timer = &uv_hub_info->scir.timer;
+	unsigned char bits = uv_hub_info->scir.state;
+
+	/* flip heartbeat bit */
+	bits ^= SCIR_CPU_HEARTBEAT;
+
+	/* determine if we were mostly idle or not */
+	if (uv_hub_info->scir.idle_off && uv_hub_info->scir.idle_on) {
+		if (uv_hub_info->scir.idle_off > uv_hub_info->scir.idle_on)
+			bits |= SCIR_CPU_ACTIVITY;
+		else
+			bits &= ~SCIR_CPU_ACTIVITY;
+	}
+
+	/* reset idle counters */
+	uv_hub_info->scir.idle_on = 0;
+	uv_hub_info->scir.idle_off = 0;
+
+	/* update system controller interface reg */
+	uv_set_scir_bits(bits);
+
+	/* enable next timer period */
+	mod_timer(timer, jiffies + SCIR_CPU_HB_INTERVAL);
+}
+
+static int uv_idle(struct notifier_block *nfb, unsigned long action, void *junk)
+{
+	unsigned long elapsed = jiffies - uv_hub_info->scir.last;
+
+	/*
+	 * update activity to indicate current state,
+	 * measure time since last change
+	 */
+	if (action == IDLE_START) {
+
+		uv_hub_info->scir.state &= ~SCIR_CPU_ACTIVITY;
+		uv_hub_info->scir.idle_on += elapsed;
+		uv_hub_info->scir.last = jiffies;
+
+	} else if (action == IDLE_END) {
+
+		uv_hub_info->scir.state |= SCIR_CPU_ACTIVITY;
+		uv_hub_info->scir.idle_off += elapsed;
+		uv_hub_info->scir.last = jiffies;
+	}
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block uv_idle_notifier = {
+	.notifier_call = uv_idle,
+};
+
+static void __cpuinit uv_heartbeat_enable(int cpu)
+{
+	if (!uv_cpu_hub_info(cpu)->scir.enabled) {
+		struct timer_list *timer = &uv_cpu_hub_info(cpu)->scir.timer;
+
+		uv_set_cpu_scir_bits(cpu, SCIR_CPU_HEARTBEAT|SCIR_CPU_ACTIVITY);
+		setup_timer(timer, uv_heartbeat, cpu);
+		timer->expires = jiffies + SCIR_CPU_HB_INTERVAL;
+		add_timer_on(timer, cpu);
+		uv_cpu_hub_info(cpu)->scir.enabled = 1;
+	}
+
+	/* check boot cpu */
+	if (!uv_cpu_hub_info(0)->scir.enabled)
+		uv_heartbeat_enable(0);
+}
+
+static void __cpuinit uv_heartbeat_disable(int cpu)
+{
+	if (uv_cpu_hub_info(cpu)->scir.enabled) {
+		uv_cpu_hub_info(cpu)->scir.enabled = 0;
+		del_timer(&uv_cpu_hub_info(cpu)->scir.timer);
+	}
+	uv_set_cpu_scir_bits(cpu, 0xff);
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+/*
+ * cpu hotplug notifier
+ */
+static __cpuinit int uv_scir_cpu_notify(struct notifier_block *self,
+				       unsigned long action, void *hcpu)
+{
+	long cpu = (long)hcpu;
+
+	switch (action) {
+	case CPU_ONLINE:
+		uv_heartbeat_enable(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		uv_heartbeat_disable(cpu);
+		break;
+	default:
+		break;
+	}
+	return NOTIFY_OK;
+}
+
+static __init void uv_scir_register_cpu_notifier(void)
+{
+	hotcpu_notifier(uv_scir_cpu_notify, 0);
+	idle_notifier_register(&uv_idle_notifier);
+}
+
+#else /* !CONFIG_HOTPLUG_CPU */
+
+static __init void uv_scir_register_cpu_notifier(void)
+{
+	idle_notifier_register(&uv_idle_notifier);
+}
+
+static __init int uv_init_heartbeat(void)
+{
+	int cpu;
+
+	if (is_uv_system())
+		for_each_online_cpu(cpu)
+			uv_heartbeat_enable(cpu);
+	return 0;
+}
+
+late_initcall(uv_init_heartbeat);
+
+#endif /* !CONFIG_HOTPLUG_CPU */
+
 static bool uv_system_inited;
 
 void __init uv_system_init(void)
@@ -435,6 +571,7 @@ void __init uv_system_init(void)
 		uv_cpu_hub_info(cpu)->gnode_upper = gnode_upper;
 		uv_cpu_hub_info(cpu)->global_mmr_base = mmr_base;
 		uv_cpu_hub_info(cpu)->coherency_domain_number = 0;/* ZZZ */
+		uv_cpu_hub_info(cpu)->scir.offset = SCIR_LOCAL_MMR_BASE + lcpu;
 		uv_node_to_blade[nid] = blade;
 		uv_cpu_to_blade[cpu] = blade;
 		max_pnode = max(pnode, max_pnode);
@@ -449,6 +586,7 @@ void __init uv_system_init(void)
 	map_mmr_high(max_pnode);
 	map_config_high(max_pnode);
 	map_mmioh_high(max_pnode);
+	uv_scir_register_cpu_notifier();
 	uv_system_inited = true;
 }
 
--- linux-2.6.tip.orig/include/asm-x86/uv/uv_hub.h
+++ linux-2.6.tip/include/asm-x86/uv/uv_hub.h
@@ -112,6 +112,16 @@
  */
 #define UV_MAX_NASID_VALUE	(UV_MAX_NUMALINK_NODES * 2)
 
+struct uv_scir_s {
+	struct timer_list timer;
+	unsigned long	offset;
+	unsigned long	last;
+	unsigned long	idle_on;
+	unsigned long	idle_off;
+	unsigned char	state;
+	unsigned char	enabled;
+};
+
 /*
  * The following defines attributes of the HUB chip. These attributes are
  * frequently referenced and are kept in the per-cpu data areas of each cpu.
@@ -130,7 +140,9 @@ struct uv_hub_info_s {
 	unsigned char	blade_processor_id;
 	unsigned char	m_val;
 	unsigned char	n_val;
+	struct uv_scir_s scir;
 };
+
 DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
 #define uv_hub_info 		(&__get_cpu_var(__uv_hub_info))
 #define uv_cpu_hub_info(cpu)	(&per_cpu(__uv_hub_info, cpu))
@@ -162,6 +174,30 @@ DECLARE_PER_CPU(struct uv_hub_info_s, __
 
 #define UV_APIC_PNODE_SHIFT	6
 
+/* Local Bus from cpu's perspective */
+#define LOCAL_BUS_BASE		0x1c00000
+#define LOCAL_BUS_SIZE		(4 * 1024 * 1024)
+
+/*
+ * System Controller Interface Reg
+ *
+ * Note there are NO leds on a UV system.  This register is only
+ * used by the system controller to monitor system-wide operation.
+ * There are 64 regs per node.  With Nehalem cpus (2 cores per node,
+ * 8 cpus per core, 2 threads per cpu) there are 32 cpu threads on
+ * a node.
+ *
+ * The window is located at top of ACPI MMR space
+ */
+#define SCIR_WINDOW_COUNT	64
+#define SCIR_LOCAL_MMR_BASE	(LOCAL_BUS_BASE + \
+				 LOCAL_BUS_SIZE - \
+				 SCIR_WINDOW_COUNT)
+
+#define SCIR_CPU_HEARTBEAT	0x01	/* timer interrupt */
+#define SCIR_CPU_ACTIVITY	0x02	/* not idle */
+#define SCIR_CPU_HB_INTERVAL	(HZ)	/* once per second */
+
 /*
  * Macros for converting between kernel virtual addresses, socket local physical
  * addresses, and UV global physical addresses.
@@ -276,6 +312,16 @@ static inline void uv_write_local_mmr(un
 	*uv_local_mmr_address(offset) = val;
 }
 
+static inline unsigned char uv_read_local_mmr8(unsigned long offset)
+{
+	return *((unsigned char *)uv_local_mmr_address(offset));
+}
+
+static inline void uv_write_local_mmr8(unsigned long offset, unsigned char val)
+{
+	*((unsigned char *)uv_local_mmr_address(offset)) = val;
+}
+
 /*
  * Structures and definitions for converting between cpu, node, pnode, and blade
  * numbers.
@@ -350,5 +396,21 @@ static inline int uv_num_possible_blades
 	return uv_possible_blades;
 }
 
+/* Update SCIR state */
+static inline void uv_set_scir_bits(unsigned char value)
+{
+	if (uv_hub_info->scir.state != value) {
+		uv_hub_info->scir.state = value;
+		uv_write_local_mmr8(uv_hub_info->scir.offset, value);
+	}
+}
+static inline void uv_set_cpu_scir_bits(int cpu, unsigned char value)
+{
+	if (uv_cpu_hub_info(cpu)->scir.state != value) {
+		uv_cpu_hub_info(cpu)->scir.state = value;
+		uv_write_local_mmr8(uv_cpu_hub_info(cpu)->scir.offset, value);
+	}
+}
+
 #endif /* ASM_X86__UV__UV_HUB_H */
 


^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2008-10-06 15:02 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-09-29 18:02 [PATCH 00/31] cpumask: Provide new cpumask API Mike Travis
2008-09-29 18:02 ` [PATCH 01/31] cpumask: Documentation Mike Travis
2008-09-30 22:49   ` Rusty Russell
2008-10-01  9:13     ` Ingo Molnar
2008-10-02  0:36       ` Rusty Russell
2008-10-02  9:32         ` Ingo Molnar
2008-10-02 12:54         ` Mike Travis
2008-10-03  9:04           ` Ingo Molnar
2008-10-06 15:02             ` Pretty blinking lights vs. monitoring system activity from a system controller Mike Travis
2008-09-29 18:02 ` [PATCH 02/31] cpumask: modify send_IPI_mask interface to accept cpumask_t pointers Mike Travis
2008-09-29 18:02 ` [PATCH 03/31] cpumask: remove min from first_cpu/next_cpu Mike Travis
2008-09-29 18:02 ` [PATCH 04/31] cpumask: move cpu_alloc to separate file Mike Travis
2008-09-29 18:02 ` [PATCH 05/31] cpumask: Provide new cpumask API Mike Travis
2008-09-30  9:11   ` Ingo Molnar
2008-09-30 15:42     ` Mike Travis
2008-09-30 16:17       ` Mike Travis
2008-10-01  9:08         ` Ingo Molnar
2008-09-29 18:02 ` [PATCH 06/31] cpumask: new lib/cpumask.c Mike Travis
2008-09-29 18:02 ` [PATCH 07/31] cpumask: changes to compile init/main.c Mike Travis
2008-09-29 18:02 ` [PATCH 08/31] cpumask: Change cpumask maps Mike Travis
2008-09-29 18:02 ` [PATCH 09/31] cpumask: get rid of _nr functions Mike Travis
2008-09-29 18:03 ` [PATCH 10/31] cpumask: clean cpumask_of_cpu refs Mike Travis
2008-09-29 18:03 ` [PATCH 11/31] cpumask: remove set_cpus_allowed_ptr Mike Travis
2008-09-29 18:03 ` [PATCH 12/31] cpumask: remove CPU_MASK_ALL_PTR Mike Travis
2008-09-29 18:03 ` [PATCH 13/31] cpumask: modify for_each_cpu_mask Mike Travis
2008-09-29 18:03 ` [PATCH 14/31] cpumask: change first/next_cpu to cpus_first/next Mike Travis
2008-09-29 18:03 ` [PATCH 15/31] cpumask: remove node_to_cpumask_ptr Mike Travis
2008-09-29 18:03 ` [PATCH 16/31] cpumask: clean apic files Mike Travis
2008-09-29 18:03 ` [PATCH 17/31] cpumask: clean cpufreq files Mike Travis
2008-09-29 18:03 ` [PATCH 18/31] cpumask: clean sched files Mike Travis
2008-09-29 18:03 ` [PATCH 19/31] cpumask: clean xen files Mike Travis
2008-09-29 18:03 ` [PATCH 20/31] cpumask: clean mm files Mike Travis
2008-09-29 18:03 ` [PATCH 21/31] cpumask: clean acpi files Mike Travis
2008-09-29 18:03 ` [PATCH 22/31] cpumask: clean irq files Mike Travis
2008-09-29 18:03 ` [PATCH 23/31] cpumask: clean pci files Mike Travis
2008-09-29 18:03 ` [PATCH 24/31] cpumask: clean cpu files Mike Travis
2008-09-29 18:03 ` [PATCH 25/31] cpumask: clean rcu files Mike Travis
2008-09-29 18:03 ` [PATCH 26/31] cpumask: clean tlb files Mike Travis
2008-09-29 18:03 ` [PATCH 27/31] cpumask: clean time files Mike Travis
2008-09-29 18:03 ` [PATCH 28/31] cpumask: clean smp files Mike Travis
2008-09-29 18:03 ` [PATCH 29/31] cpumask: clean trace files Mike Travis
2008-09-29 18:03 ` [PATCH 30/31] cpumask: clean kernel files Mike Travis
2008-09-29 18:03 ` [PATCH 31/31] cpumask: clean misc files Mike Travis
