* [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK.
@ 2015-03-02 11:35 Rusty Russell
  2015-03-02 11:35 ` [PATCH 02/16] cpumask: fix cpu-hotplug documentation Rusty Russell
                   ` (8 more replies)
  0 siblings, 9 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell

Using these functions with off-stack cpumasks is unsafe.  They use all
NR_CPUS bits, instead of nr_cpumask_bits.

In particular, lustre (in staging) used the cpus_* operators, and that
caused a bug.

Reported-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
 lib/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/Kconfig b/lib/Kconfig
index 87da53bb1fef..722427805220 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -398,8 +398,8 @@ config CPUMASK_OFFSTACK
 	  stack overflow.
 
 config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
-       bool "Disable obsolete cpumask functions" if DEBUG_PER_CPU_MAPS
-       depends on BROKEN
+       bool
+       depends on CPUMASK_OFFSTACK
 
 config CPU_RMAP
 	bool
-- 
2.1.0
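
For reference, the unsafety described above, as a minimal sketch (the
helper names are illustrative, not from the patch).  The obsolete cpus_*
operators take cpumask_t by value and always walk NR_CPUS bits, so
applying them through a dereferenced pointer to a small off-stack
allocation reads past its end; the cpumask_* replacements take struct
cpumask pointers and stop at nr_cpumask_bits:

#include <linux/cpumask.h>

/* Obsolete style (pre-removal API): cpumask_t by value, NR_CPUS bits. */
static int overlap_old(cpumask_t a, cpumask_t b)
{
	cpumask_t tmp;

	cpus_and(tmp, a, b);		/* scans all NR_CPUS bits */
	return cpus_weight(tmp);	/* ditto */
}

/* Replacement style: pointers, and the ops stop at nr_cpumask_bits
 * (== nr_cpu_ids when CPUMASK_OFFSTACK=y). */
static int overlap_new(const struct cpumask *a, const struct cpumask *b)
{
	cpumask_t tmp;

	cpumask_and(&tmp, a, b);
	return cpumask_weight(&tmp);
}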



* [PATCH 02/16] cpumask: fix cpu-hotplug documentation
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
  2015-03-02 11:47   ` Rusty Russell
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, Jonathan Corbet, linux-doc

It refers to an obsolete function.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
---
 Documentation/cpu-hotplug.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
index a0b005d2bd95..f9ad5e048b11 100644
--- a/Documentation/cpu-hotplug.txt
+++ b/Documentation/cpu-hotplug.txt
@@ -108,7 +108,7 @@ Never use anything other than cpumask_t to represent bitmap of CPUs.
 	for_each_possible_cpu     - Iterate over cpu_possible_mask
 	for_each_online_cpu       - Iterate over cpu_online_mask
 	for_each_present_cpu      - Iterate over cpu_present_mask
-	for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
+	for_each_cpu(x,mask)      - Iterate over some random collection of cpu mask.
 
 	#include <linux/cpu.h>
 	get_online_cpus() and put_online_cpus():
-- 
2.1.0
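
As a minimal usage sketch of the iterator the corrected documentation
now names (the function here is illustrative, not from the patch):

#include <linux/cpumask.h>
#include <linux/printk.h>

/* Walk every CPU set in @mask. */
static void show_cpus(const struct cpumask *mask)
{
	int cpu;

	for_each_cpu(cpu, mask)
		pr_info("cpu%d is in the mask\n", cpu);
}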



* [PATCH 03/16] ia64: Use for_each_cpu_and() and cpumask_any_and() instead of temp var.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, Tony Luck, Fenghua Yu, linux-ia64

Just a bit of manual neatening, before spatch cleans the rest.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/kernel/irq_ia64.c |  4 +---
 arch/ia64/kernel/msi_ia64.c | 10 ++++------
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 698d8fefde6c..3329177c262e 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -161,7 +161,6 @@ int bind_irq_vector(int irq, int vector, cpumask_t domain)
 static void __clear_irq_vector(int irq)
 {
 	int vector, cpu;
-	cpumask_t mask;
 	cpumask_t domain;
 	struct irq_cfg *cfg = &irq_cfg[irq];
 
@@ -169,8 +168,7 @@ static void __clear_irq_vector(int irq)
 	BUG_ON(cfg->vector == IRQ_VECTOR_UNASSIGNED);
 	vector = cfg->vector;
 	domain = cfg->domain;
-	cpumask_and(&mask, &cfg->domain, cpu_online_mask);
-	for_each_cpu_mask(cpu, mask)
+	for_each_cpu_and(cpu, &cfg->domain, cpu_online_mask)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 	cfg->vector = IRQ_VECTOR_UNASSIGNED;
 	cfg->domain = CPU_MASK_NONE;
diff --git a/arch/ia64/kernel/msi_ia64.c b/arch/ia64/kernel/msi_ia64.c
index 8ae36ea177d3..9dd7464f8c17 100644
--- a/arch/ia64/kernel/msi_ia64.c
+++ b/arch/ia64/kernel/msi_ia64.c
@@ -47,15 +47,14 @@ int ia64_setup_msi_irq(struct pci_dev *pdev, struct msi_desc *desc)
 	struct msi_msg	msg;
 	unsigned long	dest_phys_id;
 	int	irq, vector;
-	cpumask_t mask;
 
 	irq = create_irq();
 	if (irq < 0)
 		return irq;
 
 	irq_set_msi_desc(irq, desc);
-	cpumask_and(&mask, &(irq_to_domain(irq)), cpu_online_mask);
-	dest_phys_id = cpu_physical_id(first_cpu(mask));
+	dest_phys_id = cpu_physical_id(cpumask_any_and(&(irq_to_domain(irq)),
+						       cpu_online_mask));
 	vector = irq_to_vector(irq);
 
 	msg.address_hi = 0;
@@ -171,10 +170,9 @@ msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 {
 	struct irq_cfg *cfg = irq_cfg + irq;
 	unsigned dest;
-	cpumask_t mask;
 
-	cpumask_and(&mask, &(irq_to_domain(irq)), cpu_online_mask);
-	dest = cpu_physical_id(first_cpu(mask));
+	dest = cpu_physical_id(cpumask_first_and(&(irq_to_domain(irq)),
+						 cpu_online_mask));
 
 	msg->address_hi = 0;
 	msg->address_lo =
-- 
2.1.0
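
The two conversions above follow the same pattern; as a sketch (names
illustrative, not from the patch), each drops a cpumask_t temporary from
the stack:

#include <linux/cpumask.h>
#include <linux/printk.h>

/* Was: cpumask_and(&mask, domain, cpu_online_mask);
 *      for_each_cpu_mask(cpu, mask) ...  */
static void walk_online(const struct cpumask *domain)
{
	int cpu;

	for_each_cpu_and(cpu, domain, cpu_online_mask)
		pr_info("cpu%d is online in the domain\n", cpu);
}

/* Was: cpumask_and(&mask, domain, cpu_online_mask);
 *      first_cpu(mask);
 * Returns >= nr_cpu_ids when the intersection is empty.  */
static unsigned int pick_online(const struct cpumask *domain)
{
	return cpumask_any_and(domain, cpu_online_mask);
}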



* [PATCH 04/16] drivers: fix up obsolete cpu function usage.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
  2015-03-02 11:35 ` [PATCH 02/16] cpumask: fix cpu-hotplug documentation Rusty Russell
  2015-03-02 11:47   ` Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
  2015-03-02 22:23   ` Rafael J. Wysocki
  2015-03-02 11:35 ` [PATCH 05/16] staging/lustre: " Rusty Russell
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Rusty Russell, Thomas Gleixner, Rafael J. Wysocki, Herbert Xu,
	Jason Cooper, Chris Metcalf, netdev

Thanks to spatch, plus manual removal of "&*".  Then a sweep for
for_each_cpu_mask => for_each_cpu.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: netdev@vger.kernel.org
---
 drivers/clocksource/dw_apb_timer.c | 3 ++-
 drivers/cpuidle/coupled.c          | 6 +++---
 drivers/crypto/n2_core.c           | 4 ++--
 drivers/irqchip/irq-gic-v3.c       | 2 +-
 drivers/irqchip/irq-mips-gic.c     | 6 +++---
 drivers/net/ethernet/tile/tilegx.c | 4 ++--
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/clocksource/dw_apb_timer.c b/drivers/clocksource/dw_apb_timer.c
index f3656a6b0382..35a88097af3c 100644
--- a/drivers/clocksource/dw_apb_timer.c
+++ b/drivers/clocksource/dw_apb_timer.c
@@ -117,7 +117,8 @@ static void apbt_set_mode(enum clock_event_mode mode,
 	unsigned long period;
 	struct dw_apb_clock_event_device *dw_ced = ced_to_dw_apb_ced(evt);
 
-	pr_debug("%s CPU %d mode=%d\n", __func__, first_cpu(*evt->cpumask),
+	pr_debug("%s CPU %d mode=%d\n", __func__,
+		 cpumask_first(evt->cpumask),
 		 mode);
 
 	switch (mode) {
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 73fe2f8d7f96..7936dce4b878 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -292,7 +292,7 @@ static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
 	 */
 	smp_rmb();
 
-	for_each_cpu_mask(i, coupled->coupled_cpus)
+	for_each_cpu(i, &coupled->coupled_cpus)
 		if (cpu_online(i) && coupled->requested_state[i] < state)
 			state = coupled->requested_state[i];
 
@@ -338,7 +338,7 @@ static void cpuidle_coupled_poke_others(int this_cpu,
 {
 	int cpu;
 
-	for_each_cpu_mask(cpu, coupled->coupled_cpus)
+	for_each_cpu(cpu, &coupled->coupled_cpus)
 		if (cpu != this_cpu && cpu_online(cpu))
 			cpuidle_coupled_poke(cpu);
 }
@@ -638,7 +638,7 @@ int cpuidle_coupled_register_device(struct cpuidle_device *dev)
 	if (cpumask_empty(&dev->coupled_cpus))
 		return 0;
 
-	for_each_cpu_mask(cpu, dev->coupled_cpus) {
+	for_each_cpu(cpu, &dev->coupled_cpus) {
 		other_dev = per_cpu(cpuidle_devices, cpu);
 		if (other_dev && other_dev->coupled) {
 			coupled = other_dev->coupled;
diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
index afd136b45f49..10a9aeff1666 100644
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1754,7 +1754,7 @@ static int spu_mdesc_walk_arcs(struct mdesc_handle *mdesc,
 				dev->dev.of_node->full_name);
 			return -EINVAL;
 		}
-		cpu_set(*id, p->sharing);
+		cpumask_set_cpu(*id, &p->sharing);
 		table[*id] = p;
 	}
 	return 0;
@@ -1776,7 +1776,7 @@ static int handle_exec_unit(struct spu_mdesc_info *ip, struct list_head *list,
 		return -ENOMEM;
 	}
 
-	cpus_clear(p->sharing);
+	cpumask_clear(&p->sharing);
 	spin_lock_init(&p->lock);
 	p->q_type = q_type;
 	INIT_LIST_HEAD(&p->jobs);
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 1c6dea2fbc34..04b6f0732c1a 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -512,7 +512,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	 */
 	smp_wmb();
 
-	for_each_cpu_mask(cpu, *mask) {
+	for_each_cpu(cpu, mask) {
 		u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
 		u16 tlist;
 
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 9acdc080e7ec..f26307908a2a 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -345,19 +345,19 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
 	int		i;
 
 	cpumask_and(&tmp, cpumask, cpu_online_mask);
-	if (cpus_empty(tmp))
+	if (cpumask_empty(&tmp))
 		return -EINVAL;
 
 	/* Assumption : cpumask refers to a single CPU */
 	spin_lock_irqsave(&gic_lock, flags);
 
 	/* Re-route this IRQ */
-	gic_map_to_vpe(irq, first_cpu(tmp));
+	gic_map_to_vpe(irq, cpumask_first(&tmp));
 
 	/* Update the pcpu_masks */
 	for (i = 0; i < NR_CPUS; i++)
 		clear_bit(irq, pcpu_masks[i].pcpu_mask);
-	set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
+	set_bit(irq, pcpu_masks[cpumask_first(&tmp)].pcpu_mask);
 
 	cpumask_copy(d->affinity, cpumask);
 	spin_unlock_irqrestore(&gic_lock, flags);
diff --git a/drivers/net/ethernet/tile/tilegx.c b/drivers/net/ethernet/tile/tilegx.c
index bea8cd2bb56c..deac41498c6e 100644
--- a/drivers/net/ethernet/tile/tilegx.c
+++ b/drivers/net/ethernet/tile/tilegx.c
@@ -1122,7 +1122,7 @@ static int alloc_percpu_mpipe_resources(struct net_device *dev,
 			addr + i * sizeof(struct tile_net_comps);
 
 	/* If this is a network cpu, create an iqueue. */
-	if (cpu_isset(cpu, network_cpus_map)) {
+	if (cpumask_test_cpu(cpu, &network_cpus_map)) {
 		order = get_order(NOTIF_RING_SIZE);
 		page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
 		if (page == NULL) {
@@ -1298,7 +1298,7 @@ static int tile_net_init_mpipe(struct net_device *dev)
 	int first_ring, ring;
 	int instance = mpipe_instance(dev);
 	struct mpipe_data *md = &mpipe_data[instance];
-	int network_cpus_count = cpus_weight(network_cpus_map);
+	int network_cpus_count = cpumask_weight(&network_cpus_map);
 
 	if (!hash_default) {
 		netdev_err(dev, "Networking requires hash_default!\n");
-- 
2.1.0
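
The semantic patch itself isn't included in the series, so for
reference, here is the old-to-new correspondence the hunks above apply,
sketched on a dummy mask; the recurring theme is cpumask_t values giving
way to struct cpumask pointers:

#include <linux/cpumask.h>

static void mapping_demo(void)
{
	cpumask_t m;
	int cpu, w;

	cpumask_clear(&m);		/* was: cpus_clear(m)   */
	cpumask_set_cpu(0, &m);		/* was: cpu_set(0, m)   */
	if (cpumask_test_cpu(0, &m))	/* was: cpu_isset(0, m) */
		cpumask_clear_cpu(0, &m); /* was: cpu_clear(0, m) */
	w = cpumask_weight(&m);		/* was: cpus_weight(m)  */
	cpu = cpumask_first(&m);	/* was: first_cpu(m)    */
	for_each_cpu(cpu, &m)		/* was: for_each_cpu_mask(cpu, m) */
		w += cpu;
	(void)w;
}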



* [PATCH 05/16] staging/lustre: fix up obsolete cpu function usage.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
                   ` (2 preceding siblings ...)
  2015-03-02 11:35 ` [PATCH 04/16] drivers: fix up obsolete cpu function usage Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
  2015-03-02 17:50   ` Oleg Drokin
  2015-03-02 11:47   ` Rusty Russell
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, Oleg Drokin

They triggered this cleanup, so I've separated out their patch on the
assumption that they've already combed their code.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Oleg Drokin <green@linuxhacker.ru>
---
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c    |  4 +-
 .../staging/lustre/lustre/libcfs/linux/linux-cpu.c | 88 +++++++++++-----------
 drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c     |  6 +-
 drivers/staging/lustre/lustre/ptlrpc/service.c     |  4 +-
 4 files changed, 52 insertions(+), 50 deletions(-)

diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
index 651016919669..a25816ab9e53 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
@@ -638,8 +638,8 @@ kiblnd_get_completion_vector(kib_conn_t *conn, int cpt)
 		return 0;
 
 	/* hash NID to CPU id in this partition... */
-	off = do_div(nid, cpus_weight(*mask));
-	for_each_cpu_mask(i, *mask) {
+	off = do_div(nid, cpumask_weight(mask));
+	for_each_cpu(i, mask) {
 		if (off-- == 0)
 			return i % vectors;
 	}
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
index 05f7595f18aa..c8fca8aae848 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
@@ -204,7 +204,7 @@ cfs_cpt_table_print(struct cfs_cpt_table *cptab, char *buf, int len)
 		}
 
 		tmp += rc;
-		for_each_cpu_mask(j, *cptab->ctb_parts[i].cpt_cpumask) {
+		for_each_cpu(j, cptab->ctb_parts[i].cpt_cpumask) {
 			rc = snprintf(tmp, len, "%d ", j);
 			len -= rc;
 			if (len <= 0) {
@@ -240,8 +240,8 @@ cfs_cpt_weight(struct cfs_cpt_table *cptab, int cpt)
 	LASSERT(cpt == CFS_CPT_ANY || (cpt >= 0 && cpt < cptab->ctb_nparts));
 
 	return cpt == CFS_CPT_ANY ?
-	       cpus_weight(*cptab->ctb_cpumask) :
-	       cpus_weight(*cptab->ctb_parts[cpt].cpt_cpumask);
+	       cpumask_weight(cptab->ctb_cpumask) :
+	       cpumask_weight(cptab->ctb_parts[cpt].cpt_cpumask);
 }
 EXPORT_SYMBOL(cfs_cpt_weight);
 
@@ -251,8 +251,9 @@ cfs_cpt_online(struct cfs_cpt_table *cptab, int cpt)
 	LASSERT(cpt == CFS_CPT_ANY || (cpt >= 0 && cpt < cptab->ctb_nparts));
 
 	return cpt == CFS_CPT_ANY ?
-	       any_online_cpu(*cptab->ctb_cpumask) != NR_CPUS :
-	       any_online_cpu(*cptab->ctb_parts[cpt].cpt_cpumask) != NR_CPUS;
+	       cpumask_any_and(cptab->ctb_cpumask, cpu_online_mask) != NR_CPUS :
+	       cpumask_any_and(cptab->ctb_parts[cpt].cpt_cpumask,
+	       	        cpu_online_mask) != NR_CPUS;
 }
 EXPORT_SYMBOL(cfs_cpt_online);
 
@@ -296,11 +297,11 @@ cfs_cpt_set_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
 
 	cptab->ctb_cpu2cpt[cpu] = cpt;
 
-	LASSERT(!cpu_isset(cpu, *cptab->ctb_cpumask));
-	LASSERT(!cpu_isset(cpu, *cptab->ctb_parts[cpt].cpt_cpumask));
+	LASSERT(!cpumask_test_cpu(cpu, cptab->ctb_cpumask));
+	LASSERT(!cpumask_test_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask));
 
-	cpu_set(cpu, *cptab->ctb_cpumask);
-	cpu_set(cpu, *cptab->ctb_parts[cpt].cpt_cpumask);
+	cpumask_set_cpu(cpu, cptab->ctb_cpumask);
+	cpumask_set_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask);
 
 	node = cpu_to_node(cpu);
 
@@ -344,11 +345,11 @@ cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
 		return;
 	}
 
-	LASSERT(cpu_isset(cpu, *cptab->ctb_parts[cpt].cpt_cpumask));
-	LASSERT(cpu_isset(cpu, *cptab->ctb_cpumask));
+	LASSERT(cpumask_test_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask));
+	LASSERT(cpumask_test_cpu(cpu, cptab->ctb_cpumask));
 
-	cpu_clear(cpu, *cptab->ctb_parts[cpt].cpt_cpumask);
-	cpu_clear(cpu, *cptab->ctb_cpumask);
+	cpumask_clear_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask);
+	cpumask_clear_cpu(cpu, cptab->ctb_cpumask);
 	cptab->ctb_cpu2cpt[cpu] = -1;
 
 	node = cpu_to_node(cpu);
@@ -356,7 +357,7 @@ cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
 	LASSERT(node_isset(node, *cptab->ctb_parts[cpt].cpt_nodemask));
 	LASSERT(node_isset(node, *cptab->ctb_nodemask));
 
-	for_each_cpu_mask(i, *cptab->ctb_parts[cpt].cpt_cpumask) {
+	for_each_cpu(i, cptab->ctb_parts[cpt].cpt_cpumask) {
 		/* this CPT has other CPU belonging to this node? */
 		if (cpu_to_node(i) == node)
 			break;
@@ -365,7 +366,7 @@ cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
 	if (i == NR_CPUS)
 		node_clear(node, *cptab->ctb_parts[cpt].cpt_nodemask);
 
-	for_each_cpu_mask(i, *cptab->ctb_cpumask) {
+	for_each_cpu(i, cptab->ctb_cpumask) {
 		/* this CPT-table has other CPU belonging to this node? */
 		if (cpu_to_node(i) == node)
 			break;
@@ -383,13 +384,13 @@ cfs_cpt_set_cpumask(struct cfs_cpt_table *cptab, int cpt, cpumask_t *mask)
 {
 	int	i;
 
-	if (cpus_weight(*mask) == 0 || any_online_cpu(*mask) == NR_CPUS) {
+	if (cpumask_weight(mask) == 0 || cpumask_any_and(mask, cpu_online_mask) == NR_CPUS) {
 		CDEBUG(D_INFO, "No online CPU is found in the CPU mask for CPU partition %d\n",
 		       cpt);
 		return 0;
 	}
 
-	for_each_cpu_mask(i, *mask) {
+	for_each_cpu(i, mask) {
 		if (!cfs_cpt_set_cpu(cptab, cpt, i))
 			return 0;
 	}
@@ -403,7 +404,7 @@ cfs_cpt_unset_cpumask(struct cfs_cpt_table *cptab, int cpt, cpumask_t *mask)
 {
 	int	i;
 
-	for_each_cpu_mask(i, *mask)
+	for_each_cpu(i, mask)
 		cfs_cpt_unset_cpu(cptab, cpt, i);
 }
 EXPORT_SYMBOL(cfs_cpt_unset_cpumask);
@@ -493,7 +494,7 @@ cfs_cpt_clear(struct cfs_cpt_table *cptab, int cpt)
 	}
 
 	for (; cpt <= last; cpt++) {
-		for_each_cpu_mask(i, *cptab->ctb_parts[cpt].cpt_cpumask)
+		for_each_cpu(i, cptab->ctb_parts[cpt].cpt_cpumask)
 			cfs_cpt_unset_cpu(cptab, cpt, i);
 	}
 }
@@ -578,14 +579,14 @@ cfs_cpt_bind(struct cfs_cpt_table *cptab, int cpt)
 		nodemask = cptab->ctb_parts[cpt].cpt_nodemask;
 	}
 
-	if (any_online_cpu(*cpumask) == NR_CPUS) {
+	if (cpumask_any_and(cpumask, cpu_online_mask) == NR_CPUS) {
 		CERROR("No online CPU found in CPU partition %d, did someone do CPU hotplug on system? You might need to reload Lustre modules to keep system working well.\n",
 		       cpt);
 		return -EINVAL;
 	}
 
 	for_each_online_cpu(i) {
-		if (cpu_isset(i, *cpumask))
+		if (cpumask_test_cpu(i, cpumask))
 			continue;
 
 		rc = set_cpus_allowed_ptr(current, cpumask);
@@ -616,14 +617,14 @@ cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt,
 
 	LASSERT(number > 0);
 
-	if (number >= cpus_weight(*node)) {
-		while (!cpus_empty(*node)) {
-			cpu = first_cpu(*node);
+	if (number >= cpumask_weight(node)) {
+		while (!cpumask_empty(node)) {
+			cpu = cpumask_first(node);
 
 			rc = cfs_cpt_set_cpu(cptab, cpt, cpu);
 			if (!rc)
 				return -EINVAL;
-			cpu_clear(cpu, *node);
+			cpumask_clear_cpu(cpu, node);
 		}
 		return 0;
 	}
@@ -636,27 +637,27 @@ cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt,
 		goto out;
 	}
 
-	while (!cpus_empty(*node)) {
-		cpu = first_cpu(*node);
+	while (!cpumask_empty(node)) {
+		cpu = cpumask_first(node);
 
 		/* get cpumask for cores in the same socket */
 		cfs_cpu_core_siblings(cpu, socket);
-		cpus_and(*socket, *socket, *node);
+		cpumask_and(socket, socket, node);
 
-		LASSERT(!cpus_empty(*socket));
+		LASSERT(!cpumask_empty(socket));
 
-		while (!cpus_empty(*socket)) {
+		while (!cpumask_empty(socket)) {
 			int     i;
 
 			/* get cpumask for hts in the same core */
 			cfs_cpu_ht_siblings(cpu, core);
-			cpus_and(*core, *core, *node);
+			cpumask_and(core, core, node);
 
-			LASSERT(!cpus_empty(*core));
+			LASSERT(!cpumask_empty(core));
 
-			for_each_cpu_mask(i, *core) {
-				cpu_clear(i, *socket);
-				cpu_clear(i, *node);
+			for_each_cpu(i, core) {
+				cpumask_clear_cpu(i, socket);
+				cpumask_clear_cpu(i, node);
 
 				rc = cfs_cpt_set_cpu(cptab, cpt, i);
 				if (!rc) {
@@ -667,7 +668,7 @@ cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt,
 				if (--number == 0)
 					goto out;
 			}
-			cpu = first_cpu(*socket);
+			cpu = cpumask_first(socket);
 		}
 	}
 
@@ -767,7 +768,7 @@ cfs_cpt_table_create(int ncpt)
 	for_each_online_node(i) {
 		cfs_node_to_cpumask(i, mask);
 
-		while (!cpus_empty(*mask)) {
+		while (!cpumask_empty(mask)) {
 			struct cfs_cpu_partition *part;
 			int    n;
 
@@ -776,24 +777,24 @@ cfs_cpt_table_create(int ncpt)
 
 			part = &cptab->ctb_parts[cpt];
 
-			n = num - cpus_weight(*part->cpt_cpumask);
+			n = num - cpumask_weight(part->cpt_cpumask);
 			LASSERT(n > 0);
 
 			rc = cfs_cpt_choose_ncpus(cptab, cpt, mask, n);
 			if (rc < 0)
 				goto failed;
 
-			LASSERT(num >= cpus_weight(*part->cpt_cpumask));
-			if (num == cpus_weight(*part->cpt_cpumask))
+			LASSERT(num >= cpumask_weight(part->cpt_cpumask));
+			if (num == cpumask_weight(part->cpt_cpumask))
 				cpt++;
 		}
 	}
 
 	if (cpt != ncpt ||
-	    num != cpus_weight(*cptab->ctb_parts[ncpt - 1].cpt_cpumask)) {
+	    num != cpumask_weight(cptab->ctb_parts[ncpt - 1].cpt_cpumask)) {
 		CERROR("Expect %d(%d) CPU partitions but got %d(%d), CPU hotplug/unplug while setting?\n",
 		       cptab->ctb_nparts, num, cpt,
-		       cpus_weight(*cptab->ctb_parts[ncpt - 1].cpt_cpumask));
+		       cpumask_weight(cptab->ctb_parts[ncpt - 1].cpt_cpumask));
 		goto failed;
 	}
 
@@ -965,7 +966,8 @@ cfs_cpu_notify(struct notifier_block *self, unsigned long action, void *hcpu)
 		mutex_lock(&cpt_data.cpt_mutex);
 		/* if all HTs in a core are offline, it may break affinity */
 		cfs_cpu_ht_siblings(cpu, cpt_data.cpt_cpumask);
-		warn = any_online_cpu(*cpt_data.cpt_cpumask) >= nr_cpu_ids;
+		warn = cpumask_any_and(cpt_data.cpt_cpumask,
+				       cpu_online_mask) >= nr_cpu_ids;
 		mutex_unlock(&cpt_data.cpt_mutex);
 		CDEBUG(warn ? D_WARNING : D_INFO,
 		       "Lustre: can't support CPU plug-out well now, performance and stability could be impacted [CPU %u action: %lx]\n",
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
index 4621b71fe0b6..625858b6d793 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
@@ -513,8 +513,8 @@ static int ptlrpcd_bind(int index, int max)
 		int i;
 		mask = *cpumask_of_node(cpu_to_node(index));
 		for (i = max; i < num_online_cpus(); i++)
-			cpu_clear(i, mask);
-		pc->pc_npartners = cpus_weight(mask) - 1;
+			cpumask_clear_cpu(i, &mask);
+		pc->pc_npartners = cpumask_weight(&mask) - 1;
 		set_bit(LIOD_BIND, &pc->pc_flags);
 	}
 #else
@@ -554,7 +554,7 @@ static int ptlrpcd_bind(int index, int max)
 				 * that are already initialized
 				 */
 				for (pidx = 0, i = 0; i < index; i++) {
-					if (cpu_isset(i, mask)) {
+					if (cpumask_test_cpu(i, &mask)) {
 						ppc = &ptlrpcds->pd_threads[i];
 						pc->pc_partners[pidx++] = ppc;
 						ppc->pc_partners[ppc->
diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
index 635b12b22cef..173803829865 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/service.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
@@ -558,7 +558,7 @@ ptlrpc_server_nthreads_check(struct ptlrpc_service *svc,
 		 * there are.
 		 */
 		cpumask_copy(&mask, topology_thread_cpumask(0));
-		if (cpus_weight(mask) > 1) { /* weight is # of HTs */
+		if (cpumask_weight(&mask) > 1) { /* weight is # of HTs */
 			/* depress thread factor for hyper-thread */
 			factor = factor - (factor >> 1) + (factor >> 3);
 		}
@@ -2771,7 +2771,7 @@ int ptlrpc_hr_init(void)
 	init_waitqueue_head(&ptlrpc_hr.hr_waitq);
 
 	cpumask_copy(&mask, topology_thread_cpumask(0));
-	weight = cpus_weight(mask);
+	weight = cpumask_weight(&mask);
 
 	cfs_percpt_for_each(hrp, i, ptlrpc_hr.hr_partitions) {
 		hrp->hrp_cpt = i;
-- 
2.1.0
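
One consistency note on the hunks above: cpumask_any_and() returns a
value >= nr_cpu_ids when the intersection is empty, and nr_cpu_ids can
be smaller than NR_CPUS once CPUMASK_OFFSTACK=y.  The hotplug-notifier
hunk already compares against nr_cpu_ids; the robust form of the other
"any online CPU in this mask?" checks is (illustrative sketch):

#include <linux/cpumask.h>

static bool mask_has_online(const struct cpumask *mask)
{
	return cpumask_any_and(mask, cpu_online_mask) < nr_cpu_ids;
}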



* [PATCH 06/16] ia64: fix up obsolete cpu function usage.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, Tony Luck, Fenghua Yu, linux-ia64

Thanks to spatch, then a sweep for for_each_cpu_mask => for_each_cpu.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/include/asm/acpi.h |  6 +++---
 arch/ia64/kernel/acpi.c      |  2 +-
 arch/ia64/kernel/iosapic.c   |  2 +-
 arch/ia64/kernel/irq_ia64.c  | 28 ++++++++++++++--------------
 arch/ia64/kernel/mca.c       | 10 +++++-----
 arch/ia64/kernel/numa.c      | 10 +++++-----
 arch/ia64/kernel/salinfo.c   | 24 ++++++++++++------------
 arch/ia64/kernel/setup.c     | 11 ++++++-----
 arch/ia64/kernel/smp.c       |  6 +++---
 arch/ia64/kernel/smpboot.c   | 42 ++++++++++++++++++++++--------------------
 arch/ia64/kernel/topology.c  |  6 +++---
 11 files changed, 75 insertions(+), 72 deletions(-)

diff --git a/arch/ia64/include/asm/acpi.h b/arch/ia64/include/asm/acpi.h
index a1d91ab4c5ef..aa0fdf125aba 100644
--- a/arch/ia64/include/asm/acpi.h
+++ b/arch/ia64/include/asm/acpi.h
@@ -117,7 +117,7 @@ static inline void arch_acpi_set_pdc_bits(u32 *buf)
 #ifdef CONFIG_ACPI_NUMA
 extern cpumask_t early_cpu_possible_map;
 #define for_each_possible_early_cpu(cpu)  \
-	for_each_cpu_mask((cpu), early_cpu_possible_map)
+	for_each_cpu((cpu), &early_cpu_possible_map)
 
 static inline void per_cpu_scan_finalize(int min_cpus, int reserve_cpus)
 {
@@ -125,13 +125,13 @@ static inline void per_cpu_scan_finalize(int min_cpus, int reserve_cpus)
 	int cpu;
 	int next_nid = 0;
 
-	low_cpu = cpus_weight(early_cpu_possible_map);
+	low_cpu = cpumask_weight(&early_cpu_possible_map);
 
 	high_cpu = max(low_cpu, min_cpus);
 	high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
 
 	for (cpu = low_cpu; cpu < high_cpu; cpu++) {
-		cpu_set(cpu, early_cpu_possible_map);
+		cpumask_set_cpu(cpu, &early_cpu_possible_map);
 		if (node_cpuid[cpu].nid == NUMA_NO_NODE) {
 			node_cpuid[cpu].nid = next_nid;
 			next_nid++;
diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
index 2c4498919d3c..35bf22cc71b7 100644
--- a/arch/ia64/kernel/acpi.c
+++ b/arch/ia64/kernel/acpi.c
@@ -483,7 +483,7 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
 	    (pa->apic_id << 8) | (pa->local_sapic_eid);
 	/* nid should be overridden as logical node id later */
 	node_cpuid[srat_num_cpus].nid = pxm;
-	cpu_set(srat_num_cpus, early_cpu_possible_map);
+	cpumask_set_cpu(srat_num_cpus, &early_cpu_possible_map);
 	srat_num_cpus++;
 }
 
diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index cd44a57c73be..bc9501e36e77 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -690,7 +690,7 @@ skip_numa_setup:
 	do {
 		if (++cpu >= nr_cpu_ids)
 			cpu = 0;
-	} while (!cpu_online(cpu) || !cpu_isset(cpu, domain));
+	} while (!cpu_online(cpu) || !cpumask_test_cpu(cpu, &domain));
 
 	return cpu_physical_id(cpu);
 #else  /* CONFIG_SMP */
diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 3329177c262e..9f40d972969c 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -109,13 +109,13 @@ static inline int find_unassigned_vector(cpumask_t domain)
 	int pos, vector;
 
 	cpumask_and(&mask, &domain, cpu_online_mask);
-	if (cpus_empty(mask))
+	if (cpumask_empty(&mask))
 		return -EINVAL;
 
 	for (pos = 0; pos < IA64_NUM_DEVICE_VECTORS; pos++) {
 		vector = IA64_FIRST_DEVICE_VECTOR + pos;
-		cpus_and(mask, domain, vector_table[vector]);
-		if (!cpus_empty(mask))
+		cpumask_and(&mask, &domain, &vector_table[vector]);
+		if (!cpumask_empty(&mask))
 			continue;
 		return vector;
 	}
@@ -132,18 +132,18 @@ static int __bind_irq_vector(int irq, int vector, cpumask_t domain)
 	BUG_ON((unsigned)vector >= IA64_NUM_VECTORS);
 
 	cpumask_and(&mask, &domain, cpu_online_mask);
-	if (cpus_empty(mask))
+	if (cpumask_empty(&mask))
 		return -EINVAL;
-	if ((cfg->vector == vector) && cpus_equal(cfg->domain, domain))
+	if ((cfg->vector == vector) && cpumask_equal(&cfg->domain, &domain))
 		return 0;
 	if (cfg->vector != IRQ_VECTOR_UNASSIGNED)
 		return -EBUSY;
-	for_each_cpu_mask(cpu, mask)
+	for_each_cpu(cpu, &mask)
 		per_cpu(vector_irq, cpu)[vector] = irq;
 	cfg->vector = vector;
 	cfg->domain = domain;
 	irq_status[irq] = IRQ_USED;
-	cpus_or(vector_table[vector], vector_table[vector], domain);
+	cpumask_or(&vector_table[vector], &vector_table[vector], &domain);
 	return 0;
 }
 
@@ -242,7 +242,7 @@ void __setup_vector_irq(int cpu)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 	/* Mark the inuse vectors */
 	for (irq = 0; irq < NR_IRQS; ++irq) {
-		if (!cpu_isset(cpu, irq_cfg[irq].domain))
+		if (!cpumask_test_cpu(cpu, &irq_cfg[irq].domain))
 			continue;
 		vector = irq_to_vector(irq);
 		per_cpu(vector_irq, cpu)[vector] = irq;
@@ -273,7 +273,7 @@ static int __irq_prepare_move(int irq, int cpu)
 		return -EBUSY;
 	if (cfg->vector == IRQ_VECTOR_UNASSIGNED || !cpu_online(cpu))
 		return -EINVAL;
-	if (cpu_isset(cpu, cfg->domain))
+	if (cpumask_test_cpu(cpu, &cfg->domain))
 		return 0;
 	domain = vector_allocation_domain(cpu);
 	vector = find_unassigned_vector(domain);
@@ -307,12 +307,12 @@ void irq_complete_move(unsigned irq)
 	if (likely(!cfg->move_in_progress))
 		return;
 
-	if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
+	if (unlikely(cpumask_test_cpu(smp_processor_id(), &cfg->old_domain)))
 		return;
 
 	cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
-	cfg->move_cleanup_count = cpus_weight(cleanup_mask);
-	for_each_cpu_mask(i, cleanup_mask)
+	cfg->move_cleanup_count = cpumask_weight(&cleanup_mask);
+	for_each_cpu(i, &cleanup_mask)
 		platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
 	cfg->move_in_progress = 0;
 }
@@ -338,12 +338,12 @@ static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
 		if (!cfg->move_cleanup_count)
 			goto unlock;
 
-		if (!cpu_isset(me, cfg->old_domain))
+		if (!cpumask_test_cpu(me, &cfg->old_domain))
 			goto unlock;
 
 		spin_lock_irqsave(&vector_lock, flags);
 		__this_cpu_write(vector_irq[vector], -1);
-		cpu_clear(me, vector_table[vector]);
+		cpumask_clear_cpu(me, &vector_table[vector]);
 		spin_unlock_irqrestore(&vector_lock, flags);
 		cfg->move_cleanup_count--;
 	unlock:
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 8bfd36af46f8..dd5801eb4c69 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1293,7 +1293,7 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
 		monarch_cpu = cpu;
 		sos->monarch = 1;
 	} else {
-		cpu_set(cpu, mca_cpu);
+		cpumask_set_cpu(cpu, &mca_cpu);
 		sos->monarch = 0;
 	}
 	mprintk(KERN_INFO "Entered OS MCA handler. PSP=%lx cpu=%d "
@@ -1316,7 +1316,7 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
 		 */
 		ia64_mca_wakeup_all();
 	} else {
-		while (cpu_isset(cpu, mca_cpu))
+		while (cpumask_test_cpu(cpu, &mca_cpu))
 			cpu_relax();	/* spin until monarch wakes us */
 	}
 
@@ -1355,9 +1355,9 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
 		 * and put this cpu in the rendez loop.
 		 */
 		for_each_online_cpu(i) {
-			if (cpu_isset(i, mca_cpu)) {
+			if (cpumask_test_cpu(i, &mca_cpu)) {
 				monarch_cpu = i;
-				cpu_clear(i, mca_cpu);	/* wake next cpu */
+				cpumask_clear_cpu(i, &mca_cpu);	/* wake next cpu */
 				while (monarch_cpu != -1)
 					cpu_relax();	/* spin until last cpu leaves */
 				set_curr_task(cpu, previous_current);
@@ -1822,7 +1822,7 @@ format_mca_init_stack(void *mca_data, unsigned long offset,
 	ti->cpu = cpu;
 	p->stack = ti;
 	p->state = TASK_UNINTERRUPTIBLE;
-	cpu_set(cpu, p->cpus_allowed);
+	cpumask_set_cpu(cpu, &p->cpus_allowed);
 	INIT_LIST_HEAD(&p->tasks);
 	p->parent = p->real_parent = p->group_leader = p;
 	INIT_LIST_HEAD(&p->children);
diff --git a/arch/ia64/kernel/numa.c b/arch/ia64/kernel/numa.c
index d288cde93606..92c376279c6d 100644
--- a/arch/ia64/kernel/numa.c
+++ b/arch/ia64/kernel/numa.c
@@ -39,7 +39,7 @@ void map_cpu_to_node(int cpu, int nid)
 	}
 	/* sanity check first */
 	oldnid = cpu_to_node_map[cpu];
-	if (cpu_isset(cpu, node_to_cpu_mask[oldnid])) {
+	if (cpumask_test_cpu(cpu, &node_to_cpu_mask[oldnid])) {
 		return; /* nothing to do */
 	}
 	/* we don't have cpu-driven node hot add yet...
@@ -47,16 +47,16 @@ void map_cpu_to_node(int cpu, int nid)
 	if (!node_online(nid))
 		nid = first_online_node;
 	cpu_to_node_map[cpu] = nid;
-	cpu_set(cpu, node_to_cpu_mask[nid]);
+	cpumask_set_cpu(cpu, &node_to_cpu_mask[nid]);
 	return;
 }
 
 void unmap_cpu_from_node(int cpu, int nid)
 {
-	WARN_ON(!cpu_isset(cpu, node_to_cpu_mask[nid]));
+	WARN_ON(!cpumask_test_cpu(cpu, &node_to_cpu_mask[nid]));
 	WARN_ON(cpu_to_node_map[cpu] != nid);
 	cpu_to_node_map[cpu] = 0;
-	cpu_clear(cpu, node_to_cpu_mask[nid]);
+	cpumask_clear_cpu(cpu, &node_to_cpu_mask[nid]);
 }
 
 
@@ -71,7 +71,7 @@ void __init build_cpu_to_node_map(void)
 	int cpu, i, node;
 
 	for(node=0; node < MAX_NUMNODES; node++)
-		cpus_clear(node_to_cpu_mask[node]);
+		cpumask_clear(&node_to_cpu_mask[node]);
 
 	for_each_possible_early_cpu(cpu) {
 		node = -1;
diff --git a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c
index ee9719eebb1e..1eeffb7fbb16 100644
--- a/arch/ia64/kernel/salinfo.c
+++ b/arch/ia64/kernel/salinfo.c
@@ -256,7 +256,7 @@ salinfo_log_wakeup(int type, u8 *buffer, u64 size, int irqsafe)
 			data_saved->buffer = buffer;
 		}
 	}
-	cpu_set(smp_processor_id(), data->cpu_event);
+	cpumask_set_cpu(smp_processor_id(), &data->cpu_event);
 	if (irqsafe) {
 		salinfo_work_to_do(data);
 		spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -274,7 +274,7 @@ salinfo_timeout_check(struct salinfo_data *data)
 	unsigned long flags;
 	if (!data->open)
 		return;
-	if (!cpus_empty(data->cpu_event)) {
+	if (!cpumask_empty(&data->cpu_event)) {
 		spin_lock_irqsave(&data_saved_lock, flags);
 		salinfo_work_to_do(data);
 		spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -308,7 +308,7 @@ salinfo_event_read(struct file *file, char __user *buffer, size_t count, loff_t
 	int i, n, cpu = -1;
 
 retry:
-	if (cpus_empty(data->cpu_event) && down_trylock(&data->mutex)) {
+	if (cpumask_empty(&data->cpu_event) && down_trylock(&data->mutex)) {
 		if (file->f_flags & O_NONBLOCK)
 			return -EAGAIN;
 		if (down_interruptible(&data->mutex))
@@ -317,9 +317,9 @@ retry:
 
 	n = data->cpu_check;
 	for (i = 0; i < nr_cpu_ids; i++) {
-		if (cpu_isset(n, data->cpu_event)) {
+		if (cpumask_test_cpu(n, &data->cpu_event)) {
 			if (!cpu_online(n)) {
-				cpu_clear(n, data->cpu_event);
+				cpumask_clear_cpu(n, &data->cpu_event);
 				continue;
 			}
 			cpu = n;
@@ -451,7 +451,7 @@ retry:
 		call_on_cpu(cpu, salinfo_log_read_cpu, data);
 	if (!data->log_size) {
 		data->state = STATE_NO_DATA;
-		cpu_clear(cpu, data->cpu_event);
+		cpumask_clear_cpu(cpu, &data->cpu_event);
 	} else {
 		data->state = STATE_LOG_RECORD;
 	}
@@ -491,11 +491,11 @@ salinfo_log_clear(struct salinfo_data *data, int cpu)
 	unsigned long flags;
 	spin_lock_irqsave(&data_saved_lock, flags);
 	data->state = STATE_NO_DATA;
-	if (!cpu_isset(cpu, data->cpu_event)) {
+	if (!cpumask_test_cpu(cpu, &data->cpu_event)) {
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 		return 0;
 	}
-	cpu_clear(cpu, data->cpu_event);
+	cpumask_clear_cpu(cpu, &data->cpu_event);
 	if (data->saved_num) {
 		shift1_data_saved(data, data->saved_num - 1);
 		data->saved_num = 0;
@@ -509,7 +509,7 @@ salinfo_log_clear(struct salinfo_data *data, int cpu)
 	salinfo_log_new_read(cpu, data);
 	if (data->state == STATE_LOG_RECORD) {
 		spin_lock_irqsave(&data_saved_lock, flags);
-		cpu_set(cpu, data->cpu_event);
+		cpumask_set_cpu(cpu, &data->cpu_event);
 		salinfo_work_to_do(data);
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 	}
@@ -581,7 +581,7 @@ salinfo_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu
 		for (i = 0, data = salinfo_data;
 		     i < ARRAY_SIZE(salinfo_data);
 		     ++i, ++data) {
-			cpu_set(cpu, data->cpu_event);
+			cpumask_set_cpu(cpu, &data->cpu_event);
 			salinfo_work_to_do(data);
 		}
 		spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -601,7 +601,7 @@ salinfo_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu
 					shift1_data_saved(data, j);
 				}
 			}
-			cpu_clear(cpu, data->cpu_event);
+			cpumask_clear_cpu(cpu, &data->cpu_event);
 		}
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 		break;
@@ -659,7 +659,7 @@ salinfo_init(void)
 
 		/* we missed any events before now */
 		for_each_online_cpu(j)
-			cpu_set(j, data->cpu_event);
+			cpumask_set_cpu(j, &data->cpu_event);
 
 		*sdir++ = dir;
 	}
diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
index d86669bcdfb2..b9761389cb8d 100644
--- a/arch/ia64/kernel/setup.c
+++ b/arch/ia64/kernel/setup.c
@@ -562,8 +562,8 @@ setup_arch (char **cmdline_p)
 #  ifdef CONFIG_ACPI_HOTPLUG_CPU
 	prefill_possible_map();
 #  endif
-	per_cpu_scan_finalize((cpus_weight(early_cpu_possible_map) == 0 ?
-		32 : cpus_weight(early_cpu_possible_map)),
+	per_cpu_scan_finalize((cpumask_weight(&early_cpu_possible_map) == 0 ?
+		32 : cpumask_weight(&early_cpu_possible_map)),
 		additional_cpus > 0 ? additional_cpus : 0);
 # endif
 #endif /* CONFIG_APCI_BOOT */
@@ -702,7 +702,8 @@ show_cpuinfo (struct seq_file *m, void *v)
 		   c->itc_freq / 1000000, c->itc_freq % 1000000,
 		   lpj*HZ/500000, (lpj*HZ/5000) % 100);
 #ifdef CONFIG_SMP
-	seq_printf(m, "siblings   : %u\n", cpus_weight(cpu_core_map[cpunum]));
+	seq_printf(m, "siblings   : %u\n",
+		   cpumask_weight(&cpu_core_map[cpunum]));
 	if (c->socket_id != -1)
 		seq_printf(m, "physical id: %u\n", c->socket_id);
 	if (c->threads_per_core > 1 || c->cores_per_socket > 1)
@@ -933,8 +934,8 @@ cpu_init (void)
 	 * (must be done after per_cpu area is setup)
 	 */
 	if (smp_processor_id() == 0) {
-		cpu_set(0, per_cpu(cpu_sibling_map, 0));
-		cpu_set(0, cpu_core_map[0]);
+		cpumask_set_cpu(0, &per_cpu(cpu_sibling_map, 0));
+		cpumask_set_cpu(0, &cpu_core_map[0]);
 	} else {
 		/*
 		 * Set ar.k3 so that assembly code in MCA handler can compute
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 9fcd4e63048f..7f706d4f84f7 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -262,11 +262,11 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 	preempt_disable();
 	mycpu = smp_processor_id();
 
-	for_each_cpu_mask(cpu, cpumask)
+	for_each_cpu(cpu, &cpumask)
 		counts[cpu] = local_tlb_flush_counts[cpu].count & 0xffff;
 
 	mb();
-	for_each_cpu_mask(cpu, cpumask) {
+	for_each_cpu(cpu, &cpumask) {
 		if (cpu == mycpu)
 			flush_mycpu = 1;
 		else
@@ -276,7 +276,7 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 	if (flush_mycpu)
 		smp_local_flush_tlb();
 
-	for_each_cpu_mask(cpu, cpumask)
+	for_each_cpu(cpu, &cpumask)
 		while(counts[cpu] == (local_tlb_flush_counts[cpu].count & 0xffff))
 			udelay(FLUSH_DELAY);
 
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 547a48d78bd7..15051e9c2c6f 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -434,7 +434,7 @@ smp_callin (void)
 	/*
 	 * Allow the master to continue.
 	 */
-	cpu_set(cpuid, cpu_callin_map);
+	cpumask_set_cpu(cpuid, &cpu_callin_map);
 	Dprintk("Stack on CPU %d at about %p\n",cpuid, &cpuid);
 }
 
@@ -475,13 +475,13 @@ do_boot_cpu (int sapicid, int cpu, struct task_struct *idle)
 	 */
 	Dprintk("Waiting on callin_map ...");
 	for (timeout = 0; timeout < 100000; timeout++) {
-		if (cpu_isset(cpu, cpu_callin_map))
+		if (cpumask_test_cpu(cpu, &cpu_callin_map))
 			break;  /* It has booted */
 		udelay(100);
 	}
 	Dprintk("\n");
 
-	if (!cpu_isset(cpu, cpu_callin_map)) {
+	if (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
 		printk(KERN_ERR "Processor 0x%x/0x%x is stuck.\n", cpu, sapicid);
 		ia64_cpu_to_sapicid[cpu] = -1;
 		set_cpu_online(cpu, false);  /* was set in smp_callin() */
@@ -541,7 +541,7 @@ smp_prepare_cpus (unsigned int max_cpus)
 
 	smp_setup_percpu_timer();
 
-	cpu_set(0, cpu_callin_map);
+	cpumask_set_cpu(0, &cpu_callin_map);
 
 	local_cpu_data->loops_per_jiffy = loops_per_jiffy;
 	ia64_cpu_to_sapicid[0] = boot_cpu_id;
@@ -565,7 +565,7 @@ smp_prepare_cpus (unsigned int max_cpus)
 void smp_prepare_boot_cpu(void)
 {
 	set_cpu_online(smp_processor_id(), true);
-	cpu_set(smp_processor_id(), cpu_callin_map);
+	cpumask_set_cpu(smp_processor_id(), &cpu_callin_map);
 	set_numa_node(cpu_to_node_map[smp_processor_id()]);
 	per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
 	paravirt_post_smp_prepare_boot_cpu();
@@ -577,10 +577,10 @@ clear_cpu_sibling_map(int cpu)
 {
 	int i;
 
-	for_each_cpu_mask(i, per_cpu(cpu_sibling_map, cpu))
-		cpu_clear(cpu, per_cpu(cpu_sibling_map, i));
-	for_each_cpu_mask(i, cpu_core_map[cpu])
-		cpu_clear(cpu, cpu_core_map[i]);
+	for_each_cpu(i, &per_cpu(cpu_sibling_map, cpu))
+		cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, i));
+	for_each_cpu(i, &cpu_core_map[cpu])
+		cpumask_clear_cpu(cpu, &cpu_core_map[i]);
 
 	per_cpu(cpu_sibling_map, cpu) = cpu_core_map[cpu] = CPU_MASK_NONE;
 }
@@ -592,12 +592,12 @@ remove_siblinginfo(int cpu)
 
 	if (cpu_data(cpu)->threads_per_core == 1 &&
 	    cpu_data(cpu)->cores_per_socket == 1) {
-		cpu_clear(cpu, cpu_core_map[cpu]);
-		cpu_clear(cpu, per_cpu(cpu_sibling_map, cpu));
+		cpumask_clear_cpu(cpu, &cpu_core_map[cpu]);
+		cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, cpu));
 		return;
 	}
 
-	last = (cpus_weight(cpu_core_map[cpu]) == 1 ? 1 : 0);
+	last = (cpumask_weight(&cpu_core_map[cpu]) == 1 ? 1 : 0);
 
 	/* remove it from all sibling map's */
 	clear_cpu_sibling_map(cpu);
@@ -673,7 +673,7 @@ int __cpu_disable(void)
 	remove_siblinginfo(cpu);
 	fixup_irqs();
 	local_flush_tlb_all();
-	cpu_clear(cpu, cpu_callin_map);
+	cpumask_clear_cpu(cpu, &cpu_callin_map);
 	return 0;
 }
 
@@ -718,11 +718,13 @@ static inline void set_cpu_sibling_map(int cpu)
 
 	for_each_online_cpu(i) {
 		if ((cpu_data(cpu)->socket_id == cpu_data(i)->socket_id)) {
-			cpu_set(i, cpu_core_map[cpu]);
-			cpu_set(cpu, cpu_core_map[i]);
+			cpumask_set_cpu(i, &cpu_core_map[cpu]);
+			cpumask_set_cpu(cpu, &cpu_core_map[i]);
 			if (cpu_data(cpu)->core_id == cpu_data(i)->core_id) {
-				cpu_set(i, per_cpu(cpu_sibling_map, cpu));
-				cpu_set(cpu, per_cpu(cpu_sibling_map, i));
+				cpumask_set_cpu(i,
+						&per_cpu(cpu_sibling_map, cpu));
+				cpumask_set_cpu(cpu,
+						&per_cpu(cpu_sibling_map, i));
 			}
 		}
 	}
@@ -742,7 +744,7 @@ __cpu_up(unsigned int cpu, struct task_struct *tidle)
 	 * Already booted cpu? not valid anymore since we dont
 	 * do idle loop tightspin anymore.
 	 */
-	if (cpu_isset(cpu, cpu_callin_map))
+	if (cpumask_test_cpu(cpu, &cpu_callin_map))
 		return -EINVAL;
 
 	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
@@ -753,8 +755,8 @@ __cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	if (cpu_data(cpu)->threads_per_core == 1 &&
 	    cpu_data(cpu)->cores_per_socket == 1) {
-		cpu_set(cpu, per_cpu(cpu_sibling_map, cpu));
-		cpu_set(cpu, cpu_core_map[cpu]);
+		cpumask_set_cpu(cpu, &per_cpu(cpu_sibling_map, cpu));
+		cpumask_set_cpu(cpu, &cpu_core_map[cpu]);
 		return 0;
 	}
 
diff --git a/arch/ia64/kernel/topology.c b/arch/ia64/kernel/topology.c
index 965ab42fabb0..c01fe8991244 100644
--- a/arch/ia64/kernel/topology.c
+++ b/arch/ia64/kernel/topology.c
@@ -148,7 +148,7 @@ static void cache_shared_cpu_map_setup(unsigned int cpu,
 
 	if (cpu_data(cpu)->threads_per_core <= 1 &&
 		cpu_data(cpu)->cores_per_socket <= 1) {
-		cpu_set(cpu, this_leaf->shared_cpu_map);
+		cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map);
 		return;
 	}
 
@@ -164,7 +164,7 @@ static void cache_shared_cpu_map_setup(unsigned int cpu,
 			if (cpu_data(cpu)->socket_id == cpu_data(j)->socket_id
 				&& cpu_data(j)->core_id == csi.log1_cid
 				&& cpu_data(j)->thread_id == csi.log1_tid)
-				cpu_set(j, this_leaf->shared_cpu_map);
+				cpumask_set_cpu(j, &this_leaf->shared_cpu_map);
 
 		i++;
 	} while (i < num_shared &&
@@ -177,7 +177,7 @@ static void cache_shared_cpu_map_setup(unsigned int cpu,
 static void cache_shared_cpu_map_setup(unsigned int cpu,
 		struct cache_info * this_leaf)
 {
-	cpu_set(cpu, this_leaf->shared_cpu_map);
+	cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map);
 	return;
 }
 #endif
-- 
2.1.0
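
Several of the converted sites (e.g. find_unassigned_vector()) still
keep a fixed-size cpumask_t on the stack; the end goal of
CPUMASK_OFFSTACK is to turn those into cpumask_var_t.  A sketch of that
final form, not part of this series (helper name illustrative):

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int weigh_online(const struct cpumask *domain)
{
	cpumask_var_t mask;
	int w;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;
	cpumask_and(mask, domain, cpu_online_mask);
	w = cpumask_weight(mask);
	free_cpumask_var(mask);
	return w;
}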



* [PATCH 07/16] um: fix up obsolete cpu function usage.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
@ 2015-03-02 11:35   ` Rusty Russell
  2015-03-02 11:47   ` Rusty Russell
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Rusty Russell, Jeff Dike, Richard Weinberger, user-mode-linux-devel

Thanks to spatch.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: user-mode-linux-devel@lists.sourceforge.net
---
 arch/um/kernel/smp.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/um/kernel/smp.c b/arch/um/kernel/smp.c
index 5c8c3ea7db7b..74077892b34a 100644
--- a/arch/um/kernel/smp.c
+++ b/arch/um/kernel/smp.c
@@ -67,12 +67,12 @@ static int idle_proc(void *cpup)
 	os_set_fd_async(cpu_data[cpu].ipi_pipe[0]);
 
 	wmb();
-	if (cpu_test_and_set(cpu, cpu_callin_map)) {
+	if (cpumask_test_and_set_cpu(cpu, &cpu_callin_map)) {
 		printk(KERN_ERR "huh, CPU#%d already present??\n", cpu);
 		BUG();
 	}
 
-	while (!cpu_isset(cpu, smp_commenced_mask))
+	while (!cpumask_test_cpu(cpu, &smp_commenced_mask))
 		cpu_relax();
 
 	notify_cpu_starting(cpu);
@@ -111,7 +111,7 @@ void smp_prepare_cpus(unsigned int maxcpus)
 		set_cpu_possible(i, true);
 
 	set_cpu_online(me, true);
-	cpu_set(me, cpu_callin_map);
+	cpumask_set_cpu(me, &cpu_callin_map);
 
 	err = os_pipe(cpu_data[me].ipi_pipe, 1, 1);
 	if (err < 0)
@@ -127,11 +127,11 @@ void smp_prepare_cpus(unsigned int maxcpus)
 		init_idle(idle, cpu);
 
 		waittime = 200000000;
-		while (waittime-- && !cpu_isset(cpu, cpu_callin_map))
+		while (waittime-- && !cpumask_test_cpu(cpu, &cpu_callin_map))
 			cpu_relax();
 
 		printk(KERN_INFO "%s\n",
-		       cpu_isset(cpu, cpu_calling_map) ? "done" : "failed");
+		       cpumask_test_cpu(cpu, &cpu_calling_map) ? "done" : "failed");
 	}
 }
 
@@ -142,7 +142,7 @@ void smp_prepare_boot_cpu(void)
 
 int __cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	cpu_set(cpu, smp_commenced_mask);
+	cpumask_set_cpu(cpu, &smp_commenced_mask);
 	while (!cpu_online(cpu))
 		mb();
 	return 0;
-- 
2.1.0



* [PATCH 08/16] x86: fix up obsolete cpu function usage.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
                   ` (5 preceding siblings ...)
  2015-03-02 11:35   ` [uml-devel] " Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
  2015-03-02 13:36   ` [tip:x86/cleanups] x86: Fix up obsolete __cpu_set() " tip-bot for Rusty Russell
  2015-03-02 11:35 ` [PATCH 09/16] mips: fix up obsolete cpu " Rusty Russell
  2015-03-02 12:34 ` [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Paul Bolle
  8 siblings, 1 reply; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, x86

Thanks to spatch, plus manual removal of "&*".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: x86@kernel.org
---
 arch/x86/kernel/apic/x2apic_cluster.c | 8 ++++----
 arch/x86/kernel/irq.c                 | 4 ++--
 arch/x86/platform/uv/tlb_uv.c         | 6 +++---
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index e658f21681c8..d9d0bd2faaf4 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -135,12 +135,12 @@ static void init_x2apic_ldr(void)
 
 	per_cpu(x86_cpu_to_logical_apicid, this_cpu) = apic_read(APIC_LDR);
 
-	__cpu_set(this_cpu, per_cpu(cpus_in_cluster, this_cpu));
+	cpumask_set_cpu(this_cpu, per_cpu(cpus_in_cluster, this_cpu));
 	for_each_online_cpu(cpu) {
 		if (x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
 			continue;
-		__cpu_set(this_cpu, per_cpu(cpus_in_cluster, cpu));
-		__cpu_set(cpu, per_cpu(cpus_in_cluster, this_cpu));
+		cpumask_set_cpu(this_cpu, per_cpu(cpus_in_cluster, cpu));
+		cpumask_set_cpu(cpu, per_cpu(cpus_in_cluster, this_cpu));
 	}
 }
 
@@ -195,7 +195,7 @@ static int x2apic_init_cpu_notifier(void)
 
 	BUG_ON(!per_cpu(cpus_in_cluster, cpu) || !per_cpu(ipi_mask, cpu));
 
-	__cpu_set(cpu, per_cpu(cpus_in_cluster, cpu));
+	cpumask_set_cpu(cpu, per_cpu(cpus_in_cluster, cpu));
 	register_hotcpu_notifier(&x2apic_cpu_notifier);
 	return 1;
 }
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 67b1cbe0093a..e5952c225532 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -295,7 +295,7 @@ int check_irq_vectors_for_cpu_disable(void)
 
 	this_cpu = smp_processor_id();
 	cpumask_copy(&online_new, cpu_online_mask);
-	cpu_clear(this_cpu, online_new);
+	cpumask_clear_cpu(this_cpu, &online_new);
 
 	this_count = 0;
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
@@ -307,7 +307,7 @@ int check_irq_vectors_for_cpu_disable(void)
 
 			data = irq_desc_get_irq_data(desc);
 			cpumask_copy(&affinity_new, data->affinity);
-			cpu_clear(this_cpu, affinity_new);
+			cpumask_clear_cpu(this_cpu, &affinity_new);
 
 			/* Do not count inactive or per-cpu irqs. */
 			if (!irq_has_action(irq) || irqd_is_per_cpu(data))
diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
index 994798548b1a..3b6ec42718e4 100644
--- a/arch/x86/platform/uv/tlb_uv.c
+++ b/arch/x86/platform/uv/tlb_uv.c
@@ -415,7 +415,7 @@ static void reset_with_ipi(struct pnmask *distribution, struct bau_control *bcp)
 	struct reset_args reset_args;
 
 	reset_args.sender = sender;
-	cpus_clear(*mask);
+	cpumask_clear(mask);
 	/* find a single cpu for each uvhub in this distribution mask */
 	maskbits = sizeof(struct pnmask) * BITSPERBYTE;
 	/* each bit is a pnode relative to the partition base pnode */
@@ -425,7 +425,7 @@ static void reset_with_ipi(struct pnmask *distribution, struct bau_control *bcp)
 			continue;
 		apnode = pnode + bcp->partition_base_pnode;
 		cpu = pnode_to_first_cpu(apnode, smaster);
-		cpu_set(cpu, *mask);
+		cpumask_set_cpu(cpu, mask);
 	}
 
 	/* IPI all cpus; preemption is already disabled */
@@ -1126,7 +1126,7 @@ const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
 	/* don't actually do a shootdown of the local cpu */
 	cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu));
 
-	if (cpu_isset(cpu, *cpumask))
+	if (cpumask_test_cpu(cpu, cpumask))
 		stat->s_ntargself++;
 
 	bau_desc = bcp->descriptor_base;
-- 
2.1.0
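
A note on the __cpu_set() sites: __cpu_set() was the pointer-argument
form of the obsolete cpu_set() macro (both reduce to set_bit() on the
mask), which is why the x2apic code, whose cpus_in_cluster masks are
cpumask_var_t pointers, called it directly.  cpumask_set_cpu() is the
direct equivalent; as a sketch (function name illustrative):

#include <linux/cpumask.h>

static void mark_cpu(int cpu, struct cpumask *mask)
{
	cpumask_set_cpu(cpu, mask);	/* was: __cpu_set(cpu, mask) */
}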



* [PATCH 09/16] mips: fix up obsolete cpu function usage.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
                   ` (6 preceding siblings ...)
  2015-03-02 11:35 ` [PATCH 08/16] x86: " Rusty Russell
@ 2015-03-02 11:35 ` Rusty Russell
  2015-03-02 12:34 ` [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Paul Bolle
  8 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Rusty Russell, Ralf Baechle, Kevin Cernekee, Florian Fainelli,
	linux-mips

Thanks to spatch, plus manual removal of "&*".  Then a sweep for
for_each_cpu_mask => for_each_cpu.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
---
 arch/mips/bcm63xx/irq.c              |  4 ++--
 arch/mips/cavium-octeon/smp.c        |  4 ++--
 arch/mips/kernel/crash.c             |  8 ++++----
 arch/mips/kernel/mips-mt-fpaff.c     |  4 ++--
 arch/mips/kernel/process.c           |  2 +-
 arch/mips/kernel/smp-bmips.c         |  2 +-
 arch/mips/kernel/smp-cmp.c           |  4 ++--
 arch/mips/kernel/smp-cps.c           |  4 ++--
 arch/mips/kernel/smp-mt.c            |  4 ++--
 arch/mips/kernel/smp.c               | 26 +++++++++++++-------------
 arch/mips/kernel/traps.c             |  6 +++---
 arch/mips/loongson/loongson-3/numa.c |  4 ++--
 arch/mips/loongson/loongson-3/smp.c  |  2 +-
 arch/mips/paravirt/paravirt-smp.c    |  2 +-
 arch/mips/sgi-ip27/ip27-init.c       |  2 +-
 arch/mips/sgi-ip27/ip27-klnuma.c     | 10 +++++-----
 arch/mips/sgi-ip27/ip27-memory.c     |  2 +-
 17 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/arch/mips/bcm63xx/irq.c b/arch/mips/bcm63xx/irq.c
index b94bf44d8d8e..e3e808a6c542 100644
--- a/arch/mips/bcm63xx/irq.c
+++ b/arch/mips/bcm63xx/irq.c
@@ -58,9 +58,9 @@ static inline int enable_irq_for_cpu(int cpu, struct irq_data *d,
 
 #ifdef CONFIG_SMP
 	if (m)
-		enable &= cpu_isset(cpu, *m);
+		enable &= cpumask_test_cpu(cpu, m);
 	else if (irqd_affinity_was_set(d))
-		enable &= cpu_isset(cpu, *d->affinity);
+		enable &= cpumask_test_cpu(cpu, d->affinity);
 #endif
 	return enable;
 }
diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
index 8b1eeffa12ed..56f5d080ef9d 100644
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -72,7 +72,7 @@ static inline void octeon_send_ipi_mask(const struct cpumask *mask,
 {
 	unsigned int i;
 
-	for_each_cpu_mask(i, *mask)
+	for_each_cpu(i, mask)
 		octeon_send_ipi_single(i, action);
 }
 
@@ -239,7 +239,7 @@ static int octeon_cpu_disable(void)
 		return -ENOTSUPP;
 
 	set_cpu_online(cpu, false);
-	cpu_clear(cpu, cpu_callin_map);
+	cpumask_clear_cpu(cpu, &cpu_callin_map);
 	octeon_fixup_irqs();
 
 	flush_cache_all();
diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c
index d21264681e97..d434d5d5ae6e 100644
--- a/arch/mips/kernel/crash.c
+++ b/arch/mips/kernel/crash.c
@@ -25,9 +25,9 @@ static void crash_shutdown_secondary(void *ignore)
 		return;
 
 	local_irq_disable();
-	if (!cpu_isset(cpu, cpus_in_crash))
+	if (!cpumask_test_cpu(cpu, &cpus_in_crash))
 		crash_save_cpu(regs, cpu);
-	cpu_set(cpu, cpus_in_crash);
+	cpumask_set_cpu(cpu, &cpus_in_crash);
 
 	while (!atomic_read(&kexec_ready_to_reboot))
 		cpu_relax();
@@ -50,7 +50,7 @@ static void crash_kexec_prepare_cpus(void)
 	 */
 	pr_emerg("Sending IPI to other cpus...\n");
 	msecs = 10000;
-	while ((cpus_weight(cpus_in_crash) < ncpus) && (--msecs > 0)) {
+	while ((cpumask_weight(&cpus_in_crash) < ncpus) && (--msecs > 0)) {
 		cpu_relax();
 		mdelay(1);
 	}
@@ -66,5 +66,5 @@ void default_machine_crash_shutdown(struct pt_regs *regs)
 	crashing_cpu = smp_processor_id();
 	crash_save_cpu(regs, crashing_cpu);
 	crash_kexec_prepare_cpus();
-	cpu_set(crashing_cpu, cpus_in_crash);
+	cpumask_set_cpu(crashing_cpu, &cpus_in_crash);
 }
diff --git a/arch/mips/kernel/mips-mt-fpaff.c b/arch/mips/kernel/mips-mt-fpaff.c
index 362bb3707e62..3e4491aa6d6b 100644
--- a/arch/mips/kernel/mips-mt-fpaff.c
+++ b/arch/mips/kernel/mips-mt-fpaff.c
@@ -114,8 +114,8 @@ asmlinkage long mipsmt_sys_sched_setaffinity(pid_t pid, unsigned int len,
 	/* Compute new global allowed CPU set if necessary */
 	ti = task_thread_info(p);
 	if (test_ti_thread_flag(ti, TIF_FPUBOUND) &&
-	    cpus_intersects(*new_mask, mt_fpu_cpumask)) {
-		cpus_and(*effective_mask, *new_mask, mt_fpu_cpumask);
+	    cpumask_intersects(new_mask, &mt_fpu_cpumask)) {
+		cpumask_and(effective_mask, new_mask, &mt_fpu_cpumask);
 		retval = set_cpus_allowed_ptr(p, effective_mask);
 	} else {
 		cpumask_copy(effective_mask, new_mask);
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index bf85cc180d91..4501c7a4bd58 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -49,7 +49,7 @@
 void arch_cpu_idle_dead(void)
 {
 	/* What the heck is this check doing ? */
-	if (!cpu_isset(smp_processor_id(), cpu_callin_map))
+	if (!cpumask_test_cpu(smp_processor_id(), &cpu_callin_map))
 		play_dead();
 }
 #endif
diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
index b8bd9340c9c7..fd528d7ea278 100644
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -362,7 +362,7 @@ static int bmips_cpu_disable(void)
 	pr_info("SMP: CPU%d is offline\n", cpu);
 
 	set_cpu_online(cpu, false);
-	cpu_clear(cpu, cpu_callin_map);
+	cpumask_clear_cpu(cpu, &cpu_callin_map);
 	clear_c0_status(IE_IRQ5);
 
 	local_flush_tlb_all();
diff --git a/arch/mips/kernel/smp-cmp.c b/arch/mips/kernel/smp-cmp.c
index e36a859af666..d5e0f949dc48 100644
--- a/arch/mips/kernel/smp-cmp.c
+++ b/arch/mips/kernel/smp-cmp.c
@@ -66,7 +66,7 @@ static void cmp_smp_finish(void)
 #ifdef CONFIG_MIPS_MT_FPAFF
 	/* If we have an FPU, enroll ourselves in the FPU-full mask */
 	if (cpu_has_fpu)
-		cpu_set(smp_processor_id(), mt_fpu_cpumask);
+		cpumask_set_cpu(smp_processor_id(), &mt_fpu_cpumask);
 #endif /* CONFIG_MIPS_MT_FPAFF */
 
 	local_irq_enable();
@@ -110,7 +110,7 @@ void __init cmp_smp_setup(void)
 #ifdef CONFIG_MIPS_MT_FPAFF
 	/* If we have an FPU, enroll ourselves in the FPU-full mask */
 	if (cpu_has_fpu)
-		cpu_set(0, mt_fpu_cpumask);
+		cpumask_set_cpu(0, &mt_fpu_cpumask);
 #endif /* CONFIG_MIPS_MT_FPAFF */
 
 	for (i = 1; i < NR_CPUS; i++) {
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index bed7590e475f..b0fe93e6537e 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -284,7 +284,7 @@ static void cps_smp_finish(void)
 #ifdef CONFIG_MIPS_MT_FPAFF
 	/* If we have an FPU, enroll ourselves in the FPU-full mask */
 	if (cpu_has_fpu)
-		cpu_set(smp_processor_id(), mt_fpu_cpumask);
+		cpumask_set_cpu(smp_processor_id(), &mt_fpu_cpumask);
 #endif /* CONFIG_MIPS_MT_FPAFF */
 
 	local_irq_enable();
@@ -307,7 +307,7 @@ static int cps_cpu_disable(void)
 	atomic_sub(1 << cpu_vpe_id(&current_cpu_data), &core_cfg->vpe_mask);
 	smp_mb__after_atomic();
 	set_cpu_online(cpu, false);
-	cpu_clear(cpu, cpu_callin_map);
+	cpumask_clear_cpu(cpu, &cpu_callin_map);
 
 	return 0;
 }
diff --git a/arch/mips/kernel/smp-mt.c b/arch/mips/kernel/smp-mt.c
index 17ea705f6c40..86311a164ef1 100644
--- a/arch/mips/kernel/smp-mt.c
+++ b/arch/mips/kernel/smp-mt.c
@@ -178,7 +178,7 @@ static void vsmp_smp_finish(void)
 #ifdef CONFIG_MIPS_MT_FPAFF
 	/* If we have an FPU, enroll ourselves in the FPU-full mask */
 	if (cpu_has_fpu)
-		cpu_set(smp_processor_id(), mt_fpu_cpumask);
+		cpumask_set_cpu(smp_processor_id(), &mt_fpu_cpumask);
 #endif /* CONFIG_MIPS_MT_FPAFF */
 
 	local_irq_enable();
@@ -239,7 +239,7 @@ static void __init vsmp_smp_setup(void)
 #ifdef CONFIG_MIPS_MT_FPAFF
 	/* If we have an FPU, enroll ourselves in the FPU-full mask */
 	if (cpu_has_fpu)
-		cpu_set(0, mt_fpu_cpumask);
+		cpumask_set_cpu(0, &mt_fpu_cpumask);
 #endif /* CONFIG_MIPS_MT_FPAFF */
 	if (!cpu_has_mipsmt)
 		return;
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 1c0d8c50b7e1..357acd3ac0e6 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -75,30 +75,30 @@ static inline void set_cpu_sibling_map(int cpu)
 {
 	int i;
 
-	cpu_set(cpu, cpu_sibling_setup_map);
+	cpumask_set_cpu(cpu, &cpu_sibling_setup_map);
 
 	if (smp_num_siblings > 1) {
-		for_each_cpu_mask(i, cpu_sibling_setup_map) {
+		for_each_cpu(i, &cpu_sibling_setup_map) {
 			if (cpu_data[cpu].package == cpu_data[i].package &&
 				    cpu_data[cpu].core == cpu_data[i].core) {
-				cpu_set(i, cpu_sibling_map[cpu]);
-				cpu_set(cpu, cpu_sibling_map[i]);
+				cpumask_set_cpu(i, &cpu_sibling_map[cpu]);
+				cpumask_set_cpu(cpu, &cpu_sibling_map[i]);
 			}
 		}
 	} else
-		cpu_set(cpu, cpu_sibling_map[cpu]);
+		cpumask_set_cpu(cpu, &cpu_sibling_map[cpu]);
 }
 
 static inline void set_cpu_core_map(int cpu)
 {
 	int i;
 
-	cpu_set(cpu, cpu_core_setup_map);
+	cpumask_set_cpu(cpu, &cpu_core_setup_map);
 
-	for_each_cpu_mask(i, cpu_core_setup_map) {
+	for_each_cpu(i, &cpu_core_setup_map) {
 		if (cpu_data[cpu].package == cpu_data[i].package) {
-			cpu_set(i, cpu_core_map[cpu]);
-			cpu_set(cpu, cpu_core_map[i]);
+			cpumask_set_cpu(i, &cpu_core_map[cpu]);
+			cpumask_set_cpu(cpu, &cpu_core_map[i]);
 		}
 	}
 }
@@ -138,7 +138,7 @@ asmlinkage void start_secondary(void)
 	cpu = smp_processor_id();
 	cpu_data[cpu].udelay_val = loops_per_jiffy;
 
-	cpu_set(cpu, cpu_coherent_mask);
+	cpumask_set_cpu(cpu, &cpu_coherent_mask);
 	notify_cpu_starting(cpu);
 
 	set_cpu_online(cpu, true);
@@ -146,7 +146,7 @@ asmlinkage void start_secondary(void)
 	set_cpu_sibling_map(cpu);
 	set_cpu_core_map(cpu);
 
-	cpu_set(cpu, cpu_callin_map);
+	cpumask_set_cpu(cpu, &cpu_callin_map);
 
 	synchronise_count_slave(cpu);
 
@@ -210,7 +210,7 @@ void smp_prepare_boot_cpu(void)
 {
 	set_cpu_possible(0, true);
 	set_cpu_online(0, true);
-	cpu_set(0, cpu_callin_map);
+	cpumask_set_cpu(0, &cpu_callin_map);
 }
 
 int __cpu_up(unsigned int cpu, struct task_struct *tidle)
@@ -220,7 +220,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
 	/*
 	 * Trust is futile.  We should really have timeouts ...
 	 */
-	while (!cpu_isset(cpu, cpu_callin_map))
+	while (!cpumask_test_cpu(cpu, &cpu_callin_map))
 		udelay(100);
 
 	synchronise_count_master(cpu);
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 33984c04b60b..b05b9462c728 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1121,13 +1121,13 @@ static void mt_ase_fp_affinity(void)
 		 * restricted the allowed set to exclude any CPUs with FPUs,
 		 * we'll skip the procedure.
 		 */
-		if (cpus_intersects(current->cpus_allowed, mt_fpu_cpumask)) {
+		if (cpumask_intersects(&current->cpus_allowed, &mt_fpu_cpumask)) {
 			cpumask_t tmask;
 
 			current->thread.user_cpus_allowed
 				= current->cpus_allowed;
-			cpus_and(tmask, current->cpus_allowed,
-				mt_fpu_cpumask);
+			cpumask_and(&tmask, &current->cpus_allowed,
+				    &mt_fpu_cpumask);
 			set_cpus_allowed_ptr(current, &tmask);
 			set_thread_flag(TIF_FPUBOUND);
 		}
diff --git a/arch/mips/loongson/loongson-3/numa.c b/arch/mips/loongson/loongson-3/numa.c
index 6cae0e75de27..12d14ed48778 100644
--- a/arch/mips/loongson/loongson-3/numa.c
+++ b/arch/mips/loongson/loongson-3/numa.c
@@ -233,7 +233,7 @@ static __init void prom_meminit(void)
 		if (node_online(node)) {
 			szmem(node);
 			node_mem_init(node);
-			cpus_clear(__node_data[(node)]->cpumask);
+			cpumask_clear(&__node_data[(node)]->cpumask);
 		}
 	}
 	for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {
@@ -244,7 +244,7 @@ static __init void prom_meminit(void)
 		if (loongson_sysconf.reserved_cpus_mask & (1<<cpu))
 			continue;
 
-		cpu_set(active_cpu, __node_data[(node)]->cpumask);
+		cpumask_set_cpu(active_cpu, &__node_data[(node)]->cpumask);
 		pr_info("NUMA: set cpumask cpu %d on node %d\n", active_cpu, node);
 
 		active_cpu++;
diff --git a/arch/mips/loongson/loongson-3/smp.c b/arch/mips/loongson/loongson-3/smp.c
index e2eb688b5434..e3c68b5da18d 100644
--- a/arch/mips/loongson/loongson-3/smp.c
+++ b/arch/mips/loongson/loongson-3/smp.c
@@ -408,7 +408,7 @@ static int loongson3_cpu_disable(void)
 		return -EBUSY;
 
 	set_cpu_online(cpu, false);
-	cpu_clear(cpu, cpu_callin_map);
+	cpumask_clear_cpu(cpu, &cpu_callin_map);
 	local_irq_save(flags);
 	fixup_irqs();
 	local_irq_restore(flags);
diff --git a/arch/mips/paravirt/paravirt-smp.c b/arch/mips/paravirt/paravirt-smp.c
index 0164b0c48352..42181c7105df 100644
--- a/arch/mips/paravirt/paravirt-smp.c
+++ b/arch/mips/paravirt/paravirt-smp.c
@@ -75,7 +75,7 @@ static void paravirt_send_ipi_mask(const struct cpumask *mask, unsigned int acti
 {
 	unsigned int cpu;
 
-	for_each_cpu_mask(cpu, *mask)
+	for_each_cpu(cpu, mask)
 		paravirt_send_ipi_single(cpu, action);
 }
 
diff --git a/arch/mips/sgi-ip27/ip27-init.c b/arch/mips/sgi-ip27/ip27-init.c
index ee736bd103f8..570098bfdf87 100644
--- a/arch/mips/sgi-ip27/ip27-init.c
+++ b/arch/mips/sgi-ip27/ip27-init.c
@@ -60,7 +60,7 @@ static void per_hub_init(cnodeid_t cnode)
 	nasid_t nasid = COMPACT_TO_NASID_NODEID(cnode);
 	int i;
 
-	cpu_set(smp_processor_id(), hub->h_cpus);
+	cpumask_set_cpu(smp_processor_id(), &hub->h_cpus);
 
 	if (test_and_set_bit(cnode, hub_init_mask))
 		return;
diff --git a/arch/mips/sgi-ip27/ip27-klnuma.c b/arch/mips/sgi-ip27/ip27-klnuma.c
index ecbb62f339c5..bda90cf87e8c 100644
--- a/arch/mips/sgi-ip27/ip27-klnuma.c
+++ b/arch/mips/sgi-ip27/ip27-klnuma.c
@@ -29,8 +29,8 @@ static cpumask_t ktext_repmask;
 void __init setup_replication_mask(void)
 {
 	/* Set only the master cnode's bit.  The master cnode is always 0. */
-	cpus_clear(ktext_repmask);
-	cpu_set(0, ktext_repmask);
+	cpumask_clear(&ktext_repmask);
+	cpumask_set_cpu(0, &ktext_repmask);
 
 #ifdef CONFIG_REPLICATE_KTEXT
 #ifndef CONFIG_MAPPED_KERNEL
@@ -43,7 +43,7 @@ void __init setup_replication_mask(void)
 			if (cnode == 0)
 				continue;
 			/* Advertise that we have a copy of the kernel */
-			cpu_set(cnode, ktext_repmask);
+			cpumask_set_cpu(cnode, &ktext_repmask);
 		}
 	}
 #endif
@@ -99,7 +99,7 @@ void __init replicate_kernel_text()
 		client_nasid = COMPACT_TO_NASID_NODEID(cnode);
 
 		/* Check if this node should get a copy of the kernel */
-		if (cpu_isset(cnode, ktext_repmask)) {
+		if (cpumask_test_cpu(cnode, &ktext_repmask)) {
 			server_nasid = client_nasid;
 			copy_kernel(server_nasid);
 		}
@@ -124,7 +124,7 @@ unsigned long node_getfirstfree(cnodeid_t cnode)
 	loadbase += 16777216;
 #endif
 	offset = PAGE_ALIGN((unsigned long)(&_end)) - loadbase;
-	if ((cnode == 0) || (cpu_isset(cnode, ktext_repmask)))
+	if ((cnode == 0) || (cpumask_test_cpu(cnode, &ktext_repmask)))
 		return TO_NODE(nasid, offset) >> PAGE_SHIFT;
 	else
 		return KDM_TO_PHYS(PAGE_ALIGN(SYMMON_STK_ADDR(nasid, 0))) >> PAGE_SHIFT;
diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c
index 0b68469e063f..8d0eb2643248 100644
--- a/arch/mips/sgi-ip27/ip27-memory.c
+++ b/arch/mips/sgi-ip27/ip27-memory.c
@@ -404,7 +404,7 @@ static void __init node_mem_init(cnodeid_t node)
 	NODE_DATA(node)->node_start_pfn = start_pfn;
 	NODE_DATA(node)->node_spanned_pages = end_pfn - start_pfn;
 
-	cpus_clear(hub_data(node)->h_cpus);
+	cpumask_clear(&hub_data(node)->h_cpus);
 
 	slot_freepfn += PFN_UP(sizeof(struct pglist_data) +
 			       sizeof(struct hub_data));
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 03/16] ia64: Use for_each_cpu_and() and cpumask_any_and() instead of temp var.
@ 2015-03-02 11:47   ` Rusty Russell
  0 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:47 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, Tony Luck, Fenghua Yu, linux-ia64

Just a bit of manual neatening, before spatch cleans the rest.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/kernel/irq_ia64.c |  4 +---
 arch/ia64/kernel/msi_ia64.c | 10 ++++------
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 698d8fefde6c..3329177c262e 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -161,7 +161,6 @@ int bind_irq_vector(int irq, int vector, cpumask_t domain)
 static void __clear_irq_vector(int irq)
 {
 	int vector, cpu;
-	cpumask_t mask;
 	cpumask_t domain;
 	struct irq_cfg *cfg = &irq_cfg[irq];
 
@@ -169,8 +168,7 @@ static void __clear_irq_vector(int irq)
 	BUG_ON(cfg->vector == IRQ_VECTOR_UNASSIGNED);
 	vector = cfg->vector;
 	domain = cfg->domain;
-	cpumask_and(&mask, &cfg->domain, cpu_online_mask);
-	for_each_cpu_mask(cpu, mask)
+	for_each_cpu_and(cpu, &cfg->domain, cpu_online_mask)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 	cfg->vector = IRQ_VECTOR_UNASSIGNED;
 	cfg->domain = CPU_MASK_NONE;
diff --git a/arch/ia64/kernel/msi_ia64.c b/arch/ia64/kernel/msi_ia64.c
index 8ae36ea177d3..9dd7464f8c17 100644
--- a/arch/ia64/kernel/msi_ia64.c
+++ b/arch/ia64/kernel/msi_ia64.c
@@ -47,15 +47,14 @@ int ia64_setup_msi_irq(struct pci_dev *pdev, struct msi_desc *desc)
 	struct msi_msg	msg;
 	unsigned long	dest_phys_id;
 	int	irq, vector;
-	cpumask_t mask;
 
 	irq = create_irq();
 	if (irq < 0)
 		return irq;
 
 	irq_set_msi_desc(irq, desc);
-	cpumask_and(&mask, &(irq_to_domain(irq)), cpu_online_mask);
-	dest_phys_id = cpu_physical_id(first_cpu(mask));
+	dest_phys_id = cpu_physical_id(cpumask_any_and(&(irq_to_domain(irq)),
+						       cpu_online_mask));
 	vector = irq_to_vector(irq);
 
 	msg.address_hi = 0;
@@ -171,10 +170,9 @@ msi_compose_msg(struct pci_dev *pdev, unsigned int irq, struct msi_msg *msg)
 {
 	struct irq_cfg *cfg = irq_cfg + irq;
 	unsigned dest;
-	cpumask_t mask;
 
-	cpumask_and(&mask, &(irq_to_domain(irq)), cpu_online_mask);
-	dest = cpu_physical_id(first_cpu(mask));
+	dest = cpu_physical_id(cpumask_first_and(&(irq_to_domain(irq)),
+						 cpu_online_mask));
 
 	msg->address_hi = 0;
 	msg->address_lo =
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH 06/16] ia64: fix up obsolete cpu function usage.
@ 2015-03-02 11:47   ` Rusty Russell
  0 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 11:47 UTC (permalink / raw)
  To: linux-kernel; +Cc: Rusty Russell, Tony Luck, Fenghua Yu, linux-ia64

Thanks to spatch, then a sweep for for_each_cpu_mask => for_each_cpu.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
---
 arch/ia64/include/asm/acpi.h |  6 +++---
 arch/ia64/kernel/acpi.c      |  2 +-
 arch/ia64/kernel/iosapic.c   |  2 +-
 arch/ia64/kernel/irq_ia64.c  | 28 ++++++++++++++--------------
 arch/ia64/kernel/mca.c       | 10 +++++-----
 arch/ia64/kernel/numa.c      | 10 +++++-----
 arch/ia64/kernel/salinfo.c   | 24 ++++++++++++------------
 arch/ia64/kernel/setup.c     | 11 ++++++-----
 arch/ia64/kernel/smp.c       |  6 +++---
 arch/ia64/kernel/smpboot.c   | 42 ++++++++++++++++++++++--------------------
 arch/ia64/kernel/topology.c  |  6 +++---
 11 files changed, 75 insertions(+), 72 deletions(-)

diff --git a/arch/ia64/include/asm/acpi.h b/arch/ia64/include/asm/acpi.h
index a1d91ab4c5ef..aa0fdf125aba 100644
--- a/arch/ia64/include/asm/acpi.h
+++ b/arch/ia64/include/asm/acpi.h
@@ -117,7 +117,7 @@ static inline void arch_acpi_set_pdc_bits(u32 *buf)
 #ifdef CONFIG_ACPI_NUMA
 extern cpumask_t early_cpu_possible_map;
 #define for_each_possible_early_cpu(cpu)  \
-	for_each_cpu_mask((cpu), early_cpu_possible_map)
+	for_each_cpu((cpu), &early_cpu_possible_map)
 
 static inline void per_cpu_scan_finalize(int min_cpus, int reserve_cpus)
 {
@@ -125,13 +125,13 @@ static inline void per_cpu_scan_finalize(int min_cpus, int reserve_cpus)
 	int cpu;
 	int next_nid = 0;
 
-	low_cpu = cpus_weight(early_cpu_possible_map);
+	low_cpu = cpumask_weight(&early_cpu_possible_map);
 
 	high_cpu = max(low_cpu, min_cpus);
 	high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
 
 	for (cpu = low_cpu; cpu < high_cpu; cpu++) {
-		cpu_set(cpu, early_cpu_possible_map);
+		cpumask_set_cpu(cpu, &early_cpu_possible_map);
 		if (node_cpuid[cpu].nid == NUMA_NO_NODE) {
 			node_cpuid[cpu].nid = next_nid;
 			next_nid++;
diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
index 2c4498919d3c..35bf22cc71b7 100644
--- a/arch/ia64/kernel/acpi.c
+++ b/arch/ia64/kernel/acpi.c
@@ -483,7 +483,7 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
 	    (pa->apic_id << 8) | (pa->local_sapic_eid);
 	/* nid should be overridden as logical node id later */
 	node_cpuid[srat_num_cpus].nid = pxm;
-	cpu_set(srat_num_cpus, early_cpu_possible_map);
+	cpumask_set_cpu(srat_num_cpus, &early_cpu_possible_map);
 	srat_num_cpus++;
 }
 
diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index cd44a57c73be..bc9501e36e77 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -690,7 +690,7 @@ skip_numa_setup:
 	do {
 		if (++cpu >= nr_cpu_ids)
 			cpu = 0;
-	} while (!cpu_online(cpu) || !cpu_isset(cpu, domain));
+	} while (!cpu_online(cpu) || !cpumask_test_cpu(cpu, &domain));
 
 	return cpu_physical_id(cpu);
 #else  /* CONFIG_SMP */
diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 3329177c262e..9f40d972969c 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -109,13 +109,13 @@ static inline int find_unassigned_vector(cpumask_t domain)
 	int pos, vector;
 
 	cpumask_and(&mask, &domain, cpu_online_mask);
-	if (cpus_empty(mask))
+	if (cpumask_empty(&mask))
 		return -EINVAL;
 
 	for (pos = 0; pos < IA64_NUM_DEVICE_VECTORS; pos++) {
 		vector = IA64_FIRST_DEVICE_VECTOR + pos;
-		cpus_and(mask, domain, vector_table[vector]);
-		if (!cpus_empty(mask))
+		cpumask_and(&mask, &domain, &vector_table[vector]);
+		if (!cpumask_empty(&mask))
 			continue;
 		return vector;
 	}
@@ -132,18 +132,18 @@ static int __bind_irq_vector(int irq, int vector, cpumask_t domain)
 	BUG_ON((unsigned)vector >= IA64_NUM_VECTORS);
 
 	cpumask_and(&mask, &domain, cpu_online_mask);
-	if (cpus_empty(mask))
+	if (cpumask_empty(&mask))
 		return -EINVAL;
-	if ((cfg->vector == vector) && cpus_equal(cfg->domain, domain))
+	if ((cfg->vector == vector) && cpumask_equal(&cfg->domain, &domain))
 		return 0;
 	if (cfg->vector != IRQ_VECTOR_UNASSIGNED)
 		return -EBUSY;
-	for_each_cpu_mask(cpu, mask)
+	for_each_cpu(cpu, &mask)
 		per_cpu(vector_irq, cpu)[vector] = irq;
 	cfg->vector = vector;
 	cfg->domain = domain;
 	irq_status[irq] = IRQ_USED;
-	cpus_or(vector_table[vector], vector_table[vector], domain);
+	cpumask_or(&vector_table[vector], &vector_table[vector], &domain);
 	return 0;
 }
 
@@ -242,7 +242,7 @@ void __setup_vector_irq(int cpu)
 		per_cpu(vector_irq, cpu)[vector] = -1;
 	/* Mark the inuse vectors */
 	for (irq = 0; irq < NR_IRQS; ++irq) {
-		if (!cpu_isset(cpu, irq_cfg[irq].domain))
+		if (!cpumask_test_cpu(cpu, &irq_cfg[irq].domain))
 			continue;
 		vector = irq_to_vector(irq);
 		per_cpu(vector_irq, cpu)[vector] = irq;
@@ -273,7 +273,7 @@ static int __irq_prepare_move(int irq, int cpu)
 		return -EBUSY;
 	if (cfg->vector == IRQ_VECTOR_UNASSIGNED || !cpu_online(cpu))
 		return -EINVAL;
-	if (cpu_isset(cpu, cfg->domain))
+	if (cpumask_test_cpu(cpu, &cfg->domain))
 		return 0;
 	domain = vector_allocation_domain(cpu);
 	vector = find_unassigned_vector(domain);
@@ -307,12 +307,12 @@ void irq_complete_move(unsigned irq)
 	if (likely(!cfg->move_in_progress))
 		return;
 
-	if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
+	if (unlikely(cpumask_test_cpu(smp_processor_id(), &cfg->old_domain)))
 		return;
 
 	cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
-	cfg->move_cleanup_count = cpus_weight(cleanup_mask);
-	for_each_cpu_mask(i, cleanup_mask)
+	cfg->move_cleanup_count = cpumask_weight(&cleanup_mask);
+	for_each_cpu(i, &cleanup_mask)
 		platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
 	cfg->move_in_progress = 0;
 }
@@ -338,12 +338,12 @@ static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
 		if (!cfg->move_cleanup_count)
 			goto unlock;
 
-		if (!cpu_isset(me, cfg->old_domain))
+		if (!cpumask_test_cpu(me, &cfg->old_domain))
 			goto unlock;
 
 		spin_lock_irqsave(&vector_lock, flags);
 		__this_cpu_write(vector_irq[vector], -1);
-		cpu_clear(me, vector_table[vector]);
+		cpumask_clear_cpu(me, &vector_table[vector]);
 		spin_unlock_irqrestore(&vector_lock, flags);
 		cfg->move_cleanup_count--;
 	unlock:
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 8bfd36af46f8..dd5801eb4c69 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1293,7 +1293,7 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
 		monarch_cpu = cpu;
 		sos->monarch = 1;
 	} else {
-		cpu_set(cpu, mca_cpu);
+		cpumask_set_cpu(cpu, &mca_cpu);
 		sos->monarch = 0;
 	}
 	mprintk(KERN_INFO "Entered OS MCA handler. PSP=%lx cpu=%d "
@@ -1316,7 +1316,7 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
 		 */
 		ia64_mca_wakeup_all();
 	} else {
-		while (cpu_isset(cpu, mca_cpu))
+		while (cpumask_test_cpu(cpu, &mca_cpu))
 			cpu_relax();	/* spin until monarch wakes us */
 	}
 
@@ -1355,9 +1355,9 @@ ia64_mca_handler(struct pt_regs *regs, struct switch_stack *sw,
 		 * and put this cpu in the rendez loop.
 		 */
 		for_each_online_cpu(i) {
-			if (cpu_isset(i, mca_cpu)) {
+			if (cpumask_test_cpu(i, &mca_cpu)) {
 				monarch_cpu = i;
-				cpu_clear(i, mca_cpu);	/* wake next cpu */
+				cpumask_clear_cpu(i, &mca_cpu);	/* wake next cpu */
 				while (monarch_cpu != -1)
 					cpu_relax();	/* spin until last cpu leaves */
 				set_curr_task(cpu, previous_current);
@@ -1822,7 +1822,7 @@ format_mca_init_stack(void *mca_data, unsigned long offset,
 	ti->cpu = cpu;
 	p->stack = ti;
 	p->state = TASK_UNINTERRUPTIBLE;
-	cpu_set(cpu, p->cpus_allowed);
+	cpumask_set_cpu(cpu, &p->cpus_allowed);
 	INIT_LIST_HEAD(&p->tasks);
 	p->parent = p->real_parent = p->group_leader = p;
 	INIT_LIST_HEAD(&p->children);
diff --git a/arch/ia64/kernel/numa.c b/arch/ia64/kernel/numa.c
index d288cde93606..92c376279c6d 100644
--- a/arch/ia64/kernel/numa.c
+++ b/arch/ia64/kernel/numa.c
@@ -39,7 +39,7 @@ void map_cpu_to_node(int cpu, int nid)
 	}
 	/* sanity check first */
 	oldnid = cpu_to_node_map[cpu];
-	if (cpu_isset(cpu, node_to_cpu_mask[oldnid])) {
+	if (cpumask_test_cpu(cpu, &node_to_cpu_mask[oldnid])) {
 		return; /* nothing to do */
 	}
 	/* we don't have cpu-driven node hot add yet...
@@ -47,16 +47,16 @@ void map_cpu_to_node(int cpu, int nid)
 	if (!node_online(nid))
 		nid = first_online_node;
 	cpu_to_node_map[cpu] = nid;
-	cpu_set(cpu, node_to_cpu_mask[nid]);
+	cpumask_set_cpu(cpu, &node_to_cpu_mask[nid]);
 	return;
 }
 
 void unmap_cpu_from_node(int cpu, int nid)
 {
-	WARN_ON(!cpu_isset(cpu, node_to_cpu_mask[nid]));
+	WARN_ON(!cpumask_test_cpu(cpu, &node_to_cpu_mask[nid]));
 	WARN_ON(cpu_to_node_map[cpu] != nid);
 	cpu_to_node_map[cpu] = 0;
-	cpu_clear(cpu, node_to_cpu_mask[nid]);
+	cpumask_clear_cpu(cpu, &node_to_cpu_mask[nid]);
 }
 
 
@@ -71,7 +71,7 @@ void __init build_cpu_to_node_map(void)
 	int cpu, i, node;
 
 	for(node=0; node < MAX_NUMNODES; node++)
-		cpus_clear(node_to_cpu_mask[node]);
+		cpumask_clear(&node_to_cpu_mask[node]);
 
 	for_each_possible_early_cpu(cpu) {
 		node = -1;
diff --git a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c
index ee9719eebb1e..1eeffb7fbb16 100644
--- a/arch/ia64/kernel/salinfo.c
+++ b/arch/ia64/kernel/salinfo.c
@@ -256,7 +256,7 @@ salinfo_log_wakeup(int type, u8 *buffer, u64 size, int irqsafe)
 			data_saved->buffer = buffer;
 		}
 	}
-	cpu_set(smp_processor_id(), data->cpu_event);
+	cpumask_set_cpu(smp_processor_id(), &data->cpu_event);
 	if (irqsafe) {
 		salinfo_work_to_do(data);
 		spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -274,7 +274,7 @@ salinfo_timeout_check(struct salinfo_data *data)
 	unsigned long flags;
 	if (!data->open)
 		return;
-	if (!cpus_empty(data->cpu_event)) {
+	if (!cpumask_empty(&data->cpu_event)) {
 		spin_lock_irqsave(&data_saved_lock, flags);
 		salinfo_work_to_do(data);
 		spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -308,7 +308,7 @@ salinfo_event_read(struct file *file, char __user *buffer, size_t count, loff_t
 	int i, n, cpu = -1;
 
 retry:
-	if (cpus_empty(data->cpu_event) && down_trylock(&data->mutex)) {
+	if (cpumask_empty(&data->cpu_event) && down_trylock(&data->mutex)) {
 		if (file->f_flags & O_NONBLOCK)
 			return -EAGAIN;
 		if (down_interruptible(&data->mutex))
@@ -317,9 +317,9 @@ retry:
 
 	n = data->cpu_check;
 	for (i = 0; i < nr_cpu_ids; i++) {
-		if (cpu_isset(n, data->cpu_event)) {
+		if (cpumask_test_cpu(n, &data->cpu_event)) {
 			if (!cpu_online(n)) {
-				cpu_clear(n, data->cpu_event);
+				cpumask_clear_cpu(n, &data->cpu_event);
 				continue;
 			}
 			cpu = n;
@@ -451,7 +451,7 @@ retry:
 		call_on_cpu(cpu, salinfo_log_read_cpu, data);
 	if (!data->log_size) {
 		data->state = STATE_NO_DATA;
-		cpu_clear(cpu, data->cpu_event);
+		cpumask_clear_cpu(cpu, &data->cpu_event);
 	} else {
 		data->state = STATE_LOG_RECORD;
 	}
@@ -491,11 +491,11 @@ salinfo_log_clear(struct salinfo_data *data, int cpu)
 	unsigned long flags;
 	spin_lock_irqsave(&data_saved_lock, flags);
 	data->state = STATE_NO_DATA;
-	if (!cpu_isset(cpu, data->cpu_event)) {
+	if (!cpumask_test_cpu(cpu, &data->cpu_event)) {
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 		return 0;
 	}
-	cpu_clear(cpu, data->cpu_event);
+	cpumask_clear_cpu(cpu, &data->cpu_event);
 	if (data->saved_num) {
 		shift1_data_saved(data, data->saved_num - 1);
 		data->saved_num = 0;
@@ -509,7 +509,7 @@ salinfo_log_clear(struct salinfo_data *data, int cpu)
 	salinfo_log_new_read(cpu, data);
 	if (data->state == STATE_LOG_RECORD) {
 		spin_lock_irqsave(&data_saved_lock, flags);
-		cpu_set(cpu, data->cpu_event);
+		cpumask_set_cpu(cpu, &data->cpu_event);
 		salinfo_work_to_do(data);
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 	}
@@ -581,7 +581,7 @@ salinfo_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu
 		for (i = 0, data = salinfo_data;
 		     i < ARRAY_SIZE(salinfo_data);
 		     ++i, ++data) {
-			cpu_set(cpu, data->cpu_event);
+			cpumask_set_cpu(cpu, &data->cpu_event);
 			salinfo_work_to_do(data);
 		}
 		spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -601,7 +601,7 @@ salinfo_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu
 					shift1_data_saved(data, j);
 				}
 			}
-			cpu_clear(cpu, data->cpu_event);
+			cpumask_clear_cpu(cpu, &data->cpu_event);
 		}
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 		break;
@@ -659,7 +659,7 @@ salinfo_init(void)
 
 		/* we missed any events before now */
 		for_each_online_cpu(j)
-			cpu_set(j, data->cpu_event);
+			cpumask_set_cpu(j, &data->cpu_event);
 
 		*sdir++ = dir;
 	}
diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
index d86669bcdfb2..b9761389cb8d 100644
--- a/arch/ia64/kernel/setup.c
+++ b/arch/ia64/kernel/setup.c
@@ -562,8 +562,8 @@ setup_arch (char **cmdline_p)
 #  ifdef CONFIG_ACPI_HOTPLUG_CPU
 	prefill_possible_map();
 #  endif
-	per_cpu_scan_finalize((cpus_weight(early_cpu_possible_map) == 0 ?
-		32 : cpus_weight(early_cpu_possible_map)),
+	per_cpu_scan_finalize((cpumask_weight(&early_cpu_possible_map) == 0 ?
+		32 : cpumask_weight(&early_cpu_possible_map)),
 		additional_cpus > 0 ? additional_cpus : 0);
 # endif
 #endif /* CONFIG_APCI_BOOT */
@@ -702,7 +702,8 @@ show_cpuinfo (struct seq_file *m, void *v)
 		   c->itc_freq / 1000000, c->itc_freq % 1000000,
 		   lpj*HZ/500000, (lpj*HZ/5000) % 100);
 #ifdef CONFIG_SMP
-	seq_printf(m, "siblings   : %u\n", cpus_weight(cpu_core_map[cpunum]));
+	seq_printf(m, "siblings   : %u\n",
+		   cpumask_weight(&cpu_core_map[cpunum]));
 	if (c->socket_id != -1)
 		seq_printf(m, "physical id: %u\n", c->socket_id);
 	if (c->threads_per_core > 1 || c->cores_per_socket > 1)
@@ -933,8 +934,8 @@ cpu_init (void)
 	 * (must be done after per_cpu area is setup)
 	 */
 	if (smp_processor_id() == 0) {
-		cpu_set(0, per_cpu(cpu_sibling_map, 0));
-		cpu_set(0, cpu_core_map[0]);
+		cpumask_set_cpu(0, &per_cpu(cpu_sibling_map, 0));
+		cpumask_set_cpu(0, &cpu_core_map[0]);
 	} else {
 		/*
 		 * Set ar.k3 so that assembly code in MCA handler can compute
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 9fcd4e63048f..7f706d4f84f7 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -262,11 +262,11 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 	preempt_disable();
 	mycpu = smp_processor_id();
 
-	for_each_cpu_mask(cpu, cpumask)
+	for_each_cpu(cpu, &cpumask)
 		counts[cpu] = local_tlb_flush_counts[cpu].count & 0xffff;
 
 	mb();
-	for_each_cpu_mask(cpu, cpumask) {
+	for_each_cpu(cpu, &cpumask) {
 		if (cpu == mycpu)
 			flush_mycpu = 1;
 		else
@@ -276,7 +276,7 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 	if (flush_mycpu)
 		smp_local_flush_tlb();
 
-	for_each_cpu_mask(cpu, cpumask)
+	for_each_cpu(cpu, &cpumask)
 		while(counts[cpu] == (local_tlb_flush_counts[cpu].count & 0xffff))
 			udelay(FLUSH_DELAY);
 
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 547a48d78bd7..15051e9c2c6f 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -434,7 +434,7 @@ smp_callin (void)
 	/*
 	 * Allow the master to continue.
 	 */
-	cpu_set(cpuid, cpu_callin_map);
+	cpumask_set_cpu(cpuid, &cpu_callin_map);
 	Dprintk("Stack on CPU %d at about %p\n",cpuid, &cpuid);
 }
 
@@ -475,13 +475,13 @@ do_boot_cpu (int sapicid, int cpu, struct task_struct *idle)
 	 */
 	Dprintk("Waiting on callin_map ...");
 	for (timeout = 0; timeout < 100000; timeout++) {
-		if (cpu_isset(cpu, cpu_callin_map))
+		if (cpumask_test_cpu(cpu, &cpu_callin_map))
 			break;  /* It has booted */
 		udelay(100);
 	}
 	Dprintk("\n");
 
-	if (!cpu_isset(cpu, cpu_callin_map)) {
+	if (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
 		printk(KERN_ERR "Processor 0x%x/0x%x is stuck.\n", cpu, sapicid);
 		ia64_cpu_to_sapicid[cpu] = -1;
 		set_cpu_online(cpu, false);  /* was set in smp_callin() */
@@ -541,7 +541,7 @@ smp_prepare_cpus (unsigned int max_cpus)
 
 	smp_setup_percpu_timer();
 
-	cpu_set(0, cpu_callin_map);
+	cpumask_set_cpu(0, &cpu_callin_map);
 
 	local_cpu_data->loops_per_jiffy = loops_per_jiffy;
 	ia64_cpu_to_sapicid[0] = boot_cpu_id;
@@ -565,7 +565,7 @@ smp_prepare_cpus (unsigned int max_cpus)
 void smp_prepare_boot_cpu(void)
 {
 	set_cpu_online(smp_processor_id(), true);
-	cpu_set(smp_processor_id(), cpu_callin_map);
+	cpumask_set_cpu(smp_processor_id(), &cpu_callin_map);
 	set_numa_node(cpu_to_node_map[smp_processor_id()]);
 	per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
 	paravirt_post_smp_prepare_boot_cpu();
@@ -577,10 +577,10 @@ clear_cpu_sibling_map(int cpu)
 {
 	int i;
 
-	for_each_cpu_mask(i, per_cpu(cpu_sibling_map, cpu))
-		cpu_clear(cpu, per_cpu(cpu_sibling_map, i));
-	for_each_cpu_mask(i, cpu_core_map[cpu])
-		cpu_clear(cpu, cpu_core_map[i]);
+	for_each_cpu(i, &per_cpu(cpu_sibling_map, cpu))
+		cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, i));
+	for_each_cpu(i, &cpu_core_map[cpu])
+		cpumask_clear_cpu(cpu, &cpu_core_map[i]);
 
 	per_cpu(cpu_sibling_map, cpu) = cpu_core_map[cpu] = CPU_MASK_NONE;
 }
@@ -592,12 +592,12 @@ remove_siblinginfo(int cpu)
 
 	if (cpu_data(cpu)->threads_per_core == 1 &&
 	    cpu_data(cpu)->cores_per_socket == 1) {
-		cpu_clear(cpu, cpu_core_map[cpu]);
-		cpu_clear(cpu, per_cpu(cpu_sibling_map, cpu));
+		cpumask_clear_cpu(cpu, &cpu_core_map[cpu]);
+		cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, cpu));
 		return;
 	}
 
-	last = (cpus_weight(cpu_core_map[cpu]) == 1 ? 1 : 0);
+	last = (cpumask_weight(&cpu_core_map[cpu]) == 1 ? 1 : 0);
 
 	/* remove it from all sibling map's */
 	clear_cpu_sibling_map(cpu);
@@ -673,7 +673,7 @@ int __cpu_disable(void)
 	remove_siblinginfo(cpu);
 	fixup_irqs();
 	local_flush_tlb_all();
-	cpu_clear(cpu, cpu_callin_map);
+	cpumask_clear_cpu(cpu, &cpu_callin_map);
 	return 0;
 }
 
@@ -718,11 +718,13 @@ static inline void set_cpu_sibling_map(int cpu)
 
 	for_each_online_cpu(i) {
 		if ((cpu_data(cpu)->socket_id == cpu_data(i)->socket_id)) {
-			cpu_set(i, cpu_core_map[cpu]);
-			cpu_set(cpu, cpu_core_map[i]);
+			cpumask_set_cpu(i, &cpu_core_map[cpu]);
+			cpumask_set_cpu(cpu, &cpu_core_map[i]);
 			if (cpu_data(cpu)->core_id == cpu_data(i)->core_id) {
-				cpu_set(i, per_cpu(cpu_sibling_map, cpu));
-				cpu_set(cpu, per_cpu(cpu_sibling_map, i));
+				cpumask_set_cpu(i,
+						&per_cpu(cpu_sibling_map, cpu));
+				cpumask_set_cpu(cpu,
+						&per_cpu(cpu_sibling_map, i));
 			}
 		}
 	}
@@ -742,7 +744,7 @@ __cpu_up(unsigned int cpu, struct task_struct *tidle)
 	 * Already booted cpu? not valid anymore since we dont
 	 * do idle loop tightspin anymore.
 	 */
-	if (cpu_isset(cpu, cpu_callin_map))
+	if (cpumask_test_cpu(cpu, &cpu_callin_map))
 		return -EINVAL;
 
 	per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
@@ -753,8 +755,8 @@ __cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	if (cpu_data(cpu)->threads_per_core == 1 &&
 	    cpu_data(cpu)->cores_per_socket == 1) {
-		cpu_set(cpu, per_cpu(cpu_sibling_map, cpu));
-		cpu_set(cpu, cpu_core_map[cpu]);
+		cpumask_set_cpu(cpu, &per_cpu(cpu_sibling_map, cpu));
+		cpumask_set_cpu(cpu, &cpu_core_map[cpu]);
 		return 0;
 	}
 
diff --git a/arch/ia64/kernel/topology.c b/arch/ia64/kernel/topology.c
index 965ab42fabb0..c01fe8991244 100644
--- a/arch/ia64/kernel/topology.c
+++ b/arch/ia64/kernel/topology.c
@@ -148,7 +148,7 @@ static void cache_shared_cpu_map_setup(unsigned int cpu,
 
 	if (cpu_data(cpu)->threads_per_core <= 1 &&
 		cpu_data(cpu)->cores_per_socket <= 1) {
-		cpu_set(cpu, this_leaf->shared_cpu_map);
+		cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map);
 		return;
 	}
 
@@ -164,7 +164,7 @@ static void cache_shared_cpu_map_setup(unsigned int cpu,
 			if (cpu_data(cpu)->socket_id == cpu_data(j)->socket_id
 				&& cpu_data(j)->core_id == csi.log1_cid
 				&& cpu_data(j)->thread_id == csi.log1_tid)
-				cpu_set(j, this_leaf->shared_cpu_map);
+				cpumask_set_cpu(j, &this_leaf->shared_cpu_map);
 
 		i++;
 	} while (i < num_shared &&
@@ -177,7 +177,7 @@ static void cache_shared_cpu_map_setup(unsigned int cpu,
 static void cache_shared_cpu_map_setup(unsigned int cpu,
 		struct cache_info * this_leaf)
 {
-	cpu_set(cpu, this_leaf->shared_cpu_map);
+	cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map);
 	return;
 }
 #endif
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK.
  2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
                   ` (7 preceding siblings ...)
  2015-03-02 11:35 ` [PATCH 09/16] mips: fix up obsolete cpu " Rusty Russell
@ 2015-03-02 12:34 ` Paul Bolle
  2015-03-02 23:40   ` Rusty Russell
  8 siblings, 1 reply; 28+ messages in thread
From: Paul Bolle @ 2015-03-02 12:34 UTC (permalink / raw)
  To: Rusty Russell; +Cc: linux-kernel

On Mon, 2015-03-02 at 22:05 +1030, Rusty Russell wrote:
> Using these functions with offstack cpus is unsafe.  They use all NR_CPUS
> bits, instead of nr_cpumask_bits.
> 
> In particular, lustre (in staging) used cpus_ and that caused a bug.
> 
> Reported-by: Oleg Drokin <green@linuxhacker.ru>
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> ---
>  lib/Kconfig | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/Kconfig b/lib/Kconfig
> index 87da53bb1fef..722427805220 100644
> --- a/lib/Kconfig
> +++ b/lib/Kconfig
> @@ -398,8 +398,8 @@ config CPUMASK_OFFSTACK
>  	  stack overflow.
>  
>  config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
> -       bool "Disable obsolete cpumask functions" if DEBUG_PER_CPU_MAPS
> -       depends on BROKEN
> +       bool
> +       depends on CPUMASK_OFFSTACK

This removes the "prompt" from this symbol's entry. And nothing selects
it either (not in next-20150302 nor in this series). So I think this
just disables this Kconfig symbol entirely. I.e., it can't be set even
if CPUMASK_OFFSTACK is set.

Should this entry perhaps be using
    def_bool y

instead?
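
That is, something like this (untested):

    config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
            def_bool y
            depends on CPUMASK_OFFSTACK

which would make it always set whenever CPUMASK_OFFSTACK is.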

>  config CPU_RMAP
>  	bool

Thanks,


Paul Bolle


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [tip:x86/cleanups] x86: Fix up obsolete __cpu_set() function usage
  2015-03-02 11:35 ` [PATCH 08/16] x86: " Rusty Russell
@ 2015-03-02 13:36   ` tip-bot for Rusty Russell
  0 siblings, 0 replies; 28+ messages in thread
From: tip-bot for Rusty Russell @ 2015-03-02 13:36 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, rusty, tglx, mingo

Commit-ID:  020b37ac66c5fcec70b6fa51113b84bdfff6a4bc
Gitweb:     http://git.kernel.org/tip/020b37ac66c5fcec70b6fa51113b84bdfff6a4bc
Author:     Rusty Russell <rusty@rustcorp.com.au>
AuthorDate: Mon, 2 Mar 2015 22:05:49 +1030
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 2 Mar 2015 14:28:17 +0100

x86: Fix up obsolete __cpu_set() function usage

Thanks to spatch, plus manual removal of "&*".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1425296150-4722-8-git-send-email-rusty@rustcorp.com.au
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/kernel/apic/x2apic_cluster.c | 8 ++++----
 arch/x86/kernel/irq.c                 | 4 ++--
 arch/x86/platform/uv/tlb_uv.c         | 6 +++---
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index e658f21..d9d0bd2 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -135,12 +135,12 @@ static void init_x2apic_ldr(void)
 
 	per_cpu(x86_cpu_to_logical_apicid, this_cpu) = apic_read(APIC_LDR);
 
-	__cpu_set(this_cpu, per_cpu(cpus_in_cluster, this_cpu));
+	cpumask_set_cpu(this_cpu, per_cpu(cpus_in_cluster, this_cpu));
 	for_each_online_cpu(cpu) {
 		if (x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
 			continue;
-		__cpu_set(this_cpu, per_cpu(cpus_in_cluster, cpu));
-		__cpu_set(cpu, per_cpu(cpus_in_cluster, this_cpu));
+		cpumask_set_cpu(this_cpu, per_cpu(cpus_in_cluster, cpu));
+		cpumask_set_cpu(cpu, per_cpu(cpus_in_cluster, this_cpu));
 	}
 }
 
@@ -195,7 +195,7 @@ static int x2apic_init_cpu_notifier(void)
 
 	BUG_ON(!per_cpu(cpus_in_cluster, cpu) || !per_cpu(ipi_mask, cpu));
 
-	__cpu_set(cpu, per_cpu(cpus_in_cluster, cpu));
+	cpumask_set_cpu(cpu, per_cpu(cpus_in_cluster, cpu));
 	register_hotcpu_notifier(&x2apic_cpu_notifier);
 	return 1;
 }
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 67b1cbe..e5952c2 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -295,7 +295,7 @@ int check_irq_vectors_for_cpu_disable(void)
 
 	this_cpu = smp_processor_id();
 	cpumask_copy(&online_new, cpu_online_mask);
-	cpu_clear(this_cpu, online_new);
+	cpumask_clear_cpu(this_cpu, &online_new);
 
 	this_count = 0;
 	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
@@ -307,7 +307,7 @@ int check_irq_vectors_for_cpu_disable(void)
 
 			data = irq_desc_get_irq_data(desc);
 			cpumask_copy(&affinity_new, data->affinity);
-			cpu_clear(this_cpu, affinity_new);
+			cpumask_clear_cpu(this_cpu, &affinity_new);
 
 			/* Do not count inactive or per-cpu irqs. */
 			if (!irq_has_action(irq) || irqd_is_per_cpu(data))
diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
index 9947985..3b6ec42 100644
--- a/arch/x86/platform/uv/tlb_uv.c
+++ b/arch/x86/platform/uv/tlb_uv.c
@@ -415,7 +415,7 @@ static void reset_with_ipi(struct pnmask *distribution, struct bau_control *bcp)
 	struct reset_args reset_args;
 
 	reset_args.sender = sender;
-	cpus_clear(*mask);
+	cpumask_clear(mask);
 	/* find a single cpu for each uvhub in this distribution mask */
 	maskbits = sizeof(struct pnmask) * BITSPERBYTE;
 	/* each bit is a pnode relative to the partition base pnode */
@@ -425,7 +425,7 @@ static void reset_with_ipi(struct pnmask *distribution, struct bau_control *bcp)
 			continue;
 		apnode = pnode + bcp->partition_base_pnode;
 		cpu = pnode_to_first_cpu(apnode, smaster);
-		cpu_set(cpu, *mask);
+		cpumask_set_cpu(cpu, mask);
 	}
 
 	/* IPI all cpus; preemption is already disabled */
@@ -1126,7 +1126,7 @@ const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
 	/* don't actually do a shootdown of the local cpu */
 	cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu));
 
-	if (cpu_isset(cpu, *cpumask))
+	if (cpumask_test_cpu(cpu, cpumask))
 		stat->s_ntargself++;
 
 	bau_desc = bcp->descriptor_base;

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 05/16] staging/lustre: fix up obsolete cpu function usage.
  2015-03-02 11:35 ` [PATCH 05/16] staging/lustre: " Rusty Russell
@ 2015-03-02 17:50   ` Oleg Drokin
  2015-03-02 23:39     ` Rusty Russell
  0 siblings, 1 reply; 28+ messages in thread
From: Oleg Drokin @ 2015-03-02 17:50 UTC (permalink / raw)
  To: Rusty Russell; +Cc: linux-kernel

Thanks!
Seems there was a midair collision with my own patch, which was not as
comprehensive wrt the functions touched: https://lkml.org/lkml/2015/3/2/10

But on the other hand I also tried to clean up some of the NR_CPUS usage
while I was at it, and that raises a question about code like:

for_each_cpu_mask(i, blah) {
    blah
    if (something)
        break;
}
if (i == NR_CPUS)
    blah;

when we are replacing for_each_cpu_mask with for_each_cpu,
what do we check the counter against now to see that the entire loop was executed
and we did not exit prematurely? nr_cpu_ids?
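
I.e., would the converted version be expected to look something like
this (untested sketch):

for_each_cpu(i, mask) {
    blah
    if (something)
        break;
}
if (i >= nr_cpu_ids)   /* loop ran to completion? */
    blah;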

Also I assume we still want to get rid of direct cpumask assignments like
> mask = *cpumask_of_node(cpu_to_node(index));
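(presumably via cpumask_copy(), e.g.
cpumask_copy(&mask, cpumask_of_node(cpu_to_node(index))), or by keeping
a pointer to the node mask instead?)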



On Mar 2, 2015, at 6:35 AM, Rusty Russell wrote:

> They triggered this cleanup, so I've separated their patch in the assumption
> they've already combed their code.
> 
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> Cc: Oleg Drokin <green@linuxhacker.ru>
> ---
> .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c    |  4 +-
> .../staging/lustre/lustre/libcfs/linux/linux-cpu.c | 88 +++++++++++-----------
> drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c     |  6 +-
> drivers/staging/lustre/lustre/ptlrpc/service.c     |  4 +-
> 4 files changed, 52 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> index 651016919669..a25816ab9e53 100644
> --- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> +++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> @@ -638,8 +638,8 @@ kiblnd_get_completion_vector(kib_conn_t *conn, int cpt)
> 		return 0;
> 
> 	/* hash NID to CPU id in this partition... */
> -	off = do_div(nid, cpus_weight(*mask));
> -	for_each_cpu_mask(i, *mask) {
> +	off = do_div(nid, cpumask_weight(mask));
> +	for_each_cpu(i, mask) {
> 		if (off-- == 0)
> 			return i % vectors;
> 	}
> diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
> index 05f7595f18aa..c8fca8aae848 100644
> --- a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
> +++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
> @@ -204,7 +204,7 @@ cfs_cpt_table_print(struct cfs_cpt_table *cptab, char *buf, int len)
> 		}
> 
> 		tmp += rc;
> -		for_each_cpu_mask(j, *cptab->ctb_parts[i].cpt_cpumask) {
> +		for_each_cpu(j, cptab->ctb_parts[i].cpt_cpumask) {
> 			rc = snprintf(tmp, len, "%d ", j);
> 			len -= rc;
> 			if (len <= 0) {
> @@ -240,8 +240,8 @@ cfs_cpt_weight(struct cfs_cpt_table *cptab, int cpt)
> 	LASSERT(cpt == CFS_CPT_ANY || (cpt >= 0 && cpt < cptab->ctb_nparts));
> 
> 	return cpt == CFS_CPT_ANY ?
> -	       cpus_weight(*cptab->ctb_cpumask) :
> -	       cpus_weight(*cptab->ctb_parts[cpt].cpt_cpumask);
> +	       cpumask_weight(cptab->ctb_cpumask) :
> +	       cpumask_weight(cptab->ctb_parts[cpt].cpt_cpumask);
> }
> EXPORT_SYMBOL(cfs_cpt_weight);
> 
> @@ -251,8 +251,9 @@ cfs_cpt_online(struct cfs_cpt_table *cptab, int cpt)
> 	LASSERT(cpt == CFS_CPT_ANY || (cpt >= 0 && cpt < cptab->ctb_nparts));
> 
> 	return cpt == CFS_CPT_ANY ?
> -	       any_online_cpu(*cptab->ctb_cpumask) != NR_CPUS :
> -	       any_online_cpu(*cptab->ctb_parts[cpt].cpt_cpumask) != NR_CPUS;
> +	       cpumask_any_and(cptab->ctb_cpumask, cpu_online_mask) != NR_CPUS :
> +	       cpumask_any_and(cptab->ctb_parts[cpt].cpt_cpumask,
> +	       	        cpu_online_mask) != NR_CPUS;
> }
> EXPORT_SYMBOL(cfs_cpt_online);
> 
> @@ -296,11 +297,11 @@ cfs_cpt_set_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
> 
> 	cptab->ctb_cpu2cpt[cpu] = cpt;
> 
> -	LASSERT(!cpu_isset(cpu, *cptab->ctb_cpumask));
> -	LASSERT(!cpu_isset(cpu, *cptab->ctb_parts[cpt].cpt_cpumask));
> +	LASSERT(!cpumask_test_cpu(cpu, cptab->ctb_cpumask));
> +	LASSERT(!cpumask_test_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask));
> 
> -	cpu_set(cpu, *cptab->ctb_cpumask);
> -	cpu_set(cpu, *cptab->ctb_parts[cpt].cpt_cpumask);
> +	cpumask_set_cpu(cpu, cptab->ctb_cpumask);
> +	cpumask_set_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask);
> 
> 	node = cpu_to_node(cpu);
> 
> @@ -344,11 +345,11 @@ cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
> 		return;
> 	}
> 
> -	LASSERT(cpu_isset(cpu, *cptab->ctb_parts[cpt].cpt_cpumask));
> -	LASSERT(cpu_isset(cpu, *cptab->ctb_cpumask));
> +	LASSERT(cpumask_test_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask));
> +	LASSERT(cpumask_test_cpu(cpu, cptab->ctb_cpumask));
> 
> -	cpu_clear(cpu, *cptab->ctb_parts[cpt].cpt_cpumask);
> -	cpu_clear(cpu, *cptab->ctb_cpumask);
> +	cpumask_clear_cpu(cpu, cptab->ctb_parts[cpt].cpt_cpumask);
> +	cpumask_clear_cpu(cpu, cptab->ctb_cpumask);
> 	cptab->ctb_cpu2cpt[cpu] = -1;
> 
> 	node = cpu_to_node(cpu);
> @@ -356,7 +357,7 @@ cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
> 	LASSERT(node_isset(node, *cptab->ctb_parts[cpt].cpt_nodemask));
> 	LASSERT(node_isset(node, *cptab->ctb_nodemask));
> 
> -	for_each_cpu_mask(i, *cptab->ctb_parts[cpt].cpt_cpumask) {
> +	for_each_cpu(i, cptab->ctb_parts[cpt].cpt_cpumask) {
> 		/* this CPT has other CPU belonging to this node? */
> 		if (cpu_to_node(i) == node)
> 			break;
> @@ -365,7 +366,7 @@ cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu)
> 	if (i == NR_CPUS)
> 		node_clear(node, *cptab->ctb_parts[cpt].cpt_nodemask);
> 
> -	for_each_cpu_mask(i, *cptab->ctb_cpumask) {
> +	for_each_cpu(i, cptab->ctb_cpumask) {
> 		/* this CPT-table has other CPU belonging to this node? */
> 		if (cpu_to_node(i) == node)
> 			break;
> @@ -383,13 +384,13 @@ cfs_cpt_set_cpumask(struct cfs_cpt_table *cptab, int cpt, cpumask_t *mask)
> {
> 	int	i;
> 
> -	if (cpus_weight(*mask) == 0 || any_online_cpu(*mask) == NR_CPUS) {
> +	if (cpumask_weight(mask) == 0 || cpumask_any_and(mask, cpu_online_mask) == NR_CPUS) {
> 		CDEBUG(D_INFO, "No online CPU is found in the CPU mask for CPU partition %d\n",
> 		       cpt);
> 		return 0;
> 	}
> 
> -	for_each_cpu_mask(i, *mask) {
> +	for_each_cpu(i, mask) {
> 		if (!cfs_cpt_set_cpu(cptab, cpt, i))
> 			return 0;
> 	}
> @@ -403,7 +404,7 @@ cfs_cpt_unset_cpumask(struct cfs_cpt_table *cptab, int cpt, cpumask_t *mask)
> {
> 	int	i;
> 
> -	for_each_cpu_mask(i, *mask)
> +	for_each_cpu(i, mask)
> 		cfs_cpt_unset_cpu(cptab, cpt, i);
> }
> EXPORT_SYMBOL(cfs_cpt_unset_cpumask);
> @@ -493,7 +494,7 @@ cfs_cpt_clear(struct cfs_cpt_table *cptab, int cpt)
> 	}
> 
> 	for (; cpt <= last; cpt++) {
> -		for_each_cpu_mask(i, *cptab->ctb_parts[cpt].cpt_cpumask)
> +		for_each_cpu(i, cptab->ctb_parts[cpt].cpt_cpumask)
> 			cfs_cpt_unset_cpu(cptab, cpt, i);
> 	}
> }
> @@ -578,14 +579,14 @@ cfs_cpt_bind(struct cfs_cpt_table *cptab, int cpt)
> 		nodemask = cptab->ctb_parts[cpt].cpt_nodemask;
> 	}
> 
> -	if (any_online_cpu(*cpumask) == NR_CPUS) {
> +	if (cpumask_any_and(cpumask, cpu_online_mask) == NR_CPUS) {
> 		CERROR("No online CPU found in CPU partition %d, did someone do CPU hotplug on system? You might need to reload Lustre modules to keep system working well.\n",
> 		       cpt);
> 		return -EINVAL;
> 	}
> 
> 	for_each_online_cpu(i) {
> -		if (cpu_isset(i, *cpumask))
> +		if (cpumask_test_cpu(i, cpumask))
> 			continue;
> 
> 		rc = set_cpus_allowed_ptr(current, cpumask);
> @@ -616,14 +617,14 @@ cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt,
> 
> 	LASSERT(number > 0);
> 
> -	if (number >= cpus_weight(*node)) {
> -		while (!cpus_empty(*node)) {
> -			cpu = first_cpu(*node);
> +	if (number >= cpumask_weight(node)) {
> +		while (!cpumask_empty(node)) {
> +			cpu = cpumask_first(node);
> 
> 			rc = cfs_cpt_set_cpu(cptab, cpt, cpu);
> 			if (!rc)
> 				return -EINVAL;
> -			cpu_clear(cpu, *node);
> +			cpumask_clear_cpu(cpu, node);
> 		}
> 		return 0;
> 	}
> @@ -636,27 +637,27 @@ cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt,
> 		goto out;
> 	}
> 
> -	while (!cpus_empty(*node)) {
> -		cpu = first_cpu(*node);
> +	while (!cpumask_empty(node)) {
> +		cpu = cpumask_first(node);
> 
> 		/* get cpumask for cores in the same socket */
> 		cfs_cpu_core_siblings(cpu, socket);
> -		cpus_and(*socket, *socket, *node);
> +		cpumask_and(socket, socket, node);
> 
> -		LASSERT(!cpus_empty(*socket));
> +		LASSERT(!cpumask_empty(socket));
> 
> -		while (!cpus_empty(*socket)) {
> +		while (!cpumask_empty(socket)) {
> 			int     i;
> 
> 			/* get cpumask for hts in the same core */
> 			cfs_cpu_ht_siblings(cpu, core);
> -			cpus_and(*core, *core, *node);
> +			cpumask_and(core, core, node);
> 
> -			LASSERT(!cpus_empty(*core));
> +			LASSERT(!cpumask_empty(core));
> 
> -			for_each_cpu_mask(i, *core) {
> -				cpu_clear(i, *socket);
> -				cpu_clear(i, *node);
> +			for_each_cpu(i, core) {
> +				cpumask_clear_cpu(i, socket);
> +				cpumask_clear_cpu(i, node);
> 
> 				rc = cfs_cpt_set_cpu(cptab, cpt, i);
> 				if (!rc) {
> @@ -667,7 +668,7 @@ cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt,
> 				if (--number == 0)
> 					goto out;
> 			}
> -			cpu = first_cpu(*socket);
> +			cpu = cpumask_first(socket);
> 		}
> 	}
> 
> @@ -767,7 +768,7 @@ cfs_cpt_table_create(int ncpt)
> 	for_each_online_node(i) {
> 		cfs_node_to_cpumask(i, mask);
> 
> -		while (!cpus_empty(*mask)) {
> +		while (!cpumask_empty(mask)) {
> 			struct cfs_cpu_partition *part;
> 			int    n;
> 
> @@ -776,24 +777,24 @@ cfs_cpt_table_create(int ncpt)
> 
> 			part = &cptab->ctb_parts[cpt];
> 
> -			n = num - cpus_weight(*part->cpt_cpumask);
> +			n = num - cpumask_weight(part->cpt_cpumask);
> 			LASSERT(n > 0);
> 
> 			rc = cfs_cpt_choose_ncpus(cptab, cpt, mask, n);
> 			if (rc < 0)
> 				goto failed;
> 
> -			LASSERT(num >= cpus_weight(*part->cpt_cpumask));
> -			if (num == cpus_weight(*part->cpt_cpumask))
> +			LASSERT(num >= cpumask_weight(part->cpt_cpumask));
> +			if (num == cpumask_weight(part->cpt_cpumask))
> 				cpt++;
> 		}
> 	}
> 
> 	if (cpt != ncpt ||
> -	    num != cpus_weight(*cptab->ctb_parts[ncpt - 1].cpt_cpumask)) {
> +	    num != cpumask_weight(cptab->ctb_parts[ncpt - 1].cpt_cpumask)) {
> 		CERROR("Expect %d(%d) CPU partitions but got %d(%d), CPU hotplug/unplug while setting?\n",
> 		       cptab->ctb_nparts, num, cpt,
> -		       cpus_weight(*cptab->ctb_parts[ncpt - 1].cpt_cpumask));
> +		       cpumask_weight(cptab->ctb_parts[ncpt - 1].cpt_cpumask));
> 		goto failed;
> 	}
> 
> @@ -965,7 +966,8 @@ cfs_cpu_notify(struct notifier_block *self, unsigned long action, void *hcpu)
> 		mutex_lock(&cpt_data.cpt_mutex);
> 		/* if all HTs in a core are offline, it may break affinity */
> 		cfs_cpu_ht_siblings(cpu, cpt_data.cpt_cpumask);
> -		warn = any_online_cpu(*cpt_data.cpt_cpumask) >= nr_cpu_ids;
> +		warn = cpumask_any_and(cpt_data.cpt_cpumask,
> +				       cpu_online_mask) >= nr_cpu_ids;
> 		mutex_unlock(&cpt_data.cpt_mutex);
> 		CDEBUG(warn ? D_WARNING : D_INFO,
> 		       "Lustre: can't support CPU plug-out well now, performance and stability could be impacted [CPU %u action: %lx]\n",
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
> index 4621b71fe0b6..625858b6d793 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
> @@ -513,8 +513,8 @@ static int ptlrpcd_bind(int index, int max)
> 		int i;
> 		mask = *cpumask_of_node(cpu_to_node(index));
> 		for (i = max; i < num_online_cpus(); i++)
> -			cpu_clear(i, mask);
> -		pc->pc_npartners = cpus_weight(mask) - 1;
> +			cpumask_clear_cpu(i, &mask);
> +		pc->pc_npartners = cpumask_weight(&mask) - 1;
> 		set_bit(LIOD_BIND, &pc->pc_flags);
> 	}
> #else
> @@ -554,7 +554,7 @@ static int ptlrpcd_bind(int index, int max)
> 				 * that are already initialized
> 				 */
> 				for (pidx = 0, i = 0; i < index; i++) {
> -					if (cpu_isset(i, mask)) {
> +					if (cpumask_test_cpu(i, &mask)) {
> 						ppc = &ptlrpcds->pd_threads[i];
> 						pc->pc_partners[pidx++] = ppc;
> 						ppc->pc_partners[ppc->
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
> index 635b12b22cef..173803829865 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/service.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
> @@ -558,7 +558,7 @@ ptlrpc_server_nthreads_check(struct ptlrpc_service *svc,
> 		 * there are.
> 		 */
> 		cpumask_copy(&mask, topology_thread_cpumask(0));
> -		if (cpus_weight(mask) > 1) { /* weight is # of HTs */
> +		if (cpumask_weight(&mask) > 1) { /* weight is # of HTs */
> 			/* depress thread factor for hyper-thread */
> 			factor = factor - (factor >> 1) + (factor >> 3);
> 		}
> @@ -2771,7 +2771,7 @@ int ptlrpc_hr_init(void)
> 	init_waitqueue_head(&ptlrpc_hr.hr_waitq);
> 
> 	cpumask_copy(&mask, topology_thread_cpumask(0));
> -	weight = cpus_weight(mask);
> +	weight = cpumask_weight(&mask);
> 
> 	cfs_percpt_for_each(hrp, i, ptlrpc_hr.hr_partitions) {
> 		hrp->hrp_cpt = i;
> -- 
> 2.1.0


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 04/16] drivers: fix up obsolete cpu function usage.
  2015-03-02 11:35 ` [PATCH 04/16] drivers: fix up obsolete cpu function usage Rusty Russell
@ 2015-03-02 22:23   ` Rafael J. Wysocki
  0 siblings, 0 replies; 28+ messages in thread
From: Rafael J. Wysocki @ 2015-03-02 22:23 UTC (permalink / raw)
  To: Rusty Russell
  Cc: linux-kernel, Thomas Gleixner, Herbert Xu, Jason Cooper,
	Chris Metcalf, netdev

On Monday, March 02, 2015 10:05:45 PM Rusty Russell wrote:
> Thanks to spatch, plus manual removal of "&*".  Then a sweep for
> for_each_cpu_mask => for_each_cpu.
> 
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: Herbert Xu <herbert@gondor.apana.org.au>
> Cc: Jason Cooper <jason@lakedaemon.net>
> Cc: Chris Metcalf <cmetcalf@ezchip.com>
> Cc: netdev@vger.kernel.org

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

for the cpuidle part.  Thanks!

> ---
>  drivers/clocksource/dw_apb_timer.c | 3 ++-
>  drivers/cpuidle/coupled.c          | 6 +++---
>  drivers/crypto/n2_core.c           | 4 ++--
>  drivers/irqchip/irq-gic-v3.c       | 2 +-
>  drivers/irqchip/irq-mips-gic.c     | 6 +++---
>  drivers/net/ethernet/tile/tilegx.c | 4 ++--
>  6 files changed, 13 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/clocksource/dw_apb_timer.c b/drivers/clocksource/dw_apb_timer.c
> index f3656a6b0382..35a88097af3c 100644
> --- a/drivers/clocksource/dw_apb_timer.c
> +++ b/drivers/clocksource/dw_apb_timer.c
> @@ -117,7 +117,8 @@ static void apbt_set_mode(enum clock_event_mode mode,
>  	unsigned long period;
>  	struct dw_apb_clock_event_device *dw_ced = ced_to_dw_apb_ced(evt);
>  
> -	pr_debug("%s CPU %d mode=%d\n", __func__, first_cpu(*evt->cpumask),
> +	pr_debug("%s CPU %d mode=%d\n", __func__,
> +		 cpumask_first(evt->cpumask),
>  		 mode);
>  
>  	switch (mode) {
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> index 73fe2f8d7f96..7936dce4b878 100644
> --- a/drivers/cpuidle/coupled.c
> +++ b/drivers/cpuidle/coupled.c
> @@ -292,7 +292,7 @@ static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
>  	 */
>  	smp_rmb();
>  
> -	for_each_cpu_mask(i, coupled->coupled_cpus)
> +	for_each_cpu(i, &coupled->coupled_cpus)
>  		if (cpu_online(i) && coupled->requested_state[i] < state)
>  			state = coupled->requested_state[i];
>  
> @@ -338,7 +338,7 @@ static void cpuidle_coupled_poke_others(int this_cpu,
>  {
>  	int cpu;
>  
> -	for_each_cpu_mask(cpu, coupled->coupled_cpus)
> +	for_each_cpu(cpu, &coupled->coupled_cpus)
>  		if (cpu != this_cpu && cpu_online(cpu))
>  			cpuidle_coupled_poke(cpu);
>  }
> @@ -638,7 +638,7 @@ int cpuidle_coupled_register_device(struct cpuidle_device *dev)
>  	if (cpumask_empty(&dev->coupled_cpus))
>  		return 0;
>  
> -	for_each_cpu_mask(cpu, dev->coupled_cpus) {
> +	for_each_cpu(cpu, &dev->coupled_cpus) {
>  		other_dev = per_cpu(cpuidle_devices, cpu);
>  		if (other_dev && other_dev->coupled) {
>  			coupled = other_dev->coupled;
> diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
> index afd136b45f49..10a9aeff1666 100644
> --- a/drivers/crypto/n2_core.c
> +++ b/drivers/crypto/n2_core.c
> @@ -1754,7 +1754,7 @@ static int spu_mdesc_walk_arcs(struct mdesc_handle *mdesc,
>  				dev->dev.of_node->full_name);
>  			return -EINVAL;
>  		}
> -		cpu_set(*id, p->sharing);
> +		cpumask_set_cpu(*id, &p->sharing);
>  		table[*id] = p;
>  	}
>  	return 0;
> @@ -1776,7 +1776,7 @@ static int handle_exec_unit(struct spu_mdesc_info *ip, struct list_head *list,
>  		return -ENOMEM;
>  	}
>  
> -	cpus_clear(p->sharing);
> +	cpumask_clear(&p->sharing);
>  	spin_lock_init(&p->lock);
>  	p->q_type = q_type;
>  	INIT_LIST_HEAD(&p->jobs);
> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
> index 1c6dea2fbc34..04b6f0732c1a 100644
> --- a/drivers/irqchip/irq-gic-v3.c
> +++ b/drivers/irqchip/irq-gic-v3.c
> @@ -512,7 +512,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
>  	 */
>  	smp_wmb();
>  
> -	for_each_cpu_mask(cpu, *mask) {
> +	for_each_cpu(cpu, mask) {
>  		u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
>  		u16 tlist;
>  
> diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
> index 9acdc080e7ec..f26307908a2a 100644
> --- a/drivers/irqchip/irq-mips-gic.c
> +++ b/drivers/irqchip/irq-mips-gic.c
> @@ -345,19 +345,19 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
>  	int		i;
>  
>  	cpumask_and(&tmp, cpumask, cpu_online_mask);
> -	if (cpus_empty(tmp))
> +	if (cpumask_empty(&tmp))
>  		return -EINVAL;
>  
>  	/* Assumption : cpumask refers to a single CPU */
>  	spin_lock_irqsave(&gic_lock, flags);
>  
>  	/* Re-route this IRQ */
> -	gic_map_to_vpe(irq, first_cpu(tmp));
> +	gic_map_to_vpe(irq, cpumask_first(&tmp));
>  
>  	/* Update the pcpu_masks */
>  	for (i = 0; i < NR_CPUS; i++)
>  		clear_bit(irq, pcpu_masks[i].pcpu_mask);
> -	set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
> +	set_bit(irq, pcpu_masks[cpumask_first(&tmp)].pcpu_mask);
>  
>  	cpumask_copy(d->affinity, cpumask);
>  	spin_unlock_irqrestore(&gic_lock, flags);
> diff --git a/drivers/net/ethernet/tile/tilegx.c b/drivers/net/ethernet/tile/tilegx.c
> index bea8cd2bb56c..deac41498c6e 100644
> --- a/drivers/net/ethernet/tile/tilegx.c
> +++ b/drivers/net/ethernet/tile/tilegx.c
> @@ -1122,7 +1122,7 @@ static int alloc_percpu_mpipe_resources(struct net_device *dev,
>  			addr + i * sizeof(struct tile_net_comps);
>  
>  	/* If this is a network cpu, create an iqueue. */
> -	if (cpu_isset(cpu, network_cpus_map)) {
> +	if (cpumask_test_cpu(cpu, &network_cpus_map)) {
>  		order = get_order(NOTIF_RING_SIZE);
>  		page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
>  		if (page == NULL) {
> @@ -1298,7 +1298,7 @@ static int tile_net_init_mpipe(struct net_device *dev)
>  	int first_ring, ring;
>  	int instance = mpipe_instance(dev);
>  	struct mpipe_data *md = &mpipe_data[instance];
> -	int network_cpus_count = cpus_weight(network_cpus_map);
> +	int network_cpus_count = cpumask_weight(&network_cpus_map);
>  
>  	if (!hash_default) {
>  		netdev_err(dev, "Networking requires hash_default!\n");
> 

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 05/16] staging/lustre: fix up obsolete cpu function usage.
  2015-03-02 17:50   ` Oleg Drokin
@ 2015-03-02 23:39     ` Rusty Russell
  2015-03-03  1:16       ` Oleg Drokin
  0 siblings, 1 reply; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 23:39 UTC (permalink / raw)
  To: Oleg Drokin; +Cc: linux-kernel

Oleg Drokin <green@linuxhacker.ru> writes:
> Thanks!
> Seems there was a mid-air collision with my own patch that was not as comprehensive
> wrt functions touched: https://lkml.org/lkml/2015/3/2/10

Yep, I posted this for completeness (and for your reference), but
figured you'd handle it.

> But on the other hand I also tried to clean up
> some of the NR_CPUS usage while I was at it, and this raises
> a question from me about code like:
>
> for_each_cpu_mask(i, blah) {
>     blah
>     if (something)
>         break;
> }
> if (i == NR_CPUS)
>     blah;
>
> when we are replacing for_each_cpu_mask with for_each_cpu,
> what do we now check the counter against to see that the entire loop
> was executed and we did not exit prematurely? nr_cpu_ids?

You want >= nr_cpu_ids here.
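
E.g. (untested sketch, keeping the placeholder names "blah" and
"something" from your example):

	for_each_cpu(i, &blah) {
		blah;
		if (something)
			break;
	}
	if (i >= nr_cpu_ids)
		blah;	/* the loop completed without hitting "break" */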

> Also I assume we still want to get rid of direct cpumask assignments like
>> mask = *cpumask_of_node(cpu_to_node(index));

Yes, but this code is wrong anyway:

		mask = *cpumask_of_node(cpu_to_node(index));
		for (i = max; i < num_online_cpus(); i++)
			cpumask_clear_cpu(i, &mask);

*Never* iterate to num_online_cpus().  eg. if cpus 0 and 3 are online,
num_online_cpus() == 2.  I'm not sure what this code is doing, but it's
not doing it well :)
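
(A minimal fix for just that bug would bound the loop by nr_cpu_ids
instead -- an untested sketch:

	for (i = max; i < nr_cpu_ids; i++)
		cpumask_clear_cpu(i, &mask);

but that still leaves the problems below.)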

There are several issues here.  You need to handle cpus going offline
(during this routine, as well as after).  You need to use a
cpumask_var_t, like so:

        cpumask_var_t mask;

...
	case PDB_POLICY_NEIGHBOR:
                if (!alloc_cpumask_var(&mask, GFP_???)) {
                        rc = -ENOMEM;
                        break;
                }
                ...

Or get rid of the mask altogether, eg:

        pc->pc_npartners = -1;
        for_each_cpu(i, cpu_online_mask) {
                if (i < max)
                        pc->pc_npartners++;
        }
        ...

	pidx = 0;
        for_each_cpu(i, cpu_online_mask) {
                if (i >= max)
                        break;
		ppc = &ptlrpcds->pd_threads[i];
		pc->pc_partners[pidx++] = ppc;
		ppc->pc_partners[ppc->pc_npartners++] = pc;
        }

[ This is off the top of my head, no idea if it's right...]

Thanks,
Rusty.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK.
  2015-03-02 12:34 ` [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Paul Bolle
@ 2015-03-02 23:40   ` Rusty Russell
  0 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-02 23:40 UTC (permalink / raw)
  To: Paul Bolle; +Cc: linux-kernel

Paul Bolle <pebolle@tiscali.nl> writes:
> On Mon, 2015-03-02 at 22:05 +1030, Rusty Russell wrote:
>> Using these functions with offstack cpus is unsafe.  They use all NR_CPUS
>> bits, instead of nr_cpumask_bits.
>> 
>> In particular, lustre (in staging) used cpus_ and that caused a bug.
>> 
>> Reported-by: Oleg Drokin <green@linuxhacker.ru>
>> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
>> ---
>>  lib/Kconfig | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/lib/Kconfig b/lib/Kconfig
>> index 87da53bb1fef..722427805220 100644
>> --- a/lib/Kconfig
>> +++ b/lib/Kconfig
>> @@ -398,8 +398,8 @@ config CPUMASK_OFFSTACK
>>  	  stack overflow.
>>  
>>  config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
>> -       bool "Disable obsolete cpumask functions" if DEBUG_PER_CPU_MAPS
>> -       depends on BROKEN
>> +       bool
>> +       depends on CPUMASK_OFFSTACK
>
> This removes the "prompt" from this symbol's entry. And nothing selects
> it either (not in next-20150302 nor in this series). So I think this
> just disables this Kconfig symbol entirely. I.e., it can't be set even if
> CPUMASK_OFFSTACK is set.
>
> Should this entry perhaps be using
>     def_bool y
>
> instead?

You're right.  In practice, I used a different patch to actually force
enable it.

The final patch deletes it altogether, so I will just squash the two
and this will never appear in the final series.
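
For the record, the always-on form you suggest would look something
like this (illustrative only, since the symbol is about to be removed):

	config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
		def_bool y
		depends on CPUMASK_OFFSTACK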

Thanks!
Rusty.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 05/16] staging/lustre: fix up obsolete cpu function usage.
  2015-03-02 23:39     ` Rusty Russell
@ 2015-03-03  1:16       ` Oleg Drokin
  2015-03-03  3:12         ` Rusty Russell
  0 siblings, 1 reply; 28+ messages in thread
From: Oleg Drokin @ 2015-03-03  1:16 UTC (permalink / raw)
  To: Rusty Russell; +Cc: linux-kernel


On Mar 2, 2015, at 6:39 PM, Rusty Russell wrote:
>> if (i == NR_CPUS)
>>    blah;
>> 
>> when we are replacing for_each_cpu_mask with for_each_cpu,
>> what do we now check the counter against to see that the entire loop
>> was executed and we did not exit prematurely? nr_cpu_ids?
> You want >= nr_cpu_ids here.

Aha, Thanks!

>> Also I assume we still want to get rid of direct cpumask assignments like
>>> mask = *cpumask_of_node(cpu_to_node(index));
> 
> Yes, but this code is wrong anyway:
> 
> 		mask = *cpumask_of_node(cpu_to_node(index));
> 		for (i = max; i < num_online_cpus(); i++)
> 			cpumask_clear_cpu(i, &mask);
> 
> *Never* iterate to num_online_cpus().  eg. if cpus 0 and 3 are online,
> num_online_cpus() == 2.  I'm not sure what this code is doing, but it's
> not doing it well :)

Oh my, I don't know how I did not see it sooner. I think I have developed
an idea of what it is trying to do.
Thanks for highlighting it.

So there are 7 more users like this outside of Lustre in the kernel, then; I'll try for a patch:
./drivers/scsi/hpsa.c:	for (i = 0; i < num_online_cpus(); i++) { -- this one seems to be just opencoding for_each_cpu, though

./arch/um/kernel/smp.c:	for (i = 0; i < num_online_cpus(); i++) { -- I wonder if UML is able to have discontiguous cpus up?

./arch/sh/include/asm/mmu_context.h:	for (i = 0; i < num_online_cpus(); i++) -- this seems to be the same bug we have.

./arch/sh/kernel/smp.c:		for (i = 0; i < num_online_cpus(); i++) -- this and the two below it too
./arch/sh/kernel/smp.c:		for (i = 0; i < num_online_cpus(); i++)
./arch/sh/kernel/smp.c:		for (i = 0; i < num_online_cpus(); i++)

./arch/m32r/kernel/smpboot.c:	for (cpu_id = 0 ; cpu_id < num_online_cpus() ; cpu_id++) -- this also is buggy, though it's just an info print.

Bye,
    Oleg

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 05/16] staging/lustre: fix up obsolete cpu function usage.
  2015-03-03  1:16       ` Oleg Drokin
@ 2015-03-03  3:12         ` Rusty Russell
  0 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-03-03  3:12 UTC (permalink / raw)
  To: Oleg Drokin; +Cc: linux-kernel

Oleg Drokin <green@linuxhacker.ru> writes:
> So there are 7 more users like this outside of Lustre in the kernel, then; I'll try for a patch:

I can squash these if you want...

> ./drivers/scsi/hpsa.c:	for (i = 0; i < num_online_cpus(); i++) { -- this one seems to be just opencoding for_each_cpu, though

Yeah, that's easy to fix.

> ./arch/um/kernel/smp.c:	for (i = 0; i < num_online_cpus(); i++) { -- I wonder if UML is able to have discontiguous cpus up?
>
> ./arch/sh/include/asm/mmu_context.h:	for (i = 0; i < num_online_cpus(); i++) -- this seems to be the same bug we have.
>
> ./arch/sh/kernel/smp.c:		for (i = 0; i < num_online_cpus(); i++) -- this and the two below it too
> ./arch/sh/kernel/smp.c:		for (i = 0; i < num_online_cpus(); i++)
> ./arch/sh/kernel/smp.c:		for (i = 0; i < num_online_cpus(); i++)
>
> ./arch/m32r/kernel/smpboot.c:	for (cpu_id = 0 ; cpu_id < num_online_cpus() ; cpu_id++) -- this also is buggy, though it's just an info print.

These are arch code, and while they should be fixed (because people may
copy them), those archs probably don't support hotplug cpus.

Thanks!
Rusty.

Subject: Fix weird uses of num_online_cpus().

This may be OK in archs with contiguous CPU numbers and without
hotplug CPUs, but it sets a terrible example.

And open-coding it like drivers/scsi/hpsa.c is just weird.

BTRFS has a weird comparison with num_online_cpus() too, but since
BTRFS just screwed up my test machines' root partition, I'm not
touching it :)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reported-by: Oleg Drokin <green@linuxhacker.ru>

diff --git a/arch/m32r/kernel/smpboot.c b/arch/m32r/kernel/smpboot.c
index bb21f4f63170..a468467542f4 100644
--- a/arch/m32r/kernel/smpboot.c
+++ b/arch/m32r/kernel/smpboot.c
@@ -376,7 +376,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	if (!cpumask_equal(&cpu_callin_map, cpu_online_mask))
 		BUG();
 
-	for (cpu_id = 0 ; cpu_id < num_online_cpus() ; cpu_id++)
+	for_each_online_cpu(cpu_id)
 		show_cpu_info(cpu_id);
 
 	/*
diff --git a/arch/sh/include/asm/mmu_context.h b/arch/sh/include/asm/mmu_context.h
index b9d9489a5012..9f417feaf6e8 100644
--- a/arch/sh/include/asm/mmu_context.h
+++ b/arch/sh/include/asm/mmu_context.h
@@ -99,7 +99,7 @@ static inline int init_new_context(struct task_struct *tsk,
 {
 	int i;
 
-	for (i = 0; i < num_online_cpus(); i++)
+	for_each_online_cpu(i)
 		cpu_context(i, mm) = NO_CONTEXT;
 
 	return 0;
diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index fc5acfc93c92..de6be008fc01 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -363,7 +363,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
 	} else {
 		int i;
-		for (i = 0; i < num_online_cpus(); i++)
+		for_each_online_cpu(i)
 			if (smp_processor_id() != i)
 				cpu_context(i, mm) = 0;
 	}
@@ -400,7 +400,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 		smp_call_function(flush_tlb_range_ipi, (void *)&fd, 1);
 	} else {
 		int i;
-		for (i = 0; i < num_online_cpus(); i++)
+		for_each_online_cpu(i)
 			if (smp_processor_id() != i)
 				cpu_context(i, mm) = 0;
 	}
@@ -443,7 +443,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 		smp_call_function(flush_tlb_page_ipi, (void *)&fd, 1);
 	} else {
 		int i;
-		for (i = 0; i < num_online_cpus(); i++)
+		for_each_online_cpu(i)
 			if (smp_processor_id() != i)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
diff --git a/arch/um/kernel/smp.c b/arch/um/kernel/smp.c
index 74077892b34a..525c3657a6af 100644
--- a/arch/um/kernel/smp.c
+++ b/arch/um/kernel/smp.c
@@ -45,7 +45,7 @@ void smp_send_stop(void)
 	int i;
 
 	printk(KERN_INFO "Stopping all CPUs...");
-	for (i = 0; i < num_online_cpus(); i++) {
+	for_each_online_cpu(i) {
 		if (i == current_thread->cpu)
 			continue;
 		os_write_file(cpu_data[i].ipi_pipe[1], "S", 1);
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index a1cfbd3dda47..8eab107b53fb 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -6632,14 +6632,12 @@ static void fail_all_outstanding_cmds(struct ctlr_info *h)
 
 static void set_lockup_detected_for_all_cpus(struct ctlr_info *h, u32 value)
 {
-	int i, cpu;
+	int cpu;
 
-	cpu = cpumask_first(cpu_online_mask);
-	for (i = 0; i < num_online_cpus(); i++) {
+	for_each_online_cpu(cpu) {
 		u32 *lockup_detected;
 		lockup_detected = per_cpu_ptr(h->lockup_detected, cpu);
 		*lockup_detected = value;
-		cpu = cpumask_next(cpu, cpu_online_mask);
 	}
 	wmb(); /* be sure the per-cpu variables are out to memory */
 }

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 06/16] ia64: fix up obsolete cpu function usage.
  2015-03-02 11:47   ` Rusty Russell
@ 2015-05-26 20:45     ` Tony Luck
  -1 siblings, 0 replies; 28+ messages in thread
From: Tony Luck @ 2015-05-26 20:45 UTC (permalink / raw)
  To: Rusty Russell; +Cc: Linux Kernel Mailing List, Fenghua Yu, linux-ia64

On Mon, Mar 2, 2015 at 3:35 AM, Rusty Russell <rusty@rustcorp.com.au> wrote:
> Thanks to spatch, then a sweep for for_each_cpu_mask => for_each_cpu.
>
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

I'm seeing a bunch of warnings building the ia64 tree:

arch/ia64/kernel/smpboot.c:437: warning: passing argument 2 of 'cpumask_set_cpu' discards qualifiers from pointer target type
arch/ia64/kernel/smpboot.c:478: warning: passing argument 2 of 'cpumask_test_cpu' discards qualifiers from pointer target type
arch/ia64/kernel/smpboot.c:484: warning: passing argument 2 of 'cpumask_test_cpu' discards qualifiers from pointer target type
arch/ia64/kernel/smpboot.c:544: warning: passing argument 2 of 'cpumask_set_cpu' discards qualifiers from pointer target type
arch/ia64/kernel/smpboot.c:568: warning: passing argument 2 of 'cpumask_set_cpu' discards qualifiers from pointer target type
arch/ia64/kernel/smpboot.c:676: warning: passing argument 2 of 'cpumask_clear_cpu' discards qualifiers from pointer target type
arch/ia64/kernel/smpboot.c:747: warning: passing argument 2 of 'cpumask_test_cpu' discards qualifiers from pointer target type

which track back to these lines from this commit:

$ git show 5d2068da8d339e4dff8f9b9a1246e6a79e2949d8 | grep cpu_callin_map
- cpu_set(cpuid, cpu_callin_map);
+ cpumask_set_cpu(cpuid, &cpu_callin_map);
- if (cpu_isset(cpu, cpu_callin_map))
+ if (cpumask_test_cpu(cpu, &cpu_callin_map))
- if (!cpu_isset(cpu, cpu_callin_map)) {
+ if (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
- cpu_set(0, cpu_callin_map);
+ cpumask_set_cpu(0, &cpu_callin_map);
- cpu_set(smp_processor_id(), cpu_callin_map);
+ cpumask_set_cpu(smp_processor_id(), &cpu_callin_map);
- cpu_clear(cpu, cpu_callin_map);
+ cpumask_clear_cpu(cpu, &cpu_callin_map);
- if (cpu_isset(cpu, cpu_callin_map))
+ if (cpumask_test_cpu(cpu, &cpu_callin_map))

The problem is that cpu_callin_map is declared volatile,
which matched the arg type in the declarations of
cpu_set() and cpu_clear() ... but doesn't match the
newfangled cpumask_set_cpu() etc.
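
For comparison, the two helpers are declared roughly like this (from
memory, not quoting the headers exactly):

	/* old: the underlying helper took a volatile mask */
	static inline void __cpu_set(int cpu, volatile cpumask_t *dstp)
	{
		set_bit(cpu, dstp->bits);
	}

	/* new: plain struct cpumask *, no volatile */
	static inline void cpumask_set_cpu(unsigned int cpu, struct cpumask *dstp)
	{
		set_bit(cpu, cpumask_bits(dstp));
	}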

Now the new functions do go on to call set_bit()
etc. ... which *do* expect a volatile second argument.

Should cpumask_set_cpu() and friends specify a volatile argument???

-Tony

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 06/16] ia64: fix up obsolete cpu function usage.
  2015-05-26 20:45     ` Tony Luck
@ 2015-05-27  1:30       ` Rusty Russell
  -1 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-05-27  1:18 UTC (permalink / raw)
  To: Tony Luck; +Cc: Linux Kernel Mailing List, Fenghua Yu, linux-ia64

Tony Luck <tony.luck@gmail.com> writes:
> On Mon, Mar 2, 2015 at 3:35 AM, Rusty Russell <rusty@rustcorp.com.au> wrote:
>> Thanks to spatch, then a sweep for for_each_cpu_mask => for_each_cpu.
>>
>> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
>
> I'm seeing a bunch of warnings building the ia64 tree:

Indeed, here's the forgotten fix sitting in my patch collection.

> Should cpumask_set_cpu() and friends specify a volatile argument???

It's weird, but it turns out hardly anyone wants that.

Cheers,
Rusty.

ia64: make cpu_callin_map non-volatile.

cpumask_test_cpu() doesn't take volatile, unlike the obsoleted
cpu_isset.  The only place ia64 really cares is the spin waiting for a
bit; udelay() is probably a barrier but insert rmb() to be sure.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 15051e9c2c6f..629975b56608 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -127,7 +127,7 @@ int smp_num_siblings = 1;
 volatile int ia64_cpu_to_sapicid[NR_CPUS];
 EXPORT_SYMBOL(ia64_cpu_to_sapicid);
 
-static volatile cpumask_t cpu_callin_map;
+static cpumask_t cpu_callin_map;
 
 struct smp_boot_data smp_boot_data __initdata;
 
@@ -477,6 +477,7 @@ do_boot_cpu (int sapicid, int cpu, struct task_struct *idle)
 	for (timeout = 0; timeout < 100000; timeout++) {
 		if (cpumask_test_cpu(cpu, &cpu_callin_map))
 			break;  /* It has booted */
+		rmb(); /* Make sure we re-read cpu_callin_map */
 		udelay(100);
 	}
 	Dprintk("\n");

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH 06/16] ia64: fix up obsolete cpu function usage.
  2015-05-27  1:30       ` Rusty Russell
@ 2015-05-27 17:37         ` Tony Luck
  -1 siblings, 0 replies; 28+ messages in thread
From: Tony Luck @ 2015-05-27 17:37 UTC (permalink / raw)
  To: Rusty Russell; +Cc: Linux Kernel Mailing List, Fenghua Yu, linux-ia64

On Tue, May 26, 2015 at 6:18 PM, Rusty Russell <rusty@rustcorp.com.au> wrote:

> cpumask_test_cpu() doesn't take volatile, unlike the obsoleted
> cpu_isset.  The only place ia64 really cares is the spin waiting for a
> bit; udelay() is probably a barrier but insert rmb() to be sure.

Good to be sure ... but cpumask_test_cpu() simply calls test_bit() ...
and 2 out of 3 versions of that function throw "volatile" back into the
mix: :-)

Global definition: test_bit

  File                            Line
0 include/asm/bitops.h            334 test_bit (int nr, const volatile void *addr)
1 asm-generic/bitops/non-atomic.h 103 static inline int test_bit(int nr, const volatile unsigned long *addr)
2 asm-generic/bitops/atomic.h      16 static __always_inline int test_bit(unsigned int nr, const unsigned long *addr)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH 06/16] ia64: fix up obsolete cpu function usage.
  2015-05-27 17:37         ` Tony Luck
@ 2015-05-28  3:56           ` Rusty Russell
  -1 siblings, 0 replies; 28+ messages in thread
From: Rusty Russell @ 2015-05-28  3:44 UTC (permalink / raw)
  To: Tony Luck; +Cc: Linux Kernel Mailing List, Fenghua Yu, linux-ia64

Tony Luck <tony.luck@gmail.com> writes:
> On Tue, May 26, 2015 at 6:18 PM, Rusty Russell <rusty@rustcorp.com.au> wrote:
>
>> cpumask_test_cpu() doesn't take volatile, unlike the obsoleted
>> cpu_isset.  The only place ia64 really cares is the spin waiting for a
>> bit; udelay() is probably a barrier but insert rmb() to be sure.
>
> Good to be sure ... but cpumask_test_cpu() simply calls test_bit() ...
> and 2 out of 3 versions of that function throw "volatile" back into the
> mix: :-)

Yep, volatile is hard to remove.  But that seems like an argument for
removing the last user, not changing the interface...

The theory is that we're eschewing volatile, or so I thought.

Cheers,
Rusty.

> Global definition: test_bit
>
>   File                            Line
> 0 include/asm/bitops.h            334 test_bit (int nr, const volatile void *addr)
> 1 asm-generic/bitops/non-atomic.h 103 static inline int test_bit(int nr, const volatile unsigned long *addr)
> 2 asm-generic/bitops/atomic.h      16 static __always_inline int test_bit(unsigned int nr, const unsigned long *addr)

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2015-05-28  4:13 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-02 11:35 [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Rusty Russell
2015-03-02 11:35 ` [PATCH 02/16] cpumask: fix cpu-hotplug documentation Rusty Russell
2015-03-02 11:35 ` [PATCH 03/16] ia64: Use for_each_cpu_and() and cpumask_any_and() instead of temp var Rusty Russell
2015-03-02 11:47   ` Rusty Russell
2015-03-02 11:35 ` [PATCH 04/16] drivers: fix up obsolete cpu function usage Rusty Russell
2015-03-02 22:23   ` Rafael J. Wysocki
2015-03-02 11:35 ` [PATCH 05/16] staging/lustre: " Rusty Russell
2015-03-02 17:50   ` Oleg Drokin
2015-03-02 23:39     ` Rusty Russell
2015-03-03  1:16       ` Oleg Drokin
2015-03-03  3:12         ` Rusty Russell
2015-03-02 11:35 ` [PATCH 06/16] ia64: " Rusty Russell
2015-03-02 11:47   ` Rusty Russell
2015-05-26 20:45   ` Tony Luck
2015-05-27  1:18     ` Rusty Russell
2015-05-27 17:37       ` Tony Luck
2015-05-28  3:44         ` Rusty Russell
2015-03-02 11:35 ` [PATCH 07/16] um: " Rusty Russell
2015-03-02 11:35   ` [uml-devel] " Rusty Russell
2015-03-02 11:35 ` [PATCH 08/16] x86: " Rusty Russell
2015-03-02 13:36   ` [tip:x86/cleanups] x86: Fix up obsolete __cpu_set() " tip-bot for Rusty Russell
2015-03-02 11:35 ` [PATCH 09/16] mips: fix up obsolete cpu " Rusty Russell
2015-03-02 12:34 ` [PATCH 01/16] CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS: set if CPUMASK_OFFSTACK Paul Bolle
2015-03-02 23:40   ` Rusty Russell

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.