* [PATCH] [net-next] hns_enet: use cpumask_var_t for on-stack mask
From: Arnd Bergmann @ 2017-02-02 14:49 UTC
To: David S. Miller
Cc: Arnd Bergmann, Yisen Zhuang, Salil Mehta, Kejian Yan,
Daode Huang, Qianqian Xie, Sheng Li, lipeng, Philippe Reynes,
netdev, linux-kernel
On large SMP builds, we can run into a build warning:
drivers/net/ethernet/hisilicon/hns/hns_enet.c: In function 'hns_set_irq_affinity.isra.27':
drivers/net/ethernet/hisilicon/hns/hns_enet.c:1242:1: warning: the frame size of 1032 bytes is larger than 1024 bytes [-Wframe-larger-than=]
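
For scale: cpumask_t embeds a fixed-size bitmap of NR_CPUS bits, so a
kernel built with e.g. CONFIG_NR_CPUS=8192 puts a 1024-byte mask on the
stack for every local cpumask_t variable. Roughly, paraphrasing
include/linux/cpumask.h:

	typedef struct cpumask {
		DECLARE_BITMAP(bits, NR_CPUS);	/* NR_CPUS / 8 bytes */
	} cpumask_t;
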
The solution here is to use cpumask_var_t, which can use dynamic
allocation when CONFIG_CPUMASK_OFFSTACK is enabled.
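
Condensed, the pattern the diff below applies looks like this (a sketch
only: 'cpu' and 'irq' stand in for the per-ring values the driver
actually uses):

	cpumask_var_t mask;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))	/* kmallocs with OFFSTACK=y, can fail */
		return;
	cpumask_clear(mask);		/* no '&': cpumask_var_t already acts as a pointer */
	cpumask_set_cpu(cpu, mask);
	(void)irq_set_affinity_hint(irq, mask);
	free_cpumask_var(mask);		/* no-op when OFFSTACK=n */
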
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/net/ethernet/hisilicon/hns/hns_enet.c | 25 +++++++++++++++----------
1 file changed, 15 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index f7b75e96c1c3..fefe371e5907 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -1202,43 +1202,48 @@ static void hns_set_irq_affinity(struct hns_nic_priv *priv)
 	struct hns_nic_ring_data *rd;
 	int i;
 	int cpu;
-	cpumask_t mask;
+	cpumask_var_t mask;
+
+	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+		return;
 
 	/*diffrent irq banlance for 16core and 32core*/
 	if (h->q_num == num_possible_cpus()) {
 		for (i = 0; i < h->q_num * 2; i++) {
 			rd = &priv->ring_data[i];
 			if (cpu_online(rd->queue_index)) {
-				cpumask_clear(&mask);
+				cpumask_clear(mask);
 				cpu = rd->queue_index;
-				cpumask_set_cpu(cpu, &mask);
+				cpumask_set_cpu(cpu, mask);
 				(void)irq_set_affinity_hint(rd->ring->irq,
-							    &mask);
+							    mask);
 			}
 		}
 	} else {
 		for (i = 0; i < h->q_num; i++) {
 			rd = &priv->ring_data[i];
 			if (cpu_online(rd->queue_index * 2)) {
-				cpumask_clear(&mask);
+				cpumask_clear(mask);
 				cpu = rd->queue_index * 2;
-				cpumask_set_cpu(cpu, &mask);
+				cpumask_set_cpu(cpu, mask);
 				(void)irq_set_affinity_hint(rd->ring->irq,
-							    &mask);
+							    mask);
 			}
 		}
 
 		for (i = h->q_num; i < h->q_num * 2; i++) {
 			rd = &priv->ring_data[i];
 			if (cpu_online(rd->queue_index * 2 + 1)) {
-				cpumask_clear(&mask);
+				cpumask_clear(mask);
 				cpu = rd->queue_index * 2 + 1;
-				cpumask_set_cpu(cpu, &mask);
+				cpumask_set_cpu(cpu, mask);
 				(void)irq_set_affinity_hint(rd->ring->irq,
-							    &mask);
+							    mask);
 			}
 		}
 	}
+
+	free_cpumask_var(mask);
 }
 
 static int hns_nic_init_irq(struct hns_nic_priv *priv)
--
2.9.0
* Re: [PATCH] [net-next] hns_enet: use cpumask_var_t for on-stack mask
From: David Miller @ 2017-02-03 16:15 UTC
To: arnd
Cc: yisen.zhuang, salil.mehta, yankejian, huangdaode, xieqianqian,
lisheng011, lipeng321, tremyfr, netdev, linux-kernel
From: Arnd Bergmann <arnd@arndb.de>
Date: Thu, 2 Feb 2017 15:49:24 +0100
> On large SMP builds, we can run into a build warning:
>
> drivers/net/ethernet/hisilicon/hns/hns_enet.c: In function 'hns_set_irq_affinity.isra.27':
> drivers/net/ethernet/hisilicon/hns/hns_enet.c:1242:1: warning: the frame size of 1032 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>
> The solution here is to use cpumask_var_t, which can use dynamic
> allocation when CONFIG_CPUMASK_OFFSTACK is enabled.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Applied, thanks Arnd.