* [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
@ 2016-10-13 10:57 Cheng Chao
  2016-10-13 11:11 ` Cheng Chao
  2016-10-13 15:31 ` Marc Zyngier
  0 siblings, 2 replies; 8+ messages in thread
From: Cheng Chao @ 2016-10-13 10:57 UTC (permalink / raw)
  To: tglx, jason, marc.zyngier; +Cc: linux-kernel, Cheng Chao

GIC can distribute an interrupt to more than one cpu,
but now, gic_set_affinity sets only one cpu to handle interrupt.

Signed-off-by: Cheng Chao <cs.os.kernel@gmail.com>
---
 drivers/irqchip/irq-gic.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 58e5b4e..198d33f 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -328,18 +328,38 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
 	u32 val, mask, bit;
 	unsigned long flags;
+	u32 valid_mask;
 
-	if (!force)
-		cpu = cpumask_any_and(mask_val, cpu_online_mask);
-	else
+	if (!force) {
+		valid_mask = cpumask_bits(mask_val)[0];
+		valid_mask &= cpumask_bits(cpu_online_mask)[0];
+
+		cpu = cpumask_any((struct cpumask *)&valid_mask);
+	} else {
 		cpu = cpumask_first(mask_val);
+	}
 
 	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
 		return -EINVAL;
 
 	gic_lock_irqsave(flags);
 	mask = 0xff << shift;
-	bit = gic_cpu_map[cpu] << shift;
+
+	if (!force) {
+		bit = 0;
+
+		for_each_cpu(cpu, (struct cpumask *)&valid_mask) {
+			if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+				break;
+
+			bit |= gic_cpu_map[cpu];
+		}
+
+		bit = bit << shift;
+	} else {
+		bit = gic_cpu_map[cpu] << shift;
+	}
+
 	val = readl_relaxed(reg) & ~mask;
 	writel_relaxed(val | bit, reg);
 	gic_unlock_irqrestore(flags);
-- 
2.4.11

^ permalink raw reply related	[flat|nested] 8+ messages in thread
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-13 10:57 [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu Cheng Chao
@ 2016-10-13 11:11 ` Cheng Chao
  2016-10-13 15:31 ` Marc Zyngier
  1 sibling, 0 replies; 8+ messages in thread
From: Cheng Chao @ 2016-10-13 11:11 UTC (permalink / raw)
  To: tglx, jason, marc.zyngier; +Cc: linux-kernel, cs.os.kernel

Hi,

This patch has been tested on the SoCs TI AM572x and HiSilicon Hi35xx;
it works. Please review this patch. Any suggestions are welcome, thanks.

Cheng

on 10/13/2016 06:57 PM, Cheng Chao wrote:
> GIC can distribute an interrupt to more than one cpu,
> but now, gic_set_affinity sets only one cpu to handle interrupt.
>
> Signed-off-by: Cheng Chao <cs.os.kernel@gmail.com>
> ---
>  drivers/irqchip/irq-gic.c | 28 ++++++++++++++++++++++++----
>  1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
> index 58e5b4e..198d33f 100644
> --- a/drivers/irqchip/irq-gic.c
> +++ b/drivers/irqchip/irq-gic.c
> @@ -328,18 +328,38 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>  	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
>  	u32 val, mask, bit;
>  	unsigned long flags;
> +	u32 valid_mask;
>
> -	if (!force)
> -		cpu = cpumask_any_and(mask_val, cpu_online_mask);
> -	else
> +	if (!force) {
> +		valid_mask = cpumask_bits(mask_val)[0];
> +		valid_mask &= cpumask_bits(cpu_online_mask)[0];
> +
> +		cpu = cpumask_any((struct cpumask *)&valid_mask);
> +	} else {
>  		cpu = cpumask_first(mask_val);
> +	}
>
>  	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
>  		return -EINVAL;
>
>  	gic_lock_irqsave(flags);
>  	mask = 0xff << shift;
> -	bit = gic_cpu_map[cpu] << shift;
> +
> +	if (!force) {
> +		bit = 0;
> +
> +		for_each_cpu(cpu, (struct cpumask *)&valid_mask) {
> +			if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
> +				break;
> +
> +			bit |= gic_cpu_map[cpu];
> +		}
> +
> +		bit = bit << shift;
> +	} else {
> +		bit = gic_cpu_map[cpu] << shift;
> +	}
> +
>  	val = readl_relaxed(reg) & ~mask;
>  	writel_relaxed(val | bit, reg);
>  	gic_unlock_irqrestore(flags);
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-13 10:57 [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu Cheng Chao
  2016-10-13 11:11 ` Cheng Chao
@ 2016-10-13 15:31 ` Marc Zyngier
  2016-10-14  2:08   ` Cheng Chao
  1 sibling, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2016-10-13 15:31 UTC (permalink / raw)
  To: Cheng Chao; +Cc: tglx, jason, linux-kernel

On Thu, 13 Oct 2016 18:57:14 +0800
Cheng Chao <cs.os.kernel@gmail.com> wrote:

> GIC can distribute an interrupt to more than one cpu,
> but now, gic_set_affinity sets only one cpu to handle interrupt.

What makes you think this is a good idea? What purpose does it serve?
I can only see drawbacks to this: you're waking up more than one CPU,
wasting power, adding jitter and clobbering the cache.

I assume you see a benefit to that approach, so can you please spell it
out?

>
> Signed-off-by: Cheng Chao <cs.os.kernel@gmail.com>
> ---
>  drivers/irqchip/irq-gic.c | 28 ++++++++++++++++++++++++----
>  1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
> index 58e5b4e..198d33f 100644
> --- a/drivers/irqchip/irq-gic.c
> +++ b/drivers/irqchip/irq-gic.c
> @@ -328,18 +328,38 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>  	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
>  	u32 val, mask, bit;
>  	unsigned long flags;
> +	u32 valid_mask;
>
> -	if (!force)
> -		cpu = cpumask_any_and(mask_val, cpu_online_mask);
> -	else
> +	if (!force) {
> +		valid_mask = cpumask_bits(mask_val)[0];
> +		valid_mask &= cpumask_bits(cpu_online_mask)[0];
> +
> +		cpu = cpumask_any((struct cpumask *)&valid_mask);

What is wrong with cpumask_any_and?

> +	} else {
>  		cpu = cpumask_first(mask_val);
> +	}
>
>  	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
>  		return -EINVAL;
>
>  	gic_lock_irqsave(flags);
>  	mask = 0xff << shift;
> -	bit = gic_cpu_map[cpu] << shift;
> +
> +	if (!force) {
> +		bit = 0;
> +
> +		for_each_cpu(cpu, (struct cpumask *)&valid_mask) {
> +			if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
> +				break;

Shouldn't that be an error?

> +
> +			bit |= gic_cpu_map[cpu];
> +		}
> +
> +		bit = bit << shift;
> +	} else {
> +		bit = gic_cpu_map[cpu] << shift;
> +	}
> +
>  	val = readl_relaxed(reg) & ~mask;
>  	writel_relaxed(val | bit, reg);
>  	gic_unlock_irqrestore(flags);

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-13 15:31 ` Marc Zyngier
@ 2016-10-14  2:08 ` Cheng Chao
  2016-10-14 17:33   ` Marc Zyngier
  0 siblings, 1 reply; 8+ messages in thread
From: Cheng Chao @ 2016-10-14 2:08 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: tglx, jason, linux-kernel, cs.os.kernel

Marc,

Thanks for your comments.

Cheng

on 10/13/2016 11:31 PM, Marc Zyngier wrote:
> On Thu, 13 Oct 2016 18:57:14 +0800
> Cheng Chao <cs.os.kernel@gmail.com> wrote:
>
>> GIC can distribute an interrupt to more than one cpu,
>> but now, gic_set_affinity sets only one cpu to handle interrupt.
>
> What makes you think this is a good idea? What purpose does it serve?
> I can only see drawbacks to this: you're waking up more than one CPU,
> wasting power, adding jitter and clobbering the cache.
>
> I assume you see a benefit to that approach, so can you please spell it
> out?
>

Ok, you are right, but performance is another point that we should consider.

We use an E1 device to transmit/receive video streams. We find that the E1's
interrupts are handled by only one cpu, which drives that cpu's usage to
almost 100% while the other cpus carry a much lower load, so the performance
is not good. The cpu is 4-core.

So is adding CONFIG_ARM_GIC_AFFINITY_SINGLE_CPU better? Thus we can make a
trade-off between the performance and the power etc.

>>
>> Signed-off-by: Cheng Chao <cs.os.kernel@gmail.com>
>> ---
>>  drivers/irqchip/irq-gic.c | 28 ++++++++++++++++++++++++----
>>  1 file changed, 24 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
>> index 58e5b4e..198d33f 100644
>> --- a/drivers/irqchip/irq-gic.c
>> +++ b/drivers/irqchip/irq-gic.c
>> @@ -328,18 +328,38 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>>  	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
>>  	u32 val, mask, bit;
>>  	unsigned long flags;
>> +	u32 valid_mask;
>>
>> -	if (!force)
>> -		cpu = cpumask_any_and(mask_val, cpu_online_mask);
>> -	else
>> +	if (!force) {
>> +		valid_mask = cpumask_bits(mask_val)[0];
>> +		valid_mask &= cpumask_bits(cpu_online_mask)[0];
>> +
>> +		cpu = cpumask_any((struct cpumask *)&valid_mask);
>
> What is wrong with cpumask_any_and?
>

#define cpumask_any_and(mask1, mask2) cpumask_first_and((mask1), (mask2))
#define cpumask_any(srcp) cpumask_first(srcp)

There is nothing wrong with cpumask_any_and.

>> +	} else {
>>  		cpu = cpumask_first(mask_val);
>> +	}
>>
>>  	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
>>  		return -EINVAL;
>>
>>  	gic_lock_irqsave(flags);
>>  	mask = 0xff << shift;
>> -	bit = gic_cpu_map[cpu] << shift;
>> +
>> +	if (!force) {
>> +		bit = 0;
>> +
>> +		for_each_cpu(cpu, (struct cpumask *)&valid_mask) {
>> +			if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
>> +				break;
>
> Shouldn't that be an error?
>

Tested, no error. At the beginning, I coded something like:

	cpumask_var_t valid_mask;

	alloc_cpumask_var(&valid_mask, GFP_KERNEL);
	cpumask_and(valid_mask, mask_val, cpu_online_mask);

	for_each_cpu(cpu, valid_mask) {
	}

but alloc_cpumask_var may fail, so it becomes

	if (!alloc_cpumask_var(&valid_mask, GFP_KERNEL)) {
		/* fail */
	} else {
	}

which is a little more complex.

>> +
>> +			bit |= gic_cpu_map[cpu];
>> +		}
>> +
>> +		bit = bit << shift;
>> +	} else {
>> +		bit = gic_cpu_map[cpu] << shift;
>> +	}
>> +
>>  	val = readl_relaxed(reg) & ~mask;
>>  	writel_relaxed(val | bit, reg);
>>  	gic_unlock_irqrestore(flags);
>
> Thanks,
>
> M.
>
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-14  2:08 ` Cheng Chao
@ 2016-10-14 17:33 ` Marc Zyngier
  2016-10-15  7:23   ` Cheng Chao
  0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2016-10-14 17:33 UTC (permalink / raw)
  To: Cheng Chao; +Cc: tglx, jason, linux-kernel

On 14/10/16 03:08, Cheng Chao wrote:
> Marc,
>
> Thanks for your comments.
>
> Cheng
>
> on 10/13/2016 11:31 PM, Marc Zyngier wrote:
>> On Thu, 13 Oct 2016 18:57:14 +0800
>> Cheng Chao <cs.os.kernel@gmail.com> wrote:
>>
>>> GIC can distribute an interrupt to more than one cpu,
>>> but now, gic_set_affinity sets only one cpu to handle interrupt.
>>
>> What makes you think this is a good idea? What purpose does it serve?
>> I can only see drawbacks to this: you're waking up more than one CPU,
>> wasting power, adding jitter and clobbering the cache.
>>
>> I assume you see a benefit to that approach, so can you please spell it
>> out?
>>
>
> Ok, you are right, but performance is another point that we should consider.
>
> We use an E1 device to transmit/receive video streams. We find that the E1's
> interrupts are handled by only one cpu, which drives that cpu's usage to
> almost 100% while the other cpus carry a much lower load, so the performance
> is not good. The cpu is 4-core.

It looks to me like you're barking up the wrong tree. We have
NAPI-enabled network drivers for this exact reason, and adding more
interrupts to an already overloaded system doesn't strike me as going in
the right direction. May I suggest that you look at integrating NAPI
into your E1 driver?

> So is adding CONFIG_ARM_GIC_AFFINITY_SINGLE_CPU better? Thus we can make a
> trade-off between the performance and the power etc.

No, that's pretty horrible, and I'm not even going to entertain the
idea. I suggest you start investigating how to mitigate your interrupt
rate instead of just taking more of them.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
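The shape of what Marc is suggesting: take one interrupt, mask the device, and drain the ring from a polled softirq context instead of taking an interrupt per frame. Since the E1 driver in question is not public, the sketch below is hypothetical kernel-style C: every `e1_*` identifier is invented, and only the `napi_*` calls themselves are real kernel API.

```c
/* Hedged sketch of NAPI integration for an E1-style driver.
 * e1_dev, e1_disable_irq, e1_enable_irq, e1_rx_one are invented names. */
static irqreturn_t e1_isr(int irq, void *data)
{
	struct e1_dev *e1 = data;

	e1_disable_irq(e1);           /* mask device interrupts ...        */
	napi_schedule(&e1->napi);     /* ... and defer the work to softirq */
	return IRQ_HANDLED;
}

static int e1_poll(struct napi_struct *napi, int budget)
{
	struct e1_dev *e1 = container_of(napi, struct e1_dev, napi);
	int done = 0;

	while (done < budget && e1_rx_one(e1))  /* drain in polled mode */
		done++;

	if (done < budget) {
		napi_complete(napi);  /* ring drained: re-enter irq mode */
		e1_enable_irq(e1);
	}
	return done;
}
```

Under sustained load the driver stays in polling mode and the interrupt rate collapses, which addresses the 100%-CPU symptom without touching interrupt routing.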
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-14 17:33 ` Marc Zyngier
@ 2016-10-15  7:23 ` Cheng Chao
  2016-10-25 10:09   ` Marc Zyngier
  0 siblings, 1 reply; 8+ messages in thread
From: Cheng Chao @ 2016-10-15 7:23 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: tglx, jason, linux-kernel, cs.os.kernel

On 10/15/2016 01:33 AM, Marc Zyngier wrote:
>> on 10/13/2016 11:31 PM, Marc Zyngier wrote:
>>> On Thu, 13 Oct 2016 18:57:14 +0800
>>> Cheng Chao <cs.os.kernel@gmail.com> wrote:
>>>
>>>> GIC can distribute an interrupt to more than one cpu,
>>>> but now, gic_set_affinity sets only one cpu to handle interrupt.
>>>
>>> What makes you think this is a good idea? What purpose does it serve?
>>> I can only see drawbacks to this: you're waking up more than one CPU,
>>> wasting power, adding jitter and clobbering the cache.
>>>
>>> I assume you see a benefit to that approach, so can you please spell it
>>> out?
>>>
>>
>> Ok, you are right, but performance is another point that we should consider.
>>
>> We use an E1 device to transmit/receive video streams. We find that the E1's
>> interrupts are handled by only one cpu, which drives that cpu's usage to
>> almost 100% while the other cpus carry a much lower load, so the performance
>> is not good. The cpu is 4-core.
>
> It looks to me like you're barking up the wrong tree. We have
> NAPI-enabled network drivers for this exact reason, and adding more
> interrupts to an already overloaded system doesn't strike me as going in
> the right direction. May I suggest that you look at integrating NAPI
> into your E1 driver?
>

Great, NAPI may be a good option; I can try to use NAPI. Thank you.

On the other hand, gic_set_affinity sets only one cpu to handle an
interrupt, and that really makes me a little confused: why doesn't the GIC
driver support many cpus handling an interrupt, like the others
(MPIC, APIC, etc.)?

It seems that the GIC driver constrains too much.

We can use /proc/irq/xx/smp_affinity to set what we expect:

  echo 1 > /proc/irq/xx/smp_affinity: the interrupt is on the first cpu.
  echo 2 > /proc/irq/xx/smp_affinity: the interrupt is on the second cpu.

but:

  echo 3 > /proc/irq/xx/smp_affinity: the interrupt is on the first cpu,
  and there is no interrupt on the second cpu.

What? Why does the second cpu get no interrupts? Regardless of:

>>> What makes you think this is a good idea? What purpose does it serve?
>>> I can only see drawbacks to this: you're waking up more than one CPU,
>>> wasting power, adding jitter and clobbering the cache.

I think it is more reasonable to let the user decide what to do.

If I care about the power etc., then I echo only a single cpu to
/proc/irq/xx/smp_affinity; but if I expect more than one cpu to handle
one special interrupt, I can echo the cpus I expect to
/proc/irq/xx/smp_affinity.

>> So is adding CONFIG_ARM_GIC_AFFINITY_SINGLE_CPU better? Thus we can make a
>> trade-off between the performance and the power etc.
>
> No, that's pretty horrible, and I'm not even going to entertain the
> idea.

Yes, in fact /proc/irq/xx/smp_affinity is enough.

> I suggest you start investigating how to mitigate your interrupt
> rate instead of just taking more of them.
>

Ok, thanks again.

> Thanks,
>
> M.
>
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-15  7:23 ` Cheng Chao
@ 2016-10-25 10:09 ` Marc Zyngier
  2016-10-26  2:04   ` Cheng Chao
  0 siblings, 1 reply; 8+ messages in thread
From: Marc Zyngier @ 2016-10-25 10:09 UTC (permalink / raw)
  To: Cheng Chao; +Cc: tglx, jason, linux-kernel

On 15/10/16 08:23, Cheng Chao wrote:
> On 10/15/2016 01:33 AM, Marc Zyngier wrote:
>> [...]
>> It looks to me like you're barking up the wrong tree. We have
>> NAPI-enabled network drivers for this exact reason, and adding more
>> interrupts to an already overloaded system doesn't strike me as going in
>> the right direction. May I suggest that you look at integrating NAPI
>> into your E1 driver?
>>
>
> Great, NAPI may be a good option; I can try to use NAPI. Thank you.
>
> On the other hand, gic_set_affinity sets only one cpu to handle an
> interrupt, and that really makes me a little confused: why doesn't the GIC
> driver support many cpus handling an interrupt, like the others
> (MPIC, APIC, etc.)?
>
> It seems that the GIC driver constrains too much.

There are several drawbacks to this:
- Cache impacts and power efficiency, as already mentioned
- Not virtualizable (you cannot efficiently implement this in a
  hypervisor that emulates a GICv2 distributor)
- Doesn't scale (you cannot go beyond 8 CPUs)

I strongly suggest you give NAPI a go, and only then consider
delivering interrupts to multiple CPUs, because multiple CPU
delivery is not future proof.

> I think it is more reasonable to let the user decide what to do.
>
> If I care about the power etc., then I echo only a single cpu to
> /proc/irq/xx/smp_affinity; but if I expect more than one cpu to handle
> one special interrupt, I can echo the cpus I expect to
> /proc/irq/xx/smp_affinity.

If that's what you really want, a better patch may be something like this:

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index d6c404b..b301d72 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -326,20 +326,25 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 {
 	void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
 	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
-	u32 val, mask, bit;
-	unsigned long flags;
+	u32 val, mask, bit = 0;
+	unsigned long flags, aff = 0;
 
-	if (!force)
-		cpu = cpumask_any_and(mask_val, cpu_online_mask);
-	else
-		cpu = cpumask_first(mask_val);
+	for_each_cpu(cpu, mask_val) {
+		if (force) {
+			aff = 1 << cpu;
+			break;
+		}
+
+		aff |= cpu_online(cpu) << cpu;
+	}
 
-	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+	if (!aff)
 		return -EINVAL;
 
 	gic_lock_irqsave(flags);
 	mask = 0xff << shift;
-	bit = gic_cpu_map[cpu] << shift;
+	for_each_set_bit(cpu, &aff, nr_cpu_ids)
+		bit |= gic_cpu_map[cpu] << shift;
 	val = readl_relaxed(reg) & ~mask;
 	writel_relaxed(val | bit, reg);
 	gic_unlock_irqrestore(flags);

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
* Re: [PATCH] irqchip/gic: Enable gic_set_affinity set more than one cpu
  2016-10-25 10:09 ` Marc Zyngier
@ 2016-10-26  2:04 ` Cheng Chao
  0 siblings, 0 replies; 8+ messages in thread
From: Cheng Chao @ 2016-10-26 2:04 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: tglx, jason, linux-kernel, cs.os.kernel

on 10/25/2016 06:09 PM, Marc Zyngier wrote:
> On 15/10/16 08:23, Cheng Chao wrote:
>> [...]
>> On the other hand, gic_set_affinity sets only one cpu to handle an
>> interrupt, and that really makes me a little confused: why doesn't the GIC
>> driver support many cpus handling an interrupt, like the others
>> (MPIC, APIC, etc.)?
>>
>> It seems that the GIC driver constrains too much.
>
> There are several drawbacks to this:
> - Cache impacts and power efficiency, as already mentioned
> - Not virtualizable (you cannot efficiently implement this in a
>   hypervisor that emulates a GICv2 distributor)
> - Doesn't scale (you cannot go beyond 8 CPUs)
>
> I strongly suggest you give NAPI a go, and only then consider
> delivering interrupts to multiple CPUs, because multiple CPU
> delivery is not future proof.
>

Thanks again; the E1 driver with NAPI is on the right track.

>> I think it is more reasonable to let the user decide what to do.
>>
>> If I care about the power etc., then I echo only a single cpu to
>> /proc/irq/xx/smp_affinity; but if I expect more than one cpu to handle
>> one special interrupt, I can echo the cpus I expect to
>> /proc/irq/xx/smp_affinity.
>
> If that's what you really want, a better patch may be something like this:
>

I hope the GIC driver can be more flexible, and that gic_set_affinity()
doesn't constrain the target to a single cpu; the GIC supports
distributing an interrupt to more than one cpu, after all.

> diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
> index d6c404b..b301d72 100644
> --- a/drivers/irqchip/irq-gic.c
> +++ b/drivers/irqchip/irq-gic.c
> @@ -326,20 +326,25 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
>  {
>  	void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
>  	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
> -	u32 val, mask, bit;
> -	unsigned long flags;
> +	u32 val, mask, bit = 0;
> +	unsigned long flags, aff = 0;
>
> -	if (!force)
> -		cpu = cpumask_any_and(mask_val, cpu_online_mask);
> -	else
> -		cpu = cpumask_first(mask_val);
> +	for_each_cpu(cpu, mask_val) {
> +		if (force) {
> +			aff = 1 << cpu;
> +			break;
> +		}
> +
> +		aff |= cpu_online(cpu) << cpu;
> +	}
>
> -	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
> +	if (!aff)
>  		return -EINVAL;
>
>  	gic_lock_irqsave(flags);
>  	mask = 0xff << shift;
> -	bit = gic_cpu_map[cpu] << shift;
> +	for_each_set_bit(cpu, &aff, nr_cpu_ids)
> +		bit |= gic_cpu_map[cpu] << shift;
>  	val = readl_relaxed(reg) & ~mask;
>  	writel_relaxed(val | bit, reg);
>  	gic_unlock_irqrestore(flags);
>

This patch is better than before; I added a small check.

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 58e5b4e..b3d0f07 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -326,20 +326,28 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
 {
 	void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
 	unsigned int cpu, shift = (gic_irq(d) % 4) * 8;
-	u32 val, mask, bit;
-	unsigned long flags;
+	u32 val, mask, bit = 0;
+	unsigned long flags, aff = 0;
 
-	if (!force)
-		cpu = cpumask_any_and(mask_val, cpu_online_mask);
-	else
-		cpu = cpumask_first(mask_val);
+	for_each_cpu(cpu, mask_val) {
+		if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+			break;
+
+		if (force) {
+			aff = 1 << cpu;
+			break;
+		}
+
+		aff |= cpu_online(cpu) << cpu;
+	}
 
-	if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)
+	if (!aff)
 		return -EINVAL;
 
 	gic_lock_irqsave(flags);
 	mask = 0xff << shift;
-	bit = gic_cpu_map[cpu] << shift;
+	for_each_set_bit(cpu, &aff, nr_cpu_ids)
+		bit |= gic_cpu_map[cpu] << shift;
 	val = readl_relaxed(reg) & ~mask;
 	writel_relaxed(val | bit, reg);
 	gic_unlock_irqrestore(flags);

> Thanks,
>
> M.
>

Thanks,

Cheng