On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
> From: Alex Belits
>
> The current implementation of cpumask_local_spread() does not respect
> isolated CPUs: even if a CPU has been isolated for a real-time task, it
> will still be returned to the caller for pinning of its IRQ threads.
> Having these unwanted IRQ threads on an isolated CPU adds latency
> overhead.
>
> Restrict the CPUs that are returned for spreading IRQs to the available
> housekeeping CPUs only.
>
> Signed-off-by: Alex Belits
> Signed-off-by: Nitesh Narayan Lal

Hi Peter,

I just realized that Yuqi Jin's patch [1], which also modifies
cpumask_local_spread(), is already sitting in linux-next. Should I re-post
these patches rebased on top of linux-next?

[1] https://lore.kernel.org/lkml/1582768688-2314-1-git-send-email-zhangshaokun@hisilicon.com/

> ---
>  lib/cpumask.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/lib/cpumask.c b/lib/cpumask.c
> index fb22fb266f93..85da6ab4fbb5 100644
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -6,6 +6,7 @@
>  #include <linux/export.h>
>  #include <linux/memblock.h>
>  #include <linux/numa.h>
> +#include <linux/sched/isolation.h>
>
>  /**
>   * cpumask_next - get the next cpu in a cpumask
> @@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>   */
>  unsigned int cpumask_local_spread(unsigned int i, int node)
>  {
> -        int cpu;
> +        int cpu, hk_flags;
> +        const struct cpumask *mask;
>
> +        hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
> +        mask = housekeeping_cpumask(hk_flags);
>          /* Wrap: we always want a cpu. */
> -        i %= num_online_cpus();
> +        i %= cpumask_weight(mask);
>
>          if (node == NUMA_NO_NODE) {
> -                for_each_cpu(cpu, cpu_online_mask)
> +                for_each_cpu(cpu, mask) {
>                          if (i-- == 0)
>                                  return cpu;
> +                }
>          } else {
>                  /* NUMA first. */
> -                for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
> +                for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>                          if (i-- == 0)
>                                  return cpu;
> +                }
>
> -                for_each_cpu(cpu, cpu_online_mask) {
> +                for_each_cpu(cpu, mask) {
>                          /* Skip NUMA nodes, done above. */
>                          if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>                                  continue;

--
Nitesh
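
For context, here is a minimal sketch of the caller pattern this change
affects. The helper name and parameters are illustrative only (not taken
from the patch or any in-tree driver); it shows how a driver typically uses
cpumask_local_spread() to pick a CPU per queue and hint the IRQ affinity.
With the patch applied, the CPUs returned here exclude isolated
(non-housekeeping) ones.

    #include <linux/cpumask.h>
    #include <linux/interrupt.h>

    /* Hypothetical driver helper: pick a CPU for IRQ @irq of queue @queue. */
    static void example_set_queue_irq_affinity(int irq, unsigned int queue,
                                               int node)
    {
            /*
             * With the patch above, this returns the queue-th housekeeping
             * CPU, preferring CPUs local to @node.
             */
            unsigned int cpu = cpumask_local_spread(queue, node);

            /* Suggest pinning this queue's IRQ to the selected CPU. */
            irq_set_affinity_hint(irq, cpumask_of(cpu));
    }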