Linux-api Archive on lore.kernel.org
* [PATCH v4 0/3] Preventing job distribution to isolated CPUs
@ 2020-06-25 22:34 Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs Nitesh Narayan Lal
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Nitesh Narayan Lal @ 2020-06-25 22:34 UTC (permalink / raw)
  To: linux-kernel, linux-api, frederic, mtosatti, juri.lelli, abelits,
	bhelgaas, linux-pci, rostedt, mingo, peterz, tglx, davem, akpm,
	sfr, stephen, rppt, jinyuqi, zhangshaokun

This patch set originates from one of the patches posted earlier as
part of the "Task_isolation" mode [1] patch series by Alex Belits
<abelits@marvell.com>. I am proposing only a couple of changes in
this patch set compared to what Alex posted earlier.
 
 
Context
=======
At a broad level, all three patches included in this patch set make
the affected driver/library code respect isolated CPUs by not
pinning any job on them. Not doing so could hurt latency in RT
use cases.


Patches
=======
* Patch1:
  The first patch makes cpumask_local_spread() aware of the
  isolated CPUs. It ensures that the CPUs returned by this API
  include only housekeeping CPUs.

* Patch2:
  This patch ensures that a probe function called via
  work_on_cpu() doesn't run any task on an isolated CPU.

* Patch3:
  This patch makes store_rps_map() aware of the isolated
  CPUs so that RPS doesn't queue any jobs on an isolated CPU.


Proposed Changes
================
To fix the above-mentioned issues, Alex used housekeeping_cpumask().
The only changes that I am proposing here are:
- Removing the dependency on CONFIG_TASK_ISOLATION that Alex proposed,
  as it is safe to rely on housekeeping_cpumask() even when we don't
  have any isolated CPUs and want to fall back to using all available
  CPUs in any of the above scenarios.
- Using both HK_FLAG_DOMAIN and HK_FLAG_WQ in Patches 2 & 3, because
  we want the above fixes not only when we have isolcpus but also with
  something like systemd's CPUAffinity.


Testing
=======
* Patch 1:
  The fix for cpumask_local_spread() was tested by creating VFs, loading
  the iavf module, and adding a tracepoint to confirm that only
  housekeeping CPUs are picked when an appropriate profile is set up,
  and all remaining CPUs when no CPU isolation is configured.

* Patch 2:
  To test the PCI fix, I hot-plugged a virtio-net-pci device from the
  qemu console and forced its addition to a specific node to trigger the
  code path that includes the proposed fix, then verified via tracepoint
  that only housekeeping CPUs are included.

* Patch 3:
  To test the fix in store_rps_map(), I tried configuring an isolated
  CPU by writing to /sys/class/net/en*/queues/rx*/rps_cpus, which
  resulted in a 'write error: Invalid argument'. Writing a mask of
  non-isolated CPUs to rps_cpus succeeded without any error.


Changes from v3[2]:
==================
- In patch 1, replaced HK_FLAG_WQ with HK_FLAG_MANAGED_IRQ based on the
  suggestion from Frederic Weisbecker.

Changes from v2[3]:
==================
Both the following suggestions are from Peter Zijlstra.
- Patch1: Removed the extra while loop from cpumask_local_spread() and
  fixed the code styling issues.
- Patch3: Changed to use cpumask_empty() for verifying that the requested
  CPUs are available among the housekeeping CPUs.

Changes from v1[4]:
==================
- Included the suggestions made by Bjorn Helgaas in the commit message.
- Included the 'Reviewed-by' and 'Acked-by' received for Patch-2.


[1] https://patchwork.ozlabs.org/project/netdev/patch/51102eebe62336c6a4e584c7a503553b9f90e01c.camel@marvell.com/
[2] https://patchwork.ozlabs.org/project/linux-pci/cover/20200623192331.215557-1-nitesh@redhat.com/
[3] https://patchwork.ozlabs.org/project/linux-pci/cover/20200622234510.240834-1-nitesh@redhat.com/
[4] https://patchwork.ozlabs.org/project/linux-pci/cover/20200610161226.424337-1-nitesh@redhat.com/


Alex Belits (3):
  lib: Restrict cpumask_local_spread to houskeeping CPUs
  PCI: Restrict probe functions to housekeeping CPUs
  net: Restrict receive packets queuing to housekeeping CPUs

 drivers/pci/pci-driver.c |  5 ++++-
 lib/cpumask.c            | 16 +++++++++++-----
 net/core/net-sysfs.c     | 10 +++++++++-
 3 files changed, 24 insertions(+), 7 deletions(-)

-- 



* [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
  2020-06-25 22:34 [PATCH v4 0/3] Preventing job distribution to isolated CPUs Nitesh Narayan Lal
@ 2020-06-25 22:34 ` Nitesh Narayan Lal
  2020-06-29 16:11   ` Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 2/3] PCI: Restrict probe functions to housekeeping CPUs Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 3/3] net: Restrict receive packets queuing " Nitesh Narayan Lal
  2 siblings, 1 reply; 9+ messages in thread
From: Nitesh Narayan Lal @ 2020-06-25 22:34 UTC (permalink / raw)
  To: linux-kernel, linux-api, frederic, mtosatti, juri.lelli, abelits,
	bhelgaas, linux-pci, rostedt, mingo, peterz, tglx, davem, akpm,
	sfr, stephen, rppt, jinyuqi, zhangshaokun

From: Alex Belits <abelits@marvell.com>

The current implementation of cpumask_local_spread() does not respect
isolated CPUs, i.e., even if a CPU has been isolated for a real-time
task, it will still be returned to the caller for pinning IRQ threads.
Having these unwanted IRQ threads on an isolated CPU adds a latency
overhead.

Restrict the CPUs returned for spreading IRQs to the available
housekeeping CPUs only.

Signed-off-by: Alex Belits <abelits@marvell.com>
Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
---
 lib/cpumask.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/lib/cpumask.c b/lib/cpumask.c
index fb22fb266f93..85da6ab4fbb5 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -6,6 +6,7 @@
 #include <linux/export.h>
 #include <linux/memblock.h>
 #include <linux/numa.h>
+#include <linux/sched/isolation.h>
 
 /**
  * cpumask_next - get the next cpu in a cpumask
@@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu;
+	int cpu, hk_flags;
+	const struct cpumask *mask;
 
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
+	mask = housekeeping_cpumask(hk_flags);
 	/* Wrap: we always want a cpu. */
-	i %= num_online_cpus();
+	i %= cpumask_weight(mask);
 
 	if (node == NUMA_NO_NODE) {
-		for_each_cpu(cpu, cpu_online_mask)
+		for_each_cpu(cpu, mask) {
 			if (i-- == 0)
 				return cpu;
+		}
 	} else {
 		/* NUMA first. */
-		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
+		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
 			if (i-- == 0)
 				return cpu;
+		}
 
-		for_each_cpu(cpu, cpu_online_mask) {
+		for_each_cpu(cpu, mask) {
 			/* Skip NUMA nodes, done above. */
 			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
 				continue;
-- 
2.18.4



* [Patch v4 2/3] PCI: Restrict probe functions to housekeeping CPUs
  2020-06-25 22:34 [PATCH v4 0/3] Preventing job distribution to isolated CPUs Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs Nitesh Narayan Lal
@ 2020-06-25 22:34 ` Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 3/3] net: Restrict receive packets queuing " Nitesh Narayan Lal
  2 siblings, 0 replies; 9+ messages in thread
From: Nitesh Narayan Lal @ 2020-06-25 22:34 UTC (permalink / raw)
  To: linux-kernel, linux-api, frederic, mtosatti, juri.lelli, abelits,
	bhelgaas, linux-pci, rostedt, mingo, peterz, tglx, davem, akpm,
	sfr, stephen, rppt, jinyuqi, zhangshaokun

From: Alex Belits <abelits@marvell.com>

pci_call_probe() prevents work_on_cpu() from nesting in the scenario
where a VF device is probed from the work_on_cpu() context of its PF.

Replace the cpumask used in pci_call_probe() from all online CPUs to
only the housekeeping CPUs, ensuring that no additional latency
overhead is caused by pinning jobs on isolated CPUs.

Signed-off-by: Alex Belits <abelits@marvell.com>
Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
---
 drivers/pci/pci-driver.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index da6510af1221..449466f71040 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -12,6 +12,7 @@
 #include <linux/string.h>
 #include <linux/slab.h>
 #include <linux/sched.h>
+#include <linux/sched/isolation.h>
 #include <linux/cpu.h>
 #include <linux/pm_runtime.h>
 #include <linux/suspend.h>
@@ -333,6 +334,7 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 			  const struct pci_device_id *id)
 {
 	int error, node, cpu;
+	int hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
 	struct drv_dev_and_id ddi = { drv, dev, id };
 
 	/*
@@ -353,7 +355,8 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 	    pci_physfn_is_probed(dev))
 		cpu = nr_cpu_ids;
 	else
-		cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
+		cpu = cpumask_any_and(cpumask_of_node(node),
+				      housekeeping_cpumask(hk_flags));
 
 	if (cpu < nr_cpu_ids)
 		error = work_on_cpu(cpu, local_pci_probe, &ddi);
-- 
2.18.4



* [Patch v4 3/3] net: Restrict receive packets queuing to housekeeping CPUs
  2020-06-25 22:34 [PATCH v4 0/3] Preventing job distribution to isolated CPUs Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs Nitesh Narayan Lal
  2020-06-25 22:34 ` [Patch v4 2/3] PCI: Restrict probe functions to housekeeping CPUs Nitesh Narayan Lal
@ 2020-06-25 22:34 ` Nitesh Narayan Lal
  2020-06-26 11:14   ` Peter Zijlstra
  2 siblings, 1 reply; 9+ messages in thread
From: Nitesh Narayan Lal @ 2020-06-25 22:34 UTC (permalink / raw)
  To: linux-kernel, linux-api, frederic, mtosatti, juri.lelli, abelits,
	bhelgaas, linux-pci, rostedt, mingo, peterz, tglx, davem, akpm,
	sfr, stephen, rppt, jinyuqi, zhangshaokun

From: Alex Belits <abelits@marvell.com>

With the existing implementation of store_rps_map(), packets are queued
in the receive path on the backlog queues of other CPUs irrespective of
whether they are isolated or not. This could add a latency overhead to
any RT workload that is running on the same CPU.

Ensure that store_rps_map() only uses available housekeeping CPUs for
storing the rps_map.

Signed-off-by: Alex Belits <abelits@marvell.com>
Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
---
 net/core/net-sysfs.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index e353b822bb15..677868fea316 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -11,6 +11,7 @@
 #include <linux/if_arp.h>
 #include <linux/slab.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/isolation.h>
 #include <linux/nsproxy.h>
 #include <net/sock.h>
 #include <net/net_namespace.h>
@@ -741,7 +742,7 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 {
 	struct rps_map *old_map, *map;
 	cpumask_var_t mask;
-	int err, cpu, i;
+	int err, cpu, i, hk_flags;
 	static DEFINE_MUTEX(rps_map_mutex);
 
 	if (!capable(CAP_NET_ADMIN))
@@ -756,6 +757,13 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 		return err;
 	}
 
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+	cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
+	if (cpumask_empty(mask)) {
+		free_cpumask_var(mask);
+		return -EINVAL;
+	}
+
 	map = kzalloc(max_t(unsigned int,
 			    RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
 		      GFP_KERNEL);
-- 
2.18.4



* Re: [Patch v4 3/3] net: Restrict receive packets queuing to housekeeping CPUs
  2020-06-25 22:34 ` [Patch v4 3/3] net: Restrict receive packets queuing " Nitesh Narayan Lal
@ 2020-06-26 11:14   ` Peter Zijlstra
  2020-06-26 17:20     ` David Miller
  0 siblings, 1 reply; 9+ messages in thread
From: Peter Zijlstra @ 2020-06-26 11:14 UTC (permalink / raw)
  To: Nitesh Narayan Lal
  Cc: linux-kernel, linux-api, frederic, mtosatti, juri.lelli, abelits,
	bhelgaas, linux-pci, rostedt, mingo, tglx, davem, akpm, sfr,
	stephen, rppt, jinyuqi, zhangshaokun

On Thu, Jun 25, 2020 at 06:34:43PM -0400, Nitesh Narayan Lal wrote:
> From: Alex Belits <abelits@marvell.com>
> 
> With the existing implementation of store_rps_map(), packets are queued
> in the receive path on the backlog queues of other CPUs irrespective of
> whether they are isolated or not. This could add a latency overhead to
> any RT workload that is running on the same CPU.
> 
> Ensure that store_rps_map() only uses available housekeeping CPUs for
> storing the rps_map.
> 
> Signed-off-by: Alex Belits <abelits@marvell.com>
> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>

Dave, ACK if I route this?


* Re: [Patch v4 3/3] net: Restrict receive packets queuing to housekeeping CPUs
  2020-06-26 11:14   ` Peter Zijlstra
@ 2020-06-26 17:20     ` David Miller
  0 siblings, 0 replies; 9+ messages in thread
From: David Miller @ 2020-06-26 17:20 UTC (permalink / raw)
  To: peterz
  Cc: nitesh, linux-kernel, linux-api, frederic, mtosatti, juri.lelli,
	abelits, bhelgaas, linux-pci, rostedt, mingo, tglx, akpm, sfr,
	stephen, rppt, jinyuqi, zhangshaokun

From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 26 Jun 2020 13:14:01 +0200

> On Thu, Jun 25, 2020 at 06:34:43PM -0400, Nitesh Narayan Lal wrote:
>> From: Alex Belits <abelits@marvell.com>
>> 
>> With the existing implementation of store_rps_map(), packets are queued
>> in the receive path on the backlog queues of other CPUs irrespective of
>> whether they are isolated or not. This could add a latency overhead to
>> any RT workload that is running on the same CPU.
>> 
>> Ensure that store_rps_map() only uses available housekeeping CPUs for
>> storing the rps_map.
>> 
>> Signed-off-by: Alex Belits <abelits@marvell.com>
>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
> 
> Dave, ACK if I route this?

No problem:

Acked-by: David S. Miller <davem@davemloft.net>


* Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
  2020-06-25 22:34 ` [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs Nitesh Narayan Lal
@ 2020-06-29 16:11   ` Nitesh Narayan Lal
  2020-07-01  0:32     ` Andrew Morton
  0 siblings, 1 reply; 9+ messages in thread
From: Nitesh Narayan Lal @ 2020-06-29 16:11 UTC (permalink / raw)
  To: linux-kernel, linux-api, peterz
  Cc: frederic, mtosatti, juri.lelli, abelits, bhelgaas, linux-pci,
	rostedt, mingo, tglx, davem, akpm, sfr, stephen, rppt, jinyuqi,
	zhangshaokun



On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
> From: Alex Belits <abelits@marvell.com>
>
> The current implementation of cpumask_local_spread() does not respect the
> isolated CPUs, i.e., even if a CPU has been isolated for Real-Time task,
> it will return it to the caller for pinning of its IRQ threads. Having
> these unwanted IRQ threads on an isolated CPU adds up to a latency
> overhead.
>
> Restrict the CPUs that are returned for spreading IRQs only to the
> available housekeeping CPUs.
>
> Signed-off-by: Alex Belits <abelits@marvell.com>
> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>

Hi Peter,

I just realized that Yuqi jin's patch [1], which modifies
cpumask_local_spread(), is sitting in linux-next.
Should I do a re-post after rebasing the patches on top of linux-next?

[1]
https://lore.kernel.org/lkml/1582768688-2314-1-git-send-email-zhangshaokun@hisilicon.com/

> ---
>  lib/cpumask.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/lib/cpumask.c b/lib/cpumask.c
> index fb22fb266f93..85da6ab4fbb5 100644
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -6,6 +6,7 @@
>  #include <linux/export.h>
>  #include <linux/memblock.h>
>  #include <linux/numa.h>
> +#include <linux/sched/isolation.h>
>  
>  /**
>   * cpumask_next - get the next cpu in a cpumask
> @@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>   */
>  unsigned int cpumask_local_spread(unsigned int i, int node)
>  {
> -	int cpu;
> +	int cpu, hk_flags;
> +	const struct cpumask *mask;
>  
> +	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
> +	mask = housekeeping_cpumask(hk_flags);
>  	/* Wrap: we always want a cpu. */
> -	i %= num_online_cpus();
> +	i %= cpumask_weight(mask);
>  
>  	if (node == NUMA_NO_NODE) {
> -		for_each_cpu(cpu, cpu_online_mask)
> +		for_each_cpu(cpu, mask) {
>  			if (i-- == 0)
>  				return cpu;
> +		}
>  	} else {
>  		/* NUMA first. */
> -		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
> +		for_each_cpu_and(cpu, cpumask_of_node(node), mask) {
>  			if (i-- == 0)
>  				return cpu;
> +		}
>  
> -		for_each_cpu(cpu, cpu_online_mask) {
> +		for_each_cpu(cpu, mask) {
>  			/* Skip NUMA nodes, done above. */
>  			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>  				continue;
-- 
Nitesh




* Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
  2020-06-29 16:11   ` Nitesh Narayan Lal
@ 2020-07-01  0:32     ` Andrew Morton
  2020-07-01  0:47       ` Nitesh Narayan Lal
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2020-07-01  0:32 UTC (permalink / raw)
  To: Nitesh Narayan Lal
  Cc: linux-kernel, linux-api, peterz, frederic, mtosatti, juri.lelli,
	abelits, bhelgaas, linux-pci, rostedt, mingo, tglx, davem, sfr,
	stephen, rppt, jinyuqi, zhangshaokun

On Mon, 29 Jun 2020 12:11:25 -0400 Nitesh Narayan Lal <nitesh@redhat.com> wrote:

> 
> On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
> > From: Alex Belits <abelits@marvell.com>
> >
> > The current implementation of cpumask_local_spread() does not respect the
> > isolated CPUs, i.e., even if a CPU has been isolated for Real-Time task,
> > it will return it to the caller for pinning of its IRQ threads. Having
> > these unwanted IRQ threads on an isolated CPU adds up to a latency
> > overhead.
> >
> > Restrict the CPUs that are returned for spreading IRQs only to the
> > available housekeeping CPUs.
> >
> > Signed-off-by: Alex Belits <abelits@marvell.com>
> > Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
> 
> Hi Peter,
> 
> I just realized that Yuqi jin's patch [1] that modifies cpumask_local_spread is
> lying in linux-next.
> Should I do a re-post by re-basing the patches on the top of linux-next?
> 
> [1]
> https://lore.kernel.org/lkml/1582768688-2314-1-git-send-email-zhangshaokun@hisilicon.com/

This patch has had some review difficulties and has been pending for
quite some time.  I suggest you base your work on mainline and that we
ask Yuqi jin to rebase on that, if I don't feel confident doing it
myself.



* Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to houskeeping CPUs
  2020-07-01  0:32     ` Andrew Morton
@ 2020-07-01  0:47       ` Nitesh Narayan Lal
  0 siblings, 0 replies; 9+ messages in thread
From: Nitesh Narayan Lal @ 2020-07-01  0:47 UTC (permalink / raw)
  To: Andrew Morton, peterz
  Cc: linux-kernel, linux-api, frederic, mtosatti, juri.lelli, abelits,
	bhelgaas, linux-pci, rostedt, mingo, tglx, davem, sfr, stephen,
	rppt, jinyuqi, zhangshaokun



On 6/30/20 8:32 PM, Andrew Morton wrote:
> On Mon, 29 Jun 2020 12:11:25 -0400 Nitesh Narayan Lal <nitesh@redhat.com> wrote:
>
>> On 6/25/20 6:34 PM, Nitesh Narayan Lal wrote:
>>> From: Alex Belits <abelits@marvell.com>
>>>
>>> The current implementation of cpumask_local_spread() does not respect the
>>> isolated CPUs, i.e., even if a CPU has been isolated for Real-Time task,
>>> it will return it to the caller for pinning of its IRQ threads. Having
>>> these unwanted IRQ threads on an isolated CPU adds up to a latency
>>> overhead.
>>>
>>> Restrict the CPUs that are returned for spreading IRQs only to the
>>> available housekeeping CPUs.
>>>
>>> Signed-off-by: Alex Belits <abelits@marvell.com>
>>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
>> Hi Peter,
>>
>> I just realized that Yuqi jin's patch [1] that modifies cpumask_local_spread is
>> lying in linux-next.
>> Should I do a re-post by re-basing the patches on the top of linux-next?
>>
>> [1]
>> https://lore.kernel.org/lkml/1582768688-2314-1-git-send-email-zhangshaokun@hisilicon.com/
> This patch has had some review difficulties and has been pending for
> quite some time.  I suggest you base your work on mainline and that we
> ask Yuqi jin to rebase on that, if I don't feel confident doing it,
>

I see. In that case it should be fine to go ahead with this patch set,
as I had already based it on top of the latest master before posting.

-- 
Thanks
Nitesh




