netdev.vger.kernel.org archive mirror
* [PATCH net-next v7 1/1] net:openvswitch:reduce cpu_used_mask memory
@ 2023-02-03 15:42 Eddy Tao
  2023-02-04 13:28 ` Simon Horman
  0 siblings, 1 reply; 3+ messages in thread
From: Eddy Tao @ 2023-02-03 15:42 UTC (permalink / raw)
  To: netdev
  Cc: Eddy Tao, Pravin B Shelar, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, dev, linux-kernel

Use the actual CPU count instead of a hardcoded value to decide the size
of 'cpu_used_mask' in 'struct sw_flow'. The reasoning is below.

'struct cpumask cpu_used_mask' is embedded in struct sw_flow.
Its size is hardcoded to CONFIG_NR_CPUS bits, which can be 8192 by
default; this costs memory and slows down ovs_flow_alloc()
(the larger structure takes longer to zero).

To address this:
 Redefine cpu_used_mask as a pointer.
 Append cpumask_size() bytes after 'stats' to hold the cpumask.
 Initialize cpu_used_mask right after stats_last_writer.

APIs such as cpumask_next() and cpumask_set_cpu() never access bits
beyond the CPU count, so cpumask_size() bytes of memory are enough.

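For clarity, below is a minimal user-space sketch of the layout this
patch creates: a single allocation holding the flow struct, the
nr_cpu_ids per-CPU stats pointers of the flexible array, and then the
cpumask storage, with the pointer member aimed at that tail. The names
demo_flow and demo_alloc are made up for this illustration and are not
part of the kernel code.

  /* Minimal, self-contained sketch of the single-allocation layout.
   * demo_flow/demo_alloc are hypothetical names, not kernel symbols.
   */
  #include <stdlib.h>

  struct demo_flow {
      int stats_last_writer;
      unsigned long *cpu_used_mask; /* points into the same allocation */
      void *stats[];                /* one slot per possible CPU */
  };

  static struct demo_flow *demo_alloc(unsigned int nr_cpu_ids,
                                      size_t mask_bytes)
  {
      /* One allocation: struct + nr_cpu_ids pointers + mask bytes,
       * mirroring sizeof(struct sw_flow)
       * + nr_cpu_ids * sizeof(struct sw_flow_stats *)
       * + cpumask_size() in the patch below.
       */
      struct demo_flow *f = calloc(1, sizeof(*f) +
                                      nr_cpu_ids * sizeof(void *) +
                                      mask_bytes);
      if (!f)
          return NULL;

      f->stats_last_writer = -1;
      /* The mask lives immediately after the last stats[] slot. */
      f->cpu_used_mask = (unsigned long *)&f->stats[nr_cpu_ids];
      return f;
  }

In the patch itself the extra cpumask_size() bytes come from the
kmem_cache created in ovs_flow_init(), and ovs_flow_alloc() points
flow->cpu_used_mask at &flow->stats[nr_cpu_ids], as the hunks below
show.
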
Signed-off-by: Eddy Tao <taoyuan_eddy@hotmail.com>
---
 net/openvswitch/flow.c       | 9 ++++++---
 net/openvswitch/flow.h       | 2 +-
 net/openvswitch/flow_table.c | 8 +++++---
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
index e20d1a973417..416976f70322 100644
--- a/net/openvswitch/flow.c
+++ b/net/openvswitch/flow.c
@@ -107,7 +107,8 @@ void ovs_flow_stats_update(struct sw_flow *flow, __be16 tcp_flags,
 
 					rcu_assign_pointer(flow->stats[cpu],
 							   new_stats);
-					cpumask_set_cpu(cpu, &flow->cpu_used_mask);
+					cpumask_set_cpu(cpu,
+							flow->cpu_used_mask);
 					goto unlock;
 				}
 			}
@@ -135,7 +136,8 @@ void ovs_flow_stats_get(const struct sw_flow *flow,
 	memset(ovs_stats, 0, sizeof(*ovs_stats));
 
 	/* We open code this to make sure cpu 0 is always considered */
-	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
+	for (cpu = 0; cpu < nr_cpu_ids;
+	     cpu = cpumask_next(cpu, flow->cpu_used_mask)) {
 		struct sw_flow_stats *stats = rcu_dereference_ovsl(flow->stats[cpu]);
 
 		if (stats) {
@@ -159,7 +161,8 @@ void ovs_flow_stats_clear(struct sw_flow *flow)
 	int cpu;
 
 	/* We open code this to make sure cpu 0 is always considered */
-	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
+	for (cpu = 0; cpu < nr_cpu_ids;
+	     cpu = cpumask_next(cpu, flow->cpu_used_mask)) {
 		struct sw_flow_stats *stats = ovsl_dereference(flow->stats[cpu]);
 
 		if (stats) {
diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h
index 073ab73ffeaa..b5711aff6e76 100644
--- a/net/openvswitch/flow.h
+++ b/net/openvswitch/flow.h
@@ -229,7 +229,7 @@ struct sw_flow {
 					 */
 	struct sw_flow_key key;
 	struct sw_flow_id id;
-	struct cpumask cpu_used_mask;
+	struct cpumask *cpu_used_mask;
 	struct sw_flow_mask *mask;
 	struct sw_flow_actions __rcu *sf_acts;
 	struct sw_flow_stats __rcu *stats[]; /* One for each CPU.  First one
diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 0a0e4c283f02..791504b7f42b 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -79,6 +79,7 @@ struct sw_flow *ovs_flow_alloc(void)
 		return ERR_PTR(-ENOMEM);
 
 	flow->stats_last_writer = -1;
+	flow->cpu_used_mask = (struct cpumask *)&flow->stats[nr_cpu_ids];
 
 	/* Initialize the default stat node. */
 	stats = kmem_cache_alloc_node(flow_stats_cache,
@@ -91,7 +92,7 @@ struct sw_flow *ovs_flow_alloc(void)
 
 	RCU_INIT_POINTER(flow->stats[0], stats);
 
-	cpumask_set_cpu(0, &flow->cpu_used_mask);
+	cpumask_set_cpu(0, flow->cpu_used_mask);
 
 	return flow;
 err:
@@ -115,7 +116,7 @@ static void flow_free(struct sw_flow *flow)
 					  flow->sf_acts);
 	/* We open code this to make sure cpu 0 is always considered */
 	for (cpu = 0; cpu < nr_cpu_ids;
-	     cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
+	     cpu = cpumask_next(cpu, flow->cpu_used_mask)) {
 		if (flow->stats[cpu])
 			kmem_cache_free(flow_stats_cache,
 					(struct sw_flow_stats __force *)flow->stats[cpu]);
@@ -1196,7 +1197,8 @@ int ovs_flow_init(void)
 
 	flow_cache = kmem_cache_create("sw_flow", sizeof(struct sw_flow)
 				       + (nr_cpu_ids
-					  * sizeof(struct sw_flow_stats *)),
+					  * sizeof(struct sw_flow_stats *))
+				       + cpumask_size(),
 				       0, 0, NULL);
 	if (flow_cache == NULL)
 		return -ENOMEM;
-- 
2.27.0



* Re: [PATCH net-next v7 1/1] net:openvswitch:reduce cpu_used_mask memory
  2023-02-03 15:42 [PATCH net-next v7 1/1] net:openvswitch:reduce cpu_used_mask memory Eddy Tao
@ 2023-02-04 13:28 ` Simon Horman
  2023-02-04 14:47   ` Eddy Tao
  0 siblings, 1 reply; 3+ messages in thread
From: Simon Horman @ 2023-02-04 13:28 UTC (permalink / raw)
  To: Eddy Tao
  Cc: netdev, Pravin B Shelar, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, dev, linux-kernel

On Fri, Feb 03, 2023 at 11:42:45PM +0800, Eddy Tao wrote:
> Use the actual CPU count instead of a hardcoded value to decide the size
> of 'cpu_used_mask' in 'struct sw_flow'. The reasoning is below.
> 
> 'struct cpumask cpu_used_mask' is embedded in struct sw_flow.
> Its size is hardcoded to CONFIG_NR_CPUS bits, which can be 8192 by
> default; this costs memory and slows down ovs_flow_alloc()
> (the larger structure takes longer to zero).
> 
> To address this:
>  Redefine cpu_used_mask as a pointer.
>  Append cpumask_size() bytes after 'stats' to hold the cpumask.
>  Initialize cpu_used_mask right after stats_last_writer.
> 
> APIs such as cpumask_next() and cpumask_set_cpu() never access bits
> beyond the CPU count, so cpumask_size() bytes of memory are enough.
> 
> Signed-off-by: Eddy Tao <taoyuan_eddy@hotmail.com>

nit: I think the correct prefix for the patch subject is 'openvswitch:'
     And there should be a space after the prefix.

[PATCH net-next v8 1/1] openvswitch: reduce cpu_used_mask


* Re: [PATCH net-next v7 1/1] net:openvswitch:reduce cpu_used_mask memory
  2023-02-04 13:28 ` Simon Horman
@ 2023-02-04 14:47   ` Eddy Tao
  0 siblings, 0 replies; 3+ messages in thread
From: Eddy Tao @ 2023-02-04 14:47 UTC (permalink / raw)
  To: Simon Horman
  Cc: netdev, Pravin B Shelar, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, dev, linux-kernel

Hi Simon,

     Thank you for taking the time to review.

I looked through the net folder and found a variety of prefixes:

'net:', 'net: gre:', 'net: bridge:', 'net: thunderx', 'net: sock',
'net: genetlink', and there are also examples of the style you
suggested, such as 'devlink:'.

Similarly, other folders show the same inconsistency; the mm folder
is one example.

I went through the links below and did not find any rule about the
prefix wording:

Link: https://docs.kernel.org/process/maintainer-netdev.html

Link: https://docs.kernel.org/process/submitting-patches.html#submittingpatches

Going through the git log of net/openvswitch/flow.c, the
'net: openvswitch: ' prefix was used in previous commits.

I think keeping 'net' in the prefix guards against name collisions
and stays consistent with the file's neighboring commits.

And yes, there should be a blank space after the colon; I missed
that and will fix it in the next revision, once we settle on the
wording of the prefix.


Thanks

Eddy

On 2023/2/4 21:28, Simon Horman wrote:
> nit: I think the correct prefix for the patch subject is 'openvswitch:'
>       And there should be a space after the prefix.
>
> [PATCH net-next v8 1/1] openvswitch: reduce cpu_used_mask

