* [PATCH net-next 0/3] locklessly protect left members in struct rps_dev_flow
@ 2024-04-16  7:42 Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 1/3] net: rps: protect last_qtail with rps_input_queue_tail_save() helper Jason Xing
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Jason Xing @ 2024-04-16  7:42 UTC (permalink / raw)
  To: edumazet, kuba, pabeni, davem, horms; +Cc: netdev, kerneljasonxing, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

Since Eric already made a more involved lockless change to the last_qtail
member [1] in struct rps_dev_flow, the remaining members are easier to
convert in the same way.

[1]:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=3b4cf29bdab
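
For reference, the pattern applied here is the usual paired
READ_ONCE()/WRITE_ONCE() annotation on the remaining fields. A rough
sketch, with the struct layout assumed from include/net/rps.h (field
types may differ slightly in the tree this is based on):

	struct rps_dev_flow {
		u16 cpu;		 /* patch 3/3: READ_ONCE()/WRITE_ONCE() */
		u16 filter;		 /* patch 2/3: READ_ONCE()/WRITE_ONCE() */
		unsigned int last_qtail; /* patch 1/3: rps_input_queue_tail_save() */
	};

	/* Writer and reader pair up without a lock, e.g. for the cpu field: */
	WRITE_ONCE(rflow->cpu, next_cpu);	/* steering side */
	tcpu = READ_ONCE(rflow->cpu);		/* lookup side, possibly on another CPU */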

Jason Xing (3):
  net: rps: protect last_qtail with rps_input_queue_tail_save() helper
  net: rps: protect filter locklessly
  net: rps: locklessly access rflow->cpu

 net/core/dev.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

-- 
2.37.3



* [PATCH net-next 1/3] net: rps: protect last_qtail with rps_input_queue_tail_save() helper
  2024-04-16  7:42 [PATCH net-next 0/3] locklessly protect left members in struct rps_dev_flow Jason Xing
@ 2024-04-16  7:42 ` Jason Xing
  2024-04-17  0:26   ` Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 2/3] net: rps: protect filter locklessly Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 3/3] net: rps: locklessly access rflow->cpu Jason Xing
  2 siblings, 1 reply; 5+ messages in thread
From: Jason Xing @ 2024-04-16  7:42 UTC (permalink / raw)
  To: edumazet, kuba, pabeni, davem, horms; +Cc: netdev, kerneljasonxing, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

Only one remaining place should be protected locklessly. This patch converts it.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
 net/core/dev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 854a3a28a8d8..cd97eeae8218 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4501,7 +4501,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		struct netdev_rx_queue *rxqueue;
 		struct rps_dev_flow_table *flow_table;
 		struct rps_dev_flow *old_rflow;
-		u32 flow_id;
+		u32 flow_id, head;
 		u16 rxq_index;
 		int rc;
 
@@ -4529,8 +4529,8 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 			old_rflow->filter = RPS_NO_FILTER;
 	out:
 #endif
-		rflow->last_qtail =
-			READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
+		head = READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
+		rps_input_queue_tail_save(rflow->last_qtail, head);
 	}
 
 	rflow->cpu = next_cpu;
-- 
2.37.3



* [PATCH net-next 2/3] net: rps: protect filter locklessly
  2024-04-16  7:42 [PATCH net-next 0/3] locklessly protect left members in struct rps_dev_flow Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 1/3] net: rps: protect last_qtail with rps_input_queue_tail_save() helper Jason Xing
@ 2024-04-16  7:42 ` Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 3/3] net: rps: locklessly access rflow->cpu Jason Xing
  2 siblings, 0 replies; 5+ messages in thread
From: Jason Xing @ 2024-04-16  7:42 UTC (permalink / raw)
  To: edumazet, kuba, pabeni, davem, horms; +Cc: netdev, kerneljasonxing, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

rflow->filter can be written and read concurrently (set_rps_cpu() vs
rps_may_expire_flow()), so lockless access is needed.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
I'm not entirely sure whether the READ_ONCE() in set_rps_cpu() is useful. I
scanned the code and found no lock that prevents multiple threads from
calling set_rps_cpu() and handling the same flow simultaneously. The same
question also applies to patch [3/3].
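
To make the concern concrete, this is the window in question, trimmed from
the hunk below; two CPUs may reach it for the same flow_id with no lock held:

	old_rflow = rflow;
	rflow = &flow_table->flows[flow_id];
	WRITE_ONCE(rflow->filter, rc);		/* publish the new filter id */
	if (old_rflow->filter == READ_ONCE(rflow->filter))
		old_rflow->filter = RPS_NO_FILTER;	/* invalidate the stale entry */

As far as I understand, READ_ONCE()/WRITE_ONCE() only make the individual
loads and stores tear-free and keep the compiler from merging or refetching
them; they do not provide mutual exclusion between two concurrent callers.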
---
 net/core/dev.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index cd97eeae8218..6892682f9cbf 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4524,8 +4524,8 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 			goto out;
 		old_rflow = rflow;
 		rflow = &flow_table->flows[flow_id];
-		rflow->filter = rc;
-		if (old_rflow->filter == rflow->filter)
+		WRITE_ONCE(rflow->filter, rc);
+		if (old_rflow->filter == READ_ONCE(rflow->filter))
 			old_rflow->filter = RPS_NO_FILTER;
 	out:
 #endif
@@ -4666,7 +4666,7 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
 	if (flow_table && flow_id <= flow_table->mask) {
 		rflow = &flow_table->flows[flow_id];
 		cpu = READ_ONCE(rflow->cpu);
-		if (rflow->filter == filter_id && cpu < nr_cpu_ids &&
+		if (READ_ONCE(rflow->filter) == filter_id && cpu < nr_cpu_ids &&
 		    ((int)(READ_ONCE(per_cpu(softnet_data, cpu).input_queue_head) -
 			   READ_ONCE(rflow->last_qtail)) <
 		     (int)(10 * flow_table->mask)))
-- 
2.37.3



* [PATCH net-next 3/3] net: rps: locklessly access rflow->cpu
  2024-04-16  7:42 [PATCH net-next 0/3] locklessly protect left members in struct rps_dev_flow Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 1/3] net: rps: protect last_qtail with rps_input_queue_tail_save() helper Jason Xing
  2024-04-16  7:42 ` [PATCH net-next 2/3] net: rps: protect filter locklessly Jason Xing
@ 2024-04-16  7:42 ` Jason Xing
  2 siblings, 0 replies; 5+ messages in thread
From: Jason Xing @ 2024-04-16  7:42 UTC (permalink / raw)
  To: edumazet, kuba, pabeni, davem, horms; +Cc: netdev, kerneljasonxing, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

rflow->cpu is the last member in struct rps_dev_flow that should be
accessed locklessly, so finish the conversion.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
 net/core/dev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 6892682f9cbf..6003553c05fc 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4533,7 +4533,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		rps_input_queue_tail_save(rflow->last_qtail, head);
 	}
 
-	rflow->cpu = next_cpu;
+	WRITE_ONCE(rflow->cpu, next_cpu);
 	return rflow;
 }
 
@@ -4597,7 +4597,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		 * we can look at the local (per receive queue) flow table
 		 */
 		rflow = &flow_table->flows[hash & flow_table->mask];
-		tcpu = rflow->cpu;
+		tcpu = READ_ONCE(rflow->cpu);
 
 		/*
 		 * If the desired CPU (where last recvmsg was done) is
-- 
2.37.3



* Re: [PATCH net-next 1/3] net: rps: protect last_qtail with rps_input_queue_tail_save() helper
  2024-04-16  7:42 ` [PATCH net-next 1/3] net: rps: protect last_qtail with rps_input_queue_tail_save() helper Jason Xing
@ 2024-04-17  0:26   ` Jason Xing
  0 siblings, 0 replies; 5+ messages in thread
From: Jason Xing @ 2024-04-17  0:26 UTC (permalink / raw)
  To: edumazet, kuba, pabeni, davem, horms; +Cc: netdev, Jason Xing

On Tue, Apr 16, 2024 at 3:42 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> From: Jason Xing <kernelxing@tencent.com>
>
> Only one remaining place should be protected locklessly. This patch converts it.
>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
>  net/core/dev.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 854a3a28a8d8..cd97eeae8218 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4501,7 +4501,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>                 struct netdev_rx_queue *rxqueue;
>                 struct rps_dev_flow_table *flow_table;
>                 struct rps_dev_flow *old_rflow;
> -               u32 flow_id;
> +               u32 flow_id, head;
>                 u16 rxq_index;
>                 int rc;
>
> @@ -4529,8 +4529,8 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>                         old_rflow->filter = RPS_NO_FILTER;
>         out:
>  #endif
> -               rflow->last_qtail =
> -                       READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
> +               head = READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
> +               rps_input_queue_tail_save(rflow->last_qtail, head);

I made a mistake here: I should actually pass &rflow->last_qtail. Will update it.
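
Assuming the helper added by the referenced commit looks roughly like this
(a sketch, not copied from the tree):

	static inline void rps_input_queue_tail_save(u32 *dest, u32 tail)
	{
	#ifdef CONFIG_RPS
		WRITE_ONCE(*dest, tail);
	#endif
	}

the corrected call would be:

	head = READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
	rps_input_queue_tail_save(&rflow->last_qtail, head);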

>         }
>
>         rflow->cpu = next_cpu;
> --
> 2.37.3
>

