* [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields
@ 2023-02-03  7:55 Geliang Tang
  2023-02-03  9:12 ` mptcp: update userspace pm mptcp_info fields: Tests Results MPTCP CI
  2023-02-08 17:43 ` [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields Matthieu Baerts
  0 siblings, 2 replies; 5+ messages in thread
From: Geliang Tang @ 2023-02-03  7:55 UTC (permalink / raw)
  To: mptcp; +Cc: Geliang Tang

Increase pm subflows counter when userspace pm creates a new subflow,
and decrease the counter when it closes a subflow.

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/329
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/330
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
v3:
 - update local_addr_used and add_addr_signaled

v2:
 - hold pm locks
---
 net/mptcp/pm_userspace.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
index a02d3cbf2a1b..ba8ad500993c 100644
--- a/net/mptcp/pm_userspace.c
+++ b/net/mptcp/pm_userspace.c
@@ -171,6 +171,7 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
 	spin_lock_bh(&msk->pm.lock);
 
 	if (mptcp_pm_alloc_anno_list(msk, &addr_val)) {
+		msk->pm.add_addr_signaled++;
 		mptcp_pm_announce_addr(msk, &addr_val.addr, false);
 		mptcp_pm_nl_addr_send_ack(msk);
 	}
@@ -240,6 +241,10 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
 		sock_kfree_s((struct sock *)msk, match, sizeof(*match));
 	}
 
+	spin_lock_bh(&msk->pm.lock);
+	msk->pm.subflows--;
+	spin_unlock_bh(&msk->pm.lock);
+
 	err = 0;
  remove_err:
 	sock_put((struct sock *)msk);
@@ -302,6 +307,11 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
 		goto create_err;
 	}
 
+	spin_lock_bh(&msk->pm.lock);
+	msk->pm.local_addr_used++;
+	msk->pm.subflows++;
+	spin_unlock_bh(&msk->pm.lock);
+
 	lock_sock(sk);
 
 	err = __mptcp_subflow_connect(sk, &addr_l, &addr_r);
@@ -424,6 +434,10 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
 		mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN);
 		mptcp_close_ssk(sk, ssk, subflow);
 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW);
+		spin_lock_bh(&msk->pm.lock);
+		msk->pm.local_addr_used--;
+		msk->pm.subflows--;
+		spin_unlock_bh(&msk->pm.lock);
 		err = 0;
 	} else {
 		err = -ESRCH;
-- 
2.35.3



* Re: mptcp: update userspace pm mptcp_info fields: Tests Results
  2023-02-03  7:55 [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields Geliang Tang
@ 2023-02-03  9:12 ` MPTCP CI
  2023-02-08 17:43 ` [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields Matthieu Baerts
  1 sibling, 0 replies; 5+ messages in thread
From: MPTCP CI @ 2023-02-03  9:12 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6186399657885696
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6186399657885696/summary/summary.txt

- KVM Validation: debug (except selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/6749349611307008
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/6749349611307008/summary/summary.txt

- KVM Validation: debug (only selftest_mptcp_join):
  - Unstable: 1 failed test(s): selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/4510743937155072
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/4510743937155072/summary/summary.txt

- KVM Validation: normal (only selftest_mptcp_join):
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5623449704464384
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5623449704464384/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/546f0a06b189


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)


* Re: [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields
  2023-02-03  7:55 [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields Geliang Tang
  2023-02-03  9:12 ` mptcp: update userspace pm mptcp_info fields: Tests Results MPTCP CI
@ 2023-02-08 17:43 ` Matthieu Baerts
  2023-02-23  5:53   ` Geliang Tang
  1 sibling, 1 reply; 5+ messages in thread
From: Matthieu Baerts @ 2023-02-08 17:43 UTC (permalink / raw)
  To: Geliang Tang, mptcp

Hi Geliang,

Thank you for this v3.

On 03/02/2023 08:55, Geliang Tang wrote:
> Increase pm subflows counter when userspace pm creates a new subflow,
> and decrease the counter when it closes a subflow.
> 
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/329
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/330

Sorry, the description of #330 was not clear: pm.subflows is not
incremented for the server side when using the in-kernel PM if I'm not
mistaken. I just updated the ticket to make it clearer. So this patch is
not fixing #330.

(before this patch, it was also not updated with the userspace PM but
that's different: ticket #329)

> Signed-off-by: Geliang Tang <geliang.tang@suse.com>

Could you also add a Fixes tag please?

> ---
> v3:
>  - update local_addr_used and add_addr_signaled
> 
> v2:
>  - hold pm locks
> ---
>  net/mptcp/pm_userspace.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
> index a02d3cbf2a1b..ba8ad500993c 100644
> --- a/net/mptcp/pm_userspace.c
> +++ b/net/mptcp/pm_userspace.c
> @@ -171,6 +171,7 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
>  	spin_lock_bh(&msk->pm.lock);
>  
>  	if (mptcp_pm_alloc_anno_list(msk, &addr_val)) {
> +		msk->pm.add_addr_signaled++;
>  		mptcp_pm_announce_addr(msk, &addr_val.addr, false);
>  		mptcp_pm_nl_addr_send_ack(msk);
>  	}
> @@ -240,6 +241,10 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
>  		sock_kfree_s((struct sock *)msk, match, sizeof(*match));
>  	}
>  
> +	spin_lock_bh(&msk->pm.lock);
> +	msk->pm.subflows--;
> +	spin_unlock_bh(&msk->pm.lock);

Should you not decrement msk->pm.subflows in the loop just above?
Potentially, there is more than one subflow linked to the given addr ID.

Not directly related, but it looks like in mptcp_nl_remove_addrs_list()
-- for the MPTCP_PM_CMD_FLUSH_ADDRS command -- we don't decrement
pm.subflows either. Maybe this 'pm.subflows' should be decremented in
mptcp_pm_remove_addrs_and_subflows() directly to cover both cases?

(if you do the modification in mptcp_pm_remove_addrs_and_subflows(), the
"Fixes" tag might be different, a separate commit will be needed)

> +
>  	err = 0;
>   remove_err:
>  	sock_put((struct sock *)msk);
> @@ -302,6 +307,11 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
>  		goto create_err;
>  	}
>  
> +	spin_lock_bh(&msk->pm.lock);
> +	msk->pm.local_addr_used++;

Mmh, here the counter is incremented for each new subflow: no matter if
it is a new local address (addr_l.id) or not, right?

If I'm not mistaken, we don't maintain a list of local addresses we use
to create subflows, right? Just addresses we announce I suppose.

I don't know what's best:
- not updating the counter with the userspace PM (current behaviour)
- update it but with a not so correct value
- maintain a list of IDs just to set the proper counter?

Maybe it is enough to keep the current behaviour and mention somewhere
that the "local_addr_used" counter is not supported with the userspace
PM because this is managed by userspace?

(Same for "add_addr_accepted" counter I suppose)

> +	msk->pm.subflows++;
> +	spin_unlock_bh(&msk->pm.lock);
> +
>  	lock_sock(sk);
>  
>  	err = __mptcp_subflow_connect(sk, &addr_l, &addr_r);
> @@ -424,6 +434,10 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
>  		mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN);
>  		mptcp_close_ssk(sk, ssk, subflow);
>  		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW);
> +		spin_lock_bh(&msk->pm.lock);
> +		msk->pm.local_addr_used--;

(same here: remove this line?)

> +		msk->pm.subflows--;
> +		spin_unlock_bh(&msk->pm.lock);
>  		err = 0;
>  	} else {
>  		err = -ESRCH;

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net


* Re: [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields
  2023-02-08 17:43 ` [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields Matthieu Baerts
@ 2023-02-23  5:53   ` Geliang Tang
  2023-02-23 10:17     ` Matthieu Baerts
  0 siblings, 1 reply; 5+ messages in thread
From: Geliang Tang @ 2023-02-23  5:53 UTC (permalink / raw)
  To: Matthieu Baerts; +Cc: mptcp

On Wed, Feb 08, 2023 at 06:43:58PM +0100, Matthieu Baerts wrote:
> Hi Geliang,
> 
> Thank you for this v3.
> 
> On 03/02/2023 08:55, Geliang Tang wrote:
> > Increase pm subflows counter when userspace pm creates a new subflow,
> > and decrease the counter when it closes a subflow.
> > 
> > Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/329
> > Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/330
> 
> Sorry, the description of #330 was not clear: pm.subflows is not
> incremented for the server side when using the in-kernel PM if I'm not
> mistaken. I just updated the ticket to make it clearer. So this patch is
> not fixing #330.

Matt, how can I reproduce the issue you mentioned in #330? In my test,
pm.subflows is incremented for the server side. My test patch is as
follows:

--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -981,6 +981,12 @@ do_transfer()
                done
        fi
 
+       ip netns exec ${listener_ns} ss -version
+       echo "listener_ns ss"
+       ip netns exec ${listener_ns} ss -inmHM
+       echo "connector_ns ss"
+       ip netns exec ${connector_ns} ss -inmHM
+
        wait $cpid
        local retc=$?
        wait $spid

The output is:

-----
> sudo ./mptcp_join.sh -f
Created /tmp/tmp.nlYc41VABR (size 1 KB) containing data sent by client
Created /tmp/tmp.jXUuNj0w4c (size 1 KB) containing data sent by server
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         0                              0                                                      [::ffff:10.0.1.1]:10000                                                   [::ffff:10.0.1.2]:56062
	 skmem:(r0,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows_max:2 remote_key token:3d8da2a0 write_seq:1bdfa9cc563d9593 snd_una:1bdfa9cc563d9593 rcv_nxt:f9c1c74b10372bef
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:56062                                                       10.0.1.1:10000
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows_max:2 remote_key token:70251e1b write_seq:f9c1c74b10372bef snd_una:f9c1c74b10372bef rcv_nxt:1bdfa9cc563d9593
001 no JOIN                              syn[ ok ] - synack[ ok ] - ack[ ok ]
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         0                              0                                                      [::ffff:10.0.1.1]:10001                                                   [::ffff:10.0.1.2]:40688
	 skmem:(r0,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) remote_key token:f9a00098 write_seq:809c061103ce5ad9 snd_una:809c061103ce5ad9 rcv_nxt:412126c447a36405
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:40688                                                       10.0.1.1:10001
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) remote_key token:6bcc9fdb write_seq:412126c447a36405 snd_una:412126c447a36405 rcv_nxt:809c061103ce5ad9
002 single subflow, limited by client    syn[ ok ] - synack[ ok ] - ack[ ok ]
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         996                            0                                                      [::ffff:10.0.1.1]:10002                                                   [::ffff:10.0.1.2]:53652
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) remote_key token:5d60cbb3 write_seq:86f08453ff95670b snd_una:86f08453ff95670b rcv_nxt:f7e7aa5d21fa8a88
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:53652                                                       10.0.1.1:10002
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows_max:1 remote_key token:d62b3e1d write_seq:f7e7aa5d21fa8a88 snd_una:f7e7aa5d21fa8a88 rcv_nxt:86f08453ff95670b
003 single subflow, limited by server    syn[ ok ] - synack[ ok ] - ack[ ok ]
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         996                            0                                                      [::ffff:10.0.1.1]:10003                                                   [::ffff:10.0.1.2]:48836
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:1 subflows_max:1 remote_key token:d0bc750f write_seq:9f381515047a2a44 snd_una:9f381515047a2a44 rcv_nxt:3c331d647144410e
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:48836                                                       10.0.1.1:10003
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:1 subflows_max:1 remote_key token:a5f0a0ec write_seq:3c331d647144410e snd_una:3c331d647144410e rcv_nxt:9f381515047a2a44
004 single subflow                       syn[ ok ] - synack[ ok ] - ack[ ok ]
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         0                              0                                                      [::ffff:10.0.1.1]:10004                                                   [::ffff:10.0.1.2]:60990
	 skmem:(r0,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:2 subflows_max:2 remote_key token:3fea3f91 write_seq:bd76dd90157ab6ff snd_una:bd76dd90157ab6ff rcv_nxt:1b0decd9b66febf9
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:60990                                                       10.0.1.1:10004
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:2 subflows_max:2 remote_key token:db7fad77 write_seq:1b0decd9b66febf9 snd_una:1b0decd9b66febf9 rcv_nxt:bd76dd90157ab6ff
005 multiple subflows                    syn[ ok ] - synack[ ok ] - ack[ ok ]
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         0                              0                                                      [::ffff:10.0.1.1]:10005                                                   [::ffff:10.0.1.2]:49704
	 skmem:(r0,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:1 subflows_max:1 remote_key token:33715c04 write_seq:e55d02a2aec657a4 snd_una:e55d02a2aec657a4 rcv_nxt:eefb958600fec9c
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:49704                                                       10.0.1.1:10005
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:1 subflows_max:2 remote_key token:95404d0b write_seq:eefb958600fec9c snd_una:eefb958600fec9c rcv_nxt:e55d02a2aec657a4
006 multiple subflows, limited by server syn[ ok ] - synack[ ok ] - ack[ ok ]
ss utility, iproute2-6.1.0
listener_ns ss
ESTAB                         996                            0                                                      [::ffff:10.0.1.1]:10006                                                   [::ffff:10.0.1.2]:34258
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:1 subflows_max:1 remote_key token:2ff54490 write_seq:e4f5e6f38dc1fcc7 snd_una:e4f5e6f38dc1fcc7 rcv_nxt:ed5e200e8c906632
connector_ns ss
ESTAB                            996                               0                                                            10.0.1.2:34258                                                       10.0.1.1:10006
	 skmem:(r996,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) subflows:1 subflows_max:1 remote_key token:81398a53 write_seq:ed5e200e8c906632 snd_una:ed5e200e8c906632 rcv_nxt:e4f5e6f38dc1fcc7
007 single subflow, dev                  syn[ ok ] - synack[ ok ] - ack[ ok ]
-------

You can see that the subflows counter is updated on both sides.

This 'ss -version' call is a bit strange: if it is removed, the test fails.

Anyway, I think it's necessary to add a test case for these mptcp_info
fields. WDYT?

Thanks,
-Geliang

> 
> (before this patch, it was also not updated with the userspace PM but
> that's different: ticket #329)
> 
> > Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> 
> Could you also add a Fixes tag please?
> 
> > ---
> > v3:
> >  - update local_addr_used and add_addr_signaled
> > 
> > v2:
> >  - hold pm locks
> > ---
> >  net/mptcp/pm_userspace.c | 14 ++++++++++++++
> >  1 file changed, 14 insertions(+)
> > 
> > diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
> > index a02d3cbf2a1b..ba8ad500993c 100644
> > --- a/net/mptcp/pm_userspace.c
> > +++ b/net/mptcp/pm_userspace.c
> > @@ -171,6 +171,7 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
> >  	spin_lock_bh(&msk->pm.lock);
> >  
> >  	if (mptcp_pm_alloc_anno_list(msk, &addr_val)) {
> > +		msk->pm.add_addr_signaled++;
> >  		mptcp_pm_announce_addr(msk, &addr_val.addr, false);
> >  		mptcp_pm_nl_addr_send_ack(msk);
> >  	}
> > @@ -240,6 +241,10 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
> >  		sock_kfree_s((struct sock *)msk, match, sizeof(*match));
> >  	}
> >  
> > +	spin_lock_bh(&msk->pm.lock);
> > +	msk->pm.subflows--;
> > +	spin_unlock_bh(&msk->pm.lock);
> 
> Should you not decrement msk->pm.subflows in the loop just above?
> Potentially, there is more than one subflow linked to the given addr ID.
> 
> Not directly related, but it looks like in mptcp_nl_remove_addrs_list()
> -- for the MPTCP_PM_CMD_FLUSH_ADDRS command -- we don't decrement
> pm.subflows either. Maybe this 'pm.subflows' should be decremented in
> mptcp_pm_remove_addrs_and_subflows() directly to cover both cases?
> 
> (if you do the modification in mptcp_pm_remove_addrs_and_subflows(), the
> "Fixes" tag might be different, a separate commit will be needed)
> 
> > +
> >  	err = 0;
> >   remove_err:
> >  	sock_put((struct sock *)msk);
> > @@ -302,6 +307,11 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
> >  		goto create_err;
> >  	}
> >  
> > +	spin_lock_bh(&msk->pm.lock);
> > +	msk->pm.local_addr_used++;
> 
> Mmh, here the counter is incremented for each new subflow: no matter if
> it is a new local address (addr_l.id) or not, right?
> 
> If I'm not mistaken, we don't maintain a list of local addresses we use
> to create subflows, right? Just addresses we announce I suppose.
> 
> I don't know what's best:
> - not updating the counter with the userspace PM (current behaviour)
> - update it but with a not so correct value
> - maintain a list of IDs just to set the proper counter?
> 
> Maybe it is enough to keep the current behaviour and mention somewhere
> that the "local_addr_used" counter is not supported with the userspace
> PM because this is managed by userspace?
> 
> (Same for "add_addr_accepted" counter I suppose)
> 
> > +	msk->pm.subflows++;
> > +	spin_unlock_bh(&msk->pm.lock);
> > +
> >  	lock_sock(sk);
> >  
> >  	err = __mptcp_subflow_connect(sk, &addr_l, &addr_r);
> > @@ -424,6 +434,10 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
> >  		mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN);
> >  		mptcp_close_ssk(sk, ssk, subflow);
> >  		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW);
> > +		spin_lock_bh(&msk->pm.lock);
> > +		msk->pm.local_addr_used--;
> 
> (same here: remove this line?)
> 
> > +		msk->pm.subflows--;
> > +		spin_unlock_bh(&msk->pm.lock);
> >  		err = 0;
> >  	} else {
> >  		err = -ESRCH;
> 
> Cheers,
> Matt
> -- 
> Tessares | Belgium | Hybrid Access Solutions
> www.tessares.net


* Re: [PATCH mptcp-next v3] mptcp: update userspace pm mptcp_info fields
  2023-02-23  5:53   ` Geliang Tang
@ 2023-02-23 10:17     ` Matthieu Baerts
  0 siblings, 0 replies; 5+ messages in thread
From: Matthieu Baerts @ 2023-02-23 10:17 UTC (permalink / raw)
  To: Geliang Tang; +Cc: mptcp

Hi Geliang,

On 23/02/2023 06:53, Geliang Tang wrote:
> On Wed, Feb 08, 2023 at 06:43:58PM +0100, Matthieu Baerts wrote:
>> Hi Geliang,
>>
>> Thank you for this v3.
>>
>> On 03/02/2023 08:55, Geliang Tang wrote:
>>> Increase pm subflows counter when userspace pm creates a new subflow,
>>> and decrease the counter when it closes a subflow.
>>>
>>> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/329
>>> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/330
>>
>> Sorry, the description of #330 was not clear: pm.subflows is not
>> incremented for the server side when using the in-kernel PM if I'm not
>> mistaken. I just updated the ticket to make it clearer. So this patch is
>> not fixing #330.
> 
> Matt, how can I reproduce the issue you mentioned in #330? In my test,
> pm.subflows is incremented for the server side. My test patch is as
> follows:
> 
> --- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
> +++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
> @@ -981,6 +981,12 @@ do_transfer()
>                 done
>         fi
>  
> +       ip netns exec ${listener_ns} ss -version
> +       echo "listener_ns ss"
> +       ip netns exec ${listener_ns} ss -inmHM
> +       echo "connector_ns ss"
> +       ip netns exec ${connector_ns} ss -inmHM
> +
>         wait $cpid
>         local retc=$?
>         wait $spid

Good idea to use 'ss'!

This information is also exported via:

  getsockopt(..., SOL_MPTCP, MPTCP_INFO, ...);

But easier to use 'ss' indeed!
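
For reference, a minimal sketch of reading these counters directly
(untested; dump_pm_counters() is just an example name and 'mfd' is
assumed to be an established MPTCP socket):

	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <linux/mptcp.h>

	#ifndef SOL_MPTCP
	#define SOL_MPTCP 284
	#endif

	static void dump_pm_counters(int mfd)
	{
		struct mptcp_info info;
		socklen_t len = sizeof(info);

		memset(&info, 0, sizeof(info));
		if (getsockopt(mfd, SOL_MPTCP, MPTCP_INFO, &info, &len) < 0) {
			perror("getsockopt(MPTCP_INFO)");
			return;
		}

		/* the counters this patch touches */
		printf("subflows=%u add_addr_signal=%u local_addr_used=%u\n",
		       info.mptcpi_subflows, info.mptcpi_add_addr_signal,
		       info.mptcpi_local_addr_used);
	}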

> The output is:
> 
> -----

(...)

> -------
> 
> You can see that the subflows counter is updated on both sides.

Indeed, my bad. I thought there was an issue; I probably looked at the
wrong time (too early/late) or at the wrong fd. Then, when reading the
code, I saw "pm.subflows" was only incremented for the "client" side in
"pm_netlink.c". But I missed "mptcp_pm_allow_new_subflow()" from "pm.c",
doing "++pm->subflows" :)

Thank you for checking!

> This 'ss -version' call is a bit strange: if it is removed, the test fails.

Maybe just a question of timing? Without it, 'ss' is launched too
early, before the creation of the additional subflows. I guess that kind
of check should be done in the middle of the connection; probably not
easy to do for all tests without knowing in advance how many subflows
will be created or how long to wait.

> Anyway, I think it's necessary to add a test case for these mptcp_info
> fields. WDYT?

Yes, good idea!

It could make sense to do that in the diag.sh selftest, but only one
subflow is created there, so it is not very useful.

Maybe it would be good to modify do_getsockopt_mptcp_info() from
mptcp_sockopt.c to check more fields (see the sketch below)? Not many
fields are being checked so far.

Or chk_subflow_nr() from mptcp_join.sh to add an additional check where
MPTCP info would be read?

And maybe in userspace_pm.sh, to add some coverage for the userspace PM
case, which is what you are fixing here in this v3 patch?
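
As a starting point, such a check could look like this (a rough sketch
reusing the includes from the snippet above; the helper name and how the
expected value is obtained are assumptions):

	/* compare the kernel's view of pm.subflows with what the test
	 * expects after creating or destroying subflows
	 */
	static int check_subflows_counter(int mfd, unsigned int expected)
	{
		struct mptcp_info info;
		socklen_t len = sizeof(info);

		memset(&info, 0, sizeof(info));
		if (getsockopt(mfd, SOL_MPTCP, MPTCP_INFO, &info, &len) < 0)
			return -1;

		if (info.mptcpi_subflows != expected) {
			fprintf(stderr, "subflows: expected %u, got %u\n",
				expected, info.mptcpi_subflows);
			return -1;
		}

		return 0;
	}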

Cheers,
Matt
-- 
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net

